

issue_comments


29 rows where author_association = "OWNER", "updated_at" is on date 2021-12-18 and user = 9599 sorted by updated_at descending



issue (12)

  • Optimize all those calls to index_list and foreign_key_list 14
  • Datasette(... files=) should not be a required argument 2
  • Release Datasette 0.60 2
  • Trace should show queries on the write connection too 2
  • db.execute_write(..., executescript=True) parameter 2
  • Refactor TableView.data() method 1
  • Syntax for ?_through= that works as a form field 1
  • validating the sql 1
  • add hash id to "_memory" url if hashed url mode is turned on and crossdb is also turned on 1
  • _prepare_connection not called on write connections 1
  • Documented JavaScript variables on different templates made available for plugins 1
  • Separate db.execute_write() into three methods 1

user (1)

  • simonw · 29

author_association (1)

  • OWNER · 29
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
997272328 https://github.com/simonw/datasette/issues/1566#issuecomment-997272328 https://api.github.com/repos/simonw/datasette/issues/1566 IC_kwDOBm6k_c47cSsI simonw 9599 2021-12-18T19:18:01Z 2021-12-18T19:18:01Z OWNER

Added some useful new documented internal methods in:

  • #1570

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Release Datasette 0.60 1083669410  
997272223 https://github.com/simonw/datasette/issues/1555#issuecomment-997272223 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cSqf simonw 9599 2021-12-18T19:17:13Z 2021-12-18T19:17:13Z OWNER

That's a good optimization. Still need to deal with the huge flurry of PRAGMA queries though before I can consider this done.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997267583 https://github.com/simonw/datasette/issues/1570#issuecomment-997267583 https://api.github.com/repos/simonw/datasette/issues/1570 IC_kwDOBm6k_c47cRh_ simonw 9599 2021-12-18T18:46:05Z 2021-12-18T18:46:12Z OWNER

This will replace the work done in #1569.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Separate db.execute_write() into three methods 1083921371  
997267416 https://github.com/simonw/datasette/issues/1555#issuecomment-997267416 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cRfY simonw 9599 2021-12-18T18:44:53Z 2021-12-18T18:45:28Z OWNER

Rather than adding an executemany=True parameter, I'm now thinking a better design might be to have three methods:

  • db.execute_write(sql, params=None, block=False)
  • db.execute_writescript(sql, block=False)
  • db.execute_writemany(sql, params_seq, block=False)
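A minimal synchronous sketch of that three-method design using plain sqlite3 (the real Datasette methods are async and dispatch to a dedicated write thread; the block plumbing is elided here, and the class is illustrative, not Datasette's actual Database class):

```python
import sqlite3

class Database:
    """Illustrative wrapper only, not Datasette's real Database class."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def execute_write(self, sql, params=None, block=False):
        # One statement, optional parameters
        with self.conn:
            return self.conn.execute(sql, params or [])

    def execute_writescript(self, sql, block=False):
        # Multiple statements in a single string; no parameters allowed
        with self.conn:
            return self.conn.executescript(sql)

    def execute_writemany(self, sql, params_seq, block=False):
        # One statement run repeatedly against a sequence of parameter sets
        with self.conn:
            return self.conn.executemany(sql, params_seq)

db = Database()
db.execute_writescript("CREATE TABLE t (id INTEGER, name TEXT);")
db.execute_writemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
db.execute_write("INSERT INTO t VALUES (?, ?)", (3, "c"))
```

Splitting the three cases keeps each signature honest: only execute_write and execute_writemany accept parameters, and only execute_writescript accepts multi-statement SQL.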
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997266687 https://github.com/simonw/datasette/issues/1569#issuecomment-997266687 https://api.github.com/repos/simonw/datasette/issues/1569 IC_kwDOBm6k_c47cRT_ simonw 9599 2021-12-18T18:41:40Z 2021-12-18T18:41:40Z OWNER

Updated documentation: https://docs.datasette.io/en/latest/internals.html#await-db-execute-write-sql-params-none-executescript-false-block-false

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
db.execute_write(..., executescript=True) parameter 1083895395  
997266100 https://github.com/simonw/datasette/issues/1555#issuecomment-997266100 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cRK0 simonw 9599 2021-12-18T18:40:02Z 2021-12-18T18:40:02Z OWNER

The implementation of cursor.executemany() looks very efficient - it turns into a call to this C function with multiple set to 1: https://github.com/python/cpython/blob/e002bbc6cce637171fb2b1391ffeca8643a13843/Modules/_sqlite/cursor.c#L468-L469
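For example, with plain sqlite3 a whole batch of inserts becomes a single executemany() call, with the iteration happening inside the C implementation rather than in a Python-level loop (table and rows here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE columns (table_name TEXT, column_name TEXT)")

rows = [("users", "id"), ("users", "name"), ("issues", "id")]

# One executemany() call replaces three separate execute() calls
with conn:
    conn.executemany("INSERT INTO columns VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM columns").fetchone()[0]
print(count)  # 3
```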

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997262475 https://github.com/simonw/datasette/issues/1555#issuecomment-997262475 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cQSL simonw 9599 2021-12-18T18:34:18Z 2021-12-18T18:34:18Z OWNER

Using executescript=True, that call now takes 1.89ms to create all of those tables.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997249563 https://github.com/simonw/datasette/issues/1569#issuecomment-997249563 https://api.github.com/repos/simonw/datasette/issues/1569 IC_kwDOBm6k_c47cNIb simonw 9599 2021-12-18T18:21:23Z 2021-12-18T18:21:23Z OWNER

Goal here is to gain the ability to use conn.executescript() and still have it show up in the tracer.

https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executescript

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
db.execute_write(..., executescript=True) parameter 1083895395  
997248364 https://github.com/simonw/datasette/issues/1555#issuecomment-997248364 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cM1s simonw 9599 2021-12-18T18:20:10Z 2021-12-18T18:20:10Z OWNER

Idea: teach execute_write to accept an optional executescript=True parameter, like this:

```diff
diff --git a/datasette/database.py b/datasette/database.py
index 468e936..1a424f5 100644
--- a/datasette/database.py
+++ b/datasette/database.py
@@ -94,10 +94,14 @@ class Database:
             f"file:{self.path}{qs}", uri=True, check_same_thread=False
         )
 
-    async def execute_write(self, sql, params=None, block=False):
+    async def execute_write(self, sql, params=None, executescript=False, block=False):
+        assert not (executescript and params), "Cannot use params with executescript=True"
         def _inner(conn):
             with conn:
-                return conn.execute(sql, params or [])
+                if executescript:
+                    return conn.executescript(sql)
+                else:
+                    return conn.execute(sql, params or [])
 
         with trace("sql", database=self.name, sql=sql.strip(), params=params):
             results = await self.execute_write_fn(_inner, block=block)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997245301 https://github.com/simonw/datasette/issues/1555#issuecomment-997245301 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cMF1 simonw 9599 2021-12-18T18:17:04Z 2021-12-18T18:17:04Z OWNER

One downside of conn.executescript() is that it won't be picked up by the tracing mechanism - in fact nothing that uses await db.execute_write_fn(fn, block=True) or await db.execute_fn(fn, block=True) gets picked up by tracing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997241969 https://github.com/simonw/datasette/issues/1555#issuecomment-997241969 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cLRx simonw 9599 2021-12-18T18:13:04Z 2021-12-18T18:13:04Z OWNER

Also: running all of those CREATE TABLE IF NOT EXISTS in a single call to conn.executescript() rather than as separate queries may speed things up too.
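A sketch of that batching with plain sqlite3 (the table definitions here are invented for illustration, not Datasette's real internal schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical internal-schema tables, for illustration only
schema = """
CREATE TABLE IF NOT EXISTS tables (database_name TEXT, table_name TEXT);
CREATE TABLE IF NOT EXISTS columns (database_name TEXT, table_name TEXT, name TEXT);
CREATE TABLE IF NOT EXISTS indexes (database_name TEXT, table_name TEXT, name TEXT);
"""

# One executescript() call instead of three separate execute() calls
conn.executescript(schema)

names = {
    row[0]
    for row in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
}
print(sorted(names))
```

Because each statement is CREATE TABLE IF NOT EXISTS, rerunning the whole script is harmless, which is what makes it safe to fold into one call.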

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997241645 https://github.com/simonw/datasette/issues/1555#issuecomment-997241645 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cLMt simonw 9599 2021-12-18T18:12:26Z 2021-12-18T18:12:26Z OWNER

A simpler optimization would be just to turn all of those column and index reads into a single efficient UNION query against each database, then figure out the most efficient pattern to send them all as writes in one go as opposed to calling .execute_write() in a loop.
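SQLite's PRAGMA table-valued functions (available since SQLite 3.16, e.g. pragma_index_list(), pragma_foreign_key_list()) make that kind of UNION possible, since unlike bare PRAGMA statements they can appear in SELECT queries. A hedged sketch with made-up tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (id INTEGER PRIMARY KEY);
CREATE TABLE books (
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES authors(id)
);
CREATE INDEX idx_books_author ON books (author_id);
""")

# One UNION ALL query instead of running PRAGMA index_list per table
sql = """
SELECT 'authors' AS table_name, name FROM pragma_index_list('authors')
UNION ALL
SELECT 'books' AS table_name, name FROM pragma_index_list('books')
"""
rows = list(conn.execute(sql))
for table_name, index_name in rows:
    print(table_name, index_name)
```

The same pattern extends to pragma_foreign_key_list() and pragma_table_info(), so all of the per-table reads could in principle collapse into a handful of queries per database.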

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997235388 https://github.com/simonw/datasette/issues/1566#issuecomment-997235388 https://api.github.com/repos/simonw/datasette/issues/1566 IC_kwDOBm6k_c47cJq8 simonw 9599 2021-12-18T17:32:07Z 2021-12-18T17:32:07Z OWNER

I can release a new version of datasette-leaflet-freedraw as soon as this is out.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Release Datasette 0.60 1083669410  
997235086 https://github.com/simonw/datasette/issues/1555#issuecomment-997235086 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cJmO simonw 9599 2021-12-18T17:30:13Z 2021-12-18T17:30:13Z OWNER

Now that trace sees write queries (#1568) it's clear that there is a whole lot more DB activity than I had realized.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997234858 https://github.com/simonw/datasette/issues/1555#issuecomment-997234858 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47cJiq simonw 9599 2021-12-18T17:28:44Z 2021-12-18T17:28:44Z OWNER

Maybe it would be worth exploring attaching each DB in turn to the _internal connection in order to perform these queries faster.

I'm a bit worried about leaks though: the internal database isn't meant to be visible, even temporarily attaching another DB to it could cause SQL queries against that DB to be able to access the internal data.

So maybe instead the _internal connection gets to connect to the other DBs? There's a maximum of ten there I think, which is good for most but not all cases. But the cases with the most connected databases will see the worst performance!
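The leak is easy to demonstrate with plain sqlite3 (table names here are illustrative): once a database is attached, every query on that connection can see both schemas.

```python
import sqlite3
import tempfile
import os

# A database standing in for the private _internal one
internal = sqlite3.connect(":memory:")
internal.execute("CREATE TABLE secrets (value TEXT)")
internal.execute("INSERT INTO secrets VALUES ('hidden')")

# A user-facing database on disk
path = os.path.join(tempfile.mkdtemp(), "user.db")
sqlite3.connect(path).close()

internal.execute("ATTACH DATABASE ? AS userdb", (path,))

# While attached, any SQL run on this connection (including SQL meant
# for userdb) can also read the internal schema -- the leak in question:
row = internal.execute("SELECT value FROM main.secrets").fetchone()
print(row[0])
```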

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997153253 https://github.com/simonw/datasette/issues/1568#issuecomment-997153253 https://api.github.com/repos/simonw/datasette/issues/1568 IC_kwDOBm6k_c47b1nl simonw 9599 2021-12-18T06:20:23Z 2021-12-18T06:20:23Z OWNER

Now running at https://latest-with-plugins.datasette.io/github/commits?_trace=1

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Trace should show queries on the write connection too 1083726550  
997128950 https://github.com/simonw/datasette/issues/1568#issuecomment-997128950 https://api.github.com/repos/simonw/datasette/issues/1568 IC_kwDOBm6k_c47bvr2 simonw 9599 2021-12-18T02:38:01Z 2021-12-18T02:38:01Z OWNER

Prototype:

```diff
diff --git a/datasette/database.py b/datasette/database.py
index 0a0c104..468e936 100644
--- a/datasette/database.py
+++ b/datasette/database.py
@@ -99,7 +99,9 @@ class Database:
             with conn:
                 return conn.execute(sql, params or [])
 
-        return await self.execute_write_fn(_inner, block=block)
+        with trace("sql", database=self.name, sql=sql.strip(), params=params):
+            results = await self.execute_write_fn(_inner, block=block)
+        return results
 
     async def execute_write_fn(self, fn, block=False):
         task_id = uuid.uuid5(uuid.NAMESPACE_DNS, "datasette.io")
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Trace should show queries on the write connection too 1083726550  
997128508 https://github.com/simonw/datasette/issues/1555#issuecomment-997128508 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47bvk8 simonw 9599 2021-12-18T02:33:57Z 2021-12-18T02:33:57Z OWNER

Here's why - trace only applies to read, not write SQL operations: https://github.com/simonw/datasette/blob/7c8f8aa209e4ba7bf83976f8495d67c28fbfca24/datasette/database.py#L209-L211

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997128368 https://github.com/simonw/datasette/issues/1555#issuecomment-997128368 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47bviw simonw 9599 2021-12-18T02:32:43Z 2021-12-18T02:32:43Z OWNER

I wonder why the INSERT INTO queries don't show up in that ?_trace=1 view?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997128251 https://github.com/simonw/datasette/issues/1555#issuecomment-997128251 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47bvg7 simonw 9599 2021-12-18T02:31:51Z 2021-12-18T02:31:51Z OWNER

I was thinking it might even be possible to convert this into an insert into tables select from ... query:

https://github.com/simonw/datasette/blob/c00f29affcafce8314366852ba1a0f5a7dd25690/datasette/utils/internal_db.py#L102-L112

But the SELECT runs against a separate database from the INSERT INTO, so I would have to set up a cross-database connection for this, which feels a little too complicated.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997128080 https://github.com/simonw/datasette/issues/1555#issuecomment-997128080 https://api.github.com/repos/simonw/datasette/issues/1555 IC_kwDOBm6k_c47bveQ simonw 9599 2021-12-18T02:30:19Z 2021-12-18T02:30:19Z OWNER

I think all of these queries happen in one place - in the populate_schema_tables() function - so optimizing them might be localized to just that area of the code, which would be nice:

https://github.com/simonw/datasette/blob/c00f29affcafce8314366852ba1a0f5a7dd25690/datasette/utils/internal_db.py#L97-L183

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Optimize all those calls to index_list and foreign_key_list 1079149656  
997127784 https://github.com/simonw/datasette/issues/1561#issuecomment-997127784 https://api.github.com/repos/simonw/datasette/issues/1561 IC_kwDOBm6k_c47bvZo simonw 9599 2021-12-18T02:27:56Z 2021-12-18T02:27:56Z OWNER

Oh that's an interesting solution, combining the hashes of all of the individual databases.

I'm actually not a big fan of hashed_url mode - I implemented it right at the start of the project because it felt like a clever hack, and then ended up making it not-the-default a few years ago:

  • #418
  • #419
  • #421

I've since not found myself wanting to use it at all for any of my projects - which makes me nervous, because it means there's a pretty complex feature that I'm not using at all, so it's only really protected by the existing unit tests for it.

What I'd really like to do is figure out how to have hashed URL mode work entirely as a plugin - then I could extract it from Datasette core entirely (which would simplify a bunch of stuff) but people who find the optimization useful would be able to access it.

I'm not sure that the existing plugin hooks are robust enough to do that yet though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
add hash id to "_memory" url if hashed url mode is turned on and crossdb is also turned on 1082765654  
997127084 https://github.com/simonw/datasette/issues/1563#issuecomment-997127084 https://api.github.com/repos/simonw/datasette/issues/1563 IC_kwDOBm6k_c47bvOs simonw 9599 2021-12-18T02:22:30Z 2021-12-18T02:22:30Z OWNER

Docs here: https://docs.datasette.io/en/latest/internals.html#datasette-class

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette(... files=) should not be a required argument 1083573206  
997125191 https://github.com/simonw/datasette/issues/1563#issuecomment-997125191 https://api.github.com/repos/simonw/datasette/issues/1563 IC_kwDOBm6k_c47buxH simonw 9599 2021-12-18T02:10:20Z 2021-12-18T02:10:20Z OWNER

I should document the usage of this constructor in https://docs.datasette.io/en/stable/internals.html#datasette-class

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette(... files=) should not be a required argument 1083573206  
997124280 https://github.com/simonw/datasette/issues/1546#issuecomment-997124280 https://api.github.com/repos/simonw/datasette/issues/1546 IC_kwDOBm6k_c47bui4 simonw 9599 2021-12-18T02:05:16Z 2021-12-18T02:05:16Z OWNER

Sure - there are actually several levels to this.

The code that creates connections to the database is this: https://github.com/simonw/datasette/blob/83bacfa9452babe7bd66e3579e23af988d00f6ac/datasette/database.py#L72-L95

For files on disk, it does this:

```python
# For read-only connections
conn = sqlite3.connect(
    "file:my.db?mode=ro", uri=True, check_same_thread=False
)

# For connections that should be treated as immutable:
conn = sqlite3.connect(
    "file:my.db?immutable=1", uri=True, check_same_thread=False
)
```

For in-memory databases it runs this after the connection has been created:

```python
conn.execute("PRAGMA query_only=1")
```

SQLite PRAGMA queries are treated as dangerous: someone could run PRAGMA query_only=0 to turn that previous option off, for example.
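A short sketch of how that PRAGMA behaves (plain sqlite3, in-memory database): once query_only is on, writes raise OperationalError, but any SQL allowed through to the connection can simply switch it back off, which is why PRAGMA statements have to be rejected before they reach SQLite at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.execute("PRAGMA query_only=1")

# Writes are now rejected on this connection
try:
    conn.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError as e:
    print("blocked:", e)

# But query_only is not tamper-proof: SQL that reaches the
# connection can just turn it off again
conn.execute("PRAGMA query_only=0")
conn.execute("INSERT INTO t VALUES (1)")  # succeeds now
```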

So this function runs against any incoming SQL to verify that it looks like a SELECT ... and doesn't have anything like that in it.

https://github.com/simonw/datasette/blob/83bacfa9452babe7bd66e3579e23af988d00f6ac/datasette/utils/__init__.py#L195-L204

You can see the tests for that here: https://github.com/simonw/datasette/blob/b1fed48a95516ae84c0f020582303ab50ab817e2/tests/test_utils.py#L136-L170
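A deliberately simplified sketch of that kind of allow-list check (not Datasette's actual implementation; see the linked source for the real regular expressions):

```python
import re

# Toy version: the query must start with SELECT (optionally EXPLAIN),
# and must not contain PRAGMA or other write-capable keywords.
ALLOWED_START = re.compile(r"^\s*(explain\s+)?select\b", re.IGNORECASE)
DISALLOWED = re.compile(
    r"\b(pragma|attach|detach|insert|update|delete|drop|create|alter)\b",
    re.IGNORECASE,
)

def validate_sql(sql):
    if not ALLOWED_START.match(sql):
        raise ValueError("Query must start with SELECT")
    if DISALLOWED.search(sql):
        raise ValueError("Query contains a disallowed keyword")
    return True

print(validate_sql("select * from issue_comments limit 10"))
try:
    validate_sql("PRAGMA query_only=0")
except ValueError as e:
    print("rejected:", e)
```

Note the word-boundary anchors: column names like updated_at must not trip the update keyword check.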

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
validating the sql 1076057610  
997122938 https://github.com/simonw/datasette/issues/1564#issuecomment-997122938 https://api.github.com/repos/simonw/datasette/issues/1564 IC_kwDOBm6k_c47buN6 simonw 9599 2021-12-18T01:55:25Z 2021-12-18T01:55:46Z OWNER

Made this change while working on this issue:

  • #1567

I'm going to write a test for this that uses that sleep() SQL function from c35b84a2aabe2f14aeacf6cda4110ae1e94d6059.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
_prepare_connection not called on write connections 1083581011  
997121215 https://github.com/simonw/datasette/issues/1565#issuecomment-997121215 https://api.github.com/repos/simonw/datasette/issues/1565 IC_kwDOBm6k_c47bty_ simonw 9599 2021-12-18T01:45:44Z 2021-12-18T01:45:44Z OWNER

I want to get this into Datasette 0.60 - #1566 - it's a small change that can unlock a lot of potential.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Documented JavaScript variables on different templates made available for plugins 1083657868  
997120723 https://github.com/simonw/datasette/issues/621#issuecomment-997120723 https://api.github.com/repos/simonw/datasette/issues/621 IC_kwDOBm6k_c47btrT simonw 9599 2021-12-18T01:42:33Z 2021-12-18T01:42:33Z OWNER

I refactored this code out into the filters.py module in aa7f0037a46eb76ae6fe9bf2a1f616c58738ecdf

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Syntax for ?_through= that works as a form field 520681725  
552253893 https://github.com/simonw/datasette/issues/617#issuecomment-552253893 https://api.github.com/repos/simonw/datasette/issues/617 MDEyOklzc3VlQ29tbWVudDU1MjI1Mzg5Mw== simonw 9599 2019-11-11T00:46:42Z 2021-12-18T01:41:47Z OWNER

As noted in https://github.com/simonw/datasette/issues/621#issuecomment-552253208 a common pattern in this method is blocks of code that append new items to the where_clauses, params and extra_human_descriptions arrays. This is a useful refactoring opportunity.

Code that fits this pattern:

  • The code that builds based on the filters: where_clauses, params = filters.build_where_clauses(table) and human_description_en = filters.human_description_en(extra=extra_human_descriptions)
  • Code that handles ?_where=: where_clauses.extend(request.args["_where"]) - though note that this also appends to an extra_wheres_for_ui array which nothing else uses
  • The _through= code, see #621 for details
  • The code that deals with ?_search= FTS

The keyset pagination code modifies where_clauses and params too, but I don't think it's quite going to work with the same abstraction that would cover the above examples.

[UPDATE December 2021 - this comment became the basis for a new filters_from_request plugin hook, see also #473]

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView.data() method 519613116  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
Powered by Datasette · Queries took 758.481ms · About: github-to-sqlite