issue_comments


7,941 rows sorted by updated_at descending


user (>30 distinct values)

  • simonw 6,783
  • codecov[bot] 142
  • eyeseast 49
  • russss 39
  • psychemedia 32
  • fgregg 32
  • abdusco 26
  • mroswell 20
  • aborruso 19
  • chrismp 18
  • jacobian 14
  • carlmjohnson 14
  • RhetTbull 14
  • tballison 13
  • brandonrobertz 12
  • tsibley 11
  • rixx 11
  • terrycojones 10
  • stonebig 10
  • maxhawkins 9
  • clausjuhl 9
  • bobwhitelock 9
  • rayvoelker 9
  • 20after4 8
  • wragge 8
  • UtahDave 8
  • tomchristie 8
  • bsilverm 8
  • dracos 7
  • rgieseke 7
  • …

issue (>30 distinct values)

  • Show column metadata plus links for foreign keys on arbitrary query results 50
  • Redesign default .json format 48
  • Rethink how .ext formats (v.s. ?_format=) works before 1.0 48
  • JavaScript plugin hooks mechanism similar to pluggy 47
  • Updated Dockerfile with SpatiaLite version 5.0 45
  • Complete refactor of TableView and table.html template 45
  • Port Datasette to ASGI 42
  • Authentication (and permissions) as a core concept 40
  • Deploy a live instance of demos/apache-proxy 34
  • await datasette.client.get(path) mechanism for executing internal requests 33
  • Maintain an in-memory SQLite table of connected databases and their tables 32
  • Ability to sort (and paginate) by column 31
  • link_or_copy_directory() error - Invalid cross-device link 28
  • Export to CSV 27
  • base_url configuration setting 27
  • Documentation with recommendations on running Datasette in production without using Docker 27
  • Optimize all those calls to index_list and foreign_key_list 27
  • Support cross-database joins 26
  • Ability for a canned query to write to the database 26
  • table.transform() method for advanced alter table 26
  • New pattern for views that return either JSON or HTML, available for plugins 26
  • Proof of concept for Datasette on AWS Lambda with EFS 25
  • WIP: Add Gmail takeout mbox import 25
  • Redesign register_output_renderer callback 24
  • Make it easier to insert geometries, with documentation and maybe code 24
  • "datasette insert" command and plugin hook 23
  • Datasette Plugins 22
  • .json and .csv exports fail to apply base_url 22
  • Idea: import CSV to memory, run SQL, export in a single command 22
  • Plugin hook for dynamic metadata 22
  • …

author_association (4 distinct values)

  • OWNER 6,297
  • NONE 705
  • MEMBER 486
  • CONTRIBUTOR 349
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at ▲, author_association, body, reactions, issue, performed_via_github_app
1110219185 https://github.com/simonw/datasette/issues/1715#issuecomment-1110219185 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5CLJmx simonw 9599 2022-04-26T20:28:40Z 2022-04-26T20:56:48Z OWNER

The refactor I did in #1719 pretty much clashes with all of the changes in https://github.com/simonw/datasette/commit/5053f1ea83194ecb0a5693ad5dada5b25bf0f7e6 so I'll probably need to start my api-extras branch again from scratch.

Using a new tableview-asyncinject branch.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1110239536 https://github.com/simonw/datasette/issues/1715#issuecomment-1110239536 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5CLOkw simonw 9599 2022-04-26T20:54:53Z 2022-04-26T20:54:53Z OWNER

pytest tests/test_table_* runs the tests quickly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1110238896 https://github.com/simonw/datasette/issues/1715#issuecomment-1110238896 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5CLOaw simonw 9599 2022-04-26T20:53:59Z 2022-04-26T20:53:59Z OWNER

I'm going to rename database to database_name and table to table_name to avoid confusion with the Database object as opposed to the string name for the database.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1110229319 https://github.com/simonw/datasette/issues/1715#issuecomment-1110229319 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5CLMFH simonw 9599 2022-04-26T20:41:32Z 2022-04-26T20:44:38Z OWNER

This time I'm not going to bother with the filter_args thing - I'm going to just try to use asyncinject to execute some big high level things in parallel - facets, suggested facets, counts, the query - and then combine it with the extras mechanism I'm trying to introduce too.

Most importantly: I want that extra_template() function that adds more template context for the HTML to be executed as part of an asyncinject flow!
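The run-in-parallel idea can be illustrated with plain asyncio.gather rather than asyncinject's dependency resolution — a minimal sketch with placeholder coroutines standing in for the real facet/count/query logic:

```python
import asyncio

# Placeholder coroutines standing in for the real high-level pieces
async def facet_results():
    return {"state": {"CA": 3}}

async def suggested_facets():
    return ["state"]

async def count():
    return 3

async def query_rows():
    return [{"id": 1}, {"id": 2}, {"id": 3}]

async def table_view():
    # Execute the big pieces concurrently, then combine into one context
    facets, suggested, total, rows = await asyncio.gather(
        facet_results(), suggested_facets(), count(), query_rows()
    )
    return {
        "rows": rows,
        "count": total,
        "facets": facets,
        "suggested_facets": suggested,
    }

context = asyncio.run(table_view())
```

asyncinject adds dependency resolution on top of this (each piece can depend on the results of others); the gather call shows only the concurrency half of the idea.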

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1110212021 https://github.com/simonw/datasette/issues/1720#issuecomment-1110212021 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CLH21 simonw 9599 2022-04-26T20:20:27Z 2022-04-26T20:20:27Z OWNER

Closing this because I have a good enough idea of the design for now - the details of the parameters can be figured out when I implement this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109309683 https://github.com/simonw/datasette/issues/1720#issuecomment-1109309683 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHrjz simonw 9599 2022-04-26T04:12:39Z 2022-04-26T04:12:39Z OWNER

I think the rough shape of the three plugin hooks is right. The detailed decisions that are needed concern what the parameters should be, which I think will mainly happen as part of:

  • #1715

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109306070 https://github.com/simonw/datasette/issues/1720#issuecomment-1109306070 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHqrW simonw 9599 2022-04-26T04:05:20Z 2022-04-26T04:05:20Z OWNER

The proposed plugin for annotations - allowing users to attach comments to database tables, columns and rows - would be a great application for all three of those ?_extra= plugin hooks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109305184 https://github.com/simonw/datasette/issues/1720#issuecomment-1109305184 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHqdg simonw 9599 2022-04-26T04:03:35Z 2022-04-26T04:03:35Z OWNER

I bet there's all kinds of interesting potential extras that could be calculated by loading the results of the query into a Pandas DataFrame.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109200774 https://github.com/simonw/datasette/issues/1720#issuecomment-1109200774 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHQ-G simonw 9599 2022-04-26T01:25:43Z 2022-04-26T01:26:15Z OWNER

Had a thought: if a custom HTML template is going to make use of stuff generated using these extras, it will need a way to tell Datasette to execute those extras even in the absence of the ?_extra=... URL parameters.

Is that necessary? Or should those kinds of plugins use the existing extra_template_vars hook instead?

Or maybe the extra_template_vars hook gets redesigned so it can depend on other extras in some way?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109200335 https://github.com/simonw/datasette/issues/1720#issuecomment-1109200335 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHQ3P simonw 9599 2022-04-26T01:24:47Z 2022-04-26T01:24:47Z OWNER

Sketching out a ?_extra=statistics table plugin:

from datasette import hookimpl

@hookimpl
def register_table_extras(datasette):
    return [statistics]

async def statistics(datasette, query, columns, sql):
    # ... need to figure out which columns are integer/floats
    # then build and execute a SQL query that calculates sum/avg/etc for each column
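One hedged way to fill in that body — using a hypothetical helper name and the stdlib sqlite3 module rather than Datasette's async db.execute(), so this is a sketch of the query-building step only, not the real hook signature:

```python
import sqlite3

def build_statistics_sql(table, numeric_columns):
    # Build one query computing sum/avg/min/max for every numeric column
    parts = []
    for col in numeric_columns:
        for agg in ("sum", "avg", "min", "max"):
            parts.append('{agg}("{col}") as "{col}_{agg}"'.format(agg=agg, col=col))
    return 'select {} from "{}"'.format(", ".join(parts), table)

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("create table t (id integer, score float, name text)")
conn.executemany("insert into t values (?, ?, ?)", [(1, 2.0, "a"), (2, 4.0, "b")])
stats = conn.execute(build_statistics_sql("t", ["id", "score"])).fetchone()
```

The "which columns are integer/floats" step could come from the table's declared column types or from sampling with SQLite's typeof(); that part is left open here, as in the original sketch.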
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109190401 https://github.com/simonw/sqlite-utils/issues/428#issuecomment-1109190401 https://api.github.com/repos/simonw/sqlite-utils/issues/428 IC_kwDOCGYnMM5CHOcB simonw 9599 2022-04-26T01:05:29Z 2022-04-26T01:05:29Z OWNER

Django makes extensive use of savepoints for nested transactions: https://docs.djangoproject.com/en/4.0/topics/db/transactions/#savepoints
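The underlying SQLite mechanism is simple — a sketch using the stdlib sqlite3 module directly (not sqlite-utils, which had no savepoint API at this point):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
conn.execute("create table docs (id integer primary key, title text)")

conn.execute("begin")
conn.execute("insert into docs (title) values ('outer')")
conn.execute("savepoint nested")    # start a nested transaction
conn.execute("insert into docs (title) values ('inner')")
conn.execute("rollback to nested")  # undo only the work since the savepoint
conn.execute("release nested")
conn.execute("commit")              # the outer insert survives

titles = [row[0] for row in conn.execute("select title from docs")]
```

This is the same pattern Django's atomic() blocks use: the outermost block maps to BEGIN/COMMIT and inner blocks map to SAVEPOINT/RELEASE.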

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Research adding support for savepoints 1215216249  
1109174715 https://github.com/simonw/datasette/issues/1720#issuecomment-1109174715 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHKm7 simonw 9599 2022-04-26T00:40:13Z 2022-04-26T00:43:33Z OWNER

Some of the things I'd like to use ?_extra= for, that may or may not make sense as plugins:


  • Performance breakdown information, maybe including explain output for a query/table
  • Information about the tables that were consulted in a query - imagine pulling in additional table metadata
  • Statistical aggregates against the full set of results. This may well be a Datasette core feature at some point in the future, but being able to provide it early as a plugin would be really cool.
  • For tables, what are the other tables they can join against?
  • Suggested facets
  • Facet results themselves
  • New custom facets I haven't thought of - though the register_facet_classes hook covers that already
  • Table schema
  • Table metadata
  • Analytics - how many times has this table been queried? Would be a plugin thing
  • For geospatial data, how about a GeoJSON polygon that represents the bounding box for all returned results? Effectively this is an extra aggregation.
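The bounding-box idea from that list can be sketched as a pure function — assuming the extra is handed (longitude, latitude) pairs extracted from the returned rows:

```python
def bounding_box_geojson(points):
    # points: iterable of (longitude, latitude) pairs from the result set
    lons = [p[0] for p in points]
    lats = [p[1] for p in points]
    west, east = min(lons), max(lons)
    south, north = min(lats), max(lats)
    # A GeoJSON Polygon ring must be closed (first point repeated at the end)
    return {
        "type": "Polygon",
        "coordinates": [[
            [west, south], [east, south], [east, north], [west, north], [west, south]
        ]],
    }

bbox = bounding_box_geojson([(-122.4, 37.8), (-73.9, 40.7), (-87.6, 41.9)])
```

The function name and input shape are hypothetical; the real extra would need to know which columns hold the coordinates.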

Looking at https://github-to-sqlite.dogsheep.net/github/commits.json?_labels=on&_shape=objects for inspiration.

I think there's a separate potential mechanism in the future that lets you add custom columns to a table. This would affect .csv and the HTML presentation too, which makes it a different concept from the ?_extra= hook that affects the JSON export (and the context that is fed to the HTML templates).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109171871 https://github.com/simonw/datasette/issues/1720#issuecomment-1109171871 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHJ6f simonw 9599 2022-04-26T00:34:48Z 2022-04-26T00:34:48Z OWNER

Let's try sketching out a register_table_extras plugin for something new.

The first idea I came up with suggests adding new fields to the individual row records that come back - my mental model for extras so far has been that they add new keys to the root object.

So if a table result looked like this:

{
  "rows": [
    {"id": 1, "name": "Cleo"},
    {"id": 2, "name": "Suna"}
  ],
  "next_url": null
}

I was initially thinking that ?_extra=facets would add a "facets": {...} key to that root object.

Here's a plugin idea I came up with that would probably justify adding to the individual row objects instead:

  • ?_extra=check404s - does an async HEAD request against every column value that looks like a URL and checks if it returns a 404

This could also work by adding a "check404s": {"url-here": 200} key to the root object though.

I think I need some better plugin concepts before committing to this new hook. There's overlap between this and how I want the enrichments mechanism (see here) to work.
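The check404s idea can be sketched independently of the hook design — here with an injected head_status callable standing in for a real async HTTP client (a production version might wrap something like httpx.AsyncClient().head(url); that wiring is assumed, not shown):

```python
import asyncio

def looks_like_url(value):
    return isinstance(value, str) and value.startswith(("http://", "https://"))

async def check404s(rows, head_status):
    # head_status: async callable url -> HTTP status code
    urls = sorted({v for row in rows for v in row.values() if looks_like_url(v)})
    # Issue all HEAD checks concurrently
    statuses = await asyncio.gather(*(head_status(u) for u in urls))
    return dict(zip(urls, statuses))

async def fake_head(url):
    # Stand-in for a real HEAD request
    return 404 if "missing" in url else 200

rows = [
    {"id": 1, "link": "https://example.com/ok"},
    {"id": 2, "link": "https://example.com/missing"},
]
statuses = asyncio.run(check404s(rows, fake_head))
```

Returning the url-to-status mapping as a single root-level key matches the "check404s": {"url-here": 200} shape described above.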

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109165411 https://github.com/simonw/datasette/issues/1720#issuecomment-1109165411 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHIVj simonw 9599 2022-04-26T00:22:42Z 2022-04-26T00:22:42Z OWNER

Passing pk_values to the plugin hook feels odd. I think I'd pass a row object instead and let the code look up the primary key values on that row (by introspecting the primary keys for the table).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109164803 https://github.com/simonw/datasette/issues/1720#issuecomment-1109164803 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHIMD simonw 9599 2022-04-26T00:21:40Z 2022-04-26T00:21:40Z OWNER

What would the existing https://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables feature look like if it was re-imagined as a register_row_extras() plugin?

Rough sketch, copying most of the code from https://github.com/simonw/datasette/blob/579f59dcec43a91dd7d404e00b87a00afd8515f2/datasette/views/row.py#L98

from datasette import hookimpl
from datasette.database import QueryInterrupted
from datasette.utils import escape_sqlite

@hookimpl
def register_row_extras(datasette):
    return [foreign_key_tables]

async def foreign_key_tables(datasette, database, table, pk_values):
    if len(pk_values) != 1:
        return []
    db = datasette.get_database(database)
    all_foreign_keys = await db.get_all_foreign_keys()
    foreign_keys = all_foreign_keys[table]["incoming"]
    if len(foreign_keys) == 0:
        return []

    sql = "select " + ", ".join(
        [
            "(select count(*) from {table} where {column}=:id)".format(
                table=escape_sqlite(fk["other_table"]),
                column=escape_sqlite(fk["other_column"]),
            )
            for fk in foreign_keys
        ]
    )
    try:
        rows = list(await db.execute(sql, {"id": pk_values[0]}))
    except QueryInterrupted:
        # Almost certainly hit the timeout
        return []

    foreign_table_counts = dict(
        zip(
            [(fk["other_table"], fk["other_column"]) for fk in foreign_keys],
            list(rows[0]),
        )
    )
    foreign_key_tables = []
    for fk in foreign_keys:
        count = (
            foreign_table_counts.get((fk["other_table"], fk["other_column"])) or 0
        )
        key = fk["other_column"]
        if key.startswith("_"):
            key += "__exact"
        link = "{}?{}={}".format(
            datasette.urls.table(database, fk["other_table"]),  # was self.ds in the RowView original
            key,
            ",".join(pk_values),
        )
        foreign_key_tables.append({**fk, **{"count": count, "link": link}})
    return foreign_key_tables
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109162123 https://github.com/simonw/datasette/issues/1720#issuecomment-1109162123 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHHiL simonw 9599 2022-04-26T00:16:42Z 2022-04-26T00:16:51Z OWNER

Actually I'm going to imitate the existing register_* hooks:

  • def register_output_renderer(datasette)
  • def register_facet_classes()
  • def register_routes(datasette)
  • def register_commands(cli)
  • def register_magic_parameters(datasette)

So I'm going to call the new hooks:

  • register_table_extras(datasette)
  • register_row_extras(datasette)
  • register_query_extras(datasette)

They'll return a list of async def functions. The names of those functions will become the names of the extras.
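The names-become-extras idea could be dispatched like this — a sketch with hypothetical registered functions, ignoring pluggy collection and the datasette argument:

```python
import asyncio

# Hypothetical extras a plugin might register
async def human_description():
    return "a table of comments"

async def row_count():
    return 7941

def register_table_extras():
    # The real hook would accept datasette and be collected via pluggy
    return [human_description, row_count]

async def resolve_extras(requested):
    # Function names are the extra names, so ?_extra=row_count maps to row_count()
    registry = {fn.__name__: fn for fn in register_table_extras()}
    values = await asyncio.gather(*(registry[name]() for name in requested))
    return dict(zip(requested, values))

extras = asyncio.run(resolve_extras(["row_count"]))
```

Only the requested extras execute, which keeps the default JSON response cheap.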

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109160226 https://github.com/simonw/datasette/issues/1720#issuecomment-1109160226 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHHEi simonw 9599 2022-04-26T00:14:11Z 2022-04-26T00:14:11Z OWNER

There are four existing plugin hooks that include the word "extra" but use it to mean something else - to mean additional CSS/JS/variables to be injected into the page:

  • def extra_css_urls(...)
  • def extra_js_urls(...)
  • def extra_body_script(...)
  • def extra_template_vars(...)

I think extra_* and *_extras are different enough that they won't be confused with each other.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109159307 https://github.com/simonw/datasette/issues/1720#issuecomment-1109159307 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHG2L simonw 9599 2022-04-26T00:12:28Z 2022-04-26T00:12:28Z OWNER

I'm going to keep table and row separate. So I think I need to add three new plugin hooks:

  • table_extras()
  • row_extras()
  • query_extras()
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1109158903 https://github.com/simonw/datasette/issues/1720#issuecomment-1109158903 https://api.github.com/repos/simonw/datasette/issues/1720 IC_kwDOBm6k_c5CHGv3 simonw 9599 2022-04-26T00:11:42Z 2022-04-26T00:11:42Z OWNER

Places this plugin hook (or hooks?) should be able to affect:

  • JSON for a table/view
  • JSON for a row
  • JSON for a canned query
  • JSON for a custom arbitrary query

I'm going to combine those last two, which means there are three places. But maybe I can combine the table one and the row one as well?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Design plugin hook for extras 1215174094  
1108907238 https://github.com/simonw/datasette/issues/1719#issuecomment-1108907238 https://api.github.com/repos/simonw/datasette/issues/1719 IC_kwDOBm6k_c5CGJTm simonw 9599 2022-04-25T18:34:21Z 2022-04-25T18:34:21Z OWNER

Well this refactor turned out to be pretty quick and really does greatly simplify both the RowView and TableView classes. Very happy with this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor `RowView` and remove `RowTableShared` 1214859703  
1108890170 https://github.com/simonw/datasette/issues/262#issuecomment-1108890170 https://api.github.com/repos/simonw/datasette/issues/262 IC_kwDOBm6k_c5CGFI6 simonw 9599 2022-04-25T18:17:09Z 2022-04-25T18:18:39Z OWNER

I spotted in https://github.com/simonw/datasette/issues/1719#issuecomment-1108888494 that there's actually already an undocumented implementation of ?_extras=foreign_key_tables - https://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables

I added that feature all the way back in November 2017! https://github.com/simonw/datasette/commit/a30c5b220c15360d575e94b0e67f3255e120b916

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add ?_extra= mechanism for requesting extra properties in JSON 323658641  
1108888494 https://github.com/simonw/datasette/issues/1719#issuecomment-1108888494 https://api.github.com/repos/simonw/datasette/issues/1719 IC_kwDOBm6k_c5CGEuu simonw 9599 2022-04-25T18:15:42Z 2022-04-25T18:15:42Z OWNER

Here's an undocumented feature I forgot existed: https://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables

?_extras=foreign_key_tables

https://github.com/simonw/datasette/blob/0bc5186b7bb4fc82392df08f99a9132f84dcb331/datasette/views/table.py#L1021-L1024

It's even covered by the tests:

https://github.com/simonw/datasette/blob/b9c2b1cfc8692b9700416db98721fa3ec982f6be/tests/test_api.py#L691-L703

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor `RowView` and remove `RowTableShared` 1214859703  
1108884171 https://github.com/simonw/datasette/issues/1719#issuecomment-1108884171 https://api.github.com/repos/simonw/datasette/issues/1719 IC_kwDOBm6k_c5CGDrL simonw 9599 2022-04-25T18:10:46Z 2022-04-25T18:12:45Z OWNER

It looks like the only class method from that shared class needed by RowView is self.display_columns_and_rows().

Which I've been wanting to refactor to provide to QueryView too:

  • #715

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor `RowView` and remove `RowTableShared` 1214859703  
1108875068 https://github.com/simonw/datasette/issues/1715#issuecomment-1108875068 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5CGBc8 simonw 9599 2022-04-25T18:03:13Z 2022-04-25T18:06:33Z OWNER

The RowTableShared class is making this a whole lot more complicated.

I'm going to split the RowView view out into an entirely separate views/row.py module.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1108877454 https://github.com/simonw/datasette/issues/1715#issuecomment-1108877454 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5CGCCO simonw 9599 2022-04-25T18:04:27Z 2022-04-25T18:04:27Z OWNER

Pushed my WIP on this to the api-extras branch: 5053f1ea83194ecb0a5693ad5dada5b25bf0f7e6

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1107873311 https://github.com/simonw/datasette/issues/1718#issuecomment-1107873311 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCM4f simonw 9599 2022-04-24T16:24:14Z 2022-04-24T16:24:14Z OWNER

Wrote up what I learned in a TIL: https://til.simonwillison.net/sphinx/blacken-docs

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107873271 https://github.com/simonw/datasette/issues/1718#issuecomment-1107873271 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCM33 simonw 9599 2022-04-24T16:23:57Z 2022-04-24T16:23:57Z OWNER

Turns out I didn't need that git diff-index trick after all - the blacken-docs command returns a non-zero exit code if it changes any files.

Submitted a documentation PR to that project instead:

  • https://github.com/asottile/blacken-docs/pull/162
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107870788 https://github.com/simonw/datasette/issues/1718#issuecomment-1107870788 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCMRE simonw 9599 2022-04-24T16:09:23Z 2022-04-24T16:09:23Z OWNER

One more attempt at testing the git diff-index trick.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107869884 https://github.com/simonw/datasette/issues/1718#issuecomment-1107869884 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCMC8 simonw 9599 2022-04-24T16:04:03Z 2022-04-24T16:04:03Z OWNER

OK, I'm expecting this one to fail at the git diff-index --quiet HEAD -- check.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107869556 https://github.com/simonw/datasette/issues/1718#issuecomment-1107869556 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCL90 simonw 9599 2022-04-24T16:02:27Z 2022-04-24T16:02:27Z OWNER

Looking at that first error it appears to be a place where I had deliberately omitted the body of the function:

https://github.com/simonw/datasette/blob/36573638b0948174ae237d62e6369b7d55220d7f/docs/internals.rst#L196-L211

I can use ... as the function body here to get it to pass.

Fixing those warnings actually helped me spot a couple of bugs, so I'm glad this happened.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1081861670 https://github.com/simonw/datasette/pull/1693#issuecomment-1081861670 https://api.github.com/repos/simonw/datasette/issues/1693 IC_kwDOBm6k_c5Ae-Ym codecov[bot] 22429695 2022-03-29T13:18:47Z 2022-04-24T15:58:09Z NONE

Codecov Report

Merging #1693 (52f403a) into main (40ef8eb) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1693   +/-   ##
=======================================
  Coverage   91.75%   91.75%           
=======================================
  Files          34       34           
  Lines        4575     4575           
=======================================
  Hits         4198     4198           
  Misses        377      377           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3657363...52f403a. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Bump black from 22.1.0 to 22.3.0 1184850337  
1107868585 https://github.com/simonw/datasette/issues/1718#issuecomment-1107868585 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCLup simonw 9599 2022-04-24T15:57:10Z 2022-04-24T15:57:19Z OWNER

The tests failed there because of what I thought were warnings but turn out to be treated as errors:

% blacken-docs -l 60 docs/*.rst                                        
docs/internals.rst:196: code block parse error Cannot parse: 14:0: <line number missing in source>
docs/json_api.rst:449: code block parse error Cannot parse: 1:0: <link rel="alternate"
docs/plugin_hooks.rst:250: code block parse error Cannot parse: 6:4:     ]
docs/plugin_hooks.rst:311: code block parse error Cannot parse: 38:0: <line number missing in source>
docs/testing_plugins.rst:135: code block parse error Cannot parse: 5:0: <line number missing in source>
% echo $?
1
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107867281 https://github.com/simonw/datasette/issues/1718#issuecomment-1107867281 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCLaR simonw 9599 2022-04-24T15:49:23Z 2022-04-24T15:49:23Z OWNER

I'm going to push the first commit with a deliberate missing formatting to check that the tests fail.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107866013 https://github.com/simonw/datasette/issues/1718#issuecomment-1107866013 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCLGd simonw 9599 2022-04-24T15:42:07Z 2022-04-24T15:42:07Z OWNER

In the absence of --check I can use this to detect if changes are applied:

% git diff-index --quiet HEAD --
% echo $?                       
0
% blacken-docs -l 60 docs/*.rst
docs/authentication.rst: Rewriting...
...
% git diff-index --quiet HEAD --
% echo $?                       
1
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107865493 https://github.com/simonw/datasette/issues/1718#issuecomment-1107865493 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCK-V simonw 9599 2022-04-24T15:39:02Z 2022-04-24T15:39:02Z OWNER

There's no blacken-docs --check option so I filed a feature request:
- https://github.com/asottile/blacken-docs/issues/161

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107863924 https://github.com/simonw/datasette/issues/1718#issuecomment-1107863924 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCKl0 simonw 9599 2022-04-24T15:30:03Z 2022-04-24T15:30:03Z OWNER

On the one hand, I'm not crazy about some of the indentation decisions Black made here - in particular this one, which I had indented deliberately for readability:

 diff --git a/docs/authentication.rst b/docs/authentication.rst
index 0d98cf8..8008023 100644
--- a/docs/authentication.rst
+++ b/docs/authentication.rst
@@ -381,11 +381,7 @@ Authentication plugins can set signed ``ds_actor`` cookies themselves like so:
 .. code-block:: python

     response = Response.redirect("/")
-    response.set_cookie("ds_actor", datasette.sign({
-        "a": {
-            "id": "cleopaws"
-        }
-    }, "actor"))
+    response.set_cookie("ds_actor", datasette.sign({"a": {"id": "cleopaws"}}, "actor"))

But... consistency is a virtue. Maybe I'm OK with just this one disagreement?

Also: I've been mentally trying to keep the line lengths a bit shorter to help them be more readable on mobile devices.

I'll try a different line length using blacken-docs -l 60 docs/*.rst instead.

I like this more - here's the result for that example:

```diff
diff --git a/docs/authentication.rst b/docs/authentication.rst
index 0d98cf8..2496073 100644
--- a/docs/authentication.rst
+++ b/docs/authentication.rst
@@ -381,11 +381,10 @@ Authentication plugins can set signed ``ds_actor`` cookies themselves like so:
 .. code-block:: python
 
     response = Response.redirect("/")
-    response.set_cookie("ds_actor", datasette.sign({
-        "a": {
-            "id": "cleopaws"
-        }
-    }, "actor"))
+    response.set_cookie(
+        "ds_actor",
+        datasette.sign({"a": {"id": "cleopaws"}}, "actor"),
+    )
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107863365 https://github.com/simonw/datasette/issues/1718#issuecomment-1107863365 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCKdF simonw 9599 2022-04-24T15:26:41Z 2022-04-24T15:26:41Z OWNER

Tried this:

```
pip install blacken-docs
blacken-docs docs/*.rst
git diff | pbcopy
```

Got this:
```diff
diff --git a/docs/authentication.rst b/docs/authentication.rst
index 0d98cf8..8008023 100644
--- a/docs/authentication.rst
+++ b/docs/authentication.rst
@@ -381,11 +381,7 @@ Authentication plugins can set signed ``ds_actor`` cookies themselves like so:
 .. code-block:: python
 
     response = Response.redirect("/")
-    response.set_cookie("ds_actor", datasette.sign({
-        "a": {
-            "id": "cleopaws"
-        }
-    }, "actor"))
+    response.set_cookie("ds_actor", datasette.sign({"a": {"id": "cleopaws"}}, "actor"))
 
 Note that you need to pass ``"actor"`` as the namespace to :ref:`datasette_sign`.
@@ -412,12 +408,16 @@ To include an expiry, add a ``"e"`` key to the cookie value containing a `base62
     expires_at = int(time.time()) + (24 * 60 * 60)
 
     response = Response.redirect("/")
-    response.set_cookie("ds_actor", datasette.sign({
-        "a": {
-            "id": "cleopaws"
-        },
-        "e": baseconv.base62.encode(expires_at),
-    }, "actor"))
+    response.set_cookie(
+        "ds_actor",
+        datasette.sign(
+            {
+                "a": {"id": "cleopaws"},
+                "e": baseconv.base62.encode(expires_at),
+            },
+            "actor",
+        ),
+    )
 
 The resulting cookie will encode data that looks something like this:
diff --git a/docs/spatialite.rst b/docs/spatialite.rst
index d1b300b..556bad8 100644
--- a/docs/spatialite.rst
+++ b/docs/spatialite.rst
@@ -58,19 +58,22 @@ Here's a recipe for taking a table with existing latitude and longitude columns,
 .. code-block:: python
 
     import sqlite3
-    conn = sqlite3.connect('museums.db')
+
+    conn = sqlite3.connect("museums.db")
     # Lead the spatialite extension:
     conn.enable_load_extension(True)
-    conn.load_extension('/usr/local/lib/mod_spatialite.dylib')
+    conn.load_extension("/usr/local/lib/mod_spatialite.dylib")
     # Initialize spatial metadata for this database:
-    conn.execute('select InitSpatialMetadata(1)')
+    conn.execute("select InitSpatialMetadata(1)")
     # Add a geometry column called point_geom to our museums table:
     conn.execute("SELECT AddGeometryColumn('museums', 'point_geom', 4326, 'POINT', 2);")
     # Now update that geometry column with the lat/lon points
-    conn.execute('''
+    conn.execute(
+        """
     UPDATE museums SET
     point_geom = GeomFromText('POINT('||"longitude"||' '||"latitude"||')',4326);
-    ''')
+    """
+    )
     # Now add a spatial index to that column
     conn.execute('select CreateSpatialIndex("museums", "point_geom");')
     # If you don't commit your changes will not be persisted:
@@ -186,13 +189,14 @@ Here's Python code to create a SQLite database, enable SpatiaLite, create a plac
 .. code-block:: python
 
     import sqlite3
-    conn = sqlite3.connect('places.db')
+
+    conn = sqlite3.connect("places.db")
     # Enable SpatialLite extension
     conn.enable_load_extension(True)
-    conn.load_extension('/usr/local/lib/mod_spatialite.dylib')
+    conn.load_extension("/usr/local/lib/mod_spatialite.dylib")
     # Create the masic countries table
-    conn.execute('select InitSpatialMetadata(1)')
-    conn.execute('create table places (id integer primary key, name text);')
+    conn.execute("select InitSpatialMetadata(1)")
+    conn.execute("create table places (id integer primary key, name text);")
     # Add a MULTIPOLYGON Geometry column
     conn.execute("SELECT AddGeometryColumn('places', 'geom', 4326, 'MULTIPOLYGON', 2);")
     # Add a spatial index against the new column
@@ -201,13 +205,17 @@ Here's Python code to create a SQLite database, enable SpatiaLite, create a plac
     from shapely.geometry.multipolygon import MultiPolygon
     from shapely.geometry import shape
     import requests
-    geojson = requests.get('https://data.whosonfirst.org/404/227/475/404227475.geojson').json()
+
+    geojson = requests.get(
+        "https://data.whosonfirst.org/404/227/475/404227475.geojson"
+    ).json()
     # Convert to "Well Known Text" format
-    wkt = shape(geojson['geometry']).wkt
+    wkt = shape(geojson["geometry"]).wkt
     # Insert and commit the record
-    conn.execute("INSERT INTO places (id, name, geom) VALUES(null, ?, GeomFromText(?, 4326))", (
-        "Wales", wkt
-    ))
+    conn.execute(
+        "INSERT INTO places (id, name, geom) VALUES(null, ?, GeomFromText(?, 4326))",
+        ("Wales", wkt),
+    )
     conn.commit()
 
 Querying polygons using within()
diff --git a/docs/writing_plugins.rst b/docs/writing_plugins.rst
index bd60a4b..5af01f6 100644
--- a/docs/writing_plugins.rst
+++ b/docs/writing_plugins.rst
@@ -18,9 +18,10 @@ The quickest way to start writing a plugin is to create a my_plugin.py file
 
     from datasette import hookimpl
 
+
     @hookimpl
     def prepare_connection(conn):
-        conn.create_function('hello_world', 0, lambda: 'Hello world!')
+        conn.create_function("hello_world", 0, lambda: "Hello world!")
 
 If you save this in plugins/my_plugin.py you can then start Datasette like this::
@@ -60,22 +61,18 @@ The example consists of two files: a setup.py file that defines the plugin:
 
     from setuptools import setup
 
-    VERSION = '0.1'
+    VERSION = "0.1"
 
     setup(
-        name='datasette-plugin-demos',
-        description='Examples of plugins for Datasette',
-        author='Simon Willison',
-        url='https://github.com/simonw/datasette-plugin-demos',
-        license='Apache License, Version 2.0',
+        name="datasette-plugin-demos",
+        description="Examples of plugins for Datasette",
+        author="Simon Willison",
+        url="https://github.com/simonw/datasette-plugin-demos",
+        license="Apache License, Version 2.0",
         version=VERSION,
-        py_modules=['datasette_plugin_demos'],
-        entry_points={
-            'datasette': [
-                'plugin_demos = datasette_plugin_demos'
-            ]
-        },
-        install_requires=['datasette']
+        py_modules=["datasette_plugin_demos"],
+        entry_points={"datasette": ["plugin_demos = datasette_plugin_demos"]},
+        install_requires=["datasette"],
     )
 
 And a Python module file, datasette_plugin_demos.py, that implements the plugin:
@@ -88,12 +85,12 @@ And a Python module file, datasette_plugin_demos.py, that implements the plu
 
     @hookimpl
     def prepare_jinja2_environment(env):
-        env.filters['uppercase'] = lambda u: u.upper()
+        env.filters["uppercase"] = lambda u: u.upper()
 
     @hookimpl
     def prepare_connection(conn):
-        conn.create_function('random_integer', 2, random.randint)
+        conn.create_function("random_integer", 2, random.randint)
 
 Having built a plugin in this way you can turn it into an installable package using the following command::
@@ -123,11 +120,13 @@ To bundle the static assets for a plugin in the package that you publish to PyPI
 
 .. code-block:: python
 
-    package_data={
-        'datasette_plugin_name': [
-            'static/plugin.js',
-        ],
-    },
+    package_data = (
+        {
+            "datasette_plugin_name": [
+                "static/plugin.js",
+            ],
+        },
+    )
 
 Where datasette_plugin_name is the name of the plugin package (note that it uses underscores, not hyphens) and static/plugin.js is the path within that package to the static file.
@@ -152,11 +151,13 @@ Templates should be bundled for distribution using the same package_data mec
 
 .. code-block:: python
 
-    package_data={
-        'datasette_plugin_name': [
-            'templates/my_template.html',
-        ],
-    },
+    package_data = (
+        {
+            "datasette_plugin_name": [
+                "templates/my_template.html",
+            ],
+        },
+    )
 
 You can also use wildcards here such as templates/*.html. See datasette-edit-schema <https://github.com/simonw/datasette-edit-schema>__ for an example of this pattern.
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107862882 https://github.com/simonw/datasette/issues/1718#issuecomment-1107862882 https://api.github.com/repos/simonw/datasette/issues/1718 IC_kwDOBm6k_c5CCKVi simonw 9599 2022-04-24T15:23:56Z 2022-04-24T15:23:56Z OWNER

Found https://github.com/asottile/blacken-docs via
- https://github.com/psf/black/issues/294

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Code examples in the documentation should be formatted with Black 1213683988  
1107848097 https://github.com/simonw/datasette/pull/1717#issuecomment-1107848097 https://api.github.com/repos/simonw/datasette/issues/1717 IC_kwDOBm6k_c5CCGuh simonw 9599 2022-04-24T14:02:37Z 2022-04-24T14:02:37Z OWNER

This is a neat feature, thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add timeout option to Cloudrun build 1213281044  
1107459446 https://github.com/simonw/datasette/pull/1717#issuecomment-1107459446 https://api.github.com/repos/simonw/datasette/issues/1717 IC_kwDOBm6k_c5CAn12 codecov[bot] 22429695 2022-04-23T11:56:36Z 2022-04-23T11:56:36Z NONE

Codecov Report

Merging #1717 (9b9a314) into main (d57c347) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1717   +/-   ##
=======================================
  Coverage   91.75%   91.75%           
=======================================
  Files          34       34           
  Lines        4574     4575    +1     
=======================================
+ Hits         4197     4198    +1     
  Misses        377      377           
Impacted Files                | Coverage Δ
----------------------------- | --------------------------
datasette/publish/cloudrun.py | 97.05% <100.00%> (+0.04%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d57c347...9b9a314. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add timeout option to Cloudrun build 1213281044  
1106989581 https://github.com/simonw/datasette/issues/1715#issuecomment-1106989581 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5B-1IN simonw 9599 2022-04-22T23:03:29Z 2022-04-22T23:03:29Z OWNER

I'm having second thoughts about injecting request - might be better to have the view function pull the relevant pieces out of the request before triggering the rest of the resolution.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1106947168 https://github.com/simonw/datasette/issues/1715#issuecomment-1106947168 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5B-qxg simonw 9599 2022-04-22T22:25:57Z 2022-04-22T22:26:06Z OWNER
async def database(request: Request, datasette: Datasette) -> Database:
    database_route = tilde_decode(request.url_vars["database"])
    try:
        return datasette.get_database(route=database_route)
    except KeyError:
        raise NotFound("Database not found: {}".format(database_route))

async def table_name(request: Request) -> str:
    return tilde_decode(request.url_vars["table"])
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1106945876 https://github.com/simonw/datasette/issues/1715#issuecomment-1106945876 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5B-qdU simonw 9599 2022-04-22T22:24:29Z 2022-04-22T22:24:29Z OWNER

Looking at the start of TableView.data():

https://github.com/simonw/datasette/blob/d57c347f35bcd8cff15f913da851b4b8eb030867/datasette/views/table.py#L333-L346

I'm going to resolve table_name and database from the URL - table_name will be a string, database will be the DB object returned by datasette.get_database(). Then those can be passed in separately too.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1106923258 https://github.com/simonw/datasette/issues/1716#issuecomment-1106923258 https://api.github.com/repos/simonw/datasette/issues/1716 IC_kwDOBm6k_c5B-k76 simonw 9599 2022-04-22T22:02:07Z 2022-04-22T22:02:07Z OWNER

https://github.com/simonw/datasette/blame/main/datasette/views/base.py

https://user-images.githubusercontent.com/9599/164801564-d8a11ce9-7d9b-4e85-8947-a547d2986ef3.png

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Configure git blame to ignore Black commit 1212838949  
1106908642 https://github.com/simonw/datasette/issues/1715#issuecomment-1106908642 https://api.github.com/repos/simonw/datasette/issues/1715 IC_kwDOBm6k_c5B-hXi simonw 9599 2022-04-22T21:47:55Z 2022-04-22T21:47:55Z OWNER

I need an asyncinject Registry with functions registered to perform the role of the table view.

Something like this perhaps:

def table_html_context(facet_results, query, datasette, rows):
    return {...}

That then gets called like this:

async def view(request):
    registry = Registry(facet_results, query, datasette, rows)
    context = await registry.resolve(table_html_context, request=request, datasette=datasette)
    return Response.html(await datasette.render("table.html", context))

It's also interesting to start thinking about this from a Python client library point of view. If I'm writing code outside of the HTTP request cycle, what would it look like?

One thing I could do: break out is the code that turns a request into a list of pairs extracted from the request - this code here: https://github.com/simonw/datasette/blob/8338c66a57502ef27c3d7afb2527fbc0663b2570/datasette/views/table.py#L442-L449

I could turn that into a typed dependency injection function like this:

def filter_args(request: Request) -> List[Tuple[str, str]]:
    # Arguments that start with _ and don't contain a __ are
    # special - things like ?_search= - and should not be
    # treated as filters.
    filter_args = []
    for key in request.args:
        if not (key.startswith("_") and "__" not in key):
            for v in request.args.getlist(key):
                filter_args.append((key, v))
    return filter_args

Then I can either pass a request into a .resolve() call, or I can instead skip that function by passing:

output = registry.resolve(table_context, filter_args=[("foo", "bar")])

I do need to think about where plugins get executed in all of this.
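The `filter_args` function above only depends on `request.args`, so it can be exercised entirely outside the HTTP request cycle with a stand-in object. A minimal sketch (the `MultiArgs` class here is an assumption, just enough of the `request.args` interface to run the logic):

```python
from typing import Dict, List, Tuple


class MultiArgs:
    # Minimal stand-in for the request.args object: iterable keys plus getlist()
    def __init__(self, data: Dict[str, List[str]]):
        self._data = data

    def __iter__(self):
        return iter(self._data)

    def getlist(self, key: str) -> List[str]:
        return self._data.get(key, [])


def filter_args(args: MultiArgs) -> List[Tuple[str, str]]:
    # Arguments that start with _ and don't contain a __ are
    # special - things like ?_search= - and should not be
    # treated as filters.
    pairs = []
    for key in args:
        if not (key.startswith("_") and "__" not in key):
            for v in args.getlist(key):
                pairs.append((key, v))
    return pairs


args = MultiArgs({"name": ["Wales"], "_search": ["museum"], "age__gt": ["5"]})
print(filter_args(args))  # [('name', 'Wales'), ('age__gt', '5')]
```

Note that `age__gt` survives because it contains `__`, while `_search` is filtered out - exactly the distinction the comment in the original code describes.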

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor TableView to use asyncinject 1212823665  
1105642187 https://github.com/simonw/datasette/issues/1101#issuecomment-1105642187 https://api.github.com/repos/simonw/datasette/issues/1101 IC_kwDOBm6k_c5B5sLL eyeseast 25778 2022-04-21T18:59:08Z 2022-04-21T18:59:08Z CONTRIBUTOR

Ha! That was your idea (and a good one).

But it's probably worth measuring to see what overhead it adds. It did require both passing in the database and making the whole thing async.

Just timing the queries themselves:

  1. Using AsGeoJSON(geometry) as geometry takes 10.235 ms
  2. Leaving as binary takes 8.63 ms

Looking at the network panel:

  1. Takes about 200 ms for the fetch request
  2. Takes about 300 ms

I'm not sure how best to time the GeoJSON generation, but it would be interesting to check. Maybe I'll write a plugin to add query times to response headers.

The other thing to consider with async streaming is that it might be well-suited for a slower response. When I have to get the whole result and send a response in a fixed amount of time, I need the most efficient query possible. If I can hang onto a connection and get things one chunk at a time, maybe it's ok if there's some overhead.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
register_output_renderer() should support streaming data 749283032  
1105615625 https://github.com/simonw/datasette/issues/1101#issuecomment-1105615625 https://api.github.com/repos/simonw/datasette/issues/1101 IC_kwDOBm6k_c5B5lsJ simonw 9599 2022-04-21T18:31:41Z 2022-04-21T18:32:22Z OWNER

The datasette-geojson plugin is actually an interesting case here, because of the way it converts SpatiaLite geometries into GeoJSON: https://github.com/eyeseast/datasette-geojson/blob/602c4477dc7ddadb1c0a156cbcd2ef6688a5921d/datasette_geojson/__init__.py#L61-L66

    if isinstance(geometry, bytes):
        results = await db.execute(
            "SELECT AsGeoJSON(:geometry)", {"geometry": geometry}
        )
        return geojson.loads(results.single_value())

That actually seems to work really well as-is, but it does worry me a bit that it ends up having to execute an extra SELECT query for every single returned row - especially in streaming mode where it might be asked to return 1m rows at once.

My PostgreSQL/MySQL engineering brain says that this would be better handled by doing a chunk of these (maybe 100) at once, to avoid the per-query-overhead - but with SQLite that might not be necessary.

At any rate, this is one of the reasons I'm interested in "iterate over this sequence of chunks of 100 rows at a time" as a potential option here.

Of course, a better solution would be for datasette-geojson to have a way to influence the SQL query before it is executed, adding a AsGeoJSON(geometry) clause to it - so that's something I'm open to as well.
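The "chunk of these (maybe 100) at once" idea amounts to a batching helper plus a single SELECT with one `AsGeoJSON(?)` per geometry in the batch. A hedged sketch - the batched SQL shape is an assumption, and actually executing it still requires SpatiaLite to be loaded:

```python
from itertools import islice


def chunks(iterable, size):
    # Yield lists of up to `size` items from any iterable of rows.
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk


def batch_as_geojson_sql(n):
    # One SELECT that converts n geometries in a single round trip,
    # amortizing the per-query overhead across the batch.
    return "SELECT " + ", ".join(["AsGeoJSON(?)"] * n)


geometries = [b"g1", b"g2", b"g3", b"g4", b"g5"]
for batch in chunks(geometries, 2):
    sql = batch_as_geojson_sql(len(batch))
    # results = await db.execute(sql, batch)  # hypothetical call site
```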

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
register_output_renderer() should support streaming data 749283032  
1105608964 https://github.com/simonw/datasette/issues/1101#issuecomment-1105608964 https://api.github.com/repos/simonw/datasette/issues/1101 IC_kwDOBm6k_c5B5kEE simonw 9599 2022-04-21T18:26:29Z 2022-04-21T18:26:29Z OWNER

I'm questioning if the mechanisms should be separate at all now - a single response rendering is really just a case of a streaming response that only pulls the first N records from the iterator.

It probably needs to be an async for iterator, which I've not worked with much before. Good opportunity to learn.

This actually gets a fair bit more complicated due to the work I'm doing right now to improve the default JSON API:

  • #1709

I want to do things like make faceting results optionally available to custom renderers - which is a separate concern from streaming rows.

I'm going to poke around with a bunch of prototypes and see what sticks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
register_output_renderer() should support streaming data 749283032  
1105588651 https://github.com/simonw/datasette/issues/1101#issuecomment-1105588651 https://api.github.com/repos/simonw/datasette/issues/1101 IC_kwDOBm6k_c5B5fGr eyeseast 25778 2022-04-21T18:15:39Z 2022-04-21T18:15:39Z CONTRIBUTOR

What if you split rendering and streaming into two things:

  • render is a function that returns a response
  • stream is a function that sends chunks, or yields chunks passed to an ASGI send callback

That way current plugins still work, and streaming is purely additive. A stream function could get a cursor or iterator of rows, instead of a list, so it could more efficiently handle large queries.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
register_output_renderer() should support streaming data 749283032  
1105571003 https://github.com/simonw/datasette/issues/1101#issuecomment-1105571003 https://api.github.com/repos/simonw/datasette/issues/1101 IC_kwDOBm6k_c5B5ay7 simonw 9599 2022-04-21T18:10:38Z 2022-04-21T18:10:46Z OWNER

Maybe the simplest design for this is to add an optional can_stream to the contract:

    @hookimpl
    def register_output_renderer(datasette):
        return {
            "extension": "tsv",
            "render": render_tsv,
            "can_render": lambda: True,
            "can_stream": lambda: True
        }

When streaming, a new parameter could be passed to the render function - maybe chunks - which is an iterator/generator over a sequence of chunks of rows.

Or it could use the existing rows parameter but treat that as an iterator?
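A render function built around that hypothetical `chunks` parameter could itself be a generator, yielding one string per chunk so the response can be flushed incrementally. A sketch of what a streaming TSV renderer might look like - the `chunks`-of-row-dicts shape is an assumption, not the actual plugin hook contract:

```python
def render_tsv_stream(columns, chunks):
    # Hypothetical streaming renderer: `chunks` is an iterator of lists of
    # row dicts; each yielded string can be sent to the client as it is made.
    yield "\t".join(columns) + "\n"
    for chunk in chunks:
        for row in chunk:
            yield "\t".join(str(row[col]) for col in columns) + "\n"


chunks = [[{"id": 1, "name": "one"}], [{"id": 2, "name": "two"}]]
body = "".join(render_tsv_stream(["id", "name"], iter(chunks)))
print(body)  # header line followed by one tab-separated row per line
```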

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
register_output_renderer() should support streaming data 749283032  
1105474232 https://github.com/dogsheep/github-to-sqlite/issues/72#issuecomment-1105474232 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/72 IC_kwDODFdgUs5B5DK4 simonw 9599 2022-04-21T17:02:15Z 2022-04-21T17:02:15Z MEMBER

That's interesting - yeah it looks like the number of pages can be derived from the Link header, which is enough information to show a progress bar, probably using Click just to avoid adding another dependency.

https://docs.github.com/en/rest/guides/traversing-with-pagination
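Deriving the page count is a few lines of parsing against the `rel="last"` entry in that header - a sketch, assuming the GitHub-documented Link header format:

```python
import re


def last_page(link_header):
    # Find the rel="last" URL in a GitHub-style Link header and pull out
    # its page number - enough information to size a progress bar.
    for url, rel in re.findall(r'<([^>]+)>;\s*rel="(\w+)"', link_header):
        if rel == "last":
            match = re.search(r"[?&]page=(\d+)", url)
            if match:
                return int(match.group(1))
    return None


header = (
    '<https://api.github.com/user/repos?page=2>; rel="next", '
    '<https://api.github.com/user/repos?page=34>; rel="last"'
)
print(last_page(header))  # 34
```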

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
feature: display progress bar when downloading multi-page responses 1211283427  
1105464661 https://github.com/simonw/datasette/pull/1574#issuecomment-1105464661 https://api.github.com/repos/simonw/datasette/issues/1574 IC_kwDOBm6k_c5B5A1V dholth 208018 2022-04-21T16:51:24Z 2022-04-21T16:51:24Z NONE

tfw you have more ephemeral storage than upstream bandwidth

FROM python:3.10-slim AS base

RUN apt update && apt -y install zstd

ENV DATASETTE_SECRET 'sosecret'
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -U datasette datasette-pretty-json datasette-graphql

ENV PORT 8080
EXPOSE 8080

FROM base AS pack

COPY . /app
WORKDIR /app

RUN datasette inspect --inspect-file inspect-data.json
RUN zstd --rm *.db

FROM base AS unpack

COPY --from=pack /app /app
WORKDIR /app

CMD ["/bin/bash", "-c", "shopt -s nullglob && zstd --rm -d *.db.zst && datasette serve --host 0.0.0.0 --cors --inspect-file inspect-data.json --metadata metadata.json --create --port $PORT *.db"]
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
introduce new option for datasette package to use a slim base image 1084193403  
1103312860 https://github.com/simonw/datasette/issues/1713#issuecomment-1103312860 https://api.github.com/repos/simonw/datasette/issues/1713 IC_kwDOBm6k_c5Bwzfc fgregg 536941 2022-04-20T00:52:19Z 2022-04-20T00:52:19Z NONE

feels related to #1402

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette feature for publishing snapshots of query results 1203943272  
1101594549 https://github.com/simonw/sqlite-utils/issues/425#issuecomment-1101594549 https://api.github.com/repos/simonw/sqlite-utils/issues/425 IC_kwDOCGYnMM5BqP-1 simonw 9599 2022-04-18T17:36:14Z 2022-04-18T17:36:14Z OWNER

Related:
- #408

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
`sqlite3.NotSupportedError`: deterministic=True requires SQLite 3.8.3 or higher 1203842656  
1100243987 https://github.com/simonw/datasette/pull/1159#issuecomment-1100243987 https://api.github.com/repos/simonw/datasette/issues/1159 IC_kwDOBm6k_c5BlGQT lovasoa 552629 2022-04-15T17:24:43Z 2022-04-15T17:24:43Z NONE

@simonw: do you think this could be merged?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Improve the display of facets information 774332247  
1099540225 https://github.com/simonw/datasette/issues/1713#issuecomment-1099540225 https://api.github.com/repos/simonw/datasette/issues/1713 IC_kwDOBm6k_c5BiacB eyeseast 25778 2022-04-14T19:09:57Z 2022-04-14T19:09:57Z CONTRIBUTOR

I wonder if this overlaps with what I outlined in #1605. You could run something like this:

datasette freeze -d exports/
aws s3 cp exports/ s3://my-export-bucket/$(date)

And maybe that does what you need. Of course, that plugin isn't built yet. But that's the idea.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette feature for publishing snapshots of query results 1203943272  
1099443468 https://github.com/simonw/datasette/issues/1713#issuecomment-1099443468 https://api.github.com/repos/simonw/datasette/issues/1713 IC_kwDOBm6k_c5BiC0M rayvoelker 9308268 2022-04-14T17:26:27Z 2022-04-14T17:26:27Z NONE

What would be an awesome feature as a plugin would be to be able to save a query (and possibly even results) to a github gist. Being able to share results that way would be super fantastic. Possibly even in Jupyter Notebook format (since github and github gists nicely render those)!

I know there's the handy datasette-saved-queries plugin, but a button that could export stuff out and then even possibly import stuff back in (I'm sort of thinking the way that Google Colab allows you to save to github, and then pull the notebook back in is a really great workflow

https://github.com/cincinnatilibrary/collection-analysis/blob/master/reports/colab_datasette_example.ipynb )

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette feature for publishing snapshots of query results 1203943272  
1098628334 https://github.com/simonw/datasette/issues/1713#issuecomment-1098628334 https://api.github.com/repos/simonw/datasette/issues/1713 IC_kwDOBm6k_c5Be7zu simonw 9599 2022-04-14T01:43:00Z 2022-04-14T01:43:13Z OWNER

Current workaround for fast publishing to S3:

datasette fixtures.db --get /fixtures/facetable.json | \
  s3-credentials put-object my-bucket facetable.json -
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette feature for publishing snapshots of query results 1203943272  
1098548931 https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1098548931 https://api.github.com/repos/simonw/sqlite-utils/issues/421 IC_kwDOCGYnMM5BeobD simonw 9599 2022-04-13T22:41:59Z 2022-04-13T22:41:59Z OWNER

I'm going to close this ticket since it looks like this is a bug in the way the Dockerfile builds Python, but I'm going to ship a fix for that issue I found so the LD_PRELOAD workaround above should work OK with the next release of sqlite-utils. Thanks for the detailed bug report!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Error: near "(": syntax error" when using sqlite-utils indexes CLI 1180427792  
1098548090 https://github.com/simonw/sqlite-utils/issues/424#issuecomment-1098548090 https://api.github.com/repos/simonw/sqlite-utils/issues/424 IC_kwDOCGYnMM5BeoN6 simonw 9599 2022-04-13T22:40:15Z 2022-04-13T22:40:15Z OWNER

New error:

>>> from sqlite_utils import Database
>>> db = Database(memory=True)
>>> db["foo"].create({})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/db.py", line 1465, in create
    self.db.create_table(
  File "/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/db.py", line 885, in create_table
    sql = self.create_table_sql(
  File "/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/db.py", line 771, in create_table_sql
    assert columns, "Tables must have at least one column"
AssertionError: Tables must have at least one column
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Better error message if you try to create a table with no columns 1200866134  
1098545390 https://github.com/simonw/sqlite-utils/issues/425#issuecomment-1098545390 https://api.github.com/repos/simonw/sqlite-utils/issues/425 IC_kwDOCGYnMM5Benju simonw 9599 2022-04-13T22:34:52Z 2022-04-13T22:34:52Z OWNER

That broke Python 3.7 because it doesn't support deterministic=True even being passed:

function takes at most 3 arguments (4 given)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
`sqlite3.NotSupportedError`: deterministic=True requires SQLite 3.8.3 or higher 1203842656  
1098537000 https://github.com/simonw/sqlite-utils/issues/425#issuecomment-1098537000 https://api.github.com/repos/simonw/sqlite-utils/issues/425 IC_kwDOCGYnMM5Belgo simonw 9599 2022-04-13T22:18:22Z 2022-04-13T22:18:22Z OWNER

I figured out a workaround in https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1098535531

The current register(fn) method looks like this: https://github.com/simonw/sqlite-utils/blob/95522ad919f96eb6cc8cd3cd30389b534680c717/sqlite_utils/db.py#L389-L403

This alternative implementation worked in the environment where that failed:

        def register(fn):
            name = fn.__name__
            arity = len(inspect.signature(fn).parameters)
            if not replace and (name, arity) in self._registered_functions:
                return fn
            kwargs = {}
            done = False
            if deterministic:
                # Try this, but fall back if sqlite3.NotSupportedError
                try:
                    self.conn.create_function(name, arity, fn, **dict(kwargs, deterministic=True))
                    done = True
                except sqlite3.NotSupportedError:
                    pass
            if not done:
                self.conn.create_function(name, arity, fn, **kwargs)
            self._registered_functions.add((name, arity))
            return fn
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
`sqlite3.NotSupportedError`: deterministic=True requires SQLite 3.8.3 or higher 1203842656  
1098535531 https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1098535531 https://api.github.com/repos/simonw/sqlite-utils/issues/421 IC_kwDOCGYnMM5BelJr simonw 9599 2022-04-13T22:15:48Z 2022-04-13T22:15:48Z OWNER

Trying this alternative implementation of the register() method:

        def register(fn):
            name = fn.__name__
            arity = len(inspect.signature(fn).parameters)
            if not replace and (name, arity) in self._registered_functions:
                return fn
            kwargs = {}
            done = False
            if deterministic:
                # Try this, but fall back if sqlite3.NotSupportedError
                try:
                    self.conn.create_function(name, arity, fn, **dict(kwargs, deterministic=True))
                    done = True
                except sqlite3.NotSupportedError:
                    pass
            if not done:
                self.conn.create_function(name, arity, fn, **kwargs)
            self._registered_functions.add((name, arity))
            return fn

With that fix, the following worked!

LD_PRELOAD=./build/sqlite-autoconf-3360000/.libs/libsqlite3.so sqlite-utils indexes /tmp/global.db --table
table      index_name                    seqno    cid  name       desc  coll      key
---------  --------------------------  -------  -----  -------  ------  ------  -----
countries  idx_countries_country_name        0      1  country       0  BINARY      1
countries  idx_countries_country_name        1      2  name          0  BINARY      1
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Error: near "(": syntax error" when using sqlite-utils indexes CLI 1180427792  
1098532220 https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1098532220 https://api.github.com/repos/simonw/sqlite-utils/issues/421 IC_kwDOCGYnMM5BekV8 simonw 9599 2022-04-13T22:09:52Z 2022-04-13T22:09:52Z OWNER

That error is weird - it's not supposed to happen according to this code here: https://github.com/simonw/sqlite-utils/blob/95522ad919f96eb6cc8cd3cd30389b534680c717/sqlite_utils/db.py#L389-L400

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Error: near "(": syntax error" when using sqlite-utils indexes CLI 1180427792  
1098531354 https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1098531354 https://api.github.com/repos/simonw/sqlite-utils/issues/421 IC_kwDOCGYnMM5BekIa simonw 9599 2022-04-13T22:08:20Z 2022-04-13T22:08:20Z OWNER

OK I figured out what's going on here. First I added an extra print(sql) statement to the indexes command to see what SQL it was running:

(app-root) sqlite-utils indexes global.db --table

    select
      sqlite_master.name as "table",
      indexes.name as index_name,
      xinfo.*
    from sqlite_master
      join pragma_index_list(sqlite_master.name) indexes
      join pragma_index_xinfo(index_name) xinfo
    where
      sqlite_master.type = 'table'
     and xinfo.key = 1
Error: near "(": syntax error

This made me suspicious that the SQLite version being used here didn't support joining against the pragma_index_list(...) table-valued functions in that way. So I checked the version:

(app-root) sqlite3
SQLite version 3.36.0 2021-06-18 18:36:39

That version should be fine - it's the one you compiled in the Dockerfile.

Then I checked the version that sqlite-utils itself was using:

(app-root) sqlite-utils memory 'select sqlite_version()'
[{"sqlite_version()": "3.7.17"}]

It's running SQLite 3.7.17!

So the problem here is that the Python in that Docker image is running a very old version of SQLite.
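
The version dependency is easy to reproduce from Python directly, since pragma table-valued functions like `pragma_index_list(...)` only arrived in SQLite 3.16.0 — a small sketch (the table and index names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table countries (country_id integer, name text)")
conn.execute("create unique index idx_countries_name on countries (name)")

# Joining against pragma_index_list(...) as a table-valued function
# requires SQLite 3.16.0+; on 3.7.17 this fails with
# 'near "(": syntax error'.
rows = conn.execute("""
    select sqlite_master.name as "table", indexes.name as index_name
    from sqlite_master
    join pragma_index_list(sqlite_master.name) indexes
    where sqlite_master.type = 'table'
""").fetchall()
print(rows)
```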

I tried using the trick in https://til.simonwillison.net/sqlite/ld-preload as a workaround, and it almost worked:

(app-root) python3 -c 'import sqlite3; print(sqlite3.connect(":memory:").execute("select sqlite_version()").fetchone())'
('3.7.17',)
(app-root) LD_PRELOAD=./build/sqlite-autoconf-3360000/.libs/libsqlite3.so python3 -c 'import sqlite3; print(sqlite3.connect(":memory:").execute("select sqlite_version()").fetchone())'
('3.36.0',)

But when I try to run sqlite-utils like that I get an error:

(app-root) LD_PRELOAD=./build/sqlite-autoconf-3360000/.libs/libsqlite3.so sqlite-utils indexes /tmp/global.db 
...
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/cli.py", line 1624, in query
    db.register_fts4_bm25()
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 412, in register_fts4_bm25
    self.register_function(rank_bm25, deterministic=True)
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 408, in register_function
    register(fn)
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 401, in register
    self.conn.create_function(name, arity, fn, **kwargs)
sqlite3.NotSupportedError: deterministic=True requires SQLite 3.8.3 or higher
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Error: near "(": syntax error" when using sqlite-utils indexes CLI 1180427792  
1098295517 https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1098295517 https://api.github.com/repos/simonw/sqlite-utils/issues/421 IC_kwDOCGYnMM5Bdqjd simonw 9599 2022-04-13T17:16:20Z 2022-04-13T17:16:20Z OWNER

Aha! I was able to replicate the bug using your Dockerfile - thanks very much for providing that.

(app-root) sqlite-utils indexes global.db --table
Error: near "(": syntax error

(That was before I even ran the extract command.)

To build your Dockerfile I copied it into an empty folder and ran the following:

wget https://www.sqlite.org/2021/sqlite-autoconf-3360000.tar.gz
docker build . -t centos-sqlite-utils
docker run -it centos-sqlite-utils /bin/bash

This gave me a shell in which I could replicate the bug.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Error: near "(": syntax error" when using sqlite-utils indexes CLI 1180427792  
1098288158 https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1098288158 https://api.github.com/repos/simonw/sqlite-utils/issues/421 IC_kwDOCGYnMM5Bdowe simonw 9599 2022-04-13T17:07:53Z 2022-04-13T17:07:53Z OWNER

I can't replicate the bug I'm afraid:

% wget "https://github.com/wri/global-power-plant-database/blob/232a6666/output_database/global_power_plant_database.csv?raw=true"               
...
2022-04-13 10:06:29 (8.97 MB/s) - ‘global_power_plant_database.csv?raw=true’ saved [8856038/8856038]
% sqlite-utils insert global.db power_plants \                      
    'global_power_plant_database.csv?raw=true' --csv
  [------------------------------------]    0%
  [###################################-]   99%  00:00:00%
% sqlite-utils indexes global.db --table                            
table    index_name    seqno    cid    name    desc    coll    key
-------  ------------  -------  -----  ------  ------  ------  -----
% sqlite-utils extract global.db power_plants country country_long \
    --table countries \
    --fk-column country_id \
    --rename country_long name
% sqlite-utils indexes global.db --table                            
table      index_name                    seqno    cid  name       desc  coll      key
---------  --------------------------  -------  -----  -------  ------  ------  -----
countries  idx_countries_country_name        0      1  country       0  BINARY      1
countries  idx_countries_country_name        1      2  name          0  BINARY      1
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Error: near "(": syntax error" when using sqlite-utils indexes CLI 1180427792  
1097115034 https://github.com/simonw/datasette/issues/1712#issuecomment-1097115034 https://api.github.com/repos/simonw/datasette/issues/1712 IC_kwDOBm6k_c5BZKWa simonw 9599 2022-04-12T19:12:21Z 2022-04-12T19:12:21Z OWNER

Got a TIL out of this too: https://til.simonwillison.net/spatialite/gunion-to-combine-geometries

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Make "<Binary: 2427344 bytes>" easier to read 1202227104  
1097076622 https://github.com/simonw/datasette/issues/1712#issuecomment-1097076622 https://api.github.com/repos/simonw/datasette/issues/1712 IC_kwDOBm6k_c5BZA-O simonw 9599 2022-04-12T18:42:04Z 2022-04-12T18:42:04Z OWNER

I'm not going to show the tooltip if the formatted number is in bytes.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Make "<Binary: 2427344 bytes>" easier to read 1202227104  
1097068474 https://github.com/simonw/datasette/issues/1712#issuecomment-1097068474 https://api.github.com/repos/simonw/datasette/issues/1712 IC_kwDOBm6k_c5BY--6 simonw 9599 2022-04-12T18:38:18Z 2022-04-12T18:38:18Z OWNER

https://user-images.githubusercontent.com/9599/163030785-9dcc5a21-6a1b-42a7-97de-10e7d2874412.png

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Make "<Binary: 2427344 bytes>" easier to read 1202227104  
1095687566 https://github.com/simonw/datasette/issues/1708#issuecomment-1095687566 https://api.github.com/repos/simonw/datasette/issues/1708 IC_kwDOBm6k_c5BTt2O simonw 9599 2022-04-11T23:24:30Z 2022-04-11T23:24:30Z OWNER

Redesigned template context

Warning: if you use any custom templates with your Datasette instance they are likely to break when you upgrade to 1.0.

The template context has been redesigned to be based on the documented JSON API. This means that the template context can be considered stable going forward, so any custom templates you implement should continue to work when you upgrade Datasette in the future.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1.0a0 release notes 1200649124  
1095675839 https://github.com/simonw/datasette/issues/1708#issuecomment-1095675839 https://api.github.com/repos/simonw/datasette/issues/1708 IC_kwDOBm6k_c5BTq-_ simonw 9599 2022-04-11T23:06:30Z 2022-04-11T23:09:56Z OWNER

Datasette 1.0 alpha 0

This alpha release is the first preview of Datasette 1.0.

Datasette 1.0 marks a significant milestone in the project: it is the point from which various aspects of Datasette can be considered "stable", in that code developed against them should expect not to be broken by future releases in the 1.x series.

This will hold true until the next major version release, Datasette 2.0 - which we hope to hold off releasing for as long as possible.

The following Datasette components should be considered stable after 1.0:

  • The plugin API. Plugins developed against 1.0 should continue to work unmodified throughout the 1.x series.
  • The JSON API. Code written that interacts with Datasette's default JSON web API should continue to work.
  • The template context. If you build custom templates against Datasette your custom pages should continue to work.

Note that stability does not mean these components will stop gaining new features. New plugin hooks, new JSON APIs and new template context variables can be introduced without breaking existing code.

Since this alpha release previews features that will be frozen for 1.0, please test this thoroughly against your existing Datasette projects.

You can install the alpha using:

pip install datasette==1.0a0

JSON API changes

The most significant changes introduced in this new alpha concern Datasette's JSON API.

The default JSON returned by the /database/table.json endpoint has changed. It now returns an object with two keys: rows, a list of objects representing the rows in the table or query, and more, a boolean indicating whether further rows are available beyond those returned.

{
  "rows": [{
    "id": 1,
    "name": "Name 1"
  }, {
    "id": 2,
    "name": "Name 2"
  }],
  "more": false
}
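
A client consuming this shape checks the more boolean to decide whether another page exists — a quick sketch parsing the example payload above (how the next page would actually be requested is still an open question):

```python
import json

payload = json.loads("""
{
  "rows": [{"id": 1, "name": "Name 1"}, {"id": 2, "name": "Name 2"}],
  "more": false
}
""")

names = [row["name"] for row in payload["rows"]]
has_more = payload["more"]
print(names, has_more)
```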

[ Initially I thought about going with next_url, which would be null if you have reached the last page of records. Maybe that would be better? But since next_url cannot be provided on query pages, should this be part of the default format at all? ]

Use ?_extra= to retrieve extra fields

The default format can be expanded using one or more ?_extra= parameters. This takes names of extra keys you would like to include. These can be comma-separated or ?_extra= can be applied multiple times.

For example:

/database/table.json?_extra=total

This adds a "total": 124 field to the returned JSON.
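
Assuming the repeated-parameter and comma-separated forms are interchangeable, the repeated style can be built with the standard library (a sketch; the endpoint path is illustrative):

```python
from urllib.parse import urlencode

# Build ?_extra=total&_extra=facets by passing a list of key/value pairs.
extras = ["total", "facets"]
query = urlencode([("_extra", e) for e in extras])
url = "/database/table.json?" + query
print(url)
```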

[ Question: if you do ?_facet=foo then do you still need to do ?_extra=facets - I think not? ]

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1.0a0 release notes 1200649124  
1095673947 https://github.com/simonw/datasette/issues/1705#issuecomment-1095673947 https://api.github.com/repos/simonw/datasette/issues/1705 IC_kwDOBm6k_c5BTqhb simonw 9599 2022-04-11T23:03:49Z 2022-04-11T23:03:49Z OWNER

I'll also encourage testing against both Datasette 0.x and Datasette 1.0 using a GitHub Actions matrix.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
How to upgrade your plugin for 1.0 documentation 1197926598  
1095673670 https://github.com/simonw/datasette/issues/1710#issuecomment-1095673670 https://api.github.com/repos/simonw/datasette/issues/1710 IC_kwDOBm6k_c5BTqdG simonw 9599 2022-04-11T23:03:25Z 2022-04-11T23:03:25Z OWNER

Dupe of:
- #1705

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Guide for plugin authors to upgrade their plugins for 1.0 1200649889  
1095671940 https://github.com/simonw/datasette/issues/1709#issuecomment-1095671940 https://api.github.com/repos/simonw/datasette/issues/1709 IC_kwDOBm6k_c5BTqCE simonw 9599 2022-04-11T23:00:39Z 2022-04-11T23:01:41Z OWNER
  • 262

  • 782

  • 1509

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesigned JSON API with ?_extra= parameters 1200649502  
1095672127 https://github.com/simonw/datasette/issues/1711#issuecomment-1095672127 https://api.github.com/repos/simonw/datasette/issues/1711 IC_kwDOBm6k_c5BTqE_ simonw 9599 2022-04-11T23:00:58Z 2022-04-11T23:00:58Z OWNER
  • 1510

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Template context powered entirely by the JSON API format 1200650491  
1095277937 https://github.com/simonw/datasette/issues/1707#issuecomment-1095277937 https://api.github.com/repos/simonw/datasette/issues/1707 IC_kwDOBm6k_c5BSJ1x simonw 9599 2022-04-11T16:32:31Z 2022-04-11T16:33:00Z OWNER

That's a really interesting idea!

That page is one of the least developed at the moment. There's plenty of room for it to grow new useful features.

I like this suggestion because it feels like a good opportunity to introduce some unobtrusive JavaScript. Could use a details/summary element that uses fetch() to load in the extra data for example.

Could even do something with the <datasette-table> Web Component here... https://github.com/simonw/datasette-table

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[feature] expanded detail page 1200224939  
1094453751 https://github.com/simonw/datasette/issues/1699#issuecomment-1094453751 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BPAn3 eyeseast 25778 2022-04-11T01:32:12Z 2022-04-11T01:32:12Z CONTRIBUTOR

Was looking through old issues and realized a bunch of this got discussed in #1101 (including by me!), so sorry to rehash all this. Happy to help with whatever piece of it I can. Would be very excited to be able to use format plugins with exports.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1094152642 https://github.com/simonw/datasette/issues/1706#issuecomment-1094152642 https://api.github.com/repos/simonw/datasette/issues/1706 IC_kwDOBm6k_c5BN3HC simonw 9599 2022-04-10T01:11:54Z 2022-04-10T01:11:54Z OWNER

This relates to this much larger vision:
- #417

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[feature] immutable mode for a directory, not just individual sqlite file 1198822563  
1094152173 https://github.com/simonw/datasette/issues/1706#issuecomment-1094152173 https://api.github.com/repos/simonw/datasette/issues/1706 IC_kwDOBm6k_c5BN2_t simonw 9599 2022-04-10T01:08:50Z 2022-04-10T01:08:50Z OWNER

This is a good idea - it matches the way datasette . works for mutable database files already.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[feature] immutable mode for a directory, not just individual sqlite file 1198822563  
1093454899 https://github.com/simonw/datasette/pull/1693#issuecomment-1093454899 https://api.github.com/repos/simonw/datasette/issues/1693 IC_kwDOBm6k_c5BLMwz simonw 9599 2022-04-08T23:07:04Z 2022-04-08T23:07:04Z OWNER

Tests failed here due to this issue:
- https://github.com/psf/black/pull/2987

A future Black release should fix that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Bump black from 22.1.0 to 22.3.0 1184850337  
1092850719 https://github.com/simonw/datasette/pull/1703#issuecomment-1092850719 https://api.github.com/repos/simonw/datasette/issues/1703 IC_kwDOBm6k_c5BI5Qf codecov[bot] 22429695 2022-04-08T13:18:04Z 2022-04-08T13:18:04Z NONE

Codecov Report

Merging #1703 (73aabe6) into main (90d1be9) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1703   +/-   ##
=======================================
  Coverage   91.75%   91.75%           
=======================================
  Files          34       34           
  Lines        4573     4573           
=======================================
  Hits         4196     4196           
  Misses        377      377           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 90d1be9...73aabe6. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update beautifulsoup4 requirement from <4.11.0,>=4.8.1 to >=4.8.1,<4.12.0 1197298420  
1092386254 https://github.com/simonw/datasette/issues/1699#issuecomment-1092386254 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHH3O eyeseast 25778 2022-04-08T02:39:25Z 2022-04-08T02:39:25Z CONTRIBUTOR

And just to think this through a little more, here's what stream_geojson might look like:

async def stream_geojson(datasette, columns, rows, database, stream):
    db = datasette.get_database(database)
    for row in rows:
        feature = await row_to_geojson(row, db)
        stream.write(feature + "\n") # just assuming newline mode for now

Alternately, that could be an async generator, like this:

async def stream_geojson(datasette, columns, rows, database):
    db = datasette.get_database(database)
    for row in rows:
        feature = await row_to_geojson(row, db)
        yield feature

Not sure which makes more sense, but I think this pattern would open up a lot of possibilities. If you had your stream_indented_json function, you could do yield from stream_indented_json(rows, 2) and be on your way.
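
The async-generator variant would be consumed with async for — a toy sketch, where stream_features and the row shapes are stand-ins for the proposed stream_geojson, not real Datasette APIs:

```python
import asyncio


async def stream_features(rows):
    # Stand-in for the proposed stream_geojson async generator:
    # yield one GeoJSON-ish feature per input row.
    for row in rows:
        yield {"type": "Feature", "properties": row}


async def main():
    features = []
    async for feature in stream_features([{"id": 1}, {"id": 2}]):
        features.append(feature)
    return features


features = asyncio.run(main())
print(len(features))
```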

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092370880 https://github.com/simonw/datasette/issues/1699#issuecomment-1092370880 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHEHA eyeseast 25778 2022-04-08T02:07:40Z 2022-04-08T02:07:40Z CONTRIBUTOR

So maybe register_output_renderer returns something like this:

@hookimpl
def register_output_renderer(datasette):
    return {
        "extension": "geojson",
        "render": render_geojson,
        "stream": stream_geojson,
        "can_render": can_render_geojson,
    }

And stream gets an iterator, instead of a list of rows, so it can efficiently handle large queries. Maybe it also gets passed a destination stream, or it returns an iterator. I'm not sure what makes more sense. Either way, that might cover both CLI exports and streaming responses.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092361727 https://github.com/simonw/datasette/issues/1699#issuecomment-1092361727 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHB3_ simonw 9599 2022-04-08T01:47:43Z 2022-04-08T01:47:43Z OWNER

A render mode for that plugin hook that writes to a stream is exactly what I have in mind:
- #1062

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092357672 https://github.com/simonw/datasette/issues/1699#issuecomment-1092357672 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHA4o eyeseast 25778 2022-04-08T01:39:40Z 2022-04-08T01:39:40Z CONTRIBUTOR

My best thought on how to differentiate them so far is plugins: if Datasette plugins that provide alternative outputs - like .geojson and .yml and suchlike - also work for the datasette query command that would make a lot of sense to me.

That's my thinking, too. It's really the thing I've been wanting since writing datasette-geojson, since I'm always exporting with datasette --get. The workflow I'm always looking for is something like this:

cd alltheplaces-datasette
datasette query dunkin_in_suffolk -f geojson -o dunkin_in_suffolk.geojson

I think this probably needs either a new plugin hook separate from register_output_renderer or a way to use that without going through the HTTP stack. Or maybe a render mode that writes to a stream instead of a response. Maybe there's a new key in the dictionary that register_output_renderer returns that handles CLI exports.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092321966 https://github.com/simonw/datasette/issues/1699#issuecomment-1092321966 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BG4Ku simonw 9599 2022-04-08T00:20:32Z 2022-04-08T00:20:56Z OWNER

If we do this I'm keen to have it be more than just an alternative to the existing sqlite-utils command - especially since if I add sqlite-utils as a dependency of Datasette in the future that command will be installed as part of pip install datasette anyway.

My best thought on how to differentiate them so far is plugins: if Datasette plugins that provide alternative outputs - like .geojson and .yml and suchlike - also work for the datasette query command that would make a lot of sense to me.

One way that could work: a --fmt geojson option to this command which uses the plugin that was registered for the specified extension.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1087428593 https://github.com/simonw/datasette/issues/1549#issuecomment-1087428593 https://api.github.com/repos/simonw/datasette/issues/1549 IC_kwDOBm6k_c5A0Nfx fgregg 536941 2022-04-04T11:17:13Z 2022-04-04T11:17:13Z NONE

another way to get the behavior of downloading the file is to use the download attribute of the anchor tag

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#attr-download

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign CSV export to improve usability 1077620955  
1086784547 https://github.com/simonw/datasette/issues/1698#issuecomment-1086784547 https://api.github.com/repos/simonw/datasette/issues/1698 IC_kwDOBm6k_c5AxwQj simonw 9599 2022-04-03T06:10:24Z 2022-04-03T06:10:24Z OWNER

Warning added here: https://docs.datasette.io/en/latest/publish.html#publishing-to-google-cloud-run

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add a warning about bots and Cloud Run 1190828163  
1085323192 https://github.com/simonw/datasette/issues/1697#issuecomment-1085323192 https://api.github.com/repos/simonw/datasette/issues/1697 IC_kwDOBm6k_c5AsLe4 simonw 9599 2022-04-01T02:01:51Z 2022-04-01T02:01:51Z OWNER

Huh, turns out Request.fake() wasn't yet documented.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
`Request.fake(..., url_vars={})` 1189113609  
1084216224 https://github.com/simonw/datasette/pull/1574#issuecomment-1084216224 https://api.github.com/repos/simonw/datasette/issues/1574 IC_kwDOBm6k_c5An9Og fs111 33631 2022-03-31T07:45:25Z 2022-03-31T07:45:25Z NONE

@simonw I like that you want to go "slim by default". Do you want another PR for that or should I just wait?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
introduce new option for datasette package to use a slim base image 1084193403  
1083351437 https://github.com/simonw/datasette/issues/1696#issuecomment-1083351437 https://api.github.com/repos/simonw/datasette/issues/1696 IC_kwDOBm6k_c5AkqGN simonw 9599 2022-03-30T16:20:49Z 2022-03-30T16:21:02Z OWNER

Maybe like this:

https://user-images.githubusercontent.com/9599/160883280-c19d5a22-e923-491f-8bf4-1a4f5215d684.png

<h3>283 rows
  where dcode = 3 <span style="color: #aaa; font-size: 0.9em">(Human Related: Other)</span>
</h3>
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Show foreign key label when filtering 1186696202  
1082663746 https://github.com/simonw/datasette/issues/1692#issuecomment-1082663746 https://api.github.com/repos/simonw/datasette/issues/1692 IC_kwDOBm6k_c5AiCNC simonw 9599 2022-03-30T06:14:39Z 2022-03-30T06:14:51Z OWNER

I like your design, though I think it should be "nomodule": True for consistency with the other options.

I think "async": True is worth supporting too.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[plugins][feature request]: Support additional script tag attributes when loading custom JS 1182227211  
1082661795 https://github.com/simonw/datasette/issues/1692#issuecomment-1082661795 https://api.github.com/repos/simonw/datasette/issues/1692 IC_kwDOBm6k_c5AiBuj simonw 9599 2022-03-30T06:11:41Z 2022-03-30T06:11:41Z OWNER

This is a good idea.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[plugins][feature request]: Support additional script tag attributes when loading custom JS 1182227211  
1082617386 https://github.com/simonw/datasette/issues/1695#issuecomment-1082617386 https://api.github.com/repos/simonw/datasette/issues/1695 IC_kwDOBm6k_c5Ah24q simonw 9599 2022-03-30T04:46:18Z 2022-03-30T04:46:18Z OWNER

selected = (column_qs, str(row["value"])) in qs_pairs is wrong.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Option to un-filter facet not shown for `?col__exact=value` 1185868354  
1082617241 https://github.com/simonw/datasette/issues/1695#issuecomment-1082617241 https://api.github.com/repos/simonw/datasette/issues/1695 IC_kwDOBm6k_c5Ah22Z simonw 9599 2022-03-30T04:45:55Z 2022-03-30T04:45:55Z OWNER

Relevant template: https://github.com/simonw/datasette/blob/e73fa72917ca28c152208d62d07a490c81cadf52/datasette/templates/table.html#L168-L172

Populated from here: https://github.com/simonw/datasette/blob/c496f2b663ff0cef908ffaaa68b8cb63111fb5f2/datasette/facets.py#L246-L253

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Option to un-filter facet not shown for `?col__exact=value` 1185868354  
1082476727 https://github.com/simonw/sqlite-utils/issues/420#issuecomment-1082476727 https://api.github.com/repos/simonw/sqlite-utils/issues/420 IC_kwDOCGYnMM5AhUi3 strada 770231 2022-03-29T23:52:38Z 2022-03-29T23:52:38Z NONE

@simonw Thanks for looking into it and documenting the solution!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Document how to use a `--convert` function that runs initialization code first 1178546862  
1081860312 https://github.com/simonw/datasette/pull/1694#issuecomment-1081860312 https://api.github.com/repos/simonw/datasette/issues/1694 IC_kwDOBm6k_c5Ae-DY codecov[bot] 22429695 2022-03-29T13:17:30Z 2022-03-29T13:17:30Z NONE

Codecov Report

Merging #1694 (83ff967) into main (e73fa72) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1694   +/-   ##
=======================================
  Coverage   91.74%   91.74%           
=======================================
  Files          34       34           
  Lines        4565     4565           
=======================================
  Hits         4188     4188           
  Misses        377      377           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e73fa72...83ff967. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update click requirement from <8.1.0,>=7.1.1 to >=7.1.1,<8.2.0 1184850675  
1081079506 https://github.com/simonw/sqlite-utils/issues/421#issuecomment-1081079506 https://api.github.com/repos/simonw/sqlite-utils/issues/421 IC_kwDOCGYnMM5Ab_bS learning4life 24938923 2022-03-28T19:58:55Z 2022-03-28T20:05:57Z NONE

Sure, it is from the documentation example:
Extracting columns into a separate table

wget "https://github.com/wri/global-power-plant-database/blob/232a6666/output_database/global_power_plant_database.csv?raw=true"

sqlite-utils insert global.db power_plants \
    'global_power_plant_database.csv?raw=true' --csv
# Extract those columns:
sqlite-utils extract global.db power_plants country country_long \
    --table countries \
    --fk-column country_id \
    --rename country_long name
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Error: near "(": syntax error" when using sqlite-utils indexes CLI 1180427792  
1081047053 https://github.com/simonw/sqlite-utils/issues/420#issuecomment-1081047053 https://api.github.com/repos/simonw/sqlite-utils/issues/420 IC_kwDOCGYnMM5Ab3gN simonw 9599 2022-03-28T19:22:37Z 2022-03-28T19:22:37Z OWNER

Wrote about this in my weeknotes: https://simonwillison.net/2022/Mar/28/datasette-auth0/#new-features-as-documentation

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Document how to use a `--convert` function that runs initialization code first 1178546862  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
Powered by Datasette · Queries took 125.944ms · About: github-to-sqlite