1,328 rows where repo = 107914493 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app
920884085 MDU6SXNzdWU5MjA4ODQwODU= 1377 Mechanism for plugins to exclude certain paths from CSRF checks simonw 9599 closed 0     3 2021-06-15T00:48:20Z 2021-06-23T22:51:33Z 2021-06-23T22:51:33Z OWNER  

I need this for a plugin I'm building that offers a POST API.

datasette 107914493 issue    
913865304 MDExOlB1bGxSZXF1ZXN0NjYzODM2OTY1 1368 DRAFT: A new plugin hook for dynamic metadata brandonrobertz 2670795 open 0     4 2021-06-07T18:56:00Z 2021-06-23T19:32:01Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1368

Note that this is a WORK IN PROGRESS!

This PR adds the following plugin hook:

get_metadata(
  datasette=self, key=key, database=database, table=table,
  fallback=fallback
)

This gets called when we're building our metadata for the rest of the system to use. Datasette merges whatever the plugins return with any local metadata (from metadata.yml/yaml/json), allowing for a live-editable, dynamic Datasette. A major design consideration: should Datasette perform the metadata merge itself, or should it allow plugins to perform any modifications themselves?

As a security precaution, local metadata is not overwritable by plugin hooks. The workflow for transitioning to live metadata would be to load the plugin with the full metadata.yaml, save, and then remove the parts of the metadata that you want to be able to change from the file.

I have a WIP dynamic configuration plugin here, for reference: https://github.com/next-LI/datasette-live-config/
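
For illustration, a minimal sketch of what a plugin implementing the proposed hook might look like - the metadata fragment, database and table names here are made up:

from datasette import hookimpl

@hookimpl
def get_metadata(datasette, key, database, table, fallback):
    # Return a metadata fragment; Datasette would merge this with the
    # local metadata.yml/yaml/json contents
    return {
        "databases": {
            "content": {
                "tables": {
                    "news": {"title": "A live-edited title"}
                }
            }
        }
    }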

datasette 107914493 pull    
913017577 MDU6SXNzdWU5MTMwMTc1Nzc= 1365 pathlib.Path breaks internal schema eyeseast 25778 closed 0     1 2021-06-07T01:40:37Z 2021-06-21T15:57:39Z 2021-06-21T15:57:39Z CONTRIBUTOR  

Ran into an issue while trying to build a plugin to render GeoJSON. I'm using pytest's tmp_path fixture, which is a pathlib.Path, to get a temporary database path. I was getting a weird error about writes even though I was only doing reads. It turns out the internal database was trying to insert a Path where it wants a string.

My test looked like this:

import pytest
from datasette.app import Datasette

@pytest.mark.asyncio
async def test_render_feature_collection(tmp_path):
    # TABLE_NAME is defined elsewhere in the plugin's test module
    database = tmp_path / "test.db"
    datasette = Datasette([database])

    # this will break with a path
    await datasette.refresh_schemas()

    # build a url
    url = datasette.urls.table(database.stem, TABLE_NAME, format="geojson")

    response = await datasette.client.get(url)
    fc = response.json()

    assert 200 == response.status_code

I only ran into this while running tests, because passing in database paths from the CLI uses strings, but it's a weird error and probably something other people have run into.

The fix is easy enough: Convert the path to a string and everything works. So this:

@pytest.mark.asyncio
async def test_render_feature_collection(tmp_path):
    database = tmp_path / "test.db"
    datasette = Datasette([str(database)])

    # this is fine now
    await datasette.refresh_schemas()

This could (probably, haven't tested) be fixed here by calling str(db.path) or by doing that conversion earlier.

datasette 107914493 issue    
914130834 MDExOlB1bGxSZXF1ZXN0NjY0MDcyMDQ2 1370 Ensure db.path is a string before trying to insert into internal database eyeseast 25778 closed 0     2 2021-06-08T01:16:48Z 2021-06-21T15:57:39Z 2021-06-21T15:57:39Z CONTRIBUTOR simonw/datasette/pulls/1370

Fixes #1365

This is the simplest possible fix, with a test that will fail without it. There are a bunch of places where db.path is getting converted to and from a Path type, so this fix errs on the side of calling str(db.path) right before it's inserted.

datasette 107914493 pull    
925491857 MDU6SXNzdWU5MjU0OTE4NTc= 1383 Improve test coverage for `inspect.py` simonw 9599 open 0     0 2021-06-20T00:22:43Z 2021-06-20T00:22:49Z   OWNER  

https://codecov.io/gh/simonw/datasette/src/main/datasette/inspect.py shows only 36% coverage for that module at the moment.

datasette 107914493 issue    
925406964 MDU6SXNzdWU5MjU0MDY5NjQ= 1382 Datasette with Glitch - is it possible to use CSV with ISO-8859-1 encoding? reichaves 23701514 closed 0     1 2021-06-19T14:37:20Z 2021-06-20T00:21:02Z 2021-06-20T00:20:06Z NONE  

Hi,
I used Remix on Glitch to create a project and uploaded a CSV.
But it's a CSV with ISO-8859-1 encoding (https://en.wikipedia.org/wiki/ISO/IEC_8859-1).
Is it possible for me to change the encoding so the data displays correctly?
Example: https://emphasized-carpal-pillow.glitch.me/data/Emendas
Best

datasette 107914493 issue    
923910375 MDExOlB1bGxSZXF1ZXN0NjcyNjIwMTgw 1378 Update pytest-xdist requirement from <2.3,>=2.2.1 to >=2.2.1,<2.4 dependabot[bot] 49699333 closed 0     1 2021-06-17T13:11:56Z 2021-06-20T00:17:07Z 2021-06-20T00:17:06Z CONTRIBUTOR simonw/datasette/pulls/1378

Updates the requirements on pytest-xdist to permit the latest version.

Changelog

Sourced from pytest-xdist's changelog: 2.3.0 (2021-06-16), 2.2.1 (2021-02-09) and 2.2.0 (2020-12-14). (Changelog details and commit list truncated.)

datasette 107914493 pull    
924748955 MDU6SXNzdWU5MjQ3NDg5NTU= 1380 Serve all db files in a folder stratosgear 193463 open 0     0 2021-06-18T10:03:32Z 2021-06-18T10:03:32Z   NONE  

I tried to get the serve command to serve all the .db files in the /mnt folder, but it seems that the server does not refresh its list of files.

In more detail:

  • Starting datasette as a docker container with:
docker run -p 8001:8001 -v `pwd`:/mnt \
    datasetteproject/datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt
  • Datasette correctly serves all the *.db files found in the /mnt folder
  • When the server is running, if I copy a new file into the $PWD folder, Datasette does not seem to see the new file, forcing me to restart Docker.

Is there an option/setting that I overlooked, or is this functionality missing?

BTW, the --reload setting, although at first glance it looks like what you need, does not seem to do anything about picking up new *.db files.

Thanks!

datasette 107914493 issue    
268176505 MDU6SXNzdWUyNjgxNzY1MDU= 34 Support CSV export with a .csv extension simonw 9599 closed 0     1 2017-10-24T20:34:43Z 2021-06-17T18:14:48Z 2018-05-28T20:45:34Z OWNER  

Maybe do this using streaming with multiple pagination SQL queries so we can support arbitrarily large exports.

How would this work against a view which doesn’t have an obvious efficient pagination mechanism? Maybe limit views to up to 1000 exported records?

Relates to #5
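
A sketch of that approach - repeated keyset-paginated queries keyed on rowid, so memory use stays flat however large the export (illustrative only, not Datasette's actual implementation):

import csv
import sqlite3
import sys

def stream_table_as_csv(db_path, table, page_size=1000):
    conn = sqlite3.connect(db_path)
    writer = csv.writer(sys.stdout)
    # write the header row
    cursor = conn.execute(f"select rowid, * from [{table}] limit 0")
    writer.writerow([col[0] for col in cursor.description][1:])
    last_rowid = 0
    while True:
        # each page is a separate query, so nothing large is held in memory
        rows = conn.execute(
            f"select rowid, * from [{table}] where rowid > ? order by rowid limit ?",
            (last_rowid, page_size),
        ).fetchall()
        if not rows:
            break
        for row in rows:
            writer.writerow(row[1:])
        last_rowid = rows[-1][0]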

datasette 107914493 issue    
459882902 MDU6SXNzdWU0NTk4ODI5MDI= 526 Stream all results for arbitrary SQL and canned queries matej-fr 50578294 open 0     5 2019-06-24T13:09:45Z 2021-06-17T18:14:25Z   NONE  

I think that there is a difficulty with canned queries.

When I want to stream all results of a canned query TwoDays I get only the first 1,000 records.

Example:
http://myserver/history_sample/two_days.csv?_stream=on

returns only the first 1,000 records.

If I do the same with the whole database i.e.
http://myserver/history_sample/database.csv?_stream=on

I get correctly all records.

Any ideas?

datasette 107914493 issue    
323681589 MDU6SXNzdWUzMjM2ODE1ODk= 266 Export to CSV simonw 9599 closed 0     27 2018-05-16T15:50:24Z 2021-06-17T18:14:24Z 2018-06-18T06:05:25Z OWNER  

Datasette needs to be able to export data to CSV.

datasette 107914493 issue    
333000163 MDU6SXNzdWUzMzMwMDAxNjM= 312 HTML, CSV and JSON views should support ?_col=&_col= simonw 9599 closed 0     1 2018-06-16T16:53:35Z 2021-06-17T18:14:24Z 2018-06-16T17:00:12Z OWNER  

To support whitelisting columns to display.

datasette 107914493 issue    
335141434 MDU6SXNzdWUzMzUxNDE0MzQ= 326 CSV should respect --cors and return cors headers simonw 9599 closed 0     1 2018-06-24T00:44:07Z 2021-06-17T18:14:24Z 2018-06-24T00:59:45Z OWNER  

Otherwise tools like Vega can't load data via CSV.

datasette 107914493 issue    
395236066 MDU6SXNzdWUzOTUyMzYwNjY= 393 CSV export in "Advanced export" pane doesn't respect query ltrgoddard 1727065 closed 0     6 2019-01-02T12:39:41Z 2021-06-17T18:14:24Z 2019-01-03T02:44:10Z NONE  

It looks like there's an inconsistency when exporting to CSV via the web interface. Say I'm looking at songs released in 1989 in the classic-rock/classic-rock-song-list table from the FiveThirtyEight data. The JSON and CSV export links at the top of the page both give me filtered data, using Release+Year__exact=1989 in the URL. In the Advanced export tab, though, the CSV option gives me the whole data set, while the JSON options preserve the query.

It may be that this is intended behaviour related to the streaming CSV stuff discussed here, but if that's the case then I think it should be a little clearer.

datasette 107914493 issue    
725184645 MDU6SXNzdWU3MjUxODQ2NDU= 1034 Better way of representing binary data in .csv output simonw 9599 closed 0   0.51 6026070 19 2020-10-20T04:28:58Z 2021-06-17T18:13:21Z 2020-10-29T22:47:46Z OWNER  

I just noticed this: https://latest.datasette.io/fixtures/binary_data.csv

rowid,data
1,b'\x15\x1c\x02\xc7\xad\x05\xfe'
2,b'\x15\x1c\x03\xc7\xad\x05\xfe'

There's no good way to represent binary data in a CSV file, but this seems like one of the more-bad options.
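
For comparison, one of the alternatives that was considered (and later struck out in #1063 in favour of linking to .blob downloads) is base64-encoding the value:

import base64

value = b"\x15\x1c\x02\xc7\xad\x05\xfe"

# base64 is ASCII-safe and round-trippable, unlike the repr() shown above
encoded = base64.b64encode(value).decode("ascii")
print(encoded)  # FRwCx60F/g==
assert base64.b64decode(encoded) == value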

datasette 107914493 issue    
732674148 MDU6SXNzdWU3MzI2NzQxNDg= 1062 Refactor .csv to be an output renderer - and teach register_output_renderer to stream all rows simonw 9599 open 0   Datasette 1.0 3268330 2 2020-10-29T21:25:02Z 2021-06-17T18:13:21Z   OWNER  

This can drive the upgrade of the register_output_renderer hook to be able to handle streaming all rows in a large query.

datasette 107914493 issue    
503190241 MDU6SXNzdWU1MDMxOTAyNDE= 584 Codec error in some CSV exports simonw 9599 closed 0     2 2019-10-07T01:15:34Z 2021-06-17T18:13:20Z 2019-10-18T05:23:16Z OWNER  

Got this exploring my Swarm checkins:

/swarm/stickers.csv?stickerType=messageOnly&_size=max

datasette 107914493 issue    
508100844 MDU6SXNzdWU1MDgxMDA4NDQ= 598 Character encoding bug with CSV export JoeGermuska 46313 closed 0     1 2019-10-16T21:09:30Z 2021-06-17T18:13:20Z 2019-10-18T22:52:21Z NONE  

I was just poking around, and at this URL, I encountered this error:

'latin-1' codec can't encode character '\u2019' in position 27: ordinal not in range(256)
datasette 107914493 issue    
516748849 MDU6SXNzdWU1MTY3NDg4NDk= 612 CSV export is broken for tables with null foreign keys simonw 9599 closed 0     2 2019-11-02T22:52:47Z 2021-06-17T18:13:20Z 2019-11-02T23:12:53Z OWNER  

Following on from #406 - this CSV export appears to be broken:

https://14da705.datasette.io/fixtures/foreign_key_references.csv?_labels=on&_size=max

pk,foreign_key_with_label,foreign_key_with_label_label,foreign_key_with_no_label,foreign_key_with_no_label_label
1,1,hello,1,1
2,,

That second row should have 5 values, but it only has 4.

datasette 107914493 issue    
910088936 MDU6SXNzdWU5MTAwODg5MzY= 1355 datasette --get should efficiently handle streaming CSV simonw 9599 open 0     1 2021-06-03T04:40:40Z 2021-06-17T18:12:33Z   OWNER  

It would be great if you could use datasette --get to run queries that return streaming CSV data without running out of RAM.

Current implementation looks like it loads the entire result into memory first: https://github.com/simonw/datasette/blob/f78ebdc04537a6102316d6dbbf6c887565806078/datasette/cli.py#L546-L552
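
A sketch of the memory-flat behaviour being asked for, shown here with httpx against a running Datasette instance (the URL is illustrative):

import sys
import httpx

# iterate over the response body in chunks rather than buffering it all
with httpx.stream(
    "GET", "http://localhost:8001/covid/ny_times_us_counties.csv?_stream=on"
) as response:
    for chunk in response.iter_bytes():
        sys.stdout.buffer.write(chunk)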

datasette 107914493 issue    
775666296 MDU6SXNzdWU3NzU2NjYyOTY= 1160 "datasette insert" command and plugin hook simonw 9599 open 0     23 2020-12-29T02:37:03Z 2021-06-17T18:12:32Z   OWNER  

Tools for loading data into Datasette currently mostly exist as separate utilities - yaml-to-sqlite and csvs-to-sqlite and suchlike.

Bringing these into Datasette could have some interesting properties:

  • A datasette insert command could be extended with plugins to handle more formats
  • Any format that can be inserted on the command-line could also be inserted using a web UI or web API - which would benefit from new format plugin hooks
  • If Datasette ever grows beyond SQLite (see #670) a built-in import mechanism could work for those other databases as well - without me needing to write yaml-to-postgresql and suchlike
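
A purely speculative sketch of what a format plugin hook for datasette insert might look like - the hook name and signature are invented here, nothing like this exists yet:

import json

from datasette import hookimpl

@hookimpl
def insert_format(format_name):
    # hypothetical hook: return a parser for formats this plugin understands
    if format_name != "geojson":
        return None

    def parse(fp):
        # yield one flat dict per row to be inserted
        for feature in json.load(fp)["features"]:
            yield {**feature["properties"], "geometry": json.dumps(feature["geometry"])}

    return parse
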
datasette 107914493 issue    
776128269 MDU6SXNzdWU3NzYxMjgyNjk= 1162 First working version of "datasette insert data.db file.csv" simonw 9599 open 0     0 2020-12-29T23:20:11Z 2021-06-17T18:12:32Z   OWNER  

Refs #1160

datasette 107914493 issue    
776128565 MDU6SXNzdWU3NzYxMjg1NjU= 1163 "datasette insert data.db url-to-csv" simonw 9599 open 0     1 2020-12-29T23:21:21Z 2021-06-17T18:12:32Z   OWNER  

Refs #1160 - get filesystem imports working first for #1162, then add import-from-URL.

datasette 107914493 issue    
906385991 MDU6SXNzdWU5MDYzODU5OTE= 1349 CSV ?_stream=on redundantly calculates facets for every page simonw 9599 closed 0     9 2021-05-29T06:11:23Z 2021-06-17T18:12:32Z 2021-06-01T15:52:53Z OWNER  

I'm trying to figure out why a full CSV export from https://covid-19.datasettes.com/covid/ny_times_us_counties runs unbearably slowly.

It's because the streaming endpoint works by scrolling through every page, and it turns out every page calculates facets and suggested facets!

datasette 107914493 issue    
906993731 MDU6SXNzdWU5MDY5OTM3MzE= 1351 Get `?_trace=1` working with CSV and streaming CSVs simonw 9599 closed 0     1 2021-05-31T03:02:15Z 2021-06-17T18:12:32Z 2021-06-01T15:50:09Z OWNER  

I think it's worth getting ?_trace=1 to work with streaming CSV - this would have helped me spot this issue a long time ago.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1349#issuecomment-851133125_

datasette 107914493 issue    
736365306 MDU6SXNzdWU3MzYzNjUzMDY= 1083 Advanced CSV export for arbitrary queries simonw 9599 open 0     2 2020-11-04T19:23:05Z 2021-06-17T18:12:31Z   OWNER  

There's no link to download the CSV file - the table page has that as an advanced export option, but this is missing from the query page.

datasette 107914493 issue    
743359646 MDU6SXNzdWU3NDMzNTk2NDY= 1096 TSV should be a default export option simonw 9599 open 0     1 2020-11-15T22:24:02Z 2021-06-17T18:12:31Z   OWNER  

Refs #1095

datasette 107914493 issue    
759695780 MDU6SXNzdWU3NTk2OTU3ODA= 1133 Option to omit header row in CSV export simonw 9599 closed 0     2 2020-12-08T18:54:46Z 2021-06-17T18:12:31Z 2020-12-10T23:28:51Z OWNER  

?_header=off - for symmetry with existing option ?_nl=on.

datasette 107914493 issue    
763361458 MDU6SXNzdWU3NjMzNjE0NTg= 1142 "Stream all rows" is not at all obvious simonw 9599 open 0     9 2020-12-12T06:24:57Z 2021-06-17T18:12:31Z   OWNER  

Got a question about how to download all rows - the current option isn't at all clear.

https://user-images.githubusercontent.com/9599/101977057-ac660b00-3bff-11eb-88f4-c93ffd03d3e0.png

datasette 107914493 issue    
732685643 MDU6SXNzdWU3MzI2ODU2NDM= 1063 .csv should link to .blob downloads simonw 9599 closed 0   0.51 6026070 3 2020-10-29T21:45:58Z 2021-06-17T18:12:30Z 2020-10-29T22:47:45Z OWNER  
  • Update .csv output to link to these things (and get that xfail test to pass)
  • ~~Add a .csv?_blob_base64=1 argument that causes them to be output in base64 in the CSV~~

Moving the CSV work to a separate ticket.
_Originally posted by @simonw in https://github.com/simonw/datasette/pull/1061#issuecomment-719042601_

datasette 107914493 issue    
924203783 MDU6SXNzdWU5MjQyMDM3ODM= 1379 Idea: ?_end=1 option for streaming CSV responses simonw 9599 open 0     0 2021-06-17T18:11:21Z 2021-06-17T18:11:30Z   OWNER  

As discussed in this thread: https://twitter.com/simonw/status/1405554676993433605 - one of the disadvantages of Datasette's streaming CSV feature is that it's hard to tell if you got the whole file or if the connection ended early - or if an error occurred.

Idea: offer an optional ?_end=1 parameter which, if enabled, adds a single row to the end of the CSV file that looks like this:

END,,,,,,,,,

For however many columns the CSV file usually has.
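
A sketch of how a client could use that marker to detect truncated downloads:

import csv

def read_rows_checking_end_marker(lines):
    # a trailing row whose first cell is "END" (rest empty) marks completion
    rows = list(csv.reader(lines))
    complete = bool(rows) and rows[-1][:1] == ["END"] and not any(rows[-1][1:])
    if complete:
        rows = rows[:-1]
    return rows, complete

rows, complete = read_rows_checking_end_marker(["id,name", "1,hello", "END,"])
assert complete and rows == [["id", "name"], ["1", "hello"]]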

datasette 107914493 issue    
756876238 MDExOlB1bGxSZXF1ZXN0NTMyMzQ4OTE5 1130 Fix footer not sticking to bottom in short pages abdusco 3243482 open 0     4 2020-12-04T07:29:01Z 2021-06-15T13:27:48Z   CONTRIBUTOR simonw/datasette/pulls/1130 datasette 107914493 pull    
919508498 MDU6SXNzdWU5MTk1MDg0OTg= 1375 JSON export dumps JSON fields as TEXT frafra 4068 closed 0     2 2021-06-12T09:45:08Z 2021-06-14T09:41:59Z 2021-06-13T15:37:58Z NONE  

Hi!
When a user tries to export data as JSON, I would expect to see the value of JSON columns represented as JSON instead of being rendered as a string. What do you think?

datasette 107914493 issue    
919822817 MDU6SXNzdWU5MTk4MjI4MTc= 1376 Official Datasette Docker image should use SQLite >= 3.31.0 (for generated columns) jcgregorio 1726460 open 0     3 2021-06-13T15:25:51Z 2021-06-13T15:39:37Z   NONE  

Trying to run datasette via the Docker container doesn't seem to work:

$ docker run -p 8001:8001 -v `pwd`:/mnt     datasetteproject/datasette     datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db
Traceback (most recent call last):
  File "/usr/local/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 544, in serve
    asyncio.get_event_loop().run_until_complete(check_databases(ds))
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 584, in check_databases
    await database.execute_fn(check_connection)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 155, in execute_fn
    return await asyncio.get_event_loop().run_in_executor(
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 153, in in_thread
    return fn(conn)
  File "/usr/local/lib/python3.9/site-packages/datasette/utils/__init__.py", line 892, in check_connection
    for r in conn.execute(
sqlite3.DatabaseError: malformed database schema (generated_columns) - near "AS": syntax error

I have confirmed that the downloaded fixtures.db database is fine:

[skia-public] jcgregorio@jcgregorio840 ~/Downloads 
$ sqlite3 fixtures.db 
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> pragma integrity_check;
ok
sqlite> 
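
One caveat when checking versions like this: in Python, sqlite3.version reports the version of the sqlite3 (pysqlite) module itself, not the SQLite library. The library version - the one that matters for generated-column support (SQLite >= 3.31.0) - is sqlite3.sqlite_version:

import sqlite3

print(sqlite3.version)         # pysqlite module version, e.g. 2.6.0
print(sqlite3.sqlite_version)  # SQLite library version, e.g. 3.34.1
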
datasette 107914493 issue    
916183914 MDExOlB1bGxSZXF1ZXN0NjY1ODkyMzEz 1373 Update trustme requirement from <0.8,>=0.7 to >=0.7,<0.9 dependabot[bot] 49699333 closed 0     1 2021-06-09T13:09:44Z 2021-06-13T15:38:47Z 2021-06-13T15:38:47Z CONTRIBUTOR simonw/datasette/pulls/1373

Updates the requirements on trustme to permit the latest version.

datasette 107914493 pull    
918730335 MDExOlB1bGxSZXF1ZXN0NjY4MTI5NDQx 1374 Bump black from 21.5b2 to 21.6b0 dependabot[bot] 49699333 closed 0     1 2021-06-11T13:07:39Z 2021-06-13T15:33:23Z 2021-06-13T15:33:22Z CONTRIBUTOR simonw/datasette/pulls/1374

Bumps black from 21.5b2 to 21.6b0.

Release notes

Sourced from black's releases.

21.6b0

Black

  • Fix failure caused by fmt: skip and indentation (#2281)
  • Account for += assignment when deciding whether to split string (#2312)
  • Correct max string length calculation when there are string operators (#2292)
  • Fixed option usage when using the --code flag (#2259)
  • Do not call uvloop.install() when Black is used as a library (#2303)
  • Added --required-version option to require a specific version to be running (#2300)
  • Fix incorrect custom breakpoint indices when string group contains fake f-strings (#2311)
  • Fix regression where R prefixes would be lowercased for docstrings (#2285)
  • Fix handling of named escapes (\N{...}) when --experimental-string-processing is used (#2319)
datasette 107914493 pull    
849220154 MDU6SXNzdWU4NDkyMjAxNTQ= 1286 Better default display of arrays of items mroswell 192568 open 0     5 2021-04-02T13:31:40Z 2021-06-12T12:36:15Z   CONTRIBUTOR  

Would be great to have template filters that convert array fields to bullets and/or delimited lists upon table display:

|to_bullets
|to_comma_delimited
|to_semicolon_delimited

or maybe:

|join_array("bullet")
|join_array("bullet","square")
|join_array(";")
|join_array(",")

Keeping in mind that bullets show up in HTML as <li> while other delimiting characters appear after the value.

Of course, the fields themselves would remain as facetable arrays.
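
A sketch of how a plugin could register one of these today via the prepare_jinja2_environment hook (the join_array filter itself is hypothetical):

import json

from datasette import hookimpl

@hookimpl
def prepare_jinja2_environment(env):
    def join_array(value, delimiter=", "):
        # render a JSON array column as a delimited string; pass through
        # anything that isn't a JSON list
        try:
            items = json.loads(value)
        except (TypeError, ValueError):
            return value
        if not isinstance(items, list):
            return value
        return delimiter.join(str(item) for item in items)

    env.filters["join_array"] = join_array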

datasette 107914493 issue    
642388564 MDU6SXNzdWU2NDIzODg1NjQ= 858 publish heroku does not work on Windows 10 simonlau 870912 open 0     7 2020-06-20T14:40:28Z 2021-06-10T17:44:09Z   NONE  

When executing "datasette publish heroku schools.db" on Windows 10, I get the following error

  File "c:\users\dell\.virtualenvs\sec-schools-jn-cwk8z\lib\site-packages\datasette\publish\heroku.py", line 54, in heroku
    line.split()[0] for line in check_output(["heroku", "plugins"]).splitlines()
  File "c:\python38\lib\subprocess.py", line 411, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "c:\python38\lib\subprocess.py", line 489, in run
    with Popen(*popenargs, **kwargs) as process:
  File "c:\python38\lib\subprocess.py", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "c:\python38\lib\subprocess.py", line 1307, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

Changing https://github.com/simonw/datasette/blob/55a6ffb93c57680e71a070416baae1129a0243b8/datasette/publish/heroku.py#L54

to

line.split()[0] for line in check_output(["heroku", "plugins"], shell=True).splitlines()

as well as the other check_output() and call() within the same file leads me to another recursive error about temp files

datasette 107914493 issue    
915455228 MDU6SXNzdWU5MTU0NTUyMjg= 1371 Menu plugin hooks should include the request simonw 9599 closed 0     1 2021-06-08T20:23:35Z 2021-06-10T04:46:01Z 2021-06-10T04:46:01Z OWNER  

https://docs.datasette.io/en/stable/plugin_hooks.html#menu-links-datasette-actor

  • menu_links(datasette, actor)
  • table_actions(datasette, actor, database, table)
  • database_actions(datasette, actor, database)

All three of these should optionally also accept the request object. This would allow them to take into account additional cookies, Authorization headers or the current request URL (including the domain/subdomain) - or even access request.scope for extra context that might have been passed down from ASGI middleware.
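
A sketch of what the first hook might look like with an optional request argument (the /-/dashboard link is made up):

from datasette import hookimpl

@hookimpl
def menu_links(datasette, actor, request):
    # the request lets the link vary by the current URL
    if request is not None and request.path.startswith("/reports"):
        return [{"href": datasette.urls.path("/-/dashboard"), "label": "Dashboard"}]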

datasette 107914493 issue    
915488244 MDU6SXNzdWU5MTU0ODgyNDQ= 1372 Add section to "writing plugins" about security, e.g. avoiding XSS simonw 9599 open 0     0 2021-06-08T20:49:33Z 2021-06-08T20:49:46Z   OWNER  

https://docs.datasette.io/en/stable/writing_plugins.html should have tips on writing secure plugins.

datasette 107914493 issue    
913900374 MDU6SXNzdWU5MTM5MDAzNzQ= 1369 Don't show foreign key IDs twice if no label simonw 9599 open 0     1 2021-06-07T19:47:02Z 2021-06-07T19:47:24Z   OWNER  

datasette 107914493 issue    
913823889 MDU6SXNzdWU5MTM4MjM4ODk= 1367 Navigation menu display bug simonw 9599 closed 0     1 2021-06-07T18:18:08Z 2021-06-07T18:24:19Z 2021-06-07T18:24:19Z OWNER   datasette 107914493 issue    
913809802 MDU6SXNzdWU5MTM4MDk4MDI= 1366 Get rid of this `restore_working_directory` hack entirely simonw 9599 open 0     2 2021-06-07T18:01:21Z 2021-06-07T18:03:03Z   OWNER  

That seems to have fixed it. I'd love to get rid of this restore_working_directory hack entirely.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1361#issuecomment-855308811_

datasette 107914493 issue    
912959264 MDU6SXNzdWU5MTI5NTkyNjQ= 1364 Don't truncate columns on the list of databases simonw 9599 closed 0     0 2021-06-06T22:01:56Z 2021-06-06T22:07:50Z 2021-06-06T22:07:50Z OWNER  

https://covid-19.datasettes.com/covid currently truncates at 9 database columns:

https://user-images.githubusercontent.com/9599/120941536-11467d80-c6d8-11eb-970a-ce469623f92c.png

Django SQL Dashboard showed me that this is a bad idea - having the full list of columns is actually really useful documentation for crafting custom SQL queries.

datasette 107914493 issue    
912864936 MDU6SXNzdWU5MTI4NjQ5MzY= 1362 Consider using CSP to protect against future XSS simonw 9599 open 0     12 2021-06-06T15:32:20Z 2021-06-06T17:07:49Z   OWNER  

The XSS in #1360 would have been a lot less damaging if Datasette used CSP to protect against such vulnerabilities: https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
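
Until then, a plugin can already bolt a CSP header on itself; a minimal sketch using the asgi_wrapper hook (the policy value is illustrative):

from datasette import hookimpl

@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def add_csp(scope, receive, send):
            async def wrapped_send(event):
                if event["type"] == "http.response.start":
                    # append a Content-Security-Policy header to every response
                    event = dict(event)
                    event["headers"] = list(event.get("headers", [])) + [
                        (b"content-security-policy", b"default-src 'self'")
                    ]
                await send(event)

            await app(scope, receive, wrapped_send)

        return add_csp

    return wrap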

datasette 107914493 issue    
325958506 MDU6SXNzdWUzMjU5NTg1MDY= 283 Support cross-database joins simonw 9599 closed 0     26 2018-05-24T04:18:39Z 2021-06-06T09:40:18Z 2021-02-18T22:16:46Z OWNER  

SQLite has the ability to attach multiple databases to a single connection and then run joins across multiple databases.

Since Datasette supports more than one database, this would make a pretty neat feature.
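
The underlying SQLite capability looks like this (database filenames and table names are illustrative):

import sqlite3

conn = sqlite3.connect("first.db")
conn.execute("ATTACH DATABASE 'second.db' AS second")

# a join spanning both attached databases
rows = conn.execute(
    "select a.id, b.label from items a join second.labels b on b.item_id = a.id"
).fetchall()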

datasette 107914493 issue    
912485040 MDU6SXNzdWU5MTI0ODUwNDA= 1361 Intermittent CI failure: restore_working_directory FileNotFoundError simonw 9599 closed 0     4 2021-06-05T22:48:13Z 2021-06-05T23:16:24Z 2021-06-05T23:16:24Z OWNER  

e.g. in https://github.com/simonw/datasette/runs/2754772233 - this is an intermittent error:

__________ ERROR at setup of test_hook_register_routes_render_message __________
[gw0] linux -- Python 3.8.10 /opt/hostedtoolcache/Python/3.8.10/x64/bin/python

tmpdir = local('/tmp/pytest-of-runner/pytest-0/popen-gw0/test_hook_register_routes_rend0')
request = <SubRequest 'restore_working_directory' for <Function test_hook_register_routes_render_message>>

    @pytest.fixture
    def restore_working_directory(tmpdir, request):
>       previous_cwd = os.getcwd()
E       FileNotFoundError: [Errno 2] No such file or directory
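
The failure happens when another test deletes the directory that was the current working directory. One defensive variant of the fixture (a sketch, not necessarily the fix that landed):

import os
import pytest

@pytest.fixture
def restore_working_directory(tmpdir, request):
    try:
        previous_cwd = os.getcwd()
    except FileNotFoundError:
        # the previous cwd was deleted by another test - fall back to tmpdir
        previous_cwd = str(tmpdir)
    tmpdir.chdir()
    request.addfinalizer(lambda: os.chdir(previous_cwd))
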
datasette 107914493 issue    
912464443 MDU6SXNzdWU5MTI0NjQ0NDM= 1360 Security flaw, to be fixed in 0.56.1 and 0.57 simonw 9599 closed 0     2 2021-06-05T21:53:51Z 2021-06-05T22:23:23Z 2021-06-05T22:22:06Z OWNER  

See security advisory here for details: https://github.com/simonw/datasette/security/advisories/GHSA-xw7c-jx9m-xh5g - the ?_trace=1 debugging option was not correctly escaping its JSON output, resulting in a reflected cross-site scripting vulnerability.

datasette 107914493 issue    
912418094 MDU6SXNzdWU5MTI0MTgwOTQ= 1358 Release Datasette 0.57 simonw 9599 closed 0     3 2021-06-05T19:56:13Z 2021-06-05T22:20:07Z 2021-06-05T22:20:07Z OWNER   datasette 107914493 issue    
912419349 MDU6SXNzdWU5MTI0MTkzNDk= 1359 `?_trace=1` should only be available with a new `trace_debug` setting simonw 9599 closed 0     0 2021-06-05T19:59:27Z 2021-06-05T20:18:46Z 2021-06-05T20:18:46Z OWNER  

Just like template debug mode is controlled by this off-by-default setting: https://github.com/simonw/datasette/blob/368aa5f1b16ca35f82d90ff747023b9a2bfa27c1/datasette/app.py#L160-L164

datasette 107914493 issue    
849582643 MDExOlB1bGxSZXF1ZXN0NjA4MzM0MDk2 1291 Update docs: explain allow_download setting louispotok 5413548 closed 0     2 2021-04-03T05:28:33Z 2021-06-05T19:48:51Z 2021-06-05T19:48:51Z CONTRIBUTOR simonw/datasette/pulls/1291

This fixes one possible source of confusion seen in #502 and clarifies
when database downloads will be shown and allowed.

datasette 107914493 pull    
910092577 MDU6SXNzdWU5MTAwOTI1Nzc= 1356 Research: syntactic sugar for using --get with SQL queries, maybe "datasette query" simonw 9599 open 0     9 2021-06-03T04:49:42Z 2021-06-05T19:06:06Z   OWNER  

Inspired by https://github.com/simonw/sqlite-utils/issues/264 - in particular this example:

datasette covid.db --get='/covid.yaml?sql=select * from ny_times_us_counties limit 1' 
- date: '2020-01-21'
  county: Snohomish
  state: Washington
  fips: 53061
  cases: 1
  deaths: 0

Having to construct that URL - including potentially URL escaping the SQL query - isn't a great developer experience.

Imagine if you could do this instead:

datasette covid.db --query "select * from ny_times_us_counties limit 1" --format yaml
datasette 107914493 issue    
813899472 MDU6SXNzdWU4MTM4OTk0NzI= 1238 Custom pages don't work with base_url setting tsibley 79913 closed 0     9 2021-02-22T21:58:58Z 2021-06-05T18:59:55Z 2021-06-05T18:59:55Z NONE  

It seems that custom pages aren't routing properly when the base_url setting is used.

To reproduce, with Datasette 0.55.

Create a templates/pages/custom.html with some text.

mkdir -p templates/pages/
echo "Hello, world!" > templates/pages/custom.html

Start Datasette.

datasette --template-dir templates/

Visit http://localhost:8001/custom and see "Hello, world!".

Start Datasette with a base_url.

datasette --template-dir templates/ --setting base_url /prefix/

Visit http://localhost:8001/prefix/custom and see a "Database not found: custom" 404.

Note that like all routes, http://localhost:8001/custom still works when run with base_url.

datasette 107914493 issue    
912394511 MDExOlB1bGxSZXF1ZXN0NjYyNTU3MjQw 1357 Make custom pages compatible with base_url setting simonw 9599 closed 0     1 2021-06-05T18:54:39Z 2021-06-05T18:59:54Z 2021-06-05T18:59:54Z OWNER simonw/datasette/pulls/1357

Refs #1238.

datasette 107914493 pull    
656959584 MDU6SXNzdWU2NTY5NTk1ODQ= 893 pip3 install datasette not serving static on linuxbrew. zodman 44167 closed 0     1 2020-07-14T23:33:38Z 2021-06-02T04:29:56Z 2021-06-02T04:29:56Z NONE  

This error wasn't thrown

Traceback (most recent call last):
  File "/home/linuxbrew/.linuxbrew/opt/python@3.8/lib/python3.8/site-packages/datasette/utils/asgi.py", line 289, in inner_static
    full_path.relative_to(root_path)
  File "/home/linuxbrew/.linuxbrew/opt/python@3.8/lib/python3.8/pathlib.py", line 904, in relative_to
    raise ValueError("{!r} does not start with {!r}"
ValueError: '/home/linuxbrew/.linuxbrew/lib/python3.8/site-packages/datasette/static/app.css' does not start with '/home/linuxbrew/.linuxbrew/opt/python@3.8/lib/python3.8/site-packages/datasette/static'

Linuxbrew installs python@3.8 using symbolic links, so calling full_path.relative_to(root_path) throws a ValueError. This happens when you install from pip3.

When you install with python3 setup.py develop, it works fine.

Either way, the end result was that the static files weren't being served.

datasette 107914493 issue    
756818250 MDU6SXNzdWU3NTY4MTgyNTA= 1127 Make the custom SQL query text box larger or resizable zaneselvans 596279 closed 0     1 2020-12-04T05:37:11Z 2021-06-02T04:29:06Z 2021-06-02T04:28:55Z NONE  

The text entry field for custom SQL queries is too small to display a moderately complex query, especially when it's been formatted. Would it be easy to make the textbox resizable by the user rather than having a fixed height?

datasette 107914493 issue    
864979486 MDExOlB1bGxSZXF1ZXN0NjIxMTE3OTc4 1306 Avoid error sorting by relationships if related tables are not allowed slygent 416374 closed 0     4 2021-04-22T13:53:17Z 2021-06-02T04:27:00Z 2021-06-02T04:25:28Z CONTRIBUTOR simonw/datasette/pulls/1306

Refs #1305

datasette 107914493 pull    
864969683 MDU6SXNzdWU4NjQ5Njk2ODM= 1305 Index view crashes when any database table is not accessible to actor slygent 416374 closed 0     0 2021-04-22T13:44:22Z 2021-06-02T04:26:29Z 2021-06-02T04:26:29Z CONTRIBUTOR  

Because of https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L63, the tables dict that gets built does not include invisible tables; however, if https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L80 is reached (because table_counts was not successfully initialized, e.g. due to a very large database), then since db.get_all_foreign_keys() returns ALL tables, a KeyError will be raised.

This error can be recreated with fixtures.db if any table is hidden, e.g. by adding something like "foreign_key_references": { "allow": {} } to fixtures-metadata.json and preventing table_counts from being populated at https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L77.

I'm not sure how to fix this error; perhaps by testing whether the table is in the aforementioned tables dict - see the sketch below.
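
A sketch of that guard (the variable names are assumptions based on the linked index.py):

async def annotate_visible_tables(db, tables):
    all_foreign_keys = await db.get_all_foreign_keys()
    for table_name, foreign_keys in all_foreign_keys.items():
        if table_name not in tables:
            # table is hidden from this actor - skip it instead of
            # raising a KeyError
            continue
        tables[table_name]["foreign_keys"] = foreign_keys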

datasette 107914493 issue    
520655983 MDU6SXNzdWU1MjA2NTU5ODM= 619 "Invalid SQL" page should let you edit the SQL simonw 9599 closed 0   Datasette Next 6158551 14 2019-11-10T20:54:12Z 2021-06-02T04:15:54Z 2021-06-02T04:15:54Z OWNER   datasette 107914493 issue    
904537568 MDExOlB1bGxSZXF1ZXN0NjU1Njg0NDc3 1346 Re-display user's query with an error message if an error occurs simonw 9599 closed 0     3 2021-05-28T02:04:20Z 2021-06-02T03:46:21Z 2021-06-02T03:46:21Z OWNER simonw/datasette/pulls/1346

Refs #619

datasette 107914493 pull    
828811618 MDU6SXNzdWU4Mjg4MTE2MTg= 1257 Table names containing single quotes break things simonw 9599 closed 0     2 2021-03-11T06:29:38Z 2021-06-02T03:28:29Z 2021-06-02T03:28:29Z OWNER  

e.g. I found a table called Yesterday's ELRs by County

It threw an error inside the detect_fts() function attempting to run this SQL query:

        select name from sqlite_master
            where rootpage = 0
            and (
                sql like '%VIRTUAL TABLE%USING FTS%content="Yesterday's ELRs by County"%'
                or sql like '%VIRTUAL TABLE%USING FTS%content=[Yesterday's ELRs by County]%'
                or (
                    tbl_name = "Yesterday's ELRs by County"
                    and sql like '%VIRTUAL TABLE%USING FTS%'
                )
            )

Here's the code at fault: https://github.com/simonw/datasette/blob/640ac7071b73111ba4423812cd683756e0e1936b/datasette/utils/__init__.py#L534-L548
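
One way to make this robust is to pass the table name and LIKE patterns as bound parameters instead of interpolating them into the SQL (a sketch; Datasette's actual fix may differ in detail):

def detect_fts(conn, table):
    # quotes in the table name are now data, not SQL
    rows = conn.execute(
        """
        select name from sqlite_master
        where rootpage = 0
        and (
            sql like :like_double
            or sql like :like_square
            or (tbl_name = :table and sql like '%VIRTUAL TABLE%USING FTS%')
        )
        """,
        {
            "like_double": f'%VIRTUAL TABLE%USING FTS%content="{table}"%',
            "like_square": f"%VIRTUAL TABLE%USING FTS%content=[{table}]%",
            "table": table,
        },
    ).fetchall()
    return [row[0] for row in rows]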

datasette 107914493 issue    
800669347 MDU6SXNzdWU4MDA2NjkzNDc= 1216 /-/databases should reflect connection order, not alphabetical order simonw 9599 closed 0     1 2021-02-03T20:20:23Z 2021-06-02T03:10:19Z 2021-06-02T03:10:19Z OWNER  

The order in which databases are attached to Datasette matters - it affects the homepage, and it's beginning to influence how certain plugins work (see https://github.com/simonw/datasette-tiles/issues/8).

Two years ago in cccea85be6aaaeadb31f3b588ec7f732628815f5 I made /-/databases return things in alphabetical order, to fix a test failure in Python 3.5.

Python 3.5 is no longer supported, so this is no longer necessary - and this behaviour should now be treated as a bug.

datasette 107914493 issue    
908276134 MDExOlB1bGxSZXF1ZXN0NjU4OTkxNDA0 1352 Bump black from 21.5b1 to 21.5b2 dependabot[bot] 49699333 closed 0     1 2021-06-01T13:08:52Z 2021-06-02T02:56:45Z 2021-06-02T02:56:44Z CONTRIBUTOR simonw/datasette/pulls/1352

Bumps black from 21.5b1 to 21.5b2.

Release notes

Sourced from black's releases.

21.5b2

Black

  • A space is no longer inserted into empty docstrings (#2249)
  • Fix handling of .gitignore files containing non-ASCII characters on Windows (#2229)
  • Respect .gitignore files in all levels, not only root/.gitignore file (apply .gitignore rules like git does) (#2225)
  • Restored compatibility with Click 8.0 on Python 3.6 when LANG=C used (#2227)
  • Add extra uvloop install + import support if in python env (#2258)
  • Fix --experimental-string-processing crash when matching parens are not found (#2283)
  • Make sure to split lines that start with a string operator (#2286)
  • Fix regular expression that black uses to identify f-expressions (#2287)

Blackd

  • Add a lower bound for the aiohttp-cors dependency. Only 0.4.0 or higher is supported. (#2231)

Packaging

  • Release self-contained x86_64 MacOS binaries as part of the GitHub release pipeline (#2198)
  • Always build binaries with the latest available Python (#2260)

Documentation

  • Add discussion of magic comments to FAQ page (#2272)
  • --experimental-string-processing will be enabled by default in the future (#2273)
  • Fix typos discovered by codespell (#2228)
  • Fix Vim plugin installation instructions. (#2235)
  • Add new Frequently Asked Questions page (#2247)
  • Fix encoding + symlink issues preventing proper build on Windows (#2262)
Changelog

Sourced from black's changelog: the same entries as the release notes above, plus:

Integrations

  • The official Black action now supports choosing what version to use, and supports the major 3 OSes. (#1940)
datasette 107914493 pull    
323671577 MDU6SXNzdWUzMjM2NzE1Nzc= 263 Facets should not execute for ?shape=array|object simonw 9599 closed 0     3 2018-05-16T15:26:13Z 2021-06-02T02:54:34Z 2021-06-02T02:54:34Z OWNER  

Split off from #255 - there's no point executing the facet SQL for the ?_shape=array and ?_shape=object API responses.

datasette 107914493 issue    
906977719 MDU6SXNzdWU5MDY5Nzc3MTk= 1350 ?_nofacets=1 query string argument for disabling facets and suggested facets simonw 9599 closed 0     2 2021-05-31T02:22:29Z 2021-06-01T16:19:38Z 2021-05-31T02:39:18Z OWNER  

This is needed as an internal option for #1349. datasette-graphql can benefit from this too - maybe can even use it so that if you pass ?_shape=array it gets automatically added, fixing #263.

datasette 107914493 issue    
908446997 MDU6SXNzdWU5MDg0NDY5OTc= 1353 ?_nocount=1 for opting out of table counts simonw 9599 closed 0     2 2021-06-01T15:53:27Z 2021-06-01T16:18:54Z 2021-06-01T16:17:04Z OWNER  

Running a trace against a CSV streaming export with the new _trace=1 feature from #1351 shows that the following code is executing a select count(*) from table for every page of results returned: https://github.com/simonw/datasette/blob/d1d06ace49606da790a765689b4fbffa4c6deecb/datasette/views/table.py#L700-L705

This is inefficient - a new ?_nocount=1 option would let us disable this count in the same way as #1349: https://github.com/simonw/datasette/blob/d1d06ace49606da790a765689b4fbffa4c6deecb/datasette/views/base.py#L264-L276

datasette 107914493 issue    
908465747 MDU6SXNzdWU5MDg0NjU3NDc= 1354 Update help in tests for latest Click simonw 9599 closed 0     1 2021-06-01T16:14:31Z 2021-06-01T16:17:04Z 2021-06-01T16:17:04Z OWNER  

Now that Uvicorn 0.14 is out with an unpinned Click dependency - https://github.com/encode/uvicorn/pull/1033 - our test suite runs against Click 8.0 - which subtly changes the output of --help causing test failures: https://github.com/simonw/datasette/runs/2720383031?check_suite_focus=true

    def test_help_includes(name, filename):
        expected = (docs_path / filename).read_text()
        runner = CliRunner()
        result = runner.invoke(cli, name.split() + ["--help"], terminal_width=88)
        actual = f"$ datasette {name} --help\n\n{result.output}"
        # actual has "Usage: cli package [OPTIONS] FILES"
        # because it doesn't know that cli will be aliased to datasette
        expected = expected.replace("Usage: datasette", "Usage: cli")
>       assert expected == actual
E       AssertionError: assert '$ datasette ...e and exit.\n' == '$ datasette ...e and exit.\n'
E         Skipping 848 identical leading characters in diff, use -v to show
E           nt_id xxx
E         + 
E             --version-note TEXT             Additional note to show on /-/versions
E             --secret TEXT                   Secret used for signing secure values, such as signed
E                                             cookies
E         + 
E             --title TEXT                    Title for metadata
datasette 107914493 issue    
845794436 MDU6SXNzdWU4NDU3OTQ0MzY= 1284 Feature or Documentation Request: Individual table as home page template mroswell 192568 open 0     3 2021-03-31T03:56:17Z 2021-05-31T15:42:10Z   CONTRIBUTOR  

It would be great to have a sample showing how to move a single database that has a single table, to the index page. I'm trying it now, and find there is a real depth of Datasette and Python understanding that's required to be successful.

I've got all the basic jinja concepts down... variables, template control structures, template inheritance, template overrides, css, html, the --template-dir and --static arguments, etc.

But copying the table.html file to index.html doesn't work. There are undocumented functions and filters... I can figure some of them out (yay, url_builder.py and utils/__init__.py!) but it's a slog better handled by a much stronger Python developer.

One sample would make a world of difference. The ideal form of this documentation would be a diff between the default table.html and how that would look if essentially moved to index.html. The use case is for everyone who wants to create a public-facing website to explore a single table at the root directory. (Maybe a second bit of documentation for people who have a single database with multiple tables.)

(Hmm... might be cool to have a setting for that, where it happens automagically! If only one table, then home page is at the table level. if only one database, then home page is at the database level.... as an option.)

I suppose I could ignore this, and somehow do this in the DNS settings once I hook up Vercel to a domain name, maybe.. and remove the breadcrumbs in table.html... but for now, a documentation request in the form of a diff... for viewing a single table (or a single database) at the root.

(Actually, there's probably room for a whole expanded section on templates. Noticed some nice table metadata in one of the datasette examples, for instance... Hmm... maybe a whole library of solutions in one place... maybe a documentation hackathon! If that's of interest, of course it's a separate issue. )

datasette 107914493 issue    
904071938 MDU6SXNzdWU5MDQwNzE5Mzg= 1345 ?_nocol= does not interact well with default facets simonw 9599 closed 0     7 2021-05-27T18:39:55Z 2021-05-31T02:40:44Z 2021-05-31T02:31:21Z OWNER  

Clicking "Hide this column" on fips on https://covid-19.datasettes.com/covid/ny_times_us_counties shows this error:

https://covid-19.datasettes.com/covid/ny_times_us_counties?_nocol=fips

Invalid SQL

no such column: fips

The reason is that https://covid-19.datasettes.com/-/metadata sets up the following:

  "ny_times_us_counties": {
      "sort_desc": "date",
      "facets": [
          "state",
          "county",
          "fips"
      ],

It's setting fips as a default facet, which breaks if you attempt to remove the column using ?_nocol.

datasette 107914493 issue    
855446829 MDExOlB1bGxSZXF1ZXN0NjEzMTc4OTY4 1296 Dockerfile: use Ubuntu 20.10 as base tmcl-it 82332573 open 0     4 2021-04-12T00:23:32Z 2021-05-28T18:06:11Z   FIRST_TIMER simonw/datasette/pulls/1296

This PR changes the main Dockerfile to use ubuntu:20.10 as base image instead of python:3.9.2-slim-buster (itself based on debian:buster-slim).

The Dockerfile is essentially the one from https://github.com/simonw/datasette/issues/1249#issuecomment-803698983 with some additional cleanups to slim it down.

This fixes a couple of issues:
1. The SQLite library in Debian Buster is too old to support generated columns (which need SQLite >= 3.31.0)
2. Installing SpatiaLite from the Debian sid repositories has the side effect of also installing updates to libc and libstdc++ from sid.

As a bonus, the Docker image becomes smaller:

$ docker image ls
REPOSITORY                   TAG           IMAGE ID       CREATED       SIZE
datasette                    0.56-ubuntu   f7aca255140a   5 hours ago   212MB
datasetteproject/datasette   0.56          efb3b282f390   13 days ago   258MB

Reproduction of the first issue

$ curl -O https://latest.datasette.io/fixtures.db
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  260k    0  260k    0     0   489k      0 --:--:-- --:--:-- --:--:--  489k

$ docker run -v `pwd`:/mnt datasetteproject/datasette:0.56 datasette /mnt/fixtures.db
Traceback (most recent call last):
  File "/usr/local/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 544, in serve
    asyncio.get_event_loop().run_until_complete(check_databases(ds))
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 584, in check_databases
    await database.execute_fn(check_connection)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 155, in execute_fn
    return await asyncio.get_event_loop().run_in_executor(
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 153, in in_thread
    return fn(conn)
  File "/usr/local/lib/python3.9/site-packages/datasette/utils/__init__.py", line 892, in check_connection
    for r in conn.execute(
sqlite3.DatabaseError: malformed database schema (generated_columns) - near "AS": syntax error

Here is the version reported by Python's sqlite3 module (note this is the pysqlite module version, not the SQLite library version, which sqlite3.sqlite_version would show):

$ docker run -v `pwd`:/mnt -it datasetteproject/datasette:0.56 /bin/bash
root@d9220d3b95dd:/# python3
Python 3.9.2 (default, Mar 27 2021, 02:50:26) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> sqlite3.version
'2.6.0'

Reproduction of the second issue

$ docker build . -t datasette --build-arg VERSION=0.55
[...snip...]
The following packages will be upgraded:
  libc-bin libc6 libstdc++6
[...snip...]
Unpacking libc6:amd64 (2.31-11) over (2.28-10) ...
[...snip...]
Unpacking libstdc++6:amd64 (10.2.1-6) over (8.3.0-6) ...
[...snip...]

Both libc and libstdc++ are backwards compatible, so the image still works, but it will result in a combination of libraries and Python versions that exists only in the Datasette image, so it's likely untested. In addition, since Debian sid is an always-changing rolling-release, the versions of libc, libstdc++, Spatialite, and their dependencies change frequently, so the library versions in the Datasette image will depend on the day when it was built.

datasette 107914493 pull    
904598267 MDExOlB1bGxSZXF1ZXN0NjU1NzQxNDI4 1348 DRAFT: add test and scan for docker images blairdrummond 10801138 open 0     2 2021-05-28T03:02:12Z 2021-05-28T03:06:16Z   CONTRIBUTOR simonw/datasette/pulls/1348

NOTE: I don't think this PR is ready, since the arm/v6 and arm/v7 images are failing pytest due to missing dependencies (gcc and friends). But it's pretty close.

Closes https://github.com/simonw/datasette/issues/1344 . Using a build-matrix for the platforms and this test, we test all the platforms in parallel. I also threw in container scanning.

Switch pip install to use either tags or commit shas

Notably! This also changes the Dockerfile so that it accepts tags or commit-shas.

# It's backwards compatible with tags, but also lets you use shas
root@712071df17af:/# pip install git+git://github.com/simonw/datasette.git@0.56                                                                                                                                            
Collecting git+git://github.com/simonw/datasette.git@0.56                                                                                                                                                                  
  Cloning git://github.com/simonw/datasette.git (to revision 0.56) to /tmp/pip-req-build-u6dhm945                                                                                                                          
  Running command git clone -q git://github.com/simonw/datasette.git /tmp/pip-req-build-u6dhm945                                                                                                                           
  Running command git checkout -q af5a7f1c09f6a902bb2a25e8edf39c7034d2e5de                                                                                                                                                 
Collecting Jinja2<2.12.0,>=2.10.3                                                                            
  Downloading Jinja2-2.11.3-py2.py3-none-any.whl (125 kB) 

This lets you build the containers in CI on every push for testing, which may resolve this problem?

Workflow run example

You can see the results in my workflow here. The commit history is different because I squashed this branch; also, in the testing branch I had to change github.com/simonw to github.com/blairdrummond so that the CI would pick up my git_sha.

Why did the builds fail?

NOTE: All of the builds fail, but for different reasons! A few fail to install Rust; amd64 passes the tests (phew!) but has critical CVEs that fail the container scan; and arm/v6 and arm/v7 seem to fail to install the test dependencies due to missing programs like gcc. (gcc alone is not sufficient, though, as this run indicates.)

datasette 107914493 pull    
904582277 MDExOlB1bGxSZXF1ZXN0NjU1NzI2Mzg3 1347 Test docker platform blair only blairdrummond 10801138 closed 0     0 2021-05-28T02:47:09Z 2021-05-28T02:47:28Z 2021-05-28T02:47:28Z CONTRIBUTOR simonw/datasette/pulls/1347
datasette 107914493 pull    
903986178 MDU6SXNzdWU5MDM5ODYxNzg= 1344 Test Datasette Docker images built for different architectures simonw 9599 open 0     10 2021-05-27T16:52:29Z 2021-05-27T17:52:58Z   OWNER  

Continuing on from #1319 - now that we have the ability to build Datasette's Docker image against multiple architectures, we should test that it works.

We can do this with QEMU emulation, see https://twitter.com/nevali/status/1397958044571602945

datasette 107914493 issue    
881219362 MDExOlB1bGxSZXF1ZXN0NjM0ODIxMDY1 1319 Add Docker multi-arch support with Buildx blairdrummond 10801138 closed 0     5 2021-05-08T19:35:03Z 2021-05-27T16:49:24Z 2021-05-27T16:49:24Z CONTRIBUTOR simonw/datasette/pulls/1319

This adds Docker support for additional CPU architectures (like ARM) using Docker's Buildx action.

You can see what that looks like on Dockerhub

And how it lets Datasette run on a Raspberry Pi (top is my dockerhub, bottom is upstream)

The workflow log here (I subbed blairdrummond for datasetteproject in my branch)

datasette 107914493 pull    
903978133 MDU6SXNzdWU5MDM5NzgxMzM= 1343 Figure out how to publish alpha/beta releases to Docker Hub simonw 9599 closed 0     4 2021-05-27T16:42:17Z 2021-05-27T16:46:37Z 2021-05-27T16:45:41Z OWNER  

It looks like all I need to do to ship an alpha version to Docker Hub is NOT point the latest tag at it after it goes live: https://github.com/simonw/datasette/blob/1a8972f9c012cd22b088c6b70661a9c3d3847853/.github/workflows/publish.yml#L75-L77

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1319#issuecomment-849780481_

datasette 107914493 issue    
898904402 MDU6SXNzdWU4OTg5MDQ0MDI= 1337 "More" link for facets that shows _facet_size=max results simonw 9599 closed 0     7 2021-05-23T00:08:51Z 2021-05-27T16:14:14Z 2021-05-27T16:01:03Z OWNER  

Original title: "More" link for facets that shows the full set of results

The simplest way to do this will be to have it link to a generated SQL query.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1332#issuecomment-846479062_

datasette 107914493 issue    
893890496 MDU6SXNzdWU4OTM4OTA0OTY= 1332 ?_facet_size=X to increase number of facets results on the page mroswell 192568 closed 0     5 2021-05-18T02:40:16Z 2021-05-27T16:13:07Z 2021-05-23T00:34:37Z CONTRIBUTOR  

Is there a way to add a parameter to the URL to modify default_facet_size?

Likewise, is there a way to produce a link on the three dots to expand to all items (or match the previous number of items, or add x more)?

datasette 107914493 issue    
903902495 MDU6SXNzdWU5MDM5MDI0OTU= 1342 Improve `path_with_replaced_args()` and friends and document them simonw 9599 open 0     3 2021-05-27T15:18:28Z 2021-05-27T15:23:02Z   OWNER  

In order to cleanly implement this I need to expose the path_with_replaced_args utility function to Datasette's template engine. This is the first time this will become an exposed (and hence should-be-documented) API, and I don't like its shape much.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1337#issuecomment-849721280_

datasette 107914493 issue    
903200328 MDU6SXNzdWU5MDMyMDAzMjg= 1341 "Show all columns" cog menu item should show if ?_col= is used simonw 9599 closed 0     1 2021-05-27T04:28:17Z 2021-05-27T04:31:16Z 2021-05-27T04:31:16Z OWNER   datasette 107914493 issue    
517451234 MDU6SXNzdWU1MTc0NTEyMzQ= 615 ?_col= and ?_nocol= support for toggling columns on table view simonw 9599 closed 0     16 2019-11-04T22:55:41Z 2021-05-27T04:26:10Z 2021-05-27T04:17:44Z OWNER  

Split off from #292 (I guess this is a re-opening of #312).

datasette 107914493 issue    
326800219 MDU6SXNzdWUzMjY4MDAyMTk= 292 Mechanism for customizing the SQL used to select specific columns in the table view simonw 9599 closed 0     15 2018-05-27T09:05:52Z 2021-05-27T04:25:01Z 2021-05-27T04:25:01Z OWNER  

Some columns don't make a lot of sense in their default representation - binary blobs such as SpatiaLite geometries for example, or lengthy columns that really should be truncated somehow.

We may also find that there are tables where we don't want to show all of the columns - so a mechanism to select a subset of columns would be nice.

I think there are two features here:

  • the ability to request a subset of columns on the table view
  • the ability to override the SQL for a specific column and/or add extra columns - AsGeoJSON(Geometry) for example

Both features should be available via both querystring arguments and in metadata.json

The querystring argument for custom SQL should only work if allow_sql config is turned on.

Refs #276

datasette 107914493 issue    
899851083 MDExOlB1bGxSZXF1ZXN0NjUxNDkyODg4 1339 ?_col=/?_nocol= to show/hide columns on the table page simonw 9599 closed 0     1 2021-05-24T17:15:20Z 2021-05-27T04:17:44Z 2021-05-27T04:17:43Z OWNER simonw/datasette/pulls/1339

See #615. Still to do:

  • Allow combination of ?_col= and ?_nocol= (_nocol wins)
  • Deduplicate same column if passed in ?_col= multiple times
  • Validate that user did not try to remove a primary key
  • Add tests
  • Ensure this works correctly for SQL views
  • Add documentation
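
For reference, a minimal sketch of how the ?_col=/?_nocol= behaviour could be exercised with Datasette's internal test client (the fixtures.db database, the facetable table, and its column names are assumptions here, not part of this PR):

import pytest
from datasette.app import Datasette

@pytest.mark.asyncio
async def test_col_parameter():
    datasette = Datasette(["fixtures.db"])  # assumed test database
    # Request only two columns; the primary key should always be kept.
    response = await datasette.client.get(
        "/fixtures/facetable.json?_col=city_id&_col=state&_shape=array"
    )
    assert response.status_code == 200
    row = response.json()[0]
    assert set(row.keys()) == {"pk", "city_id", "state"}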
datasette 107914493 pull    
901009787 MDU6SXNzdWU5MDEwMDk3ODc= 1340 Research: Cell action menu (like column action but for individual cells) simonw 9599 open 0     1 2021-05-25T15:49:16Z 2021-05-26T18:59:58Z   OWNER  

Had an idea today that it might be useful to select an individual cell and say things like "show me all other rows with the same value" - maybe even a set of other menu options against cells as well.

Mocked up a show-on-hover ellipses demo using the CSS inspector:

datasette 107914493 issue    
564833696 MDU6SXNzdWU1NjQ4MzM2OTY= 670 Prototype for Datasette on PostgreSQL simonw 9599 open 0     13 2020-02-13T17:17:55Z 2021-05-26T18:33:58Z   OWNER  

I thought this would never happen, but now that I'm deep in the weeds of running SQLite in production for Datasette Cloud I'm starting to reconsider my policy of only supporting SQLite.

Some of the factors making me think PostgreSQL support could be worth the effort:
- Serverless. I'm getting increasingly excited about writable-database use-cases for Datasette. If it could talk to PostgreSQL then users could easily deploy it on Heroku or other serverless providers that can talk to a managed RDS-style PostgreSQL.
- Existing databases. Plenty of organizations have PostgreSQL databases. They can export to SQLite using db-to-sqlite but that's a pretty big barrier to getting started - being able to run datasette postgresql://connection-string and start trying it out would be a massively better experience.
- Data size. I keep running into use-cases where I want to run Datasette against many GBs of data. SQLite can do this but PostgreSQL is much more optimized for large data, especially given the existence of tools like Citus.
- Marketing. Convincing people to trust their data to SQLite is potentially a big barrier to adoption. Even if I've convinced myself it's trustworthy I still have to convince everyone else.
- It might not be that hard? If this required a ground-up rewrite it wouldn't be worth the effort, but I have a hunch that it may not be too hard - most of the SQL in Datasette should work on both databases since it's almost all portable SELECT statements (see the sketch below). If Datasette did DML this would be a lot harder, but it doesn't.
- Plugins! This feels like a natural surface for a plugin - at which point people could add MySQL support and suchlike in the future.

The above reasons feel strong enough to justify a prototype.
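
As a rough illustration of the portability point above, the same SELECT can run unchanged against both engines. This is only a sketch: psycopg2, the github.db file, and the connection string are all assumptions, not anything Datasette currently ships:

import sqlite3
import psycopg2  # assumption: psycopg2 is installed

QUERY = "select id, title from issues where state = 'open' order by id limit 10"

# SQLite
sqlite_conn = sqlite3.connect("github.db")  # hypothetical database file
print(sqlite_conn.execute(QUERY).fetchall())

# PostgreSQL - the same portable SQL, different driver
pg_conn = psycopg2.connect("postgresql://localhost/github")  # hypothetical DSN
with pg_conn.cursor() as cursor:
    cursor.execute(QUERY)
    print(cursor.fetchall())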

datasette 107914493 issue    
892457208 MDU6SXNzdWU4OTI0NTcyMDg= 1327 Support Unicode characters in metadata.json GmGniap 20846286 closed 0     2 2021-05-15T14:33:58Z 2021-05-24T19:10:21Z 2021-05-24T19:10:21Z NONE  

Hello, when I used Burmese (Unicode) characters in metadata.json like below -

It gave wrong results when I ran datasette -

It would be great and helpful for us if metadata.json could support Unicode-based Asian languages such as Burmese.
Thanks & Regards.

datasette 107914493 issue    
884952179 MDU6SXNzdWU4ODQ5NTIxNzk= 1320 Can't use apt-get in Dockerfile when using datasetteproj/datasette as base brandonrobertz 2670795 closed 0     4 2021-05-10T19:37:27Z 2021-05-24T18:15:56Z 2021-05-24T18:07:08Z NONE  

The datasette base Docker image is super convenient, but there's one problem: if any of the plugins you install require additional system dependencies (e.g., xz, git, curl) then any attempt to use apt in said Dockerfile results in an explosion:

$ docker-compose build
Building server
[+] Building 9.9s (7/9)
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 666B  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 34B  0.0s
 => [internal] load metadata for docker.io/datasetteproject/datasette:latest  0.6s
 => [base 1/4] FROM docker.io/datasetteproject/datasette@sha256:2250d0fbe57b1d615a8d6df0c9d43deb9533532e00bac68854773d8ff8dcf00a  0.0s
 => [internal] load build context  1.8s
 => => transferring context: 2.44MB  1.8s
 => CACHED [base 2/4] WORKDIR /datasette  0.0s
 => ERROR [base 3/4] RUN apt-get update     && apt-get install --no-install-recommends -y git ssh curl xz-utils  9.2s
------
 > [base 3/4] RUN apt-get update     && apt-get install --no-install-recommends -y git ssh curl xz-utils:
#6 0.446 Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
#6 0.449 Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
#6 0.459 Get:3 http://httpredir.debian.org/debian sid InRelease [157 kB]
#6 0.784 Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
#6 0.790 Get:5 http://httpredir.debian.org/debian sid/main amd64 Packages [8626 kB]
#6 1.003 Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
#6 1.180 Get:7 http://security.debian.org/debian-security buster/updates/main amd64 Packages [286 kB]
#6 7.095 Get:8 http://deb.debian.org/debian buster-updates/main amd64 Packages [10.9 kB]
#6 8.058 Fetched 17.2 MB in 8s (2243 kB/s)
#6 8.058 Reading package lists...
#6 9.166 E: flAbsPath on /var/lib/dpkg/status failed - realpath (2: No such file or directory)
#6 9.166 E: Could not open file  - open (2: No such file or directory)
#6 9.166 E: Problem opening
#6 9.166 E: The package lists or status file could not be parsed or opened.

The problem seems to be from completely wiping out /var/lib/dpkg in the upstream Dockerfile:

https://github.com/simonw/datasette/blob/1b697539f5b53cec3fe13c0f4ada13ba655c88c7/Dockerfile#L18

I've tested without removing the directory and apt works as expected.

datasette 107914493 issue    
899169307 MDU6SXNzdWU4OTkxNjkzMDc= 1338 Fix jinja2 warnings simonw 9599 closed 0     0 2021-05-24T01:38:23Z 2021-05-24T01:41:55Z 2021-05-24T01:41:55Z OWNER  

Lots of these in the test suite now, after the Jinja upgrade in #1331:

tests/test_plugins.py::test_hook_render_cell_link_from_json
  datasette/tests/plugins/my_plugin_2.py:45: DeprecationWarning: 'jinja2.escape' is deprecated and will be removed in Jinja 3.1. Import 'markupsafe.escape' instead.
    label=jinja2.escape(data["label"] or "") or "&nbsp;",

tests/test_plugins.py::test_hook_render_cell_link_from_json
  datasette/tests/plugins/my_plugin_2.py:41: DeprecationWarning: 'jinja2.Markup' is deprecated and will be removed in Jinja 3.1. Import 'markupsafe.Markup' instead.
    return jinja2.Markup(
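
The warnings spell out the fix: import Markup and escape from markupsafe instead of jinja2. A sketch of the change in the plugin (the surrounding function and template string are illustrative, not the plugin's exact code):

from markupsafe import Markup, escape

def render_label(data):
    # Previously jinja2.Markup / jinja2.escape; markupsafe provides
    # the same objects without the Jinja 3.1 deprecation warning.
    return Markup("<strong>{label}</strong>").format(
        label=escape(data["label"] or "") or Markup("&nbsp;")
    )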
datasette 107914493 issue    
891969037 MDU6SXNzdWU4OTE5NjkwMzc= 1326 How to limit fields returned from the JSON API? bram2000 5268174 closed 0     1 2021-05-14T14:27:41Z 2021-05-23T02:55:06Z 2021-05-23T02:55:00Z NONE  

Hi,

I have quite wide tables, and in many cases only want a subset of the data (to save on network bandwidth). I need to use the JSON API as handling pagination is so much easier, but I can't see a way to select specific columns.

Is there a way to do this, or is it a feature request?

Thanks!

datasette 107914493 issue    
893537744 MDU6SXNzdWU4OTM1Mzc3NDQ= 1331 Add support for Jinja2 version 3.0 MarkusH 475613 closed 0     10 2021-05-17T17:14:36Z 2021-05-23T00:57:39Z 2021-05-23T00:57:39Z NONE  

A week ago, The Pallets Project released new major versions of several of its projects. Among those updates is one for Jinja2, which bumps it to version 3.0.0.

I'd like for Datasette to support Jinja2 version 3.0.

datasette 107914493 issue    
887241681 MDExOlB1bGxSZXF1ZXN0NjQwNDg0OTY2 1321 Bump black from 21.4b2 to 21.5b1 dependabot[bot] 49699333 closed 0     1 2021-05-11T13:12:28Z 2021-05-22T23:55:39Z 2021-05-22T23:55:39Z CONTRIBUTOR simonw/datasette/pulls/1321

Bumps black from 21.4b2 to 21.5b1.

Release notes

Sourced from black's releases.

21.5b1

Black

Documentation

  • Replaced all remaining references to the master branch with the main branch. Some additional changes in the source code were also made. (#2210)
  • Significantly reorganized the documentation to make much more sense. Check them out by heading over to the stable docs on RTD. (#2174)

21.5b0

Black

  • Set --pyi mode if --stdin-filename ends in .pyi (#2169)
  • Stop detecting target version as Python 3.9+ with pre-PEP-614 decorators that are being called but with no arguments (#2182)

Black-Primer

  • Add --no-diff to black-primer to suppress formatting changes (#2187)
Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
datasette 107914493 pull    
890073888 MDExOlB1bGxSZXF1ZXN0NjQzMTQ5Mjcz 1323 Update click requirement from ~=7.1.1 to >=7.1.1,<8.1.0 dependabot[bot] 49699333 closed 0     1 2021-05-12T13:08:56Z 2021-05-22T23:54:48Z 2021-05-22T23:54:48Z CONTRIBUTOR simonw/datasette/pulls/1323

Updates the requirements on click to permit the latest version.

Release notes

Sourced from click's releases.

8.0.0

New major versions of all the core Pallets libraries, including Click 8.0, have been released! :tada:

This represents a significant amount of work, and there are quite a few changes. Be sure to carefully read the changelog, and use tools such as pip-compile and Dependabot to pin your dependencies and control your updates.

Changelog

Sourced from click's changelog.

Version 8.0.0

Released 2021-05-11

  • Drop support for Python 2 and 3.5.
  • Colorama is always installed on Windows in order to provide style and color support. :pr:1784
  • Adds a repr to Command, showing the command name for friendlier debugging. :issue:1267, :pr:1295
  • Add support for distinguishing the source of a command line parameter. :issue:1264, :pr:1329
  • Add an optional parameter to ProgressBar.update to set the current_item. :issue:1226, :pr:1332
  • version_option uses importlib.metadata (or the importlib_metadata backport) instead of pkg_resources. :issue:1582
  • If validation fails for a prompt with hide_input=True, the value is not shown in the error message. :issue:1460
  • An IntRange or FloatRange option shows the accepted range in its help text. :issue:1525, :pr:1303
  • IntRange and FloatRange bounds can be open (<) instead of closed (<=) by setting min_open and max_open. Error messages have changed to reflect this. :issue:1100
  • An option defined with duplicate flag names ("--foo/--foo") raises a ValueError. :issue:1465
  • echo() will not fail when using pytest's capsys fixture on Windows. :issue:1590
  • Resolving commands returns the canonical command name instead of the matched name. This makes behavior such as help text and Context.invoked_subcommand consistent when using patterns like AliasedGroup. :issue:1422
  • The BOOL type accepts the values "on" and "off". :issue:1629
  • A Group with invoke_without_command=True will always invoke its result callback. :issue:1178
  • nargs == -1 and nargs > 1 is parsed and validated for values from environment variables and defaults. :issue:729
  • Detect the program name when executing a module or package with python -m name. :issue:1603
  • Include required parent arguments in help synopsis of subcommands. :issue:1475
  • Help for boolean flags with show_default=True shows the flag name instead of True or False. :issue:1538
  • Non-string objects passed to style() and secho() will be converted to string. :pr:1146
  • edit(require_save=True) will detect saves for editors that exit very fast on filesystems with 1 second resolution. :pr:1050
  • New class attributes make it easier to use custom core objects throughout an entire application. :pr:938

... (truncated)

Commits
  • 9da1669 Merge pull request #1877 from pallets/release-8.0.0
  • dfa6369 release version 8.0.0
  • b862cb1 update requirements
  • f51584c Merge pull request #1876 from pallets/pre-commit-ci-schedule
  • 804c71c update pre-commit monthly
  • ac655f8 Merge pull request #1872 from janLuke/fix/formatter_write_text
  • dcd991d HelpFormatter.write_text uses full width
  • 5215fc1 Merge pull request #1870 from AdrienPensart/allow_colors_in_metavar
  • e3e1691 repr is erasing ANSI escapes codes
  • 482e6e6 Merge pull request #1875 from pallets/pre-commit-ci-update-config
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
datasette 107914493 pull    
890073989 MDExOlB1bGxSZXF1ZXN0NjQzMTQ5MzY0 1325 Update itsdangerous requirement from ~=1.1 to >=1.1,<3.0 dependabot[bot] 49699333 closed 0     2 2021-05-12T13:09:03Z 2021-05-22T23:54:25Z 2021-05-22T23:54:25Z CONTRIBUTOR simonw/datasette/pulls/1325

Updates the requirements on itsdangerous to permit the latest version.

Release notes

Sourced from itsdangerous's releases.

2.0.0

New major versions of all the core Pallets libraries, including ItsDangerous 2.0, have been released! :tada:

This represents a significant amount of work, and there are quite a few changes. Be sure to carefully read the changelog, and use tools such as pip-compile and Dependabot to pin your dependencies and control your updates.

Changelog

Sourced from itsdangerous's changelog.

Version 2.0.0

Released 2021-05-11

  • Drop support for Python 2 and 3.5.
  • JWS support (JSONWebSignatureSerializer, TimedJSONWebSignatureSerializer) is deprecated. Use a dedicated JWS/JWT library such as authlib instead. :issue:129
  • Importing itsdangerous.json is deprecated. Import Python's json module instead. :pr:152
  • Simplejson is no longer used if it is installed. To use a different library, pass it as Serializer(serializer=...). :issue:146
  • datetime values are timezone-aware with timezone.utc. Code using TimestampSigner.unsign(return_timestamp=True) or BadTimeSignature.date_signed may need to change. :issue:150
  • If a signature has an age less than 0, it will raise SignatureExpired rather than appearing valid. This can happen if the timestamp offset is changed. :issue:126
  • BadTimeSignature.date_signed is always a datetime object rather than an int in some cases. :issue:124
  • Added support for key rotation. A list of keys can be passed as secret_key, oldest to newest. The newest key is used for signing, all keys are tried for unsigning. :pr:141
  • Removed the default SHA-512 fallback signer from default_fallback_signers. :issue:155
  • Add type information for static typing tools. :pr:186

Version 1.1.0

Released 2018-10-26

  • Change default signing algorithm back to SHA-1. :pr:113
  • Added a default SHA-512 fallback for users who used the yanked 1.0.0 release which defaulted to SHA-512. :pr:114
  • Add support for fallback algorithms during deserialization to support changing the default in the future without breaking existing signatures. :pr:113
  • Changed capitalization of packages back to lowercase as the change in capitalization broke some tooling. :pr:113

Version 1.0.0

Released 2018-10-18

YANKED

... (truncated)

Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
datasette 107914493 pull    
893314402 MDExOlB1bGxSZXF1ZXN0NjQ1ODQ5MDI3 1330 Update aiofiles requirement from <0.7,>=0.4 to >=0.4,<0.8 dependabot[bot] 49699333 closed 0     1 2021-05-17T13:07:31Z 2021-05-22T23:53:57Z 2021-05-22T23:53:56Z CONTRIBUTOR simonw/datasette/pulls/1330

Updates the requirements on aiofiles to permit the latest version.

Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
datasette 107914493 pull    
895315478 MDExOlB1bGxSZXF1ZXN0NjQ3NTUyMTQx 1335 Fix small typo abdusco 3243482 closed 0     1 2021-05-19T11:17:04Z 2021-05-22T23:53:34Z 2021-05-22T23:53:34Z CONTRIBUTOR simonw/datasette/pulls/1335
datasette 107914493 pull    
642296989 MDU6SXNzdWU2NDIyOTY5ODk= 856 Consider pagination of canned queries simonw 9599 open 0     3 2020-06-20T03:15:59Z 2021-05-21T14:22:41Z   OWNER  

The new canned_queries() plugin hook from #852 combined with plugins like https://github.com/simonw/datasette-saved-queries could mean that some installations end up with hundreds or even thousands of canned queries. I should consider pagination or some other way of ensuring that this doesn't cause performance problems for Datasette.
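
For a sense of scale, the hook makes it trivial for a plugin to return a very large mapping of queries. A sketch (the query names and SQL here are made up):

from datasette import hookimpl

@hookimpl
def canned_queries(datasette, database):
    # A plugin backed by a saved-queries table could easily return
    # hundreds of entries like these, hence the pagination concern.
    return {
        f"saved_query_{i}": {"sql": "select * from issues limit 10"}
        for i in range(500)
    }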

datasette 107914493 issue    
895686039 MDU6SXNzdWU4OTU2ODYwMzk= 1336 Document turning on WAL for live served SQLite databases simonw 9599 open 0     0 2021-05-19T17:08:58Z 2021-05-19T17:17:48Z   OWNER  

The Datasette docs don't talk about WAL mode yet. WAL allows you to safely serve reads from a database file while it is accepting writes.
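
For reference, WAL mode is a persistent, per-database-file setting, so it only needs to be enabled once. A minimal sketch with Python's sqlite3 module (the filename is hypothetical):

import sqlite3

conn = sqlite3.connect("my_database.db")  # hypothetical file
# The journal mode is stored in the database file itself, so this
# one-time change survives across connections and processes.
conn.execute("PRAGMA journal_mode=WAL")
print(conn.execute("PRAGMA journal_mode").fetchone())  # ('wal',)
conn.close()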

datasette 107914493 issue    
812228314 MDU6SXNzdWU4MTIyMjgzMTQ= 1236 Ability to increase size of the SQL editor window simonw 9599 closed 0     9 2021-02-19T18:09:27Z 2021-05-18T03:28:25Z 2021-02-22T21:05:21Z OWNER  
datasette 107914493 issue    
890073940 MDExOlB1bGxSZXF1ZXN0NjQzMTQ5MzIw 1324 Update jinja2 requirement from <2.12.0,>=2.10.3 to >=2.10.3,<3.1.0 dependabot[bot] 49699333 closed 0     2 2021-05-12T13:08:59Z 2021-05-17T17:19:41Z 2021-05-17T17:19:40Z CONTRIBUTOR simonw/datasette/pulls/1324

Updates the requirements on jinja2 to permit the latest version.

Release notes

Sourced from jinja2's releases.

3.0.0

New major versions of all the core Pallets libraries, including Jinja 3.0, have been released! :tada:

This represents a significant amount of work, and there are quite a few changes. Be sure to carefully read the changelog, and use tools such as pip-compile and Dependabot to pin your dependencies and control your updates.

Changelog

Sourced from jinja2's changelog.

Version 3.0.0

Released 2021-05-11

  • Drop support for Python 2.7 and 3.5.
  • Bump MarkupSafe dependency to >=1.1.
  • Bump Babel optional dependency to >=2.1.
  • Remove code that was marked deprecated.
  • Add type hinting. :pr:1412
  • Use :pep:451 API to load templates with :class:~loaders.PackageLoader. :issue:1168
  • Fix a bug that caused imported macros to not have access to the current template's globals. :issue:688
  • Add ability to ignore trim_blocks using +%}. :issue:1036
  • Fix a bug that caused custom async-only filters to fail with constant input. :issue:1279
  • Fix UndefinedError incorrectly being thrown on an undefined variable instead of Undefined being returned on NativeEnvironment on Python 3.10. :issue:1335
  • Blocks can be marked as required. They must be overridden at some point, but not necessarily by the direct child. :issue:1147
  • Deprecate the autoescape and with extensions, they are built-in to the compiler. :issue:1203
  • The urlize filter recognizes mailto: links and takes extra_schemes (or env.policies["urlize.extra_schemes"]) to recognize other schemes. It tries to balance parentheses within a URL instead of ignoring trailing characters. The parsing in general has been updated to be more efficient and match more cases. URLs without a scheme are linked as https:// instead of http://. :issue:522, 827, 1172, :pr:1195
  • Filters that get attributes, such as map and groupby, can use a false or empty value as a default. :issue:1331
  • Fix a bug that prevented variables set in blocks or loops from being accessed in custom context functions. :issue:768
  • Fix a bug that caused scoped blocks from accessing special loop variables. :issue:1088
  • Update the template globals when calling Environment.get_template(globals=...) even if the template was already loaded. :issue:295
  • Do not raise an error for undefined filters in unexecuted if-statements and conditional expressions. :issue:842
  • Add is filter and is test tests to test if a name is a registered filter or test. This allows checking if a filter is available in a template before using it. Test functions can be decorated with @pass_environment, @pass_eval_context, or @pass_context. :issue:842, :pr:1248
  • Support pgettext and npgettext (message contexts) in i18n extension. :issue:441
  • The |indent filter's width argument can be a string to

... (truncated)

Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
datasette 107914493 pull    
876431852 MDExOlB1bGxSZXF1ZXN0NjMwNTc4NzM1 1318 Bump black from 21.4b2 to 21.5b0 dependabot[bot] 49699333 closed 0     2 2021-05-05T13:07:51Z 2021-05-11T13:12:32Z 2021-05-11T13:12:31Z CONTRIBUTOR simonw/datasette/pulls/1318

Bumps black from 21.4b2 to 21.5b0.

Release notes

Sourced from black's releases.

21.5b0

Black

  • Set --pyi mode if --stdin-filename ends in .pyi (#2169)
  • Stop detecting target version as Python 3.9+ with pre-PEP-614 decorators that are being called but with no arguments (#2182)

Black-Primer

  • Add --no-diff to black-primer to suppress formatting changes (#2187)
Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
datasette 107914493 pull    
842862708 MDU6SXNzdWU4NDI4NjI3MDg= 1280 Ability to run CI against multiple SQLite versions simonw 9599 open 0     2 2021-03-28T23:54:50Z 2021-05-10T19:07:46Z   OWNER  

Issue #1276 happened because I didn't run tests against a SQLite version prior to 3.16.0 (released 2017-01-02).

Glitch is a deployment target and runs SQLite 3.11.0 from 2016-02-15.

If CI ran against that version of SQLite this bug could have been avoided.
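
Until a full multi-version matrix exists, one low-tech safeguard is to have the test suite surface (and optionally enforce) the SQLite version it actually ran against. A sketch, with an illustrative minimum version matching Glitch's SQLite 3.11.0:

import sqlite3

def test_minimum_sqlite_version():
    # Printing the version makes it visible in CI logs; the floor
    # here (3.11.0) is illustrative, matching the Glitch target.
    version = tuple(int(part) for part in sqlite3.sqlite_version.split("."))
    print(f"Tests ran against SQLite {sqlite3.sqlite_version}")
    assert version >= (3, 11, 0)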

datasette 107914493 issue    

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);