1,392 rows where user = 9599 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app
926777310 MDU6SXNzdWU5MjY3NzczMTA= 290 `db.query()` method (renamed `db.execute_returning_dicts()`) simonw 9599 closed 0     6 2021-06-22T03:03:54Z 2021-06-24T23:17:38Z 2021-06-24T22:54:43Z OWNER  

Most of this library deals with lists of Python dictionaries - .insert_all(), .rows, .rows_where(), .search().

The db.execute() method is the only thing that returns a sqlite3 cursor.

There is a clumsily named db.execute_returning_dicts(sql) method but it's not currently mentioned in the documentation.

It needs a better name, and needs to be properly documented.

sqlite-utils 140912432 issue    
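For context on #290: a minimal sketch of how the renamed method reads in practice, assuming the `db.query()` name the issue proposes; the table and rows here are purely illustrative.

import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["dogs"].insert_all([{"name": "Cleo"}, {"name": "Pancakes"}])

# db.query() executes SQL and yields each row as a plain Python dictionary,
# unlike db.execute(), which hands back a sqlite3 cursor
for row in db.query("select name from dogs"):
    print(row)  # {'name': 'Cleo'}, then {'name': 'Pancakes'}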
927766296 MDU6SXNzdWU5Mjc3NjYyOTY= 291 Adopt flake8 simonw 9599 closed 0     2 2021-06-23T01:19:37Z 2021-06-24T17:50:27Z 2021-06-24T17:50:27Z OWNER  
sqlite-utils 140912432 issue    
920884085 MDU6SXNzdWU5MjA4ODQwODU= 1377 Mechanism for plugins to exclude certain paths from CSRF checks simonw 9599 closed 0     3 2021-06-15T00:48:20Z 2021-06-23T22:51:33Z 2021-06-23T22:51:33Z OWNER  

I need this for a plugin I'm building that offers a POST API.

datasette 107914493 issue    
927789811 MDU6SXNzdWU5Mjc3ODk4MTE= 292 Add contributing documentation simonw 9599 open 0     0 2021-06-23T02:13:05Z 2021-06-23T02:13:05Z   OWNER  

Like https://docs.datasette.io/en/latest/contributing.html (but simpler) - should cover how to run black and flake8 and mypy and how to run the tests.

sqlite-utils 140912432 issue    
465815372 MDU6SXNzdWU0NjU4MTUzNzI= 37 Experiment with type hints simonw 9599 open 0     3 2019-07-09T14:30:34Z 2021-06-22T18:17:53Z   OWNER  

Since it's designed to be used in Jupyter or for rapid prototyping in an IDE (and it's still pretty small) sqlite-utils feels like a great candidate for me to finally try out Python type hints.

https://veekaybee.github.io/2019/07/08/python-type-hints/ is good.

It suggests the mypy docs for getting started: https://mypy.readthedocs.io/en/latest/existing_code.html plus this tutorial: https://pymbook.readthedocs.io/en/latest/typehinting.html

sqlite-utils 140912432 issue    
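As a rough illustration of what #37 is proposing (not actual sqlite-utils code - the helper below is invented), adding hints and checking them with mypy might look like this:

from typing import Dict, Iterable, List, Set

def columns_in_rows(rows: Iterable[Dict[str, object]]) -> List[str]:
    # Hypothetical helper: collect every column name seen across a sequence of row dicts
    seen: Set[str] = set()
    for row in rows:
        seen.update(row.keys())
    return sorted(seen)

# Running mypy over the package (per the "existing code" guide linked above)
# would then flag callers that pass the wrong shapes.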
925487946 MDU6SXNzdWU5MjU0ODc5NDY= 286 Add installation instructions simonw 9599 closed 0     1 2021-06-19T23:55:36Z 2021-06-20T18:47:13Z 2021-06-20T18:47:13Z OWNER  

pip install sqlite-utils, pipx install sqlite-utils and brew install sqlite-utils

sqlite-utils 140912432 issue    
925544070 MDU6SXNzdWU5MjU1NDQwNzA= 287 Update rowid examples in the docs simonw 9599 closed 0     0 2021-06-20T08:03:00Z 2021-06-20T18:26:21Z 2021-06-20T18:26:21Z OWNER  

Changed in #284 - a couple of examples need updating on https://github.com/simonw/sqlite-utils/blob/3.10/docs/cli.rst.

sqlite-utils 140912432 issue    
925545468 MDU6SXNzdWU5MjU1NDU0Njg= 288 sqlite-utils memory blah.json --schema simonw 9599 closed 0     0 2021-06-20T08:10:40Z 2021-06-20T18:26:21Z 2021-06-20T18:26:21Z OWNER  

Like --dump but only outputs the schema - useful for understanding what you are about to run queries against.

sqlite-utils 140912432 issue    
925491857 MDU6SXNzdWU5MjU0OTE4NTc= 1383 Improve test coverage for `inspect.py` simonw 9599 open 0     0 2021-06-20T00:22:43Z 2021-06-20T00:22:49Z   OWNER  

https://codecov.io/gh/simonw/datasette/src/main/datasette/inspect.py shows only 36% coverage for that module at the moment.

datasette 107914493 issue    
921878733 MDU6SXNzdWU5MjE4Nzg3MzM= 272 Idea: import CSV to memory, run SQL, export in a single command simonw 9599 closed 0     22 2021-06-15T23:02:48Z 2021-06-19T23:36:48Z 2021-06-18T15:05:03Z OWNER  

I quite often load a CSV file into a SQLite DB, then do stuff with it (like export results back out again as a new CSV) without any intention of keeping the CSV file around afterwards.

What if sqlite-utils could do this for me? Something like this:

sqlite-utils --csv blah.csv --csv baz.csv "select * from blah join baz ..."
sqlite-utils 140912432 issue    
925320167 MDU6SXNzdWU5MjUzMjAxNjc= 284 .transform(types=) turns rowid into a concrete column simonw 9599 closed 0     5 2021-06-19T05:25:27Z 2021-06-19T15:28:30Z 2021-06-19T15:28:30Z OWNER  

Noticed this in the tests for sqlite-utils memory in #282 - is it possible to fix this?

https://github.com/simonw/sqlite-utils/commit/ec5174ed40fa283cb06f25ee0c0136297ec313ae

sqlite-utils 140912432 issue    
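For reference, a sketch of the call that exercises the behaviour described in #284, assuming the `table.transform(types=...)` API; the table and column names are illustrative:

import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["counts"].insert_all([{"name": "a", "n": "1"}])  # no pk= given, so this is a rowid table

# transform() rebuilds the table to apply the new column types; the bug is that the
# rebuild was turning the implicit rowid into a concrete column
db["counts"].transform(types={"n": int})
print(db["counts"].schema)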
925410305 MDU6SXNzdWU5MjU0MTAzMDU= 285 Introspection property for telling if a table is a rowid table simonw 9599 closed 0     7 2021-06-19T14:56:16Z 2021-06-19T15:12:33Z 2021-06-19T15:12:33Z OWNER   sqlite-utils 140912432 issue    
925319214 MDU6SXNzdWU5MjUzMTkyMTQ= 283 memory: Shouldn't detect types for JSON simonw 9599 closed 0     1 2021-06-19T05:17:35Z 2021-06-19T14:52:48Z 2021-06-19T14:52:48Z OWNER  

https://github.com/simonw/sqlite-utils/blob/ec5174ed40fa283cb06f25ee0c0136297ec313ae/sqlite_utils/cli.py#L1244-L1251

This runs against JSON as well as CSV/TSV - which isn't necessary and in fact throws errors if there is any nested data.

sqlite-utils 140912432 issue    
925305186 MDU6SXNzdWU5MjUzMDUxODY= 282 Automatic type detection for CSV data simonw 9599 closed 0     4 2021-06-19T03:33:21Z 2021-06-19T04:42:03Z 2021-06-19T04:38:00Z OWNER  

I've touched on this before in #179 - but now that I've added sqlite-utils memory this is much more important - because unlike with sqlite-utils insert the in-memory command doesn't give you the opportunity to fix any types you imported from CSV, so queries like select * from stdin where age > 3 are never going to work correctly against these temporary in-memory tables.

Teaching sqlite-utils insert to detect types for columns in a CSV file would be a backwards-compatibility breaking change. Teaching sqlite-utils memory that trick would not be, since it hasn't been included in a release yet.

It's a little inconsistent, but I'm going to have sqlite-utils memory default to detecting types while sqlite-utils insert does not. In each case this can be controlled by a new command-line option:

cat file.csv | sqlite-utils memory - --no-detect-types

To opt-in for sqlite-utils insert:

cat file.csv | sqlite-utils insert blah.db blah - --detect-types

I'll have short options for these too: -n for --no-detect-types and -d for --detect-types.

sqlite-utils 140912432 issue    
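The detection logic itself isn't spelled out in the issue; a minimal sketch of the general technique (try int, then float, fall back to text), purely illustrative rather than the sqlite-utils implementation:

def detect_type(values):
    # Guess a SQLite type for a column, given its raw string values from a CSV
    def all_castable(cast):
        for value in values:
            if value == "":
                continue  # treat empty strings as nulls
            try:
                cast(value)
            except ValueError:
                return False
        return True

    if all_castable(int):
        return "INTEGER"
    if all_castable(float):
        return "FLOAT"
    return "TEXT"

print(detect_type(["1", "2", ""]))      # INTEGER
print(detect_type(["1.5", "2"]))        # FLOAT
print(detect_type(["3", "Snohomish"]))  # TEXT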
709577625 MDU6SXNzdWU3MDk1Nzc2MjU= 179 sqlite-utils transform/insert --detect-types simonw 9599 closed 0     4 2020-09-26T17:28:55Z 2021-06-19T03:36:16Z 2021-06-19T03:36:05Z OWNER  

Idea from https://github.com/simonw/datasette-edit-tables/issues/13 - provide Python utility methods and accompanying CLI options for detecting the likely types of TEXT columns.

So if you have a text column that actually contains exclusively integer string values, it can let you know and let you run transform against it.

sqlite-utils 140912432 issue    
924990677 MDU6SXNzdWU5MjQ5OTA2Nzc= 279 sqlite-utils memory should handle TSV and JSON in addition to CSV simonw 9599 closed 0     7 2021-06-18T15:02:54Z 2021-06-19T03:11:59Z 2021-06-19T03:11:59Z OWNER  
  • Use sniff to detect CSV or TSV (if :tsv or :csv was not specified) and delimiters

Follow-on from #272

sqlite-utils 140912432 issue    
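The "sniff" mentioned in that task is presumably Python's csv.Sniffer; a short sketch of how delimiter detection can work on the first chunk of input (the file name is hypothetical):

import csv

sample = open("data.txt", newline="").read(2048)

# Restricting the candidate delimiters to comma and tab distinguishes CSV from TSV
dialect = csv.Sniffer().sniff(sample, delimiters=",\t")
print("TSV" if dialect.delimiter == "\t" else "CSV")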
924992318 MDU6SXNzdWU5MjQ5OTIzMTg= 281 Mechanism for explicitly stating CSV or JSON or TSV for sqlite-utils memory simonw 9599 closed 0     1 2021-06-18T15:04:53Z 2021-06-19T03:11:59Z 2021-06-19T03:11:59Z OWNER  

Follows #272

sqlite-utils 140912432 issue    
924991194 MDU6SXNzdWU5MjQ5OTExOTQ= 280 Add --encoding option to sqlite-utils memory simonw 9599 closed 0     0 2021-06-18T15:03:32Z 2021-06-18T15:29:46Z 2021-06-18T15:29:46Z OWNER  

Follow-on from #272 - this will work like --encoding on sqlite-utils insert and will affect all CSV files processed by sqlite-utils memory.

sqlite-utils 140912432 issue    
922099793 MDExOlB1bGxSZXF1ZXN0NjcxMDE0NzUx 273 sqlite-utils memory command for directly querying CSV/JSON data simonw 9599 closed 0     8 2021-06-16T05:04:58Z 2021-06-18T15:01:17Z 2021-06-18T15:00:52Z OWNER simonw/sqlite-utils/pulls/273

Refs #272. Initial implementation only does CSV data, still needs:

  • Implement --save
  • Add --dump to the documentation
  • Add --attach example to the documentation
  • Replace :memory: in documentation
sqlite-utils 140912432 pull    
268176505 MDU6SXNzdWUyNjgxNzY1MDU= 34 Support CSV export with a .csv extension simonw 9599 closed 0     1 2017-10-24T20:34:43Z 2021-06-17T18:14:48Z 2018-05-28T20:45:34Z OWNER  

Maybe do this using streaming with multiple pagination SQL queries so we can support arbitrarily large exports.

How would this work against a view which doesn’t have an obvious efficient pagination mechanism? Maybe limit views to up to 1000 exported records?

Relates to #5

datasette 107914493 issue    
323681589 MDU6SXNzdWUzMjM2ODE1ODk= 266 Export to CSV simonw 9599 closed 0     27 2018-05-16T15:50:24Z 2021-06-17T18:14:24Z 2018-06-18T06:05:25Z OWNER  

Datasette needs to be able to export data to CSV.

datasette 107914493 issue    
333000163 MDU6SXNzdWUzMzMwMDAxNjM= 312 HTML, CSV and JSON views should support ?_col=&_col= simonw 9599 closed 0     1 2018-06-16T16:53:35Z 2021-06-17T18:14:24Z 2018-06-16T17:00:12Z OWNER  

To support whitelisting columns to display.

datasette 107914493 issue    
335141434 MDU6SXNzdWUzMzUxNDE0MzQ= 326 CSV should respect --cors and return cors headers simonw 9599 closed 0     1 2018-06-24T00:44:07Z 2021-06-17T18:14:24Z 2018-06-24T00:59:45Z OWNER  

Otherwise tools like Vega can't load data via CSV.

datasette 107914493 issue    
725184645 MDU6SXNzdWU3MjUxODQ2NDU= 1034 Better way of representing binary data in .csv output simonw 9599 closed 0   0.51 6026070 19 2020-10-20T04:28:58Z 2021-06-17T18:13:21Z 2020-10-29T22:47:46Z OWNER  

I just noticed this: https://latest.datasette.io/fixtures/binary_data.csv

rowid,data
1,b'\x15\x1c\x02\xc7\xad\x05\xfe'
2,b'\x15\x1c\x03\xc7\xad\x05\xfe'

There's no good way to represent binary data in a CSV file, but this seems like one of the more-bad options.

datasette 107914493 issue    
732674148 MDU6SXNzdWU3MzI2NzQxNDg= 1062 Refactor .csv to be an output renderer - and teach register_output_renderer to stream all rows simonw 9599 open 0   Datasette 1.0 3268330 2 2020-10-29T21:25:02Z 2021-06-17T18:13:21Z   OWNER  

This can drive the upgrade of the register_output_renderer hook to be able to handle streaming all rows in a large query.

datasette 107914493 issue    
503190241 MDU6SXNzdWU1MDMxOTAyNDE= 584 Codec error in some CSV exports simonw 9599 closed 0     2 2019-10-07T01:15:34Z 2021-06-17T18:13:20Z 2019-10-18T05:23:16Z OWNER  

Got this exploring my Swarm checkins:

/swarm/stickers.csv?stickerType=messageOnly&_size=max

datasette 107914493 issue    
516748849 MDU6SXNzdWU1MTY3NDg4NDk= 612 CSV export is broken for tables with null foreign keys simonw 9599 closed 0     2 2019-11-02T22:52:47Z 2021-06-17T18:13:20Z 2019-11-02T23:12:53Z OWNER  

Following on from #406 - this CSV export appears to be broken:

https://14da705.datasette.io/fixtures/foreign_key_references.csv?_labels=on&_size=max

pk,foreign_key_with_label,foreign_key_with_label_label,foreign_key_with_no_label,foreign_key_with_no_label_label
1,1,hello,1,1
2,,

That second row should have 5 values, but it only has 4.

datasette 107914493 issue    
910088936 MDU6SXNzdWU5MTAwODg5MzY= 1355 datasette --get should efficiently handle streaming CSV simonw 9599 open 0     1 2021-06-03T04:40:40Z 2021-06-17T18:12:33Z   OWNER  

It would be great if you could use datasette --get to run queries that return streaming CSV data without running out of RAM.

Current implementation looks like it loads the entire result into memory first: https://github.com/simonw/datasette/blob/f78ebdc04537a6102316d6dbbf6c887565806078/datasette/cli.py#L546-L552

datasette 107914493 issue    
775666296 MDU6SXNzdWU3NzU2NjYyOTY= 1160 "datasette insert" command and plugin hook simonw 9599 open 0     23 2020-12-29T02:37:03Z 2021-06-17T18:12:32Z   OWNER  

Tools for loading data into Datasette currently mostly exist as separate utilities - yaml-to-sqlite and csvs-to-sqlite and suchlike.

Bringing these into Datasette could have some interesting properties:

  • A datasette insert command could be extended with plugins to handle more formats
  • Any format that can be inserted on the command-line could also be inserted using a web UI or web API - which would benefit from new format plugin hooks
  • If Datasette ever grows beyond SQLite (see #670) a built-in import mechanism could work for those other databases as well - without me needing to write yaml-to-postgresql and suchlike
datasette 107914493 issue    
776128269 MDU6SXNzdWU3NzYxMjgyNjk= 1162 First working version of "datasette insert data.db file.csv" simonw 9599 open 0     0 2020-12-29T23:20:11Z 2021-06-17T18:12:32Z   OWNER  

Refs #1160

datasette 107914493 issue    
776128565 MDU6SXNzdWU3NzYxMjg1NjU= 1163 "datasette insert data.db url-to-csv" simonw 9599 open 0     1 2020-12-29T23:21:21Z 2021-06-17T18:12:32Z   OWNER  

Refs #1160 - get filesystem imports working first for #1162, then add import-from-URL.

datasette 107914493 issue    
906385991 MDU6SXNzdWU5MDYzODU5OTE= 1349 CSV ?_stream=on redundantly calculates facets for every page simonw 9599 closed 0     9 2021-05-29T06:11:23Z 2021-06-17T18:12:32Z 2021-06-01T15:52:53Z OWNER  

I'm trying to figure out why a full CSV export from https://covid-19.datasettes.com/covid/ny_times_us_counties runs unbearably slowly.

It's because the streaming endpoint works by scrolling through every page, and it turns out every page calculates facets and suggested facets!

datasette 107914493 issue    
906993731 MDU6SXNzdWU5MDY5OTM3MzE= 1351 Get `?_trace=1` working with CSV and streaming CSVs simonw 9599 closed 0     1 2021-05-31T03:02:15Z 2021-06-17T18:12:32Z 2021-06-01T15:50:09Z OWNER  

I think it's worth getting ?_trace=1 to work with streaming CSV - this would have helped me spot this issue a long time ago.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1349#issuecomment-851133125_

datasette 107914493 issue    
736365306 MDU6SXNzdWU3MzYzNjUzMDY= 1083 Advanced CSV export for arbitrary queries simonw 9599 open 0     2 2020-11-04T19:23:05Z 2021-06-17T18:12:31Z   OWNER  

There's no link to download the CSV file - the table page has that as an advanced export option, but this is missing from the query page.

datasette 107914493 issue    
743359646 MDU6SXNzdWU3NDMzNTk2NDY= 1096 TSV should be a default export option simonw 9599 open 0     1 2020-11-15T22:24:02Z 2021-06-17T18:12:31Z   OWNER  

Refs #1095

datasette 107914493 issue    
759695780 MDU6SXNzdWU3NTk2OTU3ODA= 1133 Option to omit header row in CSV export simonw 9599 closed 0     2 2020-12-08T18:54:46Z 2021-06-17T18:12:31Z 2020-12-10T23:28:51Z OWNER  

?_header=off - for symmetry with existing option ?_nl=on.

datasette 107914493 issue    
763361458 MDU6SXNzdWU3NjMzNjE0NTg= 1142 "Stream all rows" is not at all obvious simonw 9599 open 0     9 2020-12-12T06:24:57Z 2021-06-17T18:12:31Z   OWNER  

Got a question about how to download all rows - the current option isn't at all clear.

https://user-images.githubusercontent.com/9599/101977057-ac660b00-3bff-11eb-88f4-c93ffd03d3e0.png

datasette 107914493 issue    
732685643 MDU6SXNzdWU3MzI2ODU2NDM= 1063 .csv should link to .blob downloads simonw 9599 closed 0   0.51 6026070 3 2020-10-29T21:45:58Z 2021-06-17T18:12:30Z 2020-10-29T22:47:45Z OWNER  
  • Update .csv output to link to these things (and get that xfail test to pass)
  • ~~Add a .csv?_blob_base64=1 argument that causes them to be output in base64 in the CSV~~

Moving the CSV work to a separate ticket.
_Originally posted by @simonw in https://github.com/simonw/datasette/pull/1061#issuecomment-719042601_

datasette 107914493 issue    
924203783 MDU6SXNzdWU5MjQyMDM3ODM= 1379 Idea: ?_end=1 option for streaming CSV responses simonw 9599 open 0     0 2021-06-17T18:11:21Z 2021-06-17T18:11:30Z   OWNER  

As discussed in this thread: https://twitter.com/simonw/status/1405554676993433605 - one of the disadvantages of Datasette's streaming CSV feature is that it's hard to tell if you got the whole file or if the connection ended early - or if an error occurred.

Idea: offer an optional ?_end=1 parameter which, if enabled, adds a single row to the end of the CSV file that looks like this:

END,,,,,,,,,

For however many columns the CSV file usually has.

datasette 107914493 issue    
922955697 MDU6SXNzdWU5MjI5NTU2OTc= 275 Enable code coverage simonw 9599 closed 0     1 2021-06-16T18:33:49Z 2021-06-17T00:12:12Z 2021-06-17T00:12:12Z OWNER  

https://app.codecov.io/gh/simonw/sqlite-utils

Same mechanism as Datasette. Need to copy across the token from that page and add an equivalent of this workflow: https://github.com/simonw/datasette/blob/main/.github/workflows/test-coverage.yml

sqlite-utils 140912432 issue    
922832113 MDU6SXNzdWU5MjI4MzIxMTM= 274 sqlite-utils dump my.db command simonw 9599 closed 0     0 2021-06-16T16:30:14Z 2021-06-16T23:51:54Z 2021-06-16T23:51:54Z OWNER  

Inspired by the --dump mechanism I added to sqlite-utils memory here: https://github.com/simonw/sqlite-utils/issues/272#issuecomment-862018937

Can use .iterdump() to implement this: https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.iterdump

Maybe instead (or as-well-as) offer --dump which dumps out the SQL from that.

sqlite-utils 140912432 issue    
675753042 MDU6SXNzdWU2NzU3NTMwNDI= 131 sqlite-utils insert: options for column types simonw 9599 open 0     4 2020-08-09T18:59:11Z 2021-06-16T15:52:33Z   OWNER  

The insert command currently results in string types for every column - at least when used against CSV or TSV inputs.

It would be useful if you could do the following:

  • automatically detect the column types based on e.g. the first 1000 records
  • explicitly state the rule for specific columns

--detect-types could work for the former - or it could do that by default and allow opt-out using --no-detect-types

For specific columns maybe this:

sqlite-utils insert db.db images images.tsv \
  --tsv \
  -c id int \
  -c score float
sqlite-utils 140912432 issue    
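On the Python API side there is a related escape hatch - a brief sketch assuming the `columns=` argument to `insert_all()` for overriding detected column types; if that argument isn't available in your version, treat this as illustrative only (names taken from the CLI example above):

import sqlite_utils

db = sqlite_utils.Database("db.db")

# columns= pins the column types up front instead of relying on the values in the first batch
db["images"].insert_all(
    [{"id": 1, "score": 0.5}],
    columns={"id": int, "score": float},
)
print(db["images"].columns_dict)  # {'id': <class 'int'>, 'score': <class 'float'>}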
913135723 MDU6SXNzdWU5MTMxMzU3MjM= 266 Add some types, enforce with mypy simonw 9599 open 0     3 2021-06-07T06:05:56Z 2021-06-15T01:33:13Z   OWNER  

A good starting point would be adding type information to the members of these named tuples and the introspection methods that return them:

https://github.com/simonw/sqlite-utils/blob/9dff7a38831d471b1dff16d40d89eb5c3b4e84d6/sqlite_utils/db.py#L51-L75

sqlite-utils 140912432 issue    
919733213 MDU6SXNzdWU5MTk3MzMyMTM= 33 Searching for whitespace throws an error simonw 9599 closed 0     0 2021-06-13T06:57:57Z 2021-06-13T14:36:39Z 2021-06-13T14:36:39Z MEMBER  

https://datasette.io/-/beta?q=+ returns a 500

fts5: syntax error near ""

dogsheep-beta 197431109 issue    
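A small reproduction of the underlying SQLite behaviour, assuming a Python build whose sqlite3 module includes FTS5 - this is a sketch, not the dogsheep-beta code:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
db.execute("INSERT INTO docs (body) VALUES ('hello world')")

try:
    # a query string consisting only of whitespace is not valid FTS5 query syntax
    db.execute("SELECT * FROM docs WHERE docs MATCH ?", [" "]).fetchall()
except sqlite3.OperationalError as ex:
    print(ex)  # fts5: syntax error near ""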
919702451 MDU6SXNzdWU5MTk3MDI0NTE= 271 table.upsert_all() fails if input has a single column that should be a primary key simonw 9599 closed 0     1 2021-06-13T02:50:27Z 2021-06-13T02:57:29Z 2021-06-13T02:57:29Z OWNER  

This works:

>>> db['foo'].insert_all([{"name": "hello"}], pk="name")
<Table foo (name)>

But this fails:

>>> db['foo3'].upsert_all([{"name": "hello"}], pk="name")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/simon/.local/share/virtualenvs/datasette.io-TK86ygSO/lib/python3.9/site-packages/sqlite_utils/db.py", line 1837, in upsert_all
    return self.insert_all(
  File "/Users/simon/.local/share/virtualenvs/datasette.io-TK86ygSO/lib/python3.9/site-packages/sqlite_utils/db.py", line 1778, in insert_all
    self.insert_chunk(
  File "/Users/simon/.local/share/virtualenvs/datasette.io-TK86ygSO/lib/python3.9/site-packages/sqlite_utils/db.py", line 1588, in insert_chunk
    result = self.db.execute(query, params)
  File "/Users/simon/.local/share/virtualenvs/datasette.io-TK86ygSO/lib/python3.9/site-packages/sqlite_utils/db.py", line 213, in execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: near "WHERE": syntax error

With the debugger:

>>> import pdb; pdb.pm()
> /Users/simon/.local/share/virtualenvs/datasette.io-TK86ygSO/lib/python3.9/site-packages/sqlite_utils/db.py(213)execute()
-> return self.conn.execute(sql, parameters)
(Pdb) print(sql, parameters)
UPDATE [foo3] SET  WHERE [name] = ? ['hello']
sqlite-utils 140912432 issue    
919181559 MDU6SXNzdWU5MTkxODE1NTk= 268 db.schema property and sqlite-utils schema command simonw 9599 closed 0     4 2021-06-11T20:25:47Z 2021-06-11T20:51:56Z 2021-06-11T20:51:56Z OWNER  

table.schema returns the schema for a table. db.schema should return the schema for the whole database.

Can do this using select sql from sqlite_master where sql is not null:

https://latest.datasette.io/fixtures?sql=select+sql+from+sqlite_master+where+sql+is+not+null

sqlite-utils 140912432 issue    
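A brief sketch of the two levels of introspection this describes, assuming the `db.schema` property named in the title; the database file and table name are illustrative:

import sqlite_utils

db = sqlite_utils.Database("fixtures.db")

print(db["facetable"].schema)  # CREATE TABLE statement for a single table (already exists)
print(db.schema)               # the whole database: every non-null sql row from sqlite_master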
915455228 MDU6SXNzdWU5MTU0NTUyMjg= 1371 Menu plugin hooks should include the request simonw 9599 closed 0     1 2021-06-08T20:23:35Z 2021-06-10T04:46:01Z 2021-06-10T04:46:01Z OWNER  

https://docs.datasette.io/en/stable/plugin_hooks.html#menu-links-datasette-actor

  • menu_links(datasette, actor)
  • table_actions(datasette, actor, database, table)
  • database_actions(datasette, actor, database)

All three of these should optionally also accept the request object. This would allow them to take into account additional cookies, Authorization headers or the current request URL (including the domain/subdomain) - or even access request.scope for extra context that might have been passed down from ASGI middleware.

datasette 107914493 issue    
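A hedged sketch of what a plugin could do once the request is available to the hook - the hook name comes from the docs linked above, while the path check and labels are invented for illustration:

from datasette import hookimpl

@hookimpl
def menu_links(datasette, actor, request):
    # With the request passed in, menu items can depend on the current URL, cookies, etc.
    if actor and request and request.path.startswith("/internal"):
        return [
            {"href": datasette.urls.path("/-/internal-tools"), "label": "Internal tools"},
        ]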
915488244 MDU6SXNzdWU5MTU0ODgyNDQ= 1372 Add section to "writing plugins" about security, e.g. avoiding XSS simonw 9599 open 0     0 2021-06-08T20:49:33Z 2021-06-08T20:49:46Z   OWNER  

https://docs.datasette.io/en/stable/writing_plugins.html should have tips on writing secure plugins.

datasette 107914493 issue    
913900374 MDU6SXNzdWU5MTM5MDAzNzQ= 1369 Don't show foreign key IDs twice if no label simonw 9599 open 0     1 2021-06-07T19:47:02Z 2021-06-07T19:47:24Z   OWNER  

datasette 107914493 issue    
913823889 MDU6SXNzdWU5MTM4MjM4ODk= 1367 Navigation menu display bug simonw 9599 closed 0     1 2021-06-07T18:18:08Z 2021-06-07T18:24:19Z 2021-06-07T18:24:19Z OWNER   datasette 107914493 issue    
913809802 MDU6SXNzdWU5MTM4MDk4MDI= 1366 Get rid of this `restore_working_directory` hack entirely simonw 9599 open 0     2 2021-06-07T18:01:21Z 2021-06-07T18:03:03Z   OWNER  

That seems to have fixed it. I'd love to get rid of this restore_working_directory hack entirely.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1361#issuecomment-855308811_

datasette 107914493 issue    
912959264 MDU6SXNzdWU5MTI5NTkyNjQ= 1364 Don't truncate columns on the list of databases simonw 9599 closed 0     0 2021-06-06T22:01:56Z 2021-06-06T22:07:50Z 2021-06-06T22:07:50Z OWNER  

https://covid-19.datasettes.com/covid currently truncates at 9 database columns:

https://user-images.githubusercontent.com/9599/120941536-11467d80-c6d8-11eb-970a-ce469623f92c.png

Django SQL Dashboard showed me that this is a bad idea - having the full list of columns is actually really useful documentation for crafting custom SQL queries.

datasette 107914493 issue    
912864936 MDU6SXNzdWU5MTI4NjQ5MzY= 1362 Consider using CSP to protect against future XSS simonw 9599 open 0     12 2021-06-06T15:32:20Z 2021-06-06T17:07:49Z   OWNER  

The XSS in #1360 would have been a lot less damaging if Datasette used CSP to protect against such vulnerabilities: https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

datasette 107914493 issue    
325958506 MDU6SXNzdWUzMjU5NTg1MDY= 283 Support cross-database joins simonw 9599 closed 0     26 2018-05-24T04:18:39Z 2021-06-06T09:40:18Z 2021-02-18T22:16:46Z OWNER  

SQLite has the ability to attach multiple databases to a single connection and then run joins across multiple databases.

Since Datasette supports more than one database, this would make a pretty neat feature.

datasette 107914493 issue    
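The SQLite feature being referred to is ATTACH DATABASE; a minimal standalone illustration using Python's sqlite3 module (file and table names are hypothetical):

import sqlite3

conn = sqlite3.connect("first.db")
conn.execute("ATTACH DATABASE 'second.db' AS second")

# Tables from both files are now visible on one connection and can be joined directly
rows = conn.execute(
    "SELECT first_table.id, second.other_table.value "
    "FROM first_table JOIN second.other_table ON first_table.id = second.other_table.id"
).fetchall()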
912485040 MDU6SXNzdWU5MTI0ODUwNDA= 1361 Intermittent CI failure: restore_working_directory FileNotFoundError simonw 9599 closed 0     4 2021-06-05T22:48:13Z 2021-06-05T23:16:24Z 2021-06-05T23:16:24Z OWNER  

e.g. in https://github.com/simonw/datasette/runs/2754772233 - this is an intermittent error:

__________ ERROR at setup of test_hook_register_routes_render_message __________
[gw0] linux -- Python 3.8.10 /opt/hostedtoolcache/Python/3.8.10/x64/bin/python

tmpdir = local('/tmp/pytest-of-runner/pytest-0/popen-gw0/test_hook_register_routes_rend0')
request = <SubRequest 'restore_working_directory' for <Function test_hook_register_routes_render_message>>

    @pytest.fixture
    def restore_working_directory(tmpdir, request):
>       previous_cwd = os.getcwd()
E       FileNotFoundError: [Errno 2] No such file or directory
datasette 107914493 issue    
912464443 MDU6SXNzdWU5MTI0NjQ0NDM= 1360 Security flaw, to be fixed in 0.56.1 and 0.57 simonw 9599 closed 0     2 2021-06-05T21:53:51Z 2021-06-05T22:23:23Z 2021-06-05T22:22:06Z OWNER  

See security advisory here for details: https://github.com/simonw/datasette/security/advisories/GHSA-xw7c-jx9m-xh5g - the ?_trace=1 debugging option was not correctly escaping its JSON output, resulting in a reflected cross-site scripting vulnerability.

datasette 107914493 issue    
912418094 MDU6SXNzdWU5MTI0MTgwOTQ= 1358 Release Datasette 0.57 simonw 9599 closed 0     3 2021-06-05T19:56:13Z 2021-06-05T22:20:07Z 2021-06-05T22:20:07Z OWNER   datasette 107914493 issue    
912419349 MDU6SXNzdWU5MTI0MTkzNDk= 1359 `?_trace=1` should only be available with a new `trace_debug` setting simonw 9599 closed 0     0 2021-06-05T19:59:27Z 2021-06-05T20:18:46Z 2021-06-05T20:18:46Z OWNER  

Just like template debug mode is controlled by this off-by-default setting: https://github.com/simonw/datasette/blob/368aa5f1b16ca35f82d90ff747023b9a2bfa27c1/datasette/app.py#L160-L164

datasette 107914493 issue    
910092577 MDU6SXNzdWU5MTAwOTI1Nzc= 1356 Research: syntactic sugar for using --get with SQL queries, maybe "datasette query" simonw 9599 open 0     9 2021-06-03T04:49:42Z 2021-06-05T19:06:06Z   OWNER  

Inspired by https://github.com/simonw/sqlite-utils/issues/264 - in particular this example:

datasette covid.db --get='/covid.yaml?sql=select * from ny_times_us_counties limit 1' 
- date: '2020-01-21'
  county: Snohomish
  state: Washington
  fips: 53061
  cases: 1
  deaths: 0

Having to construct that URL - including potentially URL escaping the SQL query - isn't a great developer experience.

Imagine if you could do this instead:

datasette covid.db --query "select * from ny_times_us_counties limit 1" --format yaml
datasette 107914493 issue    
912394511 MDExOlB1bGxSZXF1ZXN0NjYyNTU3MjQw 1357 Make custom pages compatible with base_url setting simonw 9599 closed 0     1 2021-06-05T18:54:39Z 2021-06-05T18:59:54Z 2021-06-05T18:59:54Z OWNER simonw/datasette/pulls/1357

Refs #1238.

datasette 107914493 pull    
906356331 MDU6SXNzdWU5MDYzNTYzMzE= 263 `sqlite-utils indexes` command simonw 9599 closed 0     6 2021-05-29T04:52:34Z 2021-06-03T04:34:38Z 2021-06-03T04:34:38Z OWNER  

While working on #260 I realized there's no command to show indexes in a database, even though there is one for showing tables and one for triggers.

I should implement #261 first.

sqlite-utils 140912432 issue    
906345899 MDU6SXNzdWU5MDYzNDU4OTk= 261 `table.xindexes` using `PRAGMA index_xinfo(table)` simonw 9599 closed 0     5 2021-05-29T04:23:48Z 2021-06-03T03:54:14Z 2021-06-03T03:51:32Z OWNER  

PRAGMA index_xinfo(table) DOES return that data:
(Pdb) [c[0] for c in fresh_db.execute("PRAGMA index_xinfo('idx_dogs_age_name')").description]
['seqno', 'cid', 'name', 'desc', 'coll', 'key']
(Pdb) fresh_db.execute("PRAGMA index_xinfo('idx_dogs_age_name')").fetchall()
[(0, 2, 'age', 1, 'BINARY', 1), (1, 0, 'name', 0, 'BINARY', 1), (2, -1, None, 0, 'BINARY', 0)]
See https://sqlite.org/pragma.html#pragma_index_xinfo

Example output: https://covid-19.datasettes.com/covid?sql=select+*+from+pragma_index_xinfo%28%27idx_ny_times_us_counties_date%27%29
_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/260#issuecomment-850766552_

sqlite-utils 140912432 issue    
520655983 MDU6SXNzdWU1MjA2NTU5ODM= 619 "Invalid SQL" page should let you edit the SQL simonw 9599 closed 0   Datasette Next 6158551 14 2019-11-10T20:54:12Z 2021-06-02T04:15:54Z 2021-06-02T04:15:54Z OWNER   datasette 107914493 issue    
904537568 MDExOlB1bGxSZXF1ZXN0NjU1Njg0NDc3 1346 Re-display user's query with an error message if an error occurs simonw 9599 closed 0     3 2021-05-28T02:04:20Z 2021-06-02T03:46:21Z 2021-06-02T03:46:21Z OWNER simonw/datasette/pulls/1346

Refs #619

datasette 107914493 pull    
828811618 MDU6SXNzdWU4Mjg4MTE2MTg= 1257 Table names containing single quotes break things simonw 9599 closed 0     2 2021-03-11T06:29:38Z 2021-06-02T03:28:29Z 2021-06-02T03:28:29Z OWNER  

e.g. I found a table called Yesterday's ELRs by County

It threw an error inside the detect_fts() function attempting to run this SQL query:

        select name from sqlite_master
            where rootpage = 0
            and (
                sql like '%VIRTUAL TABLE%USING FTS%content="Yesterday's ELRs by County"%'
                or sql like '%VIRTUAL TABLE%USING FTS%content=[Yesterday's ELRs by County]%'
                or (
                    tbl_name = "Yesterday's ELRs by County"
                    and sql like '%VIRTUAL TABLE%USING FTS%'
                )
            )

Here's the code at fault: https://github.com/simonw/datasette/blob/640ac7071b73111ba4423812cd683756e0e1936b/datasette/utils/__init__.py#L534-L548

datasette 107914493 issue    
800669347 MDU6SXNzdWU4MDA2NjkzNDc= 1216 /-/databases should reflect connection order, not alphabetical order simonw 9599 closed 0     1 2021-02-03T20:20:23Z 2021-06-02T03:10:19Z 2021-06-02T03:10:19Z OWNER  

The order in which databases are attached to Datasette matters - it affects the homepage, and it's beginning to influence how certain plugins work (see https://github.com/simonw/datasette-tiles/issues/8).

Two years ago in cccea85be6aaaeadb31f3b588ec7f732628815f5 I made /-/databases return things in alphabetical order, to fix a test failure in Python 3.5.

Python 3.5 is no longer supported, so this is no longer necessary - and this behaviour should now be treated as a bug.

datasette 107914493 issue    
323671577 MDU6SXNzdWUzMjM2NzE1Nzc= 263 Facets should not execute for ?shape=array|object simonw 9599 closed 0     3 2018-05-16T15:26:13Z 2021-06-02T02:54:34Z 2021-06-02T02:54:34Z OWNER  

Split off from #255 - there's no point executing the facet SQL for the ?_shape=array and ?_shape=object API responses.

datasette 107914493 issue    
906977719 MDU6SXNzdWU5MDY5Nzc3MTk= 1350 ?_nofacets=1 query string argument for disabling facets and suggested facets simonw 9599 closed 0     2 2021-05-31T02:22:29Z 2021-06-01T16:19:38Z 2021-05-31T02:39:18Z OWNER  

This is needed as an internal option for #1349. datasette-graphql can benefit from this too - maybe can even use it so that if you pass ?_shape=array it gets automatically added, fixing #263.

datasette 107914493 issue    
908446997 MDU6SXNzdWU5MDg0NDY5OTc= 1353 ?_nocount=1 for opting out of table counts simonw 9599 closed 0     2 2021-06-01T15:53:27Z 2021-06-01T16:18:54Z 2021-06-01T16:17:04Z OWNER  

Running a trace against a CSV streaming export with the new _trace=1 feature from #1351 shows that the following code is executing a select count(*) from table for every page of results returned: https://github.com/simonw/datasette/blob/d1d06ace49606da790a765689b4fbffa4c6deecb/datasette/views/table.py#L700-L705

This is inefficient - a new ?_nocount=1 option would let us disable this count in the same way as #1349: https://github.com/simonw/datasette/blob/d1d06ace49606da790a765689b4fbffa4c6deecb/datasette/views/base.py#L264-L276

datasette 107914493 issue    
908465747 MDU6SXNzdWU5MDg0NjU3NDc= 1354 Update help in tests for latest Click simonw 9599 closed 0     1 2021-06-01T16:14:31Z 2021-06-01T16:17:04Z 2021-06-01T16:17:04Z OWNER  

Now that Uvicorn 0.14 is out with an unpinned Click dependency - https://github.com/encode/uvicorn/pull/1033 - our test suite runs against Click 8.0 - which subtly changes the output of --help causing test failures: https://github.com/simonw/datasette/runs/2720383031?check_suite_focus=true

    def test_help_includes(name, filename):
        expected = (docs_path / filename).read_text()
        runner = CliRunner()
        result = runner.invoke(cli, name.split() + ["--help"], terminal_width=88)
        actual = f"$ datasette {name} --help\n\n{result.output}"
        # actual has "Usage: cli package [OPTIONS] FILES"
        # because it doesn't know that cli will be aliased to datasette
        expected = expected.replace("Usage: datasette", "Usage: cli")
>       assert expected == actual
E       AssertionError: assert '$ datasette ...e and exit.\n' == '$ datasette ...e and exit.\n'
E         Skipping 848 identical leading characters in diff, use -v to show
E           nt_id xxx
E         + 
E             --version-note TEXT             Additional note to show on /-/versions
E             --secret TEXT                   Secret used for signing secure values, such as signed
E                                             cookies
E         + 
E             --title TEXT                    Title for metadata
datasette 107914493 issue    
904071938 MDU6SXNzdWU5MDQwNzE5Mzg= 1345 ?_nocol= does not interact well with default facets simonw 9599 closed 0     7 2021-05-27T18:39:55Z 2021-05-31T02:40:44Z 2021-05-31T02:31:21Z OWNER  

Clicking "Hide this column" on fips on https://covid-19.datasettes.com/covid/ny_times_us_counties shows this error:

https://covid-19.datasettes.com/covid/ny_times_us_counties?_nocol=fips

Invalid SQL

no such column: fips

The reason is that https://covid-19.datasettes.com/-/metadata sets up the following:

  "ny_times_us_counties": {
      "sort_desc": "date",
      "facets": [
          "state",
          "county",
          "fips"
      ],

It's setting fips as a default facet, which breaks if you attempt to remove the column using ?_nocol.

datasette 107914493 issue    
838148087 MDU6SXNzdWU4MzgxNDgwODc= 250 Handle byte order marks (BOMs) in CSV files simonw 9599 closed 0     3 2021-03-22T22:13:18Z 2021-05-29T05:34:21Z 2021-05-29T05:34:21Z OWNER  

I often find sqlite-utils insert ... --csv creates a first column with a weird character at the start of it - which it turns out is the UTF-8 BOM. Fix that.

sqlite-utils 140912432 issue    
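For reference, the standard-library way to make the BOM disappear is to decode with utf-8-sig; a minimal sketch rather than the actual sqlite-utils fix (the file name is hypothetical):

import csv
import io

raw = open("data.csv", "rb").read()

# "utf-8-sig" strips a leading UTF-8 byte order mark, so the first column
# name no longer comes through prefixed with "\ufeff"
reader = csv.DictReader(io.StringIO(raw.decode("utf-8-sig")))
for row in reader:
    print(row)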
906355849 MDExOlB1bGxSZXF1ZXN0NjU3MzczNzI2 262 Ability to add descending order indexes simonw 9599 closed 0     0 2021-05-29T04:51:04Z 2021-05-29T05:01:42Z 2021-05-29T05:01:39Z OWNER simonw/sqlite-utils/pulls/262

Refs #260

sqlite-utils 140912432 pull    
906330187 MDU6SXNzdWU5MDYzMzAxODc= 260 Support creating descending order indexes simonw 9599 closed 0     12 2021-05-29T03:42:59Z 2021-05-29T05:01:39Z 2021-05-29T05:01:39Z OWNER  

SQLite lets you create indexes in reverse order, which can have a surprisingly big impact on performance, see https://github.com/simonw/covid-19-datasette/issues/27

I tried doing this using sqlite-utils like so, but it didn't work:

db["ny_times_us_counties"].create_index(["date desc"])
sqlite-utils 140912432 issue    
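The SQL itself is simple - SQLite accepts an explicit per-column sort order in CREATE INDEX; a sketch using plain sqlite3, since the sqlite-utils API shape was still being decided in this issue (names taken from the example above):

import sqlite3

conn = sqlite3.connect("covid.db")
conn.execute(
    "CREATE INDEX idx_ny_times_us_counties_date "
    "ON ny_times_us_counties (date DESC)"
)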
858501079 MDU6SXNzdWU4NTg1MDEwNzk= 255 transform --help should tell you the available types simonw 9599 closed 0     0 2021-04-15T05:24:48Z 2021-05-29T03:55:52Z 2021-05-29T03:55:52Z OWNER  
Usage: sqlite-utils transform [OPTIONS] PATH TABLE

  Transform a table beyond the capabilities of ALTER TABLE

Options:
  --type <TEXT TEXT>...     Change column type to X

This should specify that the possible types are 'INTEGER', 'TEXT', 'FLOAT', 'BLOB'.

sqlite-utils 140912432 issue    
903986178 MDU6SXNzdWU5MDM5ODYxNzg= 1344 Test Datasette Docker images built for different architectures simonw 9599 open 0     10 2021-05-27T16:52:29Z 2021-05-27T17:52:58Z   OWNER  

Continuing on from #1319 - now that we have the ability to build Datasette's Docker image against multiple architectures we should test that it works.

We can do this with QEMU emulation, see https://twitter.com/nevali/status/1397958044571602945

datasette 107914493 issue    
903978133 MDU6SXNzdWU5MDM5NzgxMzM= 1343 Figure out how to publish alpha/beta releases to Docker Hub simonw 9599 closed 0     4 2021-05-27T16:42:17Z 2021-05-27T16:46:37Z 2021-05-27T16:45:41Z OWNER  

It looks like all I need to do to ship an alpha version to Docker Hub is NOT point the latest tag at it after it goes live: https://github.com/simonw/datasette/blob/1a8972f9c012cd22b088c6b70661a9c3d3847853/.github/workflows/publish.yml#L75-L77

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1319#issuecomment-849780481_

datasette 107914493 issue    
898904402 MDU6SXNzdWU4OTg5MDQ0MDI= 1337 "More" link for facets that shows _facet_size=max results simonw 9599 closed 0     7 2021-05-23T00:08:51Z 2021-05-27T16:14:14Z 2021-05-27T16:01:03Z OWNER  

Original title: "More" link for facets that shows the full set of results

The simplest way to do this will be to have it link to a generated SQL query.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1332#issuecomment-846479062_

datasette 107914493 issue    
903902495 MDU6SXNzdWU5MDM5MDI0OTU= 1342 Improve `path_with_replaced_args()` and friends and document them simonw 9599 open 0     3 2021-05-27T15:18:28Z 2021-05-27T15:23:02Z   OWNER  

In order to cleanly implement this I need to expose the path_with_replaced_args utility function to Datasette's template engine. This is the first time this will become an exposed (and hence should-be-documented) API and I don't like its shape much.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1337#issuecomment-849721280_

datasette 107914493 issue    
903200328 MDU6SXNzdWU5MDMyMDAzMjg= 1341 "Show all columns" cog menu item should show if ?_col= is used simonw 9599 closed 0     1 2021-05-27T04:28:17Z 2021-05-27T04:31:16Z 2021-05-27T04:31:16Z OWNER   datasette 107914493 issue    
517451234 MDU6SXNzdWU1MTc0NTEyMzQ= 615 ?_col= and ?_nocol= support for toggling columns on table view simonw 9599 closed 0     16 2019-11-04T22:55:41Z 2021-05-27T04:26:10Z 2021-05-27T04:17:44Z OWNER  

Split off from #292 (I guess this is a re-opening of #312).

datasette 107914493 issue    
326800219 MDU6SXNzdWUzMjY4MDAyMTk= 292 Mechanism for customizing the SQL used to select specific columns in the table view simonw 9599 closed 0     15 2018-05-27T09:05:52Z 2021-05-27T04:25:01Z 2021-05-27T04:25:01Z OWNER  

Some columns don't make a lot of sense in their default representation - binary blobs such as SpatiaLite geometries for example, or lengthy columns that really should be truncated somehow.

We may also find that there are tables where we don't want to show all of the columns - so a mechanism to select a subset of columns would be nice.

I think there are two features here:

  • the ability to request a subset of columns on the table view
  • the ability to override the SQL for a specific column and/or add extra columns - AsGeoJSON(Geometry) for example

Both features should be available via both querystring arguments and in metadata.json

The querystring argument for custom SQL should only work if allow_sql config is turned on.

Refs #276

datasette 107914493 issue    
899851083 MDExOlB1bGxSZXF1ZXN0NjUxNDkyODg4 1339 ?_col=/?_nocol= to show/hide columns on the table page simonw 9599 closed 0     1 2021-05-24T17:15:20Z 2021-05-27T04:17:44Z 2021-05-27T04:17:43Z OWNER simonw/datasette/pulls/1339

See #615. Still to do:

  • Allow combination of ?_col= and ?_nocol= (_nocol wins)
  • Deduplicate same column if passed in ?_col= multiple times
  • Validate that user did not try to remove a primary key
  • Add tests
  • Ensure this works correctly for SQL views
  • Add documentation
datasette 107914493 pull    
901009787 MDU6SXNzdWU5MDEwMDk3ODc= 1340 Research: Cell action menu (like column action but for individual cells) simonw 9599 open 0     1 2021-05-25T15:49:16Z 2021-05-26T18:59:58Z   OWNER  

Had an idea today that it might be useful to select an individual cell and say things like "show me all other rows with the same value" - maybe even a set of other menu options against cells as well.

Mocked up a show-on-hover ellipses demo using the CSS inspector:

datasette 107914493 issue    
564833696 MDU6SXNzdWU1NjQ4MzM2OTY= 670 Prototype for Datasette on PostgreSQL simonw 9599 open 0     13 2020-02-13T17:17:55Z 2021-05-26T18:33:58Z   OWNER  

I thought this would never happen, but now that I'm deep in the weeds of running SQLite in production for Datasette Cloud I'm starting to reconsider my policy of only supporting SQLite.

Some of the factors making me think PostgreSQL support could be worth the effort:
- Serverless. I'm getting increasingly excited about writable-database use-cases for Datasette. If it could talk to PostgreSQL then users could easily deploy it on Heroku or other serverless providers that can talk to a managed RDS-style PostgreSQL.
- Existing databases. Plenty of organizations have PostgreSQL databases. They can export to SQLite using db-to-sqlite but that's a pretty big barrier to getting started - being able to run datasette postgresql://connection-string and start trying it out would be a massively better experience.
- Data size. I keep running into use-cases where I want to run Datasette against many GBs of data. SQLite can do this but PostgreSQL is much more optimized for large data, especially given the existence of tools like Citus.
- Marketing. Convincing people to trust their data to SQLite is potentially a big barrier to adoption. Even if I've convinced myself it's trustworthy I still have to convince everyone else.
- It might not be that hard? If this required a ground-up rewrite it wouldn't be worth the effort, but I have a hunch that it may not be too hard - most of the SQL in Datasette should work on both databases since it's almost all portable SELECT statements. If Datasette did DML this would be a lot harder, but it doesn't.
- Plugins! This feels like a natural surface for a plugin - at which point people could add MySQL support and suchlike in the future.

The above reasons feel strong enough to justify a prototype.

datasette 107914493 issue    
899169307 MDU6SXNzdWU4OTkxNjkzMDc= 1338 Fix jinja2 warnings simonw 9599 closed 0     0 2021-05-24T01:38:23Z 2021-05-24T01:41:55Z 2021-05-24T01:41:55Z OWNER  

Lots of these in the test suite now, after the Jinja upgrade in #1331:

tests/test_plugins.py::test_hook_render_cell_link_from_json
  datasette/tests/plugins/my_plugin_2.py:45: DeprecationWarning: 'jinja2.escape' is deprecated and will be removed in Jinja 3.1. Import 'markupsafe.escape' instead.
    label=jinja2.escape(data["label"] or "") or "&nbsp;",

tests/test_plugins.py::test_hook_render_cell_link_from_json
  datasette/tests/plugins/my_plugin_2.py:41: DeprecationWarning: 'jinja2.Markup' is deprecated and will be removed in Jinja 3.1. Import 'markupsafe.Markup' instead.
    return jinja2.Markup(
datasette 107914493 issue    
642296989 MDU6SXNzdWU2NDIyOTY5ODk= 856 Consider pagination of canned queries simonw 9599 open 0     3 2020-06-20T03:15:59Z 2021-05-21T14:22:41Z   OWNER  

The new canned_queries() plugin hook from #852 combined with plugins like https://github.com/simonw/datasette-saved-queries could mean that some installations end up with hundreds or even thousands of canned queries. I should consider pagination or some other way of ensuring that this doesn't cause performance problems for Datasette.

datasette 107914493 issue    
897212458 MDU6SXNzdWU4OTcyMTI0NTg= 63 Ability to fetch commits from branches other than the default simonw 9599 open 0     0 2021-05-20T17:58:08Z 2021-05-20T17:58:08Z   MEMBER  

This tool is currently almost entirely ignorant of the concept of branches. One example: you can't retrieve commits from any branch other than the default (usually main).

github-to-sqlite 207052882 issue    
895686039 MDU6SXNzdWU4OTU2ODYwMzk= 1336 Document turning on WAL for live served SQLite databases simonw 9599 open 0     0 2021-05-19T17:08:58Z 2021-05-19T17:17:48Z   OWNER  

Datasette docs don't talk about WAL yet, which allows you to safely serve reads from a database file while it is accepting writes.

datasette 107914493 issue    
894948100 MDU6SXNzdWU4OTQ5NDgxMDA= 259 Suggest the --alter option if a new column cannot be added simonw 9599 closed 0     1 2021-05-19T03:17:38Z 2021-05-19T03:27:33Z 2021-05-19T03:26:26Z OWNER  

Refs #256.

sqlite-utils 140912432 issue    
812228314 MDU6SXNzdWU4MTIyMjgzMTQ= 1236 Ability to increase size of the SQL editor window simonw 9599 closed 0     9 2021-02-19T18:09:27Z 2021-05-18T03:28:25Z 2021-02-22T21:05:21Z OWNER  
datasette 107914493 issue    
842862708 MDU6SXNzdWU4NDI4NjI3MDg= 1280 Ability to run CI against multiple SQLite versions simonw 9599 open 0     2 2021-03-28T23:54:50Z 2021-05-10T19:07:46Z   OWNER  

Issue #1276 happened because I didn't run tests against a SQLite version prior to 3.16.0 (released 2017-01-02).

Glitch is a deployment target and runs SQLite 3.11.0 from 2016-02-15.

If CI ran against that version of SQLite this bug could have been avoided.

datasette 107914493 issue    
777333388 MDU6SXNzdWU3NzczMzMzODg= 1168 Mechanism for storing metadata in _metadata tables simonw 9599 open 0     19 2021-01-01T18:47:27Z 2021-05-07T17:22:52Z   OWNER  

Original title: Perhaps metadata should all live in a _metadata in-memory database

Inspired by #1150 - metadata should be exposed as an API, and for large Datasette instances that API may need to be paginated. So why not expose it through an in-memory database table?

One catch to this: plugins. #860 aims to add a plugin hook for metadata. But if the metadata comes from an in-memory table, how do the plugins interact with it?

The need to paginate over metadata does make a plugin hook that returns metadata for an individual table seem less wise, since we don't want to have to do 10,000 plugin hook invocations to show a list of all metadata.

If those plugins write directly to the in-memory table how can their contributions survive the server restarting?

datasette 107914493 issue    
496415321 MDU6SXNzdWU0OTY0MTUzMjE= 1 Figure out some interesting example SQL queries simonw 9599 open 0     9 2019-09-20T15:28:07Z 2021-05-03T03:46:23Z   MEMBER  

My knowledge of genetics has left me short here. I'd love to be able to provide some interesting example SELECT queries - maybe one that spots if you are likely to have red hair?

genome-to-sqlite 209590345 issue    
871304967 MDU6SXNzdWU4NzEzMDQ5Njc= 1315 settings.json should be picked up by "datasette publish cloudrun" simonw 9599 open 0     0 2021-04-29T18:16:41Z 2021-04-29T18:16:41Z   OWNER  
datasette 107914493 issue    
866668415 MDU6SXNzdWU4NjY2Njg0MTU= 1308 Columns named "link" display in bold simonw 9599 closed 0     3 2021-04-24T05:58:11Z 2021-04-24T06:07:49Z 2021-04-24T06:07:49Z OWNER  

Reported in office hours today.

datasette 107914493 issue    
856895291 MDU6SXNzdWU4NTY4OTUyOTE= 1299 Design better empty states simonw 9599 open 0     0 2021-04-13T12:06:12Z 2021-04-13T12:06:12Z   OWNER  

Inspiration here: https://emptystat.es/

datasette 107914493 issue    
636511683 MDU6SXNzdWU2MzY1MTE2ODM= 830 Redesign register_facet_classes plugin hook simonw 9599 open 0   Datasette 1.0 3268330 2 2020-06-10T20:03:27Z 2021-04-12T01:07:27Z   OWNER  

Nothing uses this plugin hook yet, so the design is not yet proven.

I'm going to build a real plugin against it and use that process to inform any design changes that may need to be made.

I'll add a warning about this to the documentation.

datasette 107914493 issue    
855296937 MDU6SXNzdWU4NTUyOTY5Mzc= 1295 Errors should have links to further information simonw 9599 open 0     1 2021-04-11T12:39:12Z 2021-04-11T12:41:06Z   OWNER  

Inspired by this tweet:
https://twitter.com/willmcgugan/status/1381186384510255104

While I am thinking about faqs. I’d also like to add short URLs to Rich exceptions.

I loathe cryptic error messages, and I've created a fair few myself. In Rich I've tried to make them as plain English as possible. But...

would be great if every error message linked to a page that explains the error in detail and offers fixes.

datasette 107914493 issue    
849978964 MDU6SXNzdWU4NDk5Nzg5NjQ= 1293 Research: Automatically display foreign key links on arbitrary query pages simonw 9599 open 0     18 2021-04-04T22:59:42Z 2021-04-05T16:16:18Z   OWNER   datasette 107914493 issue    

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);