id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 267515678,MDU6SXNzdWUyNjc1MTU2Nzg=,3,"Make individual column valuables addressable, with smart content types",9599,open,0,,,1,2017-10-23T01:11:32Z,2017-12-10T03:11:58Z,,OWNER,,"Some SQLite databases embed images in columns. It would be cool if these had URLs. /database-name-7sha256/table-name/compound-pk/column /database-name-7sha256/table-name/compound-pk/column.json /database-name-7sha256/table-name/compound-pk/column.png /database-name-7sha256/table-name/compound-pk/column.gif /database-name-7sha256/table-name/compound-pk/column.txt The one without an explicit file extension auto-detects the correct extension.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/3/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 268110769,MDU6SXNzdWUyNjgxMTA3Njk=,33,Use locust for benchmarking and load tests,9599,open,0,,,0,2017-10-24T17:00:09Z,2017-12-10T03:12:16Z,,OWNER,,"https://github.com/locustio/locust Needed for #32 ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/33/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 274615452,MDU6SXNzdWUyNzQ2MTU0NTI=,111,Add “updated” to metadata,9599,open,0,,,12,2017-11-16T18:22:20Z,2021-09-21T22:48:27Z,,OWNER,,"To give an indication as to when the data was last updated. This should be a field in the metadata that is then shown on the index page and in the footer, if it is set. Also support setting it using an option to “datasette publish” and “datasette package” - which can either be a string or can be the magic string “today” to set it to today’s date: datasette publish file.db --updated=today",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/111/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275125561,MDU6SXNzdWUyNzUxMjU1NjE=,123,Datasette serve should accept paths/URLs to CSVs and other file formats,9599,open,0,,,9,2017-11-19T02:05:48Z,2021-07-19T00:04:32Z,,OWNER,,"This would remove the csvs-to-sqlite step which I end up using for almost everything. I'm hesitant to introduce pandas as a required dependency though since it require compiling numpy. 
Could build it so this option is only available if you have pandas installed.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/123/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",, 275159710,MDU6SXNzdWUyNzUxNTk3MTA=,128,"Every visualization should have an ""embed"" button",9599,open,0,,,0,2017-11-19T13:38:13Z,2019-05-13T18:33:51Z,,OWNER,,"At least for the first round of visualizations, any time you construct one using the UI the result should include an ""embed this"" button that returns source code to copy and paste These examples should use unpkg.com (or similarl) urls with SRI hashes, eg https://www.srihash.org - and should load data from the datasette JSON API.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/128/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275415799,MDU6SXNzdWUyNzU0MTU3OTk=,137,Ability to combine multiple SQL queries on a single graph,9599,open,0,,,1,2017-11-20T16:26:57Z,2019-05-13T18:33:51Z,,OWNER,,This would make visualizations significantly more powerful. The interesting challenge will be around the URL design. It would be useful to be able to combine either multiple explicit SQL queries or multiple queries based on the filter string parameters passed to one or more table views.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/137/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275755475,MDU6SXNzdWUyNzU3NTU0NzU=,140,Heatmap visualization plugin,9599,open,0,,,2,2017-11-21T15:34:23Z,2019-05-13T18:33:51Z,,OWNER,,Could use https://github.com/scottbedard/svelte-heatmap,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/140/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 281110295,MDU6SXNzdWUyODExMTAyOTU=,173,I18n and L10n support,50138,open,0,,,2,2017-12-11T17:49:58Z,2021-04-26T12:10:01Z,,NONE,,It would be less geeky and more user friendly if the display strings in the filter menu and possibly other parts could be localized.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/173/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 285168503,MDU6SXNzdWUyODUxNjg1MDM=,176,Add GraphQL endpoint,173848,open,0,,,8,2017-12-29T23:21:01Z,2020-04-21T14:16:24Z,,NONE,,Would make it much easier to build React & similar frontends. 
Maybe with https://github.com/graphql-python/sanic-graphql ?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/176/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 288438570,MDU6SXNzdWUyODg0Mzg1NzA=,179,More metadata options for template authors ,9599,open,0,,,2,2018-01-14T20:51:04Z,2019-05-13T18:33:33Z,,OWNER,,See this thread on Twitter: https://twitter.com/simonw/status/952637152797458432,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/179/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 299760684,MDU6SXNzdWUyOTk3NjA2ODQ=,185,Metadata should be a nested arbitrary KV store,222245,open,0,,,12,2018-02-23T16:02:07Z,2019-05-13T18:33:33Z,,NONE,,"I started using the metadata feature and was surprised to find that values are not inherited from the root object down to specific databases and tables. This makes metadata much less useful and requires a lot of pointless duplication. Ideally, metadata should allow arbitrary key-value pairs, and there should be a way of accessing metadata either in an inherited or non-inherited manner. Something like `metadata.page.key` vs. `metadata.this.key` might work as an interface.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/185/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 309047460,MDU6SXNzdWUzMDkwNDc0NjA=,188,Ability to bundle metadata and templates inside the SQLite file,9599,open,0,,,4,2018-03-27T16:42:07Z,2020-12-04T17:18:34Z,,OWNER,,"One of the nicest qualities of SQLite as a data format is that you get a single file which you can then backup or share with other people. Datasette breaks this a little once you start including custom metadata.json or template files and CSS. It would be cool if there was an optional mechanism for baking that extra configuration into the SQLite file itself. That way entire datasette mini-applications (including canned queries and custom HTML and CSS) could be constructed as single .db files. Since datasette configuration is all file-based, one way to achieve that would be to support a ""datasette_files"" table which, if present is used to search for file contents by path. This is inline with the philosophy described by https://www.sqlite.org/appfileformat.html ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/188/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 312395790,MDU6SXNzdWUzMTIzOTU3OTA=,197,Ability to sort by more than one column,9599,open,0,,,0,2018-04-09T05:13:30Z,2018-07-10T17:45:37Z,,OWNER,,"Split off from #189. I'd like to support ""sort by X descending, then by Y ascending if there are dupes for X"" as well. Suggested syntax for that: ?_sort_desc=X&_sort=Y we currently only allow one argument to be sent. 
We should allow as many arguments as there are columns, for example: ?_sort=department&_sort_desc=precinct&_sort=age&_sort_desc=size",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/197/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 312396095,MDU6SXNzdWUzMTIzOTYwOTU=,198,Ability to sort with nulls last,9599,open,0,,,0,2018-04-09T05:15:40Z,2018-07-10T17:45:37Z,,OWNER,,"Split off from #189 Here's how to do that in SQL: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+rowid%2C+*+from+%5Bnfl-wide-receivers%2Fadvanced-historical%5D%0D%0Aorder+by+case+when+career_ranypa+is+null+then+1+else+0+end%2C+career_ranypa%2C+rowid order by case when career_ranypa is null then 1 else 0 end, career_ranypa",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/198/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 314771615,MDU6SXNzdWUzMTQ3NzE2MTU=,218,"Support custom unit display in order to handle ""$10,000""",9599,open,0,,,0,2018-04-16T18:39:31Z,2018-07-10T17:45:38Z,,OWNER,,"I tried to get Datasette to display `$10,000` using the new units support but we currently only display units as a suffix: https://github.com/simonw/datasette/blob/10a34f995c70daa37a8a2aa02c3135a4b023a24c/datasette/app.py#L563-L572 It would be neat if there was a mechanism for specifying a custom unit display - maybe something like this: ``` { ""custom_units"": { ""us_dollar"": { ""unit"": ""us_dollar = [] = $"", ""format"": ""${:,}"" } } } ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/218/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 314834783,MDU6SXNzdWUzMTQ4MzQ3ODM=,219,Expose units in the JSON API?,45057,open,0,,,0,2018-04-16T22:04:25Z,2018-04-16T22:04:25Z,,CONTRIBUTOR,,"From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map. I'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). Perhaps the SQL function is a better approach in general.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/219/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 316621102,MDU6SXNzdWUzMTY2MjExMDI=,235,Add limit on the size in KB of data returned from a single query,9599,open,0,,,2,2018-04-22T23:01:15Z,2018-04-24T00:30:02Z,,OWNER,,"Datasette limits the number of rows returned to 1,000 and limits the time spent executing a SQL query to 1000ms - and both of these limits can be customized. It does not have a limit on the size of the response returned. It's possible to compose maliciously large SQL responses in a small number of rows using mechanisms like the `group_concat()` aggregate function. It would be good to avoid malicious SQL creating 100MB+ responses and potentially crashing the server. 
I think the easiest place to implement that is here: https://github.com/simonw/datasette/blob/f3f42957128c1e7ece584d45d9167f2ac003a3b8/datasette/app.py#L175-L190 Currently we use `cursor.fetchmany()` to fetch up to 1,001 rows at once. Instead, we could switch to iterating through `cursor.fetchone()` (or just using `for row in cursor`) and keeping a running tally of the size of the response as we go - maybe just using `rough_response_size += len(str(row))`. If that goes above a certain threshold we can terminate the response with an error, like we do with timelimits. The bigger challenge here is understanding how well this approach works and what impact it will have on overall Datasette performance. I think I need #33 for this.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/235/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 317001500,MDU6SXNzdWUzMTcwMDE1MDA=,236,datasette publish lambda plugin,9599,open,0,,,11,2018-04-23T22:10:30Z,2023-03-12T14:04:15Z,,OWNER,,"Refs #217 - create a publish plugin that can deploy to AWS Lambda. https://docs.aws.amazon.com/lambda/latest/dg/limits.html says lambda packages can be up to 50 MB, so this would only work with smaller databases (the command can check the filesize before attempting to package and deploy it). Lambdas do get a 512 MB `/tmp` directory too, so for larger databases the function could start and then download up to 512MB from an S3 bucket - so the plugin could take an optional S3 bucket to write to and know how to upload the `.db` file there and then have the lambda download it on startup.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/236/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 318490133,MDU6SXNzdWUzMTg0OTAxMzM=,241,Default datasette logging format should be JSON,9599,open,0,,,0,2018-04-27T17:32:48Z,2018-07-10T17:45:40Z,,OWNER,,"Structured logs are better. Datasette should default to outputting it's HTTP access log lines as newline delimited JSON instead of the Sanic default format it uses at the moment. For improved greppability these logs should have keys ordered in a consistent way. Python's JSON module can do this with ordered dictionaries.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/241/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 319449852,MDU6SXNzdWUzMTk0NDk4NTI=,247,SQLite code decoupled from Datasette,11912854,open,0,,,1,2018-05-02T08:03:28Z,2018-05-21T15:29:31Z,,NONE,,"I'm working on the possibility of use Datasette with other file formats that aren't SQLite, like files with [PyTables](https://github.com/PyTables/PyTables) format. In order to accomplish that, I've started [a fork for decoupling the code related with SQLite](https://github.com/jsancho-gpl/datasette/tree/feature/db-type-plugin) and putting it in an external connector to allow future connectors for a lot of file formats. 
It'd be nice if you could look at it and suggest improvements for a possible PR.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/247/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 320132682,MDU6SXNzdWUzMjAxMzI2ODI=,250,Setup some issue templates,9599,open,0,,,0,2018-05-04T01:49:07Z,2018-05-04T01:49:07Z,,OWNER,,"https://twitter.com/left_pad/status/99216385740464537 I like the idea of using these to help people understand some of the ways I want to use issues.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/250/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323223872,MDU6SXNzdWUzMjMyMjM4NzI=,260,Validate metadata.json on startup,9599,open,0,,,7,2018-05-15T13:42:56Z,2023-06-21T12:51:22Z,,OWNER,,"It's easy to misspell the name of a database or table and then be puzzled when the metadata settings silently fail. To avoid this, let's sanity check the provided metadata.json on startup and quit with a useful error message if we find any obvious mistakes.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/260/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323658641,MDU6SXNzdWUzMjM2NTg2NDE=,262,Add ?_extra= mechanism for requesting extra properties in JSON,9599,open,0,,3268330,27,2018-05-16T14:55:42Z,2023-03-29T06:22:22Z,,OWNER,,"Datasette views currently work by creating a set of data that should be returned as JSON, then defining an additional, optional `template_data()` function which is called if the view is being rendered as HTML. This `template_data()` function calculates extra template context variables which are necessary for the HTML view but should not be included in the JSON. Example of how that is used today: https://github.com/simonw/datasette/blob/2b79f2bdeb1efa86e0756e741292d625f91cb93d/datasette/views/table.py#L672-L704 With features like Facets in #255 I'm beginning to want to move more items into the `template_data()` - in the case of facets it's the `suggested_facets` array. This saves that feature from being calculated (involving several SQL queries) for the JSON case where it is unlikely to be used. But... as an API user, I want to still optionally be able to access that information. Solution: Add a `?_extra=suggested_facets&_extra=table_metadata` argument which can be used to optionally request additional blocks to be added to the JSON API. Then redefine as many of the current `template_data()` features as extra arguments instead, and teach Datasette to return certain extras by default when rendering templates. This could allow the JSON representation to be slimmed down further (removing e.g. 
the `table_definition` and `view_definition` keys) while still making that information available to API users who need it.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/262/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323718842,MDU6SXNzdWUzMjM3MTg4NDI=,268,Mechanism for ranking results from SQLite full-text search,9599,open,0,,,12,2018-05-16T17:36:40Z,2022-01-13T22:21:28Z,,OWNER,,This isn't particularly straight-forward - all the more reason for Datasette to implement it for you. This article is helpful: http://charlesleifer.com/blog/using-sqlite-full-text-search-with-python/,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/268/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 326599525,MDU6SXNzdWUzMjY1OTk1MjU=,286,Database hash should include current datasette version,9599,open,0,,,2,2018-05-25T17:03:42Z,2018-05-25T17:07:36Z,,OWNER,,"Right now deploying a new version of datasette doesn't invalidate existing URLs, so users may still see a cached copy of the old templates. We can fix this by including the current datasette version in the input to the hash function (which currently just the database file contents).",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/286/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 326778161,MDU6SXNzdWUzMjY3NzgxNjE=,290,Consider increasing the default for num_sql_threads (currently 3),9599,open,0,,,0,2018-05-27T00:52:41Z,2018-05-27T00:52:41Z,,OWNER,,"I ran a very rough micro-benchmark on the new `num_sql_threads` config option (added in #285) datasette --config num_sql_threads:1 fivethirtyeight.db Then ab -n 100 -c 10 'http://127.0.0.1:8011/fivethirtyeight-2628db9/twitter-ratio%2Fsenators' | Number of threads | Requests/second | |---|---| | 1 | 4.57 | | 3 | 9.77 | | 10 | 13.53 | | 20 | 15.24 | 50 | 8.21 | This was on my early 2018 OS X laptop. Need to benchmark in other common environments before making a decision on changing the default. That said, the default of 3 was a number I plucked out of thin air.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/290/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 327365110,MDU6SXNzdWUzMjczNjUxMTA=,294,inspect should record column types,9599,open,0,,,7,2018-05-29T15:10:41Z,2019-06-28T16:45:28Z,,OWNER,,"For each table we want to know the columns, their order and what type they are. I'm going to break with SQLite defaults a little on this one and allow datasette to define additional types - to start with just a `geometry` type for columns that are detected as SpatiaLite geometries. Possible JSON design: ""columns"": [{ ""name"": ""title"", ""type"": ""text"" }, ...] 
Refs #276",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/294/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 327395270,MDU6SXNzdWUzMjczOTUyNzA=,296,Per-database and per-table /-/ URL namespace,9599,open,0,,,3,2018-05-29T16:23:13Z,2019-06-28T16:46:34Z,,OWNER,,"Initially this will be for subsets of `/-/inspect` and `/-/metadata` but it will also give us a URL namespace for future features like `/-/facet` (expanded list of a specific facet, linked to from `...`) and `/-/graph` To start: * `/dbname/-/inspect` * `/dbname/-/metadata` * `/dbname/tablename/-/inspect` * `/dbname/tablename/-/metadata` This means we will no longer allow databases or tables to have the name `""-""` - I think that's OK We will continue to support rows with a primary key of `""-""` at the following URL: * `/dbname/tablename/-`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/296/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 328155946,MDU6SXNzdWUzMjgxNTU5NDY=,301,"--spatialite option for ""datasette publish heroku""",9599,open,0,,,1,2018-05-31T14:13:09Z,2022-01-20T21:28:50Z,,OWNER,,Split off from #243. Need to figure out how to install and configure SpatiaLite on Heroku.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/301/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 330826972,MDU6SXNzdWUzMzA4MjY5NzI=,308,"Support extra Heroku apps:create options - region, space, team",78156,open,0,,,2,2018-06-08T23:08:33Z,2018-09-21T14:09:28Z,,NONE,,"It would be useful to document how to pass Heroku CLI options on `datasette publish`, e.g. `--region eu`.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/308/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 335200136,MDU6SXNzdWUzMzUyMDAxMzY=,327,Explore if SquashFS can be used to shrink size of packaged Docker containers,9599,open,0,,,4,2018-06-24T18:15:16Z,2022-02-17T23:37:24Z,,OWNER,,"Inspired by this article: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html#sqlite-database-indexed--squashed https://en.wikipedia.org/wiki/SquashFS is ""a compressed read-only file system for Linux"" - which means it could be a really nice fit for Datasette and its read-only SQLite databases. It would be interesting to explore a Dockerfile recipe that used SquashFS to compress the SQLite database file that was bundled up by `datasette package` and friends.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/327/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 341228846,MDU6SXNzdWUzNDEyMjg4NDY=,343,Render boolean fields better by default,45057,open,0,,,1,2018-07-14T11:10:29Z,2018-07-14T14:17:14Z,,CONTRIBUTOR,,These show up as 0 or 1 because sqlite. 
I think Yes/No would be fine in most cases?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/343/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 344654623,MDU6SXNzdWUzNDQ2NTQ2MjM=,347,"Rename ""datasette package"" to ""datasette publish docker""",9599,open,0,,,0,2018-07-26T00:42:46Z,2018-07-26T00:42:46Z,,OWNER,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/347/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 346026869,MDU6SXNzdWUzNDYwMjY4Njk=,354,Handle many-to-many relationships,9599,open,0,,,0,2018-07-31T04:03:13Z,2020-11-24T19:51:18Z,,OWNER,,This is a master tracking ticket for various many-2-many features.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/354/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 346027040,MDU6SXNzdWUzNDYwMjcwNDA=,355,Table view should support filtering via many-to-many relationships,9599,open,0,,,10,2018-07-31T04:04:16Z,2019-05-23T06:04:03Z,,OWNER,,Parent: #354 ,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/355/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 348043884,MDU6SXNzdWUzNDgwNDM4ODQ=,357,Plugin hook for loading metadata.json,9599,open,0,,,6,2018-08-06T19:00:01Z,2020-06-21T22:19:58Z,,OWNER,,"For https://github.com/simonw/russian-ira-facebook-ads-datasette/tree/af6d956995e14afd585c35a6a06bb01da32043ba I wrote a script to convert YAML to JSON because YAML is a better format for embedding multi-line HTML descriptions and canned SQL statements. Example yaml metadata file: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/af6d956995e14afd585c35a6a06bb01da32043ba/russian-ads-metadata.yaml It would be useful if Datasette could be fed a YAML file directly: datasette -m metadata.yaml Question is... should this be a native feature (hence adding a YAML dependency) or should it be handled by a `datasette-metadata-yaml` plugin, using a new plugin hook for loading metadata? If so, what would other use-cases for that plugin hook be?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/357/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 352768017,MDU6SXNzdWUzNTI3NjgwMTc=,362,Add option to include/exclude columns in search filters,78156,open,0,,,1,2018-08-22T01:32:08Z,2020-11-03T19:01:59Z,,NONE,,"I have a dataset with many columns, of which only some are likely to be of interest for searching. It would be great for usability if the search filters in the UI could be configured to include/exclude columns. 
See also: https://github.com/simonw/datasette/issues/292",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/362/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 355299310,MDExOlB1bGxSZXF1ZXN0MjExODYwNzA2,363,Search all apps during heroku publish,436032,open,0,,,1,2018-08-29T19:25:10Z,2018-08-31T14:39:45Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/363,Adds the `-A` option to include apps from all organizations when searching app names for publish.,107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/363/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 359075028,MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx,364,Support for other types of databases using external connectors,11912854,open,0,,,0,2018-09-11T14:31:47Z,2018-09-11T14:31:47Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/364,"This PR is related to #293, but now all commits have been merged. The purpose is to support other file formats that aren't SQLite, like files with PyTables format. I've tried to accomplish that using external connectors published with entry points. The modifications in the original datasette code are minimal and many are in a separated file.",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/364/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 374953006,MDU6SXNzdWUzNzQ5NTMwMDY=,369,Interface should show same JSON shape options for custom SQL queries,416374,open,0,,3268330,2,2018-10-29T10:39:15Z,2020-05-30T17:24:06Z,,CONTRIBUTOR,,"At the moment the page returning a custom SQL query shows the JSON and CSV APIs, but not the multiple JSON shapes. However, adding the `_shape` parameter to the JSON API URL manually still works, so perhaps there should be consistency in the interface by having the same ""Advanced Export"" box for custom SQL queries.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/369/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 377155320,MDU6SXNzdWUzNzcxNTUzMjA=,370,Integration with JupyterLab,82988,open,0,,,4,2018-11-04T13:57:13Z,2022-09-29T08:17:47Z,,CONTRIBUTOR,,"I just watched a demo video for the [JupyterLab Chart Editor](https://www.crowdcast.io/e/introducing-JupyterLab-Chart-Editor/) which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab, and I think registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue). JupyterLab extensions can also open files from a dialogue as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe. This made me wonder about what `datasette` integration with JupyterLab might do. 
For example, by right-clicking on a CSV file (for which there is already a CSV table view) in the file browser, offer a *View / Run as datasette* file viewer option that will: - run the CSV file through `csvs-to-sqlite`; - launch the `datasette` server and display the `datasette` view in a JupyterLab panel. (? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all `datasette` viewed CSVs and runs on a single port?) As a freebie, the `datasette` API would allow you to run efficient SQL queries against the file eg using using `pandas.read_sql()` queries in a notebook in the same space. Related: - [JupyterLab extensions docs](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html) - a [cookiecutter for wrting JupyterLab extensions using Javascript](https://github.com/jupyterlab/extension-cookiecutter-js) - a [cookiecutter for writing JupyterLab extensions using Typescript](https://github.com/jupyterlab/extension-cookiecutter-ts) - tutorial: [Let’s Make an xkcd JupyterLab Extension](https://jupyterlab.readthedocs.io/en/stable/developer/xkcd_extension_tutorial.html)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/370/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 377166793,MDU6SXNzdWUzNzcxNjY3OTM=,372,Docker build tools,82988,open,0,,,0,2018-11-04T16:02:35Z,2018-11-04T16:02:35Z,,CONTRIBUTOR,,"In terms of small pieces lightly joined, I note that there are several tools starting to appear for building generating Dockerfiles and building Docker containers from simpler components such as `requirements.txt` files. If plugin/extensions builders want to include additional packages, then things like incremental builds of composable builds that add additional items into a base `datasette` container may be required. Examples of Dockerfile generators / container builders: - [openshift/source-to-image (s2i)](https://github.com/openshift/source-to-image) - [jupyter/repo2docker](https://github.com/jupyter/repo2docker) - [stencila/dockter](https://github.com/stencila/dockter) Discussions / threads (via Binderhub gitter) on: - [why `repo2docker` not `s2i`](http://words.yuvi.in/post/why-not-s2i/) - [why `dockter` not `repo2docker`](https://twitter.com/choldgraf/status/1058499607309647872) - [composability in `s2i`](https://trello.com/c/AexIVZNf/1008-8-composable-builds-builds-evg) Relates to things like: - https://github.com/simonw/datasette/pull/280",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/372/reactions"", ""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 2, ""rocket"": 0, ""eyes"": 0}",, 400340905,MDU6SXNzdWU0MDAzNDA5MDU=,402,Use SQLITE_DBCONFIG_DEFENSIVE plus other recommendations from SQLite security docs,9599,open,0,,,3,2019-01-17T15:52:28Z,2019-01-17T16:15:21Z,,OWNER,,"> Was just having a skim through the datasette source. Given that the vuln impacts shadow tables, wasn't sure whether these are also covered by the immutable flag. 
Latest release introduced a SQLITE_DBCONFIG_DEFENSIVE flag that they recommend setting: https://sqlite.org/security.html https://twitter.com/ignoredambience/status/1085926961413869568",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/402/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 411257981,MDU6SXNzdWU0MTEyNTc5ODE=,412,Linked Data(sette),43340,open,0,,,2,2019-02-18T00:38:14Z,2019-03-19T10:09:46Z,,NONE,,"I've a radical feature idea (possible first as an extension in order to experiment?): I'd like to link to a remote table from a remote database, e.g. with a function ""linked_datasette()"". So one could do following query: ``` SELECT foo.id, foo.a, remote_party.b FROM foo JOIN linked_datasette(""https://parlgov.datasettes.com/parlgov-b42a2f2"") AS remote_party ON foo.id=remote_party.id ``` This is inspired by SPARQL's SERVICE keyword for remote RDF ""endpoints"". There's a foundation in the SQL Standard called SQL/MED (https://rhaas.blogspot.com/2011/01/why-sqlmed-is-cool.html ). And here's an implementation from me in Postgres FDW to connect another Postgres ""endpoint"": https://pastebin.com/Fz2v64Cz .",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/412/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 421546944,MDU6SXNzdWU0MjE1NDY5NDQ=,417,Datasette Library,9599,open,0,,,12,2019-03-15T14:30:22Z,2020-12-29T14:34:50Z,,OWNER,,"The ability to run Datasette in a mode where it automatically picks up new (or modified) files in a directory tree without needing to restart the server. Suggested command: datasette library /path/to/mydbs/",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/417/reactions"", ""total_count"": 8, ""+1"": 8, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 426722204,MDU6SXNzdWU0MjY3MjIyMDQ=,423,?_search_col=X not reflected correctly in the UI,9599,open,0,,,0,2019-03-28T21:48:19Z,2020-11-03T19:01:59Z,,OWNER,,"e.g. https://latest.datasette.io/fixtures/searchable?_search_text1=barry ![2019-03-28 at 2 47 PM](https://user-images.githubusercontent.com/9599/55195035-84ebb800-5168-11e9-910b-fc9868bcd93e.png) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/423/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 440325850,MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2,452,SQL builder utility classes,45057,open,0,,,0,2019-05-04T13:57:47Z,2019-05-04T14:03:04Z,,CONTRIBUTOR,simonw/datasette/pulls/452,"This adds a straightforward set of classes to aid in the construction of SQL queries. My plan for this was to allow plugins to manipulate the Datasette-generated SQL in a more structured way. I'm not sure that's going to work, but I feel like this is still a step forward - it reduces the number of intermediate variables in `TableView.data` which aids readability, and also factors out a lot of the boring string concatenation. There are a fair number of minor structure changes in here too as I've tried to make the ordering of `TableView.data` a bit more logical. 
As far as I can tell, I haven't broken anything...",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/452/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 443021509,MDU6SXNzdWU0NDMwMjE1MDk=,461,Paginate + search for databases/tables on the homepage,9599,open,0,,3268330,4,2019-05-11T18:05:34Z,2020-12-17T22:14:46Z,,OWNER,,Split out from #460 - in order to support large numbers of connected databases the homepage needs to be paginated.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/461/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 447408527,MDU6SXNzdWU0NDc0MDg1Mjc=,483,Option to facet by date using month or year,9599,open,0,,,5,2019-05-23T01:25:29Z,2019-05-29T21:38:27Z,,OWNER,,"Facet by date (from #481) can take datetimes and facet them by the day component. https://latest.datasette.io/fixtures/facetable?_facet_date=created I'd like to also be able to facet by month or year. I'm not sure what the best way to achieve this is. Could be two more Facet classes (YearFacet and MonthFacet) but I think it might be nicer if the existing DateFacet could take an optional argument that changed its behaviour. But... if I do that, do I expose it in the UI somewhere or is it only available to URL-hackers?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/483/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 447451492,MDU6SXNzdWU0NDc0NTE0OTI=,484,Mechanism for displaying summary of m2m relationships in rows on table view,9599,open,0,,,1,2019-05-23T05:02:41Z,2019-05-23T06:34:05Z,,OWNER,,"Part of #354 (m2m support) It would be fantastic if rows that are part of a m2m relationship could display it in an additional column in the table view. 
It might look something like this: https://russian-ira-facebook-ads.datasettes.com/russian-ads-919cbfd/display_ads?_search=black+lives+matter That example [was achieved](https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/daf51a8c50a78e8bc7971c211005fd85e66ccf64/russian-ads-metadata.yaml#L72-L77) using a custom SQL query and [datasette-json-html](https://github.com/simonw/datasette-json-html) - but I'd like this to be a built-in feature instead.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/484/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 447469253,MDU6SXNzdWU0NDc0NjkyNTM=,485,Improvements to table label detection ,9599,open,0,9599,,10,2019-05-23T06:19:49Z,2022-10-03T00:04:42Z,,OWNER,,"Label detection doesn't work if the primary key is called pk rather than id, so this page doesn't work: https://latest.datasette.io/fixtures/roadside_attraction_characteristics Code is here: https://github.com/simonw/datasette/blob/cccea85be6aaaeadb31f3b588ec7f732628815f5/datasette/app.py#L644-L653",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/485/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 449445715,MDU6SXNzdWU0NDk0NDU3MTU=,491,Figure out how to use Firebase with cloudrun to enable vanity URLs and CDN caching,9599,open,0,,,0,2019-05-28T19:48:06Z,2019-05-28T19:48:35Z,,OWNER,,"It looks like Firebase can solve a couple of problems with the existing `datasette publish cloudrun` hosting mechanism: * The URLs it produces aren't pretty enough. Firebase offers more control over vanity URLs. * CDN caching (as seen in `datasette publish now`) is great for improving performance and saving money on Cloud Run execution time. https://firebase.google.com/docs/hosting/cloud-run looks like it can help with both of these. Lots of interesting questions: * Should this be a new `datasette publish firebase` command or should it instead be implemented as additional custom options to `datasette publish cloudrun`? * How much harder does it become to do account setup? * How much will this option cost users?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/491/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 450032134,MDU6SXNzdWU0NTAwMzIxMzQ=,495,facet_m2m gets confused by multiple relationships,9599,open,0,,,2,2019-05-29T21:37:28Z,2020-12-17T05:08:22Z,,OWNER,,"I got this for a database I was playing with: I think this is because of these three tables: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/495/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 451585764,MDU6SXNzdWU0NTE1ODU3NjQ=,499,Accessibility for non-techie newsies? ,7936571,open,0,,,3,2019-06-03T16:49:37Z,2019-06-05T21:22:55Z,,NONE,,"Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datsette provide this out-of-the-box, or as a plugin? 
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/499/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 455852801,MDU6SXNzdWU0NTU4NTI4MDE=,507,Every datasette plugin on the ecosystem page should have a screenshot,9599,open,0,,,4,2019-06-13T17:02:51Z,2020-09-17T02:47:35Z,,OWNER,,https://github.com/simonw/datasette/blob/master/docs/ecosystem.rst,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/507/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 456569067,MDU6SXNzdWU0NTY1NjkwNjc=,510,Ability to facet by delimiter (e.g. comma separated fields),9599,open,0,9599,,1,2019-06-15T19:34:41Z,2019-07-08T15:44:51Z,,OWNER,,"E.g. if a field contains ""Tags,With,Commas"" be able to facet them in the same way as `_facet_array=` lets you facet `[""Tags"", ""With"", ""Commas""]`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/510/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 456578474,MDU6SXNzdWU0NTY1Nzg0NzQ=,511,Get Datasette tests passing on Windows in GitHub Actions,9599,open,0,,,13,2019-06-15T21:41:58Z,2021-07-11T17:23:05Z,,OWNER,,"This should almost happen as a side-effect or moving from Sanic to Uvicorn during the port to ASGI: #272 Additional steps: - test it manually - update documentation - set up some form of Windows CI ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/511/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 457147936,MDU6SXNzdWU0NTcxNDc5MzY=,512,"""about"" parameter in metadata does not appear when alone",7936571,open,0,,,3,2019-06-17T21:04:20Z,2019-10-11T15:49:13Z,,NONE,,"Here's an example of metadata I have for one database on datasette. ``` ""Records-requests"": { ""tables"": { ""Some table"": { ""about"": ""This table has data."" } } } ``` The text in `about` does not show up when I publish the data. But it shows up after I add a `""source""` parameter in the metadata. Is this intended?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/512/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 459469278,MDU6SXNzdWU0NTk0NjkyNzg=,515,Try shrinking official image with docker-slim,9599,open,0,,,0,2019-06-22T12:25:37Z,2019-06-22T12:25:37Z,,OWNER,,"This looks really promising: https://github.com/docker-slim/docker-slim If it can shave substantial size from our official container reliably we could add it to the automated build process.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/515/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 459509126,MDU6SXNzdWU0NTk1MDkxMjY=,516,Enforce import sort order with isort,9599,open,0,,,8,2019-06-22T20:35:50Z,2023-08-23T02:15:36Z,,OWNER,,"I want to use isort to order imports. 
A few steps here: - [x] Add a .isort.cfg file (see below) - [x] Use `isort -rc` to reformat existing code - [ ] Commit this change - [x] Add a unit test that ensures future changes remain isort compatible",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/516/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 459622390,MDU6SXNzdWU0NTk2MjIzOTA=,522,Handle case-insensitive headers in a nicer way,9599,open,0,,,1,2019-06-23T21:56:34Z,2019-06-26T18:48:53Z,,OWNER,,Spun out from https://github.com/simonw/datasette/pull/518#discussion_r296486289,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/522/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 459882902,MDU6SXNzdWU0NTk4ODI5MDI=,526,Stream all results for arbitrary SQL and canned queries,50578294,open,0,,,23,2019-06-24T13:09:45Z,2022-09-28T04:01:25Z,,NONE,,"I think that there is a difficulty with canned queries. When I want to stream all results of a canned query TwoDays I get only first 1.000 records. Example: `http://myserver/history_sample/two_days.csv?_stream=on` returns only first 1.000 records. If I do the same with the whole database i.e. `http://myserver/history_sample/database.csv?_stream=on` I get correctly all records. Any ideas?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/526/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 460095928,MDU6SXNzdWU0NjAwOTU5Mjg=,528,Establish a pattern for Datasette plugins built on top of Pandas,9599,open,0,,,0,2019-06-24T21:05:52Z,2019-06-24T21:05:52Z,,OWNER,,"The Pandas ecosystem is huge, varied and full of tools that are really good at doing interesting analysis on top of tabular data. Pandas should not be a dependency of Datasette core, but I think there is a lot of potential in having plugins which use Pandas to apply interesting analysis to data sucked out of Datasette's SQLite tables. One example ([thanks, Tony](https://twitter.com/psychemedia/status/1143259809715752962)): https://github.com/ResidentMario/missingno could form the basis of a fantastic plugin for getting a high-level overview of how complete each column in a table is. Some thought is needed here about what shape these kind of plugins might take, and what plugin hooks they would use.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/528/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 462117311,MDU6SXNzdWU0NjIxMTczMTE=,531,/database/-/inspect,9599,open,0,,,1,2019-06-28T16:33:41Z,2019-07-08T15:43:57Z,,OWNER,,"Build `/database/-/inspect` which shows tables, columns, column types and foreign keys It won't show table counts. Or maybe it will include them optionally but only for `-i` databases, in a special area of the JSON reserved for immutable-only inspect details. 
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/465#issuecomment-506797086_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/531/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 463492815,MDU6SXNzdWU0NjM0OTI4MTU=,534,500 error on m2m facet detection,9599,open,0,,,1,2019-07-03T00:42:42Z,2020-12-17T05:08:22Z,,OWNER,,"This may help debug: ``` diff --git a/datasette/facets.py b/datasette/facets.py index 76d73e5..07a4034 100644 --- a/datasette/facets.py +++ b/datasette/facets.py @@ -499,11 +499,14 @@ class ManyToManyFacet(Facet): ""outgoing"" ] if len(other_table_outgoing_foreign_keys) == 2: - destination_table = [ - t - for t in other_table_outgoing_foreign_keys - if t[""other_table""] != self.table - ][0][""other_table""] + try: + destination_table = [ + t + for t in other_table_outgoing_foreign_keys + if t[""other_table""] != self.table + ][0][""other_table""] + except IndexError: + import pdb; pdb.pm() # Only suggest if it's not selected already if (""_facet_m2m"", destination_table) in args: continue ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/534/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 463544206,MDU6SXNzdWU0NjM1NDQyMDY=,537,"Populate ""endpoint"" key in ASGI scope",9599,open,0,,,12,2019-07-03T04:54:47Z,2019-07-22T06:03:18Z,,OWNER,,"This is a trick used by Starlette so that other layers of ASGI middleware can see which route was selected. They added it here: https://github.com/encode/starlette/commit/34d0097feb6f057bd050d5057df5a2f96b97384e If Datasette supports it as well we can benefit from it if we integrate this sentry_asgi middleware (probably as a `datasette-sentry` plugin): https://github.com/encode/sentry-asgi/blob/c6a42d44d31f85885b79e4ee898683ecf8104971/sentry_asgi/middleware.py#L34-L35",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/537/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 464987783,MDExOlB1bGxSZXF1ZXN0Mjk1MTI3MjEz,546,Facet by delimiter,9599,open,0,,,2,2019-07-07T20:06:05Z,2019-11-18T23:46:01Z,,OWNER,simonw/datasette/pulls/546,Refs #510,107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/546/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 465003070,MDU6SXNzdWU0NjUwMDMwNzA=,551,Ship many-to-many faceting support (and facet-by-delimiter),9599,open,0,,,2,2019-07-07T23:11:45Z,2019-07-08T15:45:23Z,,OWNER,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/551/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 465019882,MDU6SXNzdWU0NjUwMTk4ODI=,552,"Add --plugin-secret support to ""datasette package""",9599,open,0,,,1,2019-07-08T01:46:47Z,2019-07-08T01:47:30Z,,OWNER,,"Split out from #544. 
I think I should combine this with #347 (renaming `datasette package` to `datasette publish docker`).",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/552/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 465327844,MDU6SXNzdWU0NjUzMjc4NDQ=,553,Potential improvements to facet-by-date,9599,open,0,,,3,2019-07-08T15:37:53Z,2019-07-08T15:41:55Z,,OWNER,,"In addition to #483 Tobias had some useful suggestions on Twitter: https://twitter.com/rixxtr/status/1148253926476701696 > I think for date facets, it might be more meaningful to order them by date, rather than by size? Or offer both? I'm *definitely* often interested in size-over-time, so https://data.rixx.de/django_tickets/tickets?_facet_date=created#facet-created … isn't all that helpful! Screenshot of that link: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/553/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 473288428,MDExOlB1bGxSZXF1ZXN0MzAxNDgzNjEz,564,First proof-of-concept of Datasette Library,9599,open,0,,,1,2019-07-26T10:22:26Z,2023-02-07T15:14:11Z,,OWNER,simonw/datasette/pulls/564,"Refs #417. Run it like this: datasette -d ~/Library Uses a new plugin hook - available_databases() ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/564/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1, 476852861,MDU6SXNzdWU0NzY4NTI4NjE=,568,Add database_color as a configurable option,50906992,open,0,,,1,2019-08-05T13:14:45Z,2023-08-11T05:19:42Z,,NONE,,This would be really useful as it would allow us to tie in with colour schemes.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/568/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 481885279,MDU6SXNzdWU0ODE4ODUyNzk=,569,More advanced connection pooling,9599,open,0,,,4,2019-08-17T13:20:41Z,2019-10-02T22:44:37Z,,OWNER,,"We need a much smarter way of handling database connections. Today, connections are simple: Datasette runs a number of threads (defaults to 3) and each thread gets a threadlocal read-only (or immutable) connection to each attached database - opened on demand. For Datasette Library (#417) I want to support potentially hundreds of attached databases. Datasette Edit (#567) is going to introduce a need for writable connections too. I'd also like to be able to run joins across multiple databases (#283) which further complicates things. Supporting thousands of open SQLite connections at once feels like it won't provide good enough performance (though I should benchmark that to be sure). 
Some kind of connection pooling is likely to be necessary.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/569/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 501773982,MDExOlB1bGxSZXF1ZXN0MzIzOTgzNzMy,579,New connection pooling,9599,open,0,,,1,2019-10-02T23:22:19Z,2019-11-15T22:57:21Z,,OWNER,simonw/datasette/pulls/579,See #569,107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/579/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 503053243,MDU6SXNzdWU1MDMwNTMyNDM=,582,Datasette should not completely crash if one SQLite database is malformed,9599,open,0,,,0,2019-10-06T05:11:43Z,2019-10-06T05:11:43Z,,OWNER,,"If you run Datasette against a number of database files and one of them is malformed, you get this 500 error on the index page: It would be better if Datasette still worked and listed the databases that were NOT malformed, then showed an inline error message just for the one that could not be accessed.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/582/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 507454958,MDU6SXNzdWU1MDc0NTQ5NTg=,596,Handle really wide tables better,9599,open,0,,,9,2019-10-15T20:05:46Z,2022-09-07T00:58:41Z,,OWNER,,"If a table has hundreds of columns the Datasette UI starts getting unwieldy. Addressing this would be neat. One option would be to only select the first 30 columns by default and provide a UI for selecting more.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/596/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 510076368,MDU6SXNzdWU1MTAwNzYzNjg=,605,Support queries at the table level,12617395,open,0,,,2,2019-10-21T15:58:30Z,2019-10-30T18:55:37Z,,NONE,,"Per the issue described in [issue #588](https://github.com/simonw/datasette/issues/588), it was determined queries are not supported at the table level. Per my last comment in the issue, I'd like to request support for this as it would help eliminate errors in the event certain tables are not present in the database.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/605/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 516874735,MDU6SXNzdWU1MTY4NzQ3MzU=,613,Basic join support for table view,9599,open,0,,,1,2019-11-03T19:12:53Z,2019-11-03T19:14:01Z,,OWNER,,"I think it would be possible to support basic foreign key joins on the table page. The user could specify columns that should result in a join (from a set of suggestions similar to how facets work right now) and they could then be passed as `?_join=city_id` arguments. 
This feature will make a lot of sense when combined with the ability to show / hide / customize columns, see #292",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/613/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 520667773,MDU6SXNzdWU1MjA2Njc3NzM=,620,Mechanism for indicating foreign key relationships in the table and query page URLs,9599,open,0,,,6,2019-11-10T22:26:27Z,2021-04-05T03:57:22Z,,OWNER,,"Datasette currently only inflates foreign keys (into hyperlinked names) if it detects them as foreign key constraints in the underlying database. It would be useful if you could specify additional ""foreign keys"" using both `metadata.json` and the querystring - similar to how you can pass `?_fts_table=x` https://datasette.readthedocs.io/en/stable/full_text_search.html#configuring-full-text-search-for-a-table-or-view",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/620/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 1}",, 520681725,MDU6SXNzdWU1MjA2ODE3MjU=,621,Syntax for ?_through= that works as a form field,9599,open,0,,,7,2019-11-11T00:19:03Z,2021-12-18T01:42:33Z,,OWNER,,"The current syntax for `?_through=` uses JSON to avoid any risk of confusion with table or column names that contain special characters. This means you can't target a form field at it. We should be able to support both - `?x.y.z=value` for tables and columns with ""regular"" names, falling back to the current JSON syntax for columns or tables that won't work with the key/value syntax.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/621/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 527670799,MDU6SXNzdWU1Mjc2NzA3OTk=,639,updating metadata.json without recreating the app,172847,open,0,,,6,2019-11-24T09:19:53Z,2019-11-30T06:08:50Z,,NONE,,"I've successfully ""uploaded"" an SQLite database (with a metadata.json file) to heroku using: $ datasette publish heroku so-sales.db -m metadata.json -n so-sales The question is: how can I modify the (small) metadata.json file without having to upload the (large) SQLite database? The directions on heroku indicate I should run: heroku git:clone -a so-sales But this just results in an empty directory with a warning: warning: You appear to have cloned an empty repository. I've been able to ""clone"" the heroku ""app"" using the command: $ heroku slugs:download -a so-sales but this is not a git repository.... Ideally, it seems to me, there'd be an option in the `datasette` CLI to allow a file to be updated, or there'd be some way to create a local git ""clone"" of the app so that the heroku instructions for ""Deploying with git"" would apply. (p.s. I ran `datasette publish heroku -m metadata.json -n so-sales` in the hope that that would not cause the .db file to be wiped, but of course it was.) (p.p.s. 
Thanks for Datasette!)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/639/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 527710055,MDU6SXNzdWU1Mjc3MTAwNTU=,640,Nicer error message for heroku publish name clash,82988,open,0,,,1,2019-11-24T14:57:07Z,2019-12-06T07:19:34Z,,CONTRIBUTOR,,"If you try to publish to Heroku using no set name (i.e. the default `datasette` name) and a project already exists under that name, you get a meaningful error report on the first line followed by Py error messages that drown it out: ``` Creating datasette... ! ▸ Name datasette is already taken Traceback (most recent call last): File ""/usr/local/bin/datasette"", line 10, in <module> sys.exit(cli()) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 764, in __call__ return self.main(*args, **kwargs) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 717, in main rv = self.invoke(ctx) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 555, in invoke return callback(*args, **kwargs) File ""/Users/NNNNN/Library/Python/3.7/lib/python/site-packages/datasette/publish/heroku.py"", line 124, in heroku create_output = check_output(cmd).decode(""utf8"") File ""/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py"", line 411, in check_output **kwargs).stdout File ""/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py"", line 512, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['heroku', 'apps:create', 'datasette', '--json']' returned non-zero exit status 1. ``` It would be neater if: - the Py error message was caught; - the report suggested setting a project name using `-n` etc. It may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/640/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 530468212,MDU6SXNzdWU1MzA0NjgyMTI=,643,Set up some basic benchmarks as part of the unit tests,9599,open,0,,,0,2019-11-29T19:24:19Z,2019-11-29T19:24:19Z,,OWNER,,"https://pypi.org/project/pytest-benchmark/ looks great for this. Here's how to run it as a github action: https://github.com/rhysd/github-action-benchmark/blob/master/examples/pytest/README.md",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/643/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 531502365,MDU6SXNzdWU1MzE1MDIzNjU=,646,Make database level information from metadata.json available in the index.html template,18017473,open,0,,3268330,3,2019-12-02T19:55:10Z,2022-03-15T20:50:34Z,,NONE,,"Did a search on the issues here and didn't find anything related to what I want. 
I want to take the database-level information from the metadata JSON - like title, source and source_url - and use it on the index page. I tried some small tweaks to the Python and HTML files, but failed to get that result. Is there a way? Thanks!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/646/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 534629631,MDU6SXNzdWU1MzQ2Mjk2MzE=,650,Add a glossary to the documentation,9599,open,0,,,3,2019-12-09T00:23:45Z,2022-01-13T22:04:56Z,,OWNER,,"Call it `glossary.rst` - it can use a definition list something like this: ```rst .. _glossary: Glossary ======== Term A definition of the term. Another term Another definition. ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/650/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 548591089,MDU6SXNzdWU1NDg1OTEwODk=,657,Allow creation of virtual tables at startup,1055831,open,0,,,4,2020-01-12T16:10:55Z,2021-01-15T20:24:35Z,,NONE,,"Hi, I've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow. https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html https://github.com/cldellow/sqlite-parquet-vtable This works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the spatialite extension I knew that datasette supports loading extensions in the underlying sqlite instance. However I hit a blocker as the current design only allows SELECT statements to be executed and so I am unable to execute the crucial CREATE VIRTUAL TABLE ......... command that is required to load the data from the parquet file into the table. It seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself. Could this be done as a datasette plugin? Or would this require more fundamental changes at initialisation time? My thoughts are that something at init time could detect that the user was loading a *.parquet file and then switch to a mode where it loads that via the ""CREATE VIRTUAL TABLE..."" rather than loading the *.db file in the default case? I'm happy to contribute code and testing - I just need some pointers on the best approach. Thanks Darren",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/657/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 550293770,MDU6SXNzdWU1NTAyOTM3NzA=,658,How do I use the app.css as style sheet?,49656826,open,0,,,2,2020-01-15T16:27:57Z,2020-02-07T00:29:50Z,,NONE,,"Simon, I'm trying to use the app.css (in the static folder) as a style sheet, but the Datasette instance on Heroku simply ignores it! I read everything about customization here and on readthedocs but still can't make it work. Is this possible? 
Thanks!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/658/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 559964149,MDU6SXNzdWU1NTk5NjQxNDk=,665,Introduce a SQL statement parser in Python,9599,open,0,,,1,2020-02-04T20:36:05Z,2020-02-04T20:36:48Z,,OWNER,,#254 and #653 are both examples of problems that could be solved using a real SQL parser in Python.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/665/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 564833696,MDU6SXNzdWU1NjQ4MzM2OTY=,670,Prototoype for Datasette on PostgreSQL,9599,open,0,,,15,2020-02-13T17:17:55Z,2023-11-17T15:32:21Z,,OWNER,,"I thought this would never happen, but now that I'm deep in the weeds of running SQLite in production for Datasette Cloud I'm starting to reconsider my policy of only supporting SQLite. Some of the factors making me think PostgreSQL support could be worth the effort: - Serverless. I'm getting increasingly excited about writable-database use-cases for Datasette. If it could talk to PostgreSQL then users could easily deploy it on Heroku or other serverless providers that can talk to a managed RDS-style PostgreSQL. - Existing databases. Plenty of organizations have PostgreSQL databases. They can export to SQLite using [db-to-sqlite](https://github.com/simonw/db-to-sqlite) but that's a pretty big barrier to getting started - being able to run `datasette postgresql://connection-string` and start trying it out would be a massively better experience. - Data size. I keep running into use-cases where I want to run Datasette against many GBs of data. SQLite can do this but PostgreSQL is much more optimized for large data, especially given the existence of tools like Citus. - Marketing. Convincing people to trust their data to SQLite is potentially a big barrier to adoption. Even if I've convinced myself it's trustworthy I still have to convince everyone else. - It might not be that hard? If this required a ground-up rewrite it wouldn't be worth the effort, but I have a hunch that it may not be too hard - most of the SQL in Datasette should work on both databases since it's almost all portable SELECT statements. If Datasette did DML this would be a lot harder, but it doesn't. - Plugins! This feels like a natural surface for a plugin - at which point people could add MySQL support and suchlike in the future. 
The above reasons feel strong enough to justify a prototype.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/670/reactions"", ""total_count"": 19, ""+1"": 14, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 5, ""rocket"": 0, ""eyes"": 0}",, 565064079,MDExOlB1bGxSZXF1ZXN0Mzc1MTgwODMy,672,--dirs option for scanning directories for SQLite databases,9599,open,0,,,15,2020-02-14T02:25:52Z,2020-03-27T01:03:53Z,,OWNER,simonw/datasette/pulls/672,Refs #417.,107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/672/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 567902704,MDU6SXNzdWU1Njc5MDI3MDQ=,675,--cp option for datasette publish and datasette package for shipping additional files and directories,141844,open,0,,,12,2020-02-19T22:55:56Z,2020-12-28T18:49:21Z,,NONE,,"I’m working on integrating Datasette into a documentation-oriented publishing workflow internally in my company, and in order to deploy the Docker image created by `datasette package` I need to add an additional file to the image — in my case, it’s a sort of deployment directive. I’ve worked out a way to do this after the image has been created, but it’s convoluted and brittle. So it’d be excellent if there were an additional option for this command, something like `--copy`. I’d envision it looking something like: ```shell $ datasette package --copy /the/source/path:/the/target/path data.db ``` I’d be happy to help design, specify, implement, and test this feature, if you’d be interested. Thanks for the fantastic tools!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/675/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 574021194,MDU6SXNzdWU1NzQwMjExOTQ=,691,--reload should reload server if code in --plugins-dir changes,9599,open,0,,,1,2020-03-02T14:42:21Z,2020-06-14T02:35:17Z,,OWNER,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/691/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 574035432,MDU6SXNzdWU1NzQwMzU0MzI=,692,is_hidden_table context variable on table.html page,9599,open,0,,,1,2020-03-02T15:03:25Z,2020-03-02T15:03:48Z,,OWNER,,It's useful to know if a table is hidden when rendering that page. 
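A sketch of how the view might populate it (`hidden_table_names()` stands in here for whatever mechanism detects hidden tables - an assumption, not necessarily the real internal API):

```python
async def build_table_context(db, table_name):
    # Expose is_hidden_table alongside the existing template context.
    hidden = await db.hidden_table_names()
    return {
        'table': table_name,
        'is_hidden_table': table_name in hidden,
    }
```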
`datasette-configure-fts` for example may want to disallow enabling search on hidden tables.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/692/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 593006814,MDU6SXNzdWU1OTMwMDY4MTQ=,715,Refactor duplicate cell display logic,9599,open,0,,,0,2020-04-03T00:58:11Z,2020-04-03T00:58:11Z,,OWNER,,"The logic for rendering cells in table view and in database (or canned query) view is currently very similar: https://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/base.py#L514-L539 Compared with: https://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/table.py#L104-L195 I'll be changing this a bit in #698 but I should still try to clean this up further in the future.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/715/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 594237015,MDU6SXNzdWU1OTQyMzcwMTU=,718,Plugin idea: datasette-redirects,9599,open,0,,,0,2020-04-05T03:41:38Z,2023-08-30T22:17:31Z,,OWNER,,"I just had to write a one-off custom plugin to redirect niche-museums.com to www.niche-museums.com (https://github.com/simonw/museums/issues/21) - it would be great if this kind of thing could be handled by a configurable plugin. https://github.com/simonw/museums/blob/6b1faf00c463b2228860d4d62d104b11935e01b1/plugins/redirect_www.py",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/718/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,reopened 607223136,MDU6SXNzdWU2MDcyMjMxMzY=,741,"Replace ""datasette publish --extra-options"" with ""--setting""",9599,open,0,,3268330,9,2020-04-27T04:29:04Z,2022-05-12T19:21:16Z,,OWNER,,"See https://github.com/simonw/datasette-publish-now/issues/9#issuecomment-618155764 - the `--extra-options` mechanism is in practice just used to set `--config` options for the instances you publish, but that means you end up with pretty messy-looking commands: datasette publish my.db --extra-options=""--config default_page_size:50 --config sql_time_limit_ms:3500"" A neater design would be to support `--config` as an option for `datasette publish` directly: datasette publish my.db --config default_page_size:50 --config sql_time_limit_ms:3500 ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/741/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 612382643,MDU6SXNzdWU2MTIzODI2NDM=,758,Question: Access to immutable database-path,2181410,open,0,,,6,2020-05-05T07:01:18Z,2020-05-28T08:23:27Z,,NONE,,"Hi Simon. Is there anywhere in the app-context where one can access the hashed urlpath of the database? Currently it's included in the template-context (`databases[0][""path""]`) when rendering urls of the database (eg. `/db-44b06v9/cases`...), but where can I find the hashed url when rendering the index-page? I'm trying to avoid redirects. 
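For illustration, this is the kind of thing I would otherwise compute by hand (assuming - and this is an assumption on my part - that the suffix, eg `db-44b06v9`, is a truncated SHA-256 of the database file contents):

```python
import hashlib

def hashed_db_path(filepath, name):
    # Assumption: the URL suffix is the first seven hex characters of
    # the SHA-256 digest of the database file contents.
    with open(filepath, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return '/{}-{}'.format(name, digest[:7])
```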
Thanks!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/758/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 613422636,MDU6SXNzdWU2MTM0MjI2MzY=,760,Way of seeing full schema for a database,9599,open,0,,,3,2020-05-06T15:46:08Z,2020-05-06T23:49:06Z,,OWNER,,"I find myself wanting to quickly figure out all of the BLOB columns in a database. A `/-/schema` page showing the full schema (actually since it's per-database probably `/dbname/-/schema` or `/-/schema/dbname`) would be really handy. It would need to be carefully constructed from various queries against `sqlite_master` - just doing `select * from sqlite_master where type='table'` isn't quite enough because I also want to show indexes, triggers etc.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/760/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 613491342,MDU6SXNzdWU2MTM0OTEzNDI=,762,Experiment with PRAGMA hard_heap_limit ,9599,open,0,,,0,2020-05-06T17:33:23Z,2020-05-07T03:08:44Z,,OWNER,,"This was added in SQLite 2020-01-22 (3.31.0): https://www.sqlite.org/changes.html#version_3_31_0 > Add the [sqlite3_hard_heap_limit64()](https://www.sqlite.org/c3ref/hard_heap_limit64.html) interface and the corresponding [PRAGMA hard_heap_limit](https://www.sqlite.org/pragma.html#pragma_hard_heap_limit) command. This sounds like it could be a nice extra safety measure.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/762/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 616087149,MDU6SXNzdWU2MTYwODcxNDk=,765,publish heroku should default to currently tagged version,9599,open,0,,,1,2020-05-11T18:24:06Z,2020-05-11T18:25:43Z,,OWNER,,"Had a report that deploying to Heroku was using the previously installed version of Datasette, not the latest. Could be because of this: https://github.com/simonw/datasette/blob/af6c6c5d6f929f951c0e63bfd1c82e37a071b50f/datasette/publish/heroku.py#L172-L179 Heroku documentation recommends pinning to specific versions https://devcenter.heroku.com/articles/python-pip So... we could ensure we default to an install value of `[""datasette>=current_tag""]`.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/765/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 617323873,MDU6SXNzdWU2MTczMjM4NzM=,766,Enable wildcard-searches by default,2181410,open,0,,,2,2020-05-13T10:14:48Z,2021-03-05T16:35:21Z,,NONE,,"Hi Simon. It seems that datasette currently has wildcard-searches disabled by default (along with the boolean search-options, NEAR-queries and more, and despite the docs). If I try out the search-url provided in the [docs](https://datasette.readthedocs.io/en/stable/full_text_search.html#the-table-page-and-table-view-api) (https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort), it does not handle wildcard-searches, and I'm unable to make it work on my datasette-instance. I would argue that wildcard-searches is such a standard query, that it should be enabled by default. Requiring ""_searchmode=raw"" when using prefix-searches seems unnecessary. 
Plus: What happens to non-ASCII searches when using ""_searchmode=raw""? Is the ""escape_fts"" function from datasette.utils ignored? Thanks! /Claus",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/766/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,