id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 1054244712,I_kwDOBm6k_c4-1n9o,1510,Datasette 1.0 documented template context (maybe via API docs),9599,open,0,,3268330,3,2021-11-15T23:23:58Z,2023-06-28T02:05:21Z,,OWNER,,Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1510/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1054243511,I_kwDOBm6k_c4-1nq3,1509,Datasette 1.0 JSON API (and documentation),9599,open,0,,3268330,3,2021-11-15T23:22:45Z,2022-03-15T20:38:56Z,,OWNER,,"The new JSON API in a stable, documented form.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1509/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1054246919,I_kwDOBm6k_c4-1ogH,1511,Review plugin hooks for Datasette 1.0,9599,open,0,,3268330,1,2021-11-15T23:26:05Z,2021-11-16T01:20:14Z,,OWNER,,I need to perform a detailed review of the plugin interface - especially the plugin hooks like [register_facet_classes()](https://docs.datasette.io/en/stable/plugin_hooks.html#register-facet-classes) which I don't yet have complete confidence in.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1511/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1055469073,I_kwDOBm6k_c4-6S4R,1513,Research: CTEs and union all to calculate facets AND query at the same time,9599,closed,0,,,12,2021-11-16T22:26:45Z,2021-11-16T23:41:46Z,2021-11-16T23:41:46Z,OWNER,,"Consider this page: https://global-power-plants.datasettes.com/global-power-plants/global-power-plants?_search=plant&_facet=owner&_facet=country_long&_facet=primary_fuel Datasette needs to run the main query for the rows on that page, a count query for the total query, then a separate query for each of those three specified facets. This is a `_search=` query, so it needs to execute the FTS code once for the rows, again for the count, and then three more times for each of the facets. Could running that query as a CTE and doing the other queries as part of the same large query produce significant speed improvements?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1513/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1056746091,I_kwDOBm6k_c4-_Kpr,1515,Handle foreign keys that point to a non-existent table,9599,open,0,,,0,2021-11-17T23:40:13Z,2021-11-18T01:31:56Z,,OWNER,,"Spotted in https://github.com/simonw/datasette-graphql/issues/79 Demo: https://datasette-graphql-demo.datasette.io/fixtures/bad_foreign_key The foreign key links to a 404 page. 
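For anyone trying to reproduce this locally, a table like the demo's can be created directly - SQLite accepts a foreign key clause that references a table which does not exist: the CREATE succeeds, and with foreign key enforcement off (SQLite's default) nothing ever flags it (a sketch; the referenced table name here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# SQLite does not verify that the referenced table exists at CREATE time,
# so this succeeds and leaves a dangling foreign key like the demo table.
conn.execute('create table bad_foreign_key (other_id integer references missing_table(id))')
```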
![B87009C7-CFCA-4DF9-8FBA-FA3E6CA28EC2](https://user-images.githubusercontent.com/9599/142334788-4d1a4acd-bc87-4426-b333-d46b221afcec.jpeg) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1515/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1049946823,I_kwDOBm6k_c4-lOrH,1502,"Full-text search: No support to unary ""-"" operator",516827,open,0,,,0,2021-11-10T15:11:19Z,2021-11-10T15:11:19Z,,NONE,,"Reference: https://www.sqlite.org/fts3.html#set_operations_using_the_standard_query_syntax Test: https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort+-freedman&_sort=rowid",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1502/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1050163432,I_kwDOBm6k_c4-mDjo,1503,`?_nocol=` removes that column from the filter interface,9599,closed,0,,,1,2021-11-10T18:22:50Z,2021-11-14T05:08:27Z,2021-11-14T04:53:07Z,OWNER,,"e.g. on https://latest.datasette.io/fixtures/sortable?_nocol=sortable This causes weird behaviour when you e.g. facet by a hidden column, since selecting facets and then re-submitting the form will clear the selected filter. ![nocol-bug](https://user-images.githubusercontent.com/9599/141171135-aded71d1-a4cb-4b7f-a4ea-26828fa98906.gif) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1503/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1051277222,I_kwDOBm6k_c4-qTem,1504,Link to ?_size=max at bottom of table page,9599,open,0,,,0,2021-11-11T19:06:33Z,2021-11-11T19:06:33Z,,OWNER,,"This can have text such as ""Show 1,000 rows per page"", based on the max size limit setting. Would make it easier for people to see more data at once without having to know how to hack the URL, similar to the `...` for facet sizes I added in #1337.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1504/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1052247023,I_kwDOBm6k_c4-uAPv,1505,Datasette should have an option to output CSV with semicolons,9599,open,0,,,1,2021-11-12T18:02:21Z,2021-11-16T11:40:52Z,,OWNER,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1505/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1052826038,I_kwDOBm6k_c4-wNm2,1506,Columns beginning with an underscore do not facet correctly,9599,closed,0,,,1,2021-11-14T02:20:32Z,2021-11-14T04:45:21Z,2021-11-14T04:45:21Z,OWNER,,"Datasette treats columns that start with an underscore as querystring parameters it should ignore! 
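(For example, with an illustrative column name: selecting a facet value for a column called `_category` produces `?_category=value`, which Datasette then drops because underscore-prefixed querystring arguments such as `_facet` and `_size` are reserved for its own features.)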
Discovered in https://github.com/simonw/git-history/issues/14#issuecomment-968192464",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1506/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1052851176,I_kwDOBm6k_c4-wTvo,1507,ReadTheDocs build failed for 0.59.2 release,9599,closed,0,,,6,2021-11-14T05:24:34Z,2021-11-14T05:41:55Z,2021-11-14T05:41:55Z,OWNER,,"I had to cancel the 0.59.2 release because ReadTheDocs was failing to build the documentation. https://readthedocs.org/projects/datasette/builds/15268454/ ``` /home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/bin/python -m sphinx -T -b html -d _build/doctrees -D language=en . _build/html Running Sphinx v1.8.5 loading translations [en]... done making output directory... building [mo]: targets for 0 po files that are out of date building [html]: targets for 27 source files that are out of date updating environment: 27 added, 0 changed, 0 removed reading sources... [ 3%] authentication Traceback (most recent call last): File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/cmd/build.py"", line 304, in build_main app.build(args.force_all, filenames) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/application.py"", line 341, in build self.builder.build_update() File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py"", line 347, in build_update len(to_build)) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py"", line 360, in build updated_docnames = set(self.read()) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py"", line 468, in read self._read_serial(docnames) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py"", line 490, in _read_serial self.read_doc(docname) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py"", line 534, in read_doc doctree = read_doc(self.app, self.env, self.env.doc2path(docname)) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/io.py"", line 318, in read_doc pub.publish() File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py"", line 219, in publish self.apply_transforms() File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py"", line 200, in apply_transforms self.document.transformer.apply_transforms() File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py"", line 90, in apply_transforms Transformer.apply_transforms(self) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/transforms/__init__.py"", line 171, in apply_transforms transform.apply(**kwargs) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py"", line 
245, in apply apply_source_workaround(n) File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py"", line 94, in apply_source_workaround for classifier in reversed(node.parent.traverse(nodes.classifier)): TypeError: argument to reversed() must be a sequence Exception occurred: File ""/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py"", line 94, in apply_source_workaround for classifier in reversed(node.parent.traverse(nodes.classifier)): TypeError: argument to reversed() must be a sequence The full traceback has been saved in /tmp/sphinx-err-vkl0oE.log, if you want to report the issue to the developers. Please also report this if it was a user error, so that a better error message can be provided next time. A bug report can be filed in the tracker at . Thanks! ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1507/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1006016302,I_kwDOBm6k_c479pcu,1477,Consider adding request to the documented default template context,9599,open,0,,,0,2021-09-24T02:34:09Z,2021-09-24T02:34:09Z,,OWNER,,I made a plugin for this today but I think perhaps it should be a default thing instead: https://datasette.io/plugins/datasette-template-request,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1477/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 999902754,I_kwDOBm6k_c47mU4i,1473,base logo link visits `undefined` rather than href url,192568,open,0,,,2,2021-09-18T04:17:04Z,2021-09-19T00:45:32Z,,CONTRIBUTOR,,"I have two connected sites: http://www.SaferOrToxic.org (a Hugo website) and: http://disinfectants.SaferOrToxic.org/disinfectants/listN (a datasette table page) The latter is linked as ""The List"" in the former's menu. (I'd love a prettier URL, but that's what I've got.) On: http://disinfectants.SaferOrToxic.org/disinfectants/listN ... all the other menu links should point back to: https://www.SaferOrToxic.org And they do! But the logo, for some reason--though it has an href pointing to: https://www.SaferOrToxic.org Keeps going to this instead: https://disinfectants.saferortoxic.org/disinfectants/undefined What is causing that? How can I fix it? In #1284 back in March, I was doing battle with the index.html template, in a still unresolved issue. (I wanted only a single table page at the root.) But I thought, well, if I can't resolve that, at least I could just point the main website to the datasette page (""The List,"") and then have the List point back to the home website. The menu hrefs to https://www.SaferOrToxic.org work just fine, exactly as they should, from the datasette page. Even the Home link works properly. But the logo link keeps rewriting to: https://disinfectants.saferortoxic.org/disinfectants/undefined This is the HTML: ``` ``` Is this somehow related to cloudflare? Or something in the datasette code? I'm starting to think it's a cloudflare issue. Can I at least rule out it being a datasette issue? My repository is here: https://github.com/mroswell/list-N (BTW, I couldn't figure out how to reference a local image, either, on the datasette side, which is why I'm using the image from the www home page.) 
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1473/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1021550542,I_kwDOBm6k_c4845_O,1482,Support Python 3.10,9599,closed,0,,,2,2021-10-09T00:30:52Z,2021-10-24T22:21:40Z,2021-10-24T22:19:55Z,OWNER,,"I started work on this in #1481 where I found a Python 3.10 bug that needs a workaround in Janus, see: - https://github.com/aio-libs/janus/issues/358 This is a tracking issue for anything else that shows up. This is also needed for the Homebrew package to upgrade to 3.10: - https://github.com/Homebrew/homebrew-core/pull/86932",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1482/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1021849766,I_kwDOBm6k_c486DCm,1483,Running a search on page 2 of results should not preserve ?_next=,9599,closed,0,,,0,2021-10-10T01:18:12Z,2021-10-13T21:08:10Z,2021-10-13T21:08:10Z,OWNER,,Reported by @eigenfoo in https://github.com/simonw/datasette/issues/1470,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1483/reactions"", ""total_count"": 2, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1006781949,I_kwDOBm6k_c48AkX9,1478,Documentation Request: Feature alternative ID instead of default ID,192568,open,0,,,0,2021-09-24T19:56:13Z,2021-09-25T16:18:54Z,,CONTRIBUTOR,,"My data already has an ID that comes from a federal agency. Would love to have documentation on how to modify the template to: - Remove the generated ID from the table - Link the federal ID to the detail page - and to ensure that the JSON file uses that as the ID. I'd be happy to include the database ID in the export, but not as a key. I don't want to remove the ID from the database, though, because my experience with the federal agency is that data often has anomalies. I don't want all hell to break loose if they end up applying the same ID to multiple rows (which they haven't done yet). I just don't want it to display in the table or the data exports. Perhaps this isn't a template issue, maybe more of a db manipulation... Margie",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1478/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1010112818,I_kwDOBm6k_c48NRky,1479,"Win32 ""used by another process"" error with datasette publish",76450761,open,0,,,7,2021-09-28T19:12:00Z,2023-09-07T02:14:16Z,,NONE,,"I unfortunately was not successful to deploy to fly.io. Please see the details above of the three scenarios that I took. I am also new to datasette. Failed to deploy. Attaching logs: 1. Tried with an app created via `flyctl apps create frosty-fog-8565` and the ran `datasette publish fly covid.db --app frosty-fog-8565` ``` Deploying frosty-fog-8565 ==> Validating app configuration --> Validating app configuration done Services TCP 80/443 ⇢ 8080 Error error connecting to docker: An unknown error occured. 
Traceback (most recent call last):
  File ""c:\users\grott\anaconda3\lib\runpy.py"", line 193, in _run_module_as_main
    ""__main__"", mod_spec)
  File ""c:\users\grott\anaconda3\lib\runpy.py"", line 85, in _run_code
    exec(code, run_globals)
  File ""C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py"", line 7, in
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 829, in __call__
    return self.main(*args, **kwargs)
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 782, in main
    rv = self.invoke(ctx)
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 610, in invoke
    return callback(*args, **kwargs)
  File ""c:\users\grott\anaconda3\lib\site-packages\datasette_publish_fly\__init__.py"", line 156, in fly
    ""--remote-only"",
  File ""c:\users\grott\anaconda3\lib\contextlib.py"", line 119, in __exit__
    next(self.gen)
  File ""c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py"", line 451, in temporary_docker_directory
    tmp.cleanup()
  File ""c:\users\grott\anaconda3\lib\tempfile.py"", line 811, in cleanup
    _shutil.rmtree(self.name)
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\grott\\AppData\\Local\\Temp\\tmpgcm8cz66\\frosty-fog-8565'
```

2. Tried also with an app that gets autogenerated when running `flyctl launch`. This also generates the .toml file. 
Then ran `datasette publish fly covid.db --app dark-feather-168`, **but got a different error now**:

```
Deploying dark-feather-168
==> Validating app configuration
Error not possible to validate configuration: server returned Post ""https://api.fly.io/graphql"": unexpected EOF

Traceback (most recent call last):
  File ""c:\users\grott\anaconda3\lib\runpy.py"", line 193, in _run_module_as_main
    ""__main__"", mod_spec)
    exec(code, run_globals)
  File ""C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py"", line 7, in
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 829, in __call__
    return self.main(*args, **kwargs)
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 782, in main
    rv = self.invoke(ctx)
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 610, in invoke
    return callback(*args, **kwargs)
  File ""c:\users\grott\anaconda3\lib\site-packages\datasette_publish_fly\__init__.py"", line 156, in fly
    ""--remote-only"",
  File ""c:\users\grott\anaconda3\lib\contextlib.py"", line 119, in __exit__
    next(self.gen)
  File ""c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py"", line 451, in temporary_docker_directory
    tmp.cleanup()
  File ""c:\users\grott\anaconda3\lib\tempfile.py"", line 811, in cleanup
    _shutil.rmtree(self.name)
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File ""c:\users\grott\anaconda3\lib\shutil.py"", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\grott\\AppData\\Local\\Temp\\tmpnoyewcre\\dark-feather-168'
```

These are also the contents of the generated **.toml file** in the 2nd scenario:

```
# fly.toml file generated for dark-feather-168 on 2021-09-28T20:35:44+02:00

app = ""dark-feather-168""

kill_signal = ""SIGINT""
kill_timeout = 5
processes = []

[env]

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = [""app""]
  protocol = ""tcp""
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = ""connections""

  [[services.ports]]
    handlers = [""http""]
    port = 80

  [[services.ports]]
    handlers = [""tls"", ""http""]
    port = 443

  [[services.tcp_checks]]
    grace_period = ""1s""
    interval = ""15s""
    restart_limit = 6
    timeout = ""2s""
```

3. But also trying `datasette package covid.db` to create a local DOCKERFILE to later try to push it via `flyctl deploy` fails as well. 
```[+] Building 147.3s (11/11) FINISHED => [internal] load build definition from Dockerfile 0.2s => => transferring dockerfile: 396B 0.0s => [internal] load .dockerignore 0.1s => => transferring context: 2B 0.0s => [internal] load metadata for docker.io/library/python:3.8 4.7s => [auth] library/python:pull token for registry-1.docker.io 0.0s => [internal] load build context 0.1s => => transferring context: 82.37kB 0.0s => [1/5] FROM docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3 108.3s => => resolve docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5 0.0s => => sha256:56182bcdf4d4283aa1f46944b4ef7ac881e28b4d5526720a4e9ba03a4730846a 2.22kB / 2.22kB 0.0s => => sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 54.93MB / 54.93MB 21.6s => => sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 10.87MB / 10.87MB 15.5s => => sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5c5b6 1.86kB / 1.86kB 0.0s => => sha256:ff08f08727e50193dcf499afc30594c47e70cc96f6fcfd1a01240524624264d0 8.65kB / 8.65kB 0.0s => => sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 5.15MB / 5.15MB 6.4s => => sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 54.57MB / 54.57MB 37.7s => => sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 196.45MB / 196.45MB 82.3s => => sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 6.29MB / 6.29MB 26.6s => => extracting sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 5.4s => => sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 16.81MB / 16.81MB 40.5s => => extracting sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 0.6s => => extracting sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 0.6s => => sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 232B / 232B 40.1s => => extracting sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 5.8s => => sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 2.35MB / 2.35MB 41.8s => => extracting sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 18.8s => => extracting sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 1.2s => => extracting sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 2.9s => => extracting sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 0.0s => => extracting sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 0.8s => [2/5] COPY . 
/app 2.3s => [3/5] WORKDIR /app 0.2s => [4/5] RUN pip install -U datasette 26.9s => [5/5] RUN datasette inspect covid.db --inspect-file inspect-data.json 3.1s => exporting to image 1.2s => => exporting layers 1.2s => => writing image sha256:b5db0c205cd3454c21fbb00ecf6043f261540bcf91c2dfc36d418f1a23a75d7a 0.0s Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them Traceback (most recent call last): ""__main__"", mod_spec) File ""c:\users\grott\anaconda3\lib\runpy.py"", line 85, in _run_code exec(code, run_globals) File ""C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py"", line 7, in File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 782, in main rv = self.invoke(ctx) File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""c:\users\grott\anaconda3\lib\site-packages\click\core.py"", line 610, in invoke return callback(*args, **kwargs) File ""c:\users\grott\anaconda3\lib\site-packages\datasette\cli.py"", line 283, in package call(args) File ""c:\users\grott\anaconda3\lib\contextlib.py"", line 119, in __exit__ next(self.gen) File ""c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py"", line 451, in temporary_docker_directory tmp.cleanup() File ""c:\users\grott\anaconda3\lib\tempfile.py"", line 811, in cleanup _shutil.rmtree(self.name) File ""c:\users\grott\anaconda3\lib\shutil.py"", line 516, in rmtree return _rmtree_unsafe(path, onerror) File ""c:\users\grott\anaconda3\lib\shutil.py"", line 395, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File ""c:\users\grott\anaconda3\lib\shutil.py"", line 404, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File ""c:\users\grott\anaconda3\lib\shutil.py"", line 402, in _rmtree_unsafe os.rmdir(path) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\grott\\AppData\\Local\\Temp\\tmpkb27qid3\\datasette'```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1479/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1023243105,I_kwDOBm6k_c48_XNh,1486,pipx installation instructions for plugins don't reference pipx inject,41546558,closed,0,,,0,2021-10-12T00:43:42Z,2021-10-13T21:09:11Z,2021-10-13T21:09:11Z,CONTRIBUTOR,,"The datasette [installation instructions](https://github.com/simonw/datasette/blob/main/docs/installation.rst) discuss how to install with pipx, how to upgrade with pipx, and how to upgrade plugins with pipx but do not mention how to install a plugin with pipx. You discussed this on your [blog](https://til.simonwillison.net/python/installing-upgrading-plugins-with-pipx) but looks like this didn't make it in when you updated the docs for pipx (#756). 
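(For reference, the missing piece is the `pipx inject` pattern from that TIL - e.g. `pipx inject datasette datasette-vega` to install a plugin into a pipx-managed Datasette; the plugin name here is just an example.)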
I'll submit a PR shortly to fix this.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1486/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1015646369,I_kwDOBm6k_c48iYih,1480,Exceeding Cloud Run memory limits when deploying a 4.8G database,110420,open,0,,,9,2021-10-04T21:20:24Z,2022-10-07T04:39:10Z,,CONTRIBUTOR,,"When I try to deploy a 4.8G SQLite database to Google Cloud Run, I get this error message: > Memory limit of 8192M exceeded with 8826M used. Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits Unfortunately, the maximum amount of memory that can be allocated to an instance is 8192M. Naively profiling the memory usage of running Datasette with this database locally on my MacBook shows the following memory usage (using Activity Monitor) when I just start up Datasette locally: - Real Memory Size: 70.6 MB - Virtual Memory Size: 4.51 GB - Shared Memory Size: 2.5 MB - Private Memory Size: 57.4 MB I'm trying to understand if there's a query or other operation that gets run during container deployment that causes memory use to be so large and if this can be avoided somehow. This is somewhat related to #1082, but on a different platform, so I decided to open a new issue.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1480/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1025754125,I_kwDOBm6k_c49I8QN,1488,Upgrade to httpx 0.20.0 (request() got an unexpected keyword argument 'allow_redirects'),9599,closed,0,,,5,2021-10-13T22:37:22Z,2021-10-14T18:03:45Z,2021-10-14T18:03:45Z,OWNER,,This is caused by a change made to `httpx` in https://github.com/encode/httpx/releases/tag/0.20.0,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1488/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1028115674,I_kwDOBm6k_c49R8za,1493,`--get '/:memory:.json?sql=select+3*5'` error with datasette 0.59,1580956,closed,0,,,1,2021-10-16T18:22:22Z,2021-10-19T04:39:11Z,2021-10-19T04:39:11Z,NONE,,"👋 trying to upgrade the formula to use the latest release, but runs into some regression test issue with `--get` command. My QQ is does this `datasette --get '/:memory:.json?sql=select+3*5'` supposed to return 15? Thanks! relates to https://github.com/Homebrew/homebrew-core/pull/87369",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1493/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1033864602,I_kwDOBm6k_c49n4Wa,1496,Named parameters docs should include an example of a cast,9599,closed,0,,,1,2021-10-22T18:56:04Z,2021-10-22T19:38:23Z,2021-10-22T19:34:27Z,OWNER,,"https://docs.datasette.io/en/stable/sql_queries.html#named-parameters It's not obvious that the values from parameters are always SQLite strings, which means that you can't do e.g. integer comparisons on them without casting them first. 
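A self-contained demonstration of the gotcha, using Python's `sqlite3` module directly (a sketch - the parameter name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')

# Bound as a string - as Datasette querystring parameters always are - :n is
# compared as TEXT, and in SQLite every TEXT value sorts after every INTEGER,
# so this prints 1 (true) even though 2 < 5 numerically.
print(conn.execute('select :n > 5', {'n': '2'}).fetchone()[0])

# Casting first restores the numeric comparison - this prints 0.
print(conn.execute('select cast(:n as integer) > 5', {'n': '2'}).fetchone()[0])
```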
The documentation here should include an example of this.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1496/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1034535001,I_kwDOBm6k_c49qcBZ,1497,"Publish to Docker Hub failing with ""libcrypt.so.1: cannot open shared object file""",9599,closed,0,,,18,2021-10-24T22:57:07Z,2023-01-18T17:13:45Z,2021-10-24T23:36:55Z,OWNER,,"This means the Datasette 0.59.1 release has not been published to Docker Hub. Here's where that failed: https://github.com/simonw/datasette/runs/3991043374?check_suite_focus=true ``` Preparing to unpack .../libc6_2.32-4_amd64.deb ... debconf: unable to initialize frontend: Dialog debconf: (TERM is not set, so the dialog frontend is not usable.) debconf: falling back to frontend: Readline debconf: unable to initialize frontend: Readline debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/x86_64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.) debconf: falling back to frontend: Teletype Checking for services that may need to be restarted... Checking init scripts... Unpacking libc6:amd64 (2.32-4) over (2.28-10) ... Setting up libc6:amd64 (2.32-4) ... /usr/bin/perl: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory dpkg: error processing package libc6:amd64 (--configure): installed libc6:amd64 package post-installation script subprocess returned error exit status 127 Errors were encountered while processing: libc6:amd64 E: Sub-process /usr/bin/dpkg returned an error code (1) The command '/bin/sh -c apt-get update && apt-get -y --no-install-recommends install software-properties-common && add-apt-repository ""deb http://httpredir.debian.org/debian sid main"" && apt-get update && apt-get -t sid install -y --no-install-recommends libsqlite3-mod-spatialite && apt-get remove -y software-properties-common && apt clean && rm -rf /var/lib/apt && rm -rf /var/lib/dpkg/info/*' returned a non-zero code: 100 ``` Same problem when I attempted to publish using the ""Push specific Docker tag"" workflow: https://github.com/simonw/datasette/runs/3991059912?check_suite_focus=true",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1497/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1072106103,I_kwDOBm6k_c4_5wp3,1542,feature request: order and dependency of plugins (that use js),33631,open,0,,,1,2021-12-06T12:40:45Z,2021-12-15T17:47:08Z,,NONE,,"I have been playing with datasette for the last couple of weeks and it is great! I am a big fan of `datasette-cluster-map` and wanted to enhance it a bit with a what I would call a sub-plugin. I basically want to add more controls to the map that cluster map provides. I have been looking into its code and how the plugin management works, but it seems what I am trying to do is not doable without hacks in js. Basically what would like to have is a way to say load my plugin after the plugins I depend on have been loaded and rendered. 
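(One conventional browser-side answer would be for the upstream plugin to dispatch a custom DOM event when it finishes rendering, which dependent plugins could listen for - but as far as I can tell nothing in the current JS plugin story defines such an event, so that is only a hypothetical pattern.)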
There seems to be no prior art where plugins have these dependencies on the js level, so I was wondering if that could be added or, if it already exists, how to do it. Basically what I want to do is: my-awesome-plugin has a dependency on datasette-cluster-map. Whenever datasette-cluster-map has finished rendering on page load, call my plugin, but no earlier. To make that work, datasette probably needs some total order in which plugins are loaded and initialized. Since I am new to datasette, I may be missing something obvious, so please let me know if the above makes no sense.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1542/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1057996111,I_kwDOBm6k_c4_D71P,1517,Let `register_routes()` over-ride default routes within Datasette,9599,closed,0,,3268330,2,2021-11-19T00:22:15Z,2021-11-19T03:20:00Z,2021-11-19T03:07:27Z,OWNER,,"See https://github.com/simonw/datasette/issues/878#issuecomment-973554024_ - right now `register_routes()` can't replace default Datasette routes. It would be neat if plugins could do this - especially if there was a neat documented way for them to then re-dispatch to the original route code after making some kind of modification.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1517/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1058072543,I_kwDOBm6k_c4_EOff,1518,Complete refactor of TableView and table.html template,9599,open,0,,3268330,45,2021-11-19T02:55:16Z,2022-03-15T18:35:49Z,,OWNER,,"Split from #878. The current `TableView` class is by far the most complex part of Datasette, and the most difficult to work on: https://github.com/simonw/datasette/blob/0.59.2/datasette/views/table.py In #878 I started exploring a new pattern for building views. In doing so it became clear that `TableView` is the first beast that I need to slay - if I can refactor that into something neat the pattern for building other views will emerge as a natural consequence. I've been trying to build this as a `register_routes()` plugin, as originally suggested in #870 - though unfortunately it looks like those plugins can't replace existing Datasette default views at the moment, see #1517. [UPDATE: I was wrong about this, plugins can over-ride default views just fine] I also know that I want to have a fully documented template context for `table.html` as a major step on the way to Datasette 1.0, see #1510. All of this adds up to the `TableView` refactor being a major project that will unblock a whole flurry of other things - so I'm going to work on that in this separate issue.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1518/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1058790545,I_kwDOBm6k_c4_G9yR,1519,base_url is omitted in JSON and CSV views,157158,closed,0,,,22,2021-11-19T18:10:45Z,2021-12-01T17:50:09Z,2021-11-20T19:11:21Z,NONE,,"I have a datasette deployment, using Apache2 to reverse proxy:

    ProxyPass /ged http://thor.phfactor.net:8001
    ProxyPreserveHost On

In settings.json I have

```json
{
    ""base_url"": ""/ged/"",
    ""trace_debug"": 1,
    ""template_debug"": 1
}
```

and datasette works correctly. 
However, if you view a query and then click on the 'This data as json, CSV' links, both of them omit the base_url prefix and are therefore 404.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1519/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1058803238,I_kwDOBm6k_c4_HA4m,1520,Pattern for avoiding accidental URL over-rides,9599,open,0,,,1,2021-11-19T18:28:05Z,2021-11-19T18:29:26Z,,OWNER,,"Following #1517 I'm experimenting with a plugin that does this:

```python
@hookimpl
def register_routes():
    return [
        (r""/(?P[^/]+)/(?P[^/]+?)$"", Table().view),
    ]
```

This is supposed to replace the default table page with new code... but there's a problem: `/-/versions` on that instance now returns 404 `Database '-' does not exist`! Need to figure out a pattern to avoid that happening. Plugins get to add their routes before Datasette's default routes, which is why this is happening here.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1520/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1058815557,I_kwDOBm6k_c4_HD5F,1521,Docker configuration for exercising Datasette behind Apache mod_proxy,9599,closed,0,,,10,2021-11-19T18:46:18Z,2021-11-19T20:32:29Z,2021-11-19T20:32:29Z,OWNER,,"> Having a live demo running on Cloud Run that proxies through Apache and uses `base_url` would be incredibly useful for replicating and debugging this kind of thing. I wonder how hard it is to run Apache and `mod_proxy` in the same Docker container as Datasette? _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1519#issuecomment-974310208_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1521/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1058896236,I_kwDOBm6k_c4_HXls,1522,Deploy a live instance of demos/apache-proxy,9599,closed,0,,,34,2021-11-19T20:32:55Z,2021-11-23T03:00:34Z,2021-11-20T18:51:56Z,OWNER,,"> I'll get this working on my laptop first, but then I want to get it up and running on Cloud Run - maybe with a GitHub Actions workflow in this repo that re-deploys it on manual execution. 
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1521#issuecomment-974322178_ I started by following https://ahmet.im/blog/cloud-run-multiple-processes-easy-way/ - see example in https://github.com/ahmetb/multi-process-container-lazy-solution",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1522/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1059209412,I_kwDOBm6k_c4_IkDE,1523,Come up with a more elegant solution for base_url than ds.urls.path(),9599,open,0,,,0,2021-11-20T19:05:22Z,2021-11-20T19:05:22Z,,OWNER,,"While fixing #1519 I added a lot of ugly code that looks like this: https://github.com/simonw/datasette/blob/08947fa76433d18988aa1ee1d929bd8320c75fe2/datasette/facets.py#L228-L230 See these two commits in particular: fe687fd0207c4c56c4778d3e92e3505fc4b18172 and 08947fa76433d18988aa1ee1d929bd8320c75fe2 It would be great to come up with a less verbose and error-prone way of handling this problem.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1523/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1059219106,I_kwDOBm6k_c4_Imai,1524,"Improve Apache proxy documentation, link to demo",9599,closed,0,,,4,2021-11-20T20:03:14Z,2021-11-20T23:34:03Z,2021-11-20T23:34:03Z,OWNER,,"> The latest demo is now live at https://datasette-apache-proxy-demo.fly.dev/prefix/fixtures/sortable?_facet=pk2 _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1519#issuecomment-974697824_ I'm going to put out 0.59.3 bugfix release with this, but I'd like to first improve the documentation on https://docs.datasette.io/en/stable/deploying.html#apache-proxy-configuration to highlight the new demo.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1524/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1059549523,I_kwDOBm6k_c4_J3FT,1526,"Add to vercel.json, rather than overwriting it.",192568,closed,0,,,2,2021-11-22T00:47:12Z,2021-11-22T04:49:45Z,2021-11-22T04:13:47Z,CONTRIBUTOR,,"I'd like to be able to add to vercel.json. But Datasette overwrites whatever I put in that file. I originally reported this here: https://github.com/simonw/datasette-publish-vercel/issues/51 In that case, I wanted to do a rewrite... and now I need to do 301 redirects (because we had to rename our site). Can this be addressed? ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1526/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1059555791,I_kwDOBm6k_c4_J4nP,1527,Columns starting with an underscore behave poorly in filters,9599,closed,0,,7571612,7,2021-11-22T01:01:36Z,2022-01-14T00:57:08Z,2022-01-14T00:57:08Z,OWNER,,"Similar bug to #1525 (and #1506 before it). Start on https://latest.datasette.io/fixtures/facetable?_facet=_neighborhood - then select a neighborhood - then try to remove that filter using the little ""x"" and submitting the form again. 
![filter-bug](https://user-images.githubusercontent.com/9599/142786754-31d265a2-944d-4ea2-af6f-305d445a2ccb.gif) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1527/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1059509927,I_kwDOBm6k_c4_Jtan,1525,"""Links from other tables"" broken for columns starting with underscore",9599,closed,0,,,3,2021-11-21T22:55:08Z,2021-11-30T06:39:01Z,2021-11-30T06:34:35Z,OWNER,,"Same bug as #1506, this time it's this link or the row page: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1525/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1060631257,I_kwDOBm6k_c4_N_LZ,1528,"Add new `""sql_file""` key to Canned Queries in metadata?",15178711,open,0,,,3,2021-11-22T21:58:01Z,2022-06-10T03:23:08Z,,CONTRIBUTOR,,"Currently for canned queries, you have to inline SQL in your `metadata.yaml` like so:

```yaml
databases:
  fixtures:
    queries:
      neighborhood_search:
        sql: |-
          select neighborhood, facet_cities.name, state
          from facetable
            join facet_cities on facetable.city_id = facet_cities.id
          where neighborhood like '%' || :text || '%'
          order by neighborhood
        title: Search neighborhoods
```

This works fine, but for a few reasons, I usually have my canned queries already written in separate `.sql` files. I'd like to re-use those instead of re-writing them. So, I'd like to see a new `""sql_file""` key that works like so:

`metadata.yaml`:

```yaml
databases:
  fixtures:
    queries:
      neighborhood_search:
        sql_file: neighborhood_search.sql
        title: Search neighborhoods
```

`neighborhood_search.sql`:

```sql
select neighborhood, facet_cities.name, state
from facetable
  join facet_cities on facetable.city_id = facet_cities.id
where neighborhood like '%' || :text || '%'
order by neighborhood
```

Both of these would work in the exact same way, where Datasette would instead open + include `neighborhood_search.sql` on startup. 
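One possible implementation shape - purely a sketch of the idea, not Datasette's actual code, and with all names hypothetical - would be to resolve the path relative to `metadata.yaml` at startup:

```python
from pathlib import Path

def resolve_canned_query(metadata_path, query):
    # If the canned query uses the proposed sql_file key, read the SQL from a
    # file that lives next to metadata.yaml; otherwise return it unchanged.
    if 'sql_file' in query:
        sql = (Path(metadata_path).parent / query['sql_file']).read_text()
        query = dict(query, sql=sql)
    return query
```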
A few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml: - Keeping SQL in standalone SQL files means syntax highlighting and other text editor integrations in my code - Multiline strings in yaml, while functional, are a tad cumbersome and are hard to edit - Works well with other tools (can pipe `.sql` files into the `sqlite3` CLI, or use with other SQLite clients easier) - Typically my canned queries are quite long compared to everything else in my metadata.yaml, so I'd love to separate it where possible Let me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1528/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1073712378,I_kwDOBm6k_c4__4z6,1544,Code that detects the label column for a table is case-sensitive,9599,closed,0,,,2,2021-12-07T20:01:25Z,2021-12-07T20:03:43Z,2021-12-07T20:03:43Z,OWNER,,I just noticed that a column called `Name` is not being picked up as the label column for a table.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1544/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1065429936,I_kwDOBm6k_c4_gSuw,1532,Use datasette-table Web Component to guide the design of the JSON API for 1.0,9599,open,0,,3268330,4,2021-11-28T20:37:18Z,2022-03-16T20:13:34Z,,OWNER,,"I realized that one of the reasons I'm having trouble committing to nailing down the JSON API for 1.0 is that I don't use it much myself - I use the `?_shape=array` one quite often, but I don't have any projects that are using the default, more fully-featured API. As an experiment I built a Web Component for embedding Datasette tables on pages - https://github.com/simonw/datasette-table - and I think it's actually going to be a really useful tool for helping me dog food the v1.0 API design.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1532/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1065431383,I_kwDOBm6k_c4_gTFX,1533,"Add `Link: rel=""alternate""` header pointing to JSON for a table/query",9599,closed,0,,3268330,4,2021-11-28T20:43:25Z,2022-02-02T07:56:51Z,2022-02-02T07:49:33Z,OWNER,,"Originally explored in https://github.com/simonw/datasette-notebook/issues/2#issuecomment-980789406 - I wanted an efficient way to scan a list of URLs and figure out which if any of those corresponded to Datasette tables, canned queries or SQL output that could be represented as a table on a page. It looks like a neat way to do that is with ` Link:` header like this: `Link: http://127.0.0.1:8058/fixtures/compound_three_primary_keys.json; rel=""alternate""; type=""application/datasette+json""` I can put a ` The query_only pragma prevents data changes on database files when enabled. When this pragma is enabled, any attempt to CREATE, DELETE, DROP, INSERT, or UPDATE will result in an [SQLITE_READONLY](https://www.sqlite.org/rescode.html#readonly) error. However, the database is not truly read-only. 
You can still run a [checkpoint](https://www.sqlite.org/wal.html#ckpt) or a [COMMIT](https://www.sqlite.org/lang_transaction.html) and the return value of the [sqlite3_db_readonly()](https://www.sqlite.org/c3ref/db_readonly.html) routine is not affected. Would it be worth adding this as an extra protection against accidental writes to a DB file over a read-only connection?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1539/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1068791148,I_kwDOBm6k_c4_tHVs,1540,Idea: hover to reveal details of linked row,9599,open,0,,,6,2021-12-01T19:28:07Z,2021-12-09T23:38:39Z,,OWNER,," Hovering over that could work a little bit like GitHub issue links: ![hover](https://user-images.githubusercontent.com/9599/144300537-9cd9e9af-ac16-42db-842f-37661bc94063.gif) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1540/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1069881276,I_kwDOBm6k_c4_xRe8,1541,Different default layout for row page,9599,open,0,,,1,2021-12-02T18:56:36Z,2021-12-02T18:56:54Z,,OWNER,,"The row page displays as a table even though it only has one table row. maybe default to the same display as the narrow page version, even for wide pages?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1541/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1955676270,I_kwDOBm6k_c50kUBu,2201,Discord invite link is invalid,11708906,open,0,,,0,2023-10-21T21:50:05Z,2023-10-21T21:50:05Z,,NONE,,"https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw and returns the following: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2201/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1977726056,I_kwDOBm6k_c514bRo,2203,custom plugin not seen as sql function,7113541,open,0,,,0,2023-11-05T10:30:19Z,2023-11-05T10:30:19Z,,NONE,,"Hi, I'm not sure if this is the right repo for this issue. I'm using datasette with the parquet (to read a duckdb), and jellyfish plugins. Both work perfectly. Now I need to create a simple plugin that uses the python rouge package and returns a similarity score (similarly to how the jellyfish plugin works). 
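(For context, the hello_world example mentioned below registers its function via the `prepare_connection` plugin hook, roughly like this - note that `create_function` acts on a SQLite connection, which may be relevant when the query is actually executed by DuckDB:)

```python
from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    # Registers a zero-argument SQL function on each SQLite connection.
    conn.create_function('hello_world', 0, lambda: 'Hello world!')
```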
If I create a custom plugin, even the example hello_world one, copied directly from the tutorial, I get the following error: ```duckdb.duckdb.CatalogException: Catalog Error: Scalar Function with name hello_world does not exist!``` Since the jellyfish plugin doesn't do anything more complex, I'm wondering if there is some other kind of issue with my setup.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2203/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1978023780,I_kwDOBm6k_c515j9k,2205,request.post_vars() method obliterates form keys with multiple values,9599,open,0,,8755003,3,2023-11-05T23:25:08Z,2023-11-06T04:10:34Z,,OWNER,,"https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139 In GET requests you can do `?foo=1&foo=2` - you can do the same in POST requests, but the `dict()` call here eliminates those duplicates. You can't even try calling `post_body()` and implement your own custom parsing because of: - #2204",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2205/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1978022687,I_kwDOBm6k_c515jsf,2204,request.post_body() can only be called once,9599,open,0,,,0,2023-11-05T23:22:03Z,2023-11-05T23:23:23Z,,OWNER,,"This code here: https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L127-L135 It consumes the messages, which means if you try to call it a second time you won't be able to get at the body. This is efficient - we don't end up with a `request` object property with potentially megabytes of content that we never look at again - but it's inconvenient for cases like middleware or functions where we don't know if the body has been consumed yet or not. Potential solution: set `request._body` the first time it is called, and return that on subsequent calls. Potential optimization: only do this for bodies that are shorter than a certain threshold - maybe 1MB - and raise an exception if you attempt to call `post_body()` multiple times against one of those larger bodies. 
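A sketch of the plain caching idea (the `_body` attribute name comes from the suggestion above; the receive loop mirrors the existing method):

```python
async def post_body(self):
    # Consume the ASGI receive channel once, then cache the result on the
    # request object so that later calls can re-read the same bytes.
    if not hasattr(self, '_body'):
        body = b''
        more_body = True
        while more_body:
            message = await self.receive()
            assert message['type'] == 'http.request', message
            body += message.get('body', b'')
            more_body = message.get('more_body', False)
        self._body = body
    return self._body
```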
I'm a bit nervous about that option though, since it could result in errors that don't show up in testing but do show up in production.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2204/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1994845152,I_kwDOBm6k_c525uvg,2207,ModuleNotFoundError: No module named 'click_default_group,283441,open,0,,,0,2023-11-15T14:04:32Z,2023-11-15T14:04:32Z,,NONE,,"No matter what I do, I'm getting this error: ``` $ datasette Traceback (most recent call last): File ""/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/bin/datasette"", line 5, in from datasette.cli import cli File ""/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/lib/python3.11/site-packages/datasette/cli.py"", line 6, in from click_default_group import DefaultGroup ModuleNotFoundError: No module named 'click_default_group' ``` I have datasette in my dependencies like this: ```toml [tool.poetry.group.dev.dependencies] datasette = {version = ""1.0a7"", allow-prereleases = true} ``` I had the latest regular version (not pre-release) there originally, but the result was the same: ```toml [tool.poetry.group.dev.dependencies] datasette = ""0.64.5"" ``` Full pyproject.toml is at https://github.com/honzajavorek/junior.guru/ Previously datasette worked for me, but I guess something had to upgrade and now I can't even launch it.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2207/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1994857251,I_kwDOBm6k_c525xsj,2208,No suggested facets when a column named 'value' is included,198537,open,0,,,1,2023-11-15T14:11:17Z,2023-11-15T14:18:59Z,,CONTRIBUTOR,,"When a column named 'value' is included there are no suggested facets is shown as the query uses an alias of 'value'. https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L168-L174 Currently the following is shown (from https://latest.datasette.io/fixtures/facetable) ![image](https://github.com/simonw/datasette/assets/198537/a919509a-ea88-461b-b25b-8b776720c7c5) When I add a column named 'value' only the JSON facets are processed. ![image](https://github.com/simonw/datasette/assets/198537/092bd0b3-4c20-434e-88f8-47e2b8994a1d) I think that not using aliases could be a solution (except if someone wants to use a column named `count(*)` though this seems to be unlikely). I'll open a PR with that. There is also a TODO with a similar question in the same file. I have not looked into that yet. https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L512",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2208/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 2028698018,I_kwDOBm6k_c5463mi,2213,feature request: gzip compression of database downloads,536941,open,0,,,1,2023-12-06T14:35:03Z,2023-12-06T15:05:46Z,,CONTRIBUTOR,,"At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed. 
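As a quick sanity check of the potential win, the saving can be measured with Python's `gzip` module against any local SQLite file (a sketch; assumes an `example.db` on disk):

```python
import gzip
from pathlib import Path

raw = Path('example.db').read_bytes()
# SQLite files tend to compress well, particularly with repetitive text columns.
print(len(raw), len(gzip.compress(raw, compresslevel=6)))
```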
This is similar to #1213, but for me I don't need Datasette to compress HTML and JSON because my CDN layer does it for me. However, Cloudflare at least will not compress a mimetype of ""application"" (see the list of mimetypes: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2213/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 2019811176,I_kwDOBm6k_c54Y99o,2211,Unreachable exception handlers for `sqlite3.OperationalError`,1214074,open,0,,,0,2023-12-01T00:50:22Z,2023-12-01T00:50:22Z,,NONE,,"There are several places where `sqlite3.OperationalError` is caught as part of an exception handler which catches multiple exceptions, but is then caught again immediately afterwards by a dedicated exception handler. Because the exception will be caught by the first handler, the logic in the second handler is unreachable and will never be executed. If this is intended behavior, the second handler can be removed. If this is not intended, and the second handler should be the one that catches this exception, then `sqlite3.OperationalError` should be removed from the tuple of exceptions in the first handler. This issue was found via a CodeQL query on the repository, and I've listed the occurrences found by the query below. There may be other instances of this issue in the code that were not surfaced by the query. I'd be happy to share the query if others would like to view or run it. One example: https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/views/database.py#L534-L537
Other instances:
https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L266-L270
https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L452-L456",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2211/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 2029908157,I_kwDOBm6k_c54_fC9,2214,CSV export fails for some `text` foreign key references,2874,open,0,,,1,2023-12-07T05:04:34Z,2023-12-07T07:36:34Z,,NONE,,"I'm starting this issue without a clear reproduction in case someone else has seen this behavior, and to use the issue as a notebook for research. I'm using Datasette with the [SWITRS](https://iswitrs.chp.ca.gov/) data set, which is a California Highway Patrol collection of traffic incident data from the past decade or so. I receive data from them in CSV and want to work with it in Datasette, then export it to CSV for mapping in Felt.com. Their data makes extensive use of codes for incident column data (`1` for `Monday` and so on), some of it integer codes and some of it letter/text codes. The text codes are sometimes blank or `-`. During import, I'm creating lookup tables for foreign key references to make the Datasette UI presentation of the data easier to read. If I import the data and set up the integer foreign keys, everything works fine, but if I set up the text foreign keys, CSV export starts to fail. The foreign key configuration is as follows:
```
# Some tables use integer ids, like sensible tables do. Let's import them first
# since we favor them.
for TABLE in DAY_OF_WEEK CHP_SHIFT POPULATION SPECIAL_COND BEAT_TYPE COLLISION_SEVERITY
do
  sqlite-utils create-table records.db $TABLE id integer name text --pk=id
  sqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv
  sqlite-utils add-foreign-key records.db collisions $TABLE $TABLE id
  sqlite-utils create-index records.db collisions $TABLE
done

# *Other* tables use letter keys, like they were raised by WOLVES. Let's put them
# at the end of the import queue.
for TABLE in WEATHER_1 WEATHER_2 LOCATION_TYPE RAMP_INTERSECTION SIDE_OF_HWY \
  PRIMARY_COLL_FACTOR PCF_CODE_OF_VIOL PCF_VIOL_CATEGORY TYPE_OF_COLLISION MVIW \
  PED_ACTION ROAD_SURFACE ROAD_COND_1 ROAD_COND_2 LIGHTING CONTROL_DEVICE \
  STWD_VEHTYPE_AT_FAULT CHP_VEHTYPE_AT_FAULT PRIMARY_RAMP SECONDARY_RAMP
do
  sqlite-utils create-table records.db $TABLE key text name text --pk=key
  sqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv
  sqlite-utils add-foreign-key records.db collisions $TABLE $TABLE key
  sqlite-utils create-index records.db collisions $TABLE
done
```
You can see the full code and import script here: https://github.com/radical-bike-lobby/switrs-db If I run this code and then hit the CSV export link in the Datasette interface (the simple link or the ""advanced"" dialog), export fails after a small number of CSV rows are written. I am not seeing any detailed error messages but this appears in the logging output:
```
INFO: 127.0.0.1:57885 - ""GET /records/collisions.csv?_facet=PRIMARY_RD&PRIMARY_RD=ASHBY+AV&_labels=on&_size=max HTTP/1.1"" 200 OK
Caught this error:
```
(No other output follows `error:` other than a blank line.) I've stared at the rows directly after the error occurs and can't yet see what is causing the problem. I'm going to set up a development environment and see if I get any more detailed error output, and then stare more at some problematic lines to see if I can get a simple reproduction.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2214/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 2023057255,I_kwDOBm6k_c54lWdn,2212,Can't filter with numbers,605070,open,0,,,0,2023-12-04T05:26:29Z,2023-12-04T05:26:29Z,,NONE,,"I have a schema that uses numbers for a column (actually it's a boolean 1 or 0 but SQLite doesn't have Boolean). I can't seem to get the facet to work or even filtering on this column. My guess is that Datasette is ""stringifying"" the number and it's not matching? Example: https://debian-sqlelf.fly.dev/debian/elf_symbols?_sort_desc=name&_facet=exported&exported=0",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2212/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1087913724,I_kwDOBm6k_c5A2D78,1577,Drop support for Python 3.6,9599,closed,0,,3268330,6,2021-12-23T18:17:03Z,2022-01-25T23:30:03Z,2022-01-20T04:31:41Z,OWNER,,"*Original title: Decide when to drop support for Python 3.6*

> `context_vars` can solve this but they were introduced in Python 3.7: https://www.python.org/dev/peps/pep-0567/
>
> Python 3.6 support ends in a few days time, and it looks like Glitch has updated to 3.7 now - so maybe I can get away with Datasette needing 3.7 these days?
>
> Tweeted about that here: https://twitter.com/simonw/status/1473761478155010048

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1576#issuecomment-999878907_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1577/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1087919372,I_kwDOBm6k_c5A2FUM,1578,Confirm if documented nginx proxy config works for row pages with escaped characters in their primary key,9599,open,0,,,4,2021-12-23T18:27:59Z,2021-12-24T21:33:19Z,,OWNER,,"Found this while working on https://github.com/simonw/datasette-tiddlywiki

Then clicking on `/tiddlywiki/tiddlers/%24%3A%2FDefaultTiddlers` returns a 404.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1578/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1087931918,I_kwDOBm6k_c5A2IYO,1579,`.execute_write(... block=True)` should be the default behaviour,9599,closed,0,,7571612,7,2021-12-23T18:54:28Z,2022-01-13T22:28:08Z,2021-12-23T19:18:26Z,OWNER,,"Every single piece of code I've written against the write APIs has used the `block=True` option to wait for the result. Without that, it instead fires the write into the queue but then continues even before it has finished executing. `block=True` should clearly be the default behaviour here!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1579/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1089529555,I_kwDOBm6k_c5A8ObT,1581,"when hashed urls are turned on, the _memory db has improperly long-lived cache expiry",536941,closed,0,,,1,2021-12-28T00:05:48Z,2022-03-24T04:08:18Z,2022-03-24T04:08:18Z,CONTRIBUTOR,,"If hashed_urls are on, then a `-000` suffix is added to the `_memory` database, and the cache settings are set just as if it were a normal hashed database. In particular, this header is set: `cache-control: max-age=31536000` This is not appropriate, because the `_memory-000` database isn't really hashed based on the contents of the databases (see #1561). Either the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases.
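A rough sketch of the first option (illustrative only - this is not how the real code is factored):
```python
def cache_control_header(db_name, hashed_urls_enabled):
    # _memory-000 is not really content-hashed: its contents change
    # whenever any attached database does, so it must not get the
    # immutable-style one year max-age
    if hashed_urls_enabled and not db_name.startswith('_memory'):
        return 'max-age=31536000'
    return 'max-age=0, must-revalidate'
```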
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1581/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1076057610,I_kwDOBm6k_c5AI1YK,1546,validating the sql,50336793,closed,0,,,1,2021-12-09T21:35:57Z,2021-12-18T02:05:17Z,2021-12-18T02:05:16Z,NONE,,Could someone tell me that part of the code is responsible for validating the sql that guarantees that only a table can be read,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1546/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1075893249,I_kwDOBm6k_c5AINQB,1545,Custom pages don't work on windows,559711,closed,0,,,3,2021-12-09T18:53:05Z,2022-02-03T02:08:31Z,2022-02-03T01:58:35Z,NONE,,"It seems that custom pages don't work when put in templates/pages To reproduce on datasette version 0.59.4 using PowerShell on WIndows 10 with Python 3.10.0 mkdir -p templates/pages echo ""hello world"" >> templates/pages/about.html Start datasette datasette --template-dir templates/ Navigate to [http://127.0.0.1:8001/about](url) and receive: Error 404: Database not found: about ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1545/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1076388044,I_kwDOBm6k_c5AKGDM,1547,Writable canned queries fail to load custom templates,127565,closed,0,,7571612,6,2021-12-10T03:31:48Z,2022-01-13T22:27:59Z,2021-12-19T21:12:00Z,CONTRIBUTOR,,"I've created a canned query with `""write"": true` set. I've also created a custom template for it, but the template doesn't seem to be found. If I look in the HTML I see (`stock_exchange` is the db name): `` My non-writeable canned queries pick up custom templates as expected, and if I look at their HTML I see the canned query name added to the templates considered (the canned query here is `date_search`): `` So it seems like the writeable canned query is behaving differently for some reason. Is it an authentication thing? I'm using the built in `--root` authentication. Thanks! ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1547/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1077628073,I_kwDOBm6k_c5AO0yp,1550,Research option for returning all rows from arbitrary query,9599,open,0,,,2,2021-12-11T19:31:11Z,2021-12-11T23:43:24Z,,OWNER,,"Inspired by thinking about #1549 - returning ALL rows from an arbitrary query is a lot easier if you just run that query and keep iterating over the cursor. 
I've avoided doing that in the past because it could tie up a connection for a long time - but in private instances this wouldn't be such a problem.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1550/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1077620955,I_kwDOBm6k_c5AOzDb,1549,Redesign CSV export to improve usability,536941,open,0,,3268330,5,2021-12-11T19:02:12Z,2022-04-04T11:17:13Z,,CONTRIBUTOR,,"*Original title: Set content type for CSV so that browsers will attempt to download instead of opening in the browser*

Right now, if the user clicks on the CSV related to a table or a query, the response header for the content type is ""content-type: text/plain; charset=utf-8"" Most browsers will try to open a file with this content-type in the browser. This is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program, they first need to save the page through their browser. It would be great if the response headers could be something like
```
Content-type: text/csv
Content-disposition: attachment; filename=MyVerySpecial.csv
```
which would lead browsers to open a download dialog.
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1549/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1077893013,I_kwDOBm6k_c5AP1eV,1551,`keep_blank_values=True` when parsing `request.args`,9599,closed,0,,7571612,3,2021-12-12T19:53:07Z,2022-01-13T22:26:04Z,2021-12-12T20:02:01Z,OWNER,,"This code in `TableView` wouldn't be necessary: https://github.com/simonw/datasette/blob/492f9835aa7e90540dd0c6324282b109f73df71b/datasette/views/table.py#L396-L399 If that happened here instead: https://github.com/simonw/datasette/blob/492f9835aa7e90540dd0c6324282b109f73df71b/datasette/utils/asgi.py#L98-L100 _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1518#issuecomment-991827468_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1551/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1078702875,I_kwDOBm6k_c5AS7Mb,1552,Allow to set `facets_array` in metadata (like current `facets`),3556,closed,0,,7571612,9,2021-12-13T16:00:44Z,2022-01-13T22:26:15Z,2021-12-16T18:47:48Z,CONTRIBUTOR,,"For now, you can set a `facets` value (array) in your metadata file but I couldn't find a way to set a `facets_array` in order to provide default facets for arrays (like tags). My use-case is to access [that kind of view](https://latest.datasette.io/fixtures/facetable?_facet_array=tags) by default, without URL parameters, as with other default facets.
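For example, something like this in `metadata.json` would be ideal (hypothetical syntax - this is the feature being requested, mirroring what `?_facet_array=tags` does in the URL):
```json
{
    ""databases"": {
        ""fixtures"": {
            ""tables"": {
                ""facetable"": {
                    ""facets"": [""state"", {""array"": ""tags""}]
                }
            }
        }
    }
}
```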
_I'm new to datasette, and I'm willing to help with a PR if that is not already implemented and I missed it!_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1552/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1079111498,I_kwDOBm6k_c5AUe9K,1553,if csv export is truncated in non streaming mode set informative response header,536941,open,0,,,3,2021-12-13T22:50:44Z,2021-12-16T19:17:28Z,,CONTRIBUTOR,,"Streaming mode is currently not enabled for custom queries, so the queries will be truncated at the max row limit. It would be great if, when a response is truncated, a header signalling that were set on the response. I need to write some pagination code for getting full results back for a custom query, and it would make the code much better if I could reliably know when there is nothing more to limit/offset. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1553/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1079149656,I_kwDOBm6k_c5AUoRY,1555,Optimize all those calls to index_list and foreign_key_list,9599,closed,0,,7571612,27,2021-12-13T23:50:56Z,2022-01-13T22:27:32Z,2021-12-19T20:55:59Z,OWNER,,"On the first hit to a restarted index I'm seeing this in the SQL traces: https://latest-with-plugins.datasette.io/github/commits?_trace=1 I imagine this could be sped up a lot using tricks like this one from the SQLite documentation: https://sqlite.org/pragma.html#pragfunc
```sql
SELECT DISTINCT m.name || '.' || ii.name AS 'indexed-columns'
  FROM sqlite_schema AS m,
       pragma_index_list(m.name) AS il,
       pragma_index_info(il.name) AS ii
 WHERE m.type='table'
 ORDER BY 1;
```
https://latest-with-plugins.datasette.io/fixtures?sql=SELECT+DISTINCT+m.name+%7C%7C+%27.%27+%7C%7C+ii.name+AS+%27indexed-columns%27%0D%0A++FROM+sqlite_schema+AS+m%2C%0D%0A+++++++pragma_index_list%28m.name%29+AS+il%2C%0D%0A+++++++pragma_index_info%28il.name%29+AS+ii%0D%0A+WHERE+m.type%3D%27table%27%0D%0A+ORDER+BY+1%3B",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1555/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1081318247,I_kwDOBm6k_c5Ac5tn,1556,"Show count of facet values always, not just for `?_facet_size=max`",9599,closed,0,,7571612,1,2021-12-15T17:49:01Z,2022-01-13T22:26:07Z,2021-12-15T17:58:06Z,OWNER,,"> You've caused me to rethink this feature - I no longer think there's value in only showing these numbers if `?_facet_size=max` as opposed to all of the time.
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1423#issuecomment-995023410_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1556/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,completed 1082564912,I_kwDOBm6k_c5AhqEw,1557,`?_nosuggest=1` parameter for disabling facet suggestions on table view,9599,closed,0,,7571612,1,2021-12-16T19:21:42Z,2022-01-13T22:26:48Z,2021-12-16T19:24:59Z,OWNER,,"Found I wanted this while I was debugging #625 just to clean up the debug traces, but it makes sense as a partner to `?_nofacet=1` and `?_nocount=1` from #1350 and #1353.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1557/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1082584499,I_kwDOBm6k_c5Ahu2z,1558,Redesign `facet_results` JSON structure prior to Datasette 1.0,9599,open,0,,3268330,3,2021-12-16T19:45:10Z,2023-01-09T15:31:17Z,,OWNER,,"> Decision: as an initial fix I'm going to de-duplicate those keys by using `tags__array` etc - with a `_2` on the end if that key is already used.
>
> I'll open a separate issue to redesign this better for Datasette 1.0.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/625#issuecomment-996130862_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1558/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1082746149,I_kwDOBm6k_c5AiWUl,1560,"Table page title has ""where where"" in it",9599,closed,0,,7571612,0,2021-12-17T00:05:48Z,2022-01-13T22:28:35Z,2022-01-13T22:20:15Z,OWNER,,"Just noticed this while working on #1518.
```
% curl -s 'https://latest.datasette.io/fixtures/facetable?_sort=pk&on_earth__exact=1' | grep -C 1 '<title>'
<head>
    <title>fixtures: facetable: 14 rows where where on_earth = 1 sorted by pk
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1560/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1082765654,I_kwDOBm6k_c5AibFW,1561,"add hash id to ""_memory"" url if hashed url mode is turned on and crossdb is also turned on",536941,closed,0,,,3,2021-12-17T00:45:12Z,2022-03-19T04:45:40Z,2022-03-19T04:45:40Z,CONTRIBUTOR,,"If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. One way that it could work is to have the _memory hash be a hash of all the individual databases. Otherwise, crossdb queries can get quite out of date if using aggressive caching.
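A rough sketch of how that combined hash could be derived (illustrative only):
```python
import hashlib

def memory_db_hash(database_hashes):
    # Combine the content hash of every attached database in a stable
    # order, so the _memory suffix changes whenever any of them change
    m = hashlib.sha256()
    for name, db_hash in sorted(database_hashes.items()):
        m.update(f'{name}:{db_hash}'.encode('utf-8'))
    return m.hexdigest()[:7]
```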
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1561/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083657868,I_kwDOBm6k_c5Al06M,1565,Documented JavaScript variables on different templates made available for plugins,9599,open,0,,,8,2021-12-17T22:30:51Z,2021-12-19T22:37:29Z,,OWNER,,"While working on https://github.com/simonw/datasette-leaflet-freedraw/issues/10 I found myself writing this atrocity to figure out the SQL query used for a specific table page: ```javascript let innerSql = Array.from(document.getElementsByTagName(""span"")).filter( el => el.innerText == ""View and edit SQL"" )[0].parentElement.getAttribute(""title"") ``` This is obviously bad - it's very brittle, and will break if I ever change the text on that link (like localizing it for example). Instead, I think pages like that one should have a block of script at the bottom something like this: ```javascript window.datasette = window.datasette || {}; datasette.view_name = 'table'; datasette.table_sql = 'select * from ...'; ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1565/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1083669410,I_kwDOBm6k_c5Al3ui,1566,Release Datasette 0.60,9599,closed,0,,7571612,6,2021-12-17T22:58:12Z,2022-01-14T01:59:55Z,2022-01-14T01:59:55Z,OWNER,,Using this as a tracking issue. I'm hoping to get the bulk of the JSON redesign work from the refactor in #1554 in for this release.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1566/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083573206,I_kwDOBm6k_c5AlgPW,1563,Datasette(... 
files=) should not be a required argument,9599,closed,0,,7571612,2,2021-12-17T19:54:18Z,2022-01-13T22:27:18Z,2021-12-18T02:19:40Z,OWNER,,"```pycon
>>> ds = Datasette(memory=True)
Traceback (most recent call last):
  File ""<stdin>"", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'files'
>>> ds = Datasette(memory=True, files=[])
```
I wanted to create an in-memory Datasette for running some tests, no point in forcing me to pass `files=[]` to do that.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1563/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083581011,I_kwDOBm6k_c5AliJT,1564,_prepare_connection not called on write connections,9599,closed,0,,7571612,1,2021-12-17T20:06:47Z,2022-01-20T21:29:43Z,2021-12-18T01:58:44Z,OWNER,,"I was trying to initialize SpatiaLite in a write connection:
```pycon
>>> from datasette.app import Datasette
>>> ds = Datasette(memory=True, files=[], sqlite_extensions=[""spatialite""])
>>> db = ds.add_memory_database('geo')
>>> await db.execute_write(""select InitSpatialMetadata(1)"")
UUID('3f143baa-4e3d-5842-a36f-4fa2f683b72f')
no such function: InitSpatialMetadata
```
It looks like the code that loads additional modules only works on read-only connections, not on write connections: https://github.com/simonw/datasette/blob/92a5280d2e75c39424a75ad6226fc74400ae984f/datasette/database.py#L146-L153 Compared to: https://github.com/simonw/datasette/blob/92a5280d2e75c39424a75ad6226fc74400ae984f/datasette/database.py#L124-L132",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1564/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083921371,I_kwDOBm6k_c5Am1Pb,1570,Separate db.execute_write() into three methods,9599,closed,0,,7571612,2,2021-12-18T18:45:54Z,2022-01-13T22:27:38Z,2021-12-18T18:57:25Z,OWNER,,"> Rather than adding an `executemany=True` parameter, I'm now thinking a better design might be to have three methods:
>
> - `db.execute_write(sql, params=None, block=False)`
> - `db.execute_write_script(sql, block=False)`
> - `db.execute_write_many(sql, params_seq, block=False)`

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1555#issuecomment-997267416_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1570/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083927147,I_kwDOBm6k_c5Am2pr,1571,Track number of executions for execute_write_many() in traces,9599,closed,0,,7571612,0,2021-12-18T19:16:17Z,2022-01-13T22:27:49Z,2021-12-19T20:30:40Z,OWNER,,"Spotted while working on #1555.

There's no indication there of how many times `execute_write_many()` executed the SQL. Solving this is a tiny bit tricky because `params_seq` is an iterator that we don't want to exhaust before passing it to `conn.executemany()` - so we need to instead wrap it in something that counts how many times it was called. But then we need a way to attach that to the trace here: https://github.com/simonw/datasette/blob/d637ed46762fdbbd8e32b86f258cd9a53c1cfdc7/datasette/database.py#L115-L122 So probably need to redesign the `trace()` decorator to allow extra pairs to be attached to it within the `with` statement.
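A rough sketch of that counting wrapper (illustrative only):
```python
class CountingIterator:
    # Wraps params_seq so executemany() can still consume it lazily,
    # while we record how many parameter sets it pulled from us
    def __init__(self, inner):
        self.inner = iter(inner)
        self.count = 0

    def __iter__(self):
        return self

    def __next__(self):
        value = next(self.inner)  # StopIteration propagates when done
        self.count += 1
        return value
```
After `conn.executemany(sql, wrapper)` returns, `wrapper.count` would hold the number of executions to attach to the trace.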
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1571/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083718998,I_kwDOBm6k_c5AmD1W,1567,Remove undocumented sqlite_functions mechanism,9599,closed,0,,7571612,0,2021-12-18T01:51:10Z,2022-01-13T22:27:04Z,2021-12-18T01:54:46Z,OWNER,,"I added this in 0b8c1b0a6da9cb8ac0d28cc90dd783de87554036 but it's never been documented and the same thing can now be achieved using the `prepare_connection` plugin hook. https://github.com/simonw/datasette/blob/0c91e59d2bbfc08884cfcf5d1b902a2f4968b7ff/datasette/app.py#L262 https://github.com/simonw/datasette/blob/0c91e59d2bbfc08884cfcf5d1b902a2f4968b7ff/datasette/app.py#L551-L552 It's used here in the tests: https://github.com/simonw/datasette/blob/69244a617b1118dcbd04a8f102173f04680cf08c/tests/fixtures.py#L156",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1567/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083726550,I_kwDOBm6k_c5AmFrW,1568,Trace should show queries on the write connection too,9599,closed,0,,7571612,2,2021-12-18T02:34:12Z,2022-01-13T22:27:23Z,2021-12-18T02:42:34Z,OWNER,,"> Here's why - `trace` only applies to read, not write SQL operations: https://github.com/simonw/datasette/blob/7c8f8aa209e4ba7bf83976f8495d67c28fbfca24/datasette/database.py#L209-L211 _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1555#issuecomment-997128508_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1568/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1083895395,I_kwDOBm6k_c5Amu5j,1569,"db.execute_write(..., executescript=True) parameter",9599,closed,0,,7571612,2,2021-12-18T18:20:47Z,2022-01-13T22:27:27Z,2021-12-18T18:34:18Z,OWNER,,"> Idea: teach `execute_write` to accept an optional `executescript=True` parameter, like this: ```diff diff --git a/datasette/database.py b/datasette/database.py index 468e936..1a424f5 100644 --- a/datasette/database.py +++ b/datasette/database.py @@ -94,10 +94,14 @@ class Database: f""file:{self.path}{qs}"", uri=True, check_same_thread=False ) - async def execute_write(self, sql, params=None, block=False): + async def execute_write(self, sql, params=None, executescript=False, block=False): + assert not executescript and params, ""Cannot use params with executescript=True"" def _inner(conn): with conn: - return conn.execute(sql, params or []) + if executescript: + return conn.executescript(sql) + else: + return conn.execute(sql, params or []) with trace(""sql"", database=self.name, sql=sql.strip(), params=params): results = await self.execute_write_fn(_inner, block=block) ``` _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1555#issuecomment-997248364_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1569/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1084185188,I_kwDOBm6k_c5An1pk,1573,Make trace() a documented internal API,9599,open,0,,,1,2021-12-19T20:32:56Z,2021-12-19T21:13:13Z,,OWNER,,"This should be documented so plugin authors can use it to add their own custom 
traces: https://github.com/simonw/datasette/blob/8f311d6c1d9f73f4ec643009767749c17b5ca5dd/datasette/tracer.py#L28-L52

Including the new `kwargs` pattern I added in #1571: https://github.com/simonw/datasette/blob/f65817000fdf87ce8a0c23edc40784ebe33b5842/datasette/database.py#L128-L132",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1573/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1084007781,I_kwDOBm6k_c5AnKVl,1572,"""Query took"" should be ""Queries took""",9599,closed,0,,7571612,0,2021-12-19T04:03:00Z,2022-01-13T22:27:43Z,2021-12-19T04:03:24Z,OWNER,,"This is misleading, since usually more than one query has been executed: ![CleanShot 2021-12-18 at 20 02 35@2x](https://user-images.githubusercontent.com/9599/146663457-9c4c2900-5cc0-4650-a565-bb1ff0b8a725.png) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1572/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1084257842,I_kwDOBm6k_c5AoHYy,1575,__call__() got an unexpected keyword argument 'specname',9599,closed,0,,,1,2021-12-20T01:24:04Z,2021-12-20T01:48:03Z,2021-12-20T01:47:57Z,OWNER,,"> I've installed the alpha version but get an error when starting up Datasette:
```
Traceback (most recent call last):
  File ""/Users/tim/.pyenv/versions/stock-exchange/bin/datasette"", line 5, in <module>
    from datasette.cli import cli
  File ""/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/cli.py"", line 15, in <module>
    from .app import Datasette, DEFAULT_SETTINGS, SETTINGS, SQLITE_LIMIT_ATTACHED, pm
  File ""/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/app.py"", line 31, in <module>
    from .views.database import DatabaseDownload, DatabaseView
  File ""/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/views/database.py"", line 25, in <module>
    from datasette.plugins import pm
  File ""/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/plugins.py"", line 29, in <module>
    mod = importlib.import_module(plugin)
  File ""/Users/tim/.pyenv/versions/3.8.5/lib/python3.8/importlib/__init__.py"", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File ""/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/filters.py"", line 9, in <module>
    @hookimpl(specname=""filters_from_request"")
TypeError: __call__() got an unexpected keyword argument 'specname'
```
_Originally posted by @wragge in https://github.com/simonw/datasette/issues/1547#issuecomment-997511968_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1575/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1087181951,I_kwDOBm6k_c5AzRR_,1576,Traces should include SQL executed by subtasks created with `asyncio.gather`,9599,closed,0,,3268330,12,2021-12-22T20:52:02Z,2022-02-05T05:21:35Z,2022-02-05T05:19:53Z,OWNER,,"I tried running some parallel SQL queries using `asyncio.gather()` but the SQL that was executed didn't show up in the trace rendered by https://datasette.io/plugins/datasette-pretty-traces

I realized that was because traces are keyed against the current task ID, which changes when a sub-task is run using
`asyncio.gather` or similar. The faceting and suggest faceting queries are missing from this trace: ![image](https://user-images.githubusercontent.com/9599/147153855-2d611f07-922a-4d18-9e6e-4be89e010dc4.png)

> The reason they aren't showing up in the traces is that traces are stored just for the currently executing `asyncio` task ID: https://github.com/simonw/datasette/blob/ace86566b28280091b3844cf5fbecd20158e9004/datasette/tracer.py#L13-L25
>
> This is so traces for other incoming requests don't end up mixed together. But there's no current mechanism to track async tasks that are effectively ""child tasks"" of the current request, and hence should be tracked the same.
>
> https://stackoverflow.com/a/69349501/6083 suggests that you pass the task ID as an argument to the child tasks that are executed using `asyncio.gather()` to work around this kind of problem.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1518#issuecomment-999870993_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1576/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1104691662,I_kwDOBm6k_c5B2EHO,1600,plugins --all example should use cog,9599,closed,0,,,1,2022-01-15T11:47:49Z,2022-01-20T05:06:21Z,2022-01-20T05:04:16Z,OWNER,,The example output for `datasette plugins --all` on this page has gotten out of date: https://docs.datasette.io/en/stable/plugins.html#seeing-what-plugins-are-installed,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1600/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1105916061,I_kwDOBm6k_c5B6vCd,1601,Add KNN and data_licenses to hidden tables list,25778,closed,0,,,5,2022-01-17T14:19:57Z,2022-01-20T21:29:44Z,2022-01-20T04:38:54Z,CONTRIBUTOR,,"They're generated by Spatialite and not very interesting in most cases. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1601/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1090810196,I_kwDOBm6k_c5BBHFU,1583,consider adding deletion step of cloudbuild artifacts to gcloud publish,536941,open,0,,,1,2021-12-30T00:33:23Z,2021-12-30T00:34:16Z,,CONTRIBUTOR,,"Right now, as part of the publish process, images and other artifacts are stored to gcloud's cloud storage before being deployed to Cloud Run. After successfully deploying, it would be nice if the script deleted these artifacts. Otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1583/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1091257796,I_kwDOBm6k_c5BC0XE,1584,give error with recursive sql,58088336,open,0,,,0,2021-12-30T18:53:16Z,2021-12-30T18:53:16Z,,NONE,,"I got an error ""near ""WITH"": syntax error"" after I upgraded to version 0.59 from 0.52.4. This error is related to recursive sql. It worked great on the previous version but it fails after upgrading.
Below is an example of the sql:
```sql
WITH RECURSIVE manager_of(position, super_position) AS (
  SELECT position,
         case ifnull(INDIRECT_SUPER_POSITION,'') when '' then super_position else INDIRECT_SUPER_POSITION end as SUPER_POSITION
    FROM position
   where super_position<>'SGV000000001' and super_position!='' and position <> super_position
),
chain_manager_of_position(position, level) AS (
  SELECT super_position, 1 as level
    FROM manager_of
   WHERE super_position!='' and (position=:pos or position in (Select position from employee where employee=:ein))
  UNION ALL
  SELECT super_position, level+1 as level
    FROM manager_of JOIN chain_manager_of_position USING(position)
)
SELECT * FROM chain_manager_of_position
  left join employee using(position)
 where employee is not NULL
 order by level limit 1
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1584/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1091838742,I_kwDOBm6k_c5BFCMW,1585,Fire base caching for `publish cloudrun`,9599,open,0,,,1,2022-01-01T15:38:15Z,2022-01-01T15:40:38Z,,OWNER,,"https://gist.github.com/steren/03d3e58c58c9a53fd49bb78f58541872 has a recipe for this, via https://twitter.com/steren/status/1477038411114446848 Could this enable easier vanity URLs of the format `https://$project_id.web.app/`? How about CDN caching?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1585/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1096536240,I_kwDOBm6k_c5BW9Cw,1586,run analyze on all databases as part of start up or publishing,536941,open,0,,,1,2022-01-07T17:52:34Z,2022-02-02T07:13:37Z,,CONTRIBUTOR,,"Running `analyze;` lets sqlite's query planner make *much* better use of any indices. It might be nice if the analyze was run as part of the startup of ""serve"" or ""publish"".",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1586/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1097040427,I_kwDOBm6k_c5BY4Ir,1587,Add `sqlite_stat1`(-4) tables to hidden table list,9599,closed,0,,,2,2022-01-08T21:28:20Z,2022-01-20T04:12:59Z,2022-01-20T04:12:59Z,OWNER,,"> Running `ANALYZE` creates a new visible table called `sqlite_stat1`: https://www.sqlite.org/fileformat.html#the_sqlite_stat1_table
>
> This should be added to the default list of hidden tables in Datasette.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1587/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1097101917,I_kwDOBm6k_c5BZHJd,1588,`explain query plan select` is too strict about whitespace,9599,closed,0,,7571612,3,2022-01-09T04:22:42Z,2022-01-13T22:28:19Z,2022-01-13T20:35:05Z,OWNER,,"`explain query plan select * from facetable` is allowed: https://latest.datasette.io/fixtures?sql=explain+query+plan+select+*+from+facetable But...
`explain query plan  select * from facetable` (with two spaces before the `select`) returns a ""Statement must be a SELECT"" error: https://latest.datasette.io/fixtures?sql=explain+query+plan++select+*+from+facetable",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1588/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1099723916,I_kwDOBm6k_c5BjHSM,1590,Table+query JSON and CSV links broken when using `base_url` setting,1001306,closed,0,,7571612,11,2022-01-11T23:46:39Z,2022-01-14T01:16:34Z,2022-01-14T01:16:08Z,NONE,,"Datasette appends the prefix found in the `base_url` setting twice if a `base_url` is set. In the following asgi example, I'm hosting a custom Datasette instance:
```python
# asgi.py
import pathlib

from asgi_cors import asgi_cors
from channels.routing import URLRouter
from django.urls import re_path

from datasette.app import Datasette

datasette_ = Datasette(
    files=[],
    settings={
        ""base_url"": ""/datasettes/"",
        ""plugins"": {}
    },
    config_dir=pathlib.Path('.'),
)

application = URLRouter([
    re_path(r""^datasettes/.*"", asgi_cors(datasette_.app(), allow_all=True)),
])
```
Running it with:
```shell
$ daphne -p 8002 asgi:application
```
Using a simple query on the `_memory` table:
```sql
select sqlite_version()
```
http://localhost:8002/datasettes/_memory?sql=select+sqlite_version%28%29 It renders the following upon inspection: ![image](https://user-images.githubusercontent.com/1001306/149038851-aa842950-126a-467c-9a86-fae13bce6221.png) I am using datasette version `0.59.4`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1590/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1100015398,I_kwDOBm6k_c5BkOcm,1591,Maybe let plugins define custom serve options?,9599,open,0,,,7,2022-01-12T08:18:47Z,2022-01-15T11:56:59Z,,OWNER,,"https://twitter.com/psychemedia/status/1481171650934714370

> can extensions be passed their own cli args? eg `--ext-tiddlywiki-dbname tiddlywiki2.sqlite` ?
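A purely hypothetical sketch of what that might look like - no such hook exists today, and every name here is made up:
```python
from datasette import hookimpl

@hookimpl
def register_serve_options():
    # Hypothetical hook: options get namespaced by plugin name, so this
    # would surface as --ext-tiddlywiki-dbname on datasette serve
    return [
        {'name': 'dbname', 'type': str, 'help': 'Path to the TiddlyWiki database'},
    ]
```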
I've thought something like this might be useful for other plugins in the past, too.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1591/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1100499619,I_kwDOBm6k_c5BmEqj,1592,Row pages should show links to foreign keys,9599,open,0,,,1,2022-01-12T15:50:20Z,2022-01-12T15:52:17Z,,OWNER,,Refs #1518 refactor.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1592/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1102568047,I_kwDOBm6k_c5Bt9pv,1596,Documentation page warning of changes coming in 1.0,9599,open,0,,,0,2022-01-13T23:26:04Z,2022-01-13T23:26:04Z,,OWNER,,I should start this relatively soon.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1596/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1102359726,I_kwDOBm6k_c5BtKyu,1594,"Add a CLI reference page to the docs, inspired by sqlite-utils",9599,closed,0,,7571612,3,2022-01-13T20:55:08Z,2022-01-13T22:28:22Z,2022-01-13T21:38:48Z,OWNER,,"Thought of this while posting this comment: https://github.com/simonw/datasette/issues/1591#issuecomment-1012506595 I added https://sqlite-utils.datasette.io/en/stable/cli-reference.html to `sqlite-utils` in https://github.com/simonw/sqlite-utils/issues/383 and I _really_ like it - it's a page showing the `--help` output of every CLI command for that tool. It's maintained using `cog`. One of the benefits is that I get a free commit history of changes to `--help` at https://github.com/simonw/sqlite-utils/commits/main/docs/cli-reference.rst",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1594/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1102484126,I_kwDOBm6k_c5BtpKe,1595,Release notes for 0.60,9599,closed,0,,7571612,4,2022-01-13T22:23:14Z,2022-01-14T01:37:39Z,2022-01-14T01:37:39Z,OWNER,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1595/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1102612922,I_kwDOBm6k_c5BuIm6,1597,"""datasette inspect"" has no help summary",9599,closed,0,,,1,2022-01-14T00:02:16Z,2022-01-14T00:07:36Z,2022-01-14T00:07:36Z,OWNER,,"Made obvious by the new CLI reference page added in #1594. https://docs.datasette.io/en/latest/cli-reference.html#datasette-inspect-help
```
Commands:
  serve*   Serve up specified SQLite database files with a web UI
  inspect
  install  Install Python packages - e.g.
```
```
Usage: datasette inspect [OPTIONS] [FILES]...

Options:
  --inspect-file TEXT
  --load-extension TEXT  Path to a SQLite extension to load
  --help                 Show this message and exit.
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1597/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 1102637351,I_kwDOBm6k_c5BuOkn,1598,Replace update-docs-help.py script with cog,9599,closed,0,,,1,2022-01-14T00:33:27Z,2022-01-14T00:47:57Z,2022-01-14T00:47:57Z,OWNER,,"I introduced `cog` in #1594 - I can use this to replace the older `update-docs-help.py` mechanism: https://github.com/simonw/datasette/blob/76d66d5b2bf10249c0beaac0999b93ac8d757f48/tests/test_docs.py#L36-L53",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1598/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed