{"id": 1054244712, "node_id": "I_kwDOBm6k_c4-1n9o", "number": 1510, "title": "Datasette 1.0 documented template context (maybe via API docs)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:23:58Z", "updated_at": "2023-06-28T02:05:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1054243511, "node_id": "I_kwDOBm6k_c4-1nq3", "number": 1509, "title": "Datasette 1.0 JSON API (and documentation)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:22:45Z", "updated_at": "2022-03-15T20:38:56Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The new JSON API in a stable, documented form.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1509/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1054246919, "node_id": "I_kwDOBm6k_c4-1ogH", "number": 1511, "title": "Review plugin hooks for Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2021-11-15T23:26:05Z", "updated_at": "2021-11-16T01:20:14Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I need to perform a detailed review of the plugin interface - especially the plugin hooks like [register_facet_classes()](https://docs.datasette.io/en/stable/plugin_hooks.html#register-facet-classes) which I don't yet have complete confidence in.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1511/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1055469073, "node_id": "I_kwDOBm6k_c4-6S4R", "number": 1513, "title": "Research: CTEs and union all to calculate facets AND query at the same time", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 12, "created_at": "2021-11-16T22:26:45Z", "updated_at": "2021-11-16T23:41:46Z", "closed_at": "2021-11-16T23:41:46Z", "author_association": "OWNER", "pull_request": null, "body": "Consider this page: 
https://global-power-plants.datasettes.com/global-power-plants/global-power-plants?_search=plant&_facet=owner&_facet=country_long&_facet=primary_fuel\r\n\r\nDatasette needs to run the main query for the rows on that page, a count query for the total number of rows, then a separate query for each of those three specified facets.\r\n\r\nThis is a `_search=` query, so it needs to execute the FTS code once for the rows, again for the count, and then three more times for each of the facets.\r\n\r\nCould running that query as a CTE and doing the other queries as part of the same large query produce significant speed improvements?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1513/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1056746091, "node_id": "I_kwDOBm6k_c4-_Kpr", "number": 1515, "title": "Handle foreign keys that point to a non-existent table", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-17T23:40:13Z", "updated_at": "2021-11-18T01:31:56Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spotted in https://github.com/simonw/datasette-graphql/issues/79\r\n\r\nDemo: https://datasette-graphql-demo.datasette.io/fixtures/bad_foreign_key\r\n\r\nThe foreign key links to a 404 page.\r\n\r\n![B87009C7-CFCA-4DF9-8FBA-FA3E6CA28EC2](https://user-images.githubusercontent.com/9599/142334788-4d1a4acd-bc87-4426-b333-d46b221afcec.jpeg)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1515/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1049946823, "node_id": "I_kwDOBm6k_c4-lOrH", "number": 1502, "title": "Full-text search: No support for unary \"-\" operator", "user": {"value": 516827, "label": "gustavorps"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-10T15:11:19Z", "updated_at": "2021-11-10T15:11:19Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Reference: https://www.sqlite.org/fts3.html#set_operations_using_the_standard_query_syntax\r\n\r\nTest: https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort+-freedman&_sort=rowid", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1502/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1050163432, "node_id": "I_kwDOBm6k_c4-mDjo", "number": 1503, "title": "`?_nocol=` removes that column from the filter interface", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-11-10T18:22:50Z", "updated_at": "2021-11-14T05:08:27Z", 
"closed_at": "2021-11-14T04:53:07Z", "author_association": "OWNER", "pull_request": null, "body": "e.g. on https://latest.datasette.io/fixtures/sortable?_nocol=sortable\r\n\r\n\"fixtures__sortable__201_rows\"\r\n\r\nThis causes weird behaviour when you e.g. facet by a hidden column, since selecting facets and then re-submitting the form will clear the selected filter.\r\n\r\n![nocol-bug](https://user-images.githubusercontent.com/9599/141171135-aded71d1-a4cb-4b7f-a4ea-26828fa98906.gif)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1503/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1051277222, "node_id": "I_kwDOBm6k_c4-qTem", "number": 1504, "title": "Link to ?_size=max at bottom of table page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-11T19:06:33Z", "updated_at": "2021-11-11T19:06:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This can have text such as \"Show 1,000 rows per page\", based on the max size limit setting. Would make it easier for people to see more data at once without having to know how to hack the URL, similar to the `...` for facet sizes I added in #1337.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1504/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1052247023, "node_id": "I_kwDOBm6k_c4-uAPv", "number": 1505, "title": "Datasette should have an option to output CSV with semicolons", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-11-12T18:02:21Z", "updated_at": "2021-11-16T11:40:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": null, "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1505/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1052826038, "node_id": "I_kwDOBm6k_c4-wNm2", "number": 1506, "title": "Columns beginning with an underscore do not facet correctly", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-11-14T02:20:32Z", "updated_at": "2021-11-14T04:45:21Z", "closed_at": "2021-11-14T04:45:21Z", "author_association": "OWNER", "pull_request": null, "body": "Datasette treats columns that start with an underscore as querystring parameters it should ignore!\r\n\r\n\"bchydro__item_versions__99_918_rows\"\r\n\r\nDiscovered in https://github.com/simonw/git-history/issues/14#issuecomment-968192464", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", 
"active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1506/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1052851176, "node_id": "I_kwDOBm6k_c4-wTvo", "number": 1507, "title": "ReadTheDocs build failed for 0.59.2 release", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2021-11-14T05:24:34Z", "updated_at": "2021-11-14T05:41:55Z", "closed_at": "2021-11-14T05:41:55Z", "author_association": "OWNER", "pull_request": null, "body": "I had to cancel the 0.59.2 release because ReadTheDocs was failing to build the documentation.\r\n\r\nhttps://readthedocs.org/projects/datasette/builds/15268454/\r\n\r\n```\r\n /home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/bin/python -m sphinx -T -b html -d _build/doctrees -D language=en . _build/html\r\nRunning Sphinx v1.8.5\r\nloading translations [en]... done\r\nmaking output directory...\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nbuilding [html]: targets for 27 source files that are out of date\r\nupdating environment: 27 added, 0 changed, 0 removed\r\nreading sources... [ 3%] authentication\r\n\r\nTraceback (most recent call last):\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/cmd/build.py\", line 304, in build_main\r\n app.build(args.force_all, filenames)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/application.py\", line 341, in build\r\n self.builder.build_update()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py\", line 347, in build_update\r\n len(to_build))\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py\", line 360, in build\r\n updated_docnames = set(self.read())\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py\", line 468, in read\r\n self._read_serial(docnames)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py\", line 490, in _read_serial\r\n self.read_doc(docname)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py\", line 534, in read_doc\r\n doctree = read_doc(self.app, self.env, self.env.doc2path(docname))\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/io.py\", line 318, in read_doc\r\n pub.publish()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py\", line 219, in publish\r\n self.apply_transforms()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py\", line 200, in apply_transforms\r\n self.document.transformer.apply_transforms()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py\", line 90, in 
apply_transforms\r\n Transformer.apply_transforms(self)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/transforms/__init__.py\", line 171, in apply_transforms\r\n transform.apply(**kwargs)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py\", line 245, in apply\r\n apply_source_workaround(n)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py\", line 94, in apply_source_workaround\r\n for classifier in reversed(node.parent.traverse(nodes.classifier)):\r\nTypeError: argument to reversed() must be a sequence\r\n\r\nException occurred:\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py\", line 94, in apply_source_workaround\r\n for classifier in reversed(node.parent.traverse(nodes.classifier)):\r\nTypeError: argument to reversed() must be a sequence\r\nThe full traceback has been saved in /tmp/sphinx-err-vkl0oE.log, if you want to report the issue to the developers.\r\nPlease also report this if it was a user error, so that a better error message can be provided next time.\r\nA bug report can be filed in the tracker at . Thanks! \r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1507/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1006016302, "node_id": "I_kwDOBm6k_c479pcu", "number": 1477, "title": "Consider adding request to the documented default template context", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-24T02:34:09Z", "updated_at": "2021-09-24T02:34:09Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I made a plugin for this today but I think perhaps it should be a default thing instead: https://datasette.io/plugins/datasette-template-request", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1477/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 999902754, "node_id": "I_kwDOBm6k_c47mU4i", "number": 1473, "title": "base logo link visits `undefined` rather than href url", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-09-18T04:17:04Z", "updated_at": "2021-09-19T00:45:32Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I have two connected sites:\r\nhttp://www.SaferOrToxic.org \r\n(a Hugo website)\r\nand:\r\nhttp://disinfectants.SaferOrToxic.org/disinfectants/listN\r\n(a datasette table page)\r\n\r\nThe latter is linked as \"The List\" in the former's menu.\r\n(I'd love a prettier URL, but that's what I've got.)\r\n\r\nOn:\r\nhttp://disinfectants.SaferOrToxic.org/disinfectants/listN\r\n... 
all the other menu links should point back to:\r\nhttps://www.SaferOrToxic.org \r\nAnd they do!\r\n\r\nBut the logo, for some reason--though it has an href pointing to:\r\nhttps://www.SaferOrToxic.org\r\nKeeps going to this instead:\r\nhttps://disinfectants.saferortoxic.org/disinfectants/undefined\r\n\r\nWhat is causing that? How can I fix it?\r\n\r\nIn #1284 back in March, I was doing battle with the index.html template, in a still unresolved issue. (I wanted only a single table page at the root.)\r\n\r\nBut I thought, well, if I can't resolve that, at least I could just point the main website to the datasette page (\"The List,\") and then have the List point back to the home website.\r\n\r\nThe menu hrefs to https://www.SaferOrToxic.org work just fine, exactly as they should, from the datasette page. Even the Home link works properly.\r\n\r\nBut the logo link keeps rewriting to: https://disinfectants.saferortoxic.org/disinfectants/undefined\r\n\r\nThis is the HTML:\r\n```\r\n\"Logo:\r\n```\r\n\r\n\r\nIs this somehow related to cloudflare?\r\nOr something in the datasette code?\r\n\r\nI'm starting to think it's a cloudflare issue.\r\n\"Screen\r\n\r\nCan I at least rule out it being a datasette issue?\r\n\r\nMy repository is here:\r\nhttps://github.com/mroswell/list-N\r\n\r\n(BTW, I couldn't figure out how to reference a local image, either, on the datasette side, which is why I'm using the image from the www home page.)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1473/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1021550542, "node_id": "I_kwDOBm6k_c4845_O", "number": 1482, "title": "Support Python 3.10", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-10-09T00:30:52Z", "updated_at": "2021-10-24T22:21:40Z", "closed_at": "2021-10-24T22:19:55Z", "author_association": "OWNER", "pull_request": null, "body": "I started work on this in #1481 where I found a Python 3.10 bug that needs a workaround in Janus, see:\r\n\r\n- https://github.com/aio-libs/janus/issues/358\r\n\r\nThis is a tracking issue for anything else that shows up.\r\n\r\nThis is also needed for the Homebrew package to upgrade to 3.10:\r\n\r\n- https://github.com/Homebrew/homebrew-core/pull/86932", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1482/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1021849766, "node_id": "I_kwDOBm6k_c486DCm", "number": 1483, "title": "Running a search on page 2 of results should not preserve ?_next=", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-10-10T01:18:12Z", "updated_at": "2021-10-13T21:08:10Z", "closed_at": "2021-10-13T21:08:10Z", "author_association": "OWNER", "pull_request": null, "body": "Reported by @eigenfoo in https://github.com/simonw/datasette/issues/1470", "repo": {"value": 
107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1483/reactions\", \"total_count\": 2, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1006781949, "node_id": "I_kwDOBm6k_c48AkX9", "number": 1478, "title": "Documentation Request: Feature alternative ID instead of default ID", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-24T19:56:13Z", "updated_at": "2021-09-25T16:18:54Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "My data already has an ID that comes from a federal agency.\r\nWould love to have documentation on how to modify the template to:\r\n- Remove the generated ID from the table\r\n- Link the federal ID to the detail page\r\n- and to ensure that the JSON file uses that as the ID. I'd be happy to include the database ID in the export, but not as a key.\r\n\r\nI don't want to remove the ID from the database, though, because my experience with the federal agency is that data often has anomalies. I don't want all hell to break loose if they end up applying the same ID to multiple rows (which they haven't done yet). I just don't want it to display in the table or the data exports.\r\n\r\nPerhaps this isn't a template issue, maybe more of a db manipulation...\r\n\r\nMargie", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1478/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1010112818, "node_id": "I_kwDOBm6k_c48NRky", "number": 1479, "title": "Win32 \"used by another process\" error with datasette publish", "user": {"value": 76450761, "label": "kirajano"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2021-09-28T19:12:00Z", "updated_at": "2023-09-07T02:14:16Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I unfortunately was not successful to deploy to fly.io. Please see the details above of the three scenarios that I took. I am also new to datasette.\r\n\r\nFailed to deploy. Attaching logs:\r\n1. 
Tried with an app created via `flyctl apps create frosty-fog-8565` and the ran `datasette publish fly covid.db --app frosty-fog-8565` \r\n``` \r\nDeploying frosty-fog-8565\r\n==> Validating app configuration\r\n--> Validating app configuration done\r\nServices\r\nTCP 80/443 \u21e2 8080\r\n\r\nError error connecting to docker: An unknown error occured.\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\grott\\Anaconda3\\Scripts\\datasette.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette_publish_fly\\__init__.py\", line 156, in fly\r\n \"--remote-only\",\r\n File \"c:\\users\\grott\\anaconda3\\lib\\contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\utils\\__init__.py\", line 451, in temporary_docker_directory\r\n tmp.cleanup()\r\n File \"c:\\users\\grott\\anaconda3\\lib\\tempfile.py\", line 811, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 516, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 395, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 404, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 402, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\grott\\\\AppData\\\\Local\\\\Temp\\\\tmpgcm8cz66\\\\frosty-fog-8565'\r\n```\r\n\r\n2. Tried also with an app that gets autogenerate when running `flyctl launch`. This also generates the .toml file. 
Ran then `datasette publish fly covid.db --app dark-feather-168` **but different error now**\r\n```Deploying dark-feather-168\r\n==> Validating app configuration\r\n\r\nError not possible to validate configuration: server returned Post \"https://api.fly.io/graphql\": unexpected EOF\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 193, in _run_module_as_main \r\n \"__main__\", mod_spec)\r\n exec(code, run_globals)\r\n File \"C:\\Users\\grott\\Anaconda3\\Scripts\\datasette.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette_publish_fly\\__init__.py\", line 156, in fly\r\n \"--remote-only\",\r\n File \"c:\\users\\grott\\anaconda3\\lib\\contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\utils\\__init__.py\", line 451, in temporary_docker_directory\r\n tmp.cleanup()\r\n File \"c:\\users\\grott\\anaconda3\\lib\\tempfile.py\", line 811, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 516, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 395, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 404, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 402, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\grott\\\\AppData\\\\Local\\\\Temp\\\\tmpnoyewcre\\\\dark-feather-168'\r\n```\r\n\r\nThese are also the contents of the generated **.toml file** in 2 scenario:\r\n\r\n```\r\n# fly.toml file generated for dark-feather-168 on 2021-09-28T20:35:44+02:00\r\n\r\napp = \"dark-feather-168\"\r\n\r\nkill_signal = \"SIGINT\"\r\nkill_timeout = 5\r\nprocesses = []\r\n\r\n[env]\r\n\r\n[experimental]\r\n allowed_public_ports = []\r\n auto_rollback = true\r\n\r\n[[services]]\r\n http_checks = []\r\n internal_port = 8080\r\n processes = [\"app\"]\r\n protocol = \"tcp\"\r\n script_checks = []\r\n\r\n [services.concurrency]\r\n hard_limit = 25\r\n soft_limit = 20\r\n type = \"connections\"\r\n\r\n [[services.ports]]\r\n handlers = [\"http\"]\r\n port = 80\r\n\r\n [[services.ports]]\r\n handlers = [\"tls\", \"http\"]\r\n port = 443\r\n\r\n [[services.tcp_checks]]\r\n grace_period = \"1s\"\r\n interval = \"15s\"\r\n restart_limit = 6\r\n timeout = \"2s\"\r\n```\r\n\r\n3. 
But also trying `datasette package covid.db` to create a local DOCKERFILE to later try to push it via `flyctl deploy` fails as well.\r\n\r\n```[+] Building 147.3s (11/11) FINISHED\r\n => [internal] load build definition from Dockerfile 0.2s \r\n => => transferring dockerfile: 396B 0.0s \r\n => [internal] load .dockerignore 0.1s \r\n => => transferring context: 2B 0.0s \r\n => [internal] load metadata for docker.io/library/python:3.8 4.7s \r\n => [auth] library/python:pull token for registry-1.docker.io 0.0s \r\n => [internal] load build context 0.1s \r\n => => transferring context: 82.37kB 0.0s \r\n => [1/5] FROM docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3 108.3s \r\n => => resolve docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5 0.0s \r\n => => sha256:56182bcdf4d4283aa1f46944b4ef7ac881e28b4d5526720a4e9ba03a4730846a 2.22kB / 2.22kB 0.0s \r\n => => sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 54.93MB / 54.93MB 21.6s \r\n => => sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 10.87MB / 10.87MB 15.5s \r\n => => sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5c5b6 1.86kB / 1.86kB 0.0s \r\n => => sha256:ff08f08727e50193dcf499afc30594c47e70cc96f6fcfd1a01240524624264d0 8.65kB / 8.65kB 0.0s \r\n => => sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 5.15MB / 5.15MB 6.4s \r\n => => sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 54.57MB / 54.57MB 37.7s \r\n => => sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 196.45MB / 196.45MB 82.3s \r\n => => sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 6.29MB / 6.29MB 26.6s \r\n => => extracting sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 5.4s \r\n => => sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 16.81MB / 16.81MB 40.5s \r\n => => extracting sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 0.6s \r\n => => extracting sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 0.6s \r\n => => sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 232B / 232B 40.1s \r\n => => extracting sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 5.8s \r\n => => sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 2.35MB / 2.35MB 41.8s \r\n => => extracting sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 18.8s \r\n => => extracting sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 1.2s \r\n => => extracting sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 2.9s \r\n => => extracting sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 0.0s \r\n => => extracting sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 0.8s \r\n => [2/5] COPY . 
/app 2.3s \r\n => [3/5] WORKDIR /app 0.2s \r\n => [4/5] RUN pip install -U datasette 26.9s \r\n => [5/5] RUN datasette inspect covid.db --inspect-file inspect-data.json 3.1s\r\n => exporting to image 1.2s \r\n => => exporting layers 1.2s \r\n => => writing image sha256:b5db0c205cd3454c21fbb00ecf6043f261540bcf91c2dfc36d418f1a23a75d7a 0.0s\r\n\r\nUse 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them\r\nTraceback (most recent call last):\r\n \"__main__\", mod_spec)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\grott\\Anaconda3\\Scripts\\datasette.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\cli.py\", line 283, in package\r\n call(args)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\utils\\__init__.py\", line 451, in temporary_docker_directory\r\n tmp.cleanup()\r\n File \"c:\\users\\grott\\anaconda3\\lib\\tempfile.py\", line 811, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 516, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 395, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 404, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 402, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\grott\\\\AppData\\\\Local\\\\Temp\\\\tmpkb27qid3\\\\datasette'```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1479/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1023243105, "node_id": "I_kwDOBm6k_c48_XNh", "number": 1486, "title": "pipx installation instructions for plugins don't reference pipx inject", "user": {"value": 41546558, "label": "RhetTbull"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-10-12T00:43:42Z", "updated_at": "2021-10-13T21:09:11Z", "closed_at": "2021-10-13T21:09:11Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "The datasette [installation instructions](https://github.com/simonw/datasette/blob/main/docs/installation.rst) 
discuss how to install with pipx, how to upgrade with pipx, and how to upgrade plugins with pipx but do not mention how to install a plugin with pipx. You discussed this on your [blog](https://til.simonwillison.net/python/installing-upgrading-plugins-with-pipx) but looks like this didn't make it in when you updated the docs for pipx (#756). \r\n\r\nI'll submit a PR shortly to fix this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1486/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1015646369, "node_id": "I_kwDOBm6k_c48iYih", "number": 1480, "title": "Exceeding Cloud Run memory limits when deploying a 4.8G database", "user": {"value": 110420, "label": "ghing"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2021-10-04T21:20:24Z", "updated_at": "2022-10-07T04:39:10Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When I try to deploy a 4.8G SQLite database to Google Cloud Run, I get this error message:\r\n\r\n> Memory limit of 8192M exceeded with 8826M used. Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits\r\n\r\nUnfortunately, the maximum amount of memory that can be allocated to an instance is 8192M.\r\n\r\nNaively profiling the memory usage of running Datasette with this database locally on my MacBook shows the following memory usage (using Activity Monitor) when I just start up Datasette locally:\r\n\r\n- Real Memory Size: 70.6 MB\r\n- Virtual Memory Size: 4.51 GB\r\n- Shared Memory Size: 2.5 MB\r\n- Private Memory Size: 57.4 MB\r\n\r\nI'm trying to understand if there's a query or other operation that gets run during container deployment that causes memory use to be so large and if this can be avoided somehow.\r\n\r\nThis is somewhat related to #1082, but on a different platform, so I decided to open a new issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1480/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1025754125, "node_id": "I_kwDOBm6k_c49I8QN", "number": 1488, "title": "Upgrade to httpx 0.20.0 (request() got an unexpected keyword argument 'allow_redirects')", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-10-13T22:37:22Z", "updated_at": "2021-10-14T18:03:45Z", "closed_at": "2021-10-14T18:03:45Z", "author_association": "OWNER", "pull_request": null, "body": "This is caused by a change made to `httpx` in https://github.com/encode/httpx/releases/tag/0.20.0", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1488/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, 
"state_reason": "completed"} {"id": 1028115674, "node_id": "I_kwDOBm6k_c49R8za", "number": 1493, "title": "`--get '/:memory:.json?sql=select+3*5'` error with datasette 0.59", "user": {"value": 1580956, "label": "chenrui333"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-10-16T18:22:22Z", "updated_at": "2021-10-19T04:39:11Z", "closed_at": "2021-10-19T04:39:11Z", "author_association": "NONE", "pull_request": null, "body": "\ud83d\udc4b trying to upgrade the formula to use the latest release, but runs into some regression test issue with `--get` command.\r\n\r\nMy QQ is does this `datasette --get '/:memory:.json?sql=select+3*5'` supposed to return 15? Thanks!\r\n\r\nrelates to https://github.com/Homebrew/homebrew-core/pull/87369", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1493/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1033864602, "node_id": "I_kwDOBm6k_c49n4Wa", "number": 1496, "title": "Named parameters docs should include an example of a cast", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-10-22T18:56:04Z", "updated_at": "2021-10-22T19:38:23Z", "closed_at": "2021-10-22T19:34:27Z", "author_association": "OWNER", "pull_request": null, "body": "https://docs.datasette.io/en/stable/sql_queries.html#named-parameters\r\n\r\nIt's not obvious that the values from parameters are always SQLite strings, which means that you can't do e.g. integer comparisons on them without casting them first. 
The documentation here should include an example of this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1496/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1034535001, "node_id": "I_kwDOBm6k_c49qcBZ", "number": 1497, "title": "Publish to Docker Hub failing with \"libcrypt.so.1: cannot open shared object file\"", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 18, "created_at": "2021-10-24T22:57:07Z", "updated_at": "2023-01-18T17:13:45Z", "closed_at": "2021-10-24T23:36:55Z", "author_association": "OWNER", "pull_request": null, "body": "This means the Datasette 0.59.1 release has not been published to Docker Hub.\r\n\r\nHere's where that failed: https://github.com/simonw/datasette/runs/3991043374?check_suite_focus=true\r\n\r\n```\r\nPreparing to unpack .../libc6_2.32-4_amd64.deb ...\r\ndebconf: unable to initialize frontend: Dialog\r\ndebconf: (TERM is not set, so the dialog frontend is not usable.)\r\ndebconf: falling back to frontend: Readline\r\ndebconf: unable to initialize frontend: Readline\r\ndebconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/x86_64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)\r\ndebconf: falling back to frontend: Teletype\r\nChecking for services that may need to be restarted...\r\nChecking init scripts...\r\nUnpacking libc6:amd64 (2.32-4) over (2.28-10) ...\r\nSetting up libc6:amd64 (2.32-4) ...\r\n/usr/bin/perl: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory\r\ndpkg: error processing package libc6:amd64 (--configure):\r\n installed libc6:amd64 package post-installation script subprocess returned error exit status 127\r\nErrors were encountered while processing:\r\n libc6:amd64\r\nE: Sub-process /usr/bin/dpkg returned an error code (1)\r\nThe command '/bin/sh -c apt-get update && apt-get -y --no-install-recommends install software-properties-common && add-apt-repository \"deb http://httpredir.debian.org/debian sid main\" && apt-get update && apt-get -t sid install -y --no-install-recommends libsqlite3-mod-spatialite && apt-get remove -y software-properties-common && apt clean && rm -rf /var/lib/apt && rm -rf /var/lib/dpkg/info/*' returned a non-zero code: 100\r\n```\r\nSame problem when I attempted to publish using the \"Push specific Docker tag\" workflow: https://github.com/simonw/datasette/runs/3991059912?check_suite_focus=true", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1497/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1072106103, "node_id": "I_kwDOBm6k_c4_5wp3", "number": 1542, 
"title": "feature request: order and dependency of plugins (that use js)", "user": {"value": 33631, "label": "fs111"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-06T12:40:45Z", "updated_at": "2021-12-15T17:47:08Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have been playing with datasette for the last couple of weeks and it is great! I am a big fan of `datasette-cluster-map` and wanted to enhance it a bit with a what I would call a sub-plugin. I basically want to add more controls to the map that cluster map provides. I have been looking into its code and how the plugin management works, but it seems what I am trying to do is not doable without hacks in js.\r\n\r\nBasically what would like to have is a way to say load my plugin after the plugins I depend on have been loaded and rendered. There seems to be no prior art where plugins have these dependencies on the js level so I was wondering if that could be added or if it exists how to do it.\r\n\r\nBasically what I want to do is:\r\n\r\nmy-awesome-plugin has a dependency on datastte-cluster-map. Whenever datasette cluster map has finished rendering on page load, call my plugin, but no earlier. To make that work datasette probably needs some total order in which way plugins are loaded intialized.\r\n\r\nSince I am new to datastte, I may be missing something obvious, so please let me know if the above makes no sense.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1542/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1057996111, "node_id": "I_kwDOBm6k_c4_D71P", "number": 1517, "title": "Let `register_routes()` over-ride default routes within Datasette", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2021-11-19T00:22:15Z", "updated_at": "2021-11-19T03:20:00Z", "closed_at": "2021-11-19T03:07:27Z", "author_association": "OWNER", "pull_request": null, "body": "See https://github.com/simonw/datasette/issues/878#issuecomment-973554024_ - right now `register_routes()` can't replace default Datasette routes.\r\n\r\nIt would be neat if plugins could do this - especially if there was a neat documented way for them to then re-dispatch to the original route code after making some kind of modification.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1517/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1058072543, "node_id": "I_kwDOBm6k_c4_EOff", "number": 1518, "title": "Complete refactor of TableView and table.html template", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 45, "created_at": "2021-11-19T02:55:16Z", "updated_at": "2022-03-15T18:35:49Z", "closed_at": null, "author_association": "OWNER", 
"pull_request": null, "body": "Split from #878. The current `TableView` class is by far the most complex part of Datasette, and the most difficult to work on: https://github.com/simonw/datasette/blob/0.59.2/datasette/views/table.py\r\n\r\nIn #878 I started exploring a new pattern for building views. In doing so it became clear that `TableView` is the first beast that I need to slay - if I can refactor that into something neat the pattern for building other views will emerge as a natural consequence.\r\n\r\nI've been trying to build this as a `register_routes()` plugin, as originally suggested in #870 - though unfortunately it looks like those plugins can't replace existing Datasette default views at the moment, see #1517. [UPDATE: I was wrong about this, plugins can over-ride default views just fine]\r\n\r\nI also know that I want to have a fully documented template context for `table.html` as a major step on the way to Datasette 1.0, see #1510.\r\n\r\nAll of this adds up to the `TableView` factor being a major project that will unblock a whole flurry of other things - so I'm going to work on that in this separate issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1518/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1058790545, "node_id": "I_kwDOBm6k_c4_G9yR", "number": 1519, "title": "base_url is omitted in JSON and CSV views", "user": {"value": 157158, "label": "phubbard"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 22, "created_at": "2021-11-19T18:10:45Z", "updated_at": "2021-12-01T17:50:09Z", "closed_at": "2021-11-20T19:11:21Z", "author_association": "NONE", "pull_request": null, "body": "I have a datasette deployment, using Apache2 to reverse proxy:\r\n\r\n ProxyPass /ged http://thor.phfactor.net:8001\r\n ProxyPreserveHost On\r\n\r\nIn settings.json I have\r\n```json\r\n{\r\n \"base_url\": \"/ged/\",\r\n \"trace_debug\": 1,\r\n \"template_debug\": 1\r\n}\r\n```\r\nand datasette works correctly. However, if you view a query and then click on the 'This data as json, CSV' both links omit the base_url prefix and are therefore 404.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1519/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1058803238, "node_id": "I_kwDOBm6k_c4_HA4m", "number": 1520, "title": "Pattern for avoiding accidental URL over-rides", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-11-19T18:28:05Z", "updated_at": "2021-11-19T18:29:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Following #1517 I'm experimenting with a plugin that does this:\r\n```python\r\n@hookimpl\r\ndef register_routes():\r\n return [\r\n (r\"/(?P[^/]+)/(?P[^/]+?)$\", Table().view),\r\n ]\r\n```\r\nThis is supposed to replace the default table page with new code... 
but there's a problem: `/-/versions` on that instance now returns 404 `Database '-' does not exist`!\r\n\r\nNeed to figure out a pattern to avoid that happening. Plugins get to add their routes before Datasette's default routes, which is why this is happening here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1520/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1058815557, "node_id": "I_kwDOBm6k_c4_HD5F", "number": 1521, "title": "Docker configuration for exercising Datasette behind Apache mod_proxy", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-11-19T18:46:18Z", "updated_at": "2021-11-19T20:32:29Z", "closed_at": "2021-11-19T20:32:29Z", "author_association": "OWNER", "pull_request": null, "body": "> Having a live demo running on Cloud Run that proxies through Apache and uses `base_url` would be incredibly useful for replicating and debugging this kind of thing. I wonder how hard it is to run Apache and `mod_proxy` in the same Docker container as Datasette?\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1519#issuecomment-974310208_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1521/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1058896236, "node_id": "I_kwDOBm6k_c4_HXls", "number": 1522, "title": "Deploy a live instance of demos/apache-proxy", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 34, "created_at": "2021-11-19T20:32:55Z", "updated_at": "2021-11-23T03:00:34Z", "closed_at": "2021-11-20T18:51:56Z", "author_association": "OWNER", "pull_request": null, "body": "> I'll get this working on my laptop first, but then I want to get it up and running on Cloud Run - maybe with a GitHub Actions workflow in this repo that re-deploys it on manual execution.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1521#issuecomment-974322178_\r\n\r\nI started by following https://ahmet.im/blog/cloud-run-multiple-processes-easy-way/ - see example in https://github.com/ahmetb/multi-process-container-lazy-solution", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1522/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1059209412, "node_id": "I_kwDOBm6k_c4_IkDE", "number": 1523, "title": "Come up with a more elegant solution for base_url than ds.urls.path()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-20T19:05:22Z", "updated_at": 
"2021-11-20T19:05:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "While fixing #1519 I added a lot of ugly code that looks like this: https://github.com/simonw/datasette/blob/08947fa76433d18988aa1ee1d929bd8320c75fe2/datasette/facets.py#L228-L230\r\n\r\nSee these two commits in particular: fe687fd0207c4c56c4778d3e92e3505fc4b18172 and 08947fa76433d18988aa1ee1d929bd8320c75fe2\r\n\r\nIt would be great to come up with a less verbose and error-prone way of handling this problem.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1523/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1059219106, "node_id": "I_kwDOBm6k_c4_Imai", "number": 1524, "title": "Improve Apache proxy documentation, link to demo", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-11-20T20:03:14Z", "updated_at": "2021-11-20T23:34:03Z", "closed_at": "2021-11-20T23:34:03Z", "author_association": "OWNER", "pull_request": null, "body": "> The latest demo is now live at https://datasette-apache-proxy-demo.fly.dev/prefix/fixtures/sortable?_facet=pk2\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1519#issuecomment-974697824_\r\n\r\nI'm going to put out 0.59.3 bugfix release with this, but I'd like to first improve the documentation on https://docs.datasette.io/en/stable/deploying.html#apache-proxy-configuration to highlight the new demo.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1524/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1059549523, "node_id": "I_kwDOBm6k_c4_J3FT", "number": 1526, "title": "Add to vercel.json, rather than overwriting it.", "user": {"value": 192568, "label": "mroswell"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-11-22T00:47:12Z", "updated_at": "2021-11-22T04:49:45Z", "closed_at": "2021-11-22T04:13:47Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'd like to be able to add to vercel.json. But Datasette overwrites whatever I put in that file. I originally reported this here:\r\nhttps://github.com/simonw/datasette-publish-vercel/issues/51\r\n\r\nIn that case, I wanted to do a rewrite... 
and now I need to do 301 redirects (because we had to rename our site).\r\n\r\nCan this be addressed?\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1526/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1059555791, "node_id": "I_kwDOBm6k_c4_J4nP", "number": 1527, "title": "Columns starting with an underscore behave poorly in filters", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 7, "created_at": "2021-11-22T01:01:36Z", "updated_at": "2022-01-14T00:57:08Z", "closed_at": "2022-01-14T00:57:08Z", "author_association": "OWNER", "pull_request": null, "body": "Similar bug to #1525 (and #1506 before it). Start on https://latest.datasette.io/fixtures/facetable?_facet=_neighborhood - then select a neighborhood - then try to remove that filter using the little \"x\" and submitting the form again.\r\n\r\n![filter-bug](https://user-images.githubusercontent.com/9599/142786754-31d265a2-944d-4ea2-af6f-305d445a2ccb.gif)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1527/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1059509927, "node_id": "I_kwDOBm6k_c4_Jtan", "number": 1525, "title": "\"Links from other tables\" broken for columns starting with underscore", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-11-21T22:55:08Z", "updated_at": "2021-11-30T06:39:01Z", "closed_at": "2021-11-30T06:34:35Z", "author_association": "OWNER", "pull_request": null, "body": "Same bug as #1506, this time it's this link or the row page:\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1525/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1060631257, "node_id": "I_kwDOBm6k_c4_N_LZ", "number": 1528, "title": "Add new `\"sql_file\"` key to Canned Queries in metadata?", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-11-22T21:58:01Z", "updated_at": "2022-06-10T03:23:08Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Currently for canned queries, you have to inline SQL in your `metadata.yaml` like so:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql: |-\r\n select neighborhood, facet_cities.name, state\r\n from facetable\r\n join facet_cities on facetable.city_id = facet_cities.id\r\n where neighborhood like '%' || :text || '%'\r\n order by neighborhood\r\n title: Search 
neighborhoods\r\n```\r\n\r\nThis works fine, but for a few reasons, I usually have my canned queries already written in separate `.sql` files. I'd like to instead re-use those instead of re-writing it. \r\n\r\nSo, I'd like to see a new `\"sql_file\"` key that works like so:\r\n\r\n`metadata.yaml`:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql_file: neighborhood_search.sql\r\n title: Search neighborhoods\r\n```\r\n`neighborhood_search.sql`:\r\n```sql\r\nselect neighborhood, facet_cities.name, state\r\nfrom facetable\r\njoin facet_cities on facetable.city_id = facet_cities.id\r\nwhere neighborhood like '%' || :text || '%'\r\norder by neighborhood\r\n```\r\n\r\nBoth of these would work in the exact same way, where Datasette would instead open + include `neighborhood_search.sql` on startup. \r\n\r\n\r\nA few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml:\r\n\r\n- Keeping SQL in standalone SQL files means syntax highlighting and other text editor integrations in my code\r\n- Multiline strings in yaml, while functional, are a tad cumbersome and are hard to edit\r\n- Works well with other tools (can pipe `.sql` files into the `sqlite3` CLI, or use with other SQLite clients easier)\r\n- Typically my canned queries are quite long compared to everything else in my metadata.yaml, so I'd love to separate it where possible\r\n\r\nLet me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1528/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1073712378, "node_id": "I_kwDOBm6k_c4__4z6", "number": 1544, "title": "Code that detects the label column for a table is case-sensitive", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-12-07T20:01:25Z", "updated_at": "2021-12-07T20:03:43Z", "closed_at": "2021-12-07T20:03:43Z", "author_association": "OWNER", "pull_request": null, "body": "I just noticed that a column called `Name` is not being picked up as the label column for a table.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1544/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1065429936, "node_id": "I_kwDOBm6k_c4_gSuw", "number": 1532, "title": "Use datasette-table Web Component to guide the design of the JSON API for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2021-11-28T20:37:18Z", "updated_at": "2022-03-16T20:13:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I realized that one of the reasons I'm having trouble committing to nailing down the JSON API for 1.0 is that I don't use it much myself - I use the `?_shape=array` one quite often, but I don't have any 
projects that are using the default, more fully-featured API.\r\n\r\nAs an experiment I built a Web Component for embedding Datasette tables on pages - https://github.com/simonw/datasette-table - and I think it's actually going to be a really useful tool for helping me dog food the v1.0 API design.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1532/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1065431383, "node_id": "I_kwDOBm6k_c4_gTFX", "number": 1533, "title": "Add `Link: rel=\"alternate\"` header pointing to JSON for a table/query", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2021-11-28T20:43:25Z", "updated_at": "2022-02-02T07:56:51Z", "closed_at": "2022-02-02T07:49:33Z", "author_association": "OWNER", "pull_request": null, "body": "Originally explored in https://github.com/simonw/datasette-notebook/issues/2#issuecomment-980789406 - I wanted an efficient way to scan a list of URLs and figure out which if any of those corresponded to Datasette tables, canned queries or SQL output that could be represented as a table on a page.\r\n\r\nIt looks like a neat way to do that is with ` Link:` header like this:\r\n\r\n`Link: http://127.0.0.1:8058/fixtures/compound_three_primary_keys.json; rel=\"alternate\"; type=\"application/datasette+json\"`\r\n\r\nI can put a ` The query_only pragma prevents data changes on database files when enabled. When this pragma is enabled, any attempt to CREATE, DELETE, DROP, INSERT, or UPDATE will result in an [SQLITE_READONLY](https://www.sqlite.org/rescode.html#readonly) error. However, the database is not truly read-only. 
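As a minimal illustration of that behaviour - assuming a local `fixtures.db` file, and not reflecting Datasette's actual connection-opening code:

```python
import sqlite3

# Open a connection and turn on the query_only protection.
conn = sqlite3.connect("fixtures.db")  # illustrative file name
conn.execute("PRAGMA query_only = 1")

try:
    conn.execute("CREATE TABLE accidental_write (id INTEGER)")
except sqlite3.OperationalError as e:
    # SQLITE_READONLY surfaces as "attempt to write a readonly database"
    print(e)
```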
You can still run a [checkpoint](https://www.sqlite.org/wal.html#ckpt) or a [COMMIT](https://www.sqlite.org/lang_transaction.html) and the return value of the [sqlite3_db_readonly()](https://www.sqlite.org/c3ref/db_readonly.html) routine is not affected.\r\n\r\nWould it be worth adding this as an extra protection against accidental writes to a DB file over a read-only connection?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1539/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1068791148, "node_id": "I_kwDOBm6k_c4_tHVs", "number": 1540, "title": "Idea: hover to reveal details of linked row", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2021-12-01T19:28:07Z", "updated_at": "2021-12-09T23:38:39Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "\"fara__item_version__7_rows_where_where__item___5236\"\r\n\r\nHovering over that could work a little bit like GitHub issue links:\r\n\r\n![hover](https://user-images.githubusercontent.com/9599/144300537-9cd9e9af-ac16-42db-842f-37661bc94063.gif)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1540/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1069881276, "node_id": "I_kwDOBm6k_c4_xRe8", "number": 1541, "title": "Different default layout for row page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-02T18:56:36Z", "updated_at": "2021-12-02T18:56:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The row page displays as a table even though it only has one table row.\r\n\r\nmaybe default to the same display as the narrow page version, even for wide pages?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1541/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1955676270, "node_id": "I_kwDOBm6k_c50kUBu", "number": 2201, "title": "Discord invite link is invalid", "user": {"value": 11708906, "label": "andrewsanchez"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-21T21:50:05Z", "updated_at": "2023-10-21T21:50:05Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw and returns the following:\r\n\r\n\"CleanShot\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/2201/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1977726056, "node_id": "I_kwDOBm6k_c514bRo", "number": 2203, "title": "custom plugin not seen as sql function", "user": {"value": 7113541, "label": "LyzardKing"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-05T10:30:19Z", "updated_at": "2023-11-05T10:30:19Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi, I'm not sure if this is the right repo for this issue.\r\n\r\nI'm using datasette with the parquet (to read a duckdb), and jellyfish plugins. Both work perfectly.\r\n\r\nNow I need to create a simple plugin that uses the python rouge package and returns a similarity score (similarly to how the jellyfish plugin works).\r\nIf I create a custom plugin, even the example hello_world one, copied directly from the tutorial, I get the following error:\r\n```duckdb.duckdb.CatalogException: Catalog Error: Scalar Function with name hello_world does not exist!```\r\n\r\nSince the jellyfish plugin doesn't do anything more complex, I'm wondering if there is some other kind of issue with my setup.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2203/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978023780, "node_id": "I_kwDOBm6k_c515j9k", "number": 2205, "title": "request.post_vars() method obliterates form keys with multiple values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 3, "created_at": "2023-11-05T23:25:08Z", "updated_at": "2023-11-06T04:10:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139\r\n\r\nIn GET requests you can do `?foo=1&foo=2` - you can do the same in POST requests, but the `dict()` call here eliminates those duplicates.\r\n\r\nYou can't even try calling `post_body()` and implement your own custom parsing because of:\r\n- #2204", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2205/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978022687, "node_id": "I_kwDOBm6k_c515jsf", "number": 2204, "title": "request.post_body() can only be called once", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-05T23:22:03Z", "updated_at": "2023-11-05T23:23:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This code here:\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L127-L135\r\n\r\nIt 
consumes the messages, which means if you try to call it a second time you won't be able to get at the body.\r\n\r\nThis is efficient - we don't end up with a `request` object property with potentially megabytes of content that we never look at again - but it's inconvenient for cases like middleware or functions where we don't know if the body has been consumed yet or not.\r\n\r\nPotential solution: set `request._body` the first time it is called, and return that on subsequent calls.\r\n\r\nPotential optimization: only do this for bodies that are shorter than a certain threshold - maybe 1MB - and raise an exception if you attempt to call `post_body()` multiple times against one of those larger bodies.\r\n\r\nI'm a bit nervous about that option though, since it could result in errors that don't show up in testing but do show up in production.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2204/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1994845152, "node_id": "I_kwDOBm6k_c525uvg", "number": 2207, "title": "ModuleNotFoundError: No module named 'click_default_group", "user": {"value": 283441, "label": "honzajavorek"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-15T14:04:32Z", "updated_at": "2023-11-15T14:04:32Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "No matter what I do, I'm getting this error:\r\n\r\n```\r\n$ datasette\r\nTraceback (most recent call last):\r\n File \"/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/bin/datasette\", line 5, in <module>\r\n from datasette.cli import cli\r\n File \"/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/lib/python3.11/site-packages/datasette/cli.py\", line 6, in <module>\r\n from click_default_group import DefaultGroup\r\nModuleNotFoundError: No module named 'click_default_group'\r\n```\r\n\r\nI have datasette in my dependencies like this:\r\n\r\n```toml\r\n[tool.poetry.group.dev.dependencies]\r\ndatasette = {version = \"1.0a7\", allow-prereleases = true}\r\n```\r\n\r\nI had the latest regular version (not pre-release) there originally, but the result was the same:\r\n\r\n```toml\r\n[tool.poetry.group.dev.dependencies]\r\ndatasette = \"0.64.5\"\r\n```\r\n\r\nFull pyproject.toml is at https://github.com/honzajavorek/junior.guru/ Previously datasette worked for me, but I guess something had to upgrade and now I can't even launch it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2207/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1994857251, "node_id": "I_kwDOBm6k_c525xsj", "number": 2208, "title": "No suggested facets when a column named 'value' is included", "user": {"value": 198537, "label": "rgieseke"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-11-15T14:11:17Z", "updated_at": "2023-11-15T14:18:59Z", "closed_at": null, "author_association": 
"CONTRIBUTOR", "pull_request": null, "body": "When a column named 'value' is included there are no suggested facets is shown as the query uses an alias of 'value'.\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L168-L174\r\n\r\nCurrently the following is shown (from https://latest.datasette.io/fixtures/facetable)\r\n\r\n![image](https://github.com/simonw/datasette/assets/198537/a919509a-ea88-461b-b25b-8b776720c7c5)\r\n\r\nWhen I add a column named 'value' only the JSON facets are processed.\r\n\r\n![image](https://github.com/simonw/datasette/assets/198537/092bd0b3-4c20-434e-88f8-47e2b8994a1d)\r\n\r\nI think that not using aliases could be a solution (except if someone wants to use a column named `count(*)` though this seems to be unlikely). I'll open a PR with that.\r\n\r\nThere is also a TODO with a similar question in the same file. I have not looked into that yet.\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L512", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2208/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2028698018, "node_id": "I_kwDOBm6k_c5463mi", "number": 2213, "title": "feature request: gzip compression of database downloads", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-12-06T14:35:03Z", "updated_at": "2023-12-06T15:05:46Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed. 
\r\n\r\nThis is similar to #1213, but for me, I don't need Datasette to compress HTML and JSON because my CDN layer does that for me. However, Cloudflare, at least, will not compress a mimetype of \"application\"\r\n\r\n(see the list of mimetypes: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2213/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2019811176, "node_id": "I_kwDOBm6k_c54Y99o", "number": 2211, "title": "Unreachable exception handlers for `sqlite3.OperationalError`", "user": {"value": 1214074, "label": "mattparmett"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-12-01T00:50:22Z", "updated_at": "2023-12-01T00:50:22Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There are several places where `sqlite3.OperationalError` is caught as part of an exception handler which catches multiple exceptions, but is then caught again immediately afterwards by a dedicated exception handler.\r\n\r\nBecause the exception will be caught by the first handler, the logic in the second handler is unreachable and will never be executed. If this is intended behavior, the second handler can be removed. If this is not intended, and the second handler should be the one that catches this exception, then `sqlite3.OperationalError` should be removed from the tuple of exceptions in the first handler.\r\n\r\nThis issue was found via a CodeQL query on the repository, and I've listed the occurrences found by the query below. There may be other instances of this issue in the code that were not surfaced by the query. I'd be happy to share the query if others would like to view or run it.\r\n\r\nOne example:\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/views/database.py#L534-L537\r\n\r\nOther instances:\r\n\r\nhttps://github.com/simonw/datasette/blob/main/datasette/views/base.py#L266-L270\r\nhttps://github.com/simonw/datasette/blob/main/datasette/views/base.py#L452-L456", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2211/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2029908157, "node_id": "I_kwDOBm6k_c54_fC9", "number": 2214, "title": "CSV export fails for some `text` foreign key references", "user": {"value": 2874, "label": "precipice"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-12-07T05:04:34Z", "updated_at": "2023-12-07T07:36:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm starting this issue without a clear reproduction in case someone else has seen this behavior, and to use the issue as a notebook for research. 
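The unreachable-handler pattern described in #2211 above reduces to a small sketch (simplified, not the actual Datasette code):

```python
import sqlite3

def run_query(conn: sqlite3.Connection, sql: str):
    try:
        return conn.execute(sql).fetchall()
    except (sqlite3.DatabaseError, sqlite3.OperationalError) as e:
        # sqlite3.OperationalError appears in this tuple (and is also a
        # subclass of DatabaseError), so this branch always catches it...
        raise RuntimeError(f"generic handler: {e}")
    except sqlite3.OperationalError:
        # ...which makes this dedicated handler unreachable dead code.
        raise RuntimeError("never reached")
```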
\r\n\r\nI'm using Datasette with the [SWITRS](https://iswitrs.chp.ca.gov/) data set, which is a California Highway Patrol collection of traffic incident data from the past decade or so. I receive data from them in CSV and want to work with it in Datasette, then export it to CSV for mapping in Felt.com.\r\n\r\nTheir data makes extensive use of codes for incident column data (`1` for `Monday` and so on), some of it integer codes and some of it letter/text codes. The text codes are sometimes blank or `-`. During import, I'm creating lookup tables for foreign key references to make the Datasette UI presentation of the data easier to read.\r\n\r\nIf I import the data and set up the integer foreign keys, everything works fine, but if I set up the text foreign keys, CSV export starts to fail. \r\n\r\nThe foreign key configuration is as follows:\r\n\r\n```\r\n# Some tables use integer ids, like sensible tables do. Let's import them first\r\n# since we favor them.\r\n\r\nfor TABLE in DAY_OF_WEEK CHP_SHIFT POPULATION SPECIAL_COND BEAT_TYPE COLLISION_SEVERITY\r\ndo\r\n\tsqlite-utils create-table records.db $TABLE id integer name text --pk=id\r\n\tsqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv\r\n\tsqlite-utils add-foreign-key records.db collisions $TABLE $TABLE id\r\n\tsqlite-utils create-index records.db collisions $TABLE\r\ndone\r\n\r\n# *Other* tables use letter keys, like they were raised by WOLVES. Let's put them\r\n# at the end of the import queue.\r\n\r\nfor TABLE in WEATHER_1 WEATHER_2 LOCATION_TYPE RAMP_INTERSECTION SIDE_OF_HWY \\\r\nPRIMARY_COLL_FACTOR PCF_CODE_OF_VIOL PCF_VIOL_CATEGORY TYPE_OF_COLLISION MVIW \\\r\nPED_ACTION ROAD_SURFACE ROAD_COND_1 ROAD_COND_2 LIGHTING CONTROL_DEVICE \\\r\nSTWD_VEHTYPE_AT_FAULT CHP_VEHTYPE_AT_FAULT PRIMARY_RAMP SECONDARY_RAMP\r\ndo\r\n\tsqlite-utils create-table records.db $TABLE key text name text --pk=key\r\n\tsqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv\r\n\tsqlite-utils add-foreign-key records.db collisions $TABLE $TABLE key\r\n\tsqlite-utils create-index records.db collisions $TABLE\r\ndone\r\n```\r\n\r\nYou can see the full code and import script here: https://github.com/radical-bike-lobby/switrs-db\r\n\r\nIf I run this code and then hit the CSV export link in the Datasette interface (the simple link or the \"advanced\" dialog), export fails after a small number of CSV rows are written. I am not seeing any detailed error messages but this appears in the logging output:\r\n\r\n```\r\nINFO: 127.0.0.1:57885 - \"GET /records/collisions.csv?_facet=PRIMARY_RD&PRIMARY_RD=ASHBY+AV&_labels=on&_size=max HTTP/1.1\" 200 OK\r\nCaught this error: \r\n\r\n```\r\n\r\n(No other output follows `error:` other than a blank line.)\r\n\r\nI've stared at the rows directly after the error occurs and can't yet see what is causing the problem. 
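One way to hunt for the minimal reproduction mentioned above is to rebuild just the text-keyed lookup pattern in isolation; every table and column name in this sketch is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE weather_1 (key TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE collisions (
        id INTEGER PRIMARY KEY,
        weather_1 TEXT REFERENCES weather_1(key)
    );
    INSERT INTO weather_1 VALUES ('A', 'Clear'), ('-', 'Not Stated');
    -- Blank and '-' text keys are the suspected trigger:
    INSERT INTO collisions VALUES (1, 'A'), (2, '-'), (3, '');
    """
)
# Mimic the _labels=on lookup that the CSV export performs:
for row in conn.execute(
    "SELECT c.id, c.weather_1, w.name FROM collisions c "
    "LEFT JOIN weather_1 w ON c.weather_1 = w.key"
):
    print(row)
```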
I'm going to set up a development environment and see if I get any more detailed error output, and then stare more at some problematic lines to see if I can get a simple reproduction.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2214/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2023057255, "node_id": "I_kwDOBm6k_c54lWdn", "number": 2212, "title": "Can't filter with numbers", "user": {"value": 605070, "label": "fzakaria"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-12-04T05:26:29Z", "updated_at": "2023-12-04T05:26:29Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a schema that uses numbers for a column (actually it's a boolean 1 or 0 but SQLite doesn't have Boolean).\r\nI can't seem to get the facet to work or even filtering on this column.\r\n\r\nMy guess is that Datasette is \"stringifying\" the number and it's not matching?\r\nExample: https://debian-sqlelf.fly.dev/debian/elf_symbols?_sort_desc=name&_facet=exported&exported=0", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2212/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1087913724, "node_id": "I_kwDOBm6k_c5A2D78", "number": 1577, "title": "Drop support for Python 3.6", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 6, "created_at": "2021-12-23T18:17:03Z", "updated_at": "2022-01-25T23:30:03Z", "closed_at": "2022-01-20T04:31:41Z", "author_association": "OWNER", "pull_request": null, "body": "*Original title: Decide when to drop support for Python 3.6*\r\n\r\n> `context_vars` can solve this but they were introduced in Python 3.7: https://www.python.org/dev/peps/pep-0567/\r\n>\r\n> Python 3.6 support ends in a few days time, and it looks like Glitch has updated to 3.7 now - so maybe I can get away with Datasette needing 3.7 these days?\r\n>\r\n> Tweeted about that here: https://twitter.com/simonw/status/1473761478155010048\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1576#issuecomment-999878907_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1577/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1087919372, "node_id": "I_kwDOBm6k_c5A2FUM", "number": 1578, "title": "Confirm if documented nginx proxy config works for row pages with escaped characters in their primary key", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-12-23T18:27:59Z", "updated_at": "2021-12-24T21:33:19Z", 
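The \"stringifying\" guess in #2212 above is easy to check directly against SQLite, where an INTEGER column compared to a TEXT value never matches (table name taken from the demo URL in that issue):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elf_symbols (name TEXT, exported INTEGER)")
conn.executemany(
    "INSERT INTO elf_symbols VALUES (?, ?)", [("main", 0), ("init", 1)]
)
# TEXT '0' and INTEGER 0 have different storage classes, so a
# stringified filter value returns no rows:
print(conn.execute(
    "SELECT count(*) FROM elf_symbols WHERE exported = '0'"
).fetchone())  # (0,)
print(conn.execute(
    "SELECT count(*) FROM elf_symbols WHERE exported = 0"
).fetchone())  # (1,)
```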
"closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Found this while working on https://github.com/simonw/datasette-tiddlywiki\r\n\r\n\"image\"\r\n\r\nThen clicking on `/tiddlywiki/tiddlers/%24%3A%2FDefaultTiddlers` returns a 404.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1578/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1087931918, "node_id": "I_kwDOBm6k_c5A2IYO", "number": 1579, "title": "`.execute_write(... block=True)` should be the default behaviour", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 7, "created_at": "2021-12-23T18:54:28Z", "updated_at": "2022-01-13T22:28:08Z", "closed_at": "2021-12-23T19:18:26Z", "author_association": "OWNER", "pull_request": null, "body": "Every single piece of code I've written against the write APIs has used the `block=True` option to wait for the result.\r\n\r\nWithout that, it instead fires the write into the queue but then continues even before it has finished executing.\r\n\r\n`block=True` should clearly be the default behaviour here!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1579/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1089529555, "node_id": "I_kwDOBm6k_c5A8ObT", "number": 1581, "title": "when hashed urls are turned on, the _memory db has improperly long-lived cache expiry", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-28T00:05:48Z", "updated_at": "2022-03-24T04:08:18Z", "closed_at": "2022-03-24T04:08:18Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "if hashed_urls are on, then a -000 suffix is added to the `_memory` database, and the cache settings are set just as if it was a normal hashed database.\r\n\r\nin particular, this header is set:\r\n\r\n`cache-control: max-age=31536000`\r\n\r\nthis is not appropriate because the `_memory-000` database isn't really hashed based on the contents of the databases (see #1561).\r\n\r\nEither the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1581/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1076057610, "node_id": "I_kwDOBm6k_c5AI1YK", "number": 1546, "title": "validating the sql", "user": {"value": 50336793, "label": "jadsongmatos"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-09T21:35:57Z", 
"updated_at": "2021-12-18T02:05:17Z", "closed_at": "2021-12-18T02:05:16Z", "author_association": "NONE", "pull_request": null, "body": "Could someone tell me that part of the code is responsible for validating the sql that guarantees that only a table can be read", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1546/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1075893249, "node_id": "I_kwDOBm6k_c5AINQB", "number": 1545, "title": "Custom pages don't work on windows", "user": {"value": 559711, "label": "ryascott"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-09T18:53:05Z", "updated_at": "2022-02-03T02:08:31Z", "closed_at": "2022-02-03T01:58:35Z", "author_association": "NONE", "pull_request": null, "body": "It seems that custom pages don't work when put in templates/pages\r\n\r\nTo reproduce on datasette version 0.59.4 using PowerShell on WIndows 10 with Python 3.10.0\r\n\r\n mkdir -p templates/pages\r\n\r\n echo \"hello world\" >> templates/pages/about.html\r\n\r\nStart datasette\r\n \r\n datasette --template-dir templates/\r\n\r\nNavigate to [http://127.0.0.1:8001/about](url) and receive:\r\n \r\n Error 404:\r\n Database not found: about\r\n\r\n\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1545/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1076388044, "node_id": "I_kwDOBm6k_c5AKGDM", "number": 1547, "title": "Writable canned queries fail to load custom templates", "user": {"value": 127565, "label": "wragge"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 6, "created_at": "2021-12-10T03:31:48Z", "updated_at": "2022-01-13T22:27:59Z", "closed_at": "2021-12-19T21:12:00Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I've created a canned query with `\"write\": true` set. I've also created a custom template for it, but the template doesn't seem to be found. If I look in the HTML I see (`stock_exchange` is the db name):\r\n\r\n``\r\n\r\nMy non-writeable canned queries pick up custom templates as expected, and if I look at their HTML I see the canned query name added to the templates considered (the canned query here is `date_search`):\r\n\r\n``\r\n\r\nSo it seems like the writeable canned query is behaving differently for some reason. Is it an authentication thing? 
I'm using the built in `--root` authentication.\r\n\r\nThanks!\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1547/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1077628073, "node_id": "I_kwDOBm6k_c5AO0yp", "number": 1550, "title": "Research option for returning all rows from arbitrary query", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-12-11T19:31:11Z", "updated_at": "2021-12-11T23:43:24Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by thinking about #1549 - returning ALL rows from an arbitrary query is a lot easier if you just run that query and keep iterating over the cursor.\r\n\r\nI've avoided doing that in the past because it could tie up a connection for a long time - but in private instances this wouldn't be such a problem.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1550/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1077620955, "node_id": "I_kwDOBm6k_c5AOzDb", "number": 1549, "title": "Redesign CSV export to improve usability", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2021-12-11T19:02:12Z", "updated_at": "2022-04-04T11:17:13Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "*Original title: Set content type for CSV so that browsers will attempt to download instead opening in the browser*\r\n\r\nRight now, if the user clicks on the CSV related to a table or a query, the response header for the content type is \r\n\r\n\"content-type: text/plain; charset=utf-8\"\r\n\r\nMost browsers will try to open a file with this content-type in the browser. 
\r\n\r\nThis is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program, they next need to save the page through their browser.\r\n\r\nIt would be great if the response headers could be something like \r\n\r\n```\r\nContent-type: text/csv\r\nContent-disposition: attachment; filename=MyVerySpecial.csv\r\n```\r\n\r\nwhich would lead browsers to open a download dialog.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1549/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1077893013, "node_id": "I_kwDOBm6k_c5AP1eV", "number": 1551, "title": "`keep_blank_values=True` when parsing `request.args`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 3, "created_at": "2021-12-12T19:53:07Z", "updated_at": "2022-01-13T22:26:04Z", "closed_at": "2021-12-12T20:02:01Z", "author_association": "OWNER", "pull_request": null, "body": "This code in `TableView` wouldn't be necessary: https://github.com/simonw/datasette/blob/492f9835aa7e90540dd0c6324282b109f73df71b/datasette/views/table.py#L396-L399\r\n\r\nIf that happened here instead: https://github.com/simonw/datasette/blob/492f9835aa7e90540dd0c6324282b109f73df71b/datasette/utils/asgi.py#L98-L100\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1518#issuecomment-991827468_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1551/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1078702875, "node_id": "I_kwDOBm6k_c5AS7Mb", "number": 1552, "title": "Allow to set `facets_array` in metadata (like current `facets`)", "user": {"value": 3556, "label": "davidbgk"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 9, "created_at": "2021-12-13T16:00:44Z", "updated_at": "2022-01-13T22:26:15Z", "closed_at": "2021-12-16T18:47:48Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "For now, you can set a `facets` value (array) in your metadata file but I couldn't find a way to set a `facets_array` in order to provide default facets for arrays (like tags). 
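A sketch of how the requested key might be exercised through the programmatic API; `facets_array` here is the proposed key from this issue, not something Datasette currently supports:

```python
from datasette.app import Datasette

ds = Datasette(
    files=["fixtures.db"],
    metadata={
        "databases": {
            "fixtures": {
                "tables": {
                    "facetable": {
                        "facets": ["state"],
                        "facets_array": ["tags"],  # proposed, hypothetical key
                    }
                }
            }
        }
    },
)
```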
My use-case is to access [that kind of view](https://latest.datasette.io/fixtures/facetable?_facet_array=tags) by default, without URL parameters, as with other default facets.\r\n\r\n_I'm new to datasette, and I'm willing to help with a PR if that is not already implemented and I missed it!_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1552/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1079111498, "node_id": "I_kwDOBm6k_c5AUe9K", "number": 1553, "title": "if csv export is truncated in non streaming mode set informative response header", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-13T22:50:44Z", "updated_at": "2021-12-16T19:17:28Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Streaming mode is currently not enabled for custom queries, so the results will be truncated to the max row limit.\r\n\r\nIt would be great if, when a response is truncated, an informative header signalling that were set on the response.\r\n\r\nI need to write some pagination code for getting full results back for a custom query, and it would make the code much better if I could reliably know when there is nothing more to limit/offset.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1553/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1079149656, "node_id": "I_kwDOBm6k_c5AUoRY", "number": 1555, "title": "Optimize all those calls to index_list and foreign_key_list", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 27, "created_at": "2021-12-13T23:50:56Z", "updated_at": "2022-01-13T22:27:32Z", "closed_at": "2021-12-19T20:55:59Z", "author_association": "OWNER", "pull_request": null, "body": "On the first hit to a restarted index I'm seeing this in the SQL traces: https://latest-with-plugins.datasette.io/github/commits?_trace=1\r\n\r\nI imagine this could be sped up a lot using tricks like this one from the SQLite documentation: https://sqlite.org/pragma.html#pragfunc\r\n\r\n```sql\r\nSELECT DISTINCT m.name || '.' 
|| ii.name AS 'indexed-columns'\r\n FROM sqlite_schema AS m,\r\n pragma_index_list(m.name) AS il,\r\n pragma_index_info(il.name) AS ii\r\n WHERE m.type='table'\r\n ORDER BY 1;\r\n```\r\nhttps://latest-with-plugins.datasette.io/fixtures?sql=SELECT+DISTINCT+m.name+%7C%7C+%27.%27+%7C%7C+ii.name+AS+%27indexed-columns%27%0D%0A++FROM+sqlite_schema+AS+m%2C%0D%0A+++++++pragma_index_list%28m.name%29+AS+il%2C%0D%0A+++++++pragma_index_info%28il.name%29+AS+ii%0D%0A+WHERE+m.type%3D%27table%27%0D%0A+ORDER+BY+1%3B", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1555/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1081318247, "node_id": "I_kwDOBm6k_c5Ac5tn", "number": 1556, "title": "Show count of facet values always, not just for `?_facet_size=max`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 1, "created_at": "2021-12-15T17:49:01Z", "updated_at": "2022-01-13T22:26:07Z", "closed_at": "2021-12-15T17:58:06Z", "author_association": "OWNER", "pull_request": null, "body": "> You've caused me to rethink this feature - I no longer think there's value in only showing these numbers if `?_facet_size=max` as opposed to all of the time.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1423#issuecomment-995023410_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1556/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1082564912, "node_id": "I_kwDOBm6k_c5AhqEw", "number": 1557, "title": "`?_nosuggest=1` parameter for disabling facet suggestions on table view", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 1, "created_at": "2021-12-16T19:21:42Z", "updated_at": "2022-01-13T22:26:48Z", "closed_at": "2021-12-16T19:24:59Z", "author_association": "OWNER", "pull_request": null, "body": "Found I wanted this while I was debugging #625 just to clean up the debug traces, but it makes sense as a partner to `?_nofacet=1` and `?_nocount=1` from #1350 and #1353.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1557/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1082584499, "node_id": "I_kwDOBm6k_c5Ahu2z", "number": 1558, "title": "Redesign `facet_results` JSON structure prior to Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-12-16T19:45:10Z", "updated_at": "2023-01-09T15:31:17Z", "closed_at": 
null, "author_association": "OWNER", "pull_request": null, "body": "> Decision: as an initial fix I'm going to de-duplicate those keys by using `tags__array` etc - with a `_2` on the end if that key is already used.\r\n>\r\n> I'll open a separate issue to redesign this better for Datasette 1.0.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/625#issuecomment-996130862_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1558/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1082746149, "node_id": "I_kwDOBm6k_c5AiWUl", "number": 1560, "title": "Table page title has \"where where\" in it", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 0, "created_at": "2021-12-17T00:05:48Z", "updated_at": "2022-01-13T22:28:35Z", "closed_at": "2022-01-13T22:20:15Z", "author_association": "OWNER", "pull_request": null, "body": "Just noticed this while working on #1518.\r\n\r\n```\r\n% curl -s 'https://latest.datasette.io/fixtures/facetable?_sort=pk&on_earth__exact=1' | grep -C 1 ''\r\n<head>\r\n <title>fixtures: facetable: 14 rows\r\n where where on_earth = 1 sorted by pk\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1560/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1082765654, "node_id": "I_kwDOBm6k_c5AibFW", "number": 1561, "title": "add hash id to \"_memory\" url if hashed url mode is turned on and crossdb is also turned on", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-17T00:45:12Z", "updated_at": "2022-03-19T04:45:40Z", "closed_at": "2022-03-19T04:45:40Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. 
\r\n\r\nOne way that it could work is to have the _memory hash be a hash of all the individual databases.\r\n\r\nOtherwise, crossdb queries can get quite out of date if using aggressive caching.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1561/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1083657868, "node_id": "I_kwDOBm6k_c5Al06M", "number": 1565, "title": "Documented JavaScript variables on different templates made available for plugins", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2021-12-17T22:30:51Z", "updated_at": "2021-12-19T22:37:29Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "While working on https://github.com/simonw/datasette-leaflet-freedraw/issues/10 I found myself writing this atrocity to figure out the SQL query used for a specific table page:\r\n\r\n```javascript\r\nlet innerSql = Array.from(document.getElementsByTagName(\"span\")).filter(\r\n el => el.innerText == \"View and edit SQL\"\r\n)[0].parentElement.getAttribute(\"title\")\r\n```\r\nThis is obviously bad - it's very brittle, and will break if I ever change the text on that link (like localizing it for example).\r\n\r\nInstead, I think pages like that one should have a block of script at the bottom something like this:\r\n```javascript\r\nwindow.datasette = window.datasette || {};\r\ndatasette.view_name = 'table';\r\ndatasette.table_sql = 'select * from ...';\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1565/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1083669410, "node_id": "I_kwDOBm6k_c5Al3ui", "number": 1566, "title": "Release Datasette 0.60", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 6, "created_at": "2021-12-17T22:58:12Z", "updated_at": "2022-01-14T01:59:55Z", "closed_at": "2022-01-14T01:59:55Z", "author_association": "OWNER", "pull_request": null, "body": "Using this as a tracking issue. I'm hoping to get the bulk of the JSON redesign work from the refactor in #1554 in for this release.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1566/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1083573206, "node_id": "I_kwDOBm6k_c5AlgPW", "number": 1563, "title": "Datasette(... 
files=) should not be a required argument", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 2, "created_at": "2021-12-17T19:54:18Z", "updated_at": "2022-01-13T22:27:18Z", "closed_at": "2021-12-18T02:19:40Z", "author_association": "OWNER", "pull_request": null, "body": "```pycon\r\n>>> ds = Datasette(memory=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nTypeError: __init__() missing 1 required positional argument: 'files'\r\n>>> ds = Datasette(memory=True, files=[])\r\n```\r\nI wanted to create an in-memory Datasette for running some tests, no point in forcing me to pass `files=[]` to do that.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1563/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1083581011, "node_id": "I_kwDOBm6k_c5AliJT", "number": 1564, "title": "_prepare_connection not called on write connections", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 1, "created_at": "2021-12-17T20:06:47Z", "updated_at": "2022-01-20T21:29:43Z", "closed_at": "2021-12-18T01:58:44Z", "author_association": "OWNER", "pull_request": null, "body": "I was trying to initialize SpatiaLite in a write connection:\r\n```pycon\r\n>>> from datasette.app import Datasette\r\n>>> ds = Datasette(memory=True, files=[], sqlite_extensions=[\"spatialite\"])\r\n>>> db = ds.add_memory_database('geo')\r\n>>> await db.execute_write(\"select InitSpatialMetadata(1)\")\r\nUUID('3f143baa-4e3d-5842-a36f-4fa2f683b72f')\r\nno such function: InitSpatialMetadata\r\n```\r\nIt looks like the code that loads additional modules only works on read-only connections, not on write connections:\r\n\r\nhttps://github.com/simonw/datasette/blob/92a5280d2e75c39424a75ad6226fc74400ae984f/datasette/database.py#L146-L153\r\n\r\nCompared to:\r\n\r\nhttps://github.com/simonw/datasette/blob/92a5280d2e75c39424a75ad6226fc74400ae984f/datasette/database.py#L124-L132", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1564/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1083921371, "node_id": "I_kwDOBm6k_c5Am1Pb", "number": 1570, "title": "Separate db.execute_write() into three methods", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 2, "created_at": "2021-12-18T18:45:54Z", "updated_at": "2022-01-13T22:27:38Z", "closed_at": "2021-12-18T18:57:25Z", "author_association": "OWNER", "pull_request": null, "body": "> Rather than adding an `executemany=True` parameter, I'm now thinking a better design might be to have three methods:\r\n>\r\n> - `db.execute_write(sql, params=None, block=False)`\r\n> - `db.execute_write_script(sql, block=False)`\r\n> - `db.execute_write_many(sql, 
params_seq, block=False)`\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1555#issuecomment-997267416_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1570/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1083927147, "node_id": "I_kwDOBm6k_c5Am2pr", "number": 1571, "title": "Track number of executions for execute_write_many() in traces", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 0, "created_at": "2021-12-18T19:16:17Z", "updated_at": "2022-01-13T22:27:49Z", "closed_at": "2021-12-19T20:30:40Z", "author_association": "OWNER", "pull_request": null, "body": "Spotted while working on #1555\r\n\r\n[screenshot of the trace output]\r\n\r\nThere's no indication there of how many times `execute_write_many()` executed the SQL.\r\n\r\nSolving this is a tiny bit tricky because `params_seq` is an iterator that we don't want to exhaust before passing it to `conn.executemany()` - so we need to instead wrap it in something that counts how many times it was called.\r\n\r\nBut then we need a way to attach that to the trace here: https://github.com/simonw/datasette/blob/d637ed46762fdbbd8e32b86f258cd9a53c1cfdc7/datasette/database.py#L115-L122\r\n\r\nSo we probably need to redesign the `trace()` decorator to allow extra key/value pairs to be attached to it within the `with` statement.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1571/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1083718998, "node_id": "I_kwDOBm6k_c5AmD1W", "number": 1567, "title": "Remove undocumented sqlite_functions mechanism", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 0, "created_at": "2021-12-18T01:51:10Z", "updated_at": "2022-01-13T22:27:04Z", "closed_at": "2021-12-18T01:54:46Z", "author_association": "OWNER", "pull_request": null, "body": "I added this in 0b8c1b0a6da9cb8ac0d28cc90dd783de87554036 but it's never been documented and the same thing can now be achieved using the `prepare_connection` plugin hook.\r\n\r\nhttps://github.com/simonw/datasette/blob/0c91e59d2bbfc08884cfcf5d1b902a2f4968b7ff/datasette/app.py#L262\r\n\r\nhttps://github.com/simonw/datasette/blob/0c91e59d2bbfc08884cfcf5d1b902a2f4968b7ff/datasette/app.py#L551-L552\r\n\r\nIt's used here in the tests:\r\n\r\nhttps://github.com/simonw/datasette/blob/69244a617b1118dcbd04a8f102173f04680cf08c/tests/fixtures.py#L156", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1567/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} 
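
The `prepare_connection` hook mentioned in #1567 above is the documented way for a plugin to register custom SQL functions on every connection Datasette opens. A minimal sketch of such a plugin; the `random_integer` function is purely illustrative:

```python
import random

from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Called for each new SQLite connection Datasette opens; register
    # a two-argument SQL function so queries can call random_integer(low, high).
    conn.create_function("random_integer", 2, random.randint)
```

Once installed as a plugin, the function becomes callable from any query, e.g. `select random_integer(1, 10)`.
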
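The three-method split proposed in #1570 above could be sketched as thin wrappers over a shared dispatch method. This is not Datasette's actual implementation: the real `execute_write_fn()` hands work to a dedicated write thread, which is stubbed out here with `asyncio.to_thread()` so the example runs standalone:

```python
import asyncio
import sqlite3


class Database:
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path, check_same_thread=False)

    async def execute_write_fn(self, fn, block=False):
        # Stand-in for the real write-thread dispatch; block is ignored here.
        return await asyncio.to_thread(fn, self.conn)

    async def execute_write(self, sql, params=None, block=False):
        def _inner(conn):
            with conn:  # commits on success, rolls back on error
                return conn.execute(sql, params or [])

        return await self.execute_write_fn(_inner, block=block)

    async def execute_write_script(self, sql, block=False):
        def _inner(conn):
            with conn:
                return conn.executescript(sql)  # scripts take no params

        return await self.execute_write_fn(_inner, block=block)

    async def execute_write_many(self, sql, params_seq, block=False):
        def _inner(conn):
            with conn:
                return conn.executemany(sql, params_seq)

        return await self.execute_write_fn(_inner, block=block)


async def demo():
    db = Database()
    await db.execute_write_script("create table docs (id integer, title text)")
    await db.execute_write_many(
        "insert into docs values (?, ?)", [(1, "one"), (2, "two")]
    )
    await db.execute_write("insert into docs values (?, ?)", (3, "three"))


asyncio.run(demo())
```

Splitting the methods keeps each signature honest: only `execute_write()` accepts parameters, which is exactly the distinction the `executescript=True` flag in #1569 had to enforce with an assert.
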
{"id": 1083726550, "node_id": "I_kwDOBm6k_c5AmFrW", "number": 1568, "title": "Trace should show queries on the write connection too", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 2, "created_at": "2021-12-18T02:34:12Z", "updated_at": "2022-01-13T22:27:23Z", "closed_at": "2021-12-18T02:42:34Z", "author_association": "OWNER", "pull_request": null, "body": "> Here's why - `trace` only applies to read, not write SQL operations: https://github.com/simonw/datasette/blob/7c8f8aa209e4ba7bf83976f8495d67c28fbfca24/datasette/database.py#L209-L211\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1555#issuecomment-997128508_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1568/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1083895395, "node_id": "I_kwDOBm6k_c5Amu5j", "number": 1569, "title": "db.execute_write(..., executescript=True) parameter", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 2, "created_at": "2021-12-18T18:20:47Z", "updated_at": "2022-01-13T22:27:27Z", "closed_at": "2021-12-18T18:34:18Z", "author_association": "OWNER", "pull_request": null, "body": "> Idea: teach `execute_write` to accept an optional `executescript=True` parameter, like this:\r\n```diff\r\ndiff --git a/datasette/database.py b/datasette/database.py\r\nindex 468e936..1a424f5 100644\r\n--- a/datasette/database.py\r\n+++ b/datasette/database.py\r\n@@ -94,10 +94,14 @@ class Database:\r\n f\"file:{self.path}{qs}\", uri=True, check_same_thread=False\r\n )\r\n \r\n- async def execute_write(self, sql, params=None, block=False):\r\n+ async def execute_write(self, sql, params=None, executescript=False, block=False):\r\n+ assert not (executescript and params), \"Cannot use params with executescript=True\"\r\n def _inner(conn):\r\n with conn:\r\n- return conn.execute(sql, params or [])\r\n+ if executescript:\r\n+ return conn.executescript(sql)\r\n+ else:\r\n+ return conn.execute(sql, params or [])\r\n \r\n with trace(\"sql\", database=self.name, sql=sql.strip(), params=params):\r\n results = await self.execute_write_fn(_inner, block=block)\r\n```\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1555#issuecomment-997248364_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1569/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1084185188, "node_id": "I_kwDOBm6k_c5An1pk", "number": 1573, "title": "Make trace() a documented internal API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-19T20:32:56Z", "updated_at": "2021-12-19T21:13:13Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This 
should be documented so plugin authors can use it to add their own custom traces: https://github.com/simonw/datasette/blob/8f311d6c1d9f73f4ec643009767749c17b5ca5dd/datasette/tracer.py#L28-L52\r\n\r\nIncluding the new `kwargs` pattern I added in #1571: https://github.com/simonw/datasette/blob/f65817000fdf87ce8a0c23edc40784ebe33b5842/datasette/database.py#L128-L132", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1573/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1084007781, "node_id": "I_kwDOBm6k_c5AnKVl", "number": 1572, "title": "\"Query took\" should be \"Queries took\"", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 0, "created_at": "2021-12-19T04:03:00Z", "updated_at": "2022-01-13T22:27:43Z", "closed_at": "2021-12-19T04:03:24Z", "author_association": "OWNER", "pull_request": null, "body": "This is misleading, since usually more than one query has been executed:\r\n\r\n![CleanShot 2021-12-18 at 20 02 35@2x](https://user-images.githubusercontent.com/9599/146663457-9c4c2900-5cc0-4650-a565-bb1ff0b8a725.png)\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1572/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1084257842, "node_id": "I_kwDOBm6k_c5AoHYy", "number": 1575, "title": "__call__() got an unexpected keyword argument 'specname'", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-20T01:24:04Z", "updated_at": "2021-12-20T01:48:03Z", "closed_at": "2021-12-20T01:47:57Z", "author_association": "OWNER", "pull_request": null, "body": "> I've installed the alpha version but get an error when starting up Datasette:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/tim/.pyenv/versions/stock-exchange/bin/datasette\", line 5, in <module>\r\n from datasette.cli import cli\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/cli.py\", line 15, in <module>\r\n from .app import Datasette, DEFAULT_SETTINGS, SETTINGS, SQLITE_LIMIT_ATTACHED, pm\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/app.py\", line 31, in <module>\r\n from .views.database import DatabaseDownload, DatabaseView\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/views/database.py\", line 25, in <module>\r\n from datasette.plugins import pm\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/plugins.py\", line 29, in <module>\r\n mod = importlib.import_module(plugin)\r\n File \"/Users/tim/.pyenv/versions/3.8.5/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File 
\"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/filters.py\", line 9, in <module>\r\n @hookimpl(specname=\"filters_from_request\")\r\nTypeError: __call__() got an unexpected keyword argument 'specname'\r\n```\r\n\r\n_Originally posted by @wragge in https://github.com/simonw/datasette/issues/1547#issuecomment-997511968_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1575/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1087181951, "node_id": "I_kwDOBm6k_c5AzRR_", "number": 1576, "title": "Traces should include SQL executed by subtasks created with `asyncio.gather`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 12, "created_at": "2021-12-22T20:52:02Z", "updated_at": "2022-02-05T05:21:35Z", "closed_at": "2022-02-05T05:19:53Z", "author_association": "OWNER", "pull_request": null, "body": "I tried running some parallel SQL queries using `asyncio.gather()` but the SQL that was executed didn't show up in the trace rendered by https://datasette.io/plugins/datasette-pretty-traces\r\n\r\nI realized that was because traces are keyed against the current task ID, which changes when a sub-task is run using `asyncio.gather` or similar.\r\n\r\nThe faceting and suggest faceting queries are missing from this trace:\r\n\r\n![image](https://user-images.githubusercontent.com/9599/147153855-2d611f07-922a-4d18-9e6e-4be89e010dc4.png)\r\n\r\n> The reason they aren't showing up in the traces is that traces are stored just for the currently executing `asyncio` task ID: https://github.com/simonw/datasette/blob/ace86566b28280091b3844cf5fbecd20158e9004/datasette/tracer.py#L13-L25\r\n>\r\n> This is so traces for other incoming requests don't end up mixed together. 
But there's no current mechanism to track async tasks that are effectively \"child tasks\" of the current request, and hence should be tracked the same.\r\n>\r\n> https://stackoverflow.com/a/69349501/6083 suggests that you pass the task ID as an argument to the child tasks that are executed using `asyncio.gather()` to work around this kind of problem.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1518#issuecomment-999870993_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1576/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1104691662, "node_id": "I_kwDOBm6k_c5B2EHO", "number": 1600, "title": "plugins --all example should use cog", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-15T11:47:49Z", "updated_at": "2022-01-20T05:06:21Z", "closed_at": "2022-01-20T05:04:16Z", "author_association": "OWNER", "pull_request": null, "body": "The example output for `datasette plugins --all` on this page has gotten out of date: https://docs.datasette.io/en/stable/plugins.html#seeing-what-plugins-are-installed", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1600/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1105916061, "node_id": "I_kwDOBm6k_c5B6vCd", "number": 1601, "title": "Add KNN and data_licenses to hidden tables list", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-01-17T14:19:57Z", "updated_at": "2022-01-20T21:29:44Z", "closed_at": "2022-01-20T04:38:54Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "They're generated by SpatiaLite and not very interesting in most cases.\r\n\r\n[screenshot of the SpatiaLite-generated tables]\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1601/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1090810196, "node_id": "I_kwDOBm6k_c5BBHFU", "number": 1583, "title": "consider adding deletion step of cloudbuild artifacts to gcloud publish", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-30T00:33:23Z", "updated_at": "2021-12-30T00:34:16Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "right now, as part of the publish process, images and other artifacts are stored to gcloud's cloud storage before being deployed to cloudrun.\r\n\r\nafter successfully deploying, it would be nice if the script deleted these artifacts. 
otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1583/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1091257796, "node_id": "I_kwDOBm6k_c5BC0XE", "number": 1584, "title": "give error with recursive sql", "user": {"value": 58088336, "label": "tunguyenatwork"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-12-30T18:53:16Z", "updated_at": "2021-12-30T18:53:16Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I got an error \"near \"WITH\": syntax error\" after I upgraded to version 0.59 from 0.52.4. This error is related to recursive SQL. It works great on the previous version but fails after the upgrade. Below is an example of the SQL:\r\n\r\n```sql\r\nWITH RECURSIVE manager_of(position, super_position) AS (\r\n  SELECT position, case ifnull(INDIRECT_SUPER_POSITION,'') when '' then super_position else INDIRECT_SUPER_POSITION end as SUPER_POSITION\r\n  FROM position\r\n  where super_position<>'SGV000000001' and super_position!='' and position <> super_position\r\n),\r\nchain_manager_of_position(position, level) AS (\r\n  SELECT super_position, 1 as level\r\n  FROM manager_of\r\n  WHERE super_position!='' and (position=:pos or position in (Select position from employee where employee=:ein))\r\n  UNION ALL\r\n  SELECT super_position, level+1 as level\r\n  FROM manager_of JOIN chain_manager_of_position USING(position)\r\n)\r\nSELECT * FROM chain_manager_of_position\r\nleft join employee using(position)\r\nwhere employee is not NULL\r\norder by level\r\nlimit 1\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1584/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1091838742, "node_id": "I_kwDOBm6k_c5BFCMW", "number": 1585, "title": "Firebase caching for `publish cloudrun`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-01T15:38:15Z", "updated_at": "2022-01-01T15:40:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://gist.github.com/steren/03d3e58c58c9a53fd49bb78f58541872 has a recipe for this, via https://twitter.com/steren/status/1477038411114446848\r\n\r\nCould this enable easier vanity URLs of the format `https://$project_id.web.app/`? 
How about CDN caching?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1585/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1096536240, "node_id": "I_kwDOBm6k_c5BW9Cw", "number": 1586, "title": "run analyze on all databases as part of start up or publishing", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-07T17:52:34Z", "updated_at": "2022-02-02T07:13:37Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Running `analyze;` lets SQLite's query planner make *much* better use of any indices.\r\n\r\nIt might be nice if the analyze was run as part of the startup of \"serve\" or \"publish\".", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1586/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1097040427, "node_id": "I_kwDOBm6k_c5BY4Ir", "number": 1587, "title": "Add `sqlite_stat1`(-4) tables to hidden table list", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-01-08T21:28:20Z", "updated_at": "2022-01-20T04:12:59Z", "closed_at": "2022-01-20T04:12:59Z", "author_association": "OWNER", "pull_request": null, "body": "> Running `ANALYZE` creates a new visible table called `sqlite_stat1`: https://www.sqlite.org/fileformat.html#the_sqlite_stat1_table\r\n>\r\n> This should be added to the default list of hidden tables in Datasette.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1587/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1097101917, "node_id": "I_kwDOBm6k_c5BZHJd", "number": 1588, "title": "`explain query plan select` is too strict about whitespace", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 3, "created_at": "2022-01-09T04:22:42Z", "updated_at": "2022-01-13T22:28:19Z", "closed_at": "2022-01-13T20:35:05Z", "author_association": "OWNER", "pull_request": null, "body": "`explain query plan select * from facetable` is allowed: https://latest.datasette.io/fixtures?sql=explain+query+plan+select+*+from+facetable\r\n\r\nBut... 
`explain query plan  select * from facetable` (with two spaces before the `select`) returns a \"Statement must be a SELECT\" error: https://latest.datasette.io/fixtures?sql=explain+query+plan++select+*+from+facetable", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1588/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1099723916, "node_id": "I_kwDOBm6k_c5BjHSM", "number": 1590, "title": "Table+query JSON and CSV links broken when using `base_url` setting", "user": {"value": 1001306, "label": "eelkevdbos"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 11, "created_at": "2022-01-11T23:46:39Z", "updated_at": "2022-01-14T01:16:34Z", "closed_at": "2022-01-14T01:16:08Z", "author_association": "NONE", "pull_request": null, "body": "Datasette appends the prefix found in the `base_url` setting twice if a `base_url` is set.\r\n\r\nIn the following ASGI example, I'm hosting a custom Datasette instance:\r\n\r\n```python\r\n# asgi.py\r\nimport pathlib\r\nfrom asgi_cors import asgi_cors\r\nfrom channels.routing import URLRouter\r\nfrom django.urls import re_path\r\nfrom datasette.app import Datasette\r\n\r\ndatasette_ = Datasette(\r\n files=[],\r\n settings={\r\n \"base_url\": \"/datasettes/\",\r\n \"plugins\": {}\r\n },\r\n config_dir=pathlib.Path('.'),\r\n)\r\napplication = URLRouter([\r\n re_path(r\"^datasettes/.*\", asgi_cors(datasette_.app(), allow_all=True)),\r\n])\r\n```\r\n\r\nRunning it with:\r\n```shell\r\n$ daphne -p 8002 asgi:application\r\n```\r\n\r\nUsing a simple query on the `_memory` table: \r\n```sql\r\nselect sqlite_version()\r\n```\r\n\r\nhttp://localhost:8002/datasettes/_memory?sql=select+sqlite_version%28%29\r\n\r\nIt renders the following upon inspection:\r\n![image](https://user-images.githubusercontent.com/1001306/149038851-aa842950-126a-467c-9a86-fae13bce6221.png)\r\n\r\nI am using Datasette version `0.59.4`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1590/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1100015398, "node_id": "I_kwDOBm6k_c5BkOcm", "number": 1591, "title": "Maybe let plugins define custom serve options?", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-01-12T08:18:47Z", "updated_at": "2022-01-15T11:56:59Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://twitter.com/psychemedia/status/1481171650934714370\r\n\r\n> can extensions be passed their own cli args? 
eg `--ext-tiddlywiki-dbname tiddlywiki2.sqlite` ?\r\n\r\nI've thought something like this might be useful for other plugins in the past, too.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1591/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1100499619, "node_id": "I_kwDOBm6k_c5BmEqj", "number": 1592, "title": "Row pages should show links to foreign keys", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-12T15:50:20Z", "updated_at": "2022-01-12T15:52:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Refs #1518 refactor.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1592/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1102568047, "node_id": "I_kwDOBm6k_c5Bt9pv", "number": 1596, "title": "Documentation page warning of changes coming in 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-01-13T23:26:04Z", "updated_at": "2022-01-13T23:26:04Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I should start this relatively soon.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1596/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1102359726, "node_id": "I_kwDOBm6k_c5BtKyu", "number": 1594, "title": "Add a CLI reference page to the docs, inspired by sqlite-utils", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 3, "created_at": "2022-01-13T20:55:08Z", "updated_at": "2022-01-13T22:28:22Z", "closed_at": "2022-01-13T21:38:48Z", "author_association": "OWNER", "pull_request": null, "body": "Thought of this while posting this comment: https://github.com/simonw/datasette/issues/1591#issuecomment-1012506595\r\n\r\nI added https://sqlite-utils.datasette.io/en/stable/cli-reference.html to `sqlite-utils` in https://github.com/simonw/sqlite-utils/issues/383 and I _really_ like it - it's a page showing the `--help` output of every CLI command for that tool.\r\n\r\nIt's maintained using `cog`. 
One of the benefits is that I get a free commit history of changes to `--help` at https://github.com/simonw/sqlite-utils/commits/main/docs/cli-reference.rst", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1594/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1102484126, "node_id": "I_kwDOBm6k_c5BtpKe", "number": 1595, "title": "Release notes for 0.60", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 4, "created_at": "2022-01-13T22:23:14Z", "updated_at": "2022-01-14T01:37:39Z", "closed_at": "2022-01-14T01:37:39Z", "author_association": "OWNER", "pull_request": null, "body": null, "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1595/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1102612922, "node_id": "I_kwDOBm6k_c5BuIm6", "number": 1597, "title": "\"datasette inspect\" has no help summary", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-14T00:02:16Z", "updated_at": "2022-01-14T00:07:36Z", "closed_at": "2022-01-14T00:07:36Z", "author_association": "OWNER", "pull_request": null, "body": "Made obvious by the new CLI reference page added in #1594. https://docs.datasette.io/en/latest/cli-reference.html#datasette-inspect-help\r\n```\r\nCommands:\r\n serve* Serve up specified SQLite database files with a web UI\r\n inspect\r\n install Install Python packages - e.g.\r\n```\r\n```\r\nUsage: datasette inspect [OPTIONS] [FILES]...\r\n\r\nOptions:\r\n --inspect-file TEXT\r\n --load-extension TEXT Path to a SQLite extension to load\r\n --help Show this message and exit.\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1597/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}
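
The blank summary for `inspect` in #1597 comes down to how Click builds that `Commands:` listing: the short help shown next to each command name is taken from the first line of the command function's docstring, so a command without a docstring gets an empty cell. A minimal sketch of the mechanism; the stub below is illustrative, not Datasette's actual code:

```python
import click


@click.group()
def cli():
    "Illustrative stub of a Click command group"


@cli.command()
@click.argument("files", nargs=-1)
def inspect(files):
    "Generate JSON summary of provided database files"
    # Click displays this docstring's first line next to the command
    # name in the `Commands:` section of `--help` output - which is
    # exactly what the bare `inspect` row above was missing.


if __name__ == "__main__":
    cli()
```

Running `python cli.py --help` against this stub shows the summary in place, so the fix for an undocumented command is simply to give it a docstring.
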