{"id": 1054244712, "node_id": "I_kwDOBm6k_c4-1n9o", "number": 1510, "title": "Datasette 1.0 documented template context (maybe via API docs)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:23:58Z", "updated_at": "2023-06-28T02:05:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1054243511, "node_id": "I_kwDOBm6k_c4-1nq3", "number": 1509, "title": "Datasette 1.0 JSON API (and documentation)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:22:45Z", "updated_at": "2022-03-15T20:38:56Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The new JSON API in a stable, documented form.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1509/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1054246919, "node_id": "I_kwDOBm6k_c4-1ogH", "number": 1511, "title": "Review plugin hooks for Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2021-11-15T23:26:05Z", "updated_at": "2021-11-16T01:20:14Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I need to perform a detailed review of the plugin interface - especially the plugin hooks like [register_facet_classes()](https://docs.datasette.io/en/stable/plugin_hooks.html#register-facet-classes) which I don't yet have complete confidence in.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1511/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1056746091, "node_id": "I_kwDOBm6k_c4-_Kpr", "number": 1515, "title": "Handle foreign keys that point to a non-existent table", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-17T23:40:13Z", "updated_at": "2021-11-18T01:31:56Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spotted in https://github.com/simonw/datasette-graphql/issues/79\r\n\r\nDemo: https://datasette-graphql-demo.datasette.io/fixtures/bad_foreign_key\r\n\r\nThe 
foreign key links to a 404 page.\r\n\r\n![B87009C7-CFCA-4DF9-8FBA-FA3E6CA28EC2](https://user-images.githubusercontent.com/9599/142334788-4d1a4acd-bc87-4426-b333-d46b221afcec.jpeg)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1515/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1049946823, "node_id": "I_kwDOBm6k_c4-lOrH", "number": 1502, "title": "Full-text search: No support to unary \"-\" operator", "user": {"value": 516827, "label": "gustavorps"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-10T15:11:19Z", "updated_at": "2021-11-10T15:11:19Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Reference: https://www.sqlite.org/fts3.html#set_operations_using_the_standard_query_syntax\r\n\r\nTest: https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort+-freedman&_sort=rowid", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1502/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1051277222, "node_id": "I_kwDOBm6k_c4-qTem", "number": 1504, "title": "Link to ?_size=max at bottom of table page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-11T19:06:33Z", "updated_at": "2021-11-11T19:06:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This can have text such as \"Show 1,000 rows per page\", based on the max size limit setting. 
Would make it easier for people to see more data at once without having to know how to hack the URL, similar to the `...` for facet sizes I added in #1337.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1504/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1052247023, "node_id": "I_kwDOBm6k_c4-uAPv", "number": 1505, "title": "Datasette should have an option to output CSV with semicolons", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-11-12T18:02:21Z", "updated_at": "2021-11-16T11:40:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": null, "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1505/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1006016302, "node_id": "I_kwDOBm6k_c479pcu", "number": 1477, "title": "Consider adding request to the documented default template context", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-24T02:34:09Z", "updated_at": "2021-09-24T02:34:09Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I made a plugin for this today but I think perhaps it should be a default thing instead: https://datasette.io/plugins/datasette-template-request", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1477/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 999902754, "node_id": "I_kwDOBm6k_c47mU4i", "number": 1473, "title": "base logo link visits `undefined` rather than href url", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-09-18T04:17:04Z", "updated_at": "2021-09-19T00:45:32Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I have two connected sites:\r\nhttp://www.SaferOrToxic.org \r\n(a Hugo website)\r\nand:\r\nhttp://disinfectants.SaferOrToxic.org/disinfectants/listN\r\n(a datasette table page)\r\n\r\nThe latter is linked as \"The List\" in the former's menu.\r\n(I'd love a prettier URL, but that's what I've got.)\r\n\r\nOn:\r\nhttp://disinfectants.SaferOrToxic.org/disinfectants/listN\r\n... all the other menu links should point back to:\r\nhttps://www.SaferOrToxic.org \r\nAnd they do!\r\n\r\nBut the logo, for some reason--though it has an href pointing to:\r\nhttps://www.SaferOrToxic.org\r\nKeeps going to this instead:\r\nhttps://disinfectants.saferortoxic.org/disinfectants/undefined\r\n\r\nWhat is causing that? 
How can I fix it?\r\n\r\nIn #1284 back in March, I was doing battle with the index.html template, in a still unresolved issue. (I wanted only a single table page at the root.)\r\n\r\nBut I thought, well, if I can't resolve that, at least I could just point the main website to the datasette page (\"The List,\") and then have the List point back to the home website.\r\n\r\nThe menu hrefs to https://www.SaferOrToxic.org work just fine, exactly as they should, from the datasette page. Even the Home link works properly.\r\n\r\nBut the logo link keeps rewriting to: https://disinfectants.saferortoxic.org/disinfectants/undefined\r\n\r\nThis is the HTML:\r\n```\r\n\"Logo:\r\n```\r\n\r\n\r\nIs this somehow related to cloudflare?\r\nOr something in the datasette code?\r\n\r\nI'm starting to think it's a cloudflare issue.\r\n\"Screen\r\n\r\nCan I at least rule out it being a datasette issue?\r\n\r\nMy repository is here:\r\nhttps://github.com/mroswell/list-N\r\n\r\n(BTW, I couldn't figure out how to reference a local image, either, on the datasette side, which is why I'm using the image from the www home page.)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1473/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1006781949, "node_id": "I_kwDOBm6k_c48AkX9", "number": 1478, "title": "Documentation Request: Feature alternative ID instead of default ID", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-24T19:56:13Z", "updated_at": "2021-09-25T16:18:54Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "My data already has an ID that comes from a federal agency.\r\nWould love to have documentation on how to modify the template to:\r\n- Remove the generated ID from the table\r\n- Link the federal ID to the detail page\r\n- and to ensure that the JSON file uses that as the ID. I'd be happy to include the database ID in the export, but not as a key.\r\n\r\nI don't want to remove the ID from the database, though, because my experience with the federal agency is that data often has anomalies. I don't want all hell to break loose if they end up applying the same ID to multiple rows (which they haven't done yet). 
I just don't want it to display in the table or the data exports.\r\n\r\nPerhaps this isn't a template issue, maybe more of a db manipulation...\r\n\r\nMargie", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1478/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1010112818, "node_id": "I_kwDOBm6k_c48NRky", "number": 1479, "title": "Win32 \"used by another process\" error with datasette publish", "user": {"value": 76450761, "label": "kirajano"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2021-09-28T19:12:00Z", "updated_at": "2023-09-07T02:14:16Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Unfortunately I was not successful in deploying to fly.io. Please see the details below of the three scenarios that I tried. I am also new to datasette.\r\n\r\nFailed to deploy. Attaching logs:\r\n1. Tried with an app created via `flyctl apps create frosty-fog-8565` and then ran `datasette publish fly covid.db --app frosty-fog-8565` \r\n``` \r\nDeploying frosty-fog-8565\r\n==> Validating app configuration\r\n--> Validating app configuration done\r\nServices\r\nTCP 80/443 \u21e2 8080\r\n\r\nError error connecting to docker: An unknown error occured.\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\grott\\Anaconda3\\Scripts\\datasette.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette_publish_fly\\__init__.py\", line 156, in fly\r\n \"--remote-only\",\r\n File \"c:\\users\\grott\\anaconda3\\lib\\contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\utils\\__init__.py\", line 451, in temporary_docker_directory\r\n tmp.cleanup()\r\n File \"c:\\users\\grott\\anaconda3\\lib\\tempfile.py\", line 811, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 516, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 395, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File 
\"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 404, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 402, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\grott\\\\AppData\\\\Local\\\\Temp\\\\tmpgcm8cz66\\\\frosty-fog-8565'\r\n```\r\n\r\n2. Tried also with an app that gets autogenerate when running `flyctl launch`. This also generates the .toml file. Ran then `datasette publish fly covid.db --app dark-feather-168` **but different error now**\r\n```Deploying dark-feather-168\r\n==> Validating app configuration\r\n\r\nError not possible to validate configuration: server returned Post \"https://api.fly.io/graphql\": unexpected EOF\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 193, in _run_module_as_main \r\n \"__main__\", mod_spec)\r\n exec(code, run_globals)\r\n File \"C:\\Users\\grott\\Anaconda3\\Scripts\\datasette.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette_publish_fly\\__init__.py\", line 156, in fly\r\n \"--remote-only\",\r\n File \"c:\\users\\grott\\anaconda3\\lib\\contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\utils\\__init__.py\", line 451, in temporary_docker_directory\r\n tmp.cleanup()\r\n File \"c:\\users\\grott\\anaconda3\\lib\\tempfile.py\", line 811, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 516, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 395, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 404, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 402, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\grott\\\\AppData\\\\Local\\\\Temp\\\\tmpnoyewcre\\\\dark-feather-168'\r\n```\r\n\r\nThese are also the contents of the generated **.toml file** in 2 scenario:\r\n\r\n```\r\n# fly.toml file generated for dark-feather-168 on 2021-09-28T20:35:44+02:00\r\n\r\napp = \"dark-feather-168\"\r\n\r\nkill_signal = \"SIGINT\"\r\nkill_timeout = 5\r\nprocesses = []\r\n\r\n[env]\r\n\r\n[experimental]\r\n allowed_public_ports = []\r\n auto_rollback = 
true\r\n\r\n[[services]]\r\n http_checks = []\r\n internal_port = 8080\r\n processes = [\"app\"]\r\n protocol = \"tcp\"\r\n script_checks = []\r\n\r\n [services.concurrency]\r\n hard_limit = 25\r\n soft_limit = 20\r\n type = \"connections\"\r\n\r\n [[services.ports]]\r\n handlers = [\"http\"]\r\n port = 80\r\n\r\n [[services.ports]]\r\n handlers = [\"tls\", \"http\"]\r\n port = 443\r\n\r\n [[services.tcp_checks]]\r\n grace_period = \"1s\"\r\n interval = \"15s\"\r\n restart_limit = 6\r\n timeout = \"2s\"\r\n```\r\n\r\n3. But also trying `datasette package covid.db` to create a local DOCKERFILE to later try to push it via `flyctl deploy` fails as well.\r\n\r\n```[+] Building 147.3s (11/11) FINISHED\r\n => [internal] load build definition from Dockerfile 0.2s \r\n => => transferring dockerfile: 396B 0.0s \r\n => [internal] load .dockerignore 0.1s \r\n => => transferring context: 2B 0.0s \r\n => [internal] load metadata for docker.io/library/python:3.8 4.7s \r\n => [auth] library/python:pull token for registry-1.docker.io 0.0s \r\n => [internal] load build context 0.1s \r\n => => transferring context: 82.37kB 0.0s \r\n => [1/5] FROM docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3 108.3s \r\n => => resolve docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5 0.0s \r\n => => sha256:56182bcdf4d4283aa1f46944b4ef7ac881e28b4d5526720a4e9ba03a4730846a 2.22kB / 2.22kB 0.0s \r\n => => sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 54.93MB / 54.93MB 21.6s \r\n => => sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 10.87MB / 10.87MB 15.5s \r\n => => sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5c5b6 1.86kB / 1.86kB 0.0s \r\n => => sha256:ff08f08727e50193dcf499afc30594c47e70cc96f6fcfd1a01240524624264d0 8.65kB / 8.65kB 0.0s \r\n => => sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 5.15MB / 5.15MB 6.4s \r\n => => sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 54.57MB / 54.57MB 37.7s \r\n => => sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 196.45MB / 196.45MB 82.3s \r\n => => sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 6.29MB / 6.29MB 26.6s \r\n => => extracting sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 5.4s \r\n => => sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 16.81MB / 16.81MB 40.5s \r\n => => extracting sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 0.6s \r\n => => extracting sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 0.6s \r\n => => sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 232B / 232B 40.1s \r\n => => extracting sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 5.8s \r\n => => sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 2.35MB / 2.35MB 41.8s \r\n => => extracting sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 18.8s \r\n => => extracting sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 1.2s \r\n => => extracting sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 2.9s \r\n => => extracting sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 0.0s \r\n => => extracting sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 0.8s \r\n => [2/5] COPY . 
/app 2.3s \r\n => [3/5] WORKDIR /app 0.2s \r\n => [4/5] RUN pip install -U datasette 26.9s \r\n => [5/5] RUN datasette inspect covid.db --inspect-file inspect-data.json 3.1s\r\n => exporting to image 1.2s \r\n => => exporting layers 1.2s \r\n => => writing image sha256:b5db0c205cd3454c21fbb00ecf6043f261540bcf91c2dfc36d418f1a23a75d7a 0.0s\r\n\r\nUse 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them\r\nTraceback (most recent call last):\r\n \"__main__\", mod_spec)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\grott\\Anaconda3\\Scripts\\datasette.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\cli.py\", line 283, in package\r\n call(args)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\site-packages\\datasette\\utils\\__init__.py\", line 451, in temporary_docker_directory\r\n tmp.cleanup()\r\n File \"c:\\users\\grott\\anaconda3\\lib\\tempfile.py\", line 811, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 516, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 395, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 404, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\users\\grott\\anaconda3\\lib\\shutil.py\", line 402, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\grott\\\\AppData\\\\Local\\\\Temp\\\\tmpkb27qid3\\\\datasette'```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1479/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1015646369, "node_id": "I_kwDOBm6k_c48iYih", "number": 1480, "title": "Exceeding Cloud Run memory limits when deploying a 4.8G database", "user": {"value": 110420, "label": "ghing"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2021-10-04T21:20:24Z", "updated_at": "2022-10-07T04:39:10Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When I try to deploy a 4.8G SQLite database to Google Cloud Run, I get this error message:\r\n\r\n> Memory limit of 8192M exceeded with 8826M used. 
Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits\r\n\r\nUnfortunately, the maximum amount of memory that can be allocated to an instance is 8192M.\r\n\r\nNaively profiling the memory usage of running Datasette with this database locally on my MacBook shows the following memory usage (using Activity Monitor) when I just start up Datasette locally:\r\n\r\n- Real Memory Size: 70.6 MB\r\n- Virtual Memory Size: 4.51 GB\r\n- Shared Memory Size: 2.5 MB\r\n- Private Memory Size: 57.4 MB\r\n\r\nI'm trying to understand if there's a query or other operation that gets run during container deployment that causes memory use to be so large and if this can be avoided somehow.\r\n\r\nThis is somewhat related to #1082, but on a different platform, so I decided to open a new issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1480/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1072106103, "node_id": "I_kwDOBm6k_c4_5wp3", "number": 1542, "title": "feature request: order and dependency of plugins (that use js)", "user": {"value": 33631, "label": "fs111"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-06T12:40:45Z", "updated_at": "2021-12-15T17:47:08Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have been playing with datasette for the last couple of weeks and it is great! I am a big fan of `datasette-cluster-map` and wanted to enhance it a bit with what I would call a sub-plugin. I basically want to add more controls to the map that cluster map provides. I have been looking into its code and how the plugin management works, but it seems what I am trying to do is not doable without hacks in js.\r\n\r\nBasically what I would like to have is a way to say: load my plugin after the plugins I depend on have been loaded and rendered. There seems to be no prior art where plugins have these dependencies on the js level, so I was wondering if that could be added or, if it exists, how to do it.\r\n\r\nBasically what I want to do is:\r\n\r\nmy-awesome-plugin has a dependency on datasette-cluster-map. Whenever datasette-cluster-map has finished rendering on page load, call my plugin, but no earlier. 
To make that work, datasette probably needs some total order in which plugins are loaded and initialized.\r\n\r\nSince I am new to datasette, I may be missing something obvious, so please let me know if the above makes no sense.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1542/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1058072543, "node_id": "I_kwDOBm6k_c4_EOff", "number": 1518, "title": "Complete refactor of TableView and table.html template", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 45, "created_at": "2021-11-19T02:55:16Z", "updated_at": "2022-03-15T18:35:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split from #878. The current `TableView` class is by far the most complex part of Datasette, and the most difficult to work on: https://github.com/simonw/datasette/blob/0.59.2/datasette/views/table.py\r\n\r\nIn #878 I started exploring a new pattern for building views. In doing so it became clear that `TableView` is the first beast that I need to slay - if I can refactor that into something neat the pattern for building other views will emerge as a natural consequence.\r\n\r\nI've been trying to build this as a `register_routes()` plugin, as originally suggested in #870 - though unfortunately it looks like those plugins can't replace existing Datasette default views at the moment, see #1517. [UPDATE: I was wrong about this, plugins can over-ride default views just fine]\r\n\r\nI also know that I want to have a fully documented template context for `table.html` as a major step on the way to Datasette 1.0, see #1510.\r\n\r\nAll of this adds up to the `TableView` refactor being a major project that will unblock a whole flurry of other things - so I'm going to work on that in this separate issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1518/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1058803238, "node_id": "I_kwDOBm6k_c4_HA4m", "number": 1520, "title": "Pattern for avoiding accidental URL over-rides", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-11-19T18:28:05Z", "updated_at": "2021-11-19T18:29:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Following #1517 I'm experimenting with a plugin that does this:\r\n```python\r\n@hookimpl\r\ndef register_routes():\r\n    return [\r\n        (r\"/(?P[^/]+)/(?P[^/]+?)$\", Table().view),\r\n    ]\r\n```\r\nThis is supposed to replace the default table page with new code... but there's a problem: `/-/versions` on that instance now returns 404 `Database '-' does not exist`!\r\n\r\nNeed to figure out a pattern to avoid that happening. 
Plugins get to add their routes before Datasette's default routes, which is why this is happening here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1520/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1059209412, "node_id": "I_kwDOBm6k_c4_IkDE", "number": 1523, "title": "Come up with a more elegant solution for base_url than ds.urls.path()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-20T19:05:22Z", "updated_at": "2021-11-20T19:05:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "While fixing #1519 I added a lot of ugly code that looks like this: https://github.com/simonw/datasette/blob/08947fa76433d18988aa1ee1d929bd8320c75fe2/datasette/facets.py#L228-L230\r\n\r\nSee these two commits in particular: fe687fd0207c4c56c4778d3e92e3505fc4b18172 and 08947fa76433d18988aa1ee1d929bd8320c75fe2\r\n\r\nIt would be great to come up with a less verbose and error-prone way of handling this problem.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1523/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1060631257, "node_id": "I_kwDOBm6k_c4_N_LZ", "number": 1528, "title": "Add new `\"sql_file\"` key to Canned Queries in metadata?", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-11-22T21:58:01Z", "updated_at": "2022-06-10T03:23:08Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Currently for canned queries, you have to inline SQL in your `metadata.yaml` like so:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql: |-\r\n select neighborhood, facet_cities.name, state\r\n from facetable\r\n join facet_cities on facetable.city_id = facet_cities.id\r\n where neighborhood like '%' || :text || '%'\r\n order by neighborhood\r\n title: Search neighborhoods\r\n```\r\n\r\nThis works fine, but for a few reasons, I usually have my canned queries already written in separate `.sql` files. I'd like to instead re-use those instead of re-writing it. \r\n\r\nSo, I'd like to see a new `\"sql_file\"` key that works like so:\r\n\r\n`metadata.yaml`:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql_file: neighborhood_search.sql\r\n title: Search neighborhoods\r\n```\r\n`neighborhood_search.sql`:\r\n```sql\r\nselect neighborhood, facet_cities.name, state\r\nfrom facetable\r\njoin facet_cities on facetable.city_id = facet_cities.id\r\nwhere neighborhood like '%' || :text || '%'\r\norder by neighborhood\r\n```\r\n\r\nBoth of these would work in the exact same way, where Datasette would instead open + include `neighborhood_search.sql` on startup. 
\r\n\r\n\r\nA few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml:\r\n\r\n- Keeping SQL in standalone SQL files means syntax highlighting and other text editor integrations in my code\r\n- Multiline strings in yaml, while functional, are a tad cumbersome and are hard to edit\r\n- Works well with other tools (can pipe `.sql` files into the `sqlite3` CLI, or use with other SQLite clients easier)\r\n- Typically my canned queries are quite long compared to everything else in my metadata.yaml, so I'd love to separate it where possible\r\n\r\nLet me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1528/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1065429936, "node_id": "I_kwDOBm6k_c4_gSuw", "number": 1532, "title": "Use datasette-table Web Component to guide the design of the JSON API for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2021-11-28T20:37:18Z", "updated_at": "2022-03-16T20:13:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I realized that one of the reasons I'm having trouble committing to nailing down the JSON API for 1.0 is that I don't use it much myself - I use the `?_shape=array` one quite often, but I don't have any projects that are using the default, more fully-featured API.\r\n\r\nAs an experiment I built a Web Component for embedding Datasette tables on pages - https://github.com/simonw/datasette-table - and I think it's actually going to be a really useful tool for helping me dog food the v1.0 API design.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1532/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1067775061, "node_id": "I_kwDOBm6k_c4_pPRV", "number": 1539, "title": "Research PRAGMA query_only", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-11-30T23:30:24Z", "updated_at": "2021-11-30T23:30:24Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://www.sqlite.org/pragma.html#pragma_query_only\r\n\r\n> The query_only pragma prevents data changes on database files when enabled. When this pragma is enabled, any attempt to CREATE, DELETE, DROP, INSERT, or UPDATE will result in an [SQLITE_READONLY](https://www.sqlite.org/rescode.html#readonly) error. However, the database is not truly read-only. 
You can still run a [checkpoint](https://www.sqlite.org/wal.html#ckpt) or a [COMMIT](https://www.sqlite.org/lang_transaction.html) and the return value of the [sqlite3_db_readonly()](https://www.sqlite.org/c3ref/db_readonly.html) routine is not affected.\r\n\r\nWould it be worth adding this as an extra protection against accidental writes to a DB file over a read-only connection?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1539/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1068791148, "node_id": "I_kwDOBm6k_c4_tHVs", "number": 1540, "title": "Idea: hover to reveal details of linked row", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2021-12-01T19:28:07Z", "updated_at": "2021-12-09T23:38:39Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "\"fara__item_version__7_rows_where_where__item___5236\"\r\n\r\nHovering over that could work a little bit like GitHub issue links:\r\n\r\n![hover](https://user-images.githubusercontent.com/9599/144300537-9cd9e9af-ac16-42db-842f-37661bc94063.gif)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1540/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1069881276, "node_id": "I_kwDOBm6k_c4_xRe8", "number": 1541, "title": "Different default layout for row page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-02T18:56:36Z", "updated_at": "2021-12-02T18:56:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The row page displays as a table even though it only has one table row.\r\n\r\nmaybe default to the same display as the narrow page version, even for wide pages?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1541/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1955676270, "node_id": "I_kwDOBm6k_c50kUBu", "number": 2201, "title": "Discord invite link is invalid", "user": {"value": 11708906, "label": "andrewsanchez"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-21T21:50:05Z", "updated_at": "2023-10-21T21:50:05Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw and returns the following:\r\n\r\n\"CleanShot\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/2201/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1977726056, "node_id": "I_kwDOBm6k_c514bRo", "number": 2203, "title": "custom plugin not seen as sql function", "user": {"value": 7113541, "label": "LyzardKing"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-05T10:30:19Z", "updated_at": "2023-11-05T10:30:19Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi, I'm not sure if this is the right repo for this issue.\r\n\r\nI'm using datasette with the parquet (to read a duckdb), and jellyfish plugins. Both work perfectly.\r\n\r\nNow I need to create a simple plugin that uses the python rouge package and returns a similarity score (similarly to how the jellyfish plugin works).\r\nIf I create a custom plugin, even the example hello_world one, copied directly from the tutorial, I get the following error:\r\n```duckdb.duckdb.CatalogException: Catalog Error: Scalar Function with name hello_world does not exist!```\r\n\r\nSince the jellyfish plugin doesn't do anything more complex, I'm wondering if there is some other kind of issue with my setup.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2203/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978023780, "node_id": "I_kwDOBm6k_c515j9k", "number": 2205, "title": "request.post_vars() method obliterates form keys with multiple values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 3, "created_at": "2023-11-05T23:25:08Z", "updated_at": "2023-11-06T04:10:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139\r\n\r\nIn GET requests you can do `?foo=1&foo=2` - you can do the same in POST requests, but the `dict()` call here eliminates those duplicates.\r\n\r\nYou can't even try calling `post_body()` and implement your own custom parsing because of:\r\n- #2204", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2205/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978022687, "node_id": "I_kwDOBm6k_c515jsf", "number": 2204, "title": "request.post_body() can only be called once", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-05T23:22:03Z", "updated_at": "2023-11-05T23:23:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This code here:\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L127-L135\r\n\r\nIt 
consumes the messages, which means if you try to call it a second time you won't be able to get at the body.\r\n\r\nThis is efficient - we don't end up with a `request` object property with potentially megabytes of content that we never look at again - but it's inconvenient for cases like middleware or functions where we don't know if the body has been consumed yet or not.\r\n\r\nPotential solution: set `request._body` the first time it is called, and return that on subsequent calls.\r\n\r\nPotential optimization: only do this for bodies that are shorter than a certain threshold - maybe 1MB - and raise an exception if you attempt to call `post_body()` multiple times against one of those larger bodies.\r\n\r\nI'm a bit nervous about that option though, since it could result in errors that don't show up in testing but do show up in production.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2204/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1994845152, "node_id": "I_kwDOBm6k_c525uvg", "number": 2207, "title": "ModuleNotFoundError: No module named 'click_default_group", "user": {"value": 283441, "label": "honzajavorek"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-15T14:04:32Z", "updated_at": "2023-11-15T14:04:32Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "No matter what I do, I'm getting this error:\r\n\r\n```\r\n$ datasette\r\nTraceback (most recent call last):\r\n File \"/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/bin/datasette\", line 5, in \r\n from datasette.cli import cli\r\n File \"/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/lib/python3.11/site-packages/datasette/cli.py\", line 6, in \r\n from click_default_group import DefaultGroup\r\nModuleNotFoundError: No module named 'click_default_group'\r\n```\r\n\r\nI have datasette in my dependencies like this:\r\n\r\n```toml\r\n[tool.poetry.group.dev.dependencies]\r\ndatasette = {version = \"1.0a7\", allow-prereleases = true}\r\n```\r\n\r\nI had the latest regular version (not pre-release) there originally, but the result was the same:\r\n\r\n```toml\r\n[tool.poetry.group.dev.dependencies]\r\ndatasette = \"0.64.5\"\r\n```\r\n\r\nFull pyproject.toml is at https://github.com/honzajavorek/junior.guru/ Previously datasette worked for me, but I guess something had to upgrade and now I can't even launch it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2207/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1994857251, "node_id": "I_kwDOBm6k_c525xsj", "number": 2208, "title": "No suggested facets when a column named 'value' is included", "user": {"value": 198537, "label": "rgieseke"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-11-15T14:11:17Z", "updated_at": "2023-11-15T14:18:59Z", "closed_at": null, "author_association": 
"CONTRIBUTOR", "pull_request": null, "body": "When a column named 'value' is included there are no suggested facets is shown as the query uses an alias of 'value'.\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L168-L174\r\n\r\nCurrently the following is shown (from https://latest.datasette.io/fixtures/facetable)\r\n\r\n![image](https://github.com/simonw/datasette/assets/198537/a919509a-ea88-461b-b25b-8b776720c7c5)\r\n\r\nWhen I add a column named 'value' only the JSON facets are processed.\r\n\r\n![image](https://github.com/simonw/datasette/assets/198537/092bd0b3-4c20-434e-88f8-47e2b8994a1d)\r\n\r\nI think that not using aliases could be a solution (except if someone wants to use a column named `count(*)` though this seems to be unlikely). I'll open a PR with that.\r\n\r\nThere is also a TODO with a similar question in the same file. I have not looked into that yet.\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L512", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2208/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2028698018, "node_id": "I_kwDOBm6k_c5463mi", "number": 2213, "title": "feature request: gzip compression of database downloads", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-12-06T14:35:03Z", "updated_at": "2023-12-06T15:05:46Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed. 
\r\n\r\nthis is similar to #1213, but for me, i don't need datasette to compress html and json because my CDN layer does it for me, however, cloudflare at least, will not compress a mimetype of \"application\"\r\n\r\n(see list of mimetype: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2213/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2019811176, "node_id": "I_kwDOBm6k_c54Y99o", "number": 2211, "title": "Unreachable exception handlers for `sqlite3.OperationalError`", "user": {"value": 1214074, "label": "mattparmett"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-12-01T00:50:22Z", "updated_at": "2023-12-01T00:50:22Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There are several places where `sqlite3.OperationalError` is caught as part of an exception handler which catches multiple exceptions, but is then caught again immediately afterwards by a dedicated exception handler.\r\n\r\nBecause the exception will be caught by the first handler, the logic in the second handler is unreachable and will never be executed. If this is intended behavior, the second handler can be removed. If this is not intended, and the second handler should be the one that catches this exception, then `sqlite3.OperationalError` should be removed from the tuple of exceptions in the first handler.\r\n\r\nThis issue was found via a CodeQL query on the repository, and I've listed the occurrences found by the query below. There may be other instances of this issue in the code that were not surfaced by the query. I'd be happy to share the query if others would like to view or run it.\r\n\r\nOne example:\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/views/database.py#L534-L537\r\n\r\nOther instances:\r\n\r\nhttps://github.com/simonw/datasette/blob/main/datasette/views/base.py#L266-L270\r\nhttps://github.com/simonw/datasette/blob/main/datasette/views/base.py#L452-L456", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2211/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2029908157, "node_id": "I_kwDOBm6k_c54_fC9", "number": 2214, "title": "CSV export fails for some `text` foreign key references", "user": {"value": 2874, "label": "precipice"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-12-07T05:04:34Z", "updated_at": "2023-12-07T07:36:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm starting this issue without a clear reproduction in case someone else has seen this behavior, and to use the issue as a notebook for research. 
\r\n\r\nI'm using Datasette with the [SWITRS](https://iswitrs.chp.ca.gov/) data set, which is a California Highway Patrol collection of traffic incident data from the past decade or so. I receive data from them in CSV and want to work with it in Datasette, then export it to CSV for mapping in Felt.com.\r\n\r\nTheir data makes extensive use of codes for incident column data (`1` for `Monday` and so on), some of it integer codes and some of it letter/text codes. The text codes are sometimes blank or `-`. During import, I'm creating lookup tables for foreign key references to make the Datasette UI presentation of the data easier to read.\r\n\r\nIf I import the data and set up the integer foreign keys, everything works fine, but if I set up the text foreign keys, CSV export starts to fail. \r\n\r\nThe foreign key configuration is as follows:\r\n\r\n```\r\n# Some tables use integer ids, like sensible tables do. Let's import them first\r\n# since we favor them.\r\n\r\nfor TABLE in DAY_OF_WEEK CHP_SHIFT POPULATION SPECIAL_COND BEAT_TYPE COLLISION_SEVERITY\r\ndo\r\n\tsqlite-utils create-table records.db $TABLE id integer name text --pk=id\r\n\tsqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv\r\n\tsqlite-utils add-foreign-key records.db collisions $TABLE $TABLE id\r\n\tsqlite-utils create-index records.db collisions $TABLE\r\ndone\r\n\r\n# *Other* tables use letter keys, like they were raised by WOLVES. Let's put them\r\n# at the end of the import queue.\r\n\r\nfor TABLE in WEATHER_1 WEATHER_2 LOCATION_TYPE RAMP_INTERSECTION SIDE_OF_HWY \\\r\nPRIMARY_COLL_FACTOR PCF_CODE_OF_VIOL PCF_VIOL_CATEGORY TYPE_OF_COLLISION MVIW \\\r\nPED_ACTION ROAD_SURFACE ROAD_COND_1 ROAD_COND_2 LIGHTING CONTROL_DEVICE \\\r\nSTWD_VEHTYPE_AT_FAULT CHP_VEHTYPE_AT_FAULT PRIMARY_RAMP SECONDARY_RAMP\r\ndo\r\n\tsqlite-utils create-table records.db $TABLE key text name text --pk=key\r\n\tsqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv\r\n\tsqlite-utils add-foreign-key records.db collisions $TABLE $TABLE key\r\n\tsqlite-utils create-index records.db collisions $TABLE\r\ndone\r\n```\r\n\r\nYou can see the full code and import script here: https://github.com/radical-bike-lobby/switrs-db\r\n\r\nIf I run this code and then hit the CSV export link in the Datasette interface (the simple link or the \"advanced\" dialog), export fails after a small number of CSV rows are written. I am not seeing any detailed error messages but this appears in the logging output:\r\n\r\n```\r\nINFO: 127.0.0.1:57885 - \"GET /records/collisions.csv?_facet=PRIMARY_RD&PRIMARY_RD=ASHBY+AV&_labels=on&_size=max HTTP/1.1\" 200 OK\r\nCaught this error: \r\n\r\n```\r\n\r\n(No other output follows `error:` other than a blank line.)\r\n\r\nI've stared at the rows directly after the error occurs and can't yet see what is causing the problem. 
I'm going to set up a development environment and see if I get any more detailed error output, and then stare more at some problematic lines to see if I can get a simple reproduction.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2214/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2023057255, "node_id": "I_kwDOBm6k_c54lWdn", "number": 2212, "title": "Can't filter with numbers", "user": {"value": 605070, "label": "fzakaria"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-12-04T05:26:29Z", "updated_at": "2023-12-04T05:26:29Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a schema that uses numbers for a column (actually it's a boolean 1 or 0 but SQLite doesn't have Boolean).\r\nI can't seem to get the facet to work or even filtering on this column.\r\n\r\nMy guess is that Datasette is \"stringifying\" the number and it's not matching?\r\nExample: https://debian-sqlelf.fly.dev/debian/elf_symbols?_sort_desc=name&_facet=exported&exported=0", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2212/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1087919372, "node_id": "I_kwDOBm6k_c5A2FUM", "number": 1578, "title": "Confirm if documented nginx proxy config works for row pages with escaped characters in their primary key", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-12-23T18:27:59Z", "updated_at": "2021-12-24T21:33:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Found this while working on https://github.com/simonw/datasette-tiddlywiki\r\n\r\n\"image\"\r\n\r\nThen clicking on `/tiddlywiki/tiddlers/%24%3A%2FDefaultTiddlers` returns a 404.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1578/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1077628073, "node_id": "I_kwDOBm6k_c5AO0yp", "number": 1550, "title": "Research option for returning all rows from arbitrary query", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-12-11T19:31:11Z", "updated_at": "2021-12-11T23:43:24Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by thinking about #1549 - returning ALL rows from an arbitrary query is a lot easier if you just run that query and keep iterating over the cursor.\r\n\r\nI've avoided doing that in the past because it could tie up a connection for a long time - but in private instances this wouldn't be such a problem.", "repo": 
{"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1550/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1077620955, "node_id": "I_kwDOBm6k_c5AOzDb", "number": 1549, "title": "Redesign CSV export to improve usability", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2021-12-11T19:02:12Z", "updated_at": "2022-04-04T11:17:13Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "*Original title: Set content type for CSV so that browsers will attempt to download instead opening in the browser*\r\n\r\nRight now, if the user clicks on the CSV related to a table or a query, the response header for the content type is \r\n\r\n\"content-type: text/plain; charset=utf-8\"\r\n\r\nMost browsers will try to open a file with this content-type in the browser. \r\n\r\nThis is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in the a spreadsheet program they next need to save the page through their browser.\r\n\r\nIt would be great if the response header could be something like \r\n\r\n```\r\n'Content-type: text/csv');\r\n'Content-disposition: attachment;filename=MyVerySpecial.csv');\r\n```\r\n\r\nwhich would lead browsers to open a download dialog.\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1549/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1079111498, "node_id": "I_kwDOBm6k_c5AUe9K", "number": 1553, "title": "if csv export is truncated in non streaming mode set informative response header", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-13T22:50:44Z", "updated_at": "2021-12-16T19:17:28Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "streaming mode is currently not enabled for custom queries, so the queries will be truncated to max row limit.\r\n\r\nit would be great if a response is truncated that an header signalling that was set in the header.\r\n\r\ni need to write some pagination code for getting full results back for a custom query and it would make the code much better if i could reliably known when there is nothing more to limit/offset ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1553/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1082584499, "node_id": "I_kwDOBm6k_c5Ahu2z", "number": 1558, "title": "Redesign `facet_results` JSON structure prior to Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": 
"open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-12-16T19:45:10Z", "updated_at": "2023-01-09T15:31:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Decision: as an initial fix I'm going to de-duplicate those keys by using `tags__array` etc - with a `_2` on the end if that key is already used.\r\n>\r\n> I'll open a separate issue to redesign this better for Datasette 1.0.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/625#issuecomment-996130862_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1558/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1083657868, "node_id": "I_kwDOBm6k_c5Al06M", "number": 1565, "title": "Documented JavaScript variables on different templates made available for plugins", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2021-12-17T22:30:51Z", "updated_at": "2021-12-19T22:37:29Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "While working on https://github.com/simonw/datasette-leaflet-freedraw/issues/10 I found myself writing this atrocity to figure out the SQL query used for a specific table page:\r\n\r\n```javascript\r\nlet innerSql = Array.from(document.getElementsByTagName(\"span\")).filter(\r\n el => el.innerText == \"View and edit SQL\"\r\n)[0].parentElement.getAttribute(\"title\")\r\n```\r\nThis is obviously bad - it's very brittle, and will break if I ever change the text on that link (like localizing it for example).\r\n\r\nInstead, I think pages like that one should have a block of script at the bottom something like this:\r\n```javascript\r\nwindow.datasette = window.datasette || {};\r\ndatasette.view_name = 'table';\r\ndatasette.table_sql = 'select * from ...';\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1565/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1084185188, "node_id": "I_kwDOBm6k_c5An1pk", "number": 1573, "title": "Make trace() a documented internal API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-19T20:32:56Z", "updated_at": "2021-12-19T21:13:13Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This should be documented so plugin authors can use it to add their own custom traces: https://github.com/simonw/datasette/blob/8f311d6c1d9f73f4ec643009767749c17b5ca5dd/datasette/tracer.py#L28-L52\r\n\r\nIncluding the new `kwargs` pattern I added in #1571: https://github.com/simonw/datasette/blob/f65817000fdf87ce8a0c23edc40784ebe33b5842/datasette/database.py#L128-L132", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/1573/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1090810196, "node_id": "I_kwDOBm6k_c5BBHFU", "number": 1583, "title": "consider adding deletion step of cloudbuild artifacts to gcloud publish", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-30T00:33:23Z", "updated_at": "2021-12-30T00:34:16Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "right now, as part of the the publish process images and other artifacts are stored to gcloud's cloud storage before being deployed to cloudrun.\r\n\r\nafter successfully deploying, it would be nice if the the script deleted these artifacts. otherwise, if you have regularly scheduled build process, you can end up paying to store lots of out of date artifacts.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1583/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1091257796, "node_id": "I_kwDOBm6k_c5BC0XE", "number": 1584, "title": "give error with recursive sql", "user": {"value": 58088336, "label": "tunguyenatwork"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-12-30T18:53:16Z", "updated_at": "2021-12-30T18:53:16Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I got an error \"near \"WITH\": syntax error\" after I upgraded to version 0.59 from 0.52.4. This error is related to recursive sql. It works great on the previous version but it failed after upgraded. 
Below is an example of sql:\r\n\r\nWITH RECURSIVE manager_of(position, super_position) AS (SELECT position, case ifnull(INDIRECT_SUPER_POSITION,'') when '' then super_position else INDIRECT_SUPER_POSITION end as SUPER_POSITION FROM position where super_position<>'SGV000000001' and super_position!='' and position <> super_position),chain_manager_of_position(position, level) AS (SELECT super_position, 1 as level FROM manager_of WHERE super_position!='' and (position=:pos or position in (Select position from employee where employee=:ein)) UNION ALL SELECT super_position, level+1 as level FROM manager_of JOIN chain_manager_of_position USING(position)) SELECT * FROM chain_manager_of_position left join employee using(position) where employee is not NULL order by level limit 1", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1584/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1091838742, "node_id": "I_kwDOBm6k_c5BFCMW", "number": 1585, "title": "Fire base caching for `publish cloudrun`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-01T15:38:15Z", "updated_at": "2022-01-01T15:40:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://gist.github.com/steren/03d3e58c58c9a53fd49bb78f58541872 has a recipe for this, via https://twitter.com/steren/status/1477038411114446848\r\n\r\nCould this enable easier vanity URLs of the format `https://$project_id.web.app/`? 
How about CDN caching?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1585/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1096536240, "node_id": "I_kwDOBm6k_c5BW9Cw", "number": 1586, "title": "run analyze on all databases as part of start up or publishing", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-07T17:52:34Z", "updated_at": "2022-02-02T07:13:37Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Running `analyze;` lets sqlite's query planner make *much* better use of any indices.\r\n\r\nIt might be nice if the analyze was run as part of the start up of \"serve\" or \"publish\".", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1586/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1100015398, "node_id": "I_kwDOBm6k_c5BkOcm", "number": 1591, "title": "Maybe let plugins define custom serve options?", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-01-12T08:18:47Z", "updated_at": "2022-01-15T11:56:59Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://twitter.com/psychemedia/status/1481171650934714370\r\n\r\n> can extensions be passed their own cli args? 
eg `--ext-tiddlywiki-dbname tiddlywiki2.sqlite` ?\r\n\r\nI've thought something like this might be useful for other plugins in the past, too.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1591/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1100499619, "node_id": "I_kwDOBm6k_c5BmEqj", "number": 1592, "title": "Row pages should show links to foreign keys", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-12T15:50:20Z", "updated_at": "2022-01-12T15:52:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Refs #1518 refactor.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1592/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1102568047, "node_id": "I_kwDOBm6k_c5Bt9pv", "number": 1596, "title": "Documentation page warning of changes coming in 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-01-13T23:26:04Z", "updated_at": "2022-01-13T23:26:04Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I should start this relatively soon.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1596/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1102966378, "node_id": "I_kwDOBm6k_c5Bve5q", "number": 1599, "title": "Add architecture documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-01-14T04:55:38Z", "updated_at": "2022-01-14T04:56:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by https://matklad.github.io/2021/02/06/ARCHITECTURE.md.html\r\n\r\nGood example: https://github.com/rust-analyzer/rust-analyzer/blob/d7c99931d05e3723d878bea5dc26766791fa4e69/docs/dev/architecture.md", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1599/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1121121305, "node_id": "I_kwDOBm6k_c5C0vQZ", "number": 1618, "title": "Reconsider policy on blocking queries containing the string \"pragma\"", "user": {"value": 770231, "label": "strada"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2022-02-01T19:39:46Z", "updated_at": 
"2022-02-02T19:42:03Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "First of all, thanks for creating this cool project, and also supporting publishing to various hosting services out of the box.\r\n\r\nWhile testing out, I noticed legitimate queries such as \r\n```\r\nselect * from books where title like 'Pragmatic%'\r\n```\r\nor\r\n```\r\nselect * from books where title = 'The Pragmatic Programmer'\r\n```\r\nare blocked, due to the regular expression check here:\r\nhttps://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L185\r\n\r\nExample as seen from a Datasette instance:\r\nhttps://fivethirtyeight.datasettes.com/polls?sql=select+*+from+books+where+title+like+%27Pragmatic%25%27%0D%0A\r\n\r\nI'd propose a regular expression like \r\n```\r\nre.compile(f\"pragma_(?!({'|'.join(allowed_pragmas)}))\"),\r\n```\r\ninstead of\r\n```\r\nre.compile(f\"pragma(?!_({'|'.join(allowed_pragmas)}))\"),\r\n```\r\n\r\nI can create a pull request with this change, unless the maintainers think it would allow unwanted queries to be executed.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1618/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1121583414, "node_id": "I_kwDOBm6k_c5C2gE2", "number": 1619, "title": "JSON link on row page is 404 if base_url setting is used", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-02-02T07:09:53Z", "updated_at": "2023-03-24T15:38:04Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "On my local environment:\r\n\r\n datasette fixtures.db -p 3344 --setting base_url /foo/bar/\r\n\r\nThen hit http://127.0.0.1:3344/foo/bar/fixtures/table%2Fwith%2Fslashes.csv/3\r\n\r\n\"image\"\r\n\r\nBut... 
that `json` link goes here, which is a 404:\r\n\r\nhttp://127.0.0.1:3344/foo/bar/foo/bar/fixtures/table%2Fwith%2Fslashes.csv/3?_format=json", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1619/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1122427321, "node_id": "I_kwDOBm6k_c5C5uG5", "number": 1624, "title": "Index page `/` has no CORS headers", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-02-02T21:56:10Z", "updated_at": "2022-09-28T16:54:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Compare the following:\r\n```\r\n% curl -I 'https://latest.datasette.io/fixtures'\r\nHTTP/1.1 200 OK\r\nlink: https://latest.datasette.io/fixtures.json; rel=\"alternate\"; type=\"application/json+datasette\"\r\ncache-control: max-age=5\r\nreferrer-policy: no-referrer\r\naccess-control-allow-origin: *\r\naccess-control-allow-headers: Authorization\r\naccess-control-expose-headers: Link\r\ncontent-type: text/html; charset=utf-8\r\nx-databases: _memory, _internal, fixtures, extra_database\r\nDate: Wed, 02 Feb 2022 21:55:49 GMT\r\nServer: Google Frontend\r\nTransfer-Encoding: chunked\r\n\r\n% curl -I 'https://latest.datasette.io/' \r\nHTTP/1.1 200 OK\r\nlink: https://latest.datasette.io/.json; rel=\"alternate\"; type=\"application/json+datasette\"\r\ncontent-type: text/html; charset=utf-8\r\nx-databases: _memory, _internal, fixtures, extra_database\r\nDate: Wed, 02 Feb 2022 21:55:52 GMT\r\nServer: Google Frontend\r\nTransfer-Encoding: chunked\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1624/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1122450452, "node_id": "I_kwDOBm6k_c5C5zwU", "number": 1625, "title": "Try running tests against macOS and Windows in addition to Ubuntu", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-02-02T22:25:57Z", "updated_at": "2022-02-02T22:25:57Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I already do this for `sqlite-utils`: https://github.com/simonw/sqlite-utils/blob/3.22.1/.github/workflows/test.yml\r\n\r\nRelated:\r\n- #1617\r\n- #1545", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1625/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1122557010, "node_id": "I_kwDOBm6k_c5C6NxS", "number": 1627, "title": "Get the tests passing against Windows", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": 
"2022-02-03T01:23:06Z", "updated_at": "2022-02-03T01:23:32Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> OK, the tests do NOT pass against Windows! https://github.com/simonw/datasette/runs/5044105941\r\n>\r\n> \"image\"\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1626#issuecomment-1028515161_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1627/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1108300685, "node_id": "I_kwDOBm6k_c5CD1ON", "number": 1604, "title": "Option to assign a domain/subdomain using `datasette publish cloudrun`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-19T16:21:17Z", "updated_at": "2022-01-19T16:23:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Looks like this API should be able to do that: https://twitter.com/steren/status/1483835859191304192 - https://cloud.google.com/run/docs/reference/rest/v1/namespaces.domainmappings/create", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1604/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1108671952, "node_id": "I_kwDOBm6k_c5CFP3Q", "number": 1605, "title": "Scripted exports", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2022-01-19T23:45:55Z", "updated_at": "2022-11-30T15:06:38Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Posting this while I'm thinking about it: I mentioned at the end of [this thread](https://twitter.com/eyeseast/status/1483893011658551299) that I'm usually doing `datasette --get` to export canned queries.\r\n\r\nI used to use a tool called [datafreeze](https://github.com/pudo/datafreeze) to do scripted exports, but that project looks dead now. The ergonomics of it are pretty nice, though, and the `Freezefile.yml` structure is actually not too far from Datasette's canned queries.\r\n\r\nThis is related to the idea for `datasette query` (#1356) but I think it's a distinct feature. 
It's most likely a plugin, but I want to raise it here because it's probably something other people have thought about.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1605/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1113384383, "node_id": "I_kwDOBm6k_c5CXOW_", "number": 1611, "title": "Avoid ever running count(*) against SpatiaLite KNN table", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-25T03:32:54Z", "updated_at": "2022-02-02T06:45:47Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Got this in a trace:\r\n\r\n\"image\"\r\n\r\nLooks like running `count(*)` against KNN took 83s! It ignored the time limit. And still only returned a count of 0.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1611/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1114628238, "node_id": "I_kwDOBm6k_c5Cb-CO", "number": 1613, "title": "Improvements to help make Datasette a better tool for learning SQL", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-01-26T04:56:07Z", "updated_at": "2022-01-26T16:41:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Tracking issue for the general goal of making Datasette a better tool for learning SQL.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1613/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1115435536, "node_id": "I_kwDOBm6k_c5CfDIQ", "number": 1614, "title": "Try again with SQLite codemirror support", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-26T20:05:20Z", "updated_at": "2022-12-23T21:27:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I tried and failed to implement autocomplete a while ago. 
Relevant code:\r\n\r\nhttps://github.com/codemirror/legacy-modes/blob/8f36abca5f55024258cd23d9cfb0203d8d244f0d/mode/sql.js#L335\r\n\r\nSounds like upgrading to CodeMirror 6 ASAP would be worthwhile since it has better accessibility and touch screen support: https://codemirror.net/6/", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1614/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1125576543, "node_id": "I_kwDOBm6k_c5DFu9f", "number": 1630, "title": "Review datasette.utils and decide which functions should be documented for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2022-02-07T06:39:52Z", "updated_at": "2022-02-07T06:39:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Follows:\r\n- #1176", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1630/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1129052172, "node_id": "I_kwDOBm6k_c5DS_gM", "number": 1633, "title": "base_url or prefix does not work with _exact match", "user": {"value": 6613091, "label": "henrikek"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-02-09T21:45:07Z", "updated_at": "2022-04-28T09:12:56Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When i hit the \"Apply\" button to search with the \"_exact\" column syntax, the URL prefix is removed from the url.\r\n\r\n![image](https://user-images.githubusercontent.com/6613091/153293758-0b757d55-5757-4987-992e-9426e69a7956.png)\r\n\r\nAnd the result is:\r\n![image](https://user-images.githubusercontent.com/6613091/153294672-87be7809-bb7b-455d-bf1a-41e90bbfa4ae.png)\r\n\r\nIf I add the marked row to url_builder.py it seems to work:\r\n![image](https://user-images.githubusercontent.com/6613091/153295231-bdd52e37-efcf-4b21-9d37-69f182a922f4.png)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1633/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1131295060, "node_id": "I_kwDOBm6k_c5DbjFU", "number": 1634, "title": "Update Dockerfile generated by `datasette publish`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2022-02-11T00:07:26Z", "updated_at": "2022-03-11T17:38:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The generated `Dockerfile` currently looks something like this:\r\n```Dockerfile\r\nFROM python:3.8\r\nCOPY . 
/app\r\nWORKDIR /app\r\n\r\nENV DATASETTE_SECRET 'edab49cbc5d5f6f33238f54852037e3fee710821960b73edd2ce743454182ae2'\r\nRUN pip install -U datasette datasette-auth-passwords datasette-tiddlywiki datasette-graphql\r\nRUN datasette inspect fixtures.db other.db --inspect-file inspect-data.json\r\nENV PORT 8080\r\nEXPOSE 8080\r\nCMD datasette serve --host 0.0.0.0 -i fixtures.db -i other.db --cors --inspect-file inspect-data.json --metadata metadata.json --create --port $PORT /data/*.db\r\n```\r\nThis is still on Python 3.8, and it generates a pretty large image compared to the `Dockerfile` used for https://hub.docker.com/datasetteproject/datasette - https://github.com/simonw/datasette/blob/0.60.2/Dockerfile\r\n\r\nHere's the code that generates it: https://github.com/simonw/datasette/blob/7d24fd405f3c60e4c852c5d746c91aa2ba23cf5b/datasette/utils/__init__.py#L389-L400", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1634/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 2, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1142107925, "node_id": "I_kwDOBm6k_c5EEy8V", "number": 1638, "title": "`filters_from_request` plugin hook docs should mention that returning an async function is allowed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-02-18T00:08:26Z", "updated_at": "2022-02-18T00:08:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://docs.datasette.io/en/stable/plugin_hooks.html#filters-from-request-request-database-table-datasette doesn't mention that you can return an `async` function - but you can, and in fact Datasette itself uses that here: https://github.com/simonw/datasette/blob/aa7f0037a46eb76ae6fe9bf2a1f616c58738ecdf/datasette/filters.py#L43-L47", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1638/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1148638868, "node_id": "I_kwDOBm6k_c5EdtaU", "number": 1639, "title": "Make datasette-redirect-forbidden unneccessary", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-02-23T22:18:46Z", "updated_at": "2022-02-23T22:18:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I wrote `datasette-redirect-forbidden` today because I needed 403 errors to redirect to `/-/login` and it was the quickest way to solve that problem.\r\n\r\nThis should be a feature of Datasette core.\r\n\r\n- https://github.com/simonw/datasette-redirect-forbidden/issues/2", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1639/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} 
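A sketch of roughly what `datasette-redirect-forbidden` does, via the documented `forbidden()` plugin hook (the `/-/login` path and `next` parameter are conventions of the auth plugin in use, not Datasette core behavior):

```python
from urllib.parse import quote_plus

from datasette import hookimpl
from datasette.utils.asgi import Response


@hookimpl
def forbidden(datasette, request, message):
    # Turn the 403 response into a redirect, remembering the original destination
    return Response.redirect("/-/login?next=" + quote_plus(request.path))
```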
{"id": 1148725876, "node_id": "I_kwDOBm6k_c5EeCp0", "number": 1640, "title": "Support static assets where file length may change, e.g. logs", "user": {"value": 57859326, "label": "broccolihighkicks"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-02-24T00:34:42Z", "updated_at": "2022-03-05T01:19:25Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "This is a bit of an oxymoron.\r\n\r\nI am serving a log.txt file for a background process using the Datasette --static CLI. This is useful as I can observe a background process from the web UI to see any errors that occur (instead of spelunking the logs via docker exec/ssh etc).\r\n\r\nI get this error, which I think is because Datasette assumes that the size of the content does not change (but appending new log lines means the content length changes).\r\n\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/datasette/app.py\", line 1181, in route_path\r\n response = await view(request, send)\r\n File \"/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py\", line 305, in inner_static\r\n await asgi_send_file(send, full_path, chunk_size=chunk_size)\r\n File \"/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py\", line 280, in asgi_send_file\r\n await send(\r\n File \"/usr/local/lib/python3.9/site-packages/asgi_csrf.py\", line 104, in wrapped_send\r\n await send(event)\r\n File \"/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py\", line 460, in send\r\n output = self.conn.send(event)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_connection.py\", line 468, in send\r\n data_list = self.send_with_data_passthrough(event)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_connection.py\", line 501, in send_with_data_passthrough\r\n writer(event, data_list.append)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_writers.py\", line 58, in __call__\r\n self.send_data(event.data, write)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_writers.py\", line 78, in send_data\r\n raise LocalProtocolError(\"Too much data for declared Content-Length\")\r\nh11._util.LocalProtocolError: Too much data for declared Content-Length\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/datasette/app.py\", line 1181, in route_path\r\n response = await view(request, send)\r\n File \"/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py\", line 305, in inner_static\r\n await asgi_send_file(send, full_path, chunk_size=chunk_size)\r\n File \"/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py\", line 280, in asgi_send_file\r\n await send(\r\n File \"/usr/local/lib/python3.9/site-packages/asgi_csrf.py\", line 104, in wrapped_send\r\n await send(event)\r\n File \"/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py\", line 460, in send\r\n output = self.conn.send(event)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_connection.py\", line 468, in send\r\n data_list = self.send_with_data_passthrough(event)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_connection.py\", line 501, in send_with_data_passthrough\r\n writer(event, data_list.append)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_writers.py\", line 58, in __call__\r\n self.send_data(event.data, write)\r\n File \"/usr/local/lib/python3.9/site-packages/h11/_writers.py\", line 78, 
in send_data\r\n raise LocalProtocolError(\"Too much data for declared Content-Length\")\r\nh11._util.LocalProtocolError: Too much data for declared Content-Length\r\n```\r\n\r\nThanks, I am finding Datasette very useful.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1640/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1149310456, "node_id": "I_kwDOBm6k_c5EgRX4", "number": 1641, "title": "Tweak mobile keyboard settings", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-02-24T13:47:10Z", "updated_at": "2022-02-24T13:49:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://developer.apple.com/library/archive/documentation/StringsTextFonts/Conceptual/TextAndWebiPhoneOS/KeyboardManagement/KeyboardManagement.html#//apple_ref/doc/uid/TP40009542-CH5-SW12\r\n\r\n`autocorrect=\"off\"` is worth experimenting with.\r\n\r\nTwitter: https://twitter.com/forestgregg/status/1496842959563726852", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1641/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1154399841, "node_id": "I_kwDOBm6k_c5Ezr5h", "number": 1645, "title": "Sensible `cache-control` headers for static assets, including those served by plugins", "user": {"value": 697092, "label": "curiousleo"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2022-02-28T18:12:03Z", "updated_at": "2022-03-08T02:59:29Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "## What I'm seeing\r\n\r\nWith `default_cache_ttl = 86400`, I see the following:\r\n\r\nA table view returns `Cache-control: max-age=86400`:\r\n\r\n![Screenshot_20220228_190000](https://user-images.githubusercontent.com/697092/156034352-4d64683e-39c8-49af-81df-0217a5957bbd.png)\r\n\r\nA static asset returns no `Cache-control` header:\r\n\r\n![Screenshot_20220228_185933](https://user-images.githubusercontent.com/697092/156034363-d0b03cc2-5889-4ed2-b601-8c1846b8469a.png)\r\n\r\n## What I expected to see\r\n\r\nI expected the static asset to return a `Cache-control` header indicating that this response can be cached.\r\n\r\n## Why this matters\r\n\r\nI'm productionising a Datasette deployment right now and was looking into putting it behind a Varnish instance. I was surprised to see requests for static assets being served from Datasette rather than Varnish, this is what led me to look more closely at the response headers.\r\n\r\nWhile Datasette serves those static assets pretty quickly, I don't see why Datasette should serve them. By their nature, static assets like images and JS files are very cacheable, so it should be easy to serve them from a cache like Varnish.\r\n\r\n(Note that Varnish can easily be configured to override this header, enabling caching for static assets. 
But it would be better if this override was not necessary.)\r\n\r\n## Discussion\r\n\r\nIt seems clear to me that serving static assets without a `Cache-control` header is not ideal.\r\n\r\nI see two options here:\r\n\r\nA. Static assets use the same logic as table / SQL views to set the `Cache-control` header based on `default_cache_ttl`.\r\nB. An additional setting for static assets is introduced (`default_static_cache_ttl`, say).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1645/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1161937073, "node_id": "I_kwDOBm6k_c5FQcCx", "number": 1653, "title": "Mechanism to default a table to sorting by multiple columns", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-07T21:20:11Z", "updated_at": "2022-03-07T21:23:39Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "### Discussed in https://github.com/simonw/datasette/discussions/1652\r\n\r\n
\r\n\r\nOriginally posted by **zaneselvans** March 7, 2022\r\nIt's easy to tell datasette to sort tables using a single column, as [described in the docs](https://docs.datasette.io/en/stable/metadata.html#setting-a-default-sort-order):\r\n\r\n```yaml\r\ndatabases:\r\n ferc1:\r\n tables:\r\n f1_edcfu_epda:\r\n sort: created_time\r\n```\r\n\r\nBut is there some way to tell it to sort using a composite key, like you would in an `ORDER BY` clause instead? For example, the way it's being done **[in this query](https://data.catalyst.coop/ferc1?sql=select%0D%0A++rowid%2C%0D%0A++respondent_id%2C%0D%0A++report_year%2C%0D%0A++spplmnt_num%2C%0D%0A++row_number%2C%0D%0A++row_seq%2C%0D%0A++row_prvlg%2C%0D%0A++acct_num%2C%0D%0A++depr_plnt_base%2C%0D%0A++est_avg_srvce_lf%2C%0D%0A++net_salvage%2C%0D%0A++apply_depr_rate%2C%0D%0A++mrtlty_crv_typ%2C%0D%0A++avg_remaining_lf%2C%0D%0A++report_prd%0D%0Afrom%0D%0A++f1_edcfu_epda%0D%0Awhere%0D%0A++respondent_id+%3D+210%0D%0A++AND+report_year+%3D+2020%0D%0Aorder+by%0D%0A++report_year%2C+report_prd%2C+respondent_id%2C+spplmnt_num%2C+row_number%0D%0Alimit%0D%0A++1000)** on our Datasette?\r\n\r\n```sql\r\nSELECT\r\n respondent_id,\r\n report_year,\r\n spplmnt_num,\r\n row_number,\r\n row_seq,\r\n row_prvlg,\r\n acct_num,\r\n depr_plnt_base,\r\n est_avg_srvce_lf,\r\n net_salvage,\r\n apply_depr_rate,\r\n mrtlty_crv_typ,\r\n avg_remaining_lf,\r\n report_prd\r\nFROM\r\n f1_edcfu_epda\r\nWHERE\r\n respondent_id = 210\r\n AND report_year = 2020\r\nORDER BY\r\n report_year, report_prd, respondent_id, spplmnt_num, row_number\r\nLIMIT\r\n 1000\r\n```\r\n\r\nThe problem here is that by default it's using `rowid` (the SQLite assigned autoincrementing integer key) to order the records, but the table **should** have a natural composite primary key, but the original database that this data is being migrated from doesn't enforce unique primary keys, so there are dupes, and we don't want to drop those rows, and the records are somehow getting jumbled in the database (the `rowid` ordering isn't lined up with the expected ordering based on the composite primary key, though it's close) and this jumbling is confusing to users that expect to see the data ordered based on the natural primary key.\r\n\r\nI've tried setting the `sort` metadata parameter to a list of column names, a tuple of column names, a quoted string of comma-separated column names, a quoted string of a tuple of column names...\r\n\r\n```yaml\r\ndatabases:\r\n ferc1:\r\n tables:\r\n f1_edcfu_epda:\r\n sort: \"(report_year, report_prd, respondent_id, spplmnt_num, row_number)\"\r\n```\r\n\r\nand they all give me server errors like:\r\n\r\n```\r\nCannot sort table by (report_year, report_prd, respondent_id, spplmnt_num, row_number)\r\n```
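One possible workaround until `sort:` accepts multiple columns: bake the composite ordering into a SQL view (the view name is invented here, and SQLite does not formally guarantee ordering from a view, though it holds for simple selects like this):

```python
import sqlite_utils

db = sqlite_utils.Database("ferc1.db")
# Wrap the table in a view that carries the composite ORDER BY,
# then point Datasette users at the view instead of the raw table.
db.create_view(
    "f1_edcfu_epda_ordered",
    """
    select * from f1_edcfu_epda
    order by report_year, report_prd, respondent_id, spplmnt_num, row_number
    """,
    replace=True,
)
```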
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1653/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1163369515, "node_id": "I_kwDOBm6k_c5FV5wr", "number": 1655, "title": "query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-03-09T00:56:40Z", "updated_at": "2023-10-17T21:53:17Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "[this page](https://labordata.bunkum.us/opdr-8335ea3?sql=with+most_recent_lu+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from%0D%0A++++%28%0D%0A++++++select%0D%0A++++++++*%0D%0A++++++from%0D%0A++++++++lm_data%0D%0A++++++order+by%0D%0A++++++++f_num%2C%0D%0A++++++++receive_date+desc%0D%0A++++%29+t%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++aff_abbr+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+abbr_local_name%2C%0D%0A++coalesce%28%0D%0A++++regexp_match%28%27%28.*%3F%29%28%2C%3F+AFL-CIO%24%29%27%2C+union_name%29%2C%0D%0A++++regexp_match%28%27%28.*%3F%29%28+IND%24%29%27%2C+union_name%29%2C%0D%0A++++union_name%0D%0A++%29+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+full_local_name%2C%0D%0A++*%0D%0Afrom%0D%0A++most_recent_lu%0D%0Awhere+%28desig_num+IS+NOT+NULL+OR+unit_name+IS+NOT+NULL%29+AND+desig_name+%21%3D+%27HQ%27%0D%0Alimit%0D%0A++5000+offset+0)\r\n\r\nis using about 400 mb in firefox 97 on mac os x. 
if you download the html for the page, it's about 11mb and if you get the csv for the data its about 1mb.\r\n\r\nit's using over a 1G on chrome 99.\r\n\r\ni found this because, i was trying to figure out why editing the SQL was getting very slow.\r\n\r\n\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1655/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174655187, "node_id": "I_kwDOBm6k_c5GA9DT", "number": 1671, "title": "Filters fail to work correctly against calculated numeric columns returned by SQL views because type affinity rules do not apply", "user": {"value": 9308268, "label": "rayvoelker"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-03-20T19:17:24Z", "updated_at": "2022-03-22T17:43:12Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I found a strange behavior, and I'm not sure if it's related to views and boolean values perhaps, or if there's something else weird going on here, but I'll provide an example that may help show what I'm seeing happen.\r\n\r\n```bash\r\n#!/bin/bash\r\n\r\necho \"\\\"id\\\",\\\"expiration_date\\\"\r\n0,2018-01-04\r\n1,2019-01-05\r\n2,2020-01-06\r\n3,2021-01-07\r\n4,2022-01-08\r\n5,2023-01-09\r\n6,2024-01-10\r\n7,2025-01-11\r\n8,2026-01-12\r\n9,2027-01-13\r\n\" > test.csv\r\ncsvs-to-sqlite test.csv test.db\r\nsqlite-utils create-view --replace test.db test_view \"select id, expiration_date, case when julianday('NOW') >= julianday(expiration_date) then 1 else 0 end as has_expired FROM test\"\r\n```\r\n\r\n```bash\r\ndatasette test.db\r\n```\r\n![image](https://user-images.githubusercontent.com/9308268/159178745-9c6152f7-eac6-4bf9-bef5-a2d63d3ee13f.png)\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/159178824-c8952137-270c-42a4-ad1c-f6ad2c51e499.png)\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/159178877-23e00b36-443a-43ef-83e5-e0bdddd3fdcd.png)\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/159178918-65922cc7-2514-4735-a72d-4904b99976d4.png)\r\n\r\nThanks again and let me know if you want me to provide anything else!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1671/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174697144, "node_id": "I_kwDOBm6k_c5GBHS4", "number": 1672, "title": "Refactor CSV handling code out of DataView", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2022-03-20T21:47:00Z", "updated_at": "2022-03-20T21:52:39Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> I think the way to get rid of most of the remaining complexity in `DataView` is to refactor how CSV stuff works - pulling it in line with other export factors and extracting the streaming mechanism. 
Opening a fresh issue for that.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1660#issuecomment-1073355032_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1672/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174708375, "node_id": "I_kwDOBm6k_c5GBKCX", "number": 1673, "title": "Streaming CSV spends a lot of time in `table_column_details`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-20T22:25:28Z", "updated_at": "2022-03-20T22:34:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "At least I think it does. I tried running `py-spy top -p $PID` against a Datasette process that was trying to do:\r\n\r\n datasette covid.db --get '/covid/ny_times_us_counties.csv?_size=10&_stream=on'\r\n\r\nWhile investigating:\r\n- #1355\r\n\r\nAnd spotted this:\r\n```\r\ndatasette covid.db --get /covid/ny_times_us_counties.csv?_size=10&_stream=on' (python v3.10.2)\r\nTotal Samples 5800\r\nGIL: 71.00%, Active: 98.00%, Threads: 4\r\n\r\n %Own %Total OwnTime TotalTime Function (filename:line) \r\n 8.00% 8.00% 4.32s 4.38s sql_operation_in_thread (datasette/database.py:212)\r\n 5.00% 5.00% 3.77s 3.93s table_column_details (datasette/utils/__init__.py:614)\r\n 6.00% 6.00% 3.72s 3.72s _worker (concurrent/futures/thread.py:81)\r\n 7.00% 7.00% 2.98s 2.98s _read_from_self (asyncio/selector_events.py:120)\r\n 5.00% 6.00% 2.35s 2.49s detect_fts (datasette/utils/__init__.py:571)\r\n 4.00% 4.00% 1.34s 1.34s _write_to_self (asyncio/selector_events.py:140)\r\n```\r\nRelevant code: https://github.com/simonw/datasette/blob/798f075ef9b98819fdb564f9f79c78975a0f71e8/datasette/utils/__init__.py#L609-L625\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1673/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174717287, "node_id": "I_kwDOBm6k_c5GBMNn", "number": 1674, "title": "Tweak design of /.json", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2022-03-20T22:58:01Z", "updated_at": "2022-03-20T22:58:40Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://latest.datasette.io/.json\r\n\r\nCurrently:\r\n```json\r\n{\r\n \"_memory\": {\r\n \"name\": \"_memory\",\r\n \"hash\": null,\r\n \"color\": \"a6c7b9\",\r\n \"path\": \"/_memory\",\r\n \"tables_and_views_truncated\": [],\r\n \"tables_and_views_more\": false,\r\n \"tables_count\": 0,\r\n \"table_rows_sum\": 0,\r\n \"show_table_row_counts\": false,\r\n \"hidden_table_rows_sum\": 0,\r\n \"hidden_tables_count\": 0,\r\n \"views_count\": 0,\r\n \"private\": false\r\n },\r\n \"fixtures\": {\r\n \"name\": \"fixtures\",\r\n \"hash\": \"645005884646eb941c89997fbd1c0dd6be517cb1b493df9816ae497c0c5afbaa\",\r\n \"color\": \"645005\",\r\n \"path\": 
\"/fixtures\",\r\n \"tables_and_views_truncated\": [\r\n {\r\n \"name\": \"compound_three_primary_keys\",\r\n \"columns\": [\r\n \"pk1\",\r\n \"pk2\",\r\n \"pk3\",\r\n \"content\"\r\n ],\r\n \"primary_keys\": [\r\n \"pk1\",\r\n \"pk2\",\r\n \"pk3\"\r\n ],\r\n \"count\": 1001,\r\n \"hidden\": false,\r\n \"fts_table\": null,\r\n \"num_relationships_for_sorting\": 0,\r\n \"private\": false\r\n },\r\n```\r\nAs-of this issue the `\"path\"` key is confusing, it doesn't match what https://latest.datasette.io/-/databases returns:\r\n\r\n- #1668", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1674/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1175690070, "node_id": "I_kwDOBm6k_c5GE5tW", "number": 1676, "title": "Reconsider ensure_permissions() logic, can it be less confusing?", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-03-21T17:14:57Z", "updated_at": "2022-12-02T01:23:40Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Updated documentation: https://github.com/simonw/datasette/blob/e627510b760198ccedba9e5af47a771e847785c9/docs/internals.rst#await-ensure_permissionsactor-permissions\r\n>\r\n>> This method allows multiple permissions to be checked at onced. It raises a `datasette.Forbidden` exception if any of the checks are denied before one of them is explicitly granted.\r\n>> \r\n>> This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if either one of the following checks returns `True` or not a single one of them returns `False`:\r\n>\r\n> That's pretty hard to understand! I'm going to open a separate issue to reconsider if this is a useful enough abstraction given how confusing it is.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1675#issuecomment-1074177827_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1676/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1175894898, "node_id": "I_kwDOBm6k_c5GFrty", "number": 1680, "title": "Consider simplifying permissions for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2022-03-21T20:17:29Z", "updated_at": "2022-03-21T20:17:29Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Permission checks right now can express one of three opinions:\r\n\r\n- `False` means \"so not grant this permisson\"\r\n- `True` means \"grant this permission\"\r\n- `None` means \"I have no opinion\"\r\n\r\nBut... there's also a concept of a \"default\" for a given permission check, which might be `False` or `True`.\r\n\r\nI worry this is too complicated. Could this be simplified before 1.0? 
In particular, the default concept.\r\n\r\nSee also:\r\n- #1676 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1680/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1177101697, "node_id": "I_kwDOBm6k_c5GKSWB", "number": 1681, "title": "Potential bug in numeric handling where_clause for filters", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-22T17:43:50Z", "updated_at": "2022-03-22T17:49:09Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Note that Datasette does already have special logic to convert parameters to integers for numeric comparisons like `>`:\r\n>\r\n> https://github.com/simonw/datasette/blob/c4c9dbd0386e46d2bf199f0ed34e4895c98cb78c/datasette/filters.py#L203-L212\r\n> \r\n> Though... it looks like there's a bug in that? It doesn't account for `float` values - `\"3.5\".isdigit()` returns `False` - probably for the best, because `int(3.5)` would break that value anyway.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1671#issuecomment-1075432283_\r\n\r\n(A short sketch illustrating the `isdigit()` gap appears at the end of this document.)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1681/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1179998071, "node_id": "I_kwDOBm6k_c5GVVd3", "number": 1684, "title": "Mechanism for disabling faceting on large tables only", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-24T20:06:11Z", "updated_at": "2022-03-24T20:13:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Forest turned off faceting on https://labordata.bunkum.us/ because it was causing performance problems on some of the huge tables - but it would be nice if it could still be an option on smaller tables such as https://labordata.bunkum.us/voluntary_recognitions-4421085/voluntary_recognitions\r\n\r\nOne option: a new setting that automatically disables faceting (and facet suggestion) for tables that have either more than X rows or that are so big that the count could not be completed within the time limit.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1684/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1181037277, "node_id": "I_kwDOBm6k_c5GZTLd", "number": 1686, "title": "heroku bails if app name specified in datasette publish is the same as an existing app", "user": {"value": 2115933, "label": "tlongers"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-25T17:10:34Z", "updated_at": "2022-03-25T17:10:34Z", 
"closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Seem that `heroku` does not accept an app overwrite triggered by specifying the app name using `datasette publish`, as below:\r\n\r\n```\r\ndatasette publish heroku some.db --name \"jazzy-name\" \r\n```\r\n\r\nThe resulting error has the below traceback:\r\n\r\n```\r\nCreating jazzy-name... !\r\n \u25b8 Name jazzy-name is already taken\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/bin/datasette\", line 33, in \r\n sys.exit(load_entry_point('datasette==0.60.1', 'console_scripts', 'datasette')())\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/datasette/publish/heroku.py\", line 127, in heroku\r\n create_output = check_output(cmd).decode(\"utf8\")\r\n File \"/opt/homebrew/Cellar/python@3.10/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py\", line 420, in check_output\r\n return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,\r\n File \"/opt/homebrew/Cellar/python@3.10/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py\", line 524, in run\r\n raise CalledProcessError(retcode, process.args,\r\nsubprocess.CalledProcessError: Command '['heroku', 'apps:create', 'jazzy-name', '--json']' returned non-zero exit status 1.\r\n```\r\n\r\nIt's a solid failsafe, but does `datasette publish` have a way to force an overwrite?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1686/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1181364043, "node_id": "I_kwDOBm6k_c5Gai9L", "number": 1687, "title": "Make show_json.html or a similar mechanism stable for plugins", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-25T23:42:45Z", "updated_at": "2022-03-25T23:42:45Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I used `show_json.html` in the new `datasette-packages` plugin, which means it will break if that template changes:\r\n- https://github.com/simonw/datasette-packages/issues/3\r\n\r\nIt would be useful if it (or something like it) was documented and stable for plugins to 
use.\r\n\r\nAlso relevant:\r\n- #878", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1687/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1182227211, "node_id": "I_kwDOBm6k_c5Gd1sL", "number": 1692, "title": "[plugins][feature request]: Support additional script tag attributes when loading custom JS", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-27T01:16:03Z", "updated_at": "2022-03-30T06:14:51Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "## Motivation\r\n\r\n- The build system for my new [plugin](https://github.com/hydrosquall/datasette-nteract-data-explorer) has two output JS files, one for browsers that support ES modules, one for browsers that don't. At present, I'm only passing one of them into Datasette.\r\n- I'd like to specify the non-es-module script as a fallback for older browsers. I don't want to load it by default, because browsers will only need one, and it's heavy, so for now I'm only supporting modern browsers. \r\n\r\nTo be able to support legacy browsers without slowing down users with modern browsers, I would like to be able to set additional HTML attributes on the fallback script tag: `nomodule` and `defer`. My injected scripts should look something like this:\r\n\r\n```html\r\n<script type=\"module\" src=\"/index.my-es-module-bundle.js\"></script>\r\n<script src=\"/index.my-legacy-fallback-bundle.js\" nomodule defer></script>\r\n```\r\n\r\n## Proposal\r\n\r\nTo achieve this, I propose additional optional properties for the API accepted by the `extra_js_urls` hook and the custom JS field in `metadata.json` [described here](https://docs.datasette.io/en/stable/custom_templates.html#custom-css-and-javascript). 
\r\n\r\nUnder this API, I'd write something like this to get the above HTML rendered in Datasette.\r\n\r\n```json\r\n{\r\n \"extra_js_urls\": [\r\n {\r\n \"url\": \"/index.my-es-module-bundle.js\",\r\n \"module\": true\r\n },\r\n {\r\n \"url\": \"/index.my-legacy-fallback-bundle.js\",\r\n \"nomodule\": \"\",\r\n \"defer\": true\r\n }\r\n ]\r\n}\r\n```\r\n\r\n## Resources\r\n\r\n- [MDN on the script tag](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script)\r\n - There may be other properties that could be added that are potentially valuable, like `async` or `referrerpolicy`, but I don't have an immediate need for those.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1692/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1182141761, "node_id": "I_kwDOBm6k_c5Gdg1B", "number": 1690, "title": "Idea: `datasette.set_actor_cookie(response, actor)`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-26T22:41:52Z", "updated_at": "2022-03-26T22:43:00Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just wrote this code in a plugin and it felt like it could benefit from an abstraction: https://github.com/simonw/datasette-auth0/blob/152e6eb21e96e9b73bd9c205f9749a1297d0ef0b/datasette_auth0/__init__.py#L79-L92\r\n\r\n```python\r\n redirect_response = Response.redirect(\"/\")\r\n expires_at = int(time.time()) + (24 * 60 * 60)\r\n redirect_response.set_cookie(\r\n \"ds_actor\",\r\n datasette.sign(\r\n {\r\n \"a\": profile_response.json(),\r\n \"e\": baseconv.base62.encode(expires_at),\r\n },\r\n \"actor\",\r\n ),\r\n )\r\n return redirect_response\r\n```\r\n\r\n(A sketch of one possible shape for this helper appears at the end of this document.)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1690/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1185868354, "node_id": "I_kwDOBm6k_c5GrupC", "number": 1695, "title": "Option to un-filter facet not shown for `?col__exact=value`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-30T04:44:02Z", "updated_at": "2022-03-30T04:46:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spotted this on a page with `COUNTY__exact=Lee` in the URL:\r\n\r\n![CleanShot 2022-03-29 at 21 41 46@2x](https://user-images.githubusercontent.com/9599/160752849-a9039343-3770-4655-920b-f19e25687a57.png)\r\n\r\nWith `COUNTY=Lee` you get this instead:\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1695/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1186696202, "node_id": 
"I_kwDOBm6k_c5Gu4wK", "number": 1696, "title": "Show foreign key label when filtering", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-30T16:18:54Z", "updated_at": "2023-01-29T20:56:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "For example here:\r\n\r\n\"image\"\r\n\r\n3 corresponds to \"Human Related: Other\" - it would be neat to display this in this area of the page somehow.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1696/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1193090967, "node_id": "I_kwDOBm6k_c5HHR-X", "number": 1699, "title": "Proposal: datasette query", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2022-04-05T12:36:43Z", "updated_at": "2022-04-11T01:32:12Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I started sketching out a plugin to add a `datasette query` subcommand to export data from the command line. This is based on discussions in #1356 and #1605. Before I get too far down this rabbit hole, I figure it's worth getting some feedback here (unless this should happen in `Discussions`). Here's what I'm thinking:\r\n\r\nAt its most basic, it will write the results of a query to STDOUT.\r\n\r\n```sh\r\ndatasette query -d data.db 'select * from data' > results.json\r\n```\r\n\r\nThis isn't much improvement over using [sqlite-utils](https://github.com/simonw/sqlite-utils). To make better use of datasette and its ecosystem, run `datasette query` using a canned query defined in a `metadata.yml` file.\r\n\r\nFor example, using the metadata file from [alltheplaces-datasette](https://github.com/eyeseast/alltheplaces-datasette/blob/main/metadata.yml):\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query -d alltheplaces.db -m metadata.yml count_by_spider\r\n```\r\n\r\nThat query would be good to get as CSV, and we can auto-discover metadata and databases in the current directory:\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query count_by_spider -f csv\r\n```\r\n\r\nIn this case, `count_by_spider` is a canned query defined on the `alltheplaces` database. If the same query is defined on multiple databases or its otherwise unclear which database `query` should use, pass the `-d` or `--database` option.\r\n\r\nIf a query takes parameters, I can pass them in at runtime, using the `--param` or `-p` option:\r\n\r\n```sh\r\ndatasette query -d data.db -p value something 'select * from neighborhoods where some_column = :value'\r\n```\r\n\r\nI'm very interested in feedback on this, including whether it should be a plugin or in Datasette core. 
(I don't have a strong opinion about this, but I'm prototyping it as a plugin to start.)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1699/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1196327155, "node_id": "I_kwDOBm6k_c5HToDz", "number": 1702, "title": "Be more consistent with column quoting", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-04-07T16:59:20Z", "updated_at": "2022-04-07T16:59:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This tutorial made me notice that Datasette is pretty inconsistent with how column quoting works: https://datasette.io/tutorials/learn-sql\r\n\r\nIt has examples of each of `\"table_name\"` and `[table_name]` and `table_name`, and it uses single quoted values too.\r\n\r\nDatasette should generate SQL as consistently as possible to support learners.\r\n\r\nThat tutorial should also provide a tiny bit of extra information about what's going on here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1702/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1197925865, "node_id": "I_kwDOBm6k_c5HZuXp", "number": 1704, "title": "File PRs against incompatible plugins pinning to datasette<1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2022-04-08T23:15:30Z", "updated_at": "2022-04-08T23:15:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "As part of the preparation for the 1.0 release, test all existing known plugins against the alpha.\r\n\r\nFor any that break, submit a PR suggesting they pin to a version <1.0 - and include a link to the documentation on how to upgrade the plugin for 1.0.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1704/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1197926598, "node_id": "I_kwDOBm6k_c5HZujG", "number": 1705, "title": "How to upgrade your plugin for 1.0 documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2022-04-08T23:16:47Z", "updated_at": "2022-12-13T05:29:05Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Among other things, needed by:\r\n- #1704", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/1705/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1198822563, "node_id": "I_kwDOBm6k_c5HdJSj", "number": 1706, "title": "[feature] immutable mode for a directory, not just individual sqlite file", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-04-10T00:50:57Z", "updated_at": "2022-12-09T19:11:40Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "## Motivation\r\n\r\n- I have a directory of sqlite databases\r\n- I'd like to use immutable mode when opening them for better performance [docs](https://docs.datasette.io/en/0.54/performance.html#immutable-mode)\r\n- Currently using this flag throws the following error\r\n\r\n IsADirectoryError: [Errno 21] Is a directory: '/name-of-directory'\r\n\r\n## Proposal\r\n\r\nImmutable flag works for both single files and directories\r\n\r\n datasette -i /folder-of-sqlite-files", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1706/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1200224939, "node_id": "I_kwDOBm6k_c5Hifqr", "number": 1707, "title": "[feature] expanded detail page", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-04-11T16:29:17Z", "updated_at": "2022-04-11T16:33:00Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Right now, if click on the detail page for a row you get the info for the row and links to related tables:\r\n![Screenshot 2022-04-11 at 12-27-26 lm20 filing](https://user-images.githubusercontent.com/536941/162786802-90ac1a71-4624-47c4-ae55-b783f4f6c92d.png)\r\n\r\nIt would be very cool if there was an option to expand the rows of the related tables from within this detail view.\r\n\r\nIf you had that then datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidate detail view about what you know about that entity.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1707/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1200649124, "node_id": "I_kwDOBm6k_c5HkHOk", "number": 1708, "title": "Datasette 1.0 alpha upcoming release notes", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 2, "created_at": "2022-04-11T22:57:12Z", "updated_at": "2022-12-13T05:29:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I'm going to try writing the release notes first, to see if that helps unblock me.\r\n\r\n# \u26a0\ufe0f Any release notes in this issue are a draft, and 
should not be treated as the real thing \u26a0\ufe0f ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1708/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1200649502, "node_id": "I_kwDOBm6k_c5HkHUe", "number": 1709, "title": "Redesigned JSON API with ?_extra= parameters", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2022-04-11T22:57:49Z", "updated_at": "2022-12-13T05:29:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This will be the single biggest breaking change for the 1.0 release.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1709/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1200650491, "node_id": "I_kwDOBm6k_c5HkHj7", "number": 1711, "title": "Template context powered entirely by the JSON API format", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2022-04-11T22:59:27Z", "updated_at": "2022-12-13T05:29:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette 1.0 will have a stable template context. I'm going to achieve this by refactoring the templates to work only with keys returned by the API (or some of its extras) - then the API documentation will double up as template documentation.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1711/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1203943272, "node_id": "I_kwDOBm6k_c5Hwrdo", "number": 1713, "title": "Datasette feature for publishing snapshots of query results", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-04-14T01:42:00Z", "updated_at": "2022-07-04T05:16:35Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://twitter.com/simonw/status/1514392335718645760\r\n\r\n> Maybe [@datasetteproj](https://twitter.com/datasetteproj) should grow a feature that lets you cache the results of a query and give that snapshot a stable permalink\r\n>\r\n> A plugin that publishes the JSON output of a query to an S3 bucket would be pretty neat... 
especially if it could also be configured to re-publish the results on a schedule\r\n\r\nA lot of people said they would find this useful.\r\n\r\nProbably going to build this as a plugin.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1713/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1221849746, "node_id": "I_kwDOBm6k_c5I0_KS", "number": 1732, "title": "Custom page variables aren't decoded", "user": {"value": 52649, "label": "tannewt"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-04-30T14:55:46Z", "updated_at": "2022-05-03T01:50:45Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a page `templates/filer/{filer_id}.html`. It uses `filer_id` in a `sql()` call to fetch data. With 0.61.1 this no longer works because the spaces in IDs aren't preserved. Instead, the escaped version is passed into the template and the id isn't present in my db.\r\n\r\nDatasette should unescape the URL component before passing it into the template.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1732/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1216436131, "node_id": "I_kwDOBm6k_c5IgVej", "number": 1721, "title": "Implement plugin hooks: `register_table_extras`, `register_row_extras`, `register_query_extras`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 0, "created_at": "2022-04-26T20:21:49Z", "updated_at": "2022-12-13T05:29:07Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Designed in:\r\n- #1720\r\n\r\nPart of:\r\n- #262\r\n- #1709", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1721/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1216479167, "node_id": "I_kwDOBm6k_c5Igf-_", "number": 1722, "title": "`db.primary_keys()` and `db.table_columns()` don't show up in traces", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-04-26T21:08:36Z", "updated_at": "2022-04-26T21:08:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Noticed this while working on:\r\n- #1715\r\n\r\nThis code here isn't showing up in traces: https://github.com/simonw/datasette/blob/579f59dcec43a91dd7d404e00b87a00afd8515f2/datasette/views/table.py#L218-L220\r\n\r\nBecause those functions don't use the regular trace-instrumented `db.execute()` code path - they work directly against a connection instead: 
https://github.com/simonw/datasette/blob/579f59dcec43a91dd7d404e00b87a00afd8515f2/datasette/utils/__init__.py#L610-L626\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1722/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1216622905, "node_id": "I_kwDOBm6k_c5IhDE5", "number": 1725, "title": "Performance question - what is happening in this gap?", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-04-27T00:21:11Z", "updated_at": "2022-04-27T00:21:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Trace from https://latest-with-plugins.datasette.io/github/commits?_facet=repo&_trace=1&_facet=committer\r\n\r\n![CleanShot 2022-04-26 at 17 20 06@2x](https://user-images.githubusercontent.com/9599/165413811-db2cd599-2acc-46ce-b9c2-f9bc45b879e9.png)\r\n\r\nWhat's going on in that gap? Can I improve the tracing output to show some non-SQL queries to figure that out?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1725/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1217014076, "node_id": "I_kwDOBm6k_c5Iiik8", "number": 1726, "title": "Security page in the documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-04-27T08:43:30Z", "updated_at": "2022-04-27T08:43:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "A page talking about how to run Datasette securely, and security concerns to take into account.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1726/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1217759117, "node_id": "I_kwDOBm6k_c5IlYeN", "number": 1727, "title": "Research: demonstrate if parallel SQL queries are worthwhile", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 32, "created_at": "2022-04-27T18:54:21Z", "updated_at": "2022-09-26T14:48:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I added parallel SQL query execution here:\r\n- https://github.com/simonw/datasette/issues/1723\r\n\r\nMy hunch is that this will take advantage of multiple cores, since Python's `sqlite3` module releases the GIL once a query is passed to SQLite.\r\n\r\nI'd really like to prove this is the case though. Just not sure how to do it!\r\n\r\nLarger question: is this performance optimization actually improving performance at all? 
Under what circumstances is it worthwhile?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1727/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1219385669, "node_id": "I_kwDOBm6k_c5IrllF", "number": 1729, "title": "Implement ?_extra and new API design for TableView", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 12, "created_at": "2022-04-28T22:28:14Z", "updated_at": "2022-12-13T05:29:07Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Part of:\r\n- #262\r\n- #1518", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1729/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1219398983, "node_id": "I_kwDOBm6k_c5Iro1H", "number": 1730, "title": "SQL tracing should much more closely track the SQL query execution", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-04-28T22:41:04Z", "updated_at": "2022-04-28T22:41:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In #1727 I realized that the SQL tracing was measuring a whole bunch of stuff outside of the SQL query itself.\r\n\r\nI started experimenting with this fix for that but it didn't work - I got back an empty JSON array of traces for some reason:\r\n\r\n```diff\r\ndiff --git a/datasette/database.py b/datasette/database.py\r\nindex ba594a8..d7f9172 100644\r\n--- a/datasette/database.py\r\n+++ b/datasette/database.py\r\n@@ -7,7 +7,7 @@ import sys\r\n import threading\r\n import uuid\r\n \r\n-from .tracer import trace\r\n+from .tracer import trace, trace_child_tasks\r\n from .utils import (\r\n detect_fts,\r\n detect_primary_keys,\r\n@@ -207,30 +207,31 @@ class Database:\r\n time_limit_ms = custom_time_limit\r\n \r\n with sqlite_timelimit(conn, time_limit_ms):\r\n- try:\r\n- cursor = conn.cursor()\r\n- cursor.execute(sql, params if params is not None else {})\r\n- max_returned_rows = self.ds.max_returned_rows\r\n- if max_returned_rows == page_size:\r\n- max_returned_rows += 1\r\n- if max_returned_rows and truncate:\r\n- rows = cursor.fetchmany(max_returned_rows + 1)\r\n- truncated = len(rows) > max_returned_rows\r\n- rows = rows[:max_returned_rows]\r\n- else:\r\n- rows = cursor.fetchall()\r\n- truncated = False\r\n- except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:\r\n- if e.args == (\"interrupted\",):\r\n- raise QueryInterrupted(e, sql, params)\r\n- if log_sql_errors:\r\n- sys.stderr.write(\r\n- \"ERROR: conn={}, sql = {}, params = {}: {}\\n\".format(\r\n- conn, repr(sql), params, e\r\n+ with trace(\"sql\", database=self.name, sql=sql.strip(), params=params):\r\n+ try:\r\n+ cursor = conn.cursor()\r\n+ cursor.execute(sql, params if params is not None else {})\r\n+ max_returned_rows = self.ds.max_returned_rows\r\n+ if max_returned_rows == page_size:\r\n+ 
max_returned_rows += 1\r\n+ if max_returned_rows and truncate:\r\n+ rows = cursor.fetchmany(max_returned_rows + 1)\r\n+ truncated = len(rows) > max_returned_rows\r\n+ rows = rows[:max_returned_rows]\r\n+ else:\r\n+ rows = cursor.fetchall()\r\n+ truncated = False\r\n+ except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:\r\n+ if e.args == (\"interrupted\",):\r\n+ raise QueryInterrupted(e, sql, params)\r\n+ if log_sql_errors:\r\n+ sys.stderr.write(\r\n+ \"ERROR: conn={}, sql = {}, params = {}: {}\\n\".format(\r\n+ conn, repr(sql), params, e\r\n+ )\r\n )\r\n- )\r\n- sys.stderr.flush()\r\n- raise\r\n+ sys.stderr.flush()\r\n+ raise\r\n \r\n if truncate:\r\n return Results(rows, truncated, cursor.description)\r\n@@ -238,9 +239,8 @@ class Database:\r\n else:\r\n return Results(rows, False, cursor.description)\r\n \r\n- with trace(\"sql\", database=self.name, sql=sql.strip(), params=params):\r\n- results = await self.execute_fn(sql_operation_in_thread)\r\n- return results\r\n+ with trace_child_tasks():\r\n+ return await self.execute_fn(sql_operation_in_thread)\r\n \r\n @property\r\n def size(self):\r\n```\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1727#issuecomment-1111602802_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1730/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1237586379, "node_id": "I_kwDOBm6k_c5JxBHL", "number": 1742, "title": "?_trace=1 fails with datasette-geojson for some reason", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-05-16T19:06:05Z", "updated_at": "2022-05-16T19:42:13Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "view-source:https://calands.datasettes.com/calands/CPAD_2020a_SuperUnits.geojson?_sort=id&id__exact=4&_labels=on&_trace=1 is showing me a blank page.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1742/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1237871948, "node_id": "I_kwDOBm6k_c5JyG1M", "number": 1743, "title": "`datasette.utils.to_css_class()` should be a documented internal", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-05-16T23:57:26Z", "updated_at": "2022-05-16T23:57:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Because I'm using it in this plugin:\r\n- https://github.com/simonw/datasette-upload-dbs/issues/1", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1743/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
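The `str.isdigit()` gap flagged in issue #1681 above is easy to demonstrate. Below is a minimal sketch, not Datasette's actual `filters.py` code: it assumes a hypothetical `coerce_numeric()` helper and shows why integer-only detection misses floats and negative numbers before a numeric comparison such as `>`.

```python
# Minimal sketch for issue #1681 - not Datasette's actual filters.py code.
# str.isdigit() only recognizes unsigned integer strings, so values like
# "3.5" and "-3" are never converted before a numeric comparison.
def coerce_numeric(value: str):
    """Hypothetical helper: return an int or float for numeric-looking
    strings, otherwise return the string unchanged."""
    if value.isdigit():  # True for "3"; False for "3.5" and "-3"
        return int(value)
    try:
        return float(value)  # covers "3.5", "-3", "1e6"
    except ValueError:
        return value


assert "3".isdigit() and not "3.5".isdigit() and not "-3".isdigit()
assert coerce_numeric("3") == 3
assert coerce_numeric("3.5") == 3.5
assert coerce_numeric("-3") == -3.0
assert coerce_numeric("apple") == "apple"
```

Run as-is, the asserts pass; whether Datasette should coerce this aggressively is exactly the open question in that issue.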
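Issue #1690 above proposes a `datasette.set_actor_cookie(response, actor)` abstraction. Here is a minimal sketch of one possible shape, derived from the plugin code quoted in that issue; the standalone function name, the `expire_after` default, and the `datasette.utils.baseconv` import path are assumptions, not a committed Datasette API.

```python
# Sketch of the helper proposed in issue #1690 - names and defaults are
# assumptions based on the quoted datasette-auth0 code, not a shipped API.
import time

from datasette.utils import baseconv  # assumed location of the base62 helper


def set_actor_cookie(datasette, response, actor, expire_after=24 * 60 * 60):
    # Sign the actor payload together with a base62-encoded expiry timestamp,
    # then store it in the ds_actor cookie - mirroring the quoted plugin code.
    expires_at = int(time.time()) + expire_after
    response.set_cookie(
        "ds_actor",
        datasette.sign(
            {"a": actor, "e": baseconv.base62.encode(expires_at)},
            "actor",
        ),
    )
```

With a helper like this, the plugin code quoted in the issue would collapse to `set_actor_cookie(datasette, redirect_response, profile_response.json())` followed by returning the response.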