{"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-917642487", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 917642487, "node_id": "IC_kwDOBm6k_c42shz3", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-12T14:03:09Z", "updated_at": "2021-09-12T14:03:09Z", "author_association": "CONTRIBUTOR", "body": "haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? Is it documented there?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-918621705", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 918621705, "node_id": "IC_kwDOBm6k_c42wQ4J", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-09-13T22:17:17Z", "updated_at": "2021-09-13T22:17:17Z", "author_association": "CONTRIBUTOR", "body": "> haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? Is it documented there?\r\n\r\nNot certain, but I think tests in CI run on Ubuntu and don't appear to install any additional Sqlite-related dependencies, and so my guess is the version of Sqlite installed by default on Ubuntu has the `SQLITE_ENABLE_FTS3_PARENTHESIS` option enabled and so doesn't run into this issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1473#issuecomment-922363640", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1473", "id": 922363640, "node_id": "IC_kwDOBm6k_c42-ib4", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-09-18T19:45:47Z", "updated_at": "2021-09-18T19:45:47Z", "author_association": "CONTRIBUTOR", "body": "An update, if I remove the `img` tag and replace it with the text, \"Safer or Toxic?\" it links to the right place.\r\n\r\nAlso, if I keep things exactly as they are, and it improperly, but consistently goes to the `undefined` page, on THAT 404 page, a click on the image properly clicks through to the www.SaferOrToxic.org page.\r\n\r\nWeird stuff.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 999902754, "label": "base logo link visits `undefined` rather than href url"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1473#issuecomment-922394999", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1473", "id": 922394999, "node_id": "IC_kwDOBm6k_c42-qF3", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-09-19T00:44:39Z", "updated_at": "2021-09-19T00:45:32Z", "author_association": "CONTRIBUTOR", "body": "I replaced:\r\n```\r\n\r\n\r\n\r\n```\r\nwith:\r\n```\r\n\r\n```\r\n\r\nI'd still love to know what caused this (and how to troubleshoot to figure it out), so I'll leave it open for a bit, but I do have a functional logo linking to the Hugo home page, at least locally. I'll likely push tomorrow.\r\n\r\n(Before trying this, I tried to apply a background image to the `a` tag. That didn't work.)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 999902754, "label": "base logo link visits `undefined` rather than href url"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-1268613335", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 1268613335, "node_id": "IC_kwDOBm6k_c5LnYDX", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-05T15:45:49Z", "updated_at": "2022-10-05T15:45:49Z", "author_association": "CONTRIBUTOR", "body": "running into this as i continue to grow my labor data warehouse.\r\n\r\nHere a CloudRun PM says the container size should **not** count against memory: https://stackoverflow.com/a/56570717", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-1268629159", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 1268629159, "node_id": "IC_kwDOBm6k_c5Lnb6n", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-05T16:00:55Z", "updated_at": "2022-10-05T16:00:55Z", "author_association": "CONTRIBUTOR", "body": "as a next step, i'll fetch the docker image from the google registry, and see what memory and disk usage looks like when i run it locally.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-1269847461", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 1269847461, "node_id": "IC_kwDOBm6k_c5LsFWl", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-06T11:21:49Z", "updated_at": "2022-10-06T11:21:49Z", "author_association": "CONTRIBUTOR", "body": "thanks @simonw, i'll spend a little more time trying to figure out why this isn't working on cloudrun, and then will flip over to fly if i can't.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-1271101072", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 1271101072, "node_id": "IC_kwDOBm6k_c5Lw3aQ", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T04:39:10Z", "updated_at": "2022-10-07T04:39:10Z", "author_association": "CONTRIBUTOR", "body": "switching from `immutable=1` to `mode=ro` completely addressed this. see https://github.com/simonw/datasette/issues/1836#issuecomment-1271100651 for details.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-938171377", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 938171377, "node_id": "IC_kwDOBm6k_c4361vx", "user": {"value": 110420, "label": "ghing"}, "created_at": "2021-10-07T21:33:12Z", "updated_at": "2021-10-07T21:33:12Z", "author_association": "CONTRIBUTOR", "body": "Thanks for the reply @simonw. What services have you had better success with than Cloud Run for larger database?\r\n\r\nAlso, what about my issue description makes you think there may be a workaround?\r\n\r\nIs there any instrumentation I could add to see at which point in the deploy the memory usage spikes? Should I be able to see this whether it's running under Docker locally, or do you suspect this is Cloud Run-specific?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-947196177", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 947196177, "node_id": "IC_kwDOBm6k_c44dRER", "user": {"value": 110420, "label": "ghing"}, "created_at": "2021-10-20T00:05:10Z", "updated_at": "2021-10-20T00:05:10Z", "author_association": "CONTRIBUTOR", "body": "I was looking through the Dockerfile-generation code to see if there was anything that would cause memory usage to be a lot during deployment. \r\n\r\nI noticed that the Dockerfile [runs `datasette --inspect`](https://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L354). Is it possible that this is using a lot of memory usage?\r\n\r\nOr would that come into play when running `gcloud builds submit`, not when it's actually deployed?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-947203725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 947203725, "node_id": "IC_kwDOBm6k_c44dS6N", "user": {"value": 110420, "label": "ghing"}, "created_at": "2021-10-20T00:21:54Z", "updated_at": "2021-10-20T00:21:54Z", "author_association": "CONTRIBUTOR", "body": "This StackOverflow post, [sqlite - Cloud Run: Why does my instance need so much RAM?](https://stackoverflow.com/questions/59812405/cloud-run-why-does-my-instance-need-so-much-ram), points to [this section of the Cloud Run docs](https://cloud.google.com/run/docs/troubleshooting) that says:\r\n\r\n> Note that the Cloud Run container instances run in an environment where the files written to the local filesystem count towards the available memory. This also includes any log files that are not written to /var/log/* or /dev/log.\r\n\r\nDoes datasette write any large files when starting? \r\n\r\nOr does the [`COPY` command in the Dockerfile](https://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L349) count as writing to the local filesystem?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1522#issuecomment-976117989", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1522", "id": 976117989, "node_id": "IC_kwDOBm6k_c46LmDl", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-11-23T03:00:34Z", "updated_at": "2021-11-23T03:00:34Z", "author_association": "CONTRIBUTOR", "body": "I tried deploying the most recent version of the Dockerfile in this thread ([link to comment](https://github.com/simonw/datasette/issues/1522#issuecomment-974605128)), and after trying a few different different combinations, I was only successful when I used `--no-cpu-throttling` (\"CPU Is always allocated\" in the UI)\r\n\r\nUsing this method, I got a very similar issue to you: The first time I'd load the site I'd get a 503. But after that first load, I didn't get the issue again. It would re-occur if the service started from cold boot. \r\n\r\nI suspect this is a race condition in the supervisord configuration. The errors I got were the same `Connection refused: AH00957: http: attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed`, and that seems to indicate that `datasette` hadn't yet started. \r\n\r\nLooking at the order of logs getting back, the processes reported successfully completing loading after the first 503 was returned, so that makes me think race condition. \r\n\r\nI can replicate this locally, if I `docker run` and request `localhost:5000/prefix` _before_ I get the `datasette entered RUNNING state` message. Cloud Run wakes up when requests are received, so this test would semi-replicate that, but local docker would be the equivalent of a persistent process, hence it doesn't normally exhibit the same issues.\r\n\r\nUnfortunately supervisor/supervisor issue 122 (not linking as to prevent cross-project link spam) seems to say that dependency chaining is a feature that's been asked for for a long time, but hasn't been implemented. You could try some suggestions in that thread. ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058896236, "label": "Deploy a live instance of demos/apache-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1528#issuecomment-1151887842", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1528", "id": 1151887842, "node_id": "IC_kwDOBm6k_c5EqGni", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-06-10T03:23:08Z", "updated_at": "2022-06-10T03:23:08Z", "author_association": "CONTRIBUTOR", "body": "I just put together a version of this in a plugin: https://github.com/eyeseast/datasette-query-files. Happy to have any feedback.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1060631257, "label": "Add new `\"sql_file\"` key to Canned Queries in metadata?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1528#issuecomment-975955589", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1528", "id": 975955589, "node_id": "IC_kwDOBm6k_c46K-aF", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2021-11-22T22:00:30Z", "updated_at": "2021-11-22T22:00:30Z", "author_association": "CONTRIBUTOR", "body": "Oh, another thing to consider: I believe this would be the first `\"_file\"` key in datasette's metadata, compared to other `\"_url\"` keys like `\"license_url\"` or `\"about_url\"`. Not too sure what considerations to include with this (ex should missing files cause Datasette to stop before starting, should build scripts bundle these sql files somewhere during `datasette package`, etc.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1060631257, "label": "Add new `\"sql_file\"` key to Canned Queries in metadata?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1547#issuecomment-997511968", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1547", "id": 997511968, "node_id": "IC_kwDOBm6k_c47dNMg", "user": {"value": 127565, "label": "wragge"}, "created_at": "2021-12-20T01:21:59Z", "updated_at": "2021-12-20T01:21:59Z", "author_association": "CONTRIBUTOR", "body": "I've installed the alpha version but get an error when starting up Datasette:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/tim/.pyenv/versions/stock-exchange/bin/datasette\", line 5, in \r\n from datasette.cli import cli\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/cli.py\", line 15, in \r\n from .app import Datasette, DEFAULT_SETTINGS, SETTINGS, SQLITE_LIMIT_ATTACHED, pm\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/app.py\", line 31, in \r\n from .views.database import DatabaseDownload, DatabaseView\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/views/database.py\", line 25, in \r\n from datasette.plugins import pm\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/plugins.py\", line 29, in \r\n mod = importlib.import_module(plugin)\r\n File \"/Users/tim/.pyenv/versions/3.8.5/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/filters.py\", line 9, in \r\n @hookimpl(specname=\"filters_from_request\")\r\nTypeError: __call__() got an unexpected keyword argument 'specname'\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1076388044, "label": "Writable canned queries fail to load custom templates"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1547#issuecomment-997519202", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1547", "id": 997519202, "node_id": "IC_kwDOBm6k_c47dO9i", "user": {"value": 127565, "label": "wragge"}, "created_at": "2021-12-20T01:36:58Z", "updated_at": "2021-12-20T01:36:58Z", "author_association": "CONTRIBUTOR", "body": "Yep, that works -- thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1076388044, "label": "Writable canned queries fail to load custom templates"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1549#issuecomment-1087428593", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1549", "id": 1087428593, "node_id": "IC_kwDOBm6k_c5A0Nfx", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-04-04T11:17:13Z", "updated_at": "2022-04-04T11:17:13Z", "author_association": "CONTRIBUTOR", "body": "another way to get the behavior of downloading the file is to use the download attribute of the anchor tag\r\n\r\nhttps://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#attr-download", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1077620955, "label": "Redesign CSV export to improve usability"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1549#issuecomment-991754237", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1549", "id": 991754237, "node_id": "IC_kwDOBm6k_c47HPf9", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-11T19:14:39Z", "updated_at": "2021-12-11T19:14:39Z", "author_association": "CONTRIBUTOR", "body": "that option is not available on [custom queries](https://labordata.bunkum.us/odpr-962a140?sql=with+local_union_filings+as+%28%0D%0A++select+*+from+lm_data+%0D%0A++where%0D%0A++++yr_covered+%3E+cast%28strftime%28%27%25Y%27%2C+%27now%27%2C+%27-5+years%27%29+as+int%29%0D%0A++++and+desig_name+%3D+%27LU%27%0D%0A++order+by+yr_covered+desc%0D%0A%29%2C%0D%0Amost_recent_filing+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from+local_union_filings%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++*%0D%0Afrom%0D%0A++most_recent_filing%0D%0Awhere%0D%0A++next_election+%3E%3D+strftime%28%27%25Y-%25m%27%2C+%27now%27%29%0D%0A++and+next_election+%3C+strftime%28%27%25Y-%25m%27%2C+%27now%27%2C+%27%2B1+year%27%29%0D%0Aorder+by%0D%0A++members+desc%3B).\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1077620955, "label": "Redesign CSV export to improve usability"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1552#issuecomment-995296725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1552", "id": 995296725, "node_id": "IC_kwDOBm6k_c47UwXV", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2021-12-15T23:29:32Z", "updated_at": "2021-12-15T23:29:32Z", "author_association": "CONTRIBUTOR", "body": "@simonw thank you for your fast answer and your guidance!\r\n\r\nWhile digging into the code, I found an undocumented way of doing it:\r\n\r\n```yaml\r\nfacets: [\"Facet for a column\", {\"array\": \"Facet for an array\"}]\r\n```\r\n\r\nThe only remaining problem with that solution is here: https://github.com/simonw/datasette/blob/250db8192cb8aba5eb8cd301ccc2a49525bc3d24/datasette/facets.py#L33\r\n\r\nWe have:\r\n\r\n```python\r\ntype, metadata_config = metadata_config.items()[0]\r\n```\r\n\r\nBut it requires to cast the `dict_items` as a list prior to access the first element:\r\n\r\n```python\r\ntype, metadata_config = list(metadata_config.items())[0]\r\n```\r\n\r\nI guess it's an unspotted bug? (I mean, independently of the facets-with-arrays issue.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1078702875, "label": "Allow to set `facets_array` in metadata (like current `facets`)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1552#issuecomment-996229007", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1552", "id": 996229007, "node_id": "IC_kwDOBm6k_c47YT-P", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2021-12-16T22:04:39Z", "updated_at": "2021-12-16T22:04:39Z", "author_association": "CONTRIBUTOR", "body": "Wow, that was fast, thank you so much @simonw !\r\n\r\n> I'm also not convinced that this configuration syntax is right. It's a bit weird having a `\"facets\"` list that can either by column-name-strings or `{\"type-of-facet\": \"column-name\"}` objects. Maybe there's a better design for this?\r\n\r\nI agree that it's not ideal, my initial naive approach was to detect if it's an array, like what is done here:\r\n\r\nhttps://github.com/simonw/datasette/blob/2c07327d23d9c5cf939ada9ba4091c1b8b2ba42d/datasette/facets.py#L312-L313\r\n\r\nBut it requires an extra query to determine the type, which is a bit problematic, especially for big tables I guess.\r\n\r\nTaking a look at #510, I wonder if a `facet_delimiter` should be defined for that kind of columns (that would help our team not to have an intermediary conversion step from `foo|bar` to `[\"foo\",\"bar\"]` for instance).\r\n\r\nTo be consistent with the `--extract-column` parameter, maybe an explicit casting/delimiter would be useful: `--set-column 'Foo:Array:|'`.\r\n\r\nThrowing a lot of ideas without knowing the big picture\u2026 but sometimes newcomers have superpowers :).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1078702875, "label": "Allow to set `facets_array` in metadata (like current `facets`)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1553#issuecomment-992986587", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1553", "id": 992986587, "node_id": "IC_kwDOBm6k_c47L8Xb", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-13T22:57:04Z", "updated_at": "2021-12-13T22:57:04Z", "author_association": "CONTRIBUTOR", "body": "would also be good if the header said the what the max row limit was", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1079111498, "label": "if csv export is truncated in non streaming mode set informative response header"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1553#issuecomment-993014772", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1553", "id": 993014772, "node_id": "IC_kwDOBm6k_c47MDP0", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-13T23:46:18Z", "updated_at": "2021-12-13T23:46:18Z", "author_association": "CONTRIBUTOR", "body": "these headers would also be relevant for json exports of custom queries", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1079111498, "label": "if csv export is truncated in non streaming mode set informative response header"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1561#issuecomment-997128712", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1561", "id": 997128712, "node_id": "IC_kwDOBm6k_c47bvoI", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-18T02:35:48Z", "updated_at": "2021-12-18T02:35:48Z", "author_association": "CONTRIBUTOR", "body": "interesting! i love this feature. this + full caching with cloudflare is really super!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1082765654, "label": "add hash id to \"_memory\" url if hashed url mode is turned on and crossdb is also turned on"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1581#issuecomment-1077047295", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1581", "id": 1077047295, "node_id": "IC_kwDOBm6k_c5AMm__", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-03-24T04:08:18Z", "updated_at": "2022-03-24T04:08:18Z", "author_association": "CONTRIBUTOR", "body": "this has been addressed by the datasette-hashed-urls plugin", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1089529555, "label": "when hashed urls are turned on, the _memory db has improperly long-lived cache expiry"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1583#issuecomment-1002825217", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1583", "id": 1002825217, "node_id": "IC_kwDOBm6k_c47xeYB", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-30T00:34:16Z", "updated_at": "2021-12-30T00:34:16Z", "author_association": "CONTRIBUTOR", "body": "if that is not desirable, it might be good to document that users might want to set up a lifecycle rule to automatically delete these build artifacts. something like https://stackoverflow.com/questions/59937542/can-i-delete-container-images-from-google-cloud-storage-artifacts-bucket", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1090810196, "label": "consider adding deletion step of cloudbuild artifacts to gcloud publish"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1591#issuecomment-1010947634", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1591", "id": 1010947634, "node_id": "IC_kwDOBm6k_c48QdYy", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-01-12T11:32:17Z", "updated_at": "2022-01-12T11:32:17Z", "author_association": "CONTRIBUTOR", "body": "Is it possible to parse things like `--ext-{plugin}-{arg} VALUE` ?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1100015398, "label": "Maybe let plugins define custom serve options?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/160#issuecomment-459915995", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/160", "id": 459915995, "node_id": "MDEyOklzc3VlQ29tbWVudDQ1OTkxNTk5NQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-02-02T00:43:16Z", "updated_at": "2019-02-02T00:58:20Z", "author_association": "CONTRIBUTOR", "body": "Do you have any simple working examples of how to use `--static`? Inspection of default served files suggests locations such as `http://example.com/-/static/app.css?0e06ee`.\r\n\r\nIf `datasette` is being proxied to `http://example.com/foo/datasette`, what form should arguments to `--static` take so that static files are correctly referenced?\r\n\r\nUse case is here: https://github.com/psychemedia/jupyterserverproxy-datasette-demo Trying to do a really simple `datasette` demo in MyBinder using jupyter-server-proxy.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 278208011, "label": "Ability to bundle and serve additional static files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1601#issuecomment-1016651485", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1601", "id": 1016651485, "node_id": "IC_kwDOBm6k_c48mN7d", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-01-19T16:39:03Z", "updated_at": "2022-01-19T16:39:03Z", "author_association": "CONTRIBUTOR", "body": "I think both of these are Spatialite specific. They get generated when you first initialize the extension. KNN is actually deprecated in favor of [KNN2](https://www.gaia-gis.it/fossil/libspatialite/wiki?name=KNN2), as I understand it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1105916061, "label": "Add KNN and data_licenses to hidden tables list"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1605#issuecomment-1016994329", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1605", "id": 1016994329, "node_id": "IC_kwDOBm6k_c48nhoZ", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-01-20T00:27:17Z", "updated_at": "2022-01-20T00:27:17Z", "author_association": "CONTRIBUTOR", "body": "Right now, I usually have a line in a Makefile like this:\r\n\r\n```make\r\ncombined.geojson: project.db\r\n pipenv run datasette project.db --get /project/combined.geojson \\\r\n --load-extension spatialite \\\r\n --setting sql_time_limit_ms 5000 \\\r\n --setting max_returned_rows 20000 \\\r\n -m metadata.yml > $@\r\n```\r\n\r\nThat all assumes I've loaded whatever I need into `project.db` and created a canned query called `combined` (and then uses `datasette-geojson` for geojson output). \r\n\r\nIt works, but as you can see, it's a lot to manage, a lot of boilerplate, and it wasn't obvious how to get there. If there's an error in the canned query, I get an HTML error page, so that's hard to debug. And it's only one query, so each output needs a line like this. Make isn't ideal, either, for that reason.\r\n\r\nThe thing I really liked with `datafreeze` was doing templated filenames. I have a project now where I need to export a bunch of litttle geojson files, based on queries, and it would be awesome to be able to do something like this:\r\n\r\n```yml\r\ndatabases:\r\n project:\r\n queries:\r\n boundaries:\r\n sql: \"SELECT * FROM boundaries\"\r\n filename: \"boundaries/{id}.geojson\"\r\n mode: \"item\"\r\n format: geojson\r\n```\r\n\r\nAnd then do:\r\n\r\n```sh\r\ndatasette freeze -m metadata.yml project.db\r\n```\r\n\r\nFor HTML export, maybe there's a `template` argument, or `format: template` or something. And that gets you a static site generator, kinda for free.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1108671952, "label": "Scripted exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1605#issuecomment-1018741262", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1605", "id": 1018741262, "node_id": "IC_kwDOBm6k_c48uMIO", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-01-21T18:05:09Z", "updated_at": "2022-01-21T18:05:09Z", "author_association": "CONTRIBUTOR", "body": "Thinking about this more, as well as #1356 and various other tickets related to output formats, I think there's a missing plugin hook for formatting results, separate from `register_output_renderer` (or maybe part of it, depending on #1101). \r\n\r\nRight now, as I understand it, getting output in any format goes through the normal view stack -- a table, a row or a query -- and so by the time `register_output_renderer` gets it, the results have already been truncated or paginated. What I'd want, I think, is to be able to register ways to format results independent of where those results are sent.\r\n\r\nIt's possible this could be done using [`conn.row_factory`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.row_factory) (maybe in the `prepare_connection` hook), but I'm not sure that's where it belongs.\r\n\r\nAnother option is some kind of registry of serializers, which `register_output_renderer` and other plugin hooks could use. What I'm trying to avoid here is writing a plugin that also needs plugins for formats I haven't thought of yet.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1108671952, "label": "Scripted exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1605#issuecomment-1018778667", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1605", "id": 1018778667, "node_id": "IC_kwDOBm6k_c48uVQr", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-01-21T19:00:01Z", "updated_at": "2022-01-21T19:00:01Z", "author_association": "CONTRIBUTOR", "body": "Let me know if you want help prototyping any of this, because I'm thinking about it and trying stuff out. Happy to be a sounding board, if it helps.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1108671952, "label": "Scripted exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1605#issuecomment-1331187551", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1605", "id": 1331187551, "node_id": "IC_kwDOBm6k_c5PWE9f", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-11-29T19:29:42Z", "updated_at": "2022-11-29T19:29:42Z", "author_association": "CONTRIBUTOR", "body": "Interesting. I started a version using metadata like I outlined up top, but I realized that there's no documented way for a plugin to access either metadata or canned queries. Or at least, I couldn't find a way.\r\n\r\nThere is this method: https://github.com/simonw/datasette/blob/main/datasette/app.py#L472 but I don't want to rely on it if it's not documented. Same with this: https://github.com/simonw/datasette/blob/main/datasette/app.py#L544\r\n\r\nIf those are safe, I'll build on them. I'm also happy to document them, if that greases the wheels.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1108671952, "label": "Scripted exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1605#issuecomment-1332310772", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1605", "id": 1332310772, "node_id": "IC_kwDOBm6k_c5PaXL0", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-11-30T15:06:37Z", "updated_at": "2022-11-30T15:06:37Z", "author_association": "CONTRIBUTOR", "body": "I'll add issues for both and do a documentation PR.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1108671952, "label": "Scripted exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1612#issuecomment-1021497165", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1612", "id": 1021497165, "node_id": "IC_kwDOBm6k_c484s9N", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2022-01-25T18:44:23Z", "updated_at": "2022-01-25T18:44:23Z", "author_association": "CONTRIBUTOR", "body": "OMG, this might be the fastest OS ticket I've ever filed, thanks so much @simonw ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1114147905, "label": "Move canned queries closer to the SQL input area"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1614#issuecomment-1364345119", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1614", "id": 1364345119, "node_id": "IC_kwDOBm6k_c5RUkEf", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-12-23T21:27:10Z", "updated_at": "2022-12-23T21:27:10Z", "author_association": "CONTRIBUTOR", "body": "is this issue closed by #1893?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1115435536, "label": "Try again with SQLite codemirror support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/163#issuecomment-804539729", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/163", "id": 804539729, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDUzOTcyOQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:41:14Z", "updated_at": "2021-03-23T02:41:14Z", "author_association": "CONTRIBUTOR", "body": "I'm visiting old issues for context while learning datasette. Let me know if okay to make the occasional comment like this one.\r\nquerystring argument now located at:\r\nhttps://docs.datasette.io/en/latest/settings.html#sql-time-limit-ms", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 279547886, "label": "Document the querystring argument for setting a different time limit"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/164#issuecomment-804541064", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/164", "id": 804541064, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDU0MTA2NA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:45:12Z", "updated_at": "2021-03-23T02:45:12Z", "author_association": "CONTRIBUTOR", "body": "\"datasette skeleton\" feature removed #476", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 280013907, "label": "datasette skeleton command for kick-starting database and table metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1641#issuecomment-1049879118", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1641", "id": 1049879118, "node_id": "IC_kwDOBm6k_c4-k-JO", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-02-24T13:49:26Z", "updated_at": "2022-02-24T13:49:26Z", "author_association": "CONTRIBUTOR", "body": "maybe worth considering adding buttons for paren, asterisk, etc. under the input text box on mobile?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1149310456, "label": "Tweak mobile keyboard settings"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1655#issuecomment-1062450649", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1655", "id": 1062450649, "node_id": "IC_kwDOBm6k_c4_U7XZ", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-03-09T01:10:46Z", "updated_at": "2022-03-09T01:10:46Z", "author_association": "CONTRIBUTOR", "body": "i increased the max_returned_row, because I have some scripts that get CSVs from this site, and this makes doing pagination of CSVs less annoying for many cases. i know that's streaming csvs is something you are hoping to address in 1.0. let me know if there's anything i can do to help with that.\r\n\r\nas for what if anything can be done about the size of the dom, I don't have any ideas right now, but i'll poke around.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1163369515, "label": "query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1655#issuecomment-1258166572", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1655", "id": 1258166572, "node_id": "IC_kwDOBm6k_c5K_hks", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-09-26T14:57:04Z", "updated_at": "2022-09-26T14:57:04Z", "author_association": "CONTRIBUTOR", "body": "I think that paginating, even in javascript, could be very helpful. Maybe render json or csv into the page and let javascript loading that into the dom?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1163369515, "label": "query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1655#issuecomment-1766994810", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1655", "id": 1766994810, "node_id": "IC_kwDOBm6k_c5pUjN6", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-10-17T19:01:59Z", "updated_at": "2023-10-17T19:01:59Z", "author_association": "CONTRIBUTOR", "body": "hi @yejiyang, have your tried using my fork of datasette: https://github.com/fgregg/datasette/tree/no_limit_csv_publish\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1163369515, "label": "query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1655#issuecomment-1767219901", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1655", "id": 1767219901, "node_id": "IC_kwDOBm6k_c5pVaK9", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-10-17T21:29:03Z", "updated_at": "2023-10-17T21:29:03Z", "author_association": "CONTRIBUTOR", "body": "@yejiyang why don\u2019t you move this discussion to my fork to spare simon\u2019s notifications ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1163369515, "label": "query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1684#issuecomment-1078126065", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1684", "id": 1078126065, "node_id": "IC_kwDOBm6k_c5AQuXx", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-03-24T20:08:56Z", "updated_at": "2022-03-24T20:13:19Z", "author_association": "CONTRIBUTOR", "body": "would be nice if the behavior was\r\n\r\n1. try to facet all the columns\r\n2. for bigger tables try to facet the indexed columns\r\n3. for the biggest tables, turn off autofacetting completely\r\n\r\nThis is based on my assumption that what determines autofaceting is the rarity of unique values. Which may not be true!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1179998071, "label": "Mechanism for disabling faceting on large tables only"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1688#issuecomment-1079550754", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1688", "id": 1079550754, "node_id": "IC_kwDOBm6k_c5AWKMi", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2022-03-26T01:27:27Z", "updated_at": "2022-03-26T03:16:29Z", "author_association": "CONTRIBUTOR", "body": "> Is there a way to serve a static assets when using the plugins/ directory method instead of installing plugins as a new python package?\r\n\r\nAs a workaround, I found I can serve my statics from a non-plugin specific folder using the [--static](https://docs.datasette.io/en/stable/custom_templates.html#serving-static-files) CLI flag.\r\n\r\n```bash\r\ndatasette ~/Library/Safari/History.db \\\r\n --plugins-dir=plugins/ \\\r\n --static assets:dist/\r\n```\r\n\r\nIt's not ideal because it means I'll change the cache pattern path depending on how the plugin is running (via pip install or as a one off script), but it's usable as a workaround.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1181432624, "label": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1688#issuecomment-1079806857", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1688", "id": 1079806857, "node_id": "IC_kwDOBm6k_c5AXIuJ", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2022-03-27T01:01:14Z", "updated_at": "2022-03-27T01:01:14Z", "author_association": "CONTRIBUTOR", "body": "Thank you! I went through the cookiecutter template, and published my first package here: https://github.com/hydrosquall/datasette-nteract-data-explorer", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1181432624, "label": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1696#issuecomment-1407767434", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1696", "id": 1407767434, "node_id": "IC_kwDOBm6k_c5T6NOK", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-29T20:56:20Z", "updated_at": "2023-01-29T20:56:20Z", "author_association": "CONTRIBUTOR", "body": "I did some horrible things in https://github.com/cldellow/datasette-ui-extras/issues/2 to enable this in my plugin -- example here: https://dux-demo.fly.dev/cooking/posts?_facet=owner_user_id&owner_user_id=67\r\n\r\nThe implementation relies on two things:\r\n\r\n- a `filters_from_request` hook that adds a good human description (unfortunately, without the benefit of the CSS styling you mention)\r\n- doing something evil to hijack the `exact` and `not` operators in the `Filters` class. We can't leave them as is, or we'll get 2 human descriptions -- the built-in Datasette one and the one from my plugin. We can't remove them, or the filters UI will stop supporting the `=` and `!=` operators\r\n\r\nThis got me thinking: it'd be neat if the list of operators that the filters UI supported wasn't a closed set.\r\n\r\nA motivating example: adding a geospatial `NEAR` operator. Ideally it'd take two arguments - a target point and a radius, so you could express a filter like `find me all rows whose lat/lng are within 10km of 43.4516\u00b0 N, 80.4925\u00b0 W`. (Optionally, the UI could be enhanced if the geonames database was loaded and queried, so a user could say `find me all rows whose lat/lng are within 10km of Kitchener, ON`, and the city gets translated to a lat/lng for them)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1186696202, "label": "Show foreign key label when filtering"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1699#issuecomment-1092357672", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1699", "id": 1092357672, "node_id": "IC_kwDOBm6k_c5BHA4o", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-08T01:39:40Z", "updated_at": "2022-04-08T01:39:40Z", "author_association": "CONTRIBUTOR", "body": "> My best thought on how to differentiate them so far is plugins: if Datasette plugins that provide alternative outputs - like .geojson and .yml and suchlike - also work for the datasette query command that would make a lot of sense to me.\r\n\r\nThat's my thinking, too. It's really the thing I've been wanting since writing `datasette-geojson`, since I'm always exporting with `datasette --get`. The workflow I'm always looking for is something like this:\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query dunkin_in_suffolk -f geojson -o dunkin_in_suffolk.geojson\r\n```\r\n\r\nI think this probably needs either a new plugin hook separate from `register_output_renderer` or a way to use that without going through the HTTP stack. Or maybe a render mode that writes to a stream instead of a response. Maybe there's a new key in the dictionary that `register_output_renderer` returns that handles CLI exports.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1193090967, "label": "Proposal: datasette query"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1699#issuecomment-1092370880", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1699", "id": 1092370880, "node_id": "IC_kwDOBm6k_c5BHEHA", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-08T02:07:40Z", "updated_at": "2022-04-08T02:07:40Z", "author_association": "CONTRIBUTOR", "body": "So maybe `render_output_render` returns something like this:\r\n\r\n```python\r\n@hookimpl\r\ndef register_output_renderer(datasette):\r\n return {\r\n \"extension\": \"geojson\",\r\n \"render\": render_geojson,\r\n \"stream\": stream_geojson,\r\n \"can_render\": can_render_geojson,\r\n }\r\n```\r\n\r\nAnd stream gets an iterator, instead of a list of rows, so it can efficiently handle large queries. Maybe it also gets passed a destination stream, or it returns an iterator. I'm not sure what makes more sense. Either way, that might cover both CLI exports and streaming responses.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1193090967, "label": "Proposal: datasette query"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1699#issuecomment-1092386254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1699", "id": 1092386254, "node_id": "IC_kwDOBm6k_c5BHH3O", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-08T02:39:25Z", "updated_at": "2022-04-08T02:39:25Z", "author_association": "CONTRIBUTOR", "body": "And just to think this through a little more, here's what `stream_geojson` might look like:\r\n\r\n```python\r\nasync def stream_geojson(datasette, columns, rows, database, stream):\r\n db = datasette.get_database(database)\r\n for row in rows:\r\n feature = await row_to_geojson(row, db)\r\n stream.write(feature + \"\\n\") # just assuming newline mode for now\r\n```\r\n\r\nAlternately, that could be an async generator, like this:\r\n\r\n```python\r\nasync def stream_geojson(datasette, columns, rows, database):\r\n db = datasette.get_database(database)\r\n for row in rows:\r\n feature = await row_to_geojson(row, db)\r\n yield feature\r\n```\r\n\r\nNot sure which makes more sense, but I think this pattern would open up a lot of possibility. If you had your [stream_indented_json](https://til.simonwillison.net/python/output-json-array-streaming) function, you could do `yield from stream_indented_json(rows, 2)` and be one your way.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1193090967, "label": "Proposal: datasette query"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1699#issuecomment-1094453751", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1699", "id": 1094453751, "node_id": "IC_kwDOBm6k_c5BPAn3", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-11T01:32:12Z", "updated_at": "2022-04-11T01:32:12Z", "author_association": "CONTRIBUTOR", "body": "Was looking through old issues and realized a bunch of this got discussed in #1101 (including by me!), so sorry to rehash all this. Happy to help with whatever piece of it I can. Would be very excited to be able to use format plugins with exports.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1193090967, "label": "Proposal: datasette query"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1713#issuecomment-1099540225", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1713", "id": 1099540225, "node_id": "IC_kwDOBm6k_c5BiacB", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-14T19:09:57Z", "updated_at": "2022-04-14T19:09:57Z", "author_association": "CONTRIBUTOR", "body": "I wonder if this overlaps with what I outlined in #1605. You could run something like this:\r\n\r\n```sh\r\ndatasette freeze -d exports/\r\naws s3 cp exports/ s3://my-export-bucket/$(date)\r\n```\r\n\r\nAnd maybe that does what you need. Of course, that plugin isn't built yet. But that's the idea.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1203943272, "label": "Datasette feature for publishing snapshots of query results"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1713#issuecomment-1103312860", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1713", "id": 1103312860, "node_id": "IC_kwDOBm6k_c5Bwzfc", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-04-20T00:52:19Z", "updated_at": "2022-04-20T00:52:19Z", "author_association": "CONTRIBUTOR", "body": "feels related to #1402 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1203943272, "label": "Datasette feature for publishing snapshots of query results"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1713#issuecomment-1173358747", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1713", "id": 1173358747, "node_id": "IC_kwDOBm6k_c5F8Aib", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-07-04T05:16:35Z", "updated_at": "2022-07-04T05:16:35Z", "author_association": "CONTRIBUTOR", "body": "This feature is pretty important and would be nice if it would be all within Datasette (no separate CLI/deploy required). My workflow now is to basically just copy the result and paste into a Google Sheet, which works, but then it's not discoverable to other journalists browsing the Datasette instance. I started building a plugin similar to [datasette-saved-queries](https://datasette.io/plugins/datasette-saved-queries) but one that maintains its own DB (required if you're working with all immutable DBs), but got bogged down in details.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1203943272, "label": "Datasette feature for publishing snapshots of query results"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1258129113, "node_id": "IC_kwDOBm6k_c5K_YbZ", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-09-26T14:30:11Z", "updated_at": "2022-09-26T14:48:31Z", "author_association": "CONTRIBUTOR", "body": "from your analysis, it seems like the GIL is blocking on loading of the data from sqlite to python, (particularly in the `fetchmany` call)\r\n\r\nthis is probably a simplistic idea, but what if you had the python code in the `execute` method iterate over the cursor and yield out rows or small chunks of rows.\r\n\r\nsomething like: \r\n```python\r\n with sqlite_timelimit(conn, time_limit_ms):\r\n try:\r\n cursor = conn.cursor()\r\n cursor.execute(sql, params if params is not None else {})\r\n except:\r\n ...\r\n max_returned_rows = self.ds.max_returned_rows\r\n if max_returned_rows == page_size:\r\n max_returned_rows += 1\r\n if max_returned_rows and truncate:\r\n for i, row in enumerate(cursor):\r\n yield row\r\n if i == max_returned_rows - 1:\r\n break\r\n else:\r\n for row in cursor:\r\n yield row\r\n truncated = False \r\n```\r\n\r\nthis kind of thing works well with a postgres server side cursor, but i'm not sure if it will hold for sqlite. \r\n\r\nyou would still spend about the same amount of time in python and would be contending for the gil, but it would be could be non blocking.\r\n\r\ndepending on the data flow, this could also some benefit for memory. (data stays in more compact sqlite-land until you need it)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111705323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111705323, "node_id": "IC_kwDOBm6k_c5CQ0br", "user": {"value": 127565, "label": "wragge"}, "created_at": "2022-04-28T03:32:06Z", "updated_at": "2022-04-28T03:32:06Z", "author_association": "CONTRIBUTOR", "body": "Ah, that would be it! I have a core set of data which doesn't change to which I want authorised users to be able to submit corrections. I was going to deal with the persistence issue by just grabbing the user corrections at regular intervals and saving to GitHub. I might need to rethink. Thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111712953", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111712953, "node_id": "IC_kwDOBm6k_c5CQ2S5", "user": {"value": 127565, "label": "wragge"}, "created_at": "2022-04-28T03:48:36Z", "updated_at": "2022-04-28T03:48:36Z", "author_association": "CONTRIBUTOR", "body": "I don't think that'd work for this project. The db is very big, and my aim was to have an environment where researchers could be making use of the data, but be easily able to add corrections to the HTR/OCR extracted data when they came across problems. It's in its immutable (!) form here: https://sydney-stock-exchange-xqtkxtd5za-ts.a.run.app/stock_exchange/stocks", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111751734", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111751734, "node_id": "IC_kwDOBm6k_c5CQ_w2", "user": {"value": 127565, "label": "wragge"}, "created_at": "2022-04-28T05:09:59Z", "updated_at": "2022-04-28T05:09:59Z", "author_association": "CONTRIBUTOR", "body": "Thanks, I'll give it a try!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111752676", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111752676, "node_id": "IC_kwDOBm6k_c5CQ__k", "user": {"value": 127565, "label": "wragge"}, "created_at": "2022-04-28T05:11:54Z", "updated_at": "2022-04-28T05:11:54Z", "author_association": "CONTRIBUTOR", "body": "And in terms of the bug, yep I agree that option 2 would be the most useful and least frustrating.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1742#issuecomment-1128049716", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1742", "id": 1128049716, "node_id": "IC_kwDOBm6k_c5DPKw0", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-05-16T19:24:44Z", "updated_at": "2022-05-16T19:24:44Z", "author_association": "CONTRIBUTOR", "body": "Where is `_trace` getting injected? And is it something a plugin should be able to handle? (If it is, I guess I should handle it in this case.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1237586379, "label": "?_trace=1 fails with datasette-geojson for some reason"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1742#issuecomment-1128064864", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1742", "id": 1128064864, "node_id": "IC_kwDOBm6k_c5DPOdg", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-05-16T19:42:13Z", "updated_at": "2022-05-16T19:42:13Z", "author_association": "CONTRIBUTOR", "body": "Just to add a wrinkle here, this loads fine: https://alltheplaces-datasette.fly.dev/alltheplaces/places.geojson?_trace=1\r\n\r\nBut also, this doesn't add any trace data: https://alltheplaces-datasette.fly.dev/alltheplaces/places.json?_trace=1\r\n\r\nWhat am I missing?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1237586379, "label": "?_trace=1 fails with datasette-geojson for some reason"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1779#issuecomment-1210675046", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1779", "id": 1210675046, "node_id": "IC_kwDOBm6k_c5IKW9m", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-08-10T13:28:37Z", "updated_at": "2022-08-10T13:28:37Z", "author_association": "CONTRIBUTOR", "body": "maybe a simpler solution is to set the maxscale to like 2? since datasette is not set up to make use of container scaling anyway?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1334628400, "label": "google cloudrun updated their limits on maxscale based on memory and cpu count"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1779#issuecomment-1214437408", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1779", "id": 1214437408, "node_id": "IC_kwDOBm6k_c5IYtgg", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-08-14T19:42:58Z", "updated_at": "2022-08-14T19:42:58Z", "author_association": "CONTRIBUTOR", "body": "thanks @simonw!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1334628400, "label": "google cloudrun updated their limits on maxscale based on memory and cpu count"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/179#issuecomment-360535979", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/179", "id": 360535979, "node_id": "MDEyOklzc3VlQ29tbWVudDM2MDUzNTk3OQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-01-25T17:18:24Z", "updated_at": "2018-01-25T17:18:24Z", "author_association": "CONTRIBUTOR", "body": "To summarise that thread:\r\n\r\n- expose full `metadata.json` object to the index page template, eg to allow tables to be referred to by name;\r\n- ability to import multiple `metadata.json` files, eg to allow metadata files created for a specific SQLite db to be reused in a datasette referring to several database files;\r\n\r\nIt could also be useful to allow users to import a python file containing custom functions that can that be loaded into scope and made available to custom templates.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 288438570, "label": "More metadata options for template authors "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1796#issuecomment-1364345071", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1796", "id": 1364345071, "node_id": "IC_kwDOBm6k_c5RUkDv", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-12-23T21:27:02Z", "updated_at": "2022-12-23T21:27:02Z", "author_association": "CONTRIBUTOR", "body": "@simonw is this issue closed by #1893?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1355148385, "label": "Research an upgrade to CodeMirror 6"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1810#issuecomment-1248204219", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1810", "id": 1248204219, "node_id": "IC_kwDOBm6k_c5KZhW7", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-09-15T14:44:47Z", "updated_at": "2022-09-15T14:46:26Z", "author_association": "CONTRIBUTOR", "body": "A couple+ of possible use case examples:\r\n\r\n- someone has a collection of articles indexed with FTS; they want to publish a simple search tool over the results;\r\n- someone has an image collection and they want to be able to search over description text to return images;\r\n- someone has a set of locations with descriptions, and wants to run a query over places and descriptions and get results as a listing or on a map;\r\n- someone has a set of audio or video files with titles, descriptions and/or transcripts, and wants to be able to search over them and return playable versions of returned items.\r\n\r\nIn many cases, I suspect the raw content will be in one table, but the search table will be a second (eg FTS) table. Generally, the search may be over one or more joined tables, and the results constructed from one or more tables (which may or may not be distinct from the search tables).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1374626873, "label": "Featured table(s) on the homepage"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1813#issuecomment-1250901367", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1813", "id": 1250901367, "node_id": "IC_kwDOBm6k_c5Kjz13", "user": {"value": 883348, "label": "adipasquale"}, "created_at": "2022-09-19T11:34:45Z", "updated_at": "2022-09-19T11:34:45Z", "author_association": "CONTRIBUTOR", "body": "oh and by writing this I just realized the difference: the URL on fly.io is with a custom SQL command whereas the local one is without. \r\nIt seems that there is no pagination when using custom SQL commands which makes sense\r\n\r\nSorry for this useless issue, maybe this can be useful for someone else / me in the future.\r\n\r\nThanks again for this wonderful project !", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1377811868, "label": "missing next and next_url in JSON responses from an instance deployed on Fly "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1817#issuecomment-1256781274", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1817", "id": 1256781274, "node_id": "IC_kwDOBm6k_c5K6PXa", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2022-09-23T22:59:46Z", "updated_at": "2022-09-23T22:59:46Z", "author_association": "CONTRIBUTOR", "body": "While you are adding features, would you be future-proofing your APIs if you switched over some arguments over to keyword-only arguments or would that be too disruptive?\r\n\r\nThinking out loud:\r\n\r\n```\r\nasync def render_template( \r\n self, templates, *, context=None, plugin_context=None, request=None, view_name=None \r\n ): \r\n```\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1384273985, "label": "Expose `sql` and `params` arguments to various plugin hooks"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1270923537", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1270923537, "node_id": "IC_kwDOBm6k_c5LwMER", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T00:46:08Z", "updated_at": "2022-10-07T00:46:08Z", "author_association": "CONTRIBUTOR", "body": "i thought it was maybe to do with reading through all the files, but that does not seem to be the case\r\n\r\nif i make a little test file like:\r\n\r\n```python\r\n# test_read.py\r\nimport hashlib\r\nimport sys\r\nimport pathlib\r\n\r\nHASH_BLOCK_SIZE = 1024 * 1024\r\n\r\ndef inspect_hash(path):\r\n \"\"\"Calculate the hash of a database, efficiently.\"\"\"\r\n m = hashlib.sha256()\r\n with path.open(\"rb\") as fp:\r\n while True:\r\n data = fp.read(HASH_BLOCK_SIZE)\r\n if not data:\r\n break\r\n m.update(data)\r\n\r\n return m.hexdigest()\r\n\r\ninspect_hash(pathlib.Path(sys.argv[1]))\r\n```\r\n\r\nthen a line in the Dockerfile like\r\n\r\n```docker\r\nRUN python test_read.py nlrb.db && echo \"[]\" > /etc/inspect.json\r\n```\r\n\r\njust produes a layer of `3B`\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1270936982", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1270936982, "node_id": "IC_kwDOBm6k_c5LwPWW", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T00:52:41Z", "updated_at": "2022-10-07T00:52:41Z", "author_association": "CONTRIBUTOR", "body": "it's not that the inspect command is somehow changing the db files. if i set them to only read-only, the \"inspect\" layer still has the same very large size.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1270988081", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1270988081, "node_id": "IC_kwDOBm6k_c5Lwb0x", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T01:19:01Z", "updated_at": "2022-10-07T01:27:35Z", "author_association": "CONTRIBUTOR", "body": "okay, some progress!! running some sql against a database file causes that file to get duplicated even if it doesn't apparently change the file.\r\n\r\nmake a little test script like this:\r\n\r\n```python\r\n# test_sql.py\r\nimport sqlite3\r\nimport sys\r\n\r\ndb_name = sys.argv[1]\r\nconn = sqlite3.connect(f'file:/app/{db_name}', uri=True)\r\ncur = conn.cursor()\r\ncur.execute('select count(*) from filing')\r\nprint(cur.fetchone())\r\n```\r\n\r\nthen \r\n\r\n```docker\r\nRUN python test_sql.py nlrb.db\r\n```\r\n\r\nproduced a layer that's the same size as `nlrb.db`!!\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1270992795", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1270992795, "node_id": "IC_kwDOBm6k_c5Lwc-b", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T01:29:15Z", "updated_at": "2022-10-07T01:50:14Z", "author_association": "CONTRIBUTOR", "body": "fascinatingly, telling python to open sqlite in read only mode makes this layer have a size of 0\r\n\r\n```python\r\n# test_sql_ro.py\r\nimport sqlite3\r\nimport sys\r\n\r\ndb_name = sys.argv[1]\r\nconn = sqlite3.connect(f'file:/app/{db_name}?mode=ro', uri=True)\r\ncur = conn.cursor()\r\ncur.execute('select count(*) from filing')\r\nprint(cur.fetchone())\r\n```\r\n\r\nthat's quite weird because setting the file permissions to read only didn't do anything. (on reflection, that chmod isn't doing anything because the dockerfile commands are run as root)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1271003212", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1271003212, "node_id": "IC_kwDOBm6k_c5LwfhM", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T01:52:04Z", "updated_at": "2022-10-07T01:52:04Z", "author_association": "CONTRIBUTOR", "body": "and if we try immutable mode, which is how things are opened by `datasette inspect` we duplicate the files!!!\r\n\r\n```python\r\n# test_sql_immutable.py\r\nimport sqlite3\r\nimport sys\r\n\r\ndb_name = sys.argv[1]\r\nconn = sqlite3.connect(f'file:/app/{db_name}?immutable=1', uri=True)\r\ncur = conn.cursor()\r\ncur.execute('select count(*) from filing')\r\nprint(cur.fetchone())\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1271008997", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1271008997, "node_id": "IC_kwDOBm6k_c5Lwg7l", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T02:00:37Z", "updated_at": "2022-10-07T02:00:49Z", "author_association": "CONTRIBUTOR", "body": "yes, and i also think that this is causing the apparent memory problems in #1480. when the container starts up, it will make some operation on the database in `immutable` mode which apparently makes some small change to the db file. if that's so, then the db files will be copied to the read/write layer which counts against cloudrun's memory allocation!\r\n\r\nrunning a test of that now. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1271020193", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1271020193, "node_id": "IC_kwDOBm6k_c5Lwjqh", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T02:15:05Z", "updated_at": "2022-10-07T02:21:08Z", "author_association": "CONTRIBUTOR", "body": "when i hack the connect method to open non mutable files with \"mode=ro\" and not \"immutable=1\" https://github.com/simonw/datasette/blob/eff112498ecc499323c26612d707908831446d25/datasette/database.py#L79\r\n\r\nthen: \r\n\r\n```bash\r\n870 B RUN /bin/sh -c datasette inspect nlrb.db --inspect-file inspect-data.json\r\n```\r\n\r\nthe `datasette inspect` layer is only the size of the json file!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1271100651", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1271100651, "node_id": "IC_kwDOBm6k_c5Lw3Tr", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T04:38:14Z", "updated_at": "2022-10-07T04:38:14Z", "author_association": "CONTRIBUTOR", "body": "> yes, and i also think that this is causing the apparent memory problems in #1480. when the container starts up, it will make some operation on the database in `immutable` mode which apparently makes some small change to the db file. if that's so, then the db files will be copied to the read/write layer which counts against cloudrun's memory allocation!\r\n> \r\n> running a test of that now.\r\n\r\nthis completely addressed #1480 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1271103097", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1271103097, "node_id": "IC_kwDOBm6k_c5Lw355", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T04:43:41Z", "updated_at": "2022-10-07T04:43:41Z", "author_association": "CONTRIBUTOR", "body": "@simonw, should i open up a new issue for investigating the differences between \"immutable=1\" and \"mode=ro\" and possibly switching to \"mode=ro\". Or would you like to keep that conversation in this issue?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1272357976", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1272357976, "node_id": "IC_kwDOBm6k_c5L1qRY", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-08T16:56:51Z", "updated_at": "2022-10-08T16:56:51Z", "author_association": "CONTRIBUTOR", "body": "when you are running from docker, you **always** will want to run as `mode=ro` because the same thing that is causing duplication in the inspect layer will cause duplication in the final container read/write layer when `datasette serve` runs.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1851#issuecomment-1290615599", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1851", "id": 1290615599, "node_id": "IC_kwDOBm6k_c5M7Tsv", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-10-25T14:05:12Z", "updated_at": "2022-10-25T14:05:12Z", "author_association": "CONTRIBUTOR", "body": "This could use a new plugin hook, too. I don't want to complicate your life too much, but for things like GIS, I'd want a way to turn regular JSON into SpatiaLite geometries or combine X/Y coordinates into point geometries and such. Happy to help however I can.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1421544654, "label": "API to insert a single record into an existing table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1851#issuecomment-1291228502", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1851", "id": 1291228502, "node_id": "IC_kwDOBm6k_c5M9pVW", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-10-25T23:02:10Z", "updated_at": "2022-10-25T23:02:10Z", "author_association": "CONTRIBUTOR", "body": "That's reasonable. Canned queries and custom endpoints are certainly going to give more room for specific needs. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1421544654, "label": "API to insert a single record into an existing table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1851#issuecomment-1292519956", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1851", "id": 1292519956, "node_id": "IC_kwDOBm6k_c5NCkoU", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2022-10-26T19:20:33Z", "updated_at": "2022-10-26T19:20:33Z", "author_association": "CONTRIBUTOR", "body": "> This could use a new plugin hook, too. I don't want to complicate your life too much, but for things like GIS, I'd want a way to turn regular JSON into SpatiaLite geometries or combine X/Y coordinates into point geometries and such. Happy to help however I can.\r\n\r\n @eyeseast Maybe you could do this with triggers? Like you can insert JSON-friendly data into a \"raw\" table, and create a trigger that transforms that inserted data into the proper table\r\n\r\nHere's an example:\r\n\r\n```sql\r\n-- meant to be updated from a Datasette insert\r\ncreate table points_raw(longitude int, latitude int);\r\n\r\n-- the target table with proper spatliate geometries\r\ncreate table points(point geometry);\r\n\r\nCREATE TRIGGER insert_points_raw INSERT ON points_raw \r\n BEGIN\r\n insert into points(point) values (makepoint(new.longitude, new.latitude))\r\n END;\r\n```\r\n\r\nYou could then POST a new row to `points_raw` like this:\r\n```\r\nPOST /db/points_raw\r\nAuthorization: Bearer xxx\r\nContent-Type: application/json\r\n{\r\n \"row\": {\r\n \"longitude\": 27.64356,\r\n \"latitude\": -47.29384\r\n }\r\n}\r\n```\r\n\r\nThen SQLite with run the trigger and insert a new row in `points` with the correct geometry point. Downside is you'd have duplicated data with `points_raw`, but maybe it could be a `TEMP` table (or have a cron that deletes all rows from that table every so often?)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1421544654, "label": "API to insert a single record into an existing table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1851#issuecomment-1292592210", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1851", "id": 1292592210, "node_id": "IC_kwDOBm6k_c5NC2RS", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-10-26T20:03:46Z", "updated_at": "2022-10-26T20:03:46Z", "author_association": "CONTRIBUTOR", "body": "Yeah, every time I see something cool done with triggers, I remember that I need to start using triggers.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1421544654, "label": "API to insert a single record into an existing table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1871#issuecomment-1309650806", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1871", "id": 1309650806, "node_id": "IC_kwDOBm6k_c5OD692", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2022-11-10T01:38:58Z", "updated_at": "2022-11-10T01:38:58Z", "author_association": "CONTRIBUTOR", "body": "> Realized the API explorer doesn't need the API key piece at all - it can work with standard cookie-based auth.\r\n> \r\n> This also reflects how most plugins are likely to use this API, where they'll be adding JavaScript that uses `fetch()` to call the write API directly.\r\n\r\nI agree (that's what I did with the previous insert plugin), maybe a complete example using `fetch()` in the documentation would be valuable as a \u201cGetting started with the API\u201d or similar?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1427293909, "label": "API explorer tool"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1872#issuecomment-1296076803", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1872", "id": 1296076803, "node_id": "IC_kwDOBm6k_c5NQJAD", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2022-10-30T02:50:34Z", "updated_at": "2022-10-30T02:50:34Z", "author_association": "CONTRIBUTOR", "body": "should this issue be under https://github.com/simonw/datasette-publish-vercel/issues ?\r\n\r\nPerhaps I just need to update: \r\ndatasette-publish-vercel==0.11\r\nin requirements.txt?\r\n \r\n I'll try that and see what happens...\r\n ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1428560020, "label": "SITE-BUSTING ERROR: \"render_template() called before await ds.invoke_startup()\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1872#issuecomment-1296080804", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1872", "id": 1296080804, "node_id": "IC_kwDOBm6k_c5NQJ-k", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2022-10-30T03:06:32Z", "updated_at": "2022-10-30T03:06:32Z", "author_association": "CONTRIBUTOR", "body": "I updated datasette-publish-vercel to 0.14.2 in requirements.txt\r\n\r\nAnd the site is back up!\r\n\r\nIs there a way that we can get some sort of notice when something like this will have critical impact on website function?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1428560020, "label": "SITE-BUSTING ERROR: \"render_template() called before await ds.invoke_startup()\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1884#issuecomment-1309735529", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1884", "id": 1309735529, "node_id": "IC_kwDOBm6k_c5OEPpp", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-11-10T03:57:23Z", "updated_at": "2022-11-10T03:57:23Z", "author_association": "CONTRIBUTOR", "body": "Here's how to get a list of virtual tables: https://stackoverflow.com/questions/46617118/how-to-fetch-names-of-virtual-tables", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1439009231, "label": "Exclude virtual tables from datasette inspect"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1884#issuecomment-1313962183", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1884", "id": 1313962183, "node_id": "IC_kwDOBm6k_c5OUXjH", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-11-14T15:46:32Z", "updated_at": "2022-11-14T15:46:32Z", "author_association": "CONTRIBUTOR", "body": "It does work, though I think it's probably still worth excluding virtual tables that will always be zero. Here's the same inspection as before, now with `--load-extension spatialite`:\r\n\r\n```json\r\n{\r\n \"alltheplaces\": {\r\n \"hash\": \"0843cfe414439ab903c22d1121b7ddbc643418c35c7f0edbcec82ef1452411df\",\r\n \"size\": 963375104,\r\n \"file\": \"alltheplaces.db\",\r\n \"tables\": {\r\n \"spatial_ref_sys\": {\r\n \"count\": 6215\r\n },\r\n \"spatialite_history\": {\r\n \"count\": 18\r\n },\r\n \"sqlite_sequence\": {\r\n \"count\": 2\r\n },\r\n \"geometry_columns\": {\r\n \"count\": 3\r\n },\r\n \"spatial_ref_sys_aux\": {\r\n \"count\": 6164\r\n },\r\n \"views_geometry_columns\": {\r\n \"count\": 0\r\n },\r\n \"virts_geometry_columns\": {\r\n \"count\": 0\r\n },\r\n \"geometry_columns_statistics\": {\r\n \"count\": 3\r\n },\r\n \"views_geometry_columns_statistics\": {\r\n \"count\": 0\r\n },\r\n \"virts_geometry_columns_statistics\": {\r\n \"count\": 0\r\n },\r\n \"geometry_columns_field_infos\": {\r\n \"count\": 0\r\n },\r\n \"views_geometry_columns_field_infos\": {\r\n \"count\": 0\r\n },\r\n \"virts_geometry_columns_field_infos\": {\r\n \"count\": 0\r\n },\r\n \"geometry_columns_time\": {\r\n \"count\": 3\r\n },\r\n \"geometry_columns_auth\": {\r\n \"count\": 3\r\n },\r\n \"views_geometry_columns_auth\": {\r\n \"count\": 0\r\n },\r\n \"virts_geometry_columns_auth\": {\r\n \"count\": 0\r\n },\r\n \"data_licenses\": {\r\n \"count\": 10\r\n },\r\n \"sql_statements_log\": {\r\n \"count\": 0\r\n },\r\n \"states\": {\r\n \"count\": 56\r\n },\r\n \"counties\": {\r\n \"count\": 3234\r\n },\r\n \"idx_states_geometry_rowid\": {\r\n \"count\": 56\r\n },\r\n \"idx_states_geometry_node\": {\r\n \"count\": 3\r\n },\r\n \"idx_states_geometry_parent\": {\r\n \"count\": 2\r\n },\r\n \"idx_counties_geometry_rowid\": {\r\n \"count\": 3234\r\n },\r\n \"idx_counties_geometry_node\": {\r\n \"count\": 98\r\n },\r\n \"idx_counties_geometry_parent\": {\r\n \"count\": 97\r\n },\r\n \"idx_places_geometry_rowid\": {\r\n \"count\": 1236796\r\n },\r\n \"idx_places_geometry_node\": {\r\n \"count\": 38163\r\n },\r\n \"idx_places_geometry_parent\": {\r\n \"count\": 38162\r\n },\r\n \"places\": {\r\n \"count\": 1332609\r\n },\r\n \"SpatialIndex\": {\r\n \"count\": 0\r\n },\r\n \"ElementaryGeometries\": {\r\n \"count\": 0\r\n },\r\n \"KNN\": {\r\n \"count\": 0\r\n },\r\n \"idx_states_geometry\": {\r\n \"count\": 56\r\n },\r\n \"idx_counties_geometry\": {\r\n \"count\": 3234\r\n },\r\n \"idx_places_geometry\": {\r\n \"count\": 1236796\r\n }\r\n }\r\n }\r\n}\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1439009231, "label": "Exclude virtual tables from datasette inspect"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1884#issuecomment-1314066229", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1884", "id": 1314066229, "node_id": "IC_kwDOBm6k_c5OUw81", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-11-14T16:48:35Z", "updated_at": "2022-11-14T16:48:35Z", "author_association": "CONTRIBUTOR", "body": "I'm realizing I don't know if a virtual table will ever return a count. Maybe it depends on the implementation. For these three, just checking now, it'll always return zero.\r\n\r\nThat said, I'm not sure there's any downside to having them return zero and caching that. (They're hidden, too.) ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1439009231, "label": "Exclude virtual tables from datasette inspect"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1884#issuecomment-1321460293", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1884", "id": 1321460293, "node_id": "IC_kwDOBm6k_c5Ow-JF", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2022-11-21T04:40:55Z", "updated_at": "2022-11-21T04:40:55Z", "author_association": "CONTRIBUTOR", "body": "Counting any virtual tables can be pretty tricky. On one hand, counting a [CSV virtual table](https://www.sqlite.org/csv.html) would return the number of rows in the CSV, which is helpful (but can be I/O intensive). Counting a [FTS5 virtual table](https://www.sqlite.org/fts5.html) would return the number of entries in the FTS index, which is kindof helpful, but can be misleading in some cases.\r\n\r\nOn the other hand, arbitrarily running `COUNT(*)` on some virtual tables can be incredibly expensive. SQLite offers new shortcuts/pushdowns on `COUNT(*)` queries for virtual tables, and instead calls the underlying vtab implementation and iterates through all rows in the table without discretion. For example, a virtual table that's backed by a Postgres table would call `select * from pg_table`, which would use up a lot of network and CPU calls. Or a virtual table backed by a [google sheet](https://github.com/0x6b/libgsqlite) would make network/API requests to get all the rows from the sheet just to make a count.\r\n\r\nThe [`pragma_table_list`](https://www.sqlite.org/pragma.html#pragma_table_list) pragma tells you when a table is a regular table or virtual (in the `type` column), but was only added in version 3.37.0 (2021-11-27). \r\n\r\n\r\nPersonally, I wouldnt try to `COUNT(*)` virtual tables - it depends on how the virtual table is implemented, it requires that the connection has the proper extensions loaded, and it may accientally cause perf issues for new-age extensions. A few extensions that I'm writing have virtual tables that wouldn't benefit much from `COUNT(*)`, and the fact that SQLite iterates through all rows in a table to count just makes things worse. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1439009231, "label": "Exclude virtual tables from datasette inspect"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1886#issuecomment-1313252879", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1886", "id": 1313252879, "node_id": "IC_kwDOBm6k_c5ORqYP", "user": {"value": 883348, "label": "adipasquale"}, "created_at": "2022-11-14T08:10:23Z", "updated_at": "2022-11-14T08:10:23Z", "author_association": "CONTRIBUTOR", "body": "Hi @simonw and thanks for the great tools you're publishing, your dedication is inspiring!\r\n\r\nI work for the French Ministry of Culture on a surveying tool for objects protected for their historical value. It is part of a program building modern public services called [beta.gouv.fr](https://beta.gouv.fr/).\r\n\r\nIn that context I'm using data published by the Ministry that I have ingested into datasette and published on a free Fly instance : https://collectif-objets-datasette.fly.dev . I have also ingested another data set with infos about french cities on this instance so that I can perform joined queries.\r\n\r\nThe surveying tool synchronizes its data regularly from this datasette instance, and I also use it to perform queries when asked generic questions about the distribution of objects. (The data is not very accessible as it's undocumented and for internal usage mostly)", "reactions": "{\"total_count\": 3, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 3, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1447050738, "label": "Call for birthday presents: if you're using Datasette, let us know how you're using it here"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1886#issuecomment-1314241058", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1886", "id": 1314241058, "node_id": "IC_kwDOBm6k_c5OVboi", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-11-14T19:06:35Z", "updated_at": "2022-11-14T19:06:35Z", "author_association": "CONTRIBUTOR", "body": "This probably counts as a case study: https://github.com/eyeseast/spatial-data-cooking-show. Even has video.\r\n\r\nSeriously, though, this workflow has become integral to my work with reporters and editors across USA TODAY Network. Very often, I get sent a folder of data in mixed formats, with a vague ask of how we should communicate some part of it to users. Datasette and its constellation of tools makes it easy to get a quick look at that data, run exploratory queries, map it and ask questions to figure out what's important to show. And then I export a version of the data that's exactly what I need for display.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1447050738, "label": "Call for birthday presents: if you're using Datasette, let us know how you're using it here"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1886#issuecomment-1321003094", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1886", "id": 1321003094, "node_id": "IC_kwDOBm6k_c5OvOhW", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2022-11-20T00:52:05Z", "updated_at": "2022-11-20T00:52:05Z", "author_association": "CONTRIBUTOR", "body": "Happy birthday to datasette and thank you Simon for your continued effort on this project! \r\n\r\nI use datasette (python) as a fast layer on top of search for github projects using https://github.com/dogsheep/github-to-sqlite , and use the JSON API it provides to serve sample data to make Vega-Lite graphing workshop examples that don't require authentication/API keys. It's awesome to have a full SQL API support working without needing to develop any custom API middleware for both filtering and grouping.\r\n\r\nI've also enjoyed using it as a teaching tool for working with public dataset in [civic data workshops](https://2022.open-data.nyc/event/low-code-visual-data-exploration-with-nyc-public-data/) and as a platform for making visualization [plugins](https://github.com/hydrosquall/datasette-nteract-data-explorer) . I\r\n\r\nI'm especially excited about datasette-lite, as it will let people participate in future editions of this workshop without having to install anything to make use of their own tables :)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1447050738, "label": "Call for birthday presents: if you're using Datasette, let us know how you're using it here"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1886#issuecomment-1321241426", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1886", "id": 1321241426, "node_id": "IC_kwDOBm6k_c5OwItS", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-11-20T20:58:54Z", "updated_at": "2022-11-20T20:58:54Z", "author_association": "CONTRIBUTOR", "body": "i wrote up a blog post of how i'm using it! https://bunkum.us/2022/11/20/mgdo-stack.html", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1447050738, "label": "Call for birthday presents: if you're using Datasette, let us know how you're using it here"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1890#issuecomment-1317889323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1890", "id": 1317889323, "node_id": "IC_kwDOBm6k_c5OjWUr", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-11-17T00:47:36Z", "updated_at": "2022-11-17T00:47:36Z", "author_association": "CONTRIBUTOR", "body": "amazing! thanks @simonw ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1448143294, "label": "Autocomplete text entry for filter values that correspond to facets"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1897#issuecomment-1319533445", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1897", "id": 1319533445, "node_id": "IC_kwDOBm6k_c5OpnuF", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-18T04:38:03Z", "updated_at": "2022-11-18T04:38:03Z", "author_association": "CONTRIBUTOR", "body": "Are you tracking the change to send the JSON over to the frontend separately or was that part of this? Something like this is probably pretty close https://github.com/bgrins/datasette/commit/8431c98850c7a552dbcde2a4dd0c3dc942a97d25#diff-0c93232bfd5477eeac96382e52769108b41433d960d5277ffcccf2f464e60abdR9", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1452457263, "label": "Serve schema JSON to the SQL editor to enable autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1899#issuecomment-1317873458", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1899", "id": 1317873458, "node_id": "IC_kwDOBm6k_c5OjScy", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-17T00:31:07Z", "updated_at": "2022-11-17T00:31:07Z", "author_association": "CONTRIBUTOR", "body": "This is one way to fix it\r\n\r\n```patch\r\nr.html\r\ndiff --git a/datasette/static/cm-editor-6.0.1.js b/datasette/static/cm-editor-6.0.1.js\r\nindex c1fd2ab..68cf398 100644\r\n--- a/datasette/static/cm-editor-6.0.1.js\r\n+++ b/datasette/static/cm-editor-6.0.1.js\r\n@@ -22,7 +22,14 @@ export function editorFromTextArea(textarea, conf = {}) {\r\n // https://github.com/codemirror/lang-sql#user-content-sqlconfig.tables\r\n let view = new EditorView({\r\n doc: textarea.value,\r\n+\r\n extensions: [\r\n+ EditorView.theme({\r\n+ \".cm-content\": {\r\n+ // Height on cm-content ensures the editor is focusable by clicking beyond the height of the text\r\n+ minHeight: \"70px\",\r\n+ },\r\n+ }),\r\n keymap.of([\r\n {\r\n key: \"Shift-Enter\",\r\ndiff --git a/datasette/templates/_codemirror.html b/datasette/templates/_codemirror.html\r\nindex dea4710..c4629ae 100644\r\n--- a/datasette/templates/_codemirror.html\r\n+++ b/datasette/templates/_codemirror.html\r\n@@ -4,7 +4,6 @@\r\n .cm-editor {\r\n resize: both;\r\n overflow: hidden;\r\n- min-height: 70px;\r\n width: 80%;\r\n border: 1px solid #ddd;\r\n }\r\n```\r\n\r\nI don't love it but it seems to work for the default case. You can still retrigger the bug by resizing the editor to be > 70px high.\r\n\r\nThe other approach would be to listen for a click on that empty region and move focus to the editor, or something", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1452495049, "label": "Clicking within the CodeMirror area below the SQL (i.e. when there's only a single line) doesn't cause the editor to get focused "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1899#issuecomment-1318897922", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1899", "id": 1318897922, "node_id": "IC_kwDOBm6k_c5OnMkC", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-17T16:32:42Z", "updated_at": "2022-11-17T16:32:42Z", "author_association": "CONTRIBUTOR", "body": "Another idea would be to just not set a min-height and allow the 1 line input to be 1 line heigh", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1452495049, "label": "Clicking within the CodeMirror area below the SQL (i.e. when there's only a single line) doesn't cause the editor to get focused "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1929#issuecomment-1339906969", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1929", "id": 1339906969, "node_id": "IC_kwDOBm6k_c5P3VuZ", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2022-12-06T19:34:20Z", "updated_at": "2022-12-06T19:34:20Z", "author_association": "CONTRIBUTOR", "body": "I confirm that it works \ud83d\udc4d ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1473659191, "label": "Incorrect link from the API explorer to the JSON API documentation"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1973#issuecomment-1369044959", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1973", "id": 1369044959, "node_id": "IC_kwDOBm6k_c5Rmfff", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-02T15:41:40Z", "updated_at": "2023-01-02T15:41:40Z", "author_association": "CONTRIBUTOR", "body": "Thanks for the response!\r\n\r\nYes, it does seem like a pretty nice developer experience--both the automagical labelling of fkeys, and the ability to index the row by column name in addition to column index.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1515815014, "label": "render_cell plugin hook's row object is not a sqlite.Row"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1973#issuecomment-1407523547", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1973", "id": 1407523547, "node_id": "IC_kwDOBm6k_c5T5Rrb", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-29T00:40:31Z", "updated_at": "2023-01-29T00:40:31Z", "author_association": "CONTRIBUTOR", "body": "A +1 for switching to `CustomRow`: I think you currently only get a `CustomRow` if the result set had a column that was an fkey ([this code](https://github.com/simonw/datasette/blob/3c352b7132ef09b829abb69a0da0ad00be5edef9/datasette/views/table.py#L667-L682))\r\n\r\nOtherwise you get vanilla `sqlite3.Row`s, which will fail if you try to access `.columns` or lookup the cell by name, which surprised me recently", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1515815014, "label": "render_cell plugin hook's row object is not a sqlite.Row"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1978#issuecomment-1375708725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1978", "id": 1375708725, "node_id": "IC_kwDOBm6k_c5R_6Y1", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2023-01-09T14:30:00Z", "updated_at": "2023-01-09T14:30:00Z", "author_association": "CONTRIBUTOR", "body": "Totally missed that issue. I can close this as a duplicate.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1522778923, "label": "Document datasette.urls.row and row_blob"}, "performed_via_github_app": null}