{"html_url": "https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008166084", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/365", "id": 1008166084, "node_id": "IC_kwDOCGYnMM48F2TE", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-01-08T22:32:47Z", "updated_at": "2022-01-08T22:32:47Z", "author_association": "CONTRIBUTOR", "body": "or using \u201c pragma optimize\u201d", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1096558279, "label": "create-index should run analyze after creating index"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008164786", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/365", "id": 1008164786, "node_id": "IC_kwDOCGYnMM48F1-y", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-01-08T22:24:19Z", "updated_at": "2022-01-08T22:24:19Z", "author_association": "CONTRIBUTOR", "body": "the out-of-date scenario you describe could be addressed by automatically adding an analyze to the insert or convert commands if they implicate an index", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1096558279, "label": "create-index should run analyze after creating index"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008164116", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/365", "id": 1008164116, "node_id": "IC_kwDOCGYnMM48F10U", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-01-08T22:18:57Z", "updated_at": "2022-01-08T22:18:57Z", "author_association": "CONTRIBUTOR", "body": "the table with the query ran so bad was about 50k. \r\n\r\ni think the scenario should not be worse than no stats. \r\n\r\ni also did not know that sqlite was so different from postgres and needed an explicit analyze call.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1096558279, "label": "create-index should run analyze after creating index"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008161965", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/365", "id": 1008161965, "node_id": "IC_kwDOCGYnMM48F1St", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-01-08T22:02:56Z", "updated_at": "2022-01-08T22:02:56Z", "author_association": "CONTRIBUTOR", "body": "for options 2 and 3, i would worry about discoverablity. \r\n\r\nin other db\u2019s it is not necessary to explicitly call analyze for most indices. ie for postgres\r\n\r\n> The system regularly collects statistics on all of a table's columns. Newly-created non-expression indexes can immediately use these statistics to determine an index's usefulness.\r\n\r\ni suppose i would propose raising a warning if the stats table is created that explains what is going on and informs users about a \u2014no-analyze argument.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1096558279, "label": "create-index should run analyze after creating index"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1574#issuecomment-1007844190", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1574", "id": 1007844190, "node_id": "IC_kwDOBm6k_c48Ente", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-01-08T00:42:12Z", "updated_at": "2022-01-08T00:42:12Z", "author_association": "CONTRIBUTOR", "body": "is there a reason to not always use the slim option?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1084193403, "label": "introduce new option for datasette package to use a slim base image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1007636709", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/365", "id": 1007636709, "node_id": "IC_kwDOCGYnMM48D1Dl", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-01-07T18:28:33Z", "updated_at": "2022-01-07T18:29:43Z", "author_association": "CONTRIBUTOR", "body": "i added an index to one table with sqlite-utils, and then a query that used to take about 1 second started taking hundreds of seconds. \r\n\r\nrunning analyze got me back to sub second speed.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1096558279, "label": "create-index should run analyze after creating index"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1583#issuecomment-1002825217", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1583", "id": 1002825217, "node_id": "IC_kwDOBm6k_c47xeYB", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-30T00:34:16Z", "updated_at": "2021-12-30T00:34:16Z", "author_association": "CONTRIBUTOR", "body": "if that is not desirable, it might be good to document that users might want to set up a lifecycle rule to automatically delete these build artifacts. something like https://stackoverflow.com/questions/59937542/can-i-delete-container-images-from-google-cloud-storage-artifacts-bucket", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1090810196, "label": "consider adding deletion step of cloudbuild artifacts to gcloud publish"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1547#issuecomment-997519202", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1547", "id": 997519202, "node_id": "IC_kwDOBm6k_c47dO9i", "user": {"value": 127565, "label": "wragge"}, "created_at": "2021-12-20T01:36:58Z", "updated_at": "2021-12-20T01:36:58Z", "author_association": "CONTRIBUTOR", "body": "Yep, that works -- thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1076388044, "label": "Writable canned queries fail to load custom templates"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1547#issuecomment-997511968", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1547", "id": 997511968, "node_id": "IC_kwDOBm6k_c47dNMg", "user": {"value": 127565, "label": "wragge"}, "created_at": "2021-12-20T01:21:59Z", "updated_at": "2021-12-20T01:21:59Z", "author_association": "CONTRIBUTOR", "body": "I've installed the alpha version but get an error when starting up Datasette:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/tim/.pyenv/versions/stock-exchange/bin/datasette\", line 5, in \r\n from datasette.cli import cli\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/cli.py\", line 15, in \r\n from .app import Datasette, DEFAULT_SETTINGS, SETTINGS, SQLITE_LIMIT_ATTACHED, pm\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/app.py\", line 31, in \r\n from .views.database import DatabaseDownload, DatabaseView\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/views/database.py\", line 25, in \r\n from datasette.plugins import pm\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/plugins.py\", line 29, in \r\n mod = importlib.import_module(plugin)\r\n File \"/Users/tim/.pyenv/versions/3.8.5/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/filters.py\", line 9, in \r\n @hookimpl(specname=\"filters_from_request\")\r\nTypeError: __call__() got an unexpected keyword argument 'specname'\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1076388044, "label": "Writable canned queries fail to load custom templates"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1561#issuecomment-997128712", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1561", "id": 997128712, "node_id": "IC_kwDOBm6k_c47bvoI", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-18T02:35:48Z", "updated_at": "2021-12-18T02:35:48Z", "author_association": "CONTRIBUTOR", "body": "interesting! i love this feature. this + full caching with cloudflare is really super!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1082765654, "label": "add hash id to \"_memory\" url if hashed url mode is turned on and crossdb is also turned on"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1552#issuecomment-996229007", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1552", "id": 996229007, "node_id": "IC_kwDOBm6k_c47YT-P", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2021-12-16T22:04:39Z", "updated_at": "2021-12-16T22:04:39Z", "author_association": "CONTRIBUTOR", "body": "Wow, that was fast, thank you so much @simonw !\r\n\r\n> I'm also not convinced that this configuration syntax is right. It's a bit weird having a `\"facets\"` list that can either by column-name-strings or `{\"type-of-facet\": \"column-name\"}` objects. Maybe there's a better design for this?\r\n\r\nI agree that it's not ideal, my initial naive approach was to detect if it's an array, like what is done here:\r\n\r\nhttps://github.com/simonw/datasette/blob/2c07327d23d9c5cf939ada9ba4091c1b8b2ba42d/datasette/facets.py#L312-L313\r\n\r\nBut it requires an extra query to determine the type, which is a bit problematic, especially for big tables I guess.\r\n\r\nTaking a look at #510, I wonder if a `facet_delimiter` should be defined for that kind of columns (that would help our team not to have an intermediary conversion step from `foo|bar` to `[\"foo\",\"bar\"]` for instance).\r\n\r\nTo be consistent with the `--extract-column` parameter, maybe an explicit casting/delimiter would be useful: `--set-column 'Foo:Array:|'`.\r\n\r\nThrowing a lot of ideas without knowing the big picture\u2026 but sometimes newcomers have superpowers :).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1078702875, "label": "Allow to set `facets_array` in metadata (like current `facets`)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1552#issuecomment-995296725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1552", "id": 995296725, "node_id": "IC_kwDOBm6k_c47UwXV", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2021-12-15T23:29:32Z", "updated_at": "2021-12-15T23:29:32Z", "author_association": "CONTRIBUTOR", "body": "@simonw thank you for your fast answer and your guidance!\r\n\r\nWhile digging into the code, I found an undocumented way of doing it:\r\n\r\n```yaml\r\nfacets: [\"Facet for a column\", {\"array\": \"Facet for an array\"}]\r\n```\r\n\r\nThe only remaining problem with that solution is here: https://github.com/simonw/datasette/blob/250db8192cb8aba5eb8cd301ccc2a49525bc3d24/datasette/facets.py#L33\r\n\r\nWe have:\r\n\r\n```python\r\ntype, metadata_config = metadata_config.items()[0]\r\n```\r\n\r\nBut it requires to cast the `dict_items` as a list prior to access the first element:\r\n\r\n```python\r\ntype, metadata_config = list(metadata_config.items())[0]\r\n```\r\n\r\nI guess it's an unspotted bug? (I mean, independently of the facets-with-arrays issue.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1078702875, "label": "Allow to set `facets_array` in metadata (like current `facets`)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/526#issuecomment-993078038", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/526", "id": 993078038, "node_id": "IC_kwDOBm6k_c47MSsW", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-14T01:46:52Z", "updated_at": "2021-12-14T01:46:52Z", "author_association": "CONTRIBUTOR", "body": "the nested query idea is very nice, and i stole if for [my client side paginator](https://observablehq.com/d/1d5da3a3c3f2f347#DatasetteClient). However, it won't do the right thing if the original query orders by random().\r\n\r\nIf you go the nested query route, maybe raise a 4XX status code if the query has such a clause?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459882902, "label": "Stream all results for arbitrary SQL and canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1553#issuecomment-993014772", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1553", "id": 993014772, "node_id": "IC_kwDOBm6k_c47MDP0", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-13T23:46:18Z", "updated_at": "2021-12-13T23:46:18Z", "author_association": "CONTRIBUTOR", "body": "these headers would also be relevant for json exports of custom queries", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1079111498, "label": "if csv export is truncated in non streaming mode set informative response header"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1553#issuecomment-992986587", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1553", "id": 992986587, "node_id": "IC_kwDOBm6k_c47L8Xb", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-13T22:57:04Z", "updated_at": "2021-12-13T22:57:04Z", "author_association": "CONTRIBUTOR", "body": "would also be good if the header said the what the max row limit was", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1079111498, "label": "if csv export is truncated in non streaming mode set informative response header"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/526#issuecomment-992971072", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/526", "id": 992971072, "node_id": "IC_kwDOBm6k_c47L4lA", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-13T22:29:34Z", "updated_at": "2021-12-13T22:29:34Z", "author_association": "CONTRIBUTOR", "body": "just came by to open this issue. would make my data analysis in observable a lot better!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459882902, "label": "Stream all results for arbitrary SQL and canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1549#issuecomment-991754237", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1549", "id": 991754237, "node_id": "IC_kwDOBm6k_c47HPf9", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-11T19:14:39Z", "updated_at": "2021-12-11T19:14:39Z", "author_association": "CONTRIBUTOR", "body": "that option is not available on [custom queries](https://labordata.bunkum.us/odpr-962a140?sql=with+local_union_filings+as+%28%0D%0A++select+*+from+lm_data+%0D%0A++where%0D%0A++++yr_covered+%3E+cast%28strftime%28%27%25Y%27%2C+%27now%27%2C+%27-5+years%27%29+as+int%29%0D%0A++++and+desig_name+%3D+%27LU%27%0D%0A++order+by+yr_covered+desc%0D%0A%29%2C%0D%0Amost_recent_filing+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from+local_union_filings%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++*%0D%0Afrom%0D%0A++most_recent_filing%0D%0Awhere%0D%0A++next_election+%3E%3D+strftime%28%27%25Y-%25m%27%2C+%27now%27%29%0D%0A++and+next_election+%3C+strftime%28%27%25Y-%25m%27%2C+%27now%27%2C+%27%2B1+year%27%29%0D%0Aorder+by%0D%0A++members+desc%3B).\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1077620955, "label": "Redesign CSV export to improve usability"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/353#issuecomment-991405755", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/353", "id": 991405755, "node_id": "IC_kwDOCGYnMM47F6a7", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-12-11T01:38:29Z", "updated_at": "2021-12-11T01:38:29Z", "author_association": "CONTRIBUTOR", "body": "wow! that's awesome! thanks so much, @simonw!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1077102934, "label": "Allow passing a file of code to \"sqlite-utils convert\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/348#issuecomment-983155079", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/348", "id": 983155079, "node_id": "IC_kwDOCGYnMM46mcGH", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-12-01T00:28:40Z", "updated_at": "2021-12-01T00:28:40Z", "author_association": "CONTRIBUTOR", "body": "I'd use this. Right now, I tend to do `touch my.db` and then `enable-wal` or whatever else, but I'm never sure if that's a bad idea.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1067771698, "label": "Command for creating an empty database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1522#issuecomment-976117989", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1522", "id": 976117989, "node_id": "IC_kwDOBm6k_c46LmDl", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-11-23T03:00:34Z", "updated_at": "2021-11-23T03:00:34Z", "author_association": "CONTRIBUTOR", "body": "I tried deploying the most recent version of the Dockerfile in this thread ([link to comment](https://github.com/simonw/datasette/issues/1522#issuecomment-974605128)), and after trying a few different different combinations, I was only successful when I used `--no-cpu-throttling` (\"CPU Is always allocated\" in the UI)\r\n\r\nUsing this method, I got a very similar issue to you: The first time I'd load the site I'd get a 503. But after that first load, I didn't get the issue again. It would re-occur if the service started from cold boot. \r\n\r\nI suspect this is a race condition in the supervisord configuration. The errors I got were the same `Connection refused: AH00957: http: attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed`, and that seems to indicate that `datasette` hadn't yet started. \r\n\r\nLooking at the order of logs getting back, the processes reported successfully completing loading after the first 503 was returned, so that makes me think race condition. \r\n\r\nI can replicate this locally, if I `docker run` and request `localhost:5000/prefix` _before_ I get the `datasette entered RUNNING state` message. Cloud Run wakes up when requests are received, so this test would semi-replicate that, but local docker would be the equivalent of a persistent process, hence it doesn't normally exhibit the same issues.\r\n\r\nUnfortunately supervisor/supervisor issue 122 (not linking as to prevent cross-project link spam) seems to say that dependency chaining is a feature that's been asked for for a long time, but hasn't been implemented. You could try some suggestions in that thread. ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058896236, "label": "Deploy a live instance of demos/apache-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1528#issuecomment-975955589", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1528", "id": 975955589, "node_id": "IC_kwDOBm6k_c46K-aF", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2021-11-22T22:00:30Z", "updated_at": "2021-11-22T22:00:30Z", "author_association": "CONTRIBUTOR", "body": "Oh, another thing to consider: I believe this would be the first `\"_file\"` key in datasette's metadata, compared to other `\"_url\"` keys like `\"license_url\"` or `\"about_url\"`. Not too sure what considerations to include with this (ex should missing files cause Datasette to stop before starting, should build scripts bundle these sql files somewhere during `datasette package`, etc.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1060631257, "label": "Add new `\"sql_file\"` key to Canned Queries in metadata?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1495#issuecomment-974108455", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1495", "id": 974108455, "node_id": "IC_kwDOBm6k_c46D7cn", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-11-19T14:14:35Z", "updated_at": "2021-11-19T14:14:35Z", "author_association": "CONTRIBUTOR", "body": "A nudge on this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1033678984, "label": "Allow routes to have extra options"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1514#issuecomment-972852184", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1514", "id": 972852184, "node_id": "IC_kwDOBm6k_c45_IvY", "user": {"value": 49699333, "label": "dependabot[bot]"}, "created_at": "2021-11-18T13:11:15Z", "updated_at": "2021-11-18T13:11:15Z", "author_association": "CONTRIBUTOR", "body": "Superseded by #1516.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1056117435, "label": "Bump black from 21.9b0 to 21.11b0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1500#issuecomment-971568829", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1500", "id": 971568829, "node_id": "IC_kwDOBm6k_c456Pa9", "user": {"value": 49699333, "label": "dependabot[bot]"}, "created_at": "2021-11-17T13:13:58Z", "updated_at": "2021-11-17T13:13:58Z", "author_association": "CONTRIBUTOR", "body": "Superseded by #1514.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1041158024, "label": "Bump black from 21.9b0 to 21.10b0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-970266123", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 970266123, "node_id": "IC_kwDOBm6k_c451RYL", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2021-11-16T13:18:36Z", "updated_at": "2021-11-16T13:18:36Z", "author_association": "CONTRIBUTOR", "body": "Congratulations, looks like it went through! There was a bit of a hold-up\non the JupyterLab ones, but it's semi automated: a dependabot pr to\nwarehouse and a CI deploy, with a click in between.\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1380#issuecomment-967747190", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1380", "id": 967747190, "node_id": "IC_kwDOBm6k_c45rqZ2", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-11-13T00:47:26Z", "updated_at": "2021-11-13T00:47:26Z", "author_association": "CONTRIBUTOR", "body": "Would it make sense to run datasette with a fswatch/inotifywait on a folder, then? ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 924748955, "label": "Serve all db files in a folder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/26#issuecomment-964205475", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/26", "id": 964205475, "node_id": "IC_kwDOCGYnMM45eJuj", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-11-09T14:31:29Z", "updated_at": "2021-11-09T14:31:29Z", "author_association": "CONTRIBUTOR", "body": "i was just reaching for a tool to do this this morning", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455486286, "label": "Mechanism for turning nested JSON into foreign keys / many-to-many"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-851567204", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 851567204, "node_id": "MDEyOklzc3VlQ29tbWVudDg1MTU2NzIwNA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-05-31T15:42:10Z", "updated_at": "2021-11-04T03:15:01Z", "author_association": "CONTRIBUTOR", "body": "I very much want to make:\r\n https://list.SaferDisinfectants.org/disinfectants/listN \r\nhave this URL:\r\n https://list.SaferDisinfectants.org/\r\n \r\nI'm using only one table page on the site, with no pagination. I'm not using the home page, though when I tried to move my table to the home page as mentioned above, I failed to figure out how. \r\n\r\nI am using cloudflare, but I haven't figured out a forwarding or HTML re-write method of doing this, either.\r\n\r\nIs there any way I can get a prettier list URL? I'm on Vercel.\r\n\r\n(I have a wordpress site on the main domain on a separate host.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1495#issuecomment-960420237", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1495", "id": 960420237, "node_id": "IC_kwDOBm6k_c45PtmN", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-11-04T03:12:01Z", "updated_at": "2021-11-04T03:12:01Z", "author_association": "CONTRIBUTOR", "body": "This all looks promising! I will need detailed documentation on how to upgrade datasette once it's available, and how to implement. (@fgregg example looks very straightforward on the plugin front.) \r\nI'll be so excited if I can get:\r\nhttps://list.saferdisinfectants.org/ \r\ninstead of\r\nhttps://list.saferdisinfectants.org/disinfectants/listN\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1033678984, "label": "Allow routes to have extra options"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1495#issuecomment-954384496", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1495", "id": 954384496, "node_id": "IC_kwDOBm6k_c444sBw", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-10-29T03:07:13Z", "updated_at": "2021-10-29T03:07:13Z", "author_association": "CONTRIBUTOR", "body": "okay @simonw, made the requested changes. tests are running locally. i think this is ready for you to look at again.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1033678984, "label": "Allow routes to have extra options"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/242#issuecomment-953911245", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/242", "id": 953911245, "node_id": "IC_kwDOCGYnMM4424fN", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-10-28T14:37:55Z", "updated_at": "2021-10-28T14:37:55Z", "author_association": "CONTRIBUTOR", "body": "I've been thinking about this a bit lately, doing a project that involves moving a lot of data in and out of SQLite files, datasette and GeoJSON. This has me leaning toward the idea that something like [`datasette query`](https://github.com/simonw/datasette/issues/1356) would be a better place to do async queries.\r\n\r\nI know there's a lot of overlap in sqlite-utils and datasette, and maybe keeping sqlite-utils synchronous would let datasette be entirely async and give a cleaner separation of implementations.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 817989436, "label": "Async support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1380#issuecomment-953366110", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1380", "id": 953366110, "node_id": "IC_kwDOBm6k_c440zZe", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-10-27T22:48:55Z", "updated_at": "2021-10-27T22:48:55Z", "author_association": "CONTRIBUTOR", "body": "It looks like if the files argument is a directory, `config_dir` is set, but files in that folder are only loaded into `self.files` at the `Datasette` class initialisation. \r\n\r\nI tried seeing if I could get `--reload` to work, but I'm getting issues trying to use that command when specifying a directory, as the command `serve` ends up in the files list(?): \r\n\r\n```\r\ndatasette serve . --reload\r\nError: Invalid value for '[FILES]...': Path 'serve' does not exist.\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 924748955, "label": "Serve all db files in a folder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1380#issuecomment-953334718", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1380", "id": 953334718, "node_id": "IC_kwDOBm6k_c440ru-", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-10-27T21:45:04Z", "updated_at": "2021-10-27T21:45:04Z", "author_association": "CONTRIBUTOR", "body": "I am also getting this issue, using the currently most recent version of datasette\r\n\r\n```\r\n$ datasette --version\r\ndatasette, version 0.59.1\r\n```\r\n\r\nIf I run `datasette` within just a folder of files, \r\n\r\n```\r\n$ datasette serve .\r\n```\r\n\r\nAdding new files while datasette is running shows no new files, and removing files causes datasette to return 500 errors. \r\n\r\n\r\n```\r\nhome\r\nError 500\r\n[Errno 2] No such file or directory: 'mydatabase.db'\r\nPowered by Datasette\r\n```\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 924748955, "label": "Serve all db files in a folder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1401#issuecomment-950150483", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1401", "id": 950150483, "node_id": "IC_kwDOBm6k_c44oiVT", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2021-10-23T13:09:10Z", "updated_at": "2021-10-23T13:09:10Z", "author_association": "CONTRIBUTOR", "body": "I think it's because of this in `app.css` \r\n\r\n```\r\nol,\r\nul {\r\n\tlist-style: none;\r\n}\r\n```\r\n\r\nhttps://github.com/simonw/datasette/blame/main/datasette/static/app.css#L35-L38\r\n\r\nYou could probably reinstate that by providing your own CSS.\r\nhttps://docs.datasette.io/en/0.24/custom_templates.html#custom-css-and-javascript", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 950664971, "label": "unordered list is not rendering bullet points in description_html on database page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-949604763", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 949604763, "node_id": "IC_kwDOBm6k_c44mdGb", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-10-22T12:54:34Z", "updated_at": "2021-10-22T12:54:34Z", "author_association": "CONTRIBUTOR", "body": "i'm going to take a swing at this today. we'll see.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-947203725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 947203725, "node_id": "IC_kwDOBm6k_c44dS6N", "user": {"value": 110420, "label": "ghing"}, "created_at": "2021-10-20T00:21:54Z", "updated_at": "2021-10-20T00:21:54Z", "author_association": "CONTRIBUTOR", "body": "This StackOverflow post, [sqlite - Cloud Run: Why does my instance need so much RAM?](https://stackoverflow.com/questions/59812405/cloud-run-why-does-my-instance-need-so-much-ram), points to [this section of the Cloud Run docs](https://cloud.google.com/run/docs/troubleshooting) that says:\r\n\r\n> Note that the Cloud Run container instances run in an environment where the files written to the local filesystem count towards the available memory. This also includes any log files that are not written to /var/log/* or /dev/log.\r\n\r\nDoes datasette write any large files when starting? \r\n\r\nOr does the [`COPY` command in the Dockerfile](https://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L349) count as writing to the local filesystem?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-947196177", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 947196177, "node_id": "IC_kwDOBm6k_c44dRER", "user": {"value": 110420, "label": "ghing"}, "created_at": "2021-10-20T00:05:10Z", "updated_at": "2021-10-20T00:05:10Z", "author_association": "CONTRIBUTOR", "body": "I was looking through the Dockerfile-generation code to see if there was anything that would cause memory usage to be a lot during deployment. \r\n\r\nI noticed that the Dockerfile [runs `datasette --inspect`](https://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L354). Is it possible that this is using a lot of memory usage?\r\n\r\nOr would that come into play when running `gcloud builds submit`, not when it's actually deployed?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1396#issuecomment-946467547", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1396", "id": 946467547, "node_id": "IC_kwDOBm6k_c44afLb", "user": {"value": 72577720, "label": "MichaelTiemannOSC"}, "created_at": "2021-10-19T08:10:26Z", "updated_at": "2021-10-19T08:10:26Z", "author_association": "CONTRIBUTOR", "body": "Now that 0.59 has excellent annotated release notes, you can re-confirm this is fixed by updating the published Docker image and checking that these fixes still work ;-)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 944903881, "label": "\"invalid reference format\" publishing Docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1432#issuecomment-946287922", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1432", "id": 946287922, "node_id": "IC_kwDOBm6k_c44ZzUy", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-10-19T01:16:41Z", "updated_at": "2021-10-19T01:16:41Z", "author_association": "CONTRIBUTOR", "body": "Resolved, with assistance from @ashishdotme (Thank you!)\r\n\r\nUpdated requirements.txt to include:\r\n```\r\ndatasette==0.59\r\ndatasette-publish-vercel==0.11\r\nsqlite-utils==3.6\r\n```\r\n\r\nRan:\r\n```\r\n$ pip3 install -r requirements.txt\r\n```\r\nThe site is back at work! Yay!\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 969855774, "label": "Rename Datasette.__init__(config=) parameter to settings="}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1432#issuecomment-946255239", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1432", "id": 946255239, "node_id": "IC_kwDOBm6k_c44ZrWH", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-10-18T23:55:25Z", "updated_at": "2021-10-18T23:55:25Z", "author_association": "CONTRIBUTOR", "body": "I am getting this when I visit my live Datasette page:\r\n```\r\nThis Serverless Function has crashed.\r\nYour connection is working correctly.\r\nVercel is working correctly.\r\n500: INTERNAL_SERVER_ERROR\r\nCode: FUNCTION_INVOCATION_FAILED\r\nID: ...\r\n```\r\nAnd in the server logs, I'm getting\r\n\r\n```\r\n[GET] /disinfectants/listN\r\n19:53:14:23\r\nmodule initialization error: __init__() got an unexpected keyword argument 'config'\r\nmodule initialization error\r\n__init__() got an unexpected keyword argument 'config'\r\n```\r\n Which is the same error that @ashishdotme reported above.\r\n \r\n \r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 969855774, "label": "Rename Datasette.__init__(config=) parameter to settings="}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1489#issuecomment-943594738", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1489", "id": 943594738, "node_id": "IC_kwDOBm6k_c44Phzy", "user": {"value": 49699333, "label": "dependabot[bot]"}, "created_at": "2021-10-14T18:04:13Z", "updated_at": "2021-10-14T18:04:13Z", "author_association": "CONTRIBUTOR", "body": "OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`. You can also ignore all major, minor, or patch releases for a dependency by adding an [`ignore` condition](https://docs.github.com/en/code-security/supply-chain-security/configuration-options-for-dependency-updates#ignore) with the desired `update_types` to your config file.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1026379132, "label": "Update pyyaml requirement from ~=5.3 to >=5.3,<7.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1489#issuecomment-943594735", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1489", "id": 943594735, "node_id": "IC_kwDOBm6k_c44Phzv", "user": {"value": 49699333, "label": "dependabot[bot]"}, "created_at": "2021-10-14T18:04:12Z", "updated_at": "2021-10-14T18:04:12Z", "author_association": "CONTRIBUTOR", "body": "Looks like this PR is closed. If you re-open it I'll rebase it as long as no-one else has edited it (you can use `@dependabot reopen` if the branch has been deleted).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1026379132, "label": "Update pyyaml requirement from ~=5.3 to >=5.3,<7.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1480#issuecomment-938171377", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1480", "id": 938171377, "node_id": "IC_kwDOBm6k_c4361vx", "user": {"value": 110420, "label": "ghing"}, "created_at": "2021-10-07T21:33:12Z", "updated_at": "2021-10-07T21:33:12Z", "author_association": "CONTRIBUTOR", "body": "Thanks for the reply @simonw. What services have you had better success with than Cloud Run for larger database?\r\n\r\nAlso, what about my issue description makes you think there may be a workaround?\r\n\r\nIs there any instrumentation I could add to see at which point in the deploy the memory usage spikes? Should I be able to see this whether it's running under Docker locally, or do you suspect this is Cloud Run-specific?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1015646369, "label": "Exceeding Cloud Run memory limits when deploying a 4.8G database"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/3#issuecomment-934372104", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/3", "id": 934372104, "node_id": "IC_kwDOD079W843sWMI", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2021-10-05T12:38:24Z", "updated_at": "2021-10-05T12:38:24Z", "author_association": "CONTRIBUTOR", "body": "As dogsheep-photos already uses [osxphotos](https://github.com/RhetTbull/osxphotos) to load photos you can access the EXIF data via osxphotos. Apple Photos imports a small subset of EXIF data at the time the photo is imported and osxphotos provides this via the [exif_info](https://github.com/RhetTbull/osxphotos#exifinfo) property. If you want the full EXIF data, osxphotos also provides a wrapper around [exiftool](https://github.com/RhetTbull/osxphotos#exiftool).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 602533481, "label": "Import EXIF data into SQLite - lens used, ISO, aperture etc"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1473#issuecomment-922394999", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1473", "id": 922394999, "node_id": "IC_kwDOBm6k_c42-qF3", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-09-19T00:44:39Z", "updated_at": "2021-09-19T00:45:32Z", "author_association": "CONTRIBUTOR", "body": "I replaced:\r\n```\r\n\r\n\r\n\r\n```\r\nwith:\r\n```\r\n\r\n```\r\n\r\nI'd still love to know what caused this (and how to troubleshoot to figure it out), so I'll leave it open for a bit, but I do have a functional logo linking to the Hugo home page, at least locally. I'll likely push tomorrow.\r\n\r\n(Before trying this, I tried to apply a background image to the `a` tag. That didn't work.)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 999902754, "label": "base logo link visits `undefined` rather than href url"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1473#issuecomment-922363640", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1473", "id": 922363640, "node_id": "IC_kwDOBm6k_c42-ib4", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-09-18T19:45:47Z", "updated_at": "2021-09-18T19:45:47Z", "author_association": "CONTRIBUTOR", "body": "An update, if I remove the `img` tag and replace it with the text, \"Safer or Toxic?\" it links to the right place.\r\n\r\nAlso, if I keep things exactly as they are, and it improperly, but consistently goes to the `undefined` page, on THAT 404 page, a click on the image properly clicks through to the www.SaferOrToxic.org page.\r\n\r\nWeird stuff.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 999902754, "label": "base logo link visits `undefined` rather than href url"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1453#issuecomment-919135732", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1453", "id": 919135732, "node_id": "IC_kwDOBm6k_c42yOX0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "created_at": "2021-09-14T13:10:38Z", "updated_at": "2021-09-14T13:10:38Z", "author_association": "CONTRIBUTOR", "body": "Superseded by #1471.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 982780906, "label": "Bump black from 21.7b0 to 21.8b0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-918621705", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 918621705, "node_id": "IC_kwDOBm6k_c42wQ4J", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-09-13T22:17:17Z", "updated_at": "2021-09-13T22:17:17Z", "author_association": "CONTRIBUTOR", "body": "> haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? Is it documented there?\r\n\r\nNot certain, but I think tests in CI run on Ubuntu and don't appear to install any additional Sqlite-related dependencies, and so my guess is the version of Sqlite installed by default on Ubuntu has the `SQLITE_ENABLE_FTS3_PARENTHESIS` option enabled and so doesn't run into this issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-917642487", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 917642487, "node_id": "IC_kwDOBm6k_c42shz3", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-12T14:03:09Z", "updated_at": "2021-09-12T14:03:09Z", "author_association": "CONTRIBUTOR", "body": "haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? Is it documented there?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/326#issuecomment-916119657", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/326", "id": 916119657, "node_id": "IC_kwDOCGYnMM42muBp", "user": {"value": 191622, "label": "meatcar"}, "created_at": "2021-09-09T13:54:10Z", "updated_at": "2021-09-09T13:54:10Z", "author_association": "CONTRIBUTOR", "body": "dupe of #293?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991237645, "label": "Test against 3.10-dev"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915343886", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915343886, "node_id": "IC_kwDOBm6k_c42jwoO", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-09-08T15:32:06Z", "updated_at": "2021-09-08T15:32:06Z", "author_association": "CONTRIBUTOR", "body": "Thanks, that does look similar!\r\n\r\n> Unfortunately, pysqlite3-binary isn't available for Mac OS X, so I can't quickly check that that fixes it; will do so later.\r\n\r\nAh that makes sense, I guess that's why this isn't just always installed already. I wonder if a possible solution to this issue could be doing feature detection on whether this feature is supported by the current Sqlite version, and if not these tests could be disabled locally? But possibly there's a better way to handle this, will see what @simonw thinks", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915302885", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915302885, "node_id": "IC_kwDOBm6k_c42jmnl", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-08T14:44:50Z", "updated_at": "2021-09-08T14:44:50Z", "author_association": "CONTRIBUTOR", "body": "thanks for the response! full errors attached; excerpt:\r\n\r\n```\r\n...\r\n\r\n\r\n def test_searchmode(table_metadata, querystring, expected_rows):\r\n with make_app_client(\r\n metadata={\"databases\": {\"fixtures\": {\"tables\": {\"searchable\": table_metadata}}}}\r\n ) as client:\r\n response = client.get(\"/fixtures/searchable.json?\" + querystring)\r\n> assert expected_rows == response.json[\"rows\"]\r\nE AssertionError: assert [[1, 'barry c...sel', 'puma']] == []\r\nE Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']\r\nE Use -v to get the full diff\r\n\r\n/Users/t/dev/datasette/tests/test_api.py:1115: AssertionError\r\n```\r\n\r\n[errors.txt](https://github.com/simonw/datasette/files/7129719/errors.txt)\r\n\r\nA quick scan of #1223 suggests you're right. Unfortunately, pysqlite3-binary isn't available for Mac OS X, so I can't quickly check that that fixes it; will do so later.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915299013", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915299013, "node_id": "IC_kwDOBm6k_c42jlrF", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-09-08T14:40:28Z", "updated_at": "2021-09-08T14:40:28Z", "author_association": "CONTRIBUTOR", "body": "What are the full errors you're getting?\r\n\r\nThis *may* be the same issue as described in https://github.com/simonw/datasette/pull/1223 - essentially the test suite (and corresponding Datasette features I assume) are by default implicitly dependent on your Sqlite installation having been compiled with the `SQLITE_ENABLE_FTS3_PARENTHESIS` option. If this is the same issue then I think this can be fixed either by recompiling with that option or (probably more easily) by running `pip install pysqlite3-binary`, which will be used in preference to your system Sqlite installation and has this option enabled.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915279711", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915279711, "node_id": "IC_kwDOBm6k_c42jg9f", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-08T14:16:49Z", "updated_at": "2021-09-08T14:16:49Z", "author_association": "CONTRIBUTOR", "body": "on commit d57ab156b35ec642", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1455#issuecomment-913001282", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1455", "id": 913001282, "node_id": "IC_kwDOBm6k_c42a0tC", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-04T16:31:24Z", "updated_at": "2021-09-04T16:31:24Z", "author_association": "CONTRIBUTOR", "body": "I love it! maybe 'researchers' instead? Or 'scientists and researchers'?", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 988325628, "label": "Add scientists to target groups"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/58#issuecomment-910121331", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/58", "id": 910121331, "node_id": "IC_kwDODEm0Qs42P1lz", "user": {"value": 42904, "label": "rubenv"}, "created_at": "2021-09-01T09:49:33Z", "updated_at": "2021-09-01T09:49:33Z", "author_association": "CONTRIBUTOR", "body": "Found the cause, it's the other commands. PR #59 submitted.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 984939366, "label": "Error: Use either --since or --since_id, not both - still broken"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-905904540", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 905904540, "node_id": "IC_kwDOBm6k_c41_wGc", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-08-25T21:59:14Z", "updated_at": "2021-08-25T21:59:55Z", "author_association": "CONTRIBUTOR", "body": "I did two tests: one with 1000 5-30mb DBs and a second with 20 multi gig DBs. For the second, I created them like so:\r\n`for i in {1..20}; do sqlite-generate db$i.db --tables ${i}00 --rows 100,2000 --columns 5,100 --pks 0 --fks 0; done`\r\n\r\nThis was for deciding whether to use lots of small DBs or to group things into a smaller number of bigger DBs. The second strategy wins.\r\n\r\nBy simply persisting the `_internal` DB to disk, I was able to avoid most of the performance issues I was experiencing previously. (To do this, I changed the `datasette/internal_db.py:init_internal_db` creates to if not exists, and changed the `_internal` DB instantiation in `datasette/app.py:Datasette.__init__` to a path with `is_mutable=True`.) Super rough, but the pages now load so I can continue testing ideas.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-905899177", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 905899177, "node_id": "IC_kwDOBm6k_c41_uyp", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-08-25T21:48:00Z", "updated_at": "2021-08-25T21:48:00Z", "author_association": "CONTRIBUTOR", "body": "Upon first stab, there's two issues here:\r\n- DB/table/row counts (as discussed above). This isn't too bad if the DBs are actually above the MAX limit check.\r\n- Populating the internal DB. On first load of a giant set of DBs, it can take 10-20 mins to populate. By altering datasette and persisting the internal DB to disk, this problem is vastly improved, but I'm sure this will cause problems elsewhere.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-904982056", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 904982056, "node_id": "IC_kwDOBm6k_c418O4o", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-08-24T21:15:04Z", "updated_at": "2021-08-24T21:15:30Z", "author_association": "CONTRIBUTOR", "body": "I'm running into issues with this as well. All other pages seem to work with lots of DBs except the home page, which absolutely tanks. Would be willing to put some work into this, if there's been any kind of progress on concepts on how this ought to work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1425#issuecomment-895003796", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1425", "id": 895003796, "node_id": "IC_kwDOBm6k_c41WKyU", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-08-09T07:14:35Z", "updated_at": "2021-08-09T07:14:35Z", "author_association": "CONTRIBUTOR", "body": "I believe this also provides a workaround for the problem I face in https://github.com/simonw/datasette/issues/1300. \r\n\r\nNow I should be able to get table PKs and generate a row URL. I'll test this out and report my findings.\r\n\r\n\r\n```py\r\nfrom datasette.utils import path_from_row_pks\r\n\r\npks = await db.primary_keys(table)\r\nurl = self.ds.urls.row_blob(\r\n database,\r\n table,\r\n path_from_row_pks(row, pks, not pks),\r\n column,\r\n)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 963528457, "label": "render_cell() hook should support returning an awaitable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1419#issuecomment-893114612", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1419", "id": 893114612, "node_id": "IC_kwDOBm6k_c41O9j0", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-08-05T02:29:06Z", "updated_at": "2021-08-05T02:29:06Z", "author_association": "CONTRIBUTOR", "body": "there's a lot of complexity here, that's probably not worth addressing. i got what i needed by patching the dockerfile that cloudrun uses to install a newer version of sqlite.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 959710008, "label": "`publish cloudrun` should deploy a more recent SQLite version"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1419#issuecomment-892276385", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1419", "id": 892276385, "node_id": "IC_kwDOBm6k_c41Lw6h", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-08-04T00:58:49Z", "updated_at": "2021-08-04T00:58:49Z", "author_association": "CONTRIBUTOR", "body": "yes, [filter clause on aggregate queries were added to sqlite3 in 3.30](https://www.sqlite.org/releaselog/3_30_1.html)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 959710008, "label": "`publish cloudrun` should deploy a more recent SQLite version"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1401#issuecomment-884910320", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1401", "id": 884910320, "node_id": "IC_kwDOBm6k_c40vqjw", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-07-22T13:26:01Z", "updated_at": "2021-07-22T13:26:01Z", "author_association": "CONTRIBUTOR", "body": "ordered lists didn't work either, btw", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 950664971, "label": "unordered list is not rendering bullet points in description_html on database page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1388#issuecomment-876213177", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1388", "id": 876213177, "node_id": "MDEyOklzc3VlQ29tbWVudDg3NjIxMzE3Nw==", "user": {"value": 80737, "label": "aslakr"}, "created_at": "2021-07-08T07:47:17Z", "updated_at": "2021-07-08T07:47:17Z", "author_association": "CONTRIBUTOR", "body": "> This sounds like a valuable feature for people running Datasette behind a proxy.\r\n\r\nYes, in some cases it is easer to use e.g. Apache's [ProxyPass Directive](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypass) with Unix Domain Socket like `unix:/home/www.socket|http://localhost/whatever/`.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 939051549, "label": "Serve using UNIX domain socket"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-869191854", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 869191854, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTE5MTg1NA==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-06-27T16:42:14Z", "updated_at": "2021-06-27T16:42:14Z", "author_association": "CONTRIBUTOR", "body": "This would really help with this issue: https://github.com/eyeseast/datasette-geojson/issues/7", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1168#issuecomment-869076254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1168", "id": 869076254, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NjI1NA==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-27T00:03:16Z", "updated_at": "2021-06-27T00:05:51Z", "author_association": "CONTRIBUTOR", "body": "> Related: Here's an implementation of a `get_metadata()` plugin hook by @brandonrobertz [next-LI@3fd8ce9](https://github.com/next-LI/datasette/commit/3fd8ce91f3108c82227bf65ff033929426c60437)\r\n\r\nHere's a plugin that implements metadata-within-DBs: [next-LI/datasette-live-config](https://github.com/next-LI/datasette-live-config)\r\n\r\nHow it works: If a database has a `__metadata` table, then it gets parsed and included in the global metadata. It also implements a database-action hook with a UI for managing config.\r\n\r\nMore context: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L109-L140", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777333388, "label": "Mechanism for storing metadata in _metadata tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074701", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074701, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDcwMQ==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:45:18Z", "updated_at": "2021-06-26T23:45:37Z", "author_association": "CONTRIBUTOR", "body": "> Here's where the plugin hook is called, demonstrating the `fallback=` argument:\r\n> \r\n> https://github.com/simonw/datasette/blob/05a312caf3debb51aa1069939923a49e21cd2bd1/datasette/app.py#L426-L472\r\n> \r\n> I'm not convinced of the use-case for passing `fallback=` to the hook here - is there a reason a plugin might care whether fallback is `True` or `False`, seeing as the `metadata()` method already respects that fallback logic on line 459?\r\n\r\nI think you're right. I can't think of a reason why the plugin would care about the `fallback` parameter since plugins are currently mandated to return a full, global metadata dict.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074182", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074182, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDE4Mg==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:37:42Z", "updated_at": "2021-06-26T23:37:42Z", "author_association": "CONTRIBUTOR", "body": "> > Hmmm... that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries.\r\n> > @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try?\r\n> \r\n> Answering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160\r\n> \r\n> That's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it.\r\n> \r\n> This is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem.\r\n> \r\n> But... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead.\r\n\r\n_Ideally_ this hook would be asynchronous, but when I started down that path I quickly realized how large of a change this would be, since metadata gets used synchronously across the entire Datasette codebase. (And calling async code from sync is non-trivial.)\r\n\r\nIn my live-configuration implementation I use synchronous reads using a persistent sqlite connection. This works pretty well in practice, but I agree it's limiting. My thinking around this was to go with the path of least change as `Datasette.metadata()` is a critical core function.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1368#issuecomment-865204472", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1368", "id": 865204472, "node_id": "MDEyOklzc3VlQ29tbWVudDg2NTIwNDQ3Mg==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-21T17:11:37Z", "updated_at": "2021-06-21T17:11:37Z", "author_association": "CONTRIBUTOR", "body": "If this is a concept ACK then I will move onto fixing the tests (adding new ones) and updating the documentation for the new plugin hook.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 913865304, "label": "DRAFT: A new plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/278#issuecomment-864621099", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/278", "id": 864621099, "node_id": "MDEyOklzc3VlQ29tbWVudDg2NDYyMTA5OQ==", "user": {"value": 601708, "label": "mcint"}, "created_at": "2021-06-20T22:39:57Z", "updated_at": "2021-06-20T22:39:57Z", "author_association": "CONTRIBUTOR", "body": "Fair. I looked into it, it looks like it could be done, but it would be _a bit ugly_. I can upload and link a gist of my exploration. **Click** can parse a first argument while still recognizing it as a sub-command keyword. From there, the program could:\r\n1. ignore it preemptively if it matches a sub-command\r\n2. and/or check if a (db) file exists at the path.\r\n\r\nIt would then also need to set a shared db argument variable.\r\n\r\nClick also makes it easy to parse arguments from environment variables. If you're amenable, I may submit a patch for only that, which would update each sub-command to check for a DB/SQLITE_UTILS_DB environment variable. The goal would be usage that looks like: `DB=./convenient.db sqlite-utils [operation] [args]`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 923697888, "label": "Support db as first parameter before subcommand, or as environment variable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/272#issuecomment-861944202", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/272", "id": 861944202, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MTk0NDIwMg==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-06-16T01:41:03Z", "updated_at": "2021-06-16T01:41:03Z", "author_association": "CONTRIBUTOR", "body": "So, I do things like this a lot, too. I like the idea of piping in from stdin. Something like this would be nice to do in a makefile:\r\n\r\n```sh\r\ncat file.csv | sqlite-utils --csv --table data - 'SELECT * FROM data WHERE col=\"whatever\"' > filtered.csv\r\n```\r\n\r\nIf you assumed that you're always piping out the same format you're piping in, the option names don't have to change. Depends how much you want to change formats.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 921878733, "label": "Idea: import CSV to memory, run SQL, export in a single command"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1130#issuecomment-861497548", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1130", "id": 861497548, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MTQ5NzU0OA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-06-15T13:27:48Z", "updated_at": "2021-06-15T13:27:48Z", "author_association": "CONTRIBUTOR", "body": "There's a workaround: https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/\r\n\r\nand a future fix: https://css-tricks.com/safari-15-new-ui-theme-colors-and-a-css-tricks-cameo/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 756876238, "label": "Fix footer not sticking to bottom in short pages"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1370#issuecomment-857298526", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1370", "id": 857298526, "node_id": "MDEyOklzc3VlQ29tbWVudDg1NzI5ODUyNg==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-06-09T01:18:59Z", "updated_at": "2021-06-09T01:18:59Z", "author_association": "CONTRIBUTOR", "body": "I'm happy to grab some or all of these in this PR, if you want. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 914130834, "label": "Ensure db.path is a string before trying to insert into internal database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1368#issuecomment-856182547", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1368", "id": 856182547, "node_id": "MDEyOklzc3VlQ29tbWVudDg1NjE4MjU0Nw==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-07T18:59:47Z", "updated_at": "2021-06-07T23:04:25Z", "author_association": "CONTRIBUTOR", "body": "Note that if we went with a \"update_metadata\" hook, the hook signature would look something like this (it would return nothing):\r\n\r\n```\r\nupdate_metadata(\r\n datasette=self, metadata=metadata, key=key, database=database, table=table,\r\n fallback=fallback\r\n)\r\n```\r\n\r\nThe Datasette function `_metadata_recursive_update(self, orig, updated)` would disappear into the plugins. Doing this, though, we'd lose the easy ability to make the local metadata.yaml immutable (since we'd no longer have the recursive update).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 913865304, "label": "DRAFT: A new plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1356#issuecomment-853895159", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1356", "id": 853895159, "node_id": "MDEyOklzc3VlQ29tbWVudDg1Mzg5NTE1OQ==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-06-03T14:03:59Z", "updated_at": "2021-06-03T14:03:59Z", "author_association": "CONTRIBUTOR", "body": "(Putting thoughts here to keep the conversation in one place.)\r\n\r\nI think using datasette for this use-case is the right approach. I usually have both datasette and sqlite-utils installed in the same project, and that's where I'm trying out queries, so it probably makes the most sense to have datasette also manage the output (and maybe the input, too).\r\n\r\nIt seems like both `--get` and `--query` could work better as subcommands, rather than options, if you're looking at building out a full CLI experience in datasette. It would give a cleaner separation in what you're trying to do and let each have its own dedicated options. So something like this:\r\n\r\n```sh\r\n# run an arbitrary query\r\ndatasette query covid.db \"select * from ny_times_us_counties limit 1\" --format yaml\r\n\r\n# run a canned query\r\ndatasette get covid.db some-canned-query --format yaml\r\n```\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 910092577, "label": "Research: syntactic sugar for using --get with SQL queries, maybe \"datasette query\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1348#issuecomment-850077261", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1348", "id": 850077261, "node_id": "MDEyOklzc3VlQ29tbWVudDg1MDA3NzI2MQ==", "user": {"value": 10801138, "label": "blairdrummond"}, "created_at": "2021-05-28T03:05:38Z", "updated_at": "2021-05-28T03:05:38Z", "author_association": "CONTRIBUTOR", "body": "Note, the CVEs are probably resolvable with this https://github.com/simonw/datasette/pull/1296 . My experience is that Ubuntu seems to manage these better? Though that is surprising :/ ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 904598267, "label": "DRAFT: add test and scan for docker images"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-846413174", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59", "id": 846413174, "node_id": "MDEyOklzc3VlQ29tbWVudDg0NjQxMzE3NA==", "user": {"value": 631242, "label": "frosencrantz"}, "created_at": "2021-05-22T14:06:19Z", "updated_at": "2021-05-22T14:06:19Z", "author_association": "CONTRIBUTOR", "body": "Thanks Simon!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771872303, "label": "Remove unneeded exists=True for -a/--auth flag."}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-842798043", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 842798043, "node_id": "MDEyOklzc3VlQ29tbWVudDg0Mjc5ODA0Mw==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-05-18T03:28:25Z", "updated_at": "2021-05-18T03:28:25Z", "author_association": "CONTRIBUTOR", "body": "That corner handle looks like a hamburger menu to me. Note that the default resize handle is not limited to two-way resize: http://jsfiddle.net/LLrh7Lte/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1318#issuecomment-838449572", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1318", "id": 838449572, "node_id": "MDEyOklzc3VlQ29tbWVudDgzODQ0OTU3Mg==", "user": {"value": 49699333, "label": "dependabot[bot]"}, "created_at": "2021-05-11T13:12:30Z", "updated_at": "2021-05-11T13:12:30Z", "author_association": "CONTRIBUTOR", "body": "Superseded by #1321.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 876431852, "label": "Bump black from 21.4b2 to 21.5b0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1280#issuecomment-837166862", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1280", "id": 837166862, "node_id": "MDEyOklzc3VlQ29tbWVudDgzNzE2Njg2Mg==", "user": {"value": 10801138, "label": "blairdrummond"}, "created_at": "2021-05-10T19:07:46Z", "updated_at": "2021-05-10T19:07:46Z", "author_association": "CONTRIBUTOR", "body": "Do you have a list of sqlite versions you want to test against?\r\n\r\nOne cool thing I saw recently (that we started using) was using `import docker` within python, and then writing pytest functions which executed against the container\r\n\r\n[setup](https://github.com/StatCan/kubeflow-containers/blob/3c7dcfb5e7188982fb8ebcded82e84292720f720/conftest.py#L85)\r\n\r\n[example](https://github.com/StatCan/kubeflow-containers/blob/master/tests/jupyterlab-cpu/test_julia.py#L8-L18)\r\n\r\nThe inspiration for this came from the [jupyter docker-stacks](https://github.com/jupyter/docker-stacks/blob/09fb66007615ea68d9bce8f8e1a2cf9402f1e432/test/test_packages.py#L107)\r\n\r\nSo off the top of my head, could look at building the container with different sqlite versions as a build-arg, then run tests against the containers. Just brainstorming though", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 842862708, "label": "Ability to run CI against multiple SQLite versions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1296#issuecomment-835491318", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1296", "id": 835491318, "node_id": "MDEyOklzc3VlQ29tbWVudDgzNTQ5MTMxOA==", "user": {"value": 10801138, "label": "blairdrummond"}, "created_at": "2021-05-08T19:59:01Z", "updated_at": "2021-05-08T19:59:01Z", "author_association": "CONTRIBUTOR", "body": "I have also found that ubuntu has fewer vulnerabilities than the buster based images.\r\n\r\n```\r\n\u279c ~ docker pull python:3-buster\r\n\u279c ~ trivy image python:3-buster | head \r\n2021-04-28T17:14:29.313-0400 INFO Detecting Debian vulnerabilities...\r\n2021-04-28T17:14:29.393-0400 INFO Trivy skips scanning programming language libraries because no supported file was detected\r\npython:3-buster (debian 10.9)\r\n=============================\r\nTotal: 1621 (UNKNOWN: 13, LOW: 1106, MEDIUM: 343, HIGH: 145, CRITICAL: 14)\r\n+------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+\r\n| LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION | TITLE |\r\n+------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855446829, "label": "Dockerfile: use Ubuntu 20.10 as base"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1300#issuecomment-833132571", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1300", "id": 833132571, "node_id": "MDEyOklzc3VlQ29tbWVudDgzMzEzMjU3MQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-05-06T00:16:50Z", "updated_at": "2021-05-06T00:18:05Z", "author_association": "CONTRIBUTOR", "body": "I ended up using some JS as a workaround. \r\n\r\nFirst, add a JS file in `metadata.yaml`:\r\n\r\n```yaml\r\nextra_js_urls:\r\n - '/static/app.js'\r\n```\r\nthen inside the script, find the blob download links and replace `.blob` extension in the url with `.jpg` and replace the links with `` elements. \r\nYou need to add an output formatter to serve `BLOB` columns as JPG. You can find the code in the first post.\r\n~~Replacing `.blob` -> `.jpg` might not even be necessary, because browsers only care about the mime type, so you only need to serve the binary content with the right `content-type` header.~~. You need to replace the extension, otherwise the output renderer will not run.\r\n\r\n```js\r\nwindow.addEventListener('DOMContentLoaded', () => {\r\n function renderBlobImages() {\r\n document.querySelectorAll('a[href*=\".blob\"]').forEach(el => {\r\n const img = document.createElement('img');\r\n img.className = 'blob-image';\r\n img.loading = 'lazy';\r\n img.src = el.href.replace('.blob', '.jpg');\r\n el.parentElement.replaceChild(img, el);\r\n });\r\n }\r\n\r\n renderBlobImages();\r\n});\r\n```\r\n\r\nwhile this does the job, I'd prefer handling this in Python where it belongs.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 860625833, "label": "Make row available to `render_cell` plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1313#issuecomment-829352402", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1313", "id": 829352402, "node_id": "MDEyOklzc3VlQ29tbWVudDgyOTM1MjQwMg==", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "created_at": "2021-04-29T15:47:23Z", "updated_at": "2021-04-29T15:47:23Z", "author_association": "CONTRIBUTOR", "body": "This pull request will no longer be automatically closed when a new version is found as this pull request was created by Dependabot Preview and this repo is using a `version: 2` config file. You can close this pull request and let Dependabot re-create it the next time it checks for updates.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 871046111, "label": "Bump black from 20.8b1 to 21.4b2"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1311#issuecomment-829260725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1311", "id": 829260725, "node_id": "MDEyOklzc3VlQ29tbWVudDgyOTI2MDcyNQ==", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "created_at": "2021-04-29T13:58:08Z", "updated_at": "2021-04-29T13:58:08Z", "author_association": "CONTRIBUTOR", "body": "Superseded by #1313.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 870227815, "label": "Bump black from 20.8b1 to 21.4b1"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1309#issuecomment-828679943", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1309", "id": 828679943, "node_id": "MDEyOklzc3VlQ29tbWVudDgyODY3OTk0Mw==", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "created_at": "2021-04-28T18:26:03Z", "updated_at": "2021-04-28T18:26:03Z", "author_association": "CONTRIBUTOR", "body": "Superseded by #1311.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 869237023, "label": "Bump black from 20.8b1 to 21.4b0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1298#issuecomment-823093669", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1298", "id": 823093669, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMzA5MzY2OQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-20T08:38:10Z", "updated_at": "2021-04-20T08:40:22Z", "author_association": "CONTRIBUTOR", "body": "@dracos I appreciate your ideas!\r\n\r\n1. Ooh, I like this: https://codepen.io/astro87/pen/LYRQNbd?editors=1100 (That's the codepen from your linked stackoverflow.)\r\n2. I worry that a max height will be a problem when my facets are open. (I've got 35 active ingredients, and so I've set the default_facet_size to 35.)\r\n3. I don't understand this one. I'm observing the screenshot... very helpful! (Ah, okay, TR = Top Right and BR = Bottom Right. Absolute grid refers to position style.) All the scroll bars look a little wonky to me. I've also got a lot of facets, and prefer the extra horizontal space so that not as many facets disappear below the fold. My site also has end users... some will be on mobile... not sure what the absolute grid would do there... \r\n4. (I still think a hover-arrow that scrolls upon click would help, too...)\r\n\r\nBut meanwhile, I'm going to go ahead and see if I can apply that shadow. (Never would've thought of that.) Hmmm... I'm not an SCSS person. This looks helpful! https://jsonformatter.org/scss-to-css", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855476501, "label": "improve table horizontal scroll experience"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1300#issuecomment-821971059", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1300", "id": 821971059, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMTk3MTA1OQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-04-18T10:42:19Z", "updated_at": "2021-04-18T10:42:19Z", "author_association": "CONTRIBUTOR", "body": "If there's a simpler way to generate a URL for a specific row, I'm all ears", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 860625833, "label": "Make row available to `render_cell` plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1300#issuecomment-821970965", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1300", "id": 821970965, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMTk3MDk2NQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-04-18T10:41:15Z", "updated_at": "2021-04-18T10:41:15Z", "author_association": "CONTRIBUTOR", "body": "If I change the hookspec and add a row parameter, it works\r\n\r\nhttps://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/hookspecs.py#L58\r\n\r\n```\r\ndef render_cell(value, column, row, table, database, datasette):\r\n```\r\n\r\nBut to generate a URL, I need the primary keys, but I can't call `pks = await db.primary_keys(table)` inside a sync function. I can't call `datasette.utils.detect_primary_keys` either, because the db connection is not publicly exposed (AFAICT).\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 860625833, "label": "Make row available to `render_cell` plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1296#issuecomment-819467759", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1296", "id": 819467759, "node_id": "MDEyOklzc3VlQ29tbWVudDgxOTQ2Nzc1OQ==", "user": {"value": 295329, "label": "camallen"}, "created_at": "2021-04-14T12:07:37Z", "updated_at": "2021-04-14T12:11:36Z", "author_association": "CONTRIBUTOR", "body": "> Removing /var/lib/apt and /var/lib/dpkg makes apt and dpkg unusable in\r\nimages based on this one. Running `apt-get clean` and removing\r\n/var/lib/apt/lists achieves similar size savings.\r\n\r\nthis PR helps me as removing the /var/lib/apt and /var/lib/dpkg directories breaks my ability to add packages when using `datasetteproject/datasette:0.56` as a base image.\r\n\r\n\r\n---- \r\nShorterm workaround for me was to use this in my Dockerfile\r\n```\r\nFROM datasetteproject/datasette:0.56\r\n\r\nRUN mkdir -p /var/lib/apt\r\nRUN mkdir -p /var/lib/dpkg\r\nRUN mkdir -p /var/lib/dpkg/updates\r\nRUN mkdir -p /var/lib/dpkg/info\r\nRUN touch /var/lib/dpkg/status\r\n\r\nRUN apt-get update # and install your packages etc\r\n```\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855446829, "label": "Dockerfile: use Ubuntu 20.10 as base"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/830#issuecomment-817414881", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/830", "id": 817414881, "node_id": "MDEyOklzc3VlQ29tbWVudDgxNzQxNDg4MQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-12T01:06:34Z", "updated_at": "2021-04-12T01:07:27Z", "author_association": "CONTRIBUTOR", "body": "Related: #1285, including arguments for natural breaks, equal interval, etc. modeled after choropleth map legends.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 636511683, "label": "Redesign register_facet_classes plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1286#issuecomment-815978405", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1286", "id": 815978405, "node_id": "MDEyOklzc3VlQ29tbWVudDgxNTk3ODQwNQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-08T16:47:29Z", "updated_at": "2021-04-10T03:59:00Z", "author_association": "CONTRIBUTOR", "body": "This worked for me: \r\n`{{ cell.value | replace('\", \"','; ') | replace('[\\\"','') | replace('\\\"]','')}}`\r\n\r\nI'm sure there is a prettier (and more flexible) way, but for now, this is ever-so-much more pleasant to look at. \r\n\r\n------ AFTER:\r\n\"Screen\r\n\r\n------ BEFORE:\r\n\"Screen\r\n\r\n\r\n\r\n(Note: I didn't figure out how to have one item have no semicolon, while multi-items close with a semicolon, but this is good enough for now. I also didn't figure out how to set up a new jinja filter. I don't want to add to /datasette/utils/__init__.py as I assume that would get overwritten when upgrading datasette. Having a starter guide on creating jinja filters in datasette would be helpful. (The jinja documentation isn't datasette-specific enough for me to quite nail it.)\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 849220154, "label": "Better default display of arrays of items"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/502#issuecomment-812813732", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/502", "id": 812813732, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMjgxMzczMg==", "user": {"value": 5413548, "label": "louispotok"}, "created_at": "2021-04-03T05:16:54Z", "updated_at": "2021-04-03T05:16:54Z", "author_association": "CONTRIBUTOR", "body": "For what it's worth, if anyone finds this in the future, I was having the same issue. \r\n\r\nAfter digging through the code, it turned out that the database download is only available if it the db served in immutable mode, so `datasette serve -i xyz.db` rather than the doc's quickstart recommendation of `datasette serve xyz.db`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 453131917, "label": "Exporting sqlite database(s)?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1286#issuecomment-812679221", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1286", "id": 812679221, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMjY3OTIyMQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-02T19:34:01Z", "updated_at": "2021-04-02T19:34:01Z", "author_association": "CONTRIBUTOR", "body": "This shows the city in a different color (and not the comma), but I get the idea, and I like it. (Ooh, could be nice to have the gear have an option in array fields to show as bullets or commas or semicolons...)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 849220154, "label": "Better default display of arrays of items"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-810779928", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 810779928, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMDc3OTkyOA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-31T05:40:12Z", "updated_at": "2021-03-31T05:40:12Z", "author_association": "CONTRIBUTOR", "body": "Maybe the addition of two template files: 'one_database_index.html' and 'one_table_index.html' would be a better idea than the documentation diff idea. (They could include commented instructions to rename the preferred template 'index.html', along with any other necessary guidance.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1274#issuecomment-805214307", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1274", "id": 805214307, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNTIxNDMwNw==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-03-23T20:12:29Z", "updated_at": "2021-03-23T20:12:29Z", "author_association": "CONTRIBUTOR", "body": "One issue I could see with adding first class support for metadata in hjson format is that this would require adding an additional dependency to handle this, for a feature that would be unused by many users. I wonder if this could fit in as a plugin instead; if a hook existed for loading metadata (maybe as part of https://github.com/simonw/datasette/issues/860) the metadata could then come from any source, as specified by plugins, e.g. hjson, toml, XML, a database table etc.\r\n\r\nUntil/unless this exists, a few ideas for how you could add comments:\r\n- Using YAML as you suggest.\r\n- A common pattern is adding a `\"comment\"` key for comments to any object in JSON - I don't think including an unnecessary key like this would break anything in Datasette, but not certain.\r\n- You could use another tool as a preprocessor for your JSON metadata - e.g. hjson or Jsonnet. You'd write the metadata in that format, and then convert that into JSON to actually use as your final metadata.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 839008371, "label": "Might there be some way to comment metadata.json?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1153#issuecomment-804640440", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1153", "id": 804640440, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDY0MDQ0MA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T05:58:20Z", "updated_at": "2021-03-23T05:58:20Z", "author_association": "CONTRIBUTOR", "body": "Could there be a little widget that offers conversion from one to the other? ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771202454, "label": "Use YAML examples in documentation by default, not JSON"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1159#issuecomment-804639427", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1159", "id": 804639427, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDYzOTQyNw==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T05:56:02Z", "updated_at": "2021-03-23T05:56:02Z", "author_association": "CONTRIBUTOR", "body": "With just three facets, I like it, but it does take more horizontal space. Would be nice to have a switch somewhere, enabling either original compact option or this proposed more-readable option. Also some control over word wrap (width setting) and facet spacing. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 774332247, "label": "Improve the display of facets information"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/164#issuecomment-804541064", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/164", "id": 804541064, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDU0MTA2NA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:45:12Z", "updated_at": "2021-03-23T02:45:12Z", "author_association": "CONTRIBUTOR", "body": "\"datasette skeleton\" feature removed #476", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 280013907, "label": "datasette skeleton command for kick-starting database and table metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/163#issuecomment-804539729", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/163", "id": 804539729, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDUzOTcyOQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:41:14Z", "updated_at": "2021-03-23T02:41:14Z", "author_association": "CONTRIBUTOR", "body": "I'm visiting old issues for context while learning datasette. Let me know if okay to make the occasional comment like this one.\r\nquerystring argument now located at:\r\nhttps://docs.datasette.io/en/latest/settings.html#sql-time-limit-ms", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 279547886, "label": "Document the querystring argument for setting a different time limit"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/88#issuecomment-804471733", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/88", "id": 804471733, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDQ3MTczMw==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-22T23:46:36Z", "updated_at": "2021-03-22T23:46:36Z", "author_association": "CONTRIBUTOR", "body": "Google Map API limits seem to prevent https://nhs-england-map.netlify.com from being a working demo.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273775212, "label": "Add NHS England Hospitals example to wiki"}, "performed_via_github_app": null}