{"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/16#issuecomment-623807568", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16", "id": 623807568, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzgwNzU2OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-05-05T02:56:06Z", "updated_at": "2020-05-05T02:56:06Z", "author_association": "MEMBER", "body": "I'm pretty sure this is what I'm after. The `groups` table has what looks like identified labels in the rows with category = 2025:\r\n\r\n\"words__groups__2_528_rows_where_where_category___2025\"\r\n\r\nThen there's a `ga` table that maps groups to assets:\r\n\r\n\"words__ga__633_653_rows\"\r\n\r\nAnd an `assets` table which looks like it has one row for every one of my photos:\r\n\r\n\"words__assets__40_419_rows\"\r\n\r\nOne major challenge: these UUIDs are split into two integer numbers, `uuid_0` and `uuid_1` - but the main photos database uses regular UUIDs like this:\r\n\r\n![image](https://user-images.githubusercontent.com/9599/81031481-39164280-8e41-11ea-983b-005ced641a18.png)\r\n\r\nI need to figure out how to match up these two different UUID representations. I asked on Twitter if anyone has any ideas: https://twitter.com/simonw/status/1257500689019703296", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612287234, "label": "Import machine-learning detected labels (dog, llama etc) from Apple Photos"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395209", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395209, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTIwOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-05-10T21:52:42Z", "updated_at": "2020-05-10T21:52:42Z", "author_association": "MEMBER", "body": "Aha! 
It looks like I accidentally installed the old bplist into the same environment:\r\n```\r\n$ pip freeze | grep bpylist\r\nbpylist==0.1.4\r\nbpylist2==3.0.0\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395781", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395781, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTc4MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-05-10T21:57:09Z", "updated_at": "2020-05-10T21:57:09Z", "author_association": "MEMBER", "body": "Yes, I just recreated my virtual environment from scratch and the error went away.\r\n\r\nThe problem occurred when I ran `pip install datasette-bplist` in the same virtual environment - https://github.com/simonw/datasette-bplist/blob/master/setup.py depends on `bpylist` which is incompatible with `bpylist2`.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1035717429", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 1035717429, "node_id": "IC_kwDOD079W849u8s1", "user": {"value": 18504, "label": "harperreed"}, "created_at": "2022-02-11T01:55:38Z", "updated_at": "2022-02-11T01:55:38Z", "author_association": "NONE", "body": "I would love this merged! ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1656696679", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 1656696679, "node_id": "IC_kwDOD079W85ivy9n", "user": {"value": 319473, "label": "coldclimate"}, "created_at": "2023-07-29T10:10:29Z", "updated_at": "2023-07-29T10:10:29Z", "author_association": "NONE", "body": "+1 to getting this merged down.\r\n\r\nFor future googlers, I installed by...\r\n```\r\ngit clone git@github.com:RhetTbull/dogsheep-photos.git\r\ncd dogsheep-photos\r\ngit checkout update_for_bigsur\r\npython setup.py install\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-811362316", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 811362316, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMTM2MjMxNg==", "user": {"value": 871250, "label": "PabloLerma"}, "created_at": "2021-03-31T19:14:39Z", "updated_at": "2021-03-31T19:14:39Z", "author_association": "NONE", "body": "\ud83d\udc4b could I help somehow for this to be merged? 
As Big Sur is going to be more used as the time goes I think it would be nice to merge and publish a new version. Nice work!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/evernote-to-sqlite/issues/11#issuecomment-777798330", "issue_url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/11", "id": 777798330, "node_id": "MDEyOklzc3VlQ29tbWVudDc3Nzc5ODMzMA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-11T21:18:58Z", "updated_at": "2021-02-11T21:18:58Z", "author_association": "MEMBER", "body": "Thanks for the fix!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 792851444, "label": "XML parse error"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622279374", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33", "id": 622279374, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMjI3OTM3NA==", "user": {"value": 2029, "label": "garethr"}, "created_at": "2020-05-01T07:12:47Z", "updated_at": "2020-05-01T07:12:47Z", "author_association": "NONE", "body": "I also go it working with:\r\n\r\n```yaml\r\nrun: echo ${{ secrets.github_token }} | github-to-sqlite auth\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 609950090, "label": "Fall back to authentication via ENV"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/34#issuecomment-622133298", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/34", "id": 622133298, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMjEzMzI5OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-04-30T21:48:24Z", "updated_at": "2020-04-30T21:48:24Z", "author_association": "MEMBER", "body": "Unfortunately it's not available through any GitHub API - I managed to figure out how to get dependencies, but I need dependents. https://github.com/simonw/til/blob/master/github/dependencies-graphql-api.md", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 610408908, "label": "Command for retrieving dependents for a repo"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770071568", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60", "id": 770071568, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MDA3MTU2OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-01-29T21:56:15Z", "updated_at": "2021-01-29T21:56:15Z", "author_association": "MEMBER", "body": "I really like the way you're using pipes here - really smart. 
It's similar to how I build the demo database in this GitHub Actions workflow:\r\n\r\nhttps://github.com/dogsheep/github-to-sqlite/blob/62dfd3bc4014b108200001ef4bc746feb6f33b45/.github/workflows/deploy-demo.yml#L52-L82\r\n\r\n`twitter-to-sqlite` actually has a mechanism for doing this kind of thing, documented at https://github.com/dogsheep/twitter-to-sqlite#providing-input-from-a-sql-query-with---sql-and---attach\r\n\r\nIt lets you do things like:\r\n\r\n```\r\n$ twitter-to-sqlite users-lookup my.db --sql=\"select follower_id from following\" --ids\r\n```\r\nMaybe I should add something similar to `github-to-sqlite`? Feels like it could be really useful.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797097140, "label": "Use Data from SQLite in other commands"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/72#issuecomment-1105474232", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/72", "id": 1105474232, "node_id": "IC_kwDODFdgUs5B5DK4", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-21T17:02:15Z", "updated_at": "2022-04-21T17:02:15Z", "author_association": "MEMBER", "body": "That's interesting - yeah it looks like the number of pages can be derived from the `Link` header, which is enough information to show a progress bar, probably using Click just to avoid adding another dependency.\r\n\r\nhttps://docs.github.com/en/rest/guides/traversing-with-pagination", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1211283427, "label": "feature: display progress bar when downloading multi-page responses"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-790198930", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4", "id": 790198930, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MDE5ODkzMA==", "user": {"value": 203343, "label": "Btibert3"}, "created_at": "2021-03-04T00:58:40Z", "updated_at": "2021-03-04T00:58:40Z", "author_association": "NONE", "body": "I am just seeing this sorry, yes! I will kick the tires later on tonight. 
My apologies for the delay.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 778380836, "label": "Feature Request: Gmail"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-786925280", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 786925280, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjkyNTI4MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-26T22:23:10Z", "updated_at": "2021-02-26T22:23:10Z", "author_association": "MEMBER", "body": "Thanks!\r\n\r\nI requested my Gmail export from takeout - once that arrives I'll test it against this and then merge the PR.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-790389335", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 790389335, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MDM4OTMzNQ==", "user": {"value": 306240, "label": "UtahDave"}, "created_at": "2021-03-04T07:32:04Z", "updated_at": "2021-03-04T07:32:04Z", "author_association": "NONE", "body": "> The command takes quite a while to start running, presumably because this line causes it to have to scan the WHOLE file in order to generate a count:\r\n> \r\n> https://github.com/dogsheep/google-takeout-to-sqlite/blob/a3de045eba0fae4b309da21aa3119102b0efc576/google_takeout_to_sqlite/utils.py#L66-L67\r\n> \r\n> I'm fine with waiting though. It's not like this is a command people run every day - and without that count we can't show a progress bar, which seems pretty important for a process that takes this long.\r\n\r\nThe wait is from python loading the mbox file. This happens regardless if you're getting the length of the mbox. The mbox module is on the slow side. 
It is possible to do one's own parsing of the mbox, but I kind of wanted to avoid doing that.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/pocket-to-sqlite/issues/10#issuecomment-1221623052", "issue_url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/10", "id": 1221623052, "node_id": "IC_kwDODLZ_YM5I0H0M", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-08-21T21:20:33Z", "updated_at": "2022-08-21T21:20:33Z", "author_association": "MEMBER", "body": "That was clearly the intention from the description of this issue:\r\n- #4", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1246826792, "label": "When running `auth` command, don't overwrite an existing auth.json file"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/20#issuecomment-544335363", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/20", "id": 544335363, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NDMzNTM2Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-10-21T03:32:04Z", "updated_at": "2019-10-21T03:32:04Z", "author_association": "MEMBER", "body": "In case anyone is interested, here's an extract from the crontab I'm running these under at the moment:\r\n```\r\n1,11,21,31,41,51 * * * * /home/ubuntu/datasette-venv/bin/twitter-to-sqlite user-timeline /home/ubuntu/twitter.db -a /home/ubuntu/auth.json --since\r\n2,7,12,17,22,27,32,37,42,47,52,57 * * * * /home/ubuntu/datasette-venv/bin/twitter-to-sqlite home-timeline /home/ubuntu/timeline.db -a /home/ubuntu/auth.json --since\r\n6,16,26,36,46,56 * * * * /home/ubuntu/datasette-venv/bin/twitter-to-sqlite favorites /home/ubuntu/twitter.db -a /home/ubuntu/auth.json --stop_after=50\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 506268945, "label": "--since support for various commands for refresh-by-cron"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-691501132", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50", "id": 691501132, "node_id": "MDEyOklzc3VlQ29tbWVudDY5MTUwMTEzMg==", "user": {"value": 706257, "label": "bcongdon"}, "created_at": "2020-09-12T14:48:10Z", "updated_at": "2020-09-12T14:48:10Z", "author_association": "NONE", "body": "This seems to be an issue even with larger values of `--stop_after`:\r\n\r\n```\r\n$ twitter-to-sqlite favorites twitter.db --stop_after=2000\r\nImporting favorites [####################################] 198\r\n$\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 698791218, "label": "favorites --stop_after=N stops after min(N, 200)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-716048564", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 716048564, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjA0ODU2NA==", "user": {"value": 9599, 
"label": "simonw"}, "created_at": "2020-10-24T20:08:31Z", "updated_at": "2020-10-24T20:08:31Z", "author_association": "OWNER", "body": "Documentation here: https://docs.datasette.io/en/latest/internals.html#datasette-urls", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1050#issuecomment-718342036", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1050", "id": 718342036, "node_id": "MDEyOklzc3VlQ29tbWVudDcxODM0MjAzNg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-10-29T03:49:57Z", "updated_at": "2020-10-29T03:49:57Z", "author_association": "OWNER", "body": "@thadk from that error it looks like the problem may have been that you had a BLOB column containing a `null` value? If so that's definitely a bug, I'll fix that.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 729057388, "label": "Switch to .blob render extension for BLOB downloads"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1399341761", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1399341761, "node_id": "IC_kwDOBm6k_c5TaELB", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-01-21T22:07:19Z", "updated_at": "2023-01-21T22:07:19Z", "author_association": "OWNER", "body": "Idea for supporting streaming with the `register_output_renderer` hook:\r\n\r\n```python\r\n@hookimpl\r\ndef register_output_renderer(datasette):\r\n return {\r\n \"extension\": \"test\",\r\n \"render\": render_demo,\r\n \"can_render\": can_render_demo,\r\n \"render_stream\": render_demo_stream, # This is new\r\n }\r\n```\r\nSo there's a new `\"render_stream\"` key which can be returned, which if present means that the output renderer supports streaming.\r\n\r\nI'll play around with the design of that function signature in:\r\n\r\n- #1999\r\n- #1062 ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/111#issuecomment-738904347", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/111", "id": 738904347, "node_id": "MDEyOklzc3VlQ29tbWVudDczODkwNDM0Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-12-04T17:16:56Z", "updated_at": "2020-12-04T17:16:56Z", "author_association": "OWNER", "body": "This is STILL a good idea.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274615452, "label": "Add \u201cupdated\u201d to metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1114#issuecomment-735443626", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1114", "id": 735443626, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTQ0MzYyNg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-29T19:40:49Z", "updated_at": 
"2020-11-29T19:40:49Z", "author_association": "OWNER", "body": "Fix is out in 0.52.1: https://docs.datasette.io/en/latest/changelog.html#v0-52-1", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 752966476, "label": "--load-extension=spatialite not working with datasetteproject/datasette docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1142#issuecomment-743998792", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1142", "id": 743998792, "node_id": "MDEyOklzc3VlQ29tbWVudDc0Mzk5ODc5Mg==", "user": {"value": 6622733, "label": "nitinpaultifr"}, "created_at": "2020-12-13T12:14:06Z", "updated_at": "2020-12-13T12:14:06Z", "author_association": "NONE", "body": "Agreed, it would definitely provide better controls. However, I do feel it makes for a bit of inconsistent UX for the 'Advanced export' section, with links to download for JSON, checkboxes and radio buttons + button to download for CSV. Do you think this example makes the UX a bit nicer/consistent?\r\n\r\n![Screenshot 2020-12-13 at 5 38 43 PM](https://user-images.githubusercontent.com/6622733/102011444-1dc1cd00-3d6a-11eb-9e38-5af198161e80.png)\r\n\r\nI could give it a try if you'd like but I've never contributed to an actual project!\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 763361458, "label": "\"Stream all rows\" is not at all obvious"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1142#issuecomment-744522099", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1142", "id": 744522099, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NDUyMjA5OQ==", "user": {"value": 6622733, "label": "nitinpaultifr"}, "created_at": "2020-12-14T15:37:47Z", "updated_at": "2020-12-14T15:37:47Z", "author_association": "NONE", "body": "Alright I could give it a try! This might be a stupid question, can you tell me how to run the server from my fork? 
So that I can test the changes?", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 763361458, "label": "\"Stream all rows\" is not at all obvious"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1142#issuecomment-744563209", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1142", "id": 744563209, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NDU2MzIwOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-12-14T16:41:11Z", "updated_at": "2020-12-14T16:41:11Z", "author_association": "OWNER", "body": "To check out and start the server:\r\n\r\n /tmp % git clone git@github.com:nitinpaul/datasette\r\n Cloning into 'datasette'...\r\n remote: Enumerating objects: 124, done.\r\n # ...\r\n datasette % python3 -m venv venv\r\n datasette % source venv/bin/activate\r\n (venv) datasette % pip install -e '.[test]'\r\n Obtaining file:///private/tmp/datasette\r\n Collecting asgiref<3.4.0,>=3.2.10\r\n Using cached asgiref-3.3.1-py3-none-any.whl (19 kB)\r\n # ...\r\n (venv) datasette % datasette\r\n INFO: Started server process [24002]\r\n INFO: Waiting for application startup.\r\n INFO: Application startup complete.\r\n INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)\r\n\r\nAnd to run the tests:\r\n\r\n (venv) datasette % pytest\r\n ======================================================================== test session starts ========================================================================\r\n platform darwin -- Python 3.9.1, pytest-6.1.2, py-1.10.0, pluggy-0.13.1\r\n SQLite: 3.34.0\r\n rootdir: /private/tmp/datasette, configfile: pytest.ini\r\n plugins: asyncio-0.14.0, timeout-1.4.2\r\n collected 841 items \r\n\r\n tests/test_package.py .. [ 0%]\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 763361458, "label": "\"Stream all rows\" is not at all obvious"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1144#issuecomment-744489028", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1144", "id": 744489028, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NDQ4OTAyOA==", "user": {"value": 475613, "label": "MarkusH"}, "created_at": "2020-12-14T14:47:11Z", "updated_at": "2020-12-14T14:47:11Z", "author_association": "NONE", "body": "Thanks for opening the issue, @simonw. Let me elaborate on my Tweets.\r\n\r\n[datasette-chartjs](https://github.com/MarkusH/datasette-chartjs) provides drop down lists to pick the chart visualization (e.g. bar, line, doughnut, pie, ...) as well as the column used for the \"x axis\" (e.g. time).\r\n\r\nA user can change the values on-demand. The chart will be redrawn w/o querying the database again.\r\n\r\nHowever, if a user wants to change the underlying query, they will use the SQL field provided by datasette or any of the other datasette built-in features to amend a query. In order to maintain a user's selections for the plugin, datasette-chartjs copies some parts of [datasette-vega](https://github.com/simonw/datasette-vega) which persist the chosen visualization and column in the hash part of a URL (the stuff behind the `#`). 
The plugin load the config from the hash upon initialization on the next page and use it accordingly.\r\n\r\nAdditionally, datasette-vega and datasette-chartjs need to make sure to include the hash in all links and forms that cause a reload of the page. This is, such that the config persists between clicks.\r\n\r\nThis ticket is about moving thes parts into datasette that provide the functionality to do so. This includes:\r\n\r\n1. a way to load config options with a given prefix from the current URL hash\r\n1. a way to update the current URL hash with a new config value or a bunch of config options\r\n1. updating all necessary links and forms on the current page to include the URL hash whenever its updated\r\n1. to prevent leaking config options to external pages, only \"internal\" links should be updated\r\n\r\nThere's another, optional, feature that we might want to think about during the design phase: the scope of the config. Links within a datasette instance have 1 of 3 scopes:\r\n\r\n1. global, for the whole datasette project\r\n1. database, for all tables in a database\r\n1. table, only for a table within a database\r\n\r\nWhen updating the links and forms as pointed out in 3. above, it might be worth considering which links need to be updated. I could imagine a plugin that wants to persist some setting across all tables within a database but another setting only within a table.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 765637324, "label": "JavaScript to help plugins interact with the fragment part of the URL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1148#issuecomment-747062909", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1148", "id": 747062909, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NzA2MjkwOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-12-16T21:51:54Z", "updated_at": "2020-12-16T21:51:54Z", "author_association": "OWNER", "body": "This is a really frustrating bug with Vercel: https://github.com/simonw/datasette-publish-vercel/issues/28\r\n\r\n`+` characters in URLs get translated into spaces before they get to Datasette. 
{"html_url": "https://github.com/simonw/datasette/issues/1148#issuecomment-747062909", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1148", "id": 747062909, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NzA2MjkwOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-12-16T21:51:54Z", "updated_at": "2020-12-16T21:51:54Z", "author_association": "OWNER", "body": "This is a really frustrating bug with Vercel: https://github.com/simonw/datasette-publish-vercel/issues/28\r\n\r\n`+` characters in URLs get translated into spaces before they get to Datasette. They know about the bug and said they were working on a fix a few months ago, but it looks like it's still a problem.\r\n\r\nA workaround is to avoid `+` and use `-` instead - I think this SQL query does the same thing as yours:\r\n\r\nhttps://aws-partners-singapore.vercel.app/partners?sql=select%0D%0A++A.launch_rank%2C%0D%0A++A.partner_info%0D%0Afrom%0D%0A++summary+A%0D%0A++INNER+JOIN+summary+B+ON+A.launch_rank+%3E%3D+B.launch_rank+-+3%0D%0A++AND+A.launch_rank+-4+%3C%3D+B.launch_rank%0D%0AWHERE%0D%0A++B.%22partner_info%22+LIKE+%27%25Palo+Alto%25%27\r\n\r\n```sql\r\nselect\r\n  A.launch_rank,\r\n  A.partner_info\r\nfrom\r\n  summary A\r\n  INNER JOIN summary B ON A.launch_rank >= B.launch_rank - 3\r\n  AND A.launch_rank -4 <= B.launch_rank\r\nWHERE\r\n  B.\"partner_info\" LIKE '%Palo Alto%'\r\n```\r\nI've been moving projects from Vercel to Cloud Run when they run into this, but that's not a great situation to be in.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 767561886, "label": "Syntax error with + symbol when deployed to Vercel"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1149#issuecomment-747207787", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1149", "id": 747207787, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NzIwNzc4Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-12-17T05:06:16Z", "updated_at": "2020-12-17T05:06:16Z", "author_association": "OWNER", "body": "So, an idea: what if Datasette's default CSS applied only to elements with classes - or maybe to children of a `body class=\"datasette\"` element? In such a way that you could write your own custom HTML that reused elements of Datasette's CSS - the cog menu styling for example - but only on an opt-in basis?", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 769520939, "label": "Make it easier to theme Datasette with CSS"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1153#issuecomment-805109341", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1153", "id": 805109341, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNTEwOTM0MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-03-23T17:55:48Z", "updated_at": "2021-03-23T18:41:57Z", "author_association": "OWNER", "body": "Beginnings of a UI element for switching between them:\r\n```html\r\n<div>\r\n<a href=\"#\">JSON</a>\r\n<a href=\"#\">YAML</a>\r\n</div>\r\n```\r\n\"Metadata_\u2014_Datasette_documentation\"\r\n\r\nThat `<div>` has a padding of 12px, so using 12px padding on the tab links should get them to line up better.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771202454, "label": "Use YAML examples in documentation by default, not JSON"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1169#issuecomment-753653260", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1169", "id": 753653260, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzY1MzI2MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-01-03T17:54:40Z", "updated_at": "2021-01-03T17:54:40Z", "author_association": "OWNER", "body": "And @benpickles yes I would land that pull request straight away as-is. Thanks!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777677671, "label": "Prettier package not actually being cached"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/120#issuecomment-439421164", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/120", "id": 439421164, "node_id": "MDEyOklzc3VlQ29tbWVudDQzOTQyMTE2NA==", "user": {"value": 36796532, "label": "ad-si"}, "created_at": "2018-11-16T15:05:18Z", "updated_at": "2018-11-16T15:05:18Z", "author_association": "NONE", "body": "This would be an awesome feature \u2764\ufe0f ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275087397, "label": "Plugin that adds an authentication layer of some sort"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1200#issuecomment-777178728", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1200", "id": 777178728, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NzE3ODcyOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-11T03:13:59Z", "updated_at": "2021-02-11T03:13:59Z", "author_association": "OWNER", "body": "I came up with the need for this while playing with this tool: https://calands.datasettes.com/calands?sql=select%0D%0A++AsGeoJSON(geometry)%2C+*%0D%0Afrom%0D%0A++CPAD_2020a_SuperUnits%0D%0Awhere%0D%0A++PARK_NAME+like+'%25mini%25'+and%0D%0A++Intersects(GeomFromGeoJSON(%3Afreedraw)%2C+geometry)+%3D+1%0D%0A++and+CPAD_2020a_SuperUnits.rowid+in+(%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++SpatialIndex%0D%0A++++where%0D%0A++++++f_table_name+%3D+'CPAD_2020a_SuperUnits'%0D%0A++++++and+search_frame+%3D+GeomFromGeoJSON(%3Afreedraw)%0D%0A++)&freedraw={\"type\"%3A\"MultiPolygon\"%2C\"coordinates\"%3A[[[[-122.42202758789064%2C37.82280243352759]%2C[-122.39868164062501%2C37.823887203271454]%2C[-122.38220214843751%2C37.81846319511331]%2C[-122.35061645507814%2C37.77071473849611]%2C[-122.34924316406251%2C37.74465712069939]%2C[-122.37258911132814%2C37.703380457832374]%2C[-122.39044189453125%2C37.690340943717715]%2C[-122.41241455078126%2C37.680559803205135]%2C[-122.44262695312501%2C37.67295135774715]%2C[-122.47283935546876%2C37.67295135774715]%2C[-122.52502441406251%2C37.68382032669382]%2C[-122.53463745117189%2C37.6892542140253]%2C[-122.54699707031251%2C37.690340943717715]%2C[-122.55798339843751%2C37.72945260537781]%2C[-122.54287719726564%2C37.77831314799672]%2C[-122.49893188476564%2C37.81303878836991]%2C[-122.46185302734376%2C37.82822612280363]%2C[-122.42889404296876%2C37.82822612280363]%2C[-122.42202758789064%2C37.82280243352759]]]]} - before I fixed https://github.com/simonw/datasette-leaflet-geojson/issues/16 it was loading a LOT of maps, which felt bad. I wanted to be able to link people to that page with a hard limit on the number of rows displayed on that page.\r\n\r\nIt's mainly to guard against unexpected behaviour from limit-less queries though. It's not a very high priority feature!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 792890765, "label": "?_size=10 option for the arbitrary query page would be useful"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1209#issuecomment-769455370", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1209", "id": 769455370, "node_id": "MDEyOklzc3VlQ29tbWVudDc2OTQ1NTM3MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-01-28T23:00:21Z", "updated_at": "2021-01-28T23:00:21Z", "author_association": "OWNER", "body": "Good catch on the workaround here. The root problem is that `datasette-template-sql` looks for the first available databsae if you don't provide it with a `database=` argument, and in Datasette 0.54 the first available database changed to being the new `_internal` database.\r\n\r\nIs this a bug? I think it is - because the documented behaviour on https://docs.datasette.io/en/stable/internals.html#get-database-name is this:\r\n\r\n> `name` - string, optional\r\n>\r\n> The name to be used for this database - this will be used in the URL path, e.g. `/dbname`. If not specified Datasette will pick one based on the filename or memory name.\r\n\r\nSince the new behaviour differs from what was in the documentation I'm going to treat this as a bug and fix it.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 795367402, "label": "v0.54 500 error from sql query in custom template; code worked in v0.53; found a workaround"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1217#issuecomment-774385092", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1217", "id": 774385092, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NDM4NTA5Mg==", "user": {"value": 6165713, "label": "plpxsk"}, "created_at": "2021-02-06T02:49:11Z", "updated_at": "2021-02-06T02:49:11Z", "author_association": "NONE", "body": "A good reference seems to be the note to run `datasette` as a module in https://github.com/simonw/datasette/pull/556\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 802513359, "label": "Possible to deploy as a python app (for Rstudio connect server)?"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-778467759", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 778467759, "node_id": "MDEyOklzc3VlQ29tbWVudDc3ODQ2Nzc1OQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2021-02-12T21:35:17Z", "updated_at": "2021-02-12T21:35:17Z", "author_association": "NONE", "body": "Thank you", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1238#issuecomment-790857004", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1238", "id": 790857004, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MDg1NzAwNA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2021-03-04T19:06:55Z", "updated_at": "2021-03-04T19:06:55Z", "author_association": "NONE", "body": "@rgieseke Ah, that's super helpful. Thank you for the workaround for now!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813899472, "label": "Custom pages don't work with base_url setting"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1241#issuecomment-784567547", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1241", "id": 784567547, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDU2NzU0Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-23T22:45:56Z", "updated_at": "2021-02-23T22:46:12Z", "author_association": "OWNER", "body": "I really like the way the Share feature on Stack Overflow works: https://stackoverflow.com/questions/18934149/how-can-i-use-postgresqls-text-column-type-in-django\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 814595021, "label": "Share button for copying current URL"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1255#issuecomment-812710120", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1255", "id": 812710120, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMjcxMDEyMA==", "user": {"value": 1111743, "label": "jungle-boogie"}, "created_at": "2021-04-02T20:50:08Z", "updated_at": "2021-04-02T20:50:08Z", "author_association": "NONE", "body": "Hello again,\r\n\r\nI was able to get my facets running with this `settings.json`, which was lifted from one of Simon's datasette's and slightly modified.\r\n\r\n```\r\n{\r\n    \"default_page_size\": 100,\r\n    \"max_returned_rows\": 1000,\r\n    \"num_sql_threads\": 3,\r\n    \"sql_time_limit_ms\": 9000,\r\n    \"default_facet_size\": 10,\r\n    \"facet_time_limit_ms\": 9000,\r\n    \"facet_suggest_time_limit_ms\": 500,\r\n    \"hash_urls\": false,\r\n    \"allow_facet\": true,\r\n    \"suggest_facets\": false,\r\n    \"default_cache_ttl\": 5,\r\n    \"default_cache_ttl_hashed\": 31536000,\r\n    \"cache_size_kb\": 0,\r\n    \"allow_csv_stream\": true,\r\n    \"max_csv_mb\": 100,\r\n    \"truncate_cells_html\": 2048,\r\n    \"template_debug\": false,\r\n    \"base_url\": \"/\"\r\n}\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 826700095, "label": "Facets timing out but work when filtering"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1258#issuecomment-1437671409", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1258", "id": 1437671409, "node_id": "IC_kwDOBm6k_c5VsR_x", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2023-02-20T23:39:58Z", "updated_at": "2023-02-20T23:39:58Z", "author_association": "CONTRIBUTOR", "body": "This is pretty annoying for FTS because sqlite throws an error instead of just doing something like returning all or no results. This makes users who are unfamiliar with SQL and Datasette think the canned query page is broken and is a frequent source of confusion.\r\n\r\nTo anyone dealing with this: My solution is to modify the canned query so that it returns no results which cues people to fill in the blank parameters.\r\n\r\nSo instead of `emails_fts match escape_fts(:search))`\r\n\r\nMy canned queries now look like this:\r\n\r\n`emails_fts match escape_fts(iif(:search==\"\", \"*\", :search))`\r\n\r\nThere are no asterisks in my data so the result is always blank.\r\n\r\nUltimately it would be nice to be able to handle this in the metadata. Either making some named parameters required or setting some default values.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 828858421, "label": "Allow canned query params to specify default values"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1274#issuecomment-805214307", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1274", "id": 805214307, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNTIxNDMwNw==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-03-23T20:12:29Z", "updated_at": "2021-03-23T20:12:29Z", "author_association": "CONTRIBUTOR", "body": "One issue I could see with adding first class support for metadata in hjson format is that this would require adding an additional dependency to handle this, for a feature that would be unused by many users. I wonder if this could fit in as a plugin instead; if a hook existed for loading metadata (maybe as part of https://github.com/simonw/datasette/issues/860) the metadata could then come from any source, as specified by plugins, e.g. hjson, toml, XML, a database table etc.\r\n\r\nUntil/unless this exists, a few ideas for how you could add comments:\r\n- Using YAML as you suggest.\r\n- A common pattern is adding a `\"comment\"` key for comments to any object in JSON - I don't think including an unnecessary key like this would break anything in Datasette, but not certain.\r\n- You could use another tool as a preprocessor for your JSON metadata - e.g. hjson or Jsonnet. You'd write the metadata in that format, and then convert that into JSON to actually use as your final metadata.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 839008371, "label": "Might there be some way to comment metadata.json?"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-810740486", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 810740486, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMDc0MDQ4Ng==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-03-31T03:57:55Z", "updated_at": "2021-03-31T03:57:55Z", "author_association": "OWNER", "body": "You're right, doing this is really hard at the moment - I'm not sure I know how I would tackle this either, and it's something I've wanted in the past!\r\n\r\nI'll have a think about this one.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-949604763", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 949604763, "node_id": "IC_kwDOBm6k_c44mdGb", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-10-22T12:54:34Z", "updated_at": "2021-10-22T12:54:34Z", "author_association": "CONTRIBUTOR", "body": "i'm going to take a swing at this today. we'll see.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1286#issuecomment-812664443", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1286", "id": 812664443, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMjY2NDQ0Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-04-02T18:52:45Z", "updated_at": "2021-04-02T18:52:51Z", "author_association": "OWNER", "body": "Idea: default to displaying single-dimension JSON arrays of strings as a comma-separated list but show the comma in a different colour - something like this:\r\n\r\n\"fixtures__facetable__15_rows\"\r\n\r\nI used this HTML for the prototype (re-using `.type-int` just to get the colour):\r\n```html\r\ntag1, tag2\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 849220154, "label": "Better default display of arrays of items"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1286#issuecomment-815978405", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1286", "id": 815978405, "node_id": "MDEyOklzc3VlQ29tbWVudDgxNTk3ODQwNQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-08T16:47:29Z", "updated_at": "2021-04-10T03:59:00Z", "author_association": "CONTRIBUTOR", "body": "This worked for me:                      \r\n`{{ cell.value | replace('\", \"','; ') | replace('[\\\"','') | replace('\\\"]','')}}`\r\n\r\nI'm sure there is a prettier (and more flexible) way, but for now, this is ever-so-much more pleasant to look at. \r\n\r\n------ AFTER:\r\n\"Screen\r\n\r\n------ BEFORE:\r\n\"Screen\r\n\r\n\r\n\r\n(Note: I didn't figure out how to have one item have no semicolon, while multi-items close with a semicolon, but this is good enough for now. I also didn't figure out how to set up a new jinja filter. I don't want to add to /datasette/utils/__init__.py as I assume that would get overwritten when upgrading datasette. Having a starter guide on creating jinja filters in datasette would be helpful. (The jinja documentation isn't datasette-specific enough for me to quite nail it.)\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 849220154, "label": "Better default display of arrays of items"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1304#issuecomment-981980048", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1304", "id": 981980048, "node_id": "IC_kwDOBm6k_c46h9OQ", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-11-29T20:13:53Z", "updated_at": "2021-11-29T20:14:11Z", "author_association": "NONE", "body": "There isn't any way to do this with sqlite as far as I know.  The only option is to insert the right number of ? placeholders into the sql template and then provide an array of values.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 863884805, "label": "Document how to send multiple values for \"Named parameters\" "}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1304#issuecomment-988463455", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1304", "id": 988463455, "node_id": "IC_kwDOBm6k_c466sFf", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-12-08T03:23:14Z", "updated_at": "2021-12-08T03:23:14Z", "author_association": "NONE", "body": "I actually think it would be a useful thing to add support for in datasette. It wouldn't be difficult to unwind an array of params and add the placeholders automatically.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 863884805, "label": "Document how to send multiple values for \"Named parameters\" "}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1375#issuecomment-860230385", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1375", "id": 860230385, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MDIzMDM4NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-06-13T15:37:49Z", "updated_at": "2021-06-13T15:37:49Z", "author_association": "OWNER", "body": "There is a feature for this at the moment, but it's a little bit hidden: you can use `?_json=col` to tell\r\nDatasette that you would like a specific column to be exported as nested JSON: https://docs.datasette.io/en/stable/json_api.html#special-json-arguments\r\n\r\nI considered trying to make this automatic - so it detects columns that appear to contain valid JSON and outputs them as nested objects - but the problem with that is that it can lead to inconsistent results - you might hit the API and find that not every column contains valid JSON (compared to the previous day) resulting in the API retuning  string instead of the expected dictionary and breaking your code.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 919508498, "label": "JSON export dumps JSON fields as TEXT"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1375#issuecomment-860548546", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1375", "id": 860548546, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MDU0ODU0Ng==", "user": {"value": 4068, "label": "frafra"}, "created_at": "2021-06-14T09:41:59Z", "updated_at": "2021-06-14T09:41:59Z", "author_association": "NONE", "body": "> There is a feature for this at the moment, but it's a little bit hidden: you can use `?_json=col` to tell\r\n> Datasette that you would like a specific column to be exported as nested JSON: https://docs.datasette.io/en/stable/json_api.html#special-json-arguments\r\n\r\nThanks :)\r\n \r\n> I considered trying to make this automatic - so it detects columns that appear to contain valid JSON and outputs them as nested objects - but the problem with that is that it can lead to inconsistent results - you might hit the API and find that not every column contains valid JSON (compared to the previous day) resulting in the API retuning string instead of the expected dictionary and breaking your code.\r\n\r\nIf a developer is not sure if the JSON fields are valid, but then retrieves and parse them, it should handle errors too. Handling inconsistent data is necessary due to the nature of SQLite. A global or dataset option to render the data as they have been defined (JSON, boolean, etc.) when requesting JSON could allow the user to download a regular JSON from the browser without having to rely on APIs. I would guess someone could just make a custom template with an extra JSON-parsed download button otherwise :)", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 919508498, "label": "JSON export dumps JSON fields as TEXT"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066222323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066222323, "node_id": "IC_kwDOBm6k_c4_jULz", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-14T00:36:42Z", "updated_at": "2022-03-14T00:36:42Z", "author_association": "CONTRIBUTOR", "body": "> Ah, sorry, I didn't get what you were saying you the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :)\r\n\r\nAll good. Report back any issues you find with this stuff. Metadata/dynamic config hasn't been tested widely outside of what I've done AFAIK. If you find a strong use case for async meta, it's going to be better to know sooner rather than later!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1396#issuecomment-880326049", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1396", "id": 880326049, "node_id": "MDEyOklzc3VlQ29tbWVudDg4MDMyNjA0OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-07-15T01:50:05Z", "updated_at": "2021-07-15T01:50:05Z", "author_association": "OWNER", "body": "I think I made a mistake in this commit: https://github.com/simonw/datasette/commit/0486303b60ce2784fd2e2ecdbecf304b7d6e6659\r\n\r\n\"Explicitly_push_version_tag__refs__1281_\u00b7_simonw_datasette_0486303\"\r\n\r\nIt looks like I copied `$VERSION_TAG` from here - but it's not available in the `publish.yml` flow: https://github.com/simonw/datasette/blob/0486303b60ce2784fd2e2ecdbecf304b7d6e6659/.github/workflows/push_docker_tag.yml#L18-L25", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 944903881, "label": "\"invalid reference format\" publishing Docker image"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1522#issuecomment-976117989", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1522", "id": 976117989, "node_id": "IC_kwDOBm6k_c46LmDl", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-11-23T03:00:34Z", "updated_at": "2021-11-23T03:00:34Z", "author_association": "CONTRIBUTOR", "body": "I tried deploying the most recent version of the Dockerfile in this thread ([link to comment](https://github.com/simonw/datasette/issues/1522#issuecomment-974605128)), and after trying a few different different combinations, I was only successful when I used `--no-cpu-throttling` (\"CPU Is always allocated\" in the UI)\r\n\r\nUsing this method, I got a very similar issue to you: The first time I'd load the site I'd get a 503. But after that first load, I didn't get the issue again. It would re-occur if the service started from cold boot. \r\n\r\nI suspect this is a race condition in the supervisord configuration. The errors I got were the same `Connection refused: AH00957: http: attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed`, and that seems to indicate that `datasette` hadn't yet started. \r\n\r\nLooking at the order of logs getting back, the processes reported successfully completing loading after the first 503 was returned, so that makes me think race condition. \r\n\r\nI can replicate this locally, if I `docker run` and request `localhost:5000/prefix` _before_ I get the `datasette entered RUNNING state` message. Cloud Run wakes up when requests are received, so this test would semi-replicate that, but local docker would be the equivalent of a persistent process, hence it doesn't normally exhibit the same issues.\r\n\r\nUnfortunately supervisor/supervisor issue 122 (not linking as to prevent cross-project link spam) seems to say that dependency chaining is a feature that's been asked for for a long time, but hasn't been implemented. You could try some suggestions in that thread. ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058896236, "label": "Deploy a live instance of demos/apache-proxy"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1549#issuecomment-991754794", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1549", "id": 991754794, "node_id": "IC_kwDOBm6k_c47HPoq", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-12-11T19:16:33Z", "updated_at": "2021-12-11T19:16:33Z", "author_association": "OWNER", "body": "Good call! I'm doing a refactor #1518 right now which will hopefully bring the functionality of those two much closer - I'll make a note to consider this there too.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1077620955, "label": "Redesign CSV export to improve usability"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1552#issuecomment-995034143", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1552", "id": 995034143, "node_id": "IC_kwDOBm6k_c47TwQf", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-12-15T18:02:53Z", "updated_at": "2021-12-15T18:02:53Z", "author_association": "OWNER", "body": "This is definitely a missing feature. The \"different types of facet\" stuff feels incomplete to me generally - this is one issue, but this one as well:\r\n\r\n- #625", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1078702875, "label": "Allow to set `facets_array` in metadata (like current `facets`)"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1608#issuecomment-1017998993", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1608", "id": 1017998993, "node_id": "IC_kwDOBm6k_c48rW6R", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-01-20T22:56:00Z", "updated_at": "2022-01-20T22:56:00Z", "author_association": "OWNER", "body": "> https://sphinx-version-warning.readthedocs.io/ looks like it can show a banner for \"You are looking at v0.36 but you should be looking at 0.40\" but doesn't hand the case I need here which is \"you are looking at /latest/ but you should be looking at /stable/\".\r\n\r\nCorrection! That tool DOES support that, as can be seen in their example configuration for their own documentation:\r\n\r\nhttps://github.com/humitos/sphinx-version-warning/blob/a82156c2ea08e5feab406514d0ccd9d48a345f48/docs/conf.py#L32-L38\r\n\r\n```python\r\nversionwarning_messages = {\r\n    'latest': 'This is a custom message only for version \"latest\" of this documentation.',\r\n}\r\nversionwarning_admonition_type = 'tip'\r\nversionwarning_banner_title = 'Tip'\r\nversionwarning_body_selector = 'div[itemprop=\"articleBody\"]'\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1109808154, "label": "Documentation should clarify /stable/ vs /latest/"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1613#issuecomment-1021860694", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1613", "id": 1021860694, "node_id": "IC_kwDOBm6k_c486FtW", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-01-26T04:57:53Z", "updated_at": "2022-01-26T04:57:53Z", "author_association": "OWNER", "body": "The existing flow where you can apply filters to a table and then click \"View and edit SQL\" to see the query is a good starting point.\r\n\r\nGroup by queries are both crucially important and difficult to assemble for beginners. Providing a way to see the query that was used by a facet (since facets are really just group-by-counts) would be very useful, which could come out of this:\r\n\r\n- #1080", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1114628238, "label": "Improvements to help make Datasette a better tool for learning SQL"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1688#issuecomment-1079582485", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1688", "id": 1079582485, "node_id": "IC_kwDOBm6k_c5AWR8V", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-03-26T03:15:34Z", "updated_at": "2022-03-26T03:15:34Z", "author_association": "OWNER", "body": "Yup, you're right in what you figured out here: stand-alone plugins can't currently package static assets other then using the static folder.\r\n\r\nThe `datasette-plugin` cookiecutter template should make creating a Python package pretty easy though: https://github.com/simonw/datasette-plugin\r\n\r\nYou can run that yourself, or you can run it using this GitHub template repository: https://github.com/simonw/datasette-plugin-template-repository \r\n\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1181432624, "label": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1692#issuecomment-1082663746", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1692", "id": 1082663746, "node_id": "IC_kwDOBm6k_c5AiCNC", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-03-30T06:14:39Z", "updated_at": "2022-03-30T06:14:51Z", "author_association": "OWNER", "body": "I like your design, though I think it should be `\"nomodule\": True` for consistency with the other options.\r\n\r\nI think `\"async\": True` is worth supporting too.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1182227211, "label": "[plugins][feature request]: Support additional script tag attributes when loading custom JS"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109174715", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109174715, "node_id": "IC_kwDOBm6k_c5CHKm7", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:40:13Z", "updated_at": "2022-04-26T00:43:33Z", "author_association": "OWNER", "body": "Some of the things I'd like to use `?_extra=` for, that may or not make sense as plugins:\r\n\r\n- Performance breakdown information, maybe including explain output for a query/table\r\n- Information about the tables that were consulted in a query - imagine pulling in additional table metadata\r\n- Statistical aggregates against the full set of results. This may well be a Datasette core feature at some point in the future, but being able to provide it early as a plugin would be really cool.\r\n- For tables, what are the other tables they can join against?\r\n- Suggested facets\r\n- Facet results themselves\r\n- New custom facets I haven't thought of - though the `register_facet_classes` hook covers that already\r\n- Table schema\r\n- Table metadata\r\n- Analytics - how many times has this table been queried? Would be a plugin thing\r\n- For geospatial data, how about a GeoJSON polygon that represents the bounding box for all returned results? Effectively this is an extra aggregation.\r\n\r\nLooking at https://github-to-sqlite.dogsheep.net/github/commits.json?_labels=on&_shape=objects for inspiration.\r\n\r\nI think there's a separate potential mechanism in the future that lets you add custom columns to a table. This would affect `.csv` and the HTML presentation too, which makes it a different concept from the `?_extra=` hook that affects the JSON export (and the context that is fed to the HTML templates).", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1744#issuecomment-1129251699", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1744", "id": 1129251699, "node_id": "IC_kwDOBm6k_c5DTwNz", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-05-17T19:44:47Z", "updated_at": "2022-05-17T19:46:38Z", "author_association": "OWNER", "body": "Updated docs: https://docs.datasette.io/en/latest/getting_started.html#using-datasette-on-your-own-computer and https://docs.datasette.io/en/latest/cli-reference.html#datasette-serve-help", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1239008850, "label": "`--nolock` feature for opening locked databases"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/175#issuecomment-353424169", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/175", "id": 353424169, "node_id": "MDEyOklzc3VlQ29tbWVudDM1MzQyNDE2OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-12-21T18:33:55Z", "updated_at": "2017-12-21T18:33:55Z", "author_association": "OWNER", "body": "Done - thanks for curating these: https://github.com/topics/automatic-api", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 282971961, "label": "Add project topic \"automatic-api\""}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-431867885", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 431867885, "node_id": "MDEyOklzc3VlQ29tbWVudDQzMTg2Nzg4NQ==", "user": {"value": 634572, "label": "eads"}, "created_at": "2018-10-22T15:24:57Z", "updated_at": "2018-10-22T15:24:57Z", "author_association": "NONE", "body": "I'd like this as well. It would let me access Datasette-driven projects from GatsbyJS the same way I can access Postgres DBs via Hasura. While I don't see SQLite replacing Postgres for the 50m row datasets I sometimes have to work with, there's a whole class of smaller datasets that are great with Datasette but currently would find another option.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-617208503", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 617208503, "node_id": "MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw==", "user": {"value": 12976, "label": "nkirsch"}, "created_at": "2020-04-21T14:16:24Z", "updated_at": "2020-04-21T14:16:24Z", "author_association": "NONE", "body": "@eads I'm interested in helping, if there's still a need...", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1775#issuecomment-1233680261", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1775", "id": 1233680261, "node_id": "IC_kwDOBm6k_c5JiHeF", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-09-01T03:05:57Z", "updated_at": "2022-09-01T03:05:57Z", "author_association": "OWNER", "body": "OK, I'm convinced that it's time to start figuring this out.\r\n\r\nI've done a little bit of this with Django in the past, but Datasette isn't built on Django.\r\n\r\nIt looks to me like the key library for implementing this is Babel: https://babel.pocoo.org/en/latest/\r\n\r\nIt's been around since 2007 and is very widely used: https://github.com/python-babel/babel/network/dependents?package_id=UGFja2FnZS01MDM0NTU3NQ%3D%3D\r\n\r\nAlso found these hints on getting it to work with Jinja: https://stackoverflow.com/questions/12046998/babel-doesnt-recognize-jinja2-extraction-method-for-language-support", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1323346408, "label": "i18n support"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1779#issuecomment-1214416491", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1779", "id": 1214416491, "node_id": "IC_kwDOBm6k_c5IYoZr", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-08-14T17:07:34Z", "updated_at": "2022-08-14T17:07:34Z", "author_association": "OWNER", "body": "Tested that with:\r\n\r\n    datasette publish cloudrun fixtures.db --service issue-1779 --min-instances 2 --max-instances 4\r\n\r\n\"image\"\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1334628400, "label": "google cloudrun updated their limits on maxscale based on memory and cpu count"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1814#issuecomment-1251677554", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1814", "id": 1251677554, "node_id": "IC_kwDOBm6k_c5KmxVy", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-09-19T23:35:06Z", "updated_at": "2022-09-19T23:35:06Z", "author_association": "OWNER", "body": "It might have been useful for Datasette to show an error when started against a `settings.json` file that contains an invalid setting though.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1378495690, "label": "Static files not served"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1860#issuecomment-1292659986", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1860", "id": 1292659986, "node_id": "IC_kwDOBm6k_c5NDG0S", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-10-26T21:14:26Z", "updated_at": "2022-10-26T21:15:22Z", "author_association": "OWNER", "body": "Yeah we should fix this.\r\n\r\nhttps://www.sqlite.org/lang_comment.html - SQLite also supports `-- style` comments.\r\n\r\nI like how explicit the documentation is here:\r\n\r\n> SQL comments begin with two consecutive \"-\" characters (ASCII 0x2d) and extend up to and including the next newline character (ASCII 0x0a) or until the end of input, whichever comes first.\r\n> \r\n> C-style comments begin with \"/*\" and extend up to and including the next \"*/\" character pair or until the end of input, whichever comes first. C-style comments can span multiple lines. ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1424378012, "label": "SQL query field can't begin by a comment"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1860#issuecomment-1293928738", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1860", "id": 1293928738, "node_id": "IC_kwDOBm6k_c5NH8ki", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-10-27T18:46:31Z", "updated_at": "2022-10-27T18:46:31Z", "author_association": "OWNER", "body": "I think mine has a better pattern for handling `/* ... anything in here that isn't */ ... */`", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1424378012, "label": "SQL query field can't begin by a comment"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/187#issuecomment-467264937", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/187", "id": 467264937, "node_id": "MDEyOklzc3VlQ29tbWVudDQ2NzI2NDkzNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-02-26T02:14:28Z", "updated_at": "2019-02-26T02:14:28Z", "author_association": "OWNER", "body": "I'm working on a port of Datasette to Starlette which I think would fix this issue: https://github.com/encode/starlette", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309033998, "label": "Windows installation error"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1871#issuecomment-1312821031", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1871", "id": 1312821031, "node_id": "IC_kwDOBm6k_c5OQA8n", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-11-13T21:02:06Z", "updated_at": "2022-11-13T21:03:11Z", "author_association": "OWNER", "body": "Actually no, I'm going to add a class of `details-menu` to the other details elements that SHOULD be closed. That way custom templates using `
` won't close in a surprising way.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1427293909, "label": "API explorer tool"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1879#issuecomment-1299102108", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1879", "id": 1299102108, "node_id": "IC_kwDOBm6k_c5Nbrmc", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-11-01T20:30:54Z", "updated_at": "2022-11-01T20:33:06Z", "author_association": "OWNER", "body": "One idea: add a `/-/debug` page (or `/-/tips` or `/-/checks`) which shows the incoming requests headers and could even detect if there's an `x-forwarded-host` header that isn't being repeated and show a tip on how to fix that.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1432037325, "label": "Make it easier to fix URL proxy problems"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1880#issuecomment-1311271298", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1880", "id": 1311271298, "node_id": "IC_kwDOBm6k_c5OKGmC", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-11-11T06:12:29Z", "updated_at": "2022-11-11T06:12:29Z", "author_association": "OWNER", "body": "I think you may have misunderstood this feature. This is talking about the `_internal` in-memory database, which maintains a set of tables that list the databases and tables that are attached to Datasette.\r\n\r\nThey're not a copy of the data itself - just a list of table names, column names and database names.\r\n\r\nYou can see what that database looks like by signing in as root - running `datasette --root` and clicking the link. Or you can see an example here:\r\n\r\n- Click the button on https://latest.datasette.io/login-as-root\r\n- Now visit https://latest.datasette.io/_internal\r\n\r\nFor the example instance that looks like this:\r\n\r\n\"image\"\r\n\r\nThe two most interesting tables in there are these ones:\r\n\r\n\"image\"\r\n\r\n\"CleanShot\r\n\r\nAs you can see, it's just the table schema itself and the columns that make up the tables. Even if you have hundreds of databases connected each with hundreds of tables this should still only add up to a few MB of RAM.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1433576351, "label": "Datasette with many and large databases > Memory use"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1880#issuecomment-1311273063", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1880", "id": 1311273063, "node_id": "IC_kwDOBm6k_c5OKHBn", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-11-11T06:15:28Z", "updated_at": "2022-11-11T06:15:28Z", "author_association": "OWNER", "body": "The `_internal` database is intended to help Datasette handle much larger attached databases. Right now Datasette attempts to show every database on the https://latest.datasette.io/ index page and every table on the https://latest.datasette.io/fixtures database index page - but these are not paginated. 
If you had a database containing 1,000 tables the database index page would get pretty slow.\r\n\r\nSo I want to be able to paginate (and search) those. But to paginate them it's useful to have them in a database table itself, since then I can paginate using SQL.\r\n\r\nMy plan for `_internal` is to use it to implement those advanced browsing features. I've not completed this work yet though. See this issue for more details on that:\r\n\r\n- #417", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1433576351, "label": "Datasette with many and large databases > Memory use"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1886#issuecomment-1356842576", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1886", "id": 1356842576, "node_id": "IC_kwDOBm6k_c5Q38ZQ", "user": {"value": 18738650, "label": "stevecrawshaw"}, "created_at": "2022-12-18T17:34:20Z", "updated_at": "2022-12-18T17:34:20Z", "author_association": "NONE", "body": "A bit late to this, but I have made an app to publish air quality data in Bristol, UK. \r\n[air quality data in Bristol, UK.](https://brisaq-wfzqhmj43q-ew.a.run.app/)\r\nNext step to see if I can make a streamlit app based on this to produce some nice charts.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1447050738, "label": "Call for birthday presents: if you're using Datasette, let us know how you're using it here"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1900#issuecomment-1319574972", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1900", "id": 1319574972, "node_id": "IC_kwDOBm6k_c5Opx28", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-11-18T05:41:28Z", "updated_at": "2022-11-18T05:41:28Z", "author_association": "OWNER", "body": "Oh this is with `datasette package`? That should work. 
Will investigate.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1452572348, "label": "datasette package --spatialite throws error during build"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1958#issuecomment-1352644267", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1958", "id": 1352644267, "node_id": "IC_kwDOBm6k_c5Qn7ar", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-12-13T18:33:32Z", "updated_at": "2022-12-13T18:33:32Z", "author_association": "OWNER", "body": "When you run `--root` you need to follow the special link that gets output to the console:\r\n\r\n```\r\n% datasette --root\r\nhttp://127.0.0.1:8001/-/auth-token?token=036d8055cc8000e9667f21c1dd08722a9358c066463873ad9566d23d88765c52\r\nINFO: Started server process [53934]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\n```\r\nThat `/-/auth-token?...` link is the one that sets the cookie and lets you in.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1497909798, "label": "datasette --root running in Docker doesn't reliably show the magic URL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2001#issuecomment-1403084856", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2001", "id": 1403084856, "node_id": "IC_kwDOBm6k_c5ToWA4", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-25T04:31:02Z", "updated_at": "2023-01-25T04:31:02Z", "author_association": "CONTRIBUTOR", "body": "Aha, it's user error on my part.\r\n\r\nAdding\r\n\r\n```\r\nsqlite3_db_config.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_int, ctypes.c_int]\r\n```\r\n\r\nmakes it work reliably both on the CLI and from datasette, and now I can reproduce the errors you mentioned in the issue description.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1553615704, "label": "Datasette is not compatible with SQLite's strict quoting compilation option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2023#issuecomment-1425974877", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2023", "id": 1425974877, "node_id": "IC_kwDOBm6k_c5U_qZd", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-02-10T15:32:41Z", "updated_at": "2023-02-10T15:32:41Z", "author_association": "CONTRIBUTOR", "body": "I think this feature was removed in Datasette 0.61 and moved to a plugin. 
People who want hashed URLs can use the [datasette-hashed-urls](https://docs.datasette.io/en/stable/performance.html#performance-hashed-urls) plugin to achieve the same affect.\r\n\r\nIt looks like you're trying to disable hashed urls, so I think you can just remove that config setting and things will work.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1579695809, "label": "Error: Invalid setting 'hash_urls' in settings.json in 0.64.1"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2093#issuecomment-1613895188", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2093", "id": 1613895188, "node_id": "IC_kwDOBm6k_c5gMhYU", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2023-06-29T22:51:53Z", "updated_at": "2023-06-29T22:51:53Z", "author_association": "CONTRIBUTOR", "body": "I agree with not liking `metadata.json` stuff in a `datasette.*` config file. Editing description of a table/column in a file like `datasette.*` seems odd to me. \r\n\r\nThough since plugin configuration currently lives in `metadata.json`, I think it should be removed from there and placed in `datasette.*`, at least for top-level config like `datasette-auth-github`'s config. Keeping `metadata.json` strictly for documentation/licensing/column units makes sense to me, but anything plugin related should be in some config file, like `datasette.*`.\r\n\r\nAnd ya, supporting both `datasette.*` and CLI flags makes a lot of sense to me. Any `--setting` flag should override anything in `datasette.*` for easier debugging, with possibly a warning message so people don't get confused. Same with `--port` and a port defined in `datasette.*`", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1781530343, "label": "Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2123#issuecomment-1689207309", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2123", "id": 1689207309, "node_id": "IC_kwDOBm6k_c5kr0IN", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-23T03:07:27Z", "updated_at": "2023-08-23T03:07:27Z", "author_association": "OWNER", "body": "> I'm happy to debug and land a patch if it's welcome.\r\n\r\nYes please! 
What an odd bug.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1825007061, "label": "datasette serve when invoked with --reload interprets the serve command as a file"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2126#issuecomment-1672385674", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2126", "id": 1672385674, "node_id": "IC_kwDOBm6k_c5jrpSK", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-10T01:07:43Z", "updated_at": "2023-08-10T01:07:43Z", "author_association": "OWNER", "body": "What version of Datasette are you running?\r\n\r\nThat feature was added in Datasette 1.0a2, so if you're on the current stable release you won't have it yet.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1838266862, "label": "Permissions in metadata.yml / metadata.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2143#issuecomment-1685263948", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2143", "id": 1685263948, "node_id": "IC_kwDOBm6k_c5kcxZM", "user": {"value": 11784304, "label": "dvizard"}, "created_at": "2023-08-20T11:50:10Z", "updated_at": "2023-08-20T11:50:10Z", "author_association": "NONE", "body": "This also makes it simple to separate out secrets.\r\n\r\n`datasette --config settings.yaml --config secrets.yaml --config db-docs.yaml --config db-fixtures.yaml`\r\n\r\nsettings.yaml\r\n```\r\nsettings:\r\n default_page_size: 10\r\n max_returned_rows: 3000\r\n sql_time_limit_ms\": 8000\r\nplugins:\r\n datasette-ripgrep:\r\n path: /usr/local/lib/python3.11/site-packages\r\n```\r\n\r\nsecrets.yaml\r\n```\r\nplugins:\r\n datasette-auth-github:\r\n client_secret: SUCH_SECRET \r\n```\r\n\r\n\r\ndb-docs.yaml\r\n```\r\ndatabases:\r\n docs:\r\n permissions:\r\n create-table:\r\n id: editor\r\n```\r\n\r\ndb-fixtures.yaml\r\n```\r\ndatabases:\r\n fixtures:\r\n tables:\r\n no_primary_key:\r\n hidden: true\r\n queries:\r\n neighborhood_search:\r\n sql: |-\r\n select neighborhood, facet_cities.name, state\r\n from facetable join facet_cities on facetable.city_id = facet_cities.id\r\n where neighborhood like '%' || :text || '%' order by neighborhood;\r\n title: Search neighborhoods\r\n description_html: |-\r\n

This demonstrates basic LIKE search\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1855885427, "label": "De-tangling Metadata before Datasette 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2143#issuecomment-1692182910", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2143", "id": 1692182910, "node_id": "IC_kwDOBm6k_c5k3Kl-", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-24T18:06:57Z", "updated_at": "2023-08-24T18:08:17Z", "author_association": "OWNER", "body": "The other thing that could work is something like this:\r\n```bash\r\nexport AUTH_TOKENS_DB=\"tokens\"\r\ndatasette \\\r\n -s settings.sql_time_limit_ms 1000 \\\r\n -s plugins.datasette-auth-tokens.manage_tokens true \\\r\n -e plugins.datasette-auth-tokens.manage_tokens_database AUTH_TOKENS_DB\r\n```\r\nSo `-e` is an alternative version of `-s` which reads from the named environment variable instead of having the value provided directly as the second value in the pair.\r\n\r\nI quite like this, because it could replace the really ugly `$ENV` pattern we have in plugin configuration at the moment: https://docs.datasette.io/en/1.0a4/plugins.html#secret-configuration-values\r\n```yaml\r\nplugins:\r\n datasette-auth-github:\r\n client_secret:\r\n $env: GITHUB_CLIENT_SECRET\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1855885427, "label": "De-tangling Metadata before Datasette 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2145#issuecomment-1686683596", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2145", "id": 1686683596, "node_id": "IC_kwDOBm6k_c5kiL_M", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-21T16:49:12Z", "updated_at": "2023-08-21T16:49:12Z", "author_association": "OWNER", "body": "Suggestion from @asg017 is that we say that if your row has a null primary key you don't get a link to a row page for that row.\r\n\r\nWhich has some precedent, because our SQL view display doesn't link to row pages at all (since they don't make sense for views): https://latest.datasette.io/fixtures/simple_view", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1857234285, "label": "If a row has a primary key of `null` various things break"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-1033772902", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 1033772902, "node_id": "IC_kwDOBm6k_c49nh9m", "user": {"value": 1376648, "label": "jordaneremieff"}, "created_at": "2022-02-09T13:40:52Z", "updated_at": "2022-02-09T13:40:52Z", "author_association": "NONE", "body": "Hi @simonw, \r\n\r\nI've received some inquiries over the last year or so about Datasette and how it might be supported by [Mangum](https://github.com/jordaneremieff/mangum). 
I maintain Mangum which is, as far as I know, the only project that provides support for ASGI applications in AWS Lambda.\r\n\r\nIf there is anything that I can help with here, please let me know because I think what Datasette provides to the community (even beyond OSS) is noble and worthy of special consideration.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-744461856", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 744461856, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NDQ2MTg1Ng==", "user": {"value": 296686, "label": "robintw"}, "created_at": "2020-12-14T14:04:57Z", "updated_at": "2020-12-14T14:04:57Z", "author_association": "NONE", "body": "I'm looking into using datasette with a database with spatialite geometry columns, and came across this issue. Has there been any progress on this since 2018?\r\n\r\nIn one of my tables I'm just storing lat/lon points in a spatialite point geometry, and I've managed to make datasette-cluster-map display the points by extracting the lat and lon in SQL - using something like `select ... ST_X(location) as longitude, ST_Y(location) as latitude from Blah`. Something more 'built-in' would be great though - particularly for the tables I have that store more complex geometries.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/308#issuecomment-405971920", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/308", "id": 405971920, "node_id": "MDEyOklzc3VlQ29tbWVudDQwNTk3MTkyMA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2018-07-18T15:27:12Z", "updated_at": "2018-07-18T15:27:12Z", "author_association": "OWNER", "body": "It looks like there are a few extra options we should support:\r\n\r\nhttps://devcenter.heroku.com/articles/heroku-cli-commands\r\n\r\n```\r\n -t, --team=team team to use\r\n --region=region specify region for the app to run in\r\n --space=space the private space to create the app in\r\n```\r\n\r\nSince these differ from the options for Zeit Now I think this means splitting up `datasette publish now` and `datasette publish Heroku` into separate subcommands.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 330826972, "label": "Support extra Heroku apps:create options - region, space, team"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/339#issuecomment-404565566", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/339", "id": 404565566, "node_id": "MDEyOklzc3VlQ29tbWVudDQwNDU2NTU2Ng==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2018-07-12T16:08:42Z", "updated_at": "2018-07-12T16:08:42Z", "author_association": "OWNER", "body": "I'm going to turn this into an issue about better supporting the above option.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, 
\"eyes\": 0}", "issue": {"value": 340396247, "label": "Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/370#issuecomment-435974786", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/370", "id": 435974786, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNTk3NDc4Ng==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2018-11-05T18:06:56Z", "updated_at": "2018-11-05T18:06:56Z", "author_association": "OWNER", "body": "I've been thinking a bit about ways of using Jupyter Notebook more effectively with Datasette (thinks like a `publish_dataframes(df1, df2, df3)` function which publishes some Pandas dataframes and returns you a URL to a new hosted Datasette instance) but you're right, Jupyter Lab is potentially a much more interesting fit.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377155320, "label": "Integration with JupyterLab"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/391#issuecomment-450964512", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/391", "id": 450964512, "node_id": "MDEyOklzc3VlQ29tbWVudDQ1MDk2NDUxMg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-01-02T19:45:12Z", "updated_at": "2019-01-02T19:45:12Z", "author_association": "OWNER", "body": "Thanks, I've fixed this. I had to re-alias it against now:\r\n```\r\n~ $ now alias google-trends-pnwhfwvgqf.now.sh https://google-trends.datasettes.com/\r\n> Assigning alias google-trends.datasettes.com to deployment google-trends-pnwhfwvgqf.now.sh\r\n> Certificate for google-trends.datasettes.com (cert_uXaADIuNooHS3tZ) created [18s]\r\n> Success! google-trends.datasettes.com now points to google-trends-pnwhfwvgqf.now.sh [20s]\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 392610803, "label": "Google Trends example doesn\u2019t work"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602907207", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602907207, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-23T23:12:18Z", "updated_at": "2020-03-23T23:12:18Z", "author_association": "CONTRIBUTOR", "body": "This would also be useful for running Datasette in Jupyter notebooks on [Binder](https://mybinder.org/). While you can use [Jupyter-server-proxy](https://github.com/jupyterhub/jupyter-server-proxy) to access Datasette on Binder, the links are broken.\r\n\r\nWhy run Datasette on Binder? I'm developing a [range of Jupyter notebooks](https://glam-workbench.github.io/) that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic.\r\n\r\nFor example, there are a [series of notebooks](https://glam-workbench.github.io/trove-harvester/) that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. 
I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data.\r\n\r\nBut it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-603631640", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 603631640, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMzYzMTY0MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-03-25T04:19:08Z", "updated_at": "2020-03-25T04:19:08Z", "author_association": "OWNER", "body": "Shipped in 0.39: https://datasette.readthedocs.io/en/latest/changelog.html#v0-39", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/397#issuecomment-453330680", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/397", "id": 453330680, "node_id": "MDEyOklzc3VlQ29tbWVudDQ1MzMzMDY4MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-01-11T01:17:11Z", "updated_at": "2019-01-11T01:25:33Z", "author_association": "OWNER", "body": "If you pull [the latest image](https://hub.docker.com/r/datasetteproject/datasette) you should get the right SQLite version now:\r\n\r\n docker pull datasetteproject/datasette\r\n docker run -p 8001:8001 \\\r\n datasetteproject/datasette \\\r\n datasette -p 8001 -h 0.0.0.0\r\n\r\nhttp://0.0.0.0:8001/-/versions now gives me:\r\n\r\n```\r\n \"version\": \"3.26.0\"\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 397129564, "label": "Update official datasetteproject/datasette Docker container to SQLite 3.26.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/419#issuecomment-473708941", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/419", "id": 473708941, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3MzcwODk0MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-03-17T19:58:11Z", "updated_at": "2019-03-17T19:58:11Z", "author_association": "OWNER", "body": "Some problems to solve:\r\n\r\n* Right now Datasette assumes it can always show the count of rows in a table, because this has been pre-calculated. If a database is mutable the pre-calculation trick no longer works, and for giant tables a `select count(*) from X` query can be expensive to run. Maybe we set a time limit on these? If time limit expires show \"many rows\"?\r\n* Maintaining a content hash of the table no longer makes sense if it is changing (though interestingly there's a `.sha3sum` built-in SQLite CLI command which takes a hash of the content and stays the same even through vacuum runs). Without that we need a different mechanism for calculating table colours. 
It also means that we can't do the special dbname-hash URL trick (see #418) at all if the database is opened as mutable.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421551434, "label": "Default to opening files in mutable mode, special option for immutable files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/46#issuecomment-344161226", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/46", "id": 344161226, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDE2MTIyNg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-14T06:41:21Z", "updated_at": "2017-11-14T06:41:21Z", "author_association": "OWNER", "body": "Spatial extensions would be really useful too. https://www.gaia-gis.it/spatialite-2.1/SpatiaLite-manual.html", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 271301468, "label": "Dockerfile should build more recent SQLite with FTS5 and spatialite support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/486#issuecomment-495659567", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/486", "id": 495659567, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5NTY1OTU2Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-05-24T14:41:45Z", "updated_at": "2019-05-24T14:41:45Z", "author_association": "OWNER", "body": "I'm really keen to offer this as a plugin hook once I have Datasette working on ASGI - #272 \r\n\r\nI'll hopefully have that working in the next few weeks, but in the meantime there are a couple of tricks you can use:\r\n\r\n- you can add static HTML files (no templates though) using the static route configuration options\r\n- you can link to external hosted pages using the `about_url` metadata option\r\n- you can add information to an existing page with a custom template. I do that here for example: https://russian-ira-facebook-ads.datasettes.com/", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 448189298, "label": "Ability to add extra routes and related templates"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/493#issuecomment-1689128911", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/493", "id": 1689128911, "node_id": "IC_kwDOBm6k_c5krg_P", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-23T01:29:20Z", "updated_at": "2023-08-23T01:29:20Z", "author_association": "OWNER", "body": "It's going to be called `datasette.json` and the concept of metadata will be split out separately. 
See:\r\n\r\n- #2149 ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 449886319, "label": "Rename metadata.json to config.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/498#issuecomment-498839428", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/498", "id": 498839428, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5ODgzOTQyOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-06-04T20:53:21Z", "updated_at": "2019-06-04T20:53:21Z", "author_association": "OWNER", "body": "It does not, but that's a really great idea for a feature.\r\n\r\nOne challenge here is that FTS ranking calculations take overall table statistics into account, which means it's usually not possible to combine rankings from different tables in a sensible way. But that doesn't mean it's not possible to return grouped results.\r\n\r\nI think this makes a lot of sense as a plugin.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 451513541, "label": "Full text search of all tables at once?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/499#issuecomment-498840129", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/499", "id": 498840129, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5ODg0MDEyOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-06-04T20:55:30Z", "updated_at": "2019-06-04T21:01:22Z", "author_association": "OWNER", "body": "I really want this too!\r\n\r\nIt's one of the goals of the Datasette Library #417 concept, which I'm hoping to turn into an actual feature in the coming months.\r\n\r\nIt's also going to be a major focus of my ten month JSK fellowship at Stanford, which starts in September. https://twitter.com/simonw/status/1123624552867565569", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 451585764, "label": "Accessibility for non-techie newsies? "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-509154312", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 509154312, "node_id": "MDEyOklzc3VlQ29tbWVudDUwOTE1NDMxMg==", "user": {"value": 4363711, "label": "JesperTreetop"}, "created_at": "2019-07-08T09:36:25Z", "updated_at": "2019-07-08T09:40:33Z", "author_association": "NONE", "body": "@chrismp: Ports 1024 and under are privileged and can usually only be bound by a root or supervisor user, so it makes sense if you're running as the user `chris` that port 8000 works but 80 doesn't.\r\n\r\nSee [this generic question-and-answer](https://superuser.com/questions/710253/allow-non-root-process-to-bind-to-port-80-and-443) and [this systemd question-and-answer](https://stackoverflow.com/questions/40865775/linux-systemd-service-on-port-80) for more information about ways to skin this cat. 
Without knowing your specific circumstances, either extending those privileges to that service/executable/user, proxying them through something like nginx or indeed looking at what the nginx systemd job has to do to listen at port 80 all sound like good ways to start.\r\n\r\nAt this point, this is more generic systemd/Linux support than a Datasette issue, which is why a complete rando like me is able to contribute anything. ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/526#issuecomment-1074019047", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/526", "id": 1074019047, "node_id": "IC_kwDOBm6k_c5ABDrn", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-03-21T15:09:56Z", "updated_at": "2022-03-21T15:09:56Z", "author_association": "OWNER", "body": "I should research how much overhead creating a new connection costs - it may be that an easy way to solve this is to create A dedicated connection for the query and then close that connection at the end.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459882902, "label": "Stream all results for arbitrary SQL and canned queries"}, "performed_via_github_app": null}