{"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1316320521", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1316320521, "node_id": "IC_kwDOBm6k_c5OdXUJ", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T04:29:23Z", "updated_at": "2022-11-16T04:29:23Z", "author_association": "CONTRIBUTOR", "body": "\"Screenshot\r\n\r\nUI issue I see on the autocomplete popup with overlapping icon & text. Screenshot's from Firefox, it seems even a little more pronounced on Safari", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/875#issuecomment-651293559", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/875", "id": 651293559, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MTI5MzU1OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-06-29T18:43:50Z", "updated_at": "2020-06-29T18:43:50Z", "author_association": "OWNER", "body": "\"_memory_\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 647103735, "label": "\"Logged in as: XXX - logout\" navigation item"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/146#issuecomment-346682905", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/146", "id": 346682905, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjY4MjkwNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-23T18:55:08Z", "updated_at": "2017-11-23T18:55:08Z", "author_association": "OWNER", "body": "\"compute_engine_-_simonwillisonblog\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 276455748, "label": "datasette publish gcloud"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/86#issuecomment-346691243", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/86", "id": 346691243, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjY5MTI0Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-23T20:07:15Z", "updated_at": "2017-11-23T20:07:15Z", "author_association": "OWNER", "body": "\"fivethirtyeight__bob-ross_elements-by-episode_csv\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273703829, "label": "Filter UI on table page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/993#issuecomment-703928029", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/993", "id": 703928029, "node_id": "MDEyOklzc3VlQ29tbWVudDcwMzkyODAyOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-10-05T22:42:45Z", "updated_at": "2020-10-05T22:42:59Z", "author_association": "OWNER", "body": "\"fixtures__facetable__15_rows\"\r\n\r\nThe `NOT NULL` text shows up only for columns that are not null.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 715072935, "label": 
"Column action menu should show column type"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/41#issuecomment-339866724", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/41", "id": 339866724, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTg2NjcyNA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-27T04:04:52Z", "updated_at": "2017-10-27T04:04:52Z", "author_association": "OWNER", "body": "\"databases\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268590777, "label": "Homepage should show summary of databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1712#issuecomment-1097068474", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1712", "id": 1097068474, "node_id": "IC_kwDOBm6k_c5BY--6", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-12T18:38:18Z", "updated_at": "2022-04-12T18:38:18Z", "author_association": "OWNER", "body": "\"image\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1202227104, "label": "Make \"\" easier to read"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/133#issuecomment-346902583", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/133", "id": 346902583, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjkwMjU4Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-24T22:30:32Z", "updated_at": "2017-11-24T22:30:32Z", "author_association": "OWNER", "body": "\"sf-trees__street_tree_list__1_row_where_search_matches__ocean___qcareassistant____1__qcareassistant_is_not_blank_and_qlegalstatus___1\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275176006, "label": "If view is filtered, search should apply within those filtered rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/750#issuecomment-622999623", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/750", "id": 622999623, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMjk5OTYyMw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-05-02T19:05:07Z", "updated_at": "2020-05-02T19:05:07Z", "author_association": "OWNER", "body": "\"data__names__5_rows_where_where_name_not_like__Sim___sorted_by_rowid\"", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 611252244, "label": "Add notlike table filter"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/96#issuecomment-344786528", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/96", "id": 344786528, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDc4NjUyOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-16T01:32:41Z", "updated_at": "2017-11-16T01:32:41Z", "author_association": "OWNER", "body": "\"australian-dogs\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274001453, "label": "UI for editing named 
parameters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/573#issuecomment-1646686675", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/573", "id": 1646686675, "node_id": "IC_kwDOCGYnMM5iJnHT", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-07-22T22:54:38Z", "updated_at": "2023-07-22T22:54:38Z", "author_association": "OWNER", "body": "\"image\"\r\n\r\nGlitch in the rendered documentation from https://sqlite-utils--573.org.readthedocs.build/en/573/plugins.html#prepare-connection-conn", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1816917522, "label": "feat: Implement a prepare_connection plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/147#issuecomment-346900554", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/147", "id": 346900554, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjkwMDU1NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-24T22:02:22Z", "updated_at": "2017-11-24T22:02:22Z", "author_association": "OWNER", "body": "\"conventional_power_plants_eu__conventional_power_plants_eu__14_rows_where_company____nuon_\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 276476670, "label": "Tidy up design of the header of the table page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/132#issuecomment-346701751", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/132", "id": 346701751, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjcwMTc1MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-23T21:51:51Z", "updated_at": "2017-11-23T21:51:51Z", "author_association": "OWNER", "body": "\"fatal-police-shootings-data__fatal-police-shootings-data\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275175929, "label": "Row view is not currently expanding foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1106#issuecomment-733247101", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1106", "id": 733247101, "node_id": "MDEyOklzc3VlQ29tbWVudDczMzI0NzEwMQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-24T21:35:29Z", "updated_at": "2020-11-24T21:36:04Z", "author_association": "OWNER", "body": "\"Edit_Redirects___Read_the_Docs\"\r\n\r\nhttps://docs.datasette.io/en/latest/config.html isn't redirecting though, even after I tried running a rebuild of the `latest` version.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749983857, "label": "Rebrand and redirect config.rst as settings.rst"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/69#issuecomment-344048656", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/69", "id": 344048656, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDA0ODY1Ng==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-13T20:32:47Z", "updated_at": "2017-11-13T20:32:47Z", "author_association": "OWNER", 
"body": "\"ak\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273248366, "label": "Enforce pagination (or at least limits) for arbitrary custom SQL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/65#issuecomment-343709217", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/65", "id": 343709217, "node_id": "MDEyOklzc3VlQ29tbWVudDM0MzcwOTIxNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-11-12T02:36:37Z", "updated_at": "2017-11-12T02:36:37Z", "author_association": "OWNER", "body": "\"nhsadmin\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273191608, "label": "Re-implement ?sql= mode"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647936117", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647936117, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:25:17Z", "updated_at": "2020-06-23T06:25:17Z", "author_association": "CONTRIBUTOR", "body": "> \r\n> \r\n> ```\r\n> sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50\r\n> ```\r\n> \r\n> Looks like that will take 35 minutes to run (it's not a particularly fast tool).\r\n\r\nTry chunking write operations into batches every 1000 records or so.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/954#issuecomment-682312736", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/954", "id": 682312736, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MjMxMjczNg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-08-28T04:05:01Z", "updated_at": "2020-08-28T04:05:10Z", "author_association": "OWNER", "body": "> It can also return a dictionary with the following keys. This format is **deprecated** as-of Datasette 0.49 and will be removed by Datasette 1.0.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 687694947, "label": "Remove old register_output_renderer dict mechanism in Datasette 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421177666", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1421177666, "node_id": "IC_kwDOCGYnMM5UtXNC", "user": {"value": 21095447, "label": "4l1fe"}, "created_at": "2023-02-07T17:39:00Z", "updated_at": "2023-02-07T17:39:00Z", "author_association": "NONE", "body": "> lets users make schema changes, so it's important to me that the tool work in a non-surprising way -- if you ask for a column of type X, you should get type X. If the column or table previously had CHECK constraints, they shouldn't be silently removed\r\n\r\nI've got your concern. 
Let's see if we get a reply on it, and I'll close the issue a little later.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1059#issuecomment-718078447", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1059", "id": 718078447, "node_id": "MDEyOklzc3VlQ29tbWVudDcxODA3ODQ0Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-10-28T17:07:59Z", "updated_at": "2020-10-28T17:08:14Z", "author_association": "OWNER", "body": "> #### 0.6.0 (2020-10-27)\r\n> \r\n> - aiofiles is now tested on ppc64le.\r\n> - Added name and mode properties to async file objects. [#82](https://github.com/Tinche/aiofiles/pull/82)\r\n> - Fixed a DeprecationWarning internally. [#75](https://github.com/Tinche/aiofiles/pull/75)\r\n> - Python 3.9 support and tests.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 731445447, "label": "Update aiofiles requirement from <0.6,>=0.4 to >=0.4,<0.7"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1648#issuecomment-1060065736", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1648", "id": 1060065736, "node_id": "IC_kwDOBm6k_c4_L1HI", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-03-06T23:43:00Z", "updated_at": "2022-03-06T23:43:11Z", "author_association": "OWNER", "body": "> * Maybe use dash encoding for database name too?\r\n\r\nYes, I'm going to do this. At the moment if a DB file is called `fixx%tures.db` when you run it in Datasette the path is `/fix%2525tures` - which is liable to break.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1160432941, "label": "Use dash encoding for table names and row primary keys in URLs"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1522#issuecomment-974575512", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1522", "id": 974575512, "node_id": "IC_kwDOBm6k_c46FteY", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-11-20T02:09:20Z", "updated_at": "2021-11-20T02:09:20Z", "author_association": "OWNER", "body": "> **Waiting for health check to begin** makes it sound like the container didn't start properly.\r\n\r\nThat eventually failed, but I did get these in the build logs:\r\n\r\n\"Screen\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058896236, "label": "Deploy a live instance of demos/apache-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1228#issuecomment-1072954795", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1228", "id": 1072954795, "node_id": "IC_kwDOBm6k_c4_8_2r", "user": {"value": 7107523, "label": "Kabouik"}, "created_at": "2022-03-19T06:44:40Z", "updated_at": "2022-03-19T06:44:40Z", "author_association": "NONE", "body": "> ... 
unless your data had a column called `n`?\r\n\r\nExactly, that's highly likely even though I can't double check from this computer just now. Thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 810397025, "label": "500 error caused by faceting if a column called `n` exists"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1317329157", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1317329157, "node_id": "IC_kwDOBm6k_c5OhNkF", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T16:46:52Z", "updated_at": "2022-11-16T16:46:52Z", "author_association": "CONTRIBUTOR", "body": "> \"Screenshot\r\n> \r\n> UI issue I see on the autocomplete popup with overlapping icon & text. Screenshot's from Firefox, it seems even a little more pronounced on Safari\r\n\r\nI checked and if I empty out app.css the bug goes away, so there's some kind of inheritance issue there. It's hard to debug bc the autocomplete popup goes away on blur (i.e. when trying to inspect it in devtools), but at least it's narrowed down a bit.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504685187", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504685187, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY4NTE4Nw==", "user": {"value": 7936571, "label": "chrismp"}, "created_at": "2019-06-22T17:43:24Z", "updated_at": "2019-06-22T17:43:24Z", "author_association": "NONE", "body": "> > > WorkingDirectory=/path/to/data\r\n> > \r\n> > \r\n> > @russss, Which directory does this represent?\r\n> \r\n> It's the working directory (cwd) of the spawned process. In this case if you set it to the directory your data is in, you can use relative paths to the db (and metadata/templates/etc) in the `ExecStart` command.\r\n\r\nIn my case, on a remote server, I set up a virtual environment in `/home/chris/Env/datasette`, and when I activated that environment I ran `pip install datasette`. \r\n\r\nMy datasette project is in `/home/chris/datatsette-project`, so I guess I'd use that directory in the `WorkingDirectory` parameter?\r\n\r\nAnd the `ExecStart` parameter would be `/home/chris/Env/datasette/lib/python3.7/site-packages/datasette serve -h 0.0.0.0 my.db` I'm guessing?\r\n ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074182", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074182, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDE4Mg==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:37:42Z", "updated_at": "2021-06-26T23:37:42Z", "author_association": "CONTRIBUTOR", "body": "> > Hmmm... 
that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries.\r\n> > @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try?\r\n> \r\n> Answering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160\r\n> \r\n> That's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it.\r\n> \r\n> This is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem.\r\n> \r\n> But... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead.\r\n\r\n_Ideally_ this hook would be asynchronous, but when I started down that path I quickly realized how large of a change this would be, since metadata gets used synchronously across the entire Datasette codebase. (And calling async code from sync is non-trivial.)\r\n\r\nIn my live-configuration implementation I use synchronous reads using a persistent sqlite connection. This works pretty well in practice, but I agree it's limiting. My thinking around this was to go with the path of least change as `Datasette.metadata()` is a critical core function.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/648#issuecomment-619591380", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/648", "id": 619591380, "node_id": "MDEyOklzc3VlQ29tbWVudDYxOTU5MTM4MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-04-26T17:33:04Z", "updated_at": "2020-04-26T17:33:04Z", "author_association": "OWNER", "body": "> > Stretch goal: it would be neat if these pages could return custom HTTP headers (eg content-type) and maybe even status codes (eg for redirects) somehow.\r\n> \r\n> I think I could do that with a custom template function - if that function is called during the render then we follow those instructions instead of returning the rendered HTML.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 534492501, "label": "Mechanism for adding arbitrary pages like /about"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/737#issuecomment-619591533", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/737", "id": 619591533, "node_id": "MDEyOklzc3VlQ29tbWVudDYxOTU5MTUzMw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-04-26T17:33:48Z", "updated_at": "2020-04-26T17:33:48Z", "author_association": "OWNER", "body": "> > Stretch goal: it would be neat if these pages could return custom HTTP headers (eg content-type) and maybe even status codes (eg for redirects) somehow.\r\n> \r\n> I think I could do that with a custom template function - if that function is called during the render then we follow those 
instructions instead of returning the rendered HTML.\r\nhttps://github.com/simonw/datasette/issues/648#issuecomment-619591380", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 607067303, "label": "Custom pages mechanism, refs #648"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504684831", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504684831, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY4NDgzMQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-06-22T17:38:23Z", "updated_at": "2019-06-22T17:38:23Z", "author_association": "CONTRIBUTOR", "body": "> > WorkingDirectory=/path/to/data\r\n> \r\n> @russss, Which directory does this represent?\r\n\r\nIt's the working directory (cwd) of the spawned process. In this case if you set it to the directory your data is in, you can use relative paths to the db (and metadata/templates/etc) in the `ExecStart` command.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-401312981", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 401312981, "node_id": "MDEyOklzc3VlQ29tbWVudDQwMTMxMjk4MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-06-29T10:14:54Z", "updated_at": "2018-06-29T10:14:54Z", "author_association": "CONTRIBUTOR", "body": "> @RusSs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg kartena/Proj4Leaflet) although the leaflet side would need to detect or be informed of the original projection?\r\n\r\nWell, as @simonw mentioned, GeoJSON only supports WGS84, and GeoJSON (and/or TopoJSON) is the standard we probably want to aim for. On-the-fly reprojection in spatialite is not an issue anyway, and in general I think you want to be serving stuff to web maps in WGS84 or Web Mercator.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1710380941", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 1710380941, "node_id": "IC_kwDODFE5qs5l8leN", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2023-09-07T15:39:59Z", "updated_at": "2023-09-07T15:39:59Z", "author_association": "NONE", "body": "> @maxhawkins curious why you didn't use the stdlib `mailbox` to parse the `mbox` files?\r\n\r\nMailbox parses the entire mbox into memory. Using the lower level library lets us stream the emails in one at a time to support larger archives. 
Both libraries are in the stdlib.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1003437288", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 1003437288, "node_id": "IC_kwDODFE5qs47zzzo", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-12-31T19:06:20Z", "updated_at": "2021-12-31T19:06:20Z", "author_association": "NONE", "body": "> @maxhawkins how hard would it be to add an entry to the table that includes the HTML version of the email, if it exists? I just attempted your the PR branch on a very small mbox file, and it worked great. My use case is a research project and I need to access more than just the body plain text.\r\n\r\nShouldn't be hard. The easiest way is probably to remove the `if body.content_type == \"text/html\"` clause from [utils.py:254](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8e6d487b697ce2e8ad885acf613a157bfba84c59#diff-25ad9dd1ced1b8bfc37fda8444819c803232c08891e4af3d4064aa205d8174eaR254) and just return content directly without parsing.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2143#issuecomment-1690800641", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2143", "id": 1690800641, "node_id": "IC_kwDOBm6k_c5kx5IB", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-24T00:11:16Z", "updated_at": "2023-08-24T00:11:16Z", "author_association": "OWNER", "body": "> @simonw, FWIW, I do exactly the same thing for one of my projects (both to allow multiple configuration files to be passed on the command line and setting individual values) and it works quite well for me and my users. I even use the same parameter name for both (https://studio.zerobrane.com/doc-configuration#configuration-via-command-line), but I understand why you may want to use different ones for files and individual values. 
There is one small difference that I accept code snippets, but I don't think it matters much in this case.\r\n\r\nThat's a neat example thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1855885427, "label": "De-tangling Metadata before Datasette 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1356#issuecomment-1017016553", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1356", "id": 1017016553, "node_id": "IC_kwDOBm6k_c48nnDp", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-01-20T01:06:37Z", "updated_at": "2022-01-20T01:06:37Z", "author_association": "OWNER", "body": "> A problem with this is that if you're using `--query` you likely want ALL of the results - at the moment the only Datasette output type that can stream everything is `.csv` and plugin formats can't handle full streams, see #1062 and #1177.\r\n\r\nI figured out a neat pattern for streaming JSON arrays in this TIL: https://til.simonwillison.net/python/output-json-array-streaming", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 910092577, "label": "Research: syntactic sugar for using --get with SQL queries, maybe \"datasette query\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/578#issuecomment-1648339661", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/578", "id": 1648339661, "node_id": "IC_kwDOCGYnMM5iP6rN", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2023-07-24T17:44:30Z", "updated_at": "2023-07-24T17:44:30Z", "author_association": "CONTRIBUTOR", "body": "> A related feature would be support for plugins to add new ways of ingesting data - currently sqlite-utils insert works against JSON, newline-JSON, CSV and TSV.\r\n\r\nThis is my goal, to have one plugin that handles input and output symmetrically. I'd like to be able to do something like this:\r\n\r\n```sh\r\nsqlite-utils insert data.db table file.geojson --format geojson\r\n# ... explore and manipulate in Datasette\r\nsqlite-utils query data.db ... --format geojson > output.geojson\r\n```\r\n\r\nThis would work especially well with [datasette-query-files](https://github.com/eyeseast/datasette-query-files), since I already have the queries I need saved in standalone SQL files.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1818838294, "label": "Plugin hook for adding new output formats"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421055590", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1421055590, "node_id": "IC_kwDOCGYnMM5Us5Zm", "user": {"value": 21095447, "label": "4l1fe"}, "created_at": "2023-02-07T16:25:31Z", "updated_at": "2023-02-07T16:25:31Z", "author_association": "NONE", "body": "> Ah, it looks like that is controlled by this dict: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L178\r\n> \r\n> I suspect you could overwrite the datetime entry to achieve what you want\r\n\r\nAnd thank you for pointing me to it. 
At least, i can make a monkey patch for my need...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066222323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066222323, "node_id": "IC_kwDOBm6k_c4_jULz", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-14T00:36:42Z", "updated_at": "2022-03-14T00:36:42Z", "author_association": "CONTRIBUTOR", "body": "> Ah, sorry, I didn't get what you were saying you the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :)\r\n\r\nAll good. Report back any issues you find with this stuff. Metadata/dynamic config hasn't been tested widely outside of what I've done AFAIK. If you find a strong use case for async meta, it's going to be better to know sooner rather than later!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/565#issuecomment-1646657324", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/565", "id": 1646657324, "node_id": "IC_kwDOCGYnMM5iJf8s", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-07-22T19:39:06Z", "updated_at": "2023-07-22T19:39:06Z", "author_association": "OWNER", "body": "> Also need a design for an option for the `.transform()` method to indicate that the new table should be created with a new name without dropping the old one.\r\n\r\nI think `keep_table=\"name_of_table\"` is good for this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1786258502, "label": "Table renaming: db.rename_table() and sqlite-utils rename-table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1439#issuecomment-1045075207", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1439", "id": 1045075207, "node_id": "IC_kwDOBm6k_c4-SpUH", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-02-18T19:39:35Z", "updated_at": "2022-02-18T19:40:13Z", "author_association": "OWNER", "body": "> And if for some horific reason you had a table with the name `/db/table-.csv.csv` (so `/db/` was the first part of the actual table name in SQLite) the URLs would look like this:\r\n> \r\n> * `/db/%2Fdb%2Ftable---.csv-.csv` - the HTML version\r\n> * `/db/%2Fdb%2Ftable---.csv-.csv.csv` - the CSV version\r\n> * `/db/%2Fdb%2Ftable---.csv-.csv.json` - the JSON version\r\n\r\nHere's what those look like with the updated version of `dot_dash_encode()` that also encodes `/` as `-/`:\r\n\r\n- `/db/-/db-/table---.csv-.csv` - HTML\r\n- `/db/-/db-/table---.csv-.csv.csv` - CSV\r\n- `/db/-/db-/table---.csv-.csv.json` - JSON\r\n\r\n\"image\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 973139047, "label": "Rethink how .ext 
formats (v.s. ?_format=) works before 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2143#issuecomment-1684488526", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2143", "id": 1684488526, "node_id": "IC_kwDOBm6k_c5kZ0FO", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-18T22:18:39Z", "updated_at": "2023-08-18T22:18:39Z", "author_association": "OWNER", "body": "> Another option would be, instead of flat `datasette.json`/`datasette.yaml` files, we could instead use a Python file, like `datasette_config.py`. That way one could dynamically generate config (ex dev vs prod, auto-discover credentials, etc.). Kinda like Django settings.\r\n\r\n> Another option would be, instead of flat `datasette.json`/`datasette.yaml` files, we could instead use a Python file, like `datasette_config.py`. That way one could dynamically generate config (ex dev vs prod, auto-discover credentials, etc.). Kinda like Django settings.\r\n\r\nI'm not a fan of that. I feel like software history is full of examples of projects that implemented configuration-as-code and then later regretted it - the most recent example is `setup.py` in Python turning into `pyproject.yaml`, but I feel like I've seen that pattern play out elsewhere too.\r\n\r\nI don't think having people dynamically generate JSON/YAML for their configuration is a big burden. I'd have to see some very compelling use-cases to convince me otherwise.\r\n\r\nThat said, I do really like a bias towards settings that can be changed at runtime. Datasette has suffered a bit from some settings that can't be easily changed at runtime already - hence my gnarly https://github.com/simonw/datasette-remote-metadata plugin.\r\n\r\nFor things like Datasette Cloud for example the more people can configure without rebooting their container the better!\r\n\r\nI don't think live reconfiguration at runtime is incompatible with JSON/YAML configuration though. 
Caddy is one of my favourite examples of software that can be entirely re-configured at runtime by POSTING a big blob of JSON to it: https://caddyserver.com/docs/quick-starts/api\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1855885427, "label": "De-tangling Metadata before Datasette 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1870#issuecomment-1295667649", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1870", "id": 1295667649, "node_id": "IC_kwDOBm6k_c5NOlHB", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-29T00:52:43Z", "updated_at": "2022-10-29T00:53:43Z", "author_association": "CONTRIBUTOR", "body": "> Are you saying that I can build a container, but then when I run it and it does `datasette serve -i data.db ...` it will somehow modify the image, or create a new modified filesystem layer in the runtime environment, as a result of running that `serve` command?\r\n\r\nSomehow, `datasette serve -i data.db` will lead to the `data.db` being modified, which will trigger a [copy-on-write](https://docs.docker.com/storage/storagedriver/#the-copy-on-write-cow-strategy) of `data.db` into the read-write layer of the container.\r\n\r\nI don't understand **how** that happens.\r\n\r\nit kind of feels like a bug in sqlite, but i can't quite follow the sqlite code.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1426379903, "label": "don't use immutable=1, only mode=ro"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1522#issuecomment-974683220", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1522", "id": 974683220, "node_id": "IC_kwDOBm6k_c46GHxU", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-11-20T17:29:12Z", "updated_at": "2021-11-20T17:29:12Z", "author_association": "OWNER", "body": "> As a a sanity check, would it be worth looking at trying to push the multi-process container on another provider of a knative / cloud run / tekton ? I have a somewhat similar use case for a future proejct, so i'm been very grateful to you sharing all the progress in this issue.\r\n\r\nThat's a great idea. I'll try running on a non-Knative host too (probably Fly - though they actually run containers using Firecracker which ends up being completely different).\r\n\r\nCloud Run are the only Knative host I've used, know of any others aside from Scaleway? They look like they're worth getting familiar with.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058896236, "label": "Deploy a live instance of demos/apache-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1296#issuecomment-850583584", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1296", "id": 850583584, "node_id": "MDEyOklzc3VlQ29tbWVudDg1MDU4MzU4NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-05-28T18:06:11Z", "updated_at": "2021-05-28T18:06:11Z", "author_association": "OWNER", "body": "> As a bonus, the Docker image becomes smaller\r\n\r\nThat's a huge surprise to me! 
And most welcome.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855446829, "label": "Dockerfile: use Ubuntu 20.10 as base"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/432#issuecomment-488595724", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/432", "id": 488595724, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4ODU5NTcyNA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-02T08:50:53Z", "updated_at": "2019-05-02T08:50:53Z", "author_association": "CONTRIBUTOR", "body": "> Can I pull those needs out of the Facet class somehow?\r\n\r\nI was thinking that it might be handy for datasette to have a request object which wraps the Sanic Request. This could include the datasette-specific querystring decoding and the `special_args` parsing from TableView.data.\r\n\r\nThis would mean that we could expose the request object to plugin hooks without coupling them to Sanic.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432893491, "label": "Refactor facets to a class and new plugin, refs #427"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/670#issuecomment-797158641", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/670", "id": 797158641, "node_id": "MDEyOklzc3VlQ29tbWVudDc5NzE1ODY0MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-03-12T00:59:49Z", "updated_at": "2021-03-12T00:59:49Z", "author_association": "OWNER", "body": "> Challenge: what's the equivalent for PostgreSQL of opening a database in read only mode? Will I have to talk users through creating read only credentials?\r\n\r\nIt looks like the answer to this is yes - I'll need users to setup read-only credentials. Here's a TIL about that: https://til.simonwillison.net/postgresql/read-only-postgresql-user", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 564833696, "label": "Prototoype for Datasette on PostgreSQL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/675#issuecomment-590593247", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/675", "id": 590593247, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDU5MzI0Nw==", "user": {"value": 141844, "label": "aviflax"}, "created_at": "2020-02-24T23:02:52Z", "updated_at": "2020-02-24T23:02:52Z", "author_association": "NONE", "body": "> Design looks great to me.\r\n\r\nExcellent, thanks!\r\n\r\n> I'm not keen on two letter short versions (`-cp`) - I'd rather either have a single character or no short form at all.\r\n\r\nHmm, well, anyone running `datasette package` is probably at least somewhat familiar with UNIX CLIs\u2026 so how about `--cp` as a middle ground?\r\n\r\n```shell\r\n$ datasette package --cp /the/source/path /the/target/path data.db\r\n```\r\n\r\nI think I like it. 
Easy to remember!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 567902704, "label": "--cp option for datasette publish and datasette package for shipping additional files and directories"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/249#issuecomment-803502424", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/249", "id": 803502424, "node_id": "MDEyOklzc3VlQ29tbWVudDgwMzUwMjQyNA==", "user": {"value": 36287, "label": "prabhur"}, "created_at": "2021-03-21T02:43:32Z", "updated_at": "2021-03-21T02:43:32Z", "author_association": "NONE", "body": "> Did you run `enable-fts` before you inserted the data?\r\n> \r\n> If so you'll need to run `populate-fts` after the insert to populate the FTS index.\r\n> \r\n> A better solution may be to add `--create-triggers` to the `enable-fts` command to add triggers that will automatically keep the index updated as you insert new records.\r\n\r\nWow. Wasn't expecting a response this quick, especially during a weekend. :-) Sincerely appreciate it.\r\nI tried the `populate-fts` and that did the trick. My bad for not consulting the docs again. I think I forgot to add that step when I automated the workflow.\r\nThanks for the suggestion. I'll close this issue. Have a great weekend and many many thanks for creating these suite of tools around sqlite.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 836963850, "label": "Full text search possibly broken?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421022917", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1421022917, "node_id": "IC_kwDOCGYnMM5UsxbF", "user": {"value": 21095447, "label": "4l1fe"}, "created_at": "2023-02-07T16:06:03Z", "updated_at": "2023-02-07T16:08:58Z", "author_association": "NONE", "body": "> Do you see a way to enable it without affecting existing users or bumping the major version number?\r\n\r\nI don't see a clean solution, only extending code with a side variable that tells us we want to apply advanced types instead of basic.\r\n\r\nit could be a similiar command like `tranform-v2 --type column DATETIME` or a cli option `transform --adv-type column DATETIME` along with a dict that contains the advanced types. Then with knowledge that we run an advanced command we take that dictionary somehow, we can wrap the current and new dictionaries by a superdict and work with it everywhere according to the knowledge. 
This way shouldn't affect users who are using the previous lib versions and it have to be merged in the next major one.\r\n\r\nBut this way looks a bad design, too messy.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1000#issuecomment-705926445", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1000", "id": 705926445, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNTkyNjQ0NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-10-09T02:15:38Z", "updated_at": "2020-10-09T02:15:38Z", "author_association": "OWNER", "body": "> FAILED tests/test_messages.py::test_messages_are_displayed_and_cleared - KeyError: 'ds_messages'\r\n\r\nThat one is caused by `response.cookies` skipping cookies that were set to the empty string. Same fix as this: https://github.com/simonw/datasette/blob/a1687351fb75b01f737fda4ad07e0781029de05c/tests/test_auth.py#L90-L95", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 717746043, "label": "datasette.client internal requests mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066169718", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066169718, "node_id": "IC_kwDOBm6k_c4_jHV2", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-13T19:48:49Z", "updated_at": "2022-03-13T19:48:49Z", "author_association": "CONTRIBUTOR", "body": "> For my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests?\r\n\r\nYou shouldn't need to do this, as I mentioned previously. The code inside `render_cell` hook already has access to the most recently sync'd metadata via `datasette._metadata_local`. Refreshing the metadata for every cell seems ... excessive.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/595#issuecomment-552327079", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/595", "id": 552327079, "node_id": "MDEyOklzc3VlQ29tbWVudDU1MjMyNzA3OQ==", "user": {"value": 647359, "label": "tomchristie"}, "created_at": "2019-11-11T07:34:27Z", "updated_at": "2019-11-11T07:34:27Z", "author_association": "NONE", "body": "> Glitch has been upgraded to Python 3.7.\r\n\r\nWhoop! 
\ud83e\udd73 \u2728 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 506300941, "label": "bump uvicorn to 0.9.0 to be Python-3.8 friendly"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1316339035", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1316339035, "node_id": "IC_kwDOBm6k_c5Odb1b", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T04:47:11Z", "updated_at": "2022-11-16T04:47:11Z", "author_association": "CONTRIBUTOR", "body": "> Have you ever seen CodeMirror correctly auto-completing columns? I'm not entirely sure I believe that the feature works anywhere else.\r\n\r\nI was thinking of the BigQuery console, like \r\n\r\n\"Screenshot\r\n\r\nBut they must be doing something pretty custom & appears to be using Monaco anyway. I suspect some kind of lower level autocomplete integration could make this work, but if the table completion is a good-enough starting point I think it's not too hard. The main issue is that we don't pass the relevant table data down to QueryView.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782748093", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782748093, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0ODA5Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:54:52Z", "updated_at": "2021-02-20T20:54:52Z", "author_association": "OWNER", "body": "> Have you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don\u2019t parse the JSON. Could be default (can\u2019t be much bigger with gzip?) or opt-in.\r\n\r\nAdding a `?_pretty=1` option that does that is a great idea, I'm filing a ticket for it: #1237", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/615#issuecomment-846660103", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/615", "id": 846660103, "node_id": "MDEyOklzc3VlQ29tbWVudDg0NjY2MDEwMw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-05-24T00:47:00Z", "updated_at": "2021-05-24T00:47:00Z", "author_association": "OWNER", "body": "> Here's a bug: removing the `rowid` column returns an error.\r\n\r\nRemoving the `rowid` column should work. 
We can continue to show the `Link` column, ensuring users can still navigate to the row page for each row.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 517451234, "label": "?_col= and ?_nocol= support for toggling columns on table view"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074701", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074701, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDcwMQ==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:45:18Z", "updated_at": "2021-06-26T23:45:37Z", "author_association": "CONTRIBUTOR", "body": "> Here's where the plugin hook is called, demonstrating the `fallback=` argument:\r\n> \r\n> https://github.com/simonw/datasette/blob/05a312caf3debb51aa1069939923a49e21cd2bd1/datasette/app.py#L426-L472\r\n> \r\n> I'm not convinced of the use-case for passing `fallback=` to the hook here - is there a reason a plugin might care whether fallback is `True` or `False`, seeing as the `metadata()` method already respects that fallback logic on line 459?\r\n\r\nI think you're right. I can't think of a reason why the plugin would care about the `fallback` parameter since plugins are currently mandated to return a full, global metadata dict.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/520#issuecomment-1539109587", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/520", "id": 1539109587, "node_id": "IC_kwDOCGYnMM5bvPLT", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-05-08T22:00:46Z", "updated_at": "2023-05-08T22:00:46Z", "author_association": "OWNER", "body": "> Hey, isn't this essentially the same issue as #448 ?\r\n\r\nYes it is, good catch!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1516644980, "label": "rows_from_file() raises confusing error if file-like object is not in binary mode"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869071790", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869071790, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3MTc5MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-06-26T23:04:12Z", "updated_at": "2021-06-26T23:04:12Z", "author_association": "OWNER", "body": "> Hmmm... 
that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries.\r\n> \r\n> @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try?\r\n\r\nAnswering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160\r\n\r\nThat's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it.\r\n\r\nThis is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem.\r\n\r\nBut... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2162#issuecomment-1696709110", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2162", "id": 1696709110, "node_id": "IC_kwDOBm6k_c5lIbn2", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-29T03:20:40Z", "updated_at": "2023-08-29T03:22:47Z", "author_association": "OWNER", "body": "> However, one important notes about those new `core_` tables: If a `--internal` DB is passed in, that means those `core_` tables will persist across multiple Datasette instances. This wasn't the case before, since `_internal` was always an in-memory database created from scratch.\r\n\r\nI'm completely happy for the `core_*` tables (or `datasette_*` or some other name) to live in the persisted-to-disk `internal.db` database, even though they're effectively meant to be an in-memory cache.\r\n\r\nI don't think it causes any harm, and it could even be quite useful to have them visible on disk - other applications could read the `internal.db` database while Datasette itself is running, should they have some weird reason to want to do that!\r\n\r\nHaving those tables stick around in `internal.db` after Datasette shuts down could be useful for other debugging activities as well.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1870672704, "label": "Add new `--internal internal.db` option, deprecate legacy `_internal` database"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/361#issuecomment-1006294777", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/361", "id": 1006294777, "node_id": "IC_kwDOCGYnMM47-tb5", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-01-06T05:24:54Z", "updated_at": "2022-01-06T05:24:54Z", "author_association": "OWNER", "body": "> I added a custom error message for if the user's `--convert` code doesn't return a dict.\r\n\r\nThat turned out to be a bad idea because it meant exhausting the iterator early for the check - before we got to the `.insert_all()` code that breaks the iterator up into chunks. 
I tried fixing that with `itertools.tee()` to run the generator twice but that's grossly memory-inefficient for large imports.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1094890366, "label": "--lines and --text and --convert and --import"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/266#issuecomment-389579762", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/266", "id": 389579762, "node_id": "MDEyOklzc3VlQ29tbWVudDM4OTU3OTc2Mg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2018-05-16T16:21:12Z", "updated_at": "2018-05-16T16:21:12Z", "author_association": "OWNER", "body": "> I basically want someone to tell me which arguments I can pass to Python's csv.writer() function that will result in the least complaints from people who try to parse the results :)\r\nhttps://twitter.com/simonw/status/996786815938977792", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323681589, "label": "Export to CSV"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1317797044", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1317797044, "node_id": "IC_kwDOBm6k_c5Oi_y0", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-11-16T23:08:34Z", "updated_at": "2022-11-16T23:08:34Z", "author_association": "OWNER", "body": "> I can push up a commit that uses the static fixtures schema for testing, but given that the query used to generate it is authed we would still need some work to make that work on live data, right?\r\n\r\nYeah, push that up. 
I'm happy to wire in the query right after we land this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/329#issuecomment-968451954", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/329", "id": 968451954, "node_id": "IC_kwDOCGYnMM45uWdy", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-11-15T02:05:29Z", "updated_at": "2021-11-15T02:05:29Z", "author_association": "OWNER", "body": "> I could even have those replacement characters be properties of the `Database` class, so users can sub-class and change them.\r\n\r\nI'm not going to do this, it's unnecessary extra complexity and it means the function that fixes the column names needs to have access to the current `Database` instance.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1005891028, "label": "Rethink approach to [ and ] in column names (currently throws error)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2008#issuecomment-1407568923", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2008", "id": 1407568923, "node_id": "IC_kwDOBm6k_c5T5cwb", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-01-29T05:47:36Z", "updated_at": "2023-01-29T05:47:36Z", "author_association": "OWNER", "body": "> I don't know how/if you do automated tests for performance, so I haven't changed any of the tests.\r\n\r\nWe don't have any performance tests yet - would be a useful thing to add, I've not built anything like that before (at least not in CI, I've always done ad-hoc performance testing using something like Locust) so I don't have a great feel for how it could work.\r\n\r\nI see not having to change the tests at all for this change as a really positive sign. If you find any behaviour differences between this and the previous that's a sign we should add another test or two specifying the behaviour we want.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1560982210, "label": "array facet: don't materialize unnecessary columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/646#issuecomment-561247711", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/646", "id": 561247711, "node_id": "MDEyOklzc3VlQ29tbWVudDU2MTI0NzcxMQ==", "user": {"value": 18017473, "label": "lagolucas"}, "created_at": "2019-12-03T16:31:39Z", "updated_at": "2019-12-03T17:31:33Z", "author_association": "NONE", "body": "> I don't think this is possible at the moment but you're right, it totally should be.\r\n\r\nJust give me a heads-up if you think you can do that quickly. 
I am trying to implement it with very little knowledge of how datasette works, so it will take loads of time.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 531502365, "label": "Make database level information from metadata.json available in the index.html template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/268#issuecomment-876616414", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/268", "id": 876616414, "node_id": "MDEyOklzc3VlQ29tbWVudDg3NjYxNjQxNA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-07-08T17:29:04Z", "updated_at": "2021-07-08T17:29:04Z", "author_association": "OWNER", "body": "> I had setup a full text search on my instance of Datasette for title data for our public library, and was noticing that some of the features of the SQLite FTS weren't working as expected ... and maybe the issue is in the `escape_fts()` function\r\n\r\nThat's a deliberate feature (albeit controversial, see #759) - part of the main problem here is that it's easy to construct a SQLite full-text search string which results in a database error. This is a bad user-experience!\r\n\r\nYou can opt-in to raw SQL queries by appending `?_searchmode=raw` to the page, see https://docs.datasette.io/en/stable/full_text_search.html#advanced-sqlite-search-queries\r\n\r\nBut maybe there should be an option for turning that on by default without needing the query string?\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323718842, "label": "Mechanism for ranking results from SQLite full-text search"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791530093", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 791530093, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MTUzMDA5Mw==", "user": {"value": 306240, "label": "UtahDave"}, "created_at": "2021-03-05T16:28:07Z", "updated_at": "2021-03-05T16:28:07Z", "author_association": "NONE", "body": "> I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout.\r\n> \r\n> Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once?\r\n\r\n@maxhawkins a limitation of the python mbox module is it loads the entire mbox into memory. I did find another approach to this problem that didn't use the builtin python mbox module and created a generator so that it didn't have to load the whole mbox into memory. I was hoping to use standard library modules, but this might be a good reason to investigate that approach a bit more. My worry is making sure a custom processor handles all the ins and outs of the mbox format correctly.\r\n\r\nHm. As I'm writing this, I thought of something. I think I can parse each message one at a time, and then use an mbox function to load each message using the python mbox module. That way the mbox module can still deal with the specifics of the mbox format, but I can use a generator.\r\n\r\nI'll give that a try. Thanks for the feedback @maxhawkins and @simonw. 
I'll give that a try.\r\n\r\n@simonw can we hold off on merging this until I can test this new approach?", "reactions": "{\"total_count\": 3, \"+1\": 3, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/173#issuecomment-956041692", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/173", "id": 956041692, "node_id": "IC_kwDOCGYnMM44_Anc", "user": {"value": 2118708, "label": "Florents-Tselai"}, "created_at": "2021-11-01T08:42:24Z", "updated_at": "2021-11-01T08:42:24Z", "author_association": "NONE", "body": "> I know how to build this for CSV and TSV - I can read them via a file wrapper that counts how many bytes it has seen.\r\n> \r\n> Not sure how to do it for JSON though. Maybe I could provide it just for newline-delimited JSON? Again I can measure progress based on how many bytes have been read.\r\n\r\nI was thinking about this, while inserting a stream of ~40M line-delimited json docs. Wouldn't a `--total-expected` flag work ? \r\n\r\nThat's [how tqdm does it](https://github.com/tqdm/tqdm/blob/fc69d5dcf578f7c7986fa76841a6b793f813df35/tqdm/std.py#L366)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 707478649, "label": "Progress bar for sqlite-utils insert"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111451790", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111451790, "node_id": "IC_kwDOBm6k_c5CP2iO", "user": {"value": 716529, "label": "glyph"}, "created_at": "2022-04-27T20:30:33Z", "updated_at": "2022-04-27T20:30:33Z", "author_association": "NONE", "body": "> I should try seeing what happens with WAL mode enabled.\r\n\r\nI've only skimmed above but it looks like you're doing mainly read-only queries? WAL mode is about better interactions between writers & readers, primarily.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/279#issuecomment-391073009", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/279", "id": 391073009, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA3MzAwOQ==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2018-05-22T17:23:26Z", "updated_at": "2018-05-22T17:23:26Z", "author_association": "CONTRIBUTOR", "body": "> I think I prefer the aesthetics of just \"0.22\" for the version string if it's a tagged release with no additional changes - does that work?\r\n\r\nYes! 
That's the default versioneer behaviour.\r\n\r\n> I'd like to continue to provide a tuple that can be imported from the version.py module as well, as seen here:\r\n\r\nShould work now, it can be a two- (for a tagged version), three- or four-item tuple.\r\n\r\n```\r\nIn [2]: datasette.__version__\r\nOut[2]: '0.12+292.ga70c2a8.dirty'\r\n\r\nIn [3]: datasette.__version_info__\r\nOut[3]: ('0', '12+292', 'ga70c2a8', 'dirty')\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325352370, "label": "Add version number support with Versioneer"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1241#issuecomment-784347646", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1241", "id": 784347646, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDM0NzY0Ng==", "user": {"value": 7107523, "label": "Kabouik"}, "created_at": "2021-02-23T16:55:26Z", "updated_at": "2021-02-23T16:57:39Z", "author_association": "NONE", "body": "> I think it's possible that many users these days no longer assume they can paste a URL from the browser address bar (if they ever understood that at all) because to many apps are SPAs with broken URLs.\r\n\r\nAbsolutely, that's why I thought my corner case with `iframe` preventing access to the datasette URL could actually be relevant in more general situations.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 814595021, "label": "Share button for copying current URL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782756398", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782756398, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc1NjM5OA==", "user": {"value": 601316, "label": "simonrjones"}, "created_at": "2021-02-20T22:05:48Z", "updated_at": "2021-02-20T22:05:48Z", "author_association": "NONE", "body": "> I think it\u2019s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default.\n\nI agree it is more predictable if the top level item is an object with a rows or data object that contains an array of data, which then allows for other top-level meta data. \n\nI can see the argument for removing this and just using an array for convenience - but I think that's OK as an option (as you have now).\n\nRather than have lots of top-level keys you could have a \"meta\" object to contain non-data stuff. You could use something like \"links\" for API endpoint URLs (or use a standard like HAL). Which would then leave the top level a bit cleaner - if that's what you want. 
\n\nHave you had much feedback from users who use the Datasette API a lot?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/200#issuecomment-380608372", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/200", "id": 380608372, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MDYwODM3Mg==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-11T21:55:46Z", "updated_at": "2018-04-11T21:55:46Z", "author_association": "CONTRIBUTOR", "body": "> I think the most reliable way to detect spatialite is to run `SELECT AddGeometryColumn(1, 2, 3, 4, 5);` against a `:memory:` database and see if it throws an exception\r\n\r\nOr just see if there's a `geometry_columns` table? I think that's quite unlikely to be added by accident (and it's an OGC standard). It also tells you if Spatialite is installed in the database rather than just loaded.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313494458, "label": "Hide Spatialite system tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1519#issuecomment-974701788", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1519", "id": 974701788, "node_id": "IC_kwDOBm6k_c46GMTc", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-11-20T19:42:29Z", "updated_at": "2021-11-20T19:42:29Z", "author_association": "OWNER", "body": "> I think what's happening here is Apache is actually making a request to `/fixtures` rather than making a request to `/prefix/fixtures` - and Datasette is replying to requests on both the prefixed and the non-prefixed paths.\r\n> \r\n> This is pretty confusing! I think Datasette should ONLY reply to `/prefix/fixtures` instead and return a 404 for `/fixtures` - this would make things a whole lot easier to debug.\r\n> \r\n> But shipping that change could break existing deployments. Maybe that should be a breaking change for 1.0.\r\n\r\nOn further thought I'm not going to do this. Having Datasette work behind a proxy the way it does right now is clearly easy for people to deploy (now that I've fixed the bugs) and I trust my improved tests to catch problems in the future.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058790545, "label": "base_url is omitted in JSON and CSV views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029335225", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/385", "id": 1029335225, "node_id": "IC_kwDOCGYnMM49Wmi5", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-02-03T19:39:40Z", "updated_at": "2022-02-03T19:39:40Z", "author_association": "OWNER", "body": "> I thought about adding these as methods on `Database` and `Table`, and I'm back and forth on it for the same reasons you are. It's certainly cleaner, and it's clearer what you're operating on. 
I could go either way.\r\n> \r\n> I do sort of like having all the Spatialite stuff in its own module, just because it's built around an extension you might not have or want, but I don't know if that's a good reason to have a different API.\r\n> \r\n> You could have `init_spatialite` add methods to `Database` and `Table`, so they're only there if you have Spatialite set up. Is that too clever? It feels too clever.\r\n\r\nYeah that's too clever. You know what? I'm pretty confident we are both massively over-thinking this. We should put the methods on `Database` and `Table`! API simplicity and consistency matters more than vague concerns about purity.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1102899312, "label": "Add new spatialite helper methods"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1960#issuecomment-1355319541", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1960", "id": 1355319541, "node_id": "IC_kwDOBm6k_c5QyIj1", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-12-16T17:58:24Z", "updated_at": "2022-12-16T17:58:46Z", "author_association": "OWNER", "body": "> I tried adding `invoke_startup()` to the `ds_client()` fixture to see if that would fix this.\r\n\r\nIt did not: I'm still seeing those same failures. Frustrating: https://github.com/simonw/datasette/actions/runs/3715317653/jobs/6300336884\r\n\r\n ====== 11 failed, 1252 passed, 1 skipped, 1 warning in 185.77s (0:03:05) =======", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1499150951, "label": "Port as many tests as possible to async def tests against ds_client"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682182178", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/139", "id": 682182178, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MjE4MjE3OA==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-27T20:46:18Z", "updated_at": "2020-08-27T20:46:18Z", "author_association": "CONTRIBUTOR", "body": "> I tried changing the batch_size argument to the total number of records, but it seems only to effect the number of rows that are committed at a time, and has no influence on this problem.\r\n\r\nSo the reason for this is that the `batch_size` for import is limited (of necessity) here: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L1048\r\n\r\nWith regard to the issue of ignoring columns, however, I made a fork and hacked a temporary fix that looks like this:\r\nhttps://github.com/simonwiles/sqlite-utils/commit/3901f43c6a712a1a3efc340b5b8d8fd0cbe8ee63\r\n\r\nIt doesn't seem to affect performance enormously (but I've not tested it thoroughly), and it now does what I need (and would expect, tbh), but it now fails the test here:\r\nhttps://github.com/simonw/sqlite-utils/blob/main/tests/test_create.py#L710-L716\r\n\r\nThe existence of this test suggests that `insert_all()` is behaving as intended, of course. It seems odd to me that this would be a desirable default behaviour (let alone the only behaviour), and its not very prominently flagged-up, either.\r\n\r\n@simonw is this something you'd be willing to look at a PR for? 
I assume you wouldn't want to change the default behaviour at this point, but perhaps an option could be provided, or at least a bit more of a warning in the docs. Are there oversights in the implementation that I've made?\r\n\r\nWould be grateful for your thoughts! Thanks!\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 686978131, "label": "insert_all(..., alter=True) should work for new columns introduced after the first 100 records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/172#issuecomment-698178101", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/172", "id": 698178101, "node_id": "MDEyOklzc3VlQ29tbWVudDY5ODE3ODEwMQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-09-24T07:48:57Z", "updated_at": "2020-09-24T07:49:20Z", "author_association": "OWNER", "body": "> I wonder if I could make this faster by separating it out into a few steps:\r\n> \r\n> * Create the new lookup table with all of the distinct rows\r\n> \r\n> * Add the blank foreign key column\r\n> \r\n> * run a `UPDATE table SET blah_id = (select id from lookup where thang = table.thang)`\r\n> \r\n> * Drop the value columns\r\nMy prototype of this knocked the time down from 10 minutes to 4 seconds, so I think the change is worth it!\r\n```\r\n% date\r\nsqlite-utils extract salaries.db salaries \\\r\n 'Department Code' 'Department' \\\r\n --table 'departments' \\\r\n --fk-column 'department_id' \\\r\n --rename 'Department Code' code \\\r\n --rename 'Department' name\r\ndate\r\nsqlite-utils extract salaries.db salaries \\\r\n 'Union Code' 'Union' \\\r\n --table 'unions' \\\r\n --fk-column 'union_id' \\\r\n --rename 'Union Code' code \\\r\n --rename 'Union' name\r\ndate\r\nsqlite-utils extract salaries.db salaries \\\r\n 'Job Family Code' 'Job Family' \\\r\n --table 'job_families' \\\r\n --fk-column 'job_family_id' \\\r\n --rename 'Job Family Code' code \\\r\n --rename 'Job Family' name\r\ndate\r\nsqlite-utils extract salaries.db salaries \\\r\n 'Job Code' 'Job' \\\r\n --table 'jobs' \\\r\n --fk-column 'job_id' \\\r\n --rename 'Job Code' code \\\r\n --rename 'Job' name\r\ndate\r\nThu Sep 24 00:48:16 PDT 2020\r\n\r\nThu Sep 24 00:48:20 PDT 2020\r\n\r\nThu Sep 24 00:48:24 PDT 2020\r\n\r\nThu Sep 24 00:48:28 PDT 2020\r\n\r\nThu Sep 24 00:48:32 PDT 2020\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 707427200, "label": "Improve performance of extract operations"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1388#issuecomment-877717262", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1388", "id": 877717262, "node_id": "MDEyOklzc3VlQ29tbWVudDg3NzcxNzI2Mg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-07-10T23:37:54Z", "updated_at": "2021-07-10T23:37:54Z", "author_association": "OWNER", "body": "> I wonder if `--fd` is worth supporting too?\r\n\r\nI'm going to hold off on implementing this until someone asks for it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 939051549, "label": "Serve using UNIX domain socket"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030741289", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/399", "id": 1030741289, "node_id": "IC_kwDOCGYnMM49b90p", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-02-06T03:03:43Z", "updated_at": "2022-02-06T03:03:43Z", "author_association": "CONTRIBUTOR", "body": "> I wonder if there are any interesting non-geospatial canned conversions that it would be worth including?\r\n\r\nOff the top of my head:\r\n\r\n- Un-nesting JSON objects into columns\r\n- Splitting arrays\r\n- Normalizing dates and times\r\n- URL munging with `urlparse`\r\n- Converting strings to numbers\r\n\r\nSome of this is easy enough with SQL functions, some is easier in Python. Maybe that's where having pre-built classes gets really handy, because it saves you from thinking about which way it's implemented.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1124731464, "label": "Make it easier to insert geometries, with documentation and maybe code"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1331#issuecomment-842499728", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1331", "id": 842499728, "node_id": "MDEyOklzc3VlQ29tbWVudDg0MjQ5OTcyOA==", "user": {"value": 475613, "label": "MarkusH"}, "created_at": "2021-05-17T17:24:30Z", "updated_at": "2021-05-17T17:24:30Z", "author_association": "NONE", "body": "> I wonder if there are any new 3.0 features we should be taking advantage of here that would justify pinning to 3.0 minimum?\r\n\r\nThe changelog reads like bug fixes and removal of deprecated parts to me", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 893537744, "label": "Add support for Jinja2 version 3.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/203#issuecomment-381315675", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/203", "id": 381315675, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTMxNTY3NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-14T09:14:45Z", "updated_at": "2018-04-14T09:27:30Z", "author_association": "CONTRIBUTOR", "body": "> I'd like to figure out a sensible opt-in way to expose this in the JSON output as well. Maybe with a &_units=true parameter?\r\n\r\nFrom a machine-readable perspective I'm not sure why it would be useful to decorate the values with units. Edit: Should have had some coffee first. It's clearly useful for stuff like map rendering!\r\n\r\nI agree that the unit metadata should definitely be exposed in the JSON.\r\n\r\n> In #204 you said \"I'd like to add support for using units when querying but this is PR is pretty usable as-is.\" - I'm fascinated to hear more about how this could work.\r\n\r\nI'm thinking about a couple of approaches here. 
I think the simplest one is: if the column has a unit attached, optionally accept units in query fields:\r\n\r\n```python\r\ncolumn_units = ureg(\"Hz\") # Create a unit object for the column's unit\r\nquery_variable = ureg(\"4 GHz\") # Supplied query variable\r\n\r\n# Now we can convert the query units into column units before querying\r\nsupplied_value.to(column_units).magnitude\r\n> 4000000000.0\r\n\r\n# If the user doesn't supply units, pint just returns the plain\r\n# number and we can query as usual assuming it's the base unit\r\nquery_variable = ureg(\"50\")\r\nquery_variable\r\n> 50\r\n\r\nisinstance(query_variable, numbers.Number)\r\n> True\r\n```\r\n\r\nThis also lets us do some nice unit conversion on querying:\r\n\r\n```python\r\ncolumn_units = ureg(\"m\")\r\nquery_variable = ureg(\"50 ft\")\r\n\r\nsupplied_value.to(column_units)\r\n> \r\n```\r\n\r\nThe alternative would be to provide a dropdown of units next to the query field (so a \"Hz\" field would give you \"kHz\", \"MHz\", \"GHz\"). Although this would be clearer to the user, it isn't so easy - we'd need to know more about the context of the field to give you sensible SI prefixes (I'm not so interested in nanoHertz, for example).\r\n\r\nYou also lose the bonus of being able to convert - although pint will happily show you all the compatible units, it again suffers from a lack of context:\r\n\r\n```python\r\nureg(\"m\").compatible_units()\r\n> frozenset({,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n })\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313837303, "label": "Support for units"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/88#issuecomment-344430689", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/88", "id": 344430689, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDQzMDY4OQ==", "user": {"value": 15543, "label": "tomdyson"}, "created_at": "2017-11-14T23:08:22Z", "updated_at": "2017-11-14T23:08:22Z", "author_association": "CONTRIBUTOR", "body": "> I'm getting an internal server error on http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/ at the moment\r\n\r\nSorry about that - here's a working version on Netlify:\r\n\r\nhttps://nhs-england-map.netlify.com", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273775212, "label": "Add NHS England Hospitals example to wiki"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2123#issuecomment-1689207309", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2123", "id": 1689207309, "node_id": "IC_kwDOBm6k_c5kr0IN", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-08-23T03:07:27Z", "updated_at": "2023-08-23T03:07:27Z", "author_association": "OWNER", "body": "> I'm happy to debug and land a patch if it's welcome.\r\n\r\nYes please! 
What an odd bug.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1825007061, "label": "datasette serve when invoked with --reload interprets the serve command as a file"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-391505930", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 391505930, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTUwNTkzMA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-05-23T21:41:37Z", "updated_at": "2018-05-23T21:41:37Z", "author_association": "CONTRIBUTOR", "body": "> I'm not keen on anything that modifies the SQLite file itself on startup\r\n\r\nAh I didn't mean that - I meant altering the SELECT query to fetch the data so that it ran a spatialite function to transform that specific column.\r\n\r\nI think that's less useful as a general-purpose plugin hook though, and it's not that hard to parse the WKB in Python (my default approach would be to use [shapely](https://github.com/Toblerity/Shapely), which is great, but geomet looks like an interesting pure-python alternative).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-888075098", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 888075098, "node_id": "IC_kwDODFE5qs407vNa", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-07-28T07:18:56Z", "updated_at": "2021-07-28T07:18:56Z", "author_association": "NONE", "body": "> I'm not sure why but my most recent import, when displayed in Datasette, looks like this:\r\n> \r\n> \"mbox__mbox_emails__753_446_rows\"\r\n\r\nI did some investigation into this issue and made a fix [here](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8ee555c2889a38ff42b95664ee074b4a01a82f06). The problem was that some messages (like gchat logs) don't have a `Message-Id` and we need to use `X-GM-THRID` as the pkey instead.\r\n\r\n@simonw While looking into this I found something unexpected about how sqlite_utils handles upserts if the pkey column is `None`. When the pkey is NULL I'd expect the function to either use rowid or throw an exception. 
Instead, it seems upsert_all creates a row where all columns are NULL instead of using the values provided as parameters.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1331#issuecomment-846482057", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1331", "id": 846482057, "node_id": "MDEyOklzc3VlQ29tbWVudDg0NjQ4MjA1Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-05-23T00:39:55Z", "updated_at": "2021-05-23T00:39:55Z", "author_association": "OWNER", "body": "> I'm stuck also because datasette wants itsdangerous~=1.1 instead of allowing itsdangerous-2.0.0\r\n\r\nBumped that dependency in b64d87204612a84663616e075f542499a5d82a03", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 893537744, "label": "Add support for Jinja2 version 3.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1805#issuecomment-1265161668", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1805", "id": 1265161668, "node_id": "IC_kwDOBm6k_c5LaNXE", "user": {"value": 562352, "label": "CharlesNepote"}, "created_at": "2022-10-03T09:18:05Z", "updated_at": "2022-10-03T09:18:05Z", "author_association": "NONE", "body": "> I'm tempted to add `word-wrap: anywhere` only to links that are know to be longer than a certain threshold.\r\n\r\nMake sense IMHO.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1363552780, "label": "truncate_cells_html does not work for links?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1317449610", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1317449610, "node_id": "IC_kwDOBm6k_c5Ohq-K", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-11-16T18:14:28Z", "updated_at": "2022-11-16T18:14:28Z", "author_association": "OWNER", "body": "> I'm thinking of also adding `count` to the list since that's a common thing people would want to autocomplete. 
I notice BQ console highlights `count` in the same manner as other keywords like `select` as well.\r\n\r\nHuh, yeah we should definitely have `count` - surprised it's not on the list on https://www.sqlite.org/lang_keywords.html which is why we didn't get it from the GPT-3 generated schema.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2031#issuecomment-1462921890", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2031", "id": 1462921890, "node_id": "IC_kwDOBm6k_c5XMmqi", "user": {"value": 9599, "label": "simonw"}, "created_at": "2023-03-09T22:35:30Z", "updated_at": "2023-03-09T22:35:30Z", "author_association": "OWNER", "body": "> I've implemented the test (thanks for pointing me in the right direction!).\r\n> \r\n> At [tmcl-it/datasette:0.64.1+row-view-expand-labels](https://github.com/tmcl-it/datasette/tree/0.64.1%2Brow-view-expand-labels) I also have a variant of this patch that applies to the 0.64.x branch. Please let me know if you'd be interested in merging that as well and I'll open another PR.\r\n\r\nSure, let's merge that one too - it can go out in the next `0.64.x` series release (maybe even a 0.65).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1605481359, "label": "Expand foreign key references in row view as well"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/864#issuecomment-650842514", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/864", "id": 650842514, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MDg0MjUxNA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-06-29T00:12:59Z", "updated_at": "2020-06-29T00:12:59Z", "author_association": "OWNER", "body": "> I've made enough progress on this to be able to solve the messages issue in #864. 
I may still complete this overall goal (registering internal views with `register_routes()`) as part of Datasette 0.45 but it would be OK if it slipped to a later release.\r\nhttps://github.com/simonw/datasette/issues/870#issuecomment-650842381", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 644309017, "label": "datasette.add_message() doesn't work inside plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/262#issuecomment-691526719", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/262", "id": 691526719, "node_id": "MDEyOklzc3VlQ29tbWVudDY5MTUyNjcxOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-09-12T18:19:50Z", "updated_at": "2020-09-12T18:19:50Z", "author_association": "OWNER", "body": "> Idea: `?_extra=sqllog` could output a lot of every individual SQL statement that was executed in order to generate the page - useful for seeing how foreign key expansion and faceting actually works.\r\n\r\nI built a version of that a while ago as the `?_trace=1` argument.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323658641, "label": "Add ?_extra= mechanism for requesting extra properties in JSON"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066006292", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066006292, "node_id": "IC_kwDOBm6k_c4_ifcU", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-13T02:09:44Z", "updated_at": "2022-03-13T02:09:44Z", "author_association": "CONTRIBUTOR", "body": "> If I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metdata` is called, right? Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config?\r\n\r\nReading from sqlite DBs is pretty quick and I didn't notice significant performance issues when I was benchmarking. I tested on very large Datasette deployments (hundreds of DBs, millions of rows). See [\"Many small queries are efficient in sqlite\"](https://sqlite.org/np1queryprob.html) for more information on the rationale here. Also note that in the [datasette-live-config](https://github.com/next-LI/datasette-live-config) reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had.\r\n\r\nIf you need to ensure fresh metadata is being read inside of a `render_cell` hook specifically, you don't need to do anything further! `get_metadata` gets called before `render_cell` every request, so it already has access to the synced meta. There shouldn't be a need to call `get_metadata(...)` or `metadata(...)` inside `render_cell`, you can just use `datasette._metadata_local` if you're really worried about performance.\r\n\r\n> The plugin is close, but looks like it only grabs remote metadata, is that right? Instead what I'm wanting is to grab metadata embedded in the attached databases.\r\n\r\nYes correct, the datadette-remote-metadata plugin doesn't do that. But the datasette-live-config plugin does. 
[It supports a `__metadata` table](https://github.com/next-LI/datasette-live-config/blob/main/datasette_live_config/__init__.py#L107-L138) that, when it exists on an attached DB, gets pulled into the Datasette internal `_metadata` and is also accessible via `get_metadata`. Updating is instantaneous so there's no gotchas for users or security issues for users relying on the metadata-based permissions. Simon talked about eventually making something like this a standard feature of Datasette, but I'm not sure what the status is on that!\r\n\r\nGood luck!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/92#issuecomment-599127453", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/92", "id": 599127453, "node_id": "MDEyOklzc3VlQ29tbWVudDU5OTEyNzQ1Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-03-14T19:50:08Z", "updated_at": "2020-03-14T19:50:08Z", "author_association": "OWNER", "body": "> If the declared type for a column contains the string \"BLOB\" or if no type is specified then the column has affinity BLOB\r\n\r\nI currently treat those as `str` - it sounds like I should treat them as `bytes`:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/43f1c6ab4e3a6b76531fb6f5447adb83d26f3971/sqlite_utils/db.py#L68-L69\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 581339961, "label": ".columns_dict doesn't work for all possible column types"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1316256386", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1316256386, "node_id": "IC_kwDOBm6k_c5OdHqC", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T03:18:06Z", "updated_at": "2022-11-16T03:18:06Z", "author_association": "CONTRIBUTOR", "body": "> If you can get a version of this working with table and column autocompletion just using a static JavaScript object in the source code with the right tables and columns, I'm happy to take on the work of turning that static object into something that Datasette includes in the page itself with all of the correct values.\r\n\r\nThis version \"sort of\" works when on the main database page where the template passes the relevant data https://github.com/bgrins/datasette/commit/8431c98850c7a552dbcde2a4dd0c3dc942a97d25 by doing this and passing that into the `schema` object:\r\n\r\n```\r\n let TABLES_DATA = [];\r\n {% if tables is defined %} \r\n TABLES_DATA = {{ tables | tojson(indent=2) }};\r\n {% endif %}\r\n\r\n // Turn into an object, shaped like https://github.com/codemirror/lang-sql/blob/ebf115fffdbe07f91465ccbd82868c587f8182bc/test/test-complete.ts#L27.\r\n const TABLES_SCHEMA = Object.fromEntries(\r\n new Map(\r\n TABLES_DATA.map((table) => {\r\n return [table.name, table.columns];\r\n })\r\n ).entries()\r\n );\r\n```\r\n\r\nBut there are a number of papercuts with it - it's not escaping table names with spaces (likely be fixable from the data being passed into the view) but mainly it doesn't seem to autocomplete columns. 
I think it might only want to do it when you first type the table name from my read of https://github.com/codemirror/lang-sql/blob/ebf115fffdbe07f91465ccbd82868c587f8182bc/test/test-complete.ts#L37. It's possible I'm just passing something wrong, but it may end up being something that needs feature work upstream.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1519#issuecomment-974559176", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1519", "id": 974559176, "node_id": "IC_kwDOBm6k_c46FpfI", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-11-20T00:42:08Z", "updated_at": "2021-11-20T00:42:08Z", "author_association": "OWNER", "body": "> In the meantime I can catch these errors by changing the test to run each path twice, once with and once without the prefix. This should accurately simulate how Apache is working here.\r\n\r\nThis worked, I managed to get the tests to fail! Here's the change I made:\r\n\r\n```diff\r\ndiff --git a/tests/test_html.py b/tests/test_html.py\r\nindex f24165b..dbdfe59 100644\r\n--- a/tests/test_html.py\r\n+++ b/tests/test_html.py\r\n@@ -1614,12 +1614,19 @@ def test_metadata_sort_desc(app_client):\r\n \"/fixtures/compound_three_primary_keys/a,a,a\",\r\n \"/fixtures/paginated_view\",\r\n \"/fixtures/facetable\",\r\n+ \"/fixtures?sql=select+1\",\r\n ],\r\n )\r\n-def test_base_url_config(app_client_base_url_prefix, path):\r\n+@pytest.mark.parametrize(\"use_prefix\", (True, False))\r\n+def test_base_url_config(app_client_base_url_prefix, path, use_prefix):\r\n client = app_client_base_url_prefix\r\n- response = client.get(\"/prefix/\" + path.lstrip(\"/\"))\r\n+ path_to_get = path\r\n+ if use_prefix:\r\n+ path_to_get = \"/prefix/\" + path.lstrip(\"/\")\r\n+ response = client.get(path_to_get)\r\n soup = Soup(response.body, \"html.parser\")\r\n+ if path == \"/fixtures?sql=select+1\":\r\n+ assert False\r\n for el in soup.findAll([\"a\", \"link\", \"script\"]):\r\n if \"href\" in el.attrs:\r\n href = el[\"href\"]\r\n@@ -1642,11 +1649,12 @@ def test_base_url_config(app_client_base_url_prefix, path):\r\n # If this has been made absolute it may start http://localhost/\r\n if href.startswith(\"http://localhost/\"):\r\n href = href[len(\"http://localost/\") :]\r\n- assert href.startswith(\"/prefix/\"), {\r\n+ assert href.startswith(\"/prefix/\"), json.dumps({\r\n \"path\": path,\r\n+ \"path_to_get\": path_to_get,\r\n \"href_or_src\": href,\r\n \"element_parent\": str(el.parent),\r\n- }\r\n+ }, indent=4, default=repr)\r\n \r\n \r\n def test_base_url_affects_metadata_extra_css_urls(app_client_base_url_prefix):\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058790545, "label": "base_url is omitted in JSON and CSV views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030807433", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/399", "id": 1030807433, "node_id": "IC_kwDOCGYnMM49cN-J", "user": {"value": 6025893, "label": "chris48s"}, "created_at": "2022-02-06T10:54:09Z", "updated_at": "2022-02-06T10:54:09Z", "author_association": "NONE", "body": "> 
Interesting that some accept an SRID and others do not - presumably GeomFromGeoJSON() always uses SRID=4326?\r\n\r\nThe ewkt/ewkb ones don't accept an SRID because ewkt encodes the SRID in the string, so you would do this with a wkt string:\r\n\r\n`GeomFromText('POINT(529090 179645)', 27700)`\r\n\r\nbut for ewkt it would be\r\n\r\n`GeomFromEWKT('SRID=27700;POINT(529090 179645)')`\r\n\r\nThe specs for KML and GeoJSON specify a Coordinate Reference System for the format\r\n\r\n- https://datatracker.ietf.org/doc/html/rfc7946#section-4\r\n- https://docs.opengeospatial.org/is/12-007r2/12-007r2.html#1274\r\n\r\nGML can specify the SRID in the XML at feature level e.g:\r\n\r\n```\r\n\r\n 529090, 179645\r\n\r\n```\r\n\r\nThere are a few more obscure formats in there, but broadly I think it is safe to assume an SRID param exists on the function for cases where the SRID is not implied by or specified in the input format.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1124731464, "label": "Make it easier to insert geometries, with documentation and maybe code"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/471#issuecomment-1238873948", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/471", "id": 1238873948, "node_id": "IC_kwDOCGYnMM5J17dc", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-09-07T03:46:26Z", "updated_at": "2022-09-07T03:46:26Z", "author_association": "OWNER", "body": "> Is it still nfortunately slow and tricky when playing with floats ?\r\n\r\nNot sure what you mean here?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1352932716, "label": "sqlite-utils query --functions mechanism for registering extra functions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1688#issuecomment-1079550754", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1688", "id": 1079550754, "node_id": "IC_kwDOBm6k_c5AWKMi", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2022-03-26T01:27:27Z", "updated_at": "2022-03-26T03:16:29Z", "author_association": "CONTRIBUTOR", "body": "> Is there a way to serve a static assets when using the plugins/ directory method instead of installing plugins as a new python package?\r\n\r\nAs a workaround, I found I can serve my statics from a non-plugin specific folder using the [--static](https://docs.datasette.io/en/stable/custom_templates.html#serving-static-files) CLI flag.\r\n\r\n```bash\r\ndatasette ~/Library/Safari/History.db \\\r\n --plugins-dir=plugins/ \\\r\n --static assets:dist/\r\n```\r\n\r\nIt's not ideal because it means I'll change the cache pattern path depending on how the plugin is running (via pip install or as a one off script), but it's usable as a workaround.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1181432624, "label": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/179#issuecomment-392606418", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/179", 
"id": 392606418, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MjYwNjQxOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2018-05-28T21:32:37Z", "updated_at": "2018-05-28T21:32:37Z", "author_association": "OWNER", "body": "> It could also be useful to allow users to import a python file containing custom functions that can that be loaded into scope and made available to custom templates.\r\n\r\nThat's now covered by the plugins mechanism - you can create plugins that define custom template functions: http://datasette.readthedocs.io/en/stable/plugins.html#prepare-jinja2-environment-env", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 288438570, "label": "More metadata options for template authors "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059652538", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/412", "id": 1059652538, "node_id": "IC_kwDOCGYnMM4_KQO6", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-03-05T02:13:17Z", "updated_at": "2022-03-05T02:13:17Z", "author_association": "OWNER", "body": "> It looks like the existing `pd.read_sql_query()` method has an optional dependency on SQLAlchemy:\r\n> \r\n> ```\r\n> ...\r\n> import pandas as pd\r\n> pd.read_sql_query(db.conn, \"select * from articles\")\r\n> # ImportError: Using URI string without sqlalchemy installed.\r\n> ```\r\nHah, no I was wrong about this: SQLAlchemy is not needed for SQLite to work, I just had the arguments the wrong way round:\r\n```python\r\npd.read_sql_query(\"select * from articles\", db.conn)\r\n# Shows a DateFrame\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1160182768, "label": "Optional Pandas integration"}, "performed_via_github_app": null}