{"html_url": "https://github.com/simonw/datasette/issues/187#issuecomment-489353316", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/187", "id": 489353316, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTM1MzMxNg==", "user": {"value": 46059, "label": "carsonyl"}, "created_at": "2019-05-04T18:36:36Z", "updated_at": "2019-05-04T18:36:36Z", "author_association": "NONE", "body": "Hi @simonw - I just hit this issue when trying out Datasette after your PyCon talk today. Datasette is pinned to Sanic 0.7.0, but it looks like 0.8.0 added the option to remove the uvloop dependency for Windows by having an environment variable `SANIC_NO_UVLOOP` at install time. Maybe that'll be sufficient before a port to Starlette?", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 1, \"eyes\": 0}", "issue": {"value": 309033998, "label": "Windows installation error"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/633#issuecomment-620841496", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/633", "id": 620841496, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMDg0MTQ5Ng==", "user": {"value": 46165, "label": "nryberg"}, "created_at": "2020-04-28T20:37:50Z", "updated_at": "2020-04-28T20:37:50Z", "author_association": "NONE", "body": "Using the Heroku web interface, you can set the WEB_CONCURRENCY = 1\r\n\r\n![image](https://user-images.githubusercontent.com/46165/80535319-352c8100-8966-11ea-9d4f-df2622ec8bff.png)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 522334771, "label": "Publish to Heroku is broken: \"WARNING: You must pass the application as an import string to enable 'reload' or 'workers\""}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/evernote-to-sqlite/issues/14#issuecomment-911772943", "issue_url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/14", "id": 911772943, "node_id": "IC_kwDOEhK-wc42WI0P", "user": {"value": 46968, "label": "step21"}, "created_at": "2021-09-02T14:53:11Z", "updated_at": "2021-09-02T14:53:11Z", "author_association": "NONE", "body": "Additionally, assuming the line numbers match up with the provided enenx file, the mentioned line plus one before and after is as follows:\r\n```\r\n]]>\r\n\r\n

\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 986829194, "label": "xml.etree.ElementTree.Parse Error - mismatched tag"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/186#issuecomment-374872202", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/186", "id": 374872202, "node_id": "MDEyOklzc3VlQ29tbWVudDM3NDg3MjIwMg==", "user": {"value": 47107, "label": "stefanocudini"}, "created_at": "2018-03-21T09:07:22Z", "updated_at": "2018-03-21T09:07:22Z", "author_association": "NONE", "body": "--debug is perfect tnk", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 306811513, "label": "proposal new option to disable user agents cache"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/141#issuecomment-346974336", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/141", "id": 346974336, "node_id": "MDEyOklzc3VlQ29tbWVudDM0Njk3NDMzNg==", "user": {"value": 50138, "label": "janimo"}, "created_at": "2017-11-26T00:00:35Z", "updated_at": "2017-11-26T00:00:35Z", "author_association": "NONE", "body": "FWIW I worked around this by setting TMPDIR to ~/tmp before running the command.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275814941, "label": "datasette publish can fail if /tmp is on a different device"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/124#issuecomment-346987395", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/124", "id": 346987395, "node_id": "MDEyOklzc3VlQ29tbWVudDM0Njk4NzM5NQ==", "user": {"value": 50138, "label": "janimo"}, "created_at": "2017-11-26T06:24:08Z", "updated_at": "2017-11-26T06:24:08Z", "author_association": "NONE", "body": "Are there performance gains when using immutable as opposed to read-only? From what I see other processes can still modify the DB when immutable, but there are no change notifications.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125805, "label": "Option to open readonly but not immutable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/124#issuecomment-347123991", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/124", "id": 347123991, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NzEyMzk5MQ==", "user": {"value": 50138, "label": "janimo"}, "created_at": "2017-11-27T09:25:15Z", "updated_at": "2017-11-27T09:25:15Z", "author_association": "NONE", "body": "That's the only reference to immutable I saw as well, making me think that there may be no perceivable advantages over simply using mode=ro. 
Since the database is never or seldom updated the change notifications should not impact performance.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125805, "label": "Option to open readonly but not immutable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-974711959", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 974711959, "node_id": "IC_kwDOBm6k_c46GOyX", "user": {"value": 52649, "label": "tannewt"}, "created_at": "2021-11-20T21:11:51Z", "updated_at": "2021-11-20T21:11:51Z", "author_association": "NONE", "body": "I think another thing would be to make `/pages/robots.txt` work. That way you can use jinja to generate a desired robots.txt. I'm using it to allow the main index and what it links to to be crawled (but not the database pages directly.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1732#issuecomment-1115542067", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1732", "id": 1115542067, "node_id": "IC_kwDOBm6k_c5CfdIz", "user": {"value": 52649, "label": "tannewt"}, "created_at": "2022-05-03T01:50:44Z", "updated_at": "2022-05-03T01:50:44Z", "author_association": "NONE", "body": "I haven\u2019t set one up unfortunately. My time is very limited because we just had a baby. \n\nOn Mon, May 2, 2022, at 6:42 PM, Simon Willison wrote:\n> \n> \n> Thanks, this definitely sounds like a bug. Do you have simple steps to reproduce this?\n> \n> \n> \u2014\n> Reply to this email directly, view it on GitHub , or unsubscribe .\n> You are receiving this because you authored the thread.Message ID: ***@***.***>\n> \n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1221849746, "label": "Custom page variables aren't decoded"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2133#issuecomment-1671649530", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2133", "id": 1671649530, "node_id": "IC_kwDOBm6k_c5jo1j6", "user": {"value": 54462, "label": "HaveF"}, "created_at": "2023-08-09T15:41:14Z", "updated_at": "2023-08-09T15:41:14Z", "author_association": "NONE", "body": "Yes, using this approach(`datasette install -r requirements.txt`) will result in more consistency. \r\n\r\nI'm curious about the results of the `datasette plugins --all` command. Where will we use the output of this command? Will it include configuration information for these plugins in the future? 
If so, will we need to consider the configuration of these plugins in addition to installing them on different computers?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1841501975, "label": "[feature request]`datasette install plugins.json` options"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2133#issuecomment-1672360472", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2133", "id": 1672360472, "node_id": "IC_kwDOBm6k_c5jrjIY", "user": {"value": 54462, "label": "HaveF"}, "created_at": "2023-08-10T00:31:24Z", "updated_at": "2023-08-10T00:31:24Z", "author_association": "NONE", "body": "It looks very nice now.\r\nFinally, no more manual installation of plugins one by one. Thank you, Simon! \u2764\ufe0f \r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1841501975, "label": "[feature request]`datasette install plugins.json` options"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1171#issuecomment-754911290", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1171", "id": 754911290, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDkxMTI5MA==", "user": {"value": 59874, "label": "rcoup"}, "created_at": "2021-01-05T21:31:15Z", "updated_at": "2021-01-05T21:31:15Z", "author_association": "NONE", "body": "We did this for [Sno](https://sno.earth) under macOS \u2014 it's a PyInstaller binary/setup which uses [Packages](http://s.sudre.free.fr/Software/Packages/about.html) for packaging.\r\n\r\n* [Building & Signing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L67-L95)\r\n* [Packaging & Notarizing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L121-L215)\r\n* [Github Workflow](https://github.com/koordinates/sno/blob/master/.github/workflows/build.yml#L228-L269) has the CI side of it\r\n\r\nFYI (if you ever get to it) for Windows you need to get a code signing certificate. And if you want automated CI, you'll want to get an \"EV CodeSigning for HSM\" certificate from GlobalSign, which then lets you put the certificate into Azure Key Vault. Which you can use with [azuresigntool](https://github.com/vcsjones/AzureSignTool) to sign your code & installer. 
(Non-EV certificates are a waste of time, the user still gets big warnings at install time).\r\n", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 778450486, "label": "GitHub Actions workflow to build and sign macOS binary executables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/93#issuecomment-344424382", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/93", "id": 344424382, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDQyNDM4Mg==", "user": {"value": 67420, "label": "atomotic"}, "created_at": "2017-11-14T22:42:16Z", "updated_at": "2017-11-14T22:42:16Z", "author_association": "NONE", "body": "tried quickly, this seems working:\r\n\r\n```\r\n~ pip3 install pyinstaller\r\n~ pyinstaller -F --add-data /usr/local/lib/python3.6/site-packages/datasette/templates:datasette/templates --add-data /usr/local/lib/python3.6/site-packages/datasette/static:datasette/static /usr/local/bin/datasette\r\n\r\n~ du -h dist/datasette\r\n6.8M\tdist/datasette\r\n~ file dist/datasette\r\ndist/datasette: Mach-O 64-bit executable x86_64\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273944952, "label": "Package as standalone binary"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/93#issuecomment-344430299", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/93", "id": 344430299, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDQzMDI5OQ==", "user": {"value": 67420, "label": "atomotic"}, "created_at": "2017-11-14T23:06:33Z", "updated_at": "2017-11-14T23:06:33Z", "author_association": "NONE", "body": "i will look better tomorrow, it's late i surely made some mistake\r\nhttps://asciinema.org/a/ZyAWbetrlriDadwWyVPUWB94H", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273944952, "label": "Package as standalone binary"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/93#issuecomment-344516406", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/93", "id": 344516406, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDUxNjQwNg==", "user": {"value": 67420, "label": "atomotic"}, "created_at": "2017-11-15T08:09:41Z", "updated_at": "2017-11-15T08:09:41Z", "author_association": "NONE", "body": "actually you can use travis to build for linux/macos and [appveyor](https://www.appveyor.com/) to build for windows.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273944952, "label": "Package as standalone binary"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/36#issuecomment-1006708046", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36", "id": 1006708046, "node_id": "IC_kwDOD079W848ASVO", "user": {"value": 71983, "label": "scoates"}, "created_at": "2022-01-06T16:04:46Z", "updated_at": "2022-01-06T16:04:46Z", "author_association": "NONE", "body": "This one got me, today, too. 
\ud83d\udc4d", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 988493790, "label": "Correct naming of tool in readme"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645515103", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47", "id": 645515103, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NTUxNTEwMw==", "user": {"value": 73579, "label": "hpk42"}, "created_at": "2020-06-17T17:30:01Z", "updated_at": "2020-06-17T17:30:01Z", "author_association": "NONE", "body": "It's the one with python3.7::\n\n >>> sqlite3.sqlite_version\n '3.11.0'\n\n \nOn Wed, Jun 17, 2020 at 10:24 -0700, Simon Willison wrote:\n\n> That means your version of SQLite is old enough that it doesn't support the FTS5 extension.\n> \n> Could you share what operating system you're running, and what the output is that you get from running this?\n> \n> python -c 'import sqlite3; print(sqlite3.connect(\":memory:\").execute(\"select sqlite_version()\").fetchone()[0])'\n> \n> I can teach this tool to fall back on FTS4 if FTS5 isn't available.\n> \n> -- \n> You are receiving this because you authored the thread.\n> Reply to this email directly or view it on GitHub:\n> https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645512127\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 639542974, "label": "Fall back to FTS4 if FTS5 is not available"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2145#issuecomment-1685471752", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2145", "id": 1685471752, "node_id": "IC_kwDOBm6k_c5kdkII", "user": {"value": 77071, "label": "pkulchenko"}, "created_at": "2023-08-21T01:07:23Z", "updated_at": "2023-08-21T01:07:23Z", "author_association": "NONE", "body": "@simonw, since you're referencing \"rowid\" column by name, I just want to note that there may be an existing rowid column with completely different semantics (https://www.sqlite.org/lang_createtable.html#rowid), which is likely to break this logic. I don't see a good way to detect a proper \"rowid\" name short of checking if there is a field with that name and using the alternative (`_rowid_` or `oid`), which is not ideal, but may work.\r\n\r\nIn terms of the original issue, maybe a way to deal with it is to use rowid by default and then use primary key for WITHOUT ROWID tables (as they are guaranteed to be not null), but I suspect it may require significant changes to the API (and doesn't fully address the issue of what value to pass to indicate NULL when editing records). 
Would it make sense to generate a random string to indicate NULL values when editing?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1857234285, "label": "If a row has a primary key of `null` various things break"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2143#issuecomment-1690799608", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2143", "id": 1690799608, "node_id": "IC_kwDOBm6k_c5kx434", "user": {"value": 77071, "label": "pkulchenko"}, "created_at": "2023-08-24T00:09:47Z", "updated_at": "2023-08-24T00:10:41Z", "author_association": "NONE", "body": "@simonw, FWIW, I do exactly the same thing for one of my projects (both to allow multiple configuration files to be passed on the command line and setting individual values) and it works quite well for me and my users. I even use the same parameter name for both (https://studio.zerobrane.com/doc-configuration#configuration-via-command-line), but I understand why you may want to use different ones for files and individual values. There is one small difference that I accept code snippets, but I don't think it matters much in this case.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1855885427, "label": "De-tangling Metadata before Datasette 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/267#issuecomment-414860009", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/267", "id": 414860009, "node_id": "MDEyOklzc3VlQ29tbWVudDQxNDg2MDAwOQ==", "user": {"value": 78156, "label": "annapowellsmith"}, "created_at": "2018-08-21T23:57:51Z", "updated_at": "2018-08-21T23:57:51Z", "author_association": "NONE", "body": "Looks to me like hashing, redirects and caching were documented as part of https://github.com/simonw/datasette/commit/788a542d3c739da5207db7d1fb91789603cdd336#diff-3021b0e065dce289c34c3b49b3952a07 - so perhaps this can be closed? 
:tada:", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323716411, "label": "Documentation for URL hashing, redirects and cache policy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2155#issuecomment-1737906995", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2155", "id": 1737906995, "node_id": "IC_kwDOBm6k_c5nllsz", "user": {"value": 79087, "label": "cadeef"}, "created_at": "2023-09-27T18:44:02Z", "updated_at": "2023-09-27T18:44:02Z", "author_association": "NONE", "body": "@simonw Any chance we can get this tiny patch merged for an upcoming release?", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1865572575, "label": "Fix hupper.start_reloader entry point"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-643083451", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 643083451, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MzA4MzQ1MQ==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-06-12T06:04:14Z", "updated_at": "2020-06-12T06:04:14Z", "author_association": "NONE", "body": "Hmm, I haven't tried removing `ProxyPassReverse`, but it doesn't touch the HTML, which is the issue I'm seeing. You can read the [documentation here](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassreverse). `ProxyPassReverse` is a standard directive when proxying with Apache. I've used it dozens of times with other applications.\r\n\r\nLooking a little more at the code, I think the issue here is that the behaviour of `base_url` makes sense when Datasette is _mounted_ at a path within a larger application, but not when HTTP requests are being _proxied_ to it.\r\n\r\nIn a _mount_ situation, it is perfectly fine to construct URLs reusing the domain and path from the request. In a _proxy_ situation, it never is, as the domain and path in the request are not the domain and path that the non-proxy client actually needs to use. That is, links which include the Apache \u2192 Datasette request origin, `localhost:8001`, instead of the browser \u2192 Apache request origin, `example.com`, will be broken.\r\n\r\nThe tests you pointed to also reflect this in two ways:\r\n\r\n1. They strip a leading `http://localhost`, allowing such URLs in the facet links to pass, but inclusion of that in a proxy situation would mean the URL is broken.\r\n\r\n2. The test client emits direct ASGI events instead of actual proxied HTTP requests. The headers of these ASGI events don't reflect the way an HTTP proxy works; instead they pass through the original request path which contains `base_url`. 
This works because Datasette responds to requests equivalently at either `/\u2026` or `/{base_url}/\u2026`, which makes some sense in a _mount_ situation but is unconventional (albeit workable) for a proxied app.\r\n\r\nApps that support being proxied automatically support being mounted, but apps that only support being mounted don't automatically support being proxied.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1238#issuecomment-790857004", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1238", "id": 790857004, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MDg1NzAwNA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2021-03-04T19:06:55Z", "updated_at": "2021-03-04T19:06:55Z", "author_association": "NONE", "body": "@rgieseke Ah, that's super helpful. Thank you for the workaround for now!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813899472, "label": "Custom pages don't work with base_url setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-795893813", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 795893813, "node_id": "MDEyOklzc3VlQ29tbWVudDc5NTg5MzgxMw==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2021-03-10T18:43:39Z", "updated_at": "2021-03-10T18:43:39Z", "author_association": "NONE", "body": "@simonw Unfortunately this issue as I reported it is not actually solved in version 0.55.\r\n\r\nEvery link which is returned by the `Datasette.absolute_url` method is still wrong, because it uses the request URL as the base. This still includes the suggested facet links and pagination links.\r\n\r\nWhat I wrote originally still stands:\r\n\r\n> Although many of the URLs in the pages are correct (presumably because they either use absolute paths which include `base_url` or relative paths), the faceting and pagination links still use fully-qualified URLs pointing at `http://localhost:8001`.\r\n> \r\n> I looked into this a little in the source code, and it seems to be an issue anywhere `request.url` or `request.path` is used, as these contain the values for the request between the frontend (Apache) and backend (Datasette) server. Those properties are primarily used via the `path_with_\u2026` family of utility functions and the `Datasette.absolute_url` method.\r\n\r\n Would you prefer to re-open this issue or have me create a new one?\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-795939998", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 795939998, "node_id": "MDEyOklzc3VlQ29tbWVudDc5NTkzOTk5OA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2021-03-10T19:16:55Z", "updated_at": "2021-03-10T19:16:55Z", "author_association": "NONE", "body": "Nod. 
The problem with the tests is that they're ignoring the origin (hostname, port) of links. In a reverse proxy situation, the frontend request origin is different than the backend request origin. The problem is Datasette generates links with the backend request origin.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-795950636", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 795950636, "node_id": "MDEyOklzc3VlQ29tbWVudDc5NTk1MDYzNg==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2021-03-10T19:24:13Z", "updated_at": "2021-03-10T19:24:13Z", "author_association": "NONE", "body": "I think this could be solved by one of:\r\n\r\n1. Stop generating absolute URLs, e.g. ones that include an origin. Relative URLs with absolute paths are fine, as long as they take `base_url` into account (as they do now, yay!).\r\n2. Extend `base_url` to include the expected frontend origin, and then use that information when generating absolute URLs.\r\n3. Document which HTTP headers the reverse proxy should set (e.g. the `X-Forwarded-*` family of conventional headers) to pass the frontend origin information to Datasette, and then use that information when generating absolute URLs.\r\n\r\nOption 1 seems like the easiest to me, if you can get away with never having to generate an absolute URL.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/8#issuecomment-464341721", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8", "id": 464341721, "node_id": "MDEyOklzc3VlQ29tbWVudDQ2NDM0MTcyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-02-16T12:08:41Z", "updated_at": "2019-02-16T12:08:41Z", "author_association": "NONE", "body": "We also get an error if a column name contains a `.`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 403922644, "label": "Problems handling column names containing spaces or - "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/18#issuecomment-480621924", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/18", "id": 480621924, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MDYyMTkyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-07T19:31:42Z", "updated_at": "2019-04-07T19:31:42Z", "author_association": "NONE", "body": "I've just noticed that SQLite lets you IGNORE inserts that collide with a pre-existing key. This can be quite handy if you have a dataset that keeps changing in part, and you don't want to upsert and replace pre-existing PK rows but you do want to ignore collisions to existing PK rows.\r\n\r\nDo `sqlite_utils` support such (cavalier!) 
behaviour?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 413871266, "label": ".insert/.upsert/.insert_all/.upsert_all should add missing columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/8#issuecomment-482994231", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8", "id": 482994231, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4Mjk5NDIzMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-14T15:04:07Z", "updated_at": "2019-04-14T15:29:33Z", "author_association": "NONE", "body": "\r\n\r\nPLEASE IGNORE THE BELOW... I did a package update and rebuilt the kernel I was working in... may just have been an old version of sqlite_utils, seems to be working now. (Too many containers / too many environments!)\r\n\r\n\r\nHas an issue been reintroduced here with FTS? eg I'm getting an error thrown by spaces in column names here:\r\n\r\n```\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)\r\n\r\ndef enable_fts(self, columns, fts_version=\"FTS5\"):\r\n--> 329 \"Enables FTS on the specified columns\"\r\n 330 sql = \"\"\"\r\n 331 CREATE VIRTUAL TABLE \"{table}_fts\" USING {fts_version} (\r\n```\r\n\r\nwhen trying an `insert_all`.\r\n\r\nAlso, if a col has a `.` in it, I seem to get:\r\n\r\n```\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)\r\n 327 jsonify_if_needed(record.get(key, None)) for key in all_columns\r\n 328 )\r\n--> 329 result = self.db.conn.execute(sql, values)\r\n 330 self.db.conn.commit()\r\n 331 self.last_id = result.lastrowid\r\n\r\nOperationalError: near \".\": syntax error\r\n```\r\n\r\n(Can't post a worked minimal example right now; racing trying to build something against a live timing screen that will stop until next weekend in an hour or two...)\r\n\r\nPS Hmmm I did a test and they seem to work; I must be messing up s/where else...\r\n\r\n```\r\nimport sqlite3\r\nfrom sqlite_utils import Database\r\n\r\ndbname='testingDB_sqlite_utils.db'\r\n\r\n#!rm $dbname\r\nconn = sqlite3.connect(dbname, timeout=10)\r\n\r\n\r\n#Setup database tables\r\nc = conn.cursor()\r\n\r\nsetup='''\r\nCREATE TABLE IF NOT EXISTS \"test1\" (\r\n \"NO\" INTEGER,\r\n \"NAME\" TEXT\r\n);\r\n\r\nCREATE TABLE IF NOT EXISTS \"test2\" (\r\n \"NO\" INTEGER,\r\n `TIME OF DAY` TEXT\r\n);\r\n\r\nCREATE TABLE IF NOT EXISTS \"test3\" (\r\n \"NO\" INTEGER,\r\n `AVG. SPEED (MPH)` FLOAT\r\n);\r\n'''\r\n\r\nc.executescript(setup)\r\n\r\n\r\nDB = Database(conn)\r\n\r\nimport pandas as pd\r\n\r\ndf1 = pd.DataFrame({'NO':[1,2],'NAME':['a','b']})\r\nDB['test1'].insert_all(df1.to_dict(orient='records'))\r\n\r\ndf2 = pd.DataFrame({'NO':[1,2],'TIME OF DAY':['early on','late']})\r\nDB['test2'].insert_all(df2.to_dict(orient='records'))\r\n\r\ndf3 = pd.DataFrame({'NO':[1,2],'AVG. SPEED (MPH)':['123.3','123.4']})\r\nDB['test3'].insert_all(df3.to_dict(orient='records'))\r\n```\r\n\r\nall seem to work ok. 
I'm still getting errors in my set up though, which is not too different to the text cases?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 403922644, "label": "Problems handling column names containing spaces or - "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-571138093", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 571138093, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MTEzODA5Mw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-06T13:28:31Z", "updated_at": "2020-01-06T13:28:31Z", "author_association": "NONE", "body": "I think I actually had several issues in play...\r\n\r\nThe missing key was one, but I think there is also an issue as per below.\r\n\r\nFor example, in the following:\r\n\r\n```python\r\ndef init_testdb(dbname='test.db'):\r\n \r\n if os.path.exists(dbname):\r\n os.remove(dbname)\r\n\r\n conn = sqlite3.connect(dbname)\r\n db = Database(conn)\r\n \r\n return conn, db\r\n\r\nconn, db = init_testdb()\r\n\r\nc = conn.cursor()\r\nc.executescript('CREATE TABLE \"test1\" (\"Col1\" TEXT, \"Col2\" TEXT, PRIMARY KEY (\"Col1\"));')\r\nc.executescript('CREATE TABLE \"test2\" (\"Col1\" TEXT, \"Col2\" TEXT, PRIMARY KEY (\"Col1\"));')\r\n\r\nprint('Test 1...')\r\nfor i in range(3):\r\n db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n\r\nprint('Test 2...')\r\nfor i in range(3):\r\n db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},\r\n {'Col1':'c','Col2':'x'}], pk=('Col1'))\r\nprint('Done...')\r\n\r\n---------------------------------------------------------------------------\r\nTest 1...\r\nTest 2...\r\n IndexError: list index out of range \r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n in \r\n 22 print('Test 2...')\r\n 23 for i in range(3):\r\n---> 24 db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n 25 db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},\r\n 26 {'Col1':'c','Col2':'x'}], pk=('Col1'))\r\n\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)\r\n 1157 alter=alter,\r\n 1158 extracts=extracts,\r\n-> 1159 upsert=True,\r\n 1160 )\r\n 1161 \r\n\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)\r\n 1097 # self.last_rowid will be 0 if a \"INSERT OR IGNORE\" happened\r\n 1098 if (hash_id or pk) and self.last_rowid:\r\n-> 1099 row = list(self.rows_where(\"rowid = ?\", [self.last_rowid]))[0]\r\n 1100 if hash_id:\r\n 1101 self.last_pk = row[hash_id]\r\n\r\nIndexError: list index out of range\r\n```\r\n\r\nthe first test works but the second fails. 
Is the length of the list of items being upserted leaking somewhere?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-573047321", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 573047321, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MzA0NzMyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-10T14:02:56Z", "updated_at": "2020-01-10T14:09:23Z", "author_association": "NONE", "body": "Hmmm... just tried with installs from pip and the repo (v2.0.0 and v2.0.1) and I get the error each time (start of second run through the second loop).\r\n\r\nCould it be sqlite3? I'm on 3.30.1.\r\n\r\nUPDATE: just tried it on jupyter.org/try and I get the error there, too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-580745213", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 580745213, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MDc0NTIxMw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-31T14:02:38Z", "updated_at": "2020-01-31T14:21:09Z", "author_association": "NONE", "body": "So the conundrum continues.. The simple test case above now runs, but if I upsert a large number of new records (successfully) and then try to upsert a fewer number of new records to a different table, I get the same error.\r\n\r\nIf I run the same upserts again (which in the first case means there are no new records to add, because they were already added), the second upsert works correctly.\r\n\r\nIt feels as if the number of items added via an upsert >> the number of items I try to add in an upsert immediately after, I get the error.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/203#issuecomment-1033641009", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/203", "id": 1033641009, "node_id": "IC_kwDOCGYnMM49nBwx", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-02-09T11:06:18Z", "updated_at": "2022-02-09T11:06:18Z", "author_association": "NONE", "body": "Is there any progress elsewhere on the handling of compound / composite foreign keys, or is this PR still effectively open?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 743384829, "label": "changes to allow for compound foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041313679", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406", "id": 1041313679, "node_id": "IC_kwDOCGYnMM4-ES-P", "user": {"value": 82988, "label": 
"psychemedia"}, "created_at": "2022-02-16T09:59:51Z", "updated_at": "2022-02-16T10:00:10Z", "author_association": "NONE", "body": "The `CustomColumnType()` approach looks good. This pushes you into the mindspace that you are defining and working with a custom column type.\r\n\r\nWhen creating the table, you could then error, or at least warn, if someone wasn't setting a column on a `type` or a custom column type, which I guess is where `mypy` comes in?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1128466114, "label": "Creating tables with custom datatypes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1041325398", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/402", "id": 1041325398, "node_id": "IC_kwDOCGYnMM4-EV1W", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-02-16T10:12:48Z", "updated_at": "2022-02-16T10:18:55Z", "author_association": "NONE", "body": "> My hunch is that the case where you want to consider input from more than one column will actually be pretty rare - the only case I can think of where I would want to do that is for latitude/longitude columns\r\n\r\nOther possible pairs: unconventional date/datetime and timezone pairs eg `2022-02-16::17.00, London`; or more generally, numerical value and unit of measurement pairs (eg if you want to cast into and out of different measurement units using packages like `pint`) or currencies etc. Actually, in that case, I guess you may be presenting things that are unit typed already, and so a conversion would need to parse things into an appropriate, possibly two column `value, unit` format.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1125297737, "label": "Advanced class-based `conversions=` mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041363433", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406", "id": 1041363433, "node_id": "IC_kwDOCGYnMM4-EfHp", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-02-16T10:57:03Z", "updated_at": "2022-02-16T10:57:19Z", "author_association": "NONE", "body": "Wondering if this actually relates to https://github.com/simonw/sqlite-utils/issues/402 ?\r\n\r\nI also wonder if this would be a sensible approach for eg registering `pint` based quantity conversions into and out of the db, perhaps storing the quantity as a serialised `magnitude measurement` single column string?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1128466114, "label": "Creating tables with custom datatypes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1248440137", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406", "id": 1248440137, "node_id": "IC_kwDOCGYnMM5Kaa9J", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-09-15T18:13:50Z", "updated_at": "2022-09-15T18:13:50Z", "author_association": "NONE", "body": "I was wondering if you have any more thoughts on this? 
I have a tangible use case now: adding a \"vector\" column to a database to support semantic search using doc2vec embeddings ([example](https://psychemedia.github.io/storynotes/Lang_Doc2Vec.html); note that the `vtfunc` package may no longer be reliable...).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1128466114, "label": "Creating tables with custom datatypes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1166#issuecomment-783560017", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1166", "id": 783560017, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzU2MDAxNw==", "user": {"value": 94334, "label": "thorn0"}, "created_at": "2021-02-22T18:00:57Z", "updated_at": "2021-02-22T18:13:11Z", "author_association": "NONE", "body": "Hi! I don't think Prettier supports this syntax for globs: `datasette/static/*[!.min].js` Are you sure that works?\r\nPrettier uses https://github.com/mrmlnc/fast-glob, which in turn uses https://github.com/micromatch/micromatch, and the docs for these packages don't mention this syntax. As per the docs, square brackets should work as in regexes (`foo-[1-5].js`).\r\n\r\nTested it. Apparently, it works as a negated character class in regexes (like `[^.min]`). I wonder where this syntax comes from. Micromatch doesn't support that:\r\n\r\n```js\r\nmicromatch(['static/table.js', 'static/n.js'], ['static/*[!.min].js']);\r\n// result: [\"static/n.js\"] -- brackets are treated like [!.min] in regexes, without negation\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777140799, "label": "Adopt Prettier for JavaScript code formatting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1143#issuecomment-744618787", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1143", "id": 744618787, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NDYxODc4Nw==", "user": {"value": 114388, "label": "yurivish"}, "created_at": "2020-12-14T18:15:00Z", "updated_at": "2020-12-15T02:21:53Z", "author_association": "NONE", "body": "From a quick look at the README, it does seem to do everything I need, thanks!\r\n\r\nI think the argument for inclusion in core is to lower the chances of unwanted data access. A local server can be accessed by anybody who can make an HTTP request to your computer regardless of CORS rules, but the default `*` rule additionally opens up access to the local instance to any website you visit while it is running. \r\n\r\nThat's probably not what people typically intend, particularly when the data is of a sensitive nature. 
A default of requiring the user to specify the origin (allowing `*` but encouraging a narrower scope) would solve this problem entirely, I think.\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 764059235, "label": "More flexible CORS support in core, to encourage good security practices"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1989#issuecomment-1402347667", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1989", "id": 1402347667, "node_id": "IC_kwDOBm6k_c5TliCT", "user": {"value": 116795, "label": "pax"}, "created_at": "2023-01-24T17:48:59Z", "updated_at": "2023-01-24T17:48:59Z", "author_association": "NONE", "body": "The problem (in my particular use case) with using a VIEW is that I'd need one of the columns to be searchable \u2013 but that ([enable-fts](https://github.com/simonw/datasette-search-all)) doesn't work with views :/\r\n\r\n__\r\nside-suggestion: I don't know how feasible this might be, but when one column (or table) would be marked as hidden, could the _Download SQLite DB_ link take that into account? \ud83e\uddd0", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1531991339, "label": "Suggestion: Hiding columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/191#issuecomment-381602005", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/191", "id": 381602005, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTYwMjAwNQ==", "user": {"value": 119974, "label": "coleifer"}, "created_at": "2018-04-16T13:37:32Z", "updated_at": "2018-04-16T13:37:32Z", "author_association": "NONE", "body": "I don't think it should be too difficult... you can look at what @ghaering did with pysqlite (and similarly what I copied for pysqlite3). You would theoretically take an amalgamation build of Sqlite (all code in a single .c and .h file). The `AmalgamationLibSqliteBuilder` class detects the presence of this amalgamated source file and builds a statically-linked pysqlite.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 310533258, "label": "Figure out how to bundle a more up-to-date SQLite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/191#issuecomment-392828475", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/191", "id": 392828475, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MjgyODQ3NQ==", "user": {"value": 119974, "label": "coleifer"}, "created_at": "2018-05-29T15:50:18Z", "updated_at": "2018-05-29T15:50:18Z", "author_association": "NONE", "body": "Python standard-library SQLite dynamically links against the system sqlite3. So presumably you installed a more up-to-date sqlite3 somewhere on your `LD_LIBRARY_PATH`.\r\n\r\nTo compile a statically-linked pysqlite you need to include an amalgamation in the project root when building the extension. 
Read the relevant setup.py.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 310533258, "label": "Figure out how to bundle a more up-to-date SQLite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/516#issuecomment-1339837520", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/516", "id": 1339837520, "node_id": "IC_kwDOCGYnMM5P3ExQ", "user": {"value": 122043, "label": "briandorsey"}, "created_at": "2022-12-06T19:02:30Z", "updated_at": "2022-12-06T19:02:30Z", "author_association": "NONE", "body": "`--verbose` or `--verbosity=ABC` were the flags I looked for. Expected to see them at a global level near `--version`. But only sharing because that's where I looked first, I don't have a strong opinion on the exact wording/location. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1479914599, "label": "Feature request: output number of ignored/replaced rows for insert command"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/516#issuecomment-1339839767", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/516", "id": 1339839767, "node_id": "IC_kwDOCGYnMM5P3FUX", "user": {"value": 122043, "label": "briandorsey"}, "created_at": "2022-12-06T19:04:17Z", "updated_at": "2022-12-06T19:04:17Z", "author_association": "NONE", "body": "Current behavior is different when importing via stdin vs. a file. Imports from a file give a progress bar. For this new request, I'd love to see total imported and total ignored/replaced in both cases. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1479914599, "label": "Feature request: output number of ignored/replaced rows for insert command"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/516#issuecomment-1339844639", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/516", "id": 1339844639, "node_id": "IC_kwDOCGYnMM5P3Ggf", "user": {"value": 122043, "label": "briandorsey"}, "created_at": "2022-12-06T19:08:13Z", "updated_at": "2022-12-06T19:08:13Z", "author_association": "NONE", "body": "Reference: tqdm (https://tqdm.github.io/) shows a progress bar when total is known, and falls back to counting units of work done for streams. File input vs. stdin seems like a similar situation. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1479914599, "label": "Feature request: output number of ignored/replaced rows for insert command"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1886#issuecomment-1313271719", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1886", "id": 1313271719, "node_id": "IC_kwDOBm6k_c5ORu-n", "user": {"value": 124274, "label": "lucapette"}, "created_at": "2022-11-14T08:25:12Z", "updated_at": "2022-11-14T08:25:12Z", "author_association": "NONE", "body": "Nothing spectacular yet but I think this falls under \"cool/cute application of datasette\": [improving fakedata performance for fun](https://lucapette.me/writing/improving-fakedata-performance-for-fun/). tl;dr I used datasette to visualize benchmarking data.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1447050738, "label": "Call for birthday presents: if you're using Datasette, let us know how you're using it here"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/28#issuecomment-751125270", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/28", "id": 751125270, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MTEyNTI3MA==", "user": {"value": 129786, "label": "jmelloy"}, "created_at": "2020-12-24T22:26:22Z", "updated_at": "2020-12-24T22:26:22Z", "author_association": "NONE", "body": "This comes around if you\u2019ve run the photo export without running an s3 upload. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 624490929, "label": "Invalid SQL no such table: main.uploads"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/316#issuecomment-398030903", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/316", "id": 398030903, "node_id": "MDEyOklzc3VlQ29tbWVudDM5ODAzMDkwMw==", "user": {"value": 132230, "label": "gavinband"}, "created_at": "2018-06-18T12:00:43Z", "updated_at": "2018-06-18T12:00:43Z", "author_association": "NONE", "body": "I should add that I'm using datasette version 0.22, Python 2.7.10 on Mac OS X. Happy to send more info if helpful.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 333238932, "label": "datasette inspect takes a very long time on large dbs"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/316#issuecomment-398109204", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/316", "id": 398109204, "node_id": "MDEyOklzc3VlQ29tbWVudDM5ODEwOTIwNA==", "user": {"value": 132230, "label": "gavinband"}, "created_at": "2018-06-18T16:12:45Z", "updated_at": "2018-06-18T16:12:45Z", "author_association": "NONE", "body": "Hi Simon,\r\nThanks for the response. Ok I'll try running `datasette inspect` up front.\r\nIn principle the db won't change. However, the site's in development and it's likely I'll need to add views and some auxiliary (smaller) tables as I go along. 
I will need to be careful with this if it involves an inspect step in each iteration, though.\r\ng.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 333238932, "label": "datasette inspect takes a very long time on large dbs"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567127981", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567127981, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEyNzk4MQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T17:18:06Z", "updated_at": "2019-12-18T17:18:06Z", "author_association": "NONE", "body": "Agreed, this would be nice to have. I'm currently working around it in `nginx` with additional location blocks:\r\n\r\n```\r\n\r\n location /datasette/ {\r\n proxy_pass http://127.0.0.1:8001/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n\r\n location /dna-protein-genome/ {\r\n proxy_pass http://127.0.0.1:8001/dna-protein-genome/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n\r\n location /rna-protein-genome/ {\r\n proxy_pass http://127.0.0.1:8001/rna-protein-genome/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n```\r\n\r\nThe 2nd and 3rd above are my databases. This works, but I have a small problem with URLs like `/rna-protein-genome?params....` that I could fix with some more nginx munging. I seem to do this sort of thing once every 5 years and then have to look it all up again.\r\n\r\nThanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567128636", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567128636, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEyODYzNg==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T17:19:46Z", "updated_at": "2019-12-18T17:19:46Z", "author_association": "NONE", "body": "Hmmm, wait, maybe my mindless (copy/paste) use of `proxy_redirect` is causing me grief...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567219479", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567219479, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzIxOTQ3OQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T21:24:23Z", "updated_at": "2019-12-18T21:24:23Z", "author_association": "NONE", "body": "@simonw What about allowing a base url. The `<base>` tag has been around forever. Then just use all relative URLs, which I guess is likely what you already do. 
See https://www.w3schools.com/TAGs/tag_base.asp", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/596#issuecomment-567225156", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/596", "id": 567225156, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzIyNTE1Ng==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T21:40:35Z", "updated_at": "2019-12-18T21:40:35Z", "author_association": "NONE", "body": "I initially went looking for a way to hide a column completely. Today I found the setting to truncate cells, but it applies to all cells. In my case I have text columns that can have many thousands of characters. I was wondering whether the metadata JSON would be an appropriate place to indicate how columns are displayed (on a col-by-col basis). E.g., I'd like to be able to specify that only 20 chars of a given column be shown, and the font be monospace. But maybe I can do that in some other way - I barely know anything about datasette yet, sorry!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 507454958, "label": "Handle really wide tables better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/596#issuecomment-567226048", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/596", "id": 567226048, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzIyNjA0OA==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T21:43:13Z", "updated_at": "2019-12-18T21:43:13Z", "author_association": "NONE", "body": "Meant to add that of course it would be better not to reinvent CSS (one time was already enough). But one option would be to provide a mechanism to specify a CSS class for a column (a cell, a row...) and let the user give a URL path to a CSS file on the command line.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 507454958, "label": "Handle really wide tables better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602911133", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602911133, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkxMTEzMw==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-23T23:22:10Z", "updated_at": "2020-03-23T23:22:10Z", "author_association": "NONE", "body": "I just updated #652 to remove a merge conflict. I think it's an easy way to add this functionality. 
I don't have time to do more though, sorry!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602916580", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602916580, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkxNjU4MA==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-23T23:37:06Z", "updated_at": "2020-03-23T23:37:06Z", "author_association": "NONE", "body": "@simonw You're welcome - I was just trying it out back in December as I thought it should work. Now there's a pandemic to work on though.... so no time at all for more at the moment. BTW, I have datasette running on several protein and full (virus) genome databases I build, and it's great - thank you! Hi and best regards to you & Nat :-)", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 1, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-603539349", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 603539349, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMzUzOTM0OQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-24T22:33:23Z", "updated_at": "2020-03-24T22:33:23Z", "author_association": "NONE", "body": "Hi Simon - I'm just (trying, at least) to follow along in the above. I can't try it out now, but I will if no one else gets to it. Sorry I didn't write any tests in the original bit of code I pushed - I was just trying to see if it could work & whether you'd want to maybe head in that direction. Anyway, thank you, I will certainly use this. Comment back here if no one tried it out & I'll make time.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-603849245", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 603849245, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMzg0OTI0NQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-25T13:48:13Z", "updated_at": "2020-03-25T13:48:13Z", "author_association": "NONE", "body": "Great - thanks again.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/596#issuecomment-720741903", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/596", "id": 720741903, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMDc0MTkwMw==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-11-02T21:44:45Z", "updated_at": "2020-11-02T21:44:45Z", "author_association": "NONE", "body": "Hi & thanks for the note @simonw! I wish I had more time to play with (and contribute to) datasette. 
I know you don't need me to tell you that it's super cool :-)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 507454958, "label": "Handle really wide tables better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/759#issuecomment-624860451", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/759", "id": 624860451, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNDg2MDQ1MQ==", "user": {"value": 133845, "label": "Krazybug"}, "created_at": "2020-05-06T20:03:01Z", "updated_at": "2020-05-06T20:04:42Z", "author_association": "NONE", "body": "Thank you. Now it's OK with the URL:\r\n\r\nhttp://localhost:8001/index/summary?_search=language%3Aeng&_sort=title&_searchmode=raw\r\n\r\nBut I'm not able to manage it in the metadata file. Here is mine (note that the sort column is taken into account):\r\n\r\n```\r\n{\r\n  \"databases\": {\r\n    \"index\": {\r\n      \"tables\": {\r\n        \"summary\": {\r\n          \"sort\": \"title\",\r\n          \"searchmode\": \"raw\"\r\n        }\r\n      }\r\n    }\r\n  }\r\n}\r\n```\r\n\r\nAny idea?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612673948, "label": "fts search on a column doesn't work anymore due to escape_fts"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/661#issuecomment-580075725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/661", "id": 580075725, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MDA3NTcyNQ==", "user": {"value": 134771, "label": "dvhthomas"}, "created_at": "2020-01-30T04:17:51Z", "updated_at": "2020-01-30T04:17:51Z", "author_association": "NONE", "body": "Thanks for the elegant solution to the problem as stated. I'm packaging right now :-)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 555832585, "label": "--port option to expose a port other than 8001 in \"datasette package\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/675#issuecomment-590405736", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/675", "id": 590405736, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDQwNTczNg==", "user": {"value": 141844, "label": "aviflax"}, "created_at": "2020-02-24T16:06:27Z", "updated_at": "2020-02-24T16:06:27Z", "author_association": "NONE", "body": "> So yeah - if you're happy to design this I think it would be worth us adding.\r\n\r\nGreat! I\u2019ll give it a go.\r\n\r\n> Small design suggestion: allow `--copy` to be applied multiple times\u2026\r\n\r\nMakes a ton of sense, will do.\r\n\r\n> Also since Click arguments can take multiple options I don't think you need to have the `:` in there - although if it better matches Docker's own UI it might be more consistent to have it.\r\n\r\nGreat point. 
I double checked the docs for `docker cp` and in that context the colon is used to delimit a container and a path, while spaces are used to separate the source and target.\r\n\r\nThe usage string is:\r\n\r\n```text\r\ndocker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-\r\ndocker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH\r\n```\r\n\r\nso in fact it\u2019ll be more consistent to use a space to delimit the source and destination paths, like so:\r\n\r\n```shell\r\n$ datasette package --copy /the/source/path /the/target/path data.db\r\n```\r\n\r\nand I suppose the short-form version of the option should be `cp` like so:\r\n\r\n```shell\r\n$ datasette package -cp /the/source/path /the/target/path data.db\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 567902704, "label": "--cp option for datasette publish and datasette package for shipping additional files and directories"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/675#issuecomment-590593247", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/675", "id": 590593247, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDU5MzI0Nw==", "user": {"value": 141844, "label": "aviflax"}, "created_at": "2020-02-24T23:02:52Z", "updated_at": "2020-02-24T23:02:52Z", "author_association": "NONE", "body": "> Design looks great to me.\r\n\r\nExcellent, thanks!\r\n\r\n> I'm not keen on two letter short versions (`-cp`) - I'd rather either have a single character or no short form at all.\r\n\r\nHmm, well, anyone running `datasette package` is probably at least somewhat familiar with UNIX CLIs\u2026 so how about `--cp` as a middle ground?\r\n\r\n```shell\r\n$ datasette package --cp /the/source/path /the/target/path data.db\r\n```\r\n\r\nI think I like it. 
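\r\n\r\nFor illustration, here's a rough sketch of how the option might end up being declared with Click -- the names and the command body are just my assumptions, not settled design; `nargs=2` pairs each source with a destination and `multiple=True` lets the option repeat:\r\n\r\n```python\r\nimport click\r\n\r\n\r\n@click.command()\r\n@click.argument(\"files\", nargs=-1)\r\n@click.option(\r\n    \"--copy\",\r\n    \"--cp\",\r\n    \"copy_paths\",\r\n    nargs=2,\r\n    multiple=True,\r\n    help=\"Source and destination path to copy into the image\",\r\n)\r\ndef package(files, copy_paths):\r\n    # copy_paths arrives as a tuple of (source, destination) pairs\r\n    for src, dest in copy_paths:\r\n        click.echo(f\"would copy {src} -> {dest}\")\r\n```\r\n\r\n(Just a sketch -- the real command would perform the copies while building the image.) 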
Easy to remember!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 567902704, "label": "--cp option for datasette publish and datasette package for shipping additional files and directories"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1708945716", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 1708945716, "node_id": "IC_kwDODFE5qs5l3HE0", "user": {"value": 150855, "label": "iloveitaly"}, "created_at": "2023-09-06T19:12:33Z", "updated_at": "2023-09-06T19:12:33Z", "author_association": "NONE", "body": "@maxhawkins curious why you didn't use the stdlib `mailbox` to parse the `mbox` files?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1710950671", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 1710950671, "node_id": "IC_kwDODFE5qs5l-wkP", "user": {"value": 150855, "label": "iloveitaly"}, "created_at": "2023-09-08T01:22:49Z", "updated_at": "2023-09-08T01:22:49Z", "author_association": "NONE", "body": "Makes sense, thanks for explaining!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/31#issuecomment-1251845216", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/31", "id": 1251845216, "node_id": "IC_kwDODEm0Qs5KnaRg", "user": {"value": 150986, "label": "dckc"}, "created_at": "2022-09-20T05:05:03Z", "updated_at": "2022-09-20T05:05:03Z", "author_association": "NONE", "body": "yay! 
Thanks a bunch for the `twitter-to-sqlite friends` command!\r\n\r\nThe twitter \"Download an archive of your data\" feature doesn't include who I follow, so this is particularly handy.\r\n\r\nThe whole Dogsheep thing is great :) I've written about similar things under [cloud-services](https://www.madmode.com/search/label/cloud-services/):\r\n - 2021: [Closet Librarian Approach to Cloud Services](https://www.madmode.com/2021/closet-librarian-approach-cloud-services.html)\r\n - 2015: [jukekb \\- Browse iTunes libraries and upload playlists to Google Music](https://www.madmode.com/2015/jukekb)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 520508502, "label": "\"friends\" command (similar to \"followers\")"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/983#issuecomment-752882797", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/983", "id": 752882797, "node_id": "MDEyOklzc3VlQ29tbWVudDc1Mjg4Mjc5Nw==", "user": {"value": 154364, "label": "dracos"}, "created_at": "2020-12-31T08:07:59Z", "updated_at": "2020-12-31T15:04:32Z", "author_association": "NONE", "body": "If you're using arrow functions, you can presumably use default parameters, not much difference in support. That would save you 9 bytes. But OTOH you need `\"use strict\";` to use arrow functions etc, and that's 13 bytes.\r\n\r\nYour latest 250-byte one, with use strict, gzips to 199 bytes. The following might be 292 bytes, but compresses to 204, basically the same, and works in any browser (well, IE9+) at all:\r\n\r\n`var datasette=datasette||{};datasette.plugins=function(){var d={};return{register:function(b,c,e){d[b]||(d[b]=[]);d[b].push([c,e])},call:function(b,c){c=c||{};var e=[];(d[b]||[]).forEach(function(a){a=a[0].apply(a[0],a[1].map(function(a){return c[a]}));void 0!==a&&e.push(a)});return e}}}();`\r\n\r\nSource for that is below; I replaced the [fn,parameters] because closure-compiler includes a polyfill for that, and I ran `closure-compiler --language_out ECMASCRIPT3`:\r\n\r\n```js\r\nvar datasette = datasette || {};\r\ndatasette.plugins = (() => {\r\n var registry = {};\r\n return {\r\n register: (hook, fn, parameters) => {\r\n if (!registry[hook]) {\r\n registry[hook] = [];\r\n }\r\n registry[hook].push([fn, parameters]);\r\n },\r\n call: (hook, args) => {\r\n args = args || {};\r\n var results = [];\r\n (registry[hook] || []).forEach((data) => {\r\n /* Call with the correct arguments */\r\n var result = data[0].apply(data[0], data[1].map(parameter => args[parameter]));\r\n if (result !== undefined) {\r\n results.push(result);\r\n }\r\n });\r\n return results;\r\n }\r\n };\r\n})();\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 712260429, "label": "JavaScript plugin hooks mechanism similar to pluggy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/983#issuecomment-752888552", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/983", "id": 752888552, "node_id": "MDEyOklzc3VlQ29tbWVudDc1Mjg4ODU1Mg==", "user": {"value": 154364, "label": "dracos"}, "created_at": "2020-12-31T08:33:11Z", "updated_at": "2020-12-31T08:34:27Z", "author_association": "NONE", "body": "If you could say that all hook functions had to accept one options parameter (and could use object 
destructuring if they wished to only see a subset), you could have this, which minifies (to all-browser-JS) to 200 bytes, gzips to 146, and works practically the same:\r\n\r\n```js\r\nvar datasette = datasette || {};\r\ndatasette.plugins = (() => {\r\n var registry = {};\r\n return {\r\n register: (hook, fn) => {\r\n registry[hook] = registry[hook] || [];\r\n registry[hook].push(fn);\r\n },\r\n call: (hook, args) => {\r\n var results = (registry[hook] || []).map(fn => fn(args||{}));\r\n return results;\r\n }\r\n };\r\n})();\r\n```\r\n\r\n`var datasette=datasette||{};datasette.plugins=function(){var b={};return{register:function(a,c){b[a]=b[a]||[];b[a].push(c)},call:function(a,c){return(b[a]||[]).map(function(a){return a(c||{})})}}}();`\r\n\r\nCalled the same, definitions tiny bit different:\r\n\r\n```js\r\ndatasette.plugins.register('numbers', ({a, b}) => a + b)\r\ndatasette.plugins.register('numbers', o => o.a * o.b)\r\ndatasette.plugins.call('numbers', {a: 4, b: 6})\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 712260429, "label": "JavaScript plugin hooks mechanism similar to pluggy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1165#issuecomment-753033121", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1165", "id": 753033121, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzAzMzEyMQ==", "user": {"value": 154364, "label": "dracos"}, "created_at": "2020-12-31T19:33:47Z", "updated_at": "2020-12-31T19:33:47Z", "author_association": "NONE", "body": "Sorry to go on about it, but it's my only example ;) And thought it might be of interest/use. Here is FixMyStreet's Cypress workflow https://github.com/mysociety/fixmystreet/blob/master/.github/workflows/cypress.yml with the master script that sets up server etc at https://github.com/mysociety/fixmystreet/blob/master/bin/browser-tests (that has features such as working inside/outside Vagrant, and can do JS code coverage) and then the tests are at https://github.com/mysociety/fixmystreet/tree/master/.cypress/cypress/integration", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 776635426, "label": "Mechanism for executing JavaScript unit tests"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/983#issuecomment-753587963", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/983", "id": 753587963, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzU4Nzk2Mw==", "user": {"value": 154364, "label": "dracos"}, "created_at": "2021-01-03T09:02:50Z", "updated_at": "2021-01-03T10:00:05Z", "author_association": "NONE", "body": "> but I'm already commited to requiring support for () => {} arrow functions\r\n\r\nDon't think you are :) (e.g. gzipped, using arrow functions in my example saves 2 bytes over spelling out function). 
On FMS, past month, looking at popular browsers, looks like we'd have 95.41% arrow support, 94.19% module support, and 4.58% (mostly IE9/IE11/Safari 9) supporting neither.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 712260429, "label": "JavaScript plugin hooks mechanism similar to pluggy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1298#issuecomment-823064725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1298", "id": 823064725, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMzA2NDcyNQ==", "user": {"value": 154364, "label": "dracos"}, "created_at": "2021-04-20T07:57:14Z", "updated_at": "2021-04-20T07:57:14Z", "author_association": "NONE", "body": "My suggestions, originally made on twitter, but might be better here now:\r\n\r\n1. Could have a CSS shadow (one of the comments on https://stackoverflow.com/questions/44793453/how-do-i-add-a-top-and-bottom-shadow-while-scrolling-but-only-when-needed is a codepen for horizontal instead of vertical);\r\n\r\n2. Could give the table a max-height (either the window or work out the available space) so that it is both vertically/horizontally scrollable and you don't have to scroll to the bottom in order to see this;\r\n\r\n3. On a desktop browser, what I think I'd want is an absolute grid to work with - left query/filters, TR chart (or map), BR table. No problem with scrolling then. Here is a mockup I made when this was about the map plugin:\r\n![image](https://user-images.githubusercontent.com/154364/115358389-82c47e00-a1b5-11eb-8a63-0ca14fd23d32.png)\r\n![image](https://user-images.githubusercontent.com/154364/115358454-97087b00-a1b5-11eb-9501-cf884ae72d7c.png)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855476501, "label": "improve table horizontal scroll experience"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1298#issuecomment-823102978", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1298", "id": 823102978, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMzEwMjk3OA==", "user": {"value": 154364, "label": "dracos"}, "created_at": "2021-04-20T08:51:23Z", "updated_at": "2021-04-20T08:51:23Z", "author_association": "NONE", "body": "2. Max height would still let you scroll the page to underneath the facets to the table, but would mean the table would never take up more than your window size, so the horizontal scrollbar would be visible as soon as the table took up the size of the window.\r\n3. Yes, this wouldn't be for mobile :) It'd be desktop-only styling. On mobile you can scroll much more easily with touch, anyway. In your case, perhaps better would be the whole top half would be facets, bottom left quadrant chart, bottom right table. 
Depends upon the particular use case, as you say.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855476501, "label": "improve table horizontal scroll experience"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1362#issuecomment-855428296", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1362", "id": 855428296, "node_id": "MDEyOklzc3VlQ29tbWVudDg1NTQyODI5Ng==", "user": {"value": 154364, "label": "dracos"}, "created_at": "2021-06-06T16:53:20Z", "updated_at": "2021-06-06T16:53:20Z", "author_association": "NONE", "body": "> Presumably this would also require adding Content-Security-Policy to the Vary header though, which will have a nasty effect on Cloudflare and Fastly and such like.\r\n\r\nNo, because Vary header is about *request* headers that cause the response to vary, not response headers.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 912864936, "label": "Consider using CSP to protect against future XSS"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/159#issuecomment-1111506339", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/159", "id": 1111506339, "node_id": "IC_kwDOCGYnMM5CQD2j", "user": {"value": 154364, "label": "dracos"}, "created_at": "2022-04-27T21:35:13Z", "updated_at": "2022-04-27T21:35:13Z", "author_association": "NONE", "body": "Just stumbled across this, wondering why none of my deletes were working.", "reactions": "{\"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 702386948, "label": ".delete_where() does not auto-commit (unlike .insert() or .upsert())"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/159#issuecomment-1493051222", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/159", "id": 1493051222, "node_id": "IC_kwDOCGYnMM5Y_idW", "user": {"value": 154364, "label": "dracos"}, "created_at": "2023-04-01T17:21:05Z", "updated_at": "2023-04-01T17:21:05Z", "author_association": "NONE", "body": "In a related issue, nearly a year later I just stumbled across this again, as I wondered why none of my rebuild-fts were rebuilding. It looks like: disable_fts in db.py commits; enable_fts partly commits except the last step (due to executescript committing a pending transaction); rebuild_fts won't commit unless manually done as above with e.g. a context manager.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 702386948, "label": ".delete_where() does not auto-commit (unlike .insert() or .upsert())"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/265#issuecomment-1493052396", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/265", "id": 1493052396, "node_id": "IC_kwDOCGYnMM5Y_ivs", "user": {"value": 154364, "label": "dracos"}, "created_at": "2023-04-01T17:27:18Z", "updated_at": "2023-04-01T17:27:18Z", "author_association": "NONE", "body": "`enable_fts` is a function in datasette, not in this repo, which doesn't do any escaping of search terms. 
It sounds like from https://docs.datasette.io/en/stable/full_text_search.html#advanced-sqlite-search-queries you might want to enable raw searching, as otherwise it's disabled and everything is escaped by default.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 907795562, "label": "Using enable_fts before search term"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1519#issuecomment-983890815", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1519", "id": 983890815, "node_id": "IC_kwDOBm6k_c46pPt_", "user": {"value": 157158, "label": "phubbard"}, "created_at": "2021-12-01T17:50:09Z", "updated_at": "2021-12-01T17:50:09Z", "author_association": "NONE", "body": "thanks so very much for the prompt attention and fix! Plus, the animated GIF showing the bug is just extra and I love it. Interactions like this are why I love open source.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058790545, "label": "base_url is omitted in JSON and CSV views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-920543967", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 920543967, "node_id": "IC_kwDOBm6k_c423mLf", "user": {"value": 164214, "label": "sethvincent"}, "created_at": "2021-09-16T03:19:08Z", "updated_at": "2021-09-16T03:19:08Z", "author_association": "NONE", "body": ":wave: I just put together a small example using the lambda container image support: https://github.com/sethvincent/datasette-aws-lambda-example\r\n\r\nIt uses mangum and AWS's [python runtime interface client](https://github.com/aws/aws-lambda-python-runtime-interface-client) to handle the lambda event stuff.\r\n\r\nI'd be happy to help with a publish plugin for AWS lambda as I plan to use this for upcoming projects.\r\n\r\nThe example uses the [serverless](https://www.serverless.com) cli for deployment but there might be a more suitable deployment approach for the plugin. 
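\r\n\r\nFor a rough idea of the wiring involved -- this is a sketch under my assumptions, not the exact code from the example repo, and the database name is made up -- Mangum adapts Datasette's ASGI app to Lambda's handler protocol:\r\n\r\n```python\r\nfrom datasette.app import Datasette\r\nfrom mangum import Mangum\r\n\r\n# Build the ASGI application from one or more SQLite files\r\ndatasette = Datasette([\"fixtures.db\"])\r\napp = datasette.app()\r\n\r\n# Mangum translates Lambda events into ASGI requests and back\r\nhandler = Mangum(app)\r\n```\r\n\r\n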
It would be cool if users didn't have to install anything additional other than the aws cli and its associated config/credentials setup.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/healthkit-to-sqlite/issues/9#issuecomment-514745798", "issue_url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9", "id": 514745798, "node_id": "MDEyOklzc3VlQ29tbWVudDUxNDc0NTc5OA==", "user": {"value": 166463, "label": "tholo"}, "created_at": "2019-07-24T18:25:36Z", "updated_at": "2019-07-24T18:25:36Z", "author_association": "NONE", "body": "This is on macOS 10.14.6, with Python 3.7.4, packages in the virtual environment:\r\n\r\n```\r\nPackage             Version\r\n------------------- -------\r\naiofiles            0.4.0\r\nClick               7.0\r\nclick-default-group 1.2.1\r\ndatasette           0.29.2\r\nh11                 0.8.1\r\nhealthkit-to-sqlite 0.3.1\r\nhttptools           0.0.13\r\nhupper              1.8.1\r\nimportlib-metadata  0.18\r\nJinja2              2.10.1\r\nMarkupSafe          1.1.1\r\nPint                0.8.1\r\npip                 19.2.1\r\npluggy              0.12.0\r\nsetuptools          41.0.1\r\nsqlite-utils        1.7\r\ntabulate            0.8.3\r\nuvicorn             0.8.4\r\nuvloop              0.12.2\r\nwebsockets          7.0\r\nzipp                0.5.2\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 472429048, "label": "Too many SQL variables"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/healthkit-to-sqlite/issues/9#issuecomment-515370687", "issue_url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9", "id": 515370687, "node_id": "MDEyOklzc3VlQ29tbWVudDUxNTM3MDY4Nw==", "user": {"value": 166463, "label": "tholo"}, "created_at": "2019-07-26T09:01:19Z", "updated_at": "2019-07-26T09:01:19Z", "author_association": "NONE", "body": "Yes, that did fix the issue I was seeing \u2014 it will now import my complete HealthKit data.\n\nThorsten\n\n> On Jul 25, 2019, at 23:07, Simon Willison wrote:\n> \n> @tholo this should be fixed in just-released version 0.3.2 - could you run a pip install -U healthkit-to-sqlite and let me know if it works for you now?\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 472429048, "label": "Too many SQL variables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1062124485", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1062124485, "node_id": "IC_kwDOBm6k_c4_TrvF", "user": {"value": 167160, "label": "khusmann"}, "created_at": "2022-03-08T19:26:32Z", "updated_at": "2022-03-08T19:26:32Z", "author_association": "NONE", "body": "Looks like I'm late to the party here, but wanted to join the convo if there's still time before this interface is solidified in v1.0. My plugin use case is for education / social science data, which is meta-data heavy in the documentation of measurement scales, instruments, collection procedures, etc. that I want to connect to columns, tables, and dbs (and render in static pages, but looks like I can do that with the jinja plugin hook). 
I'm still digging in and I think @brandonrobertz 's approach will work for me at least for now, but I want to bump this thread in the meantime -- are there still plans for an async metadata hook at some point in the future? (or are you considering other directions?)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1065929510", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1065929510, "node_id": "IC_kwDOBm6k_c4_iMsm", "user": {"value": 167160, "label": "khusmann"}, "created_at": "2022-03-12T17:49:59Z", "updated_at": "2022-03-12T17:49:59Z", "author_association": "NONE", "body": "Ok, I'm taking a slightly different approach, which I think is sort of close to the in-memory _metadata table idea.\r\n\r\nI'm using a startup hook to load metadata / other info from the database, which I store in the datasette object for later:\r\n\r\n```\r\n@hookimpl\r\ndef startup(datasette):\r\n    async def inner():\r\n        datasette._mypluginmetadata = ...  # await db query here\r\n\r\n    return inner\r\n```\r\n\r\nThen, I can use this in other plugins:\r\n\r\n```\r\n@hookimpl\r\ndef render_cell(value, column, table, database, datasette):\r\n    ...  # use datasette._mypluginmetadata\r\n```\r\n\r\nFor my app I don't need anything to update dynamically so it's fine to pre-populate everything on startup. It's also good to have things precached especially for a hook like render_cell, which would otherwise require a ton of redundant db queries.\r\n\r\nMakes me wonder if we could take a sort of similar caching approach with the internal _metadata table. Like have a little watchdog that could query all of the attached dbs for their _metadata tables every 5min or so, which could then be merged into the in-memory _metadata table and accessed sync by the plugins, or something like that.\r\n\r\nFor most of the use cases I can think of, live updates don't need to take effect immediately; refreshing a cache every 5min or on some other trigger (adjustable w a config setting) would be just fine. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1065951744", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1065951744, "node_id": "IC_kwDOBm6k_c4_iSIA", "user": {"value": 167160, "label": "khusmann"}, "created_at": "2022-03-12T19:47:17Z", "updated_at": "2022-03-12T19:47:17Z", "author_association": "NONE", "body": "Awesome, thanks @brandonrobertz!\r\n\r\nThe plugin is close, but looks like it only grabs remote metadata, is that right? Instead what I'm wanting is to grab metadata embedded in the attached databases. At this point I've realized I need a lot more flexibility in metadata for my data model (esp around formatting cell values and custom file exports), so rather than extending that plugin I'll continue working on a plugin specific to my app.\r\n\r\nIf I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metadata` is called, right? 
Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config?\r\n\r\n> Making the get_metadata async won't improve the situation by itself as only some of the code paths accessing metadata use that hook. The other paths use the internal metadata dict.\r\n\r\nI agree -- because things like `render_cell` will potentially want to read metadata/config, `get_metadata` should really remain sync and lightweight, which we can do with something like the remote-metadata plugin that could also poll metadata tables in attached databases.\r\n\r\nThat leaves your app, where it sounds like you want changes made by the user in the browser to be immediately reflected, rather than have to wait for the next metadata refresh. In this case I wonder if you could have your app make a sync write to the datasette object so the change would have an immediate effect, but then have a separate async polling mechanism to eventually write that change out to the database for long-term persistence. Then you'd have the best of both worlds, I think? But probably not worth the trouble if your use cases are small (and/or you're not reading metadata/config from tight loops like render_cell).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066143991", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066143991, "node_id": "IC_kwDOBm6k_c4_jBD3", "user": {"value": 167160, "label": "khusmann"}, "created_at": "2022-03-13T17:13:09Z", "updated_at": "2022-03-13T17:13:09Z", "author_association": "NONE", "body": "Thanks for taking the time to reply @brandonrobertz, this is really helpful info.\r\n\r\n> See \"Many small queries are efficient in sqlite\" for more information on the rationale here. Also note that in the datasette-live-config reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had.\r\n\r\nAh, that's nifty! Yeah, then caching on the python side is likely a waste :) I'm new to working with sqlite, so it's super good to know that many-small-queries is a common pattern.\r\n\r\n> I tested on very large Datasette deployments (hundreds of DBs, millions of rows).\r\n\r\nFor my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests? I'm less concerned now that I know a little more about sqlite's caching, but that special situation will jump you to a few orders of magnitude above what the sqlite article describes (e.g. 200 vs 20,000 queries+metadata merges for a page displaying 100 rows of a 200 column table). It wouldn't scale with db size as much as # of visible cells being rendered on the page, although they would be identical queries I suppose so will cache well.\r\n\r\n(If you didn't test this specific situation, no worries -- I'm just trying to calibrate my intuition on this and can do my own benchmarks at some point.)\r\n\r\n> Simon talked about eventually making something like this a standard feature of Datasette\r\n\r\nYeah, getting metadata (and static pages as well for that matter) from internal tables definitely has my vote for including as a standard feature! It's really nice to be able to distribute a single *.db with all the metadata and static pages bundled. 
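\r\n\r\nFor illustration, the refresh-on-a-trigger idea from earlier in this thread might look roughly like the following sketch (the `_metadata` key/value layout and the helper name are my assumptions):\r\n\r\n```python\r\nimport time\r\n\r\nCACHE_TTL = 300  # seconds -- the \"every 5min\" refresh discussed above\r\n_cache = {\"loaded_at\": 0.0, \"metadata\": {}}\r\n\r\n\r\ndef get_cached_metadata(conn):\r\n    \"\"\"Return cached metadata, re-reading the _metadata table when stale.\"\"\"\r\n    now = time.monotonic()\r\n    if now - _cache[\"loaded_at\"] > CACHE_TTL:\r\n        rows = conn.execute(\"SELECT key, value FROM _metadata\").fetchall()\r\n        _cache[\"metadata\"] = dict(rows)\r\n        _cache[\"loaded_at\"] = now\r\n    return _cache[\"metadata\"]\r\n```\r\n\r\n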
My metadata are sufficiently complex/domain specific that it makes sense to continue on my own plugin for now, but I'll be thinking about more general parts I can spin off as possible contributions to liveconfig (if you're open to them) or other plugins in this ecosystem.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066194130", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066194130, "node_id": "IC_kwDOBm6k_c4_jNTS", "user": {"value": 167160, "label": "khusmann"}, "created_at": "2022-03-13T22:23:04Z", "updated_at": "2022-03-13T22:23:04Z", "author_association": "NONE", "body": "Ah, sorry, I didn't get what you were saying the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata for each cell was seeming quite excessive. Now I'm on the same page! :)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-558437707", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 558437707, "node_id": "MDEyOklzc3VlQ29tbWVudDU1ODQzNzcwNw==", "user": {"value": 172847, "label": "pkoppstein"}, "created_at": "2019-11-26T03:02:53Z", "updated_at": "2019-11-26T03:03:29Z", "author_association": "NONE", "body": "@simonw - Thanks for the reply!\r\n\r\nMy reading of the heroku documents is that if one sets things up using git, then one can use \"git push\" (from a {local, GitHub, GitLab} git repository to Heroku) to \"update\" a Heroku deployment, but I'm not sure exactly how this works. However, assuming there is some way to use \"git push\" to update the Heroku deployment, the question becomes how one can do this in conjunction with datasette.\r\n\r\nAgain, based on my reading of the heroku documents, it would seem that the following should work (but it doesn't quite):\r\n\r\n1) Use datasette to create a deployment (named MYAPP)\r\n2) Put it in maintenance mode\r\n3) heroku git:clone -a MYAPP\r\n -- This results in an empty repository (as expected)\r\n4) In another directory, heroku slugs:download -a MYAPP\r\n5) Copy the downloaded slug into the repository\r\n6) Make some change to metadata.json\r\n7) Commit and push it back\r\n8) Take the deployment out of maintenance mode\r\n9) Refresh the deployment\r\n\r\nUsing the heroku console, I've verified that the edits appear on heroku, but somehow they are not reflected in the running app.\r\n\r\nI'm hopeful that with some small tweak or perhaps the addition of a bit of voodoo, this strategy will work. 
\r\n\r\nI think it will be important to get this working for another reason: getting Heroku, Cloudcube, and datasette to work together, to overcome the slug size limitation so that large SQLite databases can be deployed to Heroku using Datasette.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-558852316", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 558852316, "node_id": "MDEyOklzc3VlQ29tbWVudDU1ODg1MjMxNg==", "user": {"value": 172847, "label": "pkoppstein"}, "created_at": "2019-11-26T22:54:23Z", "updated_at": "2019-11-26T22:54:23Z", "author_association": "NONE", "body": "@jacobian - Thanks for your help. Having to upload an entire slug each time a small change is needed in `metadata.json` seems no better than the current situation so I probably won't go down that rabbit hole just yet. In any case, the really important goal is moving the SQLite file out of Heroku in a way that the Heroku app can still read it efficiently. Is this possible? Is Cloudcube the right place to start? Is there any alternative? ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-559916057", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 559916057, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTkxNjA1Nw==", "user": {"value": 172847, "label": "pkoppstein"}, "created_at": "2019-11-30T06:08:50Z", "updated_at": "2019-11-30T06:08:50Z", "author_association": "NONE", "body": "@simonw, @jacobian - I was able to resolve the metadata.json issue by adding `-m metadata.json` to the Procfile. Now `git push heroku master` picks up the changes, though I have the impression that heroku is doing more work than necessary (e.g. one of the information messages is: `Installing requirements with pip`).\r\n\r\nI also had to set the environment variable WEB_CONCURRENCY -- I used WEB_CONCURRENCY=1.\r\n\r\nI am still anxious to know whether it's possible for Datasette on Heroku to access the SQLite file at another location. Cloudcube seems the most promising, and I'm hoping it can be done by tweaking the Procfile suitably, but maybe that's too optimistic?\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-356161672", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 356161672, "node_id": "MDEyOklzc3VlQ29tbWVudDM1NjE2MTY3Mg==", "user": {"value": 173848, "label": "yozlet"}, "created_at": "2018-01-09T02:35:35Z", "updated_at": "2018-01-09T02:35:35Z", "author_association": "NONE", "body": "@wulfmann I think I disagree, except I'm not entirely sure what you mean by that first paragraph. 
The JSON API that Datasette currently exposes is quite different to GraphQL.\r\n\r\nFurthermore, there's no \"just\" about connecting micro-graphql to a DB; at least, no more \"just\" than adding any other API. You still need to configure the schema, which is exactly the kind of thing that Datasette does for JSON API. This is why I think that GraphQL's a good fit here.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/983#issuecomment-706413753", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/983", "id": 706413753, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNjQxMzc1Mw==", "user": {"value": 173848, "label": "yozlet"}, "created_at": "2020-10-09T21:41:12Z", "updated_at": "2020-10-09T21:41:12Z", "author_association": "NONE", "body": "If you don't mind a somewhat bonkers idea: how about a JS client-side plugin capability that allows any user looking at a Datasette site to pull in external plugins for data manipulation, even if the Datasette owner hasn't added them? (Yes, this may be _much_ too ambitious. If you're remotely interested, maybe fork this discussion to a different issue.) \r\n\r\nThis is some fascinating reading about what JS sandboxing looks like these days:\r\nhttps://www.figma.com/blog/how-we-built-the-figma-plugin-system/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 712260429, "label": "JavaScript plugin hooks mechanism similar to pluggy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/983#issuecomment-753218817", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/983", "id": 753218817, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzIxODgxNw==", "user": {"value": 173848, "label": "yozlet"}, "created_at": "2020-12-31T22:32:25Z", "updated_at": "2020-12-31T22:32:25Z", "author_association": "NONE", "body": "Amazing work! 
And you've put in far more work than I'd expect to reduce the payload (which is admirable).\r\n\r\nSo, to add a plugin with the current design, it goes in (a) the template or (b) a bookmarklet, right?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 712260429, "label": "JavaScript plugin hooks mechanism similar to pluggy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2189#issuecomment-1728192688", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2189", "id": 1728192688, "node_id": "IC_kwDOBm6k_c5nAiCw", "user": {"value": 173848, "label": "yozlet"}, "created_at": "2023-09-20T17:53:31Z", "updated_at": "2023-09-20T17:53:31Z", "author_association": "NONE", "body": "`/me munches popcorn at a furious rate, utterly enthralled`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1901416155, "label": "Server hang on parallel execution of queries to named in-memory databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1955#issuecomment-1357084279", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1955", "id": 1357084279, "node_id": "IC_kwDOBm6k_c5Q43Z3", "user": {"value": 178162, "label": "andrewdotn"}, "created_at": "2022-12-19T04:34:16Z", "updated_at": "2022-12-19T04:34:16Z", "author_association": "NONE", "body": "You were super-close on the python version of the test here, changing `http` to `https` on 8b73fc6b47dffd8836f5c58aae1e57c1f66a5754 is enough to pass the test:\r\n\r\n```diff\r\ndiff --git a/tests/conftest.py b/tests/conftest.py\r\nindex 69dee68b4a3f..ba07a11d37f6 100644\r\n--- a/tests/conftest.py\r\n+++ b/tests/conftest.py\r\n@@ -207,7 +207,7 @@ def ds_localhost_https_server(tmp_path_factory):\r\n         stderr=subprocess.STDOUT,\r\n         cwd=tempfile.gettempdir(),\r\n     )\r\n-    wait_until_responds(\"http://localhost:8042/\", verify=client_cert)\r\n+    wait_until_responds(\"https://localhost:8042/\", verify=client_cert)\r\n     # Check it started successfully\r\n     assert not ds_proc.poll(), ds_proc.stdout.read().decode(\"utf-8\")\r\n     yield ds_proc, client_cert\r\n```\r\n\r\nMy speculation about what was happening here: when `wait_until_responds()` timed out due to SSL connection problems, the datasette process wouldn\u2019t get killed, because `.terminate()` isn\u2019t in a `finally`. 
That could (1) hang CI and (2) cause all your future local test runs to mysteriously fail because they\u2019d be secretly talking to that old datasette process still hanging around from a past test run with an old temporary server certificate, and that old server cert wouldn\u2019t validate against your newly-created ca cert.\r\n\r\nA `finally` for `.terminate()` would help; a fancier version could be a context manager for running the external `datasette` process that could:\r\n - ensure the process always exited when no longer needed\r\n - if you want to be fancy, call `terminate()`, `wait()` for a short timeout for the process to exit, then try `kill()` and `wait()` again; raise an exception complaining about the seemingly-unkillable process if all that fails\r\n - raise an error if the process exited with a non-zero error code; here it\u2019s likely that some `datasette` executions were secretly failing with `[Errno 48] error while attempting to bind on address ('127.0.0.1', 8042): address already in use`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1496652622, "label": "invoke_startup() is not run in some conditions, e.g. gunicorn/uvicorn workers, breaking lots of things"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1398#issuecomment-889599513", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1398", "id": 889599513, "node_id": "IC_kwDOBm6k_c41BjYZ", "user": {"value": 192984, "label": "aitoehigie"}, "created_at": "2021-07-30T03:21:49Z", "updated_at": "2021-07-30T03:21:49Z", "author_association": "NONE", "body": "Does the library support this now?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 947044667, "label": "Documentation on using Datasette as a library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1419734229", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1419734229, "node_id": "IC_kwDOCGYnMM5Un2zV", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-02-06T20:53:28Z", "updated_at": "2023-02-06T21:16:29Z", "author_association": "NONE", "body": "I think it's not currently possible: sqlite-utils requires that it be one of `integer`, `text`, `float`, `blob` ([see code](https://github.com/simonw/sqlite-utils/blob/fc221f9b62ed8624b1d2098e564f525c84497969/sqlite_utils/cli.py#L2266))\r\n\r\nIMO, this is a bit of friction and it would be nice if it was more permissive. SQLite permits developers to use any data type when creating a table. For example, this is a perfectly cromulent sqlite session that creates a table with columns of type `baz` and `bar`:\r\n\r\n```\r\nsqlite> create table foo(column1 baz, column2 bar);\r\nsqlite> .schema foo\r\nCREATE TABLE foo(column1 baz, column2 bar);\r\nsqlite> select * from pragma_table_info('foo');\r\ncid         name        type        notnull     dflt_value  pk\r\n----------  ----------  ----------  ----------  ----------  ----------\r\n0           column1     baz         0                       0\r\n1           column2     bar         0                       0\r\n```\r\n\r\nThe idea is that the application developer will know what meaning to ascribe to those types. For example, I'm working on a plugin to Datasette. Dates are tricky to handle. 
If you have some existing rows, you can look at the values in them to know how a user is serializing the dates -- as an ISO 8601 string? An RFC 3339 string? With millisecond precision? With timezone offset? But if you don't yet have any rows, you have to guess. If the column is of type `TEXT`, you don't even know that it's meant to hold a date! In this case, my plugin will look to see if the column is of type `DATE` or `DATETIME`, and assume a certain representation when writing.\r\n\r\nPerhaps there is an argument that sqlite-utils is trying to conform to SQLite's strict mode, and that is why it limits the choices. In strict mode, SQLite requires that the data type be one of `INT`, `INTEGER`, `REAL`, `TEXT`, `BLOB`, `ANY`. But that can't be the case -- sqlite-utils supports `FLOAT`, which is not one of the valid types in strict mode, and it rejects `INT`, `REAL` and `ANY`, which _are_ valid.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1419740776", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1419740776, "node_id": "IC_kwDOCGYnMM5Un4Zo", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-02-06T20:59:01Z", "updated_at": "2023-02-06T20:59:01Z", "author_association": "NONE", "body": "That said, it looks like the check is only enforced at the CLI level. If you use the API directly, I think it'll work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420809773", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1420809773, "node_id": "IC_kwDOCGYnMM5Ur9Yt", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-02-07T13:53:01Z", "updated_at": "2023-02-07T13:53:01Z", "author_association": "NONE", "body": "Ah, it looks like that is controlled by this dict: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L178\r\n\r\nI suspect you could overwrite the datetime entry to achieve what you want", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420992261", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1420992261, "node_id": "IC_kwDOCGYnMM5Usp8F", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-02-07T15:45:58Z", "updated_at": "2023-02-07T15:45:58Z", "author_association": "NONE", "body": "I'd support that, but I'm not the author of this library.\r\n\r\nOne challenge is that would be a breaking change. 
Do you see a way to enable it without affecting existing users or bumping the major version number?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421033725", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1421033725, "node_id": "IC_kwDOCGYnMM5Us0D9", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-02-07T16:12:13Z", "updated_at": "2023-02-07T16:12:13Z", "author_association": "NONE", "body": "I think the bigger issue is that `sqlite-utils` mixes mechanism (it implements the [12-step way to alter SQLite tables](https://www.sqlite.org/lang_altertable.html#otheralter)) and policy (it has an opinionated stance on what column types should be used).\r\n\r\nThat might be a design choice to make it accessible to users by providing a reasonable set of defaults, but it doesn't quite fit my use case.\r\n\r\nIt might make sense to extract a separate library that provides just the mechanisms, and then `sqlite-utils` would sit on top of that library with its opinionated set of policies.\r\n\r\nThat would be a very big change, though.\r\n\r\nI might take a stab at extracting the library, but just for the table schema migration piece, not all the other features that `sqlite-utils` supports. I wouldn't expect `sqlite-utils` to depend on it.\r\n\r\nPart of my motivation is that I want to provide some other abilities, too, like support for CHECK constraints. I see that the issue in this repo (https://github.com/simonw/sqlite-utils/issues/358) proposes a bunch of short-hand constraints, which I wouldn't want to accidentally expose to people -- I want a layer that is a 1:1 mapping to SQLite.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421081939", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/524", "id": 1421081939, "node_id": "IC_kwDOCGYnMM5Us_1T", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-02-07T16:42:25Z", "updated_at": "2023-02-07T16:43:42Z", "author_association": "NONE", "body": "Ha, yes, I might end up making something very niche. That's OK.\r\n\r\nI'm building a UI for [Datasette](https://datasette.io/) that lets users make schema changes, so it's important to me that the tool work in a non-surprising way -- if you ask for a column of type X, you should get type X. If the column or table previously had CHECK constraints, they shouldn't be silently removed. And so on. I had hoped that I could just lean on sqlite-utils, but I think it's a little too surprising.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1572766460, "label": "Transformation type `--type DATETIME`"}, "performed_via_github_app": null}
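
As a quick illustration of the SQLite behavior discussed in the comments above -- the engine accepts arbitrary declared column types and reports them back via `pragma_table_info()` -- here is a small sketch using only Python's standard library (the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# SQLite accepts non-standard declared types such as DATETIME; the
# declaration is stored even though it carries no enforced meaning
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, happened_at DATETIME)")

# pragma_table_info() reports the declared type back, so a tool or a
# plugin can ascribe its own meaning (e.g. "render this as a date")
for cid, name, coltype, notnull, dflt_value, pk in conn.execute(
    "SELECT * FROM pragma_table_info('events')"
):
    print(name, coltype)
```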