{"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/53#issuecomment-748436453", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53", "id": 748436453, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODQzNjQ1Mw==", "user": {"value": 27, "label": "anotherjesse"}, "created_at": "2020-12-19T07:47:01Z", "updated_at": "2020-12-19T07:47:01Z", "author_association": "NONE", "body": "I think this should probably be closed as won't fix.\r\n\r\nAttempting to make a patch for this I realized that the since_id would limit to tweets posted since that since_id, not when it was favorited. So favoriting something in the older would be missed if you used `--since` with a cron script\r\n\r\nBetter to just use `--stop_after` set to a couple hundred", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771324837, "label": "--since support for favorites"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/healthkit-to-sqlite/issues/11#issuecomment-711083698", "issue_url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/11", "id": 711083698, "node_id": "MDEyOklzc3VlQ29tbWVudDcxMTA4MzY5OA==", "user": {"value": 572, "label": "jarib"}, "created_at": "2020-10-17T21:39:15Z", "updated_at": "2020-10-17T21:39:15Z", "author_association": "NONE", "body": "Nice! Works perfectly. Thanks for the quick response and great tooling in general.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 723838331, "label": "export.xml file name varies with different language settings"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/526#issuecomment-810943882", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/526", "id": 810943882, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMDk0Mzg4Mg==", "user": {"value": 701, "label": "jokull"}, "created_at": "2021-03-31T10:03:55Z", "updated_at": "2021-03-31T10:03:55Z", "author_association": "NONE", "body": "+1 on using nested queries to achieve this! Would be great as streaming CSV is an amazing feature.\r\n\r\nSome UX/DX details:\r\n\r\nI was expecting it to work to simply add `&_stream=on` to custom SQL queries because the docs say \r\n\r\n> Any Datasette table, view or **custom SQL query** can be exported as CSV.\r\n\r\nAfter a bit of testing back and forth I realized streaming only works for full tables. \r\n\r\nWould love this feature because I'm using `pandas.read_csv` to paint graphs from custom queries and the graphs are cut off because of the 1000 row limit. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459882902, "label": "Stream all results for arbitrary SQL and canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/15#issuecomment-605439685", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15", "id": 605439685, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNTQzOTY4NQ==", "user": {"value": 2029, "label": "garethr"}, "created_at": "2020-03-28T12:17:01Z", "updated_at": "2020-03-28T12:17:01Z", "author_association": "NONE", "body": "That looks great, thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 544571092, "label": "Assets table with downloads"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622279374", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33", "id": 622279374, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMjI3OTM3NA==", "user": {"value": 2029, "label": "garethr"}, "created_at": "2020-05-01T07:12:47Z", "updated_at": "2020-05-01T07:12:47Z", "author_association": "NONE", "body": "I also go it working with:\r\n\r\n```yaml\r\nrun: echo ${{ secrets.github_token }} | github-to-sqlite auth\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 609950090, "label": "Fall back to authentication via ENV"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-927312650", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54", "id": 927312650, "node_id": "IC_kwDODEm0Qs43RasK", "user": {"value": 2182, "label": "danp"}, "created_at": "2021-09-26T14:09:51Z", "updated_at": "2021-09-26T14:09:51Z", "author_association": "NONE", "body": "Similar trouble with ageinfo using 0.22. 
Here's what my ageinfo.js file looks like:\r\n\r\n```\r\nwindow.YTD.ageinfo.part0 = [\r\n {\r\n \"ageMeta\" : { }\r\n }\r\n]\r\n```\r\n\r\nCommenting out the registration for ageinfo in archive.py gets my archive to import.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779088071, "label": "Archive import appears to be broken on recent exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/pocket-to-sqlite/issues/11#issuecomment-1221521377", "issue_url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/11", "id": 1221521377, "node_id": "IC_kwDODLZ_YM5Izu_h", "user": {"value": 2467, "label": "fernand0"}, "created_at": "2022-08-21T10:51:37Z", "updated_at": "2022-08-21T10:51:37Z", "author_association": "NONE", "body": "I didn't see there is a PR about this: https://github.com/dogsheep/pocket-to-sqlite/pull/7", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1345452427, "label": "-a option is used for \"--auth\" and for \"--all\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2214#issuecomment-1844819002", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2214", "id": 1844819002, "node_id": "IC_kwDOBm6k_c5t9bQ6", "user": {"value": 2874, "label": "precipice"}, "created_at": "2023-12-07T07:36:33Z", "updated_at": "2023-12-07T07:36:33Z", "author_association": "NONE", "body": "If I uncheck `expand labels` in the Advanced CSV export dialog, the error does not occur. Re-checking that box and re-running the export does cause the error to occur.\r\n\r\n![CleanShot 2023-12-06 at 23 34 58@2x](https://github.com/simonw/datasette/assets/2874/12c6c241-35ce-4ded-8dc7-fc250d809ed9)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 2029908157, "label": "CSV export fails for some `text` foreign key references"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/269#issuecomment-859940977", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/269", "id": 859940977, "node_id": "MDEyOklzc3VlQ29tbWVudDg1OTk0MDk3Nw==", "user": {"value": 4068, "label": "frafra"}, "created_at": "2021-06-11T22:33:08Z", "updated_at": "2021-06-11T22:33:08Z", "author_association": "NONE", "body": "`true` and `false` json values are cast to integer, which is not optimal.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 919250621, "label": "bool type not supported"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/270#issuecomment-860031071", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/270", "id": 860031071, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MDAzMTA3MQ==", "user": {"value": 4068, "label": "frafra"}, "created_at": "2021-06-12T10:00:24Z", "updated_at": "2021-06-12T10:00:24Z", "author_association": "NONE", "body": "Sure, I am sorry if my message hasn't been clear enough. I am also a new user :)\r\n\r\nAt the beginning, I just call `sqlite-utils insert \"$db\" \"$table\" \"$jsonfile\"` to create the database. 
sqlite-utils convert JSON values into `TEXT`, when it tries to determine the schema automatically. I then try to transform the table to set `JSON` as type:\r\n\r\n```\r\nsqlite-utils transform species.sqlite species --type criteria json\r\nUsage: sqlite-utils transform [OPTIONS] PATH TABLE\r\nTry 'sqlite-utils transform --help' for help.\r\n\r\nError: Invalid value for '--type': 'json' is not one of 'INTEGER', 'TEXT', 'FLOAT', 'BLOB'.\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 919314806, "label": "Cannot set type JSON"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/269#issuecomment-860031217", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/269", "id": 860031217, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MDAzMTIxNw==", "user": {"value": 4068, "label": "frafra"}, "created_at": "2021-06-12T10:01:53Z", "updated_at": "2021-06-12T10:01:53Z", "author_association": "NONE", "body": "`sqlite-utils transform` does not allow setting the column type to boolean:\r\n```\r\nError: Invalid value for '--type': 'bool' is not one of 'INTEGER', 'TEXT', 'FLOAT', 'BLOB'.\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 919250621, "label": "bool type not supported"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1286#issuecomment-860047794", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1286", "id": 860047794, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MDA0Nzc5NA==", "user": {"value": 4068, "label": "frafra"}, "created_at": "2021-06-12T12:36:15Z", "updated_at": "2021-06-12T12:36:15Z", "author_association": "NONE", "body": "@mroswell That is a very nice solution. 
I wonder if custom classes, like `col-columnName-value` could be automatically added to cells when facets on such column are enabled, to allow custom styling without having to modify templates or add custom JavaScript code.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 849220154, "label": "Better default display of arrays of items"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1375#issuecomment-860548546", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1375", "id": 860548546, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MDU0ODU0Ng==", "user": {"value": 4068, "label": "frafra"}, "created_at": "2021-06-14T09:41:59Z", "updated_at": "2021-06-14T09:41:59Z", "author_association": "NONE", "body": "> There is a feature for this at the moment, but it's a little bit hidden: you can use `?_json=col` to tell\r\n> Datasette that you would like a specific column to be exported as nested JSON: https://docs.datasette.io/en/stable/json_api.html#special-json-arguments\r\n\r\nThanks :)\r\n \r\n> I considered trying to make this automatic - so it detects columns that appear to contain valid JSON and outputs them as nested objects - but the problem with that is that it can lead to inconsistent results - you might hit the API and find that not every column contains valid JSON (compared to the previous day) resulting in the API retuning string instead of the expected dictionary and breaking your code.\r\n\r\nIf a developer is not sure if the JSON fields are valid, but then retrieves and parse them, it should handle errors too. Handling inconsistent data is necessary due to the nature of SQLite. A global or dataset option to render the data as they have been defined (JSON, boolean, etc.) when requesting JSON could allow the user to download a regular JSON from the browser without having to rely on APIs. 
I would guess someone could just make a custom template with an extra JSON-parsed download button otherwise :)", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 919508498, "label": "JSON export dumps JSON fields as TEXT"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/270#issuecomment-862574390", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/270", "id": 862574390, "node_id": "MDEyOklzc3VlQ29tbWVudDg2MjU3NDM5MA==", "user": {"value": 4068, "label": "frafra"}, "created_at": "2021-06-16T17:34:49Z", "updated_at": "2021-06-16T17:34:49Z", "author_association": "NONE", "body": "Sorry, I got confused because SQLite has a JSON column type, even if it is treated as TEXT, and I though automatic facets were available for JSON arrays stored as JSON only :)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 919314806, "label": "Cannot set type JSON"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/438#issuecomment-1139379923", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/438", "id": 1139379923, "node_id": "IC_kwDOCGYnMM5D6Y7T", "user": {"value": 4068, "label": "frafra"}, "created_at": "2022-05-27T08:05:01Z", "updated_at": "2022-05-27T08:05:01Z", "author_association": "NONE", "body": "I tried to debug it using `pdb`, but it looks `sqlite-utils` catches the exception, so it is not quick to figure out where the failure is happening.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250161887, "label": "illegal UTF-16 surrogate"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/438#issuecomment-1139392769", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/438", "id": 1139392769, "node_id": "IC_kwDOCGYnMM5D6cEB", "user": {"value": 4068, "label": "frafra"}, "created_at": "2022-05-27T08:21:53Z", "updated_at": "2022-05-27T08:21:53Z", "author_association": "NONE", "body": "Argument were specified in the wrong order. 
`PATH TABLE FILE` can be misleading :)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250161887, "label": "illegal UTF-16 surrogate"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1139426398", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1139426398, "node_id": "IC_kwDOCGYnMM5D6kRe", "user": {"value": 4068, "label": "frafra"}, "created_at": "2022-05-27T09:04:05Z", "updated_at": "2022-05-27T10:44:54Z", "author_association": "NONE", "body": "This code works:\r\n\r\n```python\r\nimport csv\r\nimport sqlite_utils\r\ndb = sqlite_utils.Database(\"test.db\")\r\nreader = csv.DictReader(open(\"csv\", encoding=\"utf-16-le\").read().split(\"\\r\\n\"), delimiter=\";\")\r\ndb[\"test\"].insert_all(reader, pk=\"Id\")\r\n```\r\n\r\nI used `iconv` to change the encoding; sqlite-utils can import the resulting file, even if it stops at 98 %:\r\n\r\n```\r\nsqlite-utils insert --csv test test.db clean \r\n [------------------------------------] 0%\r\n [###################################-] 98% 00:00:00\r\n```\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1139484453", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/433", "id": 1139484453, "node_id": "IC_kwDOCGYnMM5D6ycl", "user": {"value": 4068, "label": "frafra"}, "created_at": "2022-05-27T10:20:08Z", "updated_at": "2022-05-27T10:20:08Z", "author_association": "NONE", "body": "I can confirm. This only happens with sqlite-utils. I am using gnome-terminal with bash.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1239034903, "label": "CLI eats my cursor"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1818#issuecomment-1257290709", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1818", "id": 1257290709, "node_id": "IC_kwDOBm6k_c5K8LvV", "user": {"value": 5363, "label": "nelsonjchen"}, "created_at": "2022-09-25T22:17:06Z", "updated_at": "2022-09-25T22:17:06Z", "author_association": "NONE", "body": "I wonder if having an option for displaying the max row id might help too. Not accurate especially if something was deleted, but useful for DBs as a dump. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1384549993, "label": "Setting to turn off table row counts entirely"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1818#issuecomment-1258738740", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1818", "id": 1258738740, "node_id": "IC_kwDOBm6k_c5LBtQ0", "user": {"value": 5363, "label": "nelsonjchen"}, "created_at": "2022-09-26T22:52:45Z", "updated_at": "2022-09-26T22:55:57Z", "author_association": "NONE", "body": "thoughts on order of precedence to use:\r\n\r\n* sqlite-utils count, if present. 
closest thing to a standard i guess.\r\n* row(max_id) if like, the first and/or last x amount of rows ids are all contiguous. kind of a cheap/dumb/imperfect heuristic to see if the table is dump/not dump. if the check passes, still stick on `est.` after the display.\r\n* count(*) if enabled in datasette ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1384549993, "label": "Setting to turn off table row counts entirely"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-791053721", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32", "id": 791053721, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MTA1MzcyMQ==", "user": {"value": 6213, "label": "dsisnero"}, "created_at": "2021-03-05T00:31:27Z", "updated_at": "2021-03-05T00:31:27Z", "author_association": "NONE", "body": "I am getting the same thing for US West (N. California) us-west-1", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 803333769, "label": "KeyError: 'Contents' on running upload"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2054#issuecomment-1499797384", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2054", "id": 1499797384, "node_id": "IC_kwDOBm6k_c5ZZReI", "user": {"value": 6213, "label": "dsisnero"}, "created_at": "2023-04-07T00:46:50Z", "updated_at": "2023-04-07T00:46:50Z", "author_association": "NONE", "body": "you should have a look at Roda written in ruby . ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1657861026, "label": "Make detailed notes on how table, query and row views work right now"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1298#issuecomment-1125083348", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1298", "id": 1125083348, "node_id": "IC_kwDOBm6k_c5DD2jU", "user": {"value": 7150, "label": "llimllib"}, "created_at": "2022-05-12T14:43:51Z", "updated_at": "2022-05-12T14:43:51Z", "author_association": "NONE", "body": "user report: I found this issue because the first time I tried to use datasette for real, I displayed a large table, and thought there was no horizontal scroll bar at all. I didn't even consider that I had to scroll all the way to the end of the page to find it.\r\n\r\nJust chipping in to say that this confused me, and I didn't even find the scroll bar until after I saw this issue. 
I don't know what the right answer is, but IMO the UI should suggest to the user that there is a way to view the data that's hidden to the right.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855476501, "label": "improve table horizontal scroll experience"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-359697938", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 359697938, "node_id": "MDEyOklzc3VlQ29tbWVudDM1OTY5NzkzOA==", "user": {"value": 7193, "label": "gijs"}, "created_at": "2018-01-23T07:17:56Z", "updated_at": "2018-01-23T07:17:56Z", "author_association": "NONE", "body": "\ud83d\udc4d I'd like this too! ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/417#issuecomment-1074256603", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/417", "id": 1074256603, "node_id": "IC_kwDOCGYnMM5AB9rb", "user": {"value": 9954, "label": "blaine"}, "created_at": "2022-03-21T18:19:41Z", "updated_at": "2022-03-21T18:19:41Z", "author_association": "NONE", "body": "That makes sense; just a little hint that points folks towards doing the right thing might be helpful!\r\n\r\nfwiw, the reason I was using jq in the first place was just a quick way to extract one attribute from an actual JSON array. When I initially imported it, I got a table with a bunch of embedded JSON values, rather than a native table, because each array entry had two attributes, one with the data I _actually_ wanted. Not sure how common a use-case this is, though (and easily fixed, aside from the jq weirdness!)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1175744654, "label": "insert fails on JSONL with whitespace"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/pocket-to-sqlite/issues/10#issuecomment-1239516561", "issue_url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/10", "id": 1239516561, "node_id": "IC_kwDODLZ_YM5J4YWR", "user": {"value": 11887, "label": "ashanan"}, "created_at": "2022-09-07T15:07:38Z", "updated_at": "2022-09-07T15:07:38Z", "author_association": "NONE", "body": "Thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1246826792, "label": "When running `auth` command, don't overwrite an existing auth.json file"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/328#issuecomment-925300720", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/328", "id": 925300720, "node_id": "IC_kwDOCGYnMM43Jvfw", "user": {"value": 12752, "label": "gravis"}, "created_at": "2021-09-22T20:21:33Z", "updated_at": "2021-09-22T20:21:33Z", "author_association": "NONE", "body": "Wow, that was fast! 
Thank you!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1004613267, "label": "Invalid JSON output when no rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-617208503", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 617208503, "node_id": "MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw==", "user": {"value": 12976, "label": "nkirsch"}, "created_at": "2020-04-21T14:16:24Z", "updated_at": "2020-04-21T14:16:24Z", "author_association": "NONE", "body": "@eads I'm interested in helping, if there's still a need...", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/474#issuecomment-1229449018", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/474", "id": 1229449018, "node_id": "IC_kwDOCGYnMM5JR-c6", "user": {"value": 14294, "label": "hubgit"}, "created_at": "2022-08-28T12:40:13Z", "updated_at": "2022-08-28T12:40:13Z", "author_association": "NONE", "body": "Creating the table before inserting is a useful workaround, thanks. It does require figuring out the `create table` syntax and listing all the fields manually, though, which loses some of the magic of sqlite-utils.\r\n\r\nI was expecting to find an option like `--headers=foo,bar` (or `--header-row='foo\\tbar'`, if that would be easier) - not necessarily that exact syntax, but something that would essentially be treated the same as having a header row in the file.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1353074021, "label": "Add an option for specifying column names when inserting CSV data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-1236200834", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 1236200834, "node_id": "IC_kwDOCGYnMM5Jru2C", "user": {"value": 14294, "label": "hubgit"}, "created_at": "2022-09-03T21:26:32Z", "updated_at": "2022-09-03T21:26:32Z", "author_association": "NONE", "body": "I was looking for something like this today, for extracting columns containing objects (and arrays of objects) into separate tables. 
\r\n\r\nWould it make sense (especially for the fields containing arrays of objects) to create a one-to-many relationship, where each row of the newly created table would contain the id of the row that originally contained it?\r\n\r\nIf the extracted objects have a unique id and are repeated, it could even create a many-to-many relationship, with a third table for the joins.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-571412923", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16", "id": 571412923, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MTQxMjkyMw==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-01-07T03:06:46Z", "updated_at": "2020-01-07T03:06:46Z", "author_association": "NONE", "body": "I re-tried after doing `auth`, and I get the same result.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546051181, "label": "Exception running first command: IndexError: list index out of range"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-602136481", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16", "id": 602136481, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjEzNjQ4MQ==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-03-22T02:08:57Z", "updated_at": "2020-03-22T02:08:57Z", "author_association": "NONE", "body": "I'd love to be using your library as a better cached gh layer for a new library I have built, replacing large parts of the very ugly https://github.com/jayvdb/pypidb/blob/master/pypidb/_github.py , and then probably being able to rebuild the setuppy chunk as a feature here at a later stage.\r\n\r\nI would also need tokenless and netrc support, but I would be happy to add those bits.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546051181, "label": "Exception running first command: IndexError: list index out of range"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1522#issuecomment-974607456", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1522", "id": 974607456, "node_id": "IC_kwDOBm6k_c46F1Rg", "user": {"value": 17906, "label": "mrchrisadams"}, "created_at": "2021-11-20T07:10:11Z", "updated_at": "2021-11-20T07:10:11Z", "author_association": "NONE", "body": "As a a sanity check, would it be worth looking at trying to push the multi-process container on another provider of a knative / cloud run / tekton ? 
I have a somewhat similar use case for a future proejct, so i'm been very grateful to you sharing all the progress in this issue.\r\n\r\nAs I understand it, Scaleway also offer a very similar offering using what appear to be many similar components that might at least see if it's an issue with more than one knative based FaaS provider\r\n\r\nhttps://www.scaleway.com/en/serverless-containers/\r\nhttps://developers.scaleway.com/en/products/containers/api/#main-features\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1058896236, "label": "Deploy a live instance of demos/apache-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/7#issuecomment-906015471", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/7", "id": 906015471, "node_id": "IC_kwDOD079W842ALLv", "user": {"value": 18232, "label": "dkam"}, "created_at": "2021-08-26T02:01:01Z", "updated_at": "2021-08-26T02:01:01Z", "author_association": "NONE", "body": "Perceptual hashes might be what you're after : http://phash.org", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 602585497, "label": "Integrate image content hashing"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1035717429", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 1035717429, "node_id": "IC_kwDOD079W849u8s1", "user": {"value": 18504, "label": "harperreed"}, "created_at": "2022-02-11T01:55:38Z", "updated_at": "2022-02-11T01:55:38Z", "author_association": "NONE", "body": "I would love this merged! ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2147#issuecomment-1687433388", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2147", "id": 1687433388, "node_id": "IC_kwDOBm6k_c5klDCs", "user": {"value": 18899, "label": "jackowayed"}, "created_at": "2023-08-22T05:05:33Z", "updated_at": "2023-08-22T05:05:33Z", "author_association": "NONE", "body": "Thanks for all this! You're totally right that the ASGI option is doable, if a bit low level and coupled to the current URI design. I'm totally fine with that being the final answer.\r\n\r\nprocess_view is interesting and in the general direction of what I had in mind.\r\n\r\nA somewhat less powerful idea: Is there value in giving a hook for just the query that's about to be run? Maybe I'm thinking a little narrowly about this problem I decided I wanted to solve, but I could see other uses for a hook of the sketch below:\r\n\r\n```\r\ndef prepare_query(database, table, query):\r\n \"\"\"Modify query that is about to be run in some way. Return the (possibly rewritten) query to run, or None to disallow running the query\"\"\"\r\n```\r\n(Maybe you actually want to return a tuple so there can be an error message when you disallow, or something.)\r\n\r\nMaybe it's too narrowly useful and some of the other pieces of datasette obviate some of these ideas, but off the cuff I could imagine using it to:\r\n* Require a LIMIT. 
Either fail the query or add the limit if it's not there.\r\n* Do logging, like my usecase.\r\n* Do other analysis on whether you want to allow the query to run; a linter? query complexity? \r\n\r\nDefinitely feel free to say no, or not now. This is all me just playing around with what datasette and its plugin architecture can do with toy ideas, so don't let me push you to commit to a hook you don't feel confident fits well in the design.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1858228057, "label": "Plugin hook for database queries that are run"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2147#issuecomment-1690955706", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2147", "id": 1690955706, "node_id": "IC_kwDOBm6k_c5kye-6", "user": {"value": 18899, "label": "jackowayed"}, "created_at": "2023-08-24T03:54:35Z", "updated_at": "2023-08-24T03:54:35Z", "author_association": "NONE", "body": "That's fair. The best idea I can think of is that if a plugin wanted to limit intensive queries, it could add LIMITs or something. A hook that gives you visibility of queries and maybe the option to reject felt a little more limited than the existing plugin hooks, so I was trying to think of what else one might want to do while looking at to-be-run queries.\r\n\r\nBut without a real motivating example, I see why you don't want to add that.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1858228057, "label": "Plugin hook for database queries that are run"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/26#issuecomment-1141711418", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/26", "id": 1141711418, "node_id": "IC_kwDOCGYnMM5EDSI6", "user": {"value": 19304, "label": "nileshtrivedi"}, "created_at": "2022-05-31T06:21:15Z", "updated_at": "2022-05-31T06:21:15Z", "author_association": "NONE", "body": "I ran into this. My use case has a JSON file with array of `book` objects with a key called `reviews` which is also an array of objects. My JSON is human-edited and does not specify IDs for either books or reviews. 
Because sqlite-utils does not support inserting nested objects, I instead have to maintain two separate CSV files with `id` column in `books.csv` and `book_id` column in reviews.csv.\r\n\r\nI think the right way to declare the relationship while inserting a JSON might be to describe the relationship:\r\n\r\n`sqlite-utils insert data.db books mydata.json --hasmany reviews --hasone author --manytomany tags`\r\n\r\nThis is relying on the assumption that foreign keys can point to `rowid` primary key.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455486286, "label": "Mechanism for turning nested JSON into foreign keys / many-to-many"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/153#issuecomment-348252037", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/153", "id": 348252037, "node_id": "MDEyOklzc3VlQ29tbWVudDM0ODI1MjAzNw==", "user": {"value": 20264, "label": "ftrain"}, "created_at": "2017-11-30T16:59:00Z", "updated_at": "2017-11-30T16:59:00Z", "author_association": "NONE", "body": "WOW!\n\n--\nPaul Ford // (646) 369-7128 // @ftrain\n\nOn Thu, Nov 30, 2017 at 11:47 AM, Simon Willison \nwrote:\n\n> Remaining work on this now lives in a milestone:\n> https://github.com/simonw/datasette/milestone/6\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> ,\n> or mute the thread\n> \n> .\n>\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 276842536, "label": "Ability to customize presentation of specific columns in HTML view"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/54#issuecomment-524300388", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/54", "id": 524300388, "node_id": "MDEyOklzc3VlQ29tbWVudDUyNDMwMDM4OA==", "user": {"value": 20264, "label": "ftrain"}, "created_at": "2019-08-23T12:41:09Z", "updated_at": "2019-08-23T12:41:09Z", "author_association": "NONE", "body": "Extremely cool and easy to understand. 
Thank you!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 480961330, "label": "Ability to list views, and to access db[\"view_name\"].rows / rows_where / etc"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1669877769", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 1669877769, "node_id": "IC_kwDOD079W85jiFAJ", "user": {"value": 22996, "label": "chrismytton"}, "created_at": "2023-08-08T15:52:52Z", "updated_at": "2023-08-08T15:52:52Z", "author_association": "NONE", "body": "You can also install this with pip using this oneliner:\r\n\r\n```\r\npip install git+https://github.com/RhetTbull/dogsheep-photos.git@update_for_bigsur\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/79#issuecomment-1847317568", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79", "id": 1847317568, "node_id": "IC_kwDODFdgUs5uG9RA", "user": {"value": 23789, "label": "nedbat"}, "created_at": "2023-12-08T14:50:13Z", "updated_at": "2023-12-08T14:50:13Z", "author_association": "NONE", "body": "Adding `&per_page=100` would reduce the number of API requests by 3x.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1570375808, "label": "Deploy demo job is failing due to rate limit"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/991#issuecomment-712855389", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/991", "id": 712855389, "node_id": "MDEyOklzc3VlQ29tbWVudDcxMjg1NTM4OQ==", "user": {"value": 24740, "label": "furilo"}, "created_at": "2020-10-20T13:36:41Z", "updated_at": "2020-10-20T13:36:41Z", "author_association": "NONE", "body": "Here is one quick sketch (done in Figma :P) for an idea: a possible filter to switch between showing all tables from all databases, or grouping tables by database. \r\n\r\n(the switch is interactive) \r\n\r\nAll tables: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=1%3A2&viewport=536%2C348%2C0.5&scaling=min-zoom\r\n\r\nGrouped: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=3%3A974&viewport=536%2C348%2C0.5&scaling=min-zoom\r\n\r\nWhen only 1 database: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=1%3A162&viewport=536%2C348%2C0.5&scaling=min-zoom\r\n\r\nIs this is useful, I can send some more suggestions/sketches. 
\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 714377268, "label": "Redesign application homepage"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791089881", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 791089881, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MTA4OTg4MQ==", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-03-05T02:03:19Z", "updated_at": "2021-03-05T02:03:19Z", "author_association": "NONE", "body": "I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout.\r\n\r\nIs it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-849708617", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 849708617, "node_id": "MDEyOklzc3VlQ29tbWVudDg0OTcwODYxNw==", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-05-27T15:01:42Z", "updated_at": "2021-05-27T15:01:42Z", "author_association": "NONE", "body": "Any updates?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-884672647", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 884672647, "node_id": "IC_kwDODFE5qs40uwiH", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-07-22T05:56:31Z", "updated_at": "2021-07-22T14:03:08Z", "author_association": "NONE", "body": "How does this commit look? https://github.com/maxhawkins/google-takeout-to-sqlite/commit/72802a83fee282eb5d02d388567731ba4301050d\r\n\r\nIt seems that Takeout's mbox format is pretty simple, so we can get away with just splitting the file on lines begining with `From `. My commit just splits the file every time a line starts with `From ` and uses `email.message_from_bytes` to parse each chunk.\r\n\r\nI was able to load a 12GB takeout mbox without the program using more than a couple hundred MB of memory during the import process. 
It does make us lose the progress bar, but maybe I can add that back in a later commit.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885022230", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 885022230, "node_id": "IC_kwDODFE5qs40wF4W", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-07-22T15:51:46Z", "updated_at": "2021-07-22T15:51:46Z", "author_association": "NONE", "body": "One thing I noticed is this importer doesn't save attachments along with the body of the emails. It would be nice if those got stored as blobs in a separate attachments table so attachments can be included while fetching search results.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885094284", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 885094284, "node_id": "IC_kwDODFE5qs40wXeM", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-07-22T17:41:32Z", "updated_at": "2021-07-22T17:41:32Z", "author_association": "NONE", "body": "I added a follow-up commit that deals with emails that don't have a `Date` header: https://github.com/maxhawkins/google-takeout-to-sqlite/commit/4bc70103582c10802c85a523ef1e99a8a2154aa9", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-888075098", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 888075098, "node_id": "IC_kwDODFE5qs407vNa", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-07-28T07:18:56Z", "updated_at": "2021-07-28T07:18:56Z", "author_association": "NONE", "body": "> I'm not sure why but my most recent import, when displayed in Datasette, looks like this:\r\n> \r\n> \"mbox__mbox_emails__753_446_rows\"\r\n\r\nI did some investigation into this issue and made a fix [here](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8ee555c2889a38ff42b95664ee074b4a01a82f06). The problem was that some messages (like gchat logs) don't have a `Message-Id` and we need to use `X-GM-THRID` as the pkey instead.\r\n\r\n@simonw While looking into this I found something unexpected about how sqlite_utils handles upserts if the pkey column is `None`. When the pkey is NULL I'd expect the function to either use rowid or throw an exception. 
Instead, it seems upsert_all creates a row where all columns are NULL instead of using the values provided as parameters.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-894581223", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 894581223, "node_id": "IC_kwDODFE5qs41Ujnn", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-08-07T00:57:48Z", "updated_at": "2021-08-07T00:57:48Z", "author_association": "NONE", "body": "Just added two more fixes:\r\n\r\n* Added parsing for rfc 2047 encoded unicode headers\r\n* Body is now stored as TEXT rather than a BLOB regardless of what order the messages are parsed in.\r\n\r\nI was able to run this on my Takeout export and everything seems to work fine. @simonw let me know if this looks good to merge.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-896378525", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 896378525, "node_id": "IC_kwDODFE5qs41baad", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-08-10T23:28:45Z", "updated_at": "2021-08-10T23:28:45Z", "author_association": "NONE", "body": "I added parsing of text/html emails using BeautifulSoup.\r\n\r\nAround half of the emails in my archive don't include a text/plain payload so adding html parsing makes a good chunk of them searchable.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1003437288", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 1003437288, "node_id": "IC_kwDODFE5qs47zzzo", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2021-12-31T19:06:20Z", "updated_at": "2021-12-31T19:06:20Z", "author_association": "NONE", "body": "> @maxhawkins how hard would it be to add an entry to the table that includes the HTML version of the email, if it exists? I just attempted your the PR branch on a very small mbox file, and it worked great. My use case is a research project and I need to access more than just the body plain text.\r\n\r\nShouldn't be hard. 
The easiest way is probably to remove the `if body.content_type == \"text/html\"` clause from [utils.py:254](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8e6d487b697ce2e8ad885acf613a157bfba84c59#diff-25ad9dd1ced1b8bfc37fda8444819c803232c08891e4af3d4064aa205d8174eaR254) and just return content directly without parsing.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1710380941", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8", "id": 1710380941, "node_id": "IC_kwDODFE5qs5l8leN", "user": {"value": 28565, "label": "maxhawkins"}, "created_at": "2023-09-07T15:39:59Z", "updated_at": "2023-09-07T15:39:59Z", "author_association": "NONE", "body": "> @maxhawkins curious why you didn't use the stdlib `mailbox` to parse the `mbox` files?\r\n\r\nMailbox parses the entire mbox into memory. Using the lower level library lets us stream the emails in one at a time to support larger archives. Both libraries are in the stdlib.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 954546309, "label": "Add Gmail takeout mbox import (v2)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/736#issuecomment-620401172", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/736", "id": 620401172, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMDQwMTE3Mg==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-28T06:09:28Z", "updated_at": "2020-04-28T06:09:28Z", "author_association": "NONE", "body": "> Would you mind trying publishing your database using one of the other options - Heroku, Cloud Run or https://fly.io/ - and see if you have the same bug there?\r\n\r\nIt works in heroku, than might be a bug with datasette-publish-now.\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 606720674, "label": "strange behavior using accented characters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/735#issuecomment-620401443", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/735", "id": 620401443, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMDQwMTQ0Mw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-28T06:10:20Z", "updated_at": "2020-04-28T06:10:20Z", "author_association": "NONE", "body": "It works in heroku, than might be a bug with datasette-publish-now.\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 605806386, "label": "Error when I click on \"View and edit SQL\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-621008152", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 621008152, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMTAwODE1Mg==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-29T06:05:02Z", "updated_at": "2020-04-29T06:05:02Z", 
"author_association": "NONE", "body": "Hi @simonw , I have installed it and I have the below errors.\r\n\r\n> Is it possible that your /tmp directory is on a different volume from the template folder? That could cause a problem with the symlinks.\r\n\r\nNo, /tmp folder is in the same volume. \r\n\r\nThank you\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py\", line 607, in link_or_copy_directory\r\n shutil.copytree(src, dst, copy_function=os.link)\r\n File \"/usr/lib/python3.7/shutil.py\", line 365, in copytree\r\n raise Error(errors)\r\nshutil.Error: [('/var/youtubeComunePalermo/processing/./template/base.html', '/tmp/tmpcqv_1i5d/templates/base.html', \"[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/base.html' -> '/tmp/tmpcqv_1i5d/templates/base.html'\"), ('/var/youtubeComunePalermo/processing/./template/index.html', '/tmp/tmpcqv_1i5d/templates/index.html', \"[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/index.html' -> '/tmp/tmpcqv_1i5d/templates/index.html'\")]\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/aborruso/.local/bin/datasette\", line 8, in \r\n sys.exit(cli())\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 610, in invoke return callback(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 103, in heroku\r\n extra_metadata,\r\n File \"/usr/lib/python3.7/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 191, in temporary_heroku_directory\r\n os.path.join(tmp.name, \"templates\"),\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py\", line 609, in link_or_copy_directory\r\n shutil.copytree(src, dst)\r\n File \"/usr/lib/python3.7/shutil.py\", line 321, in copytree\r\n os.makedirs(dst)\r\n File \"/usr/lib/python3.7/os.py\", line 221, in makedirs\r\n mkdir(name, mode)\r\nFileExistsError: [Errno 17] File exists: '/tmp/tmpcqv_1i5d/templates'\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-621011554", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 621011554, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMTAxMTU1NA==", "user": {"value": 30607, 
"label": "aborruso"}, "created_at": "2020-04-29T06:17:26Z", "updated_at": "2020-04-29T06:17:26Z", "author_association": "NONE", "body": "A stupid note: I have no `tmpcqv_1i5d` folder in in `/tmp`.\r\n\r\nIt seems to me that it does not create any `/tmp/tmpcqv_1i5d/templates` folder (or other name folder, inside /tmp)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-621030783", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 621030783, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMTAzMDc4Mw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-29T07:16:27Z", "updated_at": "2020-04-29T07:16:27Z", "author_association": "NONE", "body": "Hi @simonw it's debian as Windows Subsystem for Linux \r\n\r\n```\r\nPRETTY_NAME=\"Pengwin\"\r\nNAME=\"Pengwin\"\r\nVERSION_ID=\"10\"\r\nVERSION=\"10 (buster)\"\r\nID=debian\r\nID_LIKE=debian\r\nHOME_URL=\"https://github.com/whitewaterfoundry/Pengwin\"\r\nSUPPORT_URL=\"https://github.com/whitewaterfoundry/Pengwin\"\r\nBUG_REPORT_URL=\"https://github.com/whitewaterfoundry/Pengwin\"\r\nVERSION_CODENAME=buster\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625060561", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625060561, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTA2MDU2MQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-07T06:38:24Z", "updated_at": "2020-05-07T06:38:24Z", "author_association": "NONE", "body": "Hi @simonw probably I could try to do it in Python for windows. I do not like to do these things in win enviroment.\r\n\r\nBecause probably WSL Linux env (in which I do a lot of great things) is not an environment that will be tested for datasette.\r\n\r\nIn win I shouldn't have any problems. 
Am I right?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625066073", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625066073, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTA2NjA3Mw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-07T06:53:09Z", "updated_at": "2020-05-07T06:53:09Z", "author_association": "NONE", "body": "@simonw another error starting from Windows.\r\n\r\nI run\r\n\r\n```\r\ndatasette publish heroku -n comunepa --template-dir template commissioniComunePalermo.db\r\n```\r\n\r\nAnd I have\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"c:\\python37\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\python37\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\Scripts\\datasette.exe\\__main__.py\", line 9, in \r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 53, in heroku\r\n line.split()[0] for line in check_output([\"heroku\", \"plugins\"]).splitlines()\r\n File \"c:\\python37\\lib\\subprocess.py\", line 395, in check_output\r\n **kwargs).stdout\r\n File \"c:\\python37\\lib\\subprocess.py\", line 472, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"c:\\python37\\lib\\subprocess.py\", line 775, in __init__\r\n restore_signals, start_new_session)\r\n File \"c:\\python37\\lib\\subprocess.py\", line 1178, in _execute_child\r\n startupinfo)\r\nFileNotFoundError: [WinError 2] The specified file could not be found\r\n```\r\n\r\n\r\n[files.zip](https://github.com/simonw/datasette/files/4591173/files.zip)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625083715", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625083715, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTA4MzcxNQ==", "user": {"value": 30607, "label": 
"aborruso"}, "created_at": "2020-05-07T07:34:18Z", "updated_at": "2020-05-07T07:34:18Z", "author_association": "NONE", "body": "In Windows I'm not very strong. I use debian (inside WSL).\r\n\r\nHowever these are the possible steps:\r\n\r\n- I have installed Python 3 for win (I have 3.7.3);\r\n- I have installed heroku cli for win64 and logged in;\r\n- I have installed datasette running `python -m pip install --upgrade --user datasette`.\r\n\r\nIt's a very basic Python env that I do not use. This time only to reach my goal: try to publish using custom template", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625091976", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625091976, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTA5MTk3Ng==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-07T07:51:25Z", "updated_at": "2020-05-07T07:51:25Z", "author_association": "NONE", "body": "I have installed `heroku plugins:install heroku-builds`, but I have the same error.\r\n\r\nThen I have removed from `datasette\\publish\\heroku.py`\r\n\r\n```python\r\n # Check for heroku-builds plugin\r\n plugins = [\r\n line.split()[0] for line in check_output([\"heroku\", \"plugins\"]).splitlines()\r\n ]\r\n if b\"heroku-builds\" not in plugins:\r\n click.echo(\r\n \"Publishing to Heroku requires the heroku-builds plugin to be installed.\"\r\n )\r\n click.confirm(\r\n \"Install it? (this will run `heroku plugins:install heroku-builds`)\",\r\n abort=True,\r\n )\r\n call([\"heroku\", \"plugins:install\", \"heroku-builds\"])\r\n```\r\n\r\nAnd now I have\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 210, in temporary_heroku_directory\r\n yield\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 96, in heroku\r\n list_output = check_output([\"heroku\", \"apps:list\", \"--json\"]).decode(\r\n File \"c:\\python37\\lib\\subprocess.py\", line 395, in check_output\r\n **kwargs).stdout\r\n File \"c:\\python37\\lib\\subprocess.py\", line 472, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"c:\\python37\\lib\\subprocess.py\", line 775, in __init__\r\n restore_signals, start_new_session)\r\n File \"c:\\python37\\lib\\subprocess.py\", line 1178, in _execute_child\r\n startupinfo)\r\nFileNotFoundError: [WinError 2] The specified file could not be found\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\python37\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\python37\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\Scripts\\datasette.exe\\__main__.py\", line 9, in \r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File 
\"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 120, in heroku\r\n call([\"heroku\", \"builds:create\", \"-a\", app_name, \"--include-vcs-ignore\"])\r\n File \"c:\\python37\\lib\\contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 213, in temporary_heroku_directory\r\n tmp.cleanup()\r\n File \"c:\\python37\\lib\\tempfile.py\", line 809, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\python37\\lib\\shutil.py\", line 513, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\python37\\lib\\shutil.py\", line 401, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\python37\\lib\\shutil.py\", line 399, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] Unable to access file. The file is being used by another process: 'C:\\\\Users\\\\aborr\\\\AppData\\\\Local\\\\Temp\\\\tmpkcxy8i_q'\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-632249565", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 632249565, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjI0OTU2NQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-21T17:47:40Z", "updated_at": "2020-05-21T17:47:40Z", "author_association": "NONE", "body": "@simonw can I test it know? 
What do I need to do to update it?\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-632255088", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 632255088, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjI1NTA4OA==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-21T17:58:51Z", "updated_at": "2020-05-21T17:58:51Z", "author_association": "NONE", "body": "Thank you very much!!\r\n\r\nI will try it and write back here", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-632305868", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 632305868, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjMwNTg2OA==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-21T19:43:23Z", "updated_at": "2020-05-21T19:43:23Z", "author_association": "NONE", "body": "@simonw now I get\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/aborruso/.local/bin/datasette\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 103, in heroku\r\n extra_metadata,\r\n File \"/usr/lib/python3.7/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 191, in temporary_heroku_directory\r\n os.path.join(tmp.name, \"templates\"),\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py\", line 605, in link_or_copy_directory\r\n shutil.copytree(src, dst, copy_function=os.link, dirs_exist_ok=True)\r\nTypeError: copytree() got an unexpected keyword argument 'dirs_exist_ok'\r\n```\r\n\r\nShould I open a new issue?\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/744#issuecomment-634283355", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 634283355, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNDI4MzM1NQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-26T21:15:34Z", "updated_at": "2020-05-26T21:15:34Z", "author_association": "NONE", "body": "> Oh no! It looks like `dirs_exist_ok` is Python 3.8 only. This is a bad fix, it needs to work on older Python's too. Re-opening.\r\n\r\nThank you very much", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-634446887", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 634446887, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNDQ0Njg4Nw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-27T06:01:28Z", "updated_at": "2020-05-27T06:01:28Z", "author_association": "NONE", "body": "Dear @simonw thank you for your time, now IT WORKS!!!\r\n\r\nI hope that this edit to datasette code is not for an exceptional case (my PC configuration) and that it will be useful to other users. \r\n\r\nThank you again!!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-635386935", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 635386935, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNTM4NjkzNQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-28T14:32:53Z", "updated_at": "2020-05-28T14:32:53Z", "author_association": "NONE", "body": "Wow, I'm in some way very proud!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/69#issuecomment-710768396", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/69", "id": 710768396, "node_id": "MDEyOklzc3VlQ29tbWVudDcxMDc2ODM5Ng==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-10-17T07:46:59Z", "updated_at": "2020-10-17T07:46:59Z", "author_association": "NONE", "body": "Great @simonw thank you very much", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 534507142, "label": "Feature request: enable extensions loading"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/188#issuecomment-710778368", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/188", "id": 710778368, "node_id": "MDEyOklzc3VlQ29tbWVudDcxMDc3ODM2OA==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-10-17T08:52:58Z", "updated_at": "2020-10-17T08:52:58Z", "author_association": "NONE", "body": "I have done a stupid question.\r\n\r\nIf 
I run\r\n\r\n```\r\nsqlite-utils :memory: \"select spatialite_version()\" --load-extension=/usr/local/lib/mod_spatialite.so\r\n```\r\n\r\nI get `[{\"spatialite_version()\": \"5.0.0\"}]`\r\n\r\nThank you for this great tool", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 723708310, "label": "About loading spatialite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-778008752", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 778008752, "node_id": "MDEyOklzc3VlQ29tbWVudDc3ODAwODc1Mg==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2021-02-12T06:37:34Z", "updated_at": "2021-02-12T06:37:34Z", "author_association": "NONE", "body": "I have used my path; I'm running it from the folder in which I have the db.\n\nMust I use an absolute path?\n\nMust I create exactly that folder?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-778467759", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 778467759, "node_id": "MDEyOklzc3VlQ29tbWVudDc3ODQ2Nzc1OQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2021-02-12T21:35:17Z", "updated_at": "2021-02-12T21:35:17Z", "author_association": "NONE", "body": "Thank you", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1845#issuecomment-1279924827", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1845", "id": 1279924827, "node_id": "IC_kwDOBm6k_c5MShpb", "user": {"value": 30636, "label": "kindly"}, "created_at": "2022-10-16T08:54:53Z", "updated_at": "2022-10-16T08:54:53Z", "author_association": "NONE", "body": "> It was part of a larger idea I was exploring around ensuring Datasette could be used to start interacting with CSV/JSON data out-of-the-box, without needing to first convert that data into SQLite using separate tools.\r\n\r\nThis would be great. My organization deals with very nested JSON open data and I have been wanting to find a way to hook into datasette so that the analysts do not have to convert to sqlite first.\r\n\r\nThis can kind of be done with datasette-lite. 
\r\n\r\nFrom this random nested JSON API:\r\nhttps://api.nobelprize.org/v1/prize.json\r\n\r\nYou can use the API of https://flatterer.herokuapp.com to return a multi table sqlite database:\r\n\r\nhttps://lite.datasette.io/?url=https://flatterer.herokuapp.com/api/convert?output_format=sqlite%26file_url=https://api.nobelprize.org/v1/prize.json\r\n\r\nThis is great fun, but it would be even better if there were some plugin mechanism by which you could feed a local datasette a nested JSON file directly, possibly hooking into other flattening tools for this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1410305897, "label": "Reconsider the Datasette first-run experience"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782745199", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782745199, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0NTE5OQ==", "user": {"value": 30665, "label": "frankieroberto"}, "created_at": "2021-02-20T20:32:03Z", "updated_at": "2021-02-20T20:32:03Z", "author_association": "NONE", "body": "I think it\u2019s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default. Mainly because it allows you to add extra keys in a backwards-compatible way. Also it just seems more expected somehow.\r\n\r\nThe API design guidance for the UK government also recommends this: https://www.gov.uk/guidance/gds-api-technical-and-data-standards#use-json\r\n\r\nI also strongly dislike having versioned APIs (eg with a `/v1/` path prefix), as it invariably means that old versions stop working at some point, even though the bit of the API you\u2019re using might not have changed at all.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 1}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782746755", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782746755, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0Njc1NQ==", "user": {"value": 30665, "label": "frankieroberto"}, "created_at": "2021-02-20T20:44:05Z", "updated_at": "2021-02-20T20:44:05Z", "author_association": "NONE", "body": "Minor suggestion: rename `size` query param to `limit`, to better reflect that it\u2019s a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword?\r\n\r\nI like the idea of specifying a limit of 0 if you don\u2019t want any row data - and returning an empty array under the `rows` key seems fine.\r\n\r\nHave you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don\u2019t parse the JSON. Could be default (can\u2019t be much bigger with gzip?) 
or opt-in.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-783265830", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 783265830, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzI2NTgzMA==", "user": {"value": 30665, "label": "frankieroberto"}, "created_at": "2021-02-22T10:21:14Z", "updated_at": "2021-02-22T10:21:14Z", "author_association": "NONE", "body": "@simonw:\r\n\r\n> The problem there is that ?_size=x isn't actually doing the same thing as the SQL limit keyword.\r\n\r\nInteresting! Although I don't think it matters too much what the underlying implementation is - I more meant that `limit` is familiar to developers conceptually as \"up to and including this number, if they exist\", whereas \"size\" is potentially more ambiguous. However, it's probably no big deal either way.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1204#issuecomment-951731255", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1204", "id": 951731255, "node_id": "IC_kwDOBm6k_c44ukQ3", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-10-26T09:01:28Z", "updated_at": "2021-10-26T09:01:28Z", "author_association": "NONE", "body": "> Writing the tests will be a bit tricky since we need to confirm that the `include_table_top(datasette, database, actor, table)` arguments were all passed correctly but the only thing we get back from the plugin is a list of templates. Maybe encode those values into the template names somehow?\r\n\r\nWhy not return a data structure instead of just a template name?\r\n\r\nI've already done some custom hacking to modify datasette but the plugin mechanism you are building here would be much cleaner than what I've built. I'd be happy to help with testing this PR and fleshing it out further if you are still considering merging this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 793002853, "label": "WIP: Plugin includes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/878#issuecomment-951740637", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/878", "id": 951740637, "node_id": "IC_kwDOBm6k_c44umjd", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-10-26T09:12:15Z", "updated_at": "2021-10-26T09:12:15Z", "author_association": "NONE", "body": "This sounds really ambitious but also really awesome. 
I like the idea that basically any piece of a page could be selectively replaced.\r\n\r\nIt sort of sounds like a python asyncio version of https://github.com/observablehq/runtime", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648435885, "label": "New pattern for views that return either JSON or HTML, available for plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1532#issuecomment-981966693", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1532", "id": 981966693, "node_id": "IC_kwDOBm6k_c46h59l", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-11-29T19:56:52Z", "updated_at": "2021-11-29T19:56:52Z", "author_association": "NONE", "body": "FWIW I've written some web components that consume the json api and I think it's a really nice way to work with datasette. I like the combination with datasette+sqlite as a back-end feeding data to a front-end that's entirely javascript + html.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1065429936, "label": "Use datasette-table Web Component to guide the design of the JSON API for 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1304#issuecomment-981980048", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1304", "id": 981980048, "node_id": "IC_kwDOBm6k_c46h9OQ", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-11-29T20:13:53Z", "updated_at": "2021-11-29T20:14:11Z", "author_association": "NONE", "body": "There isn't any way to do this with sqlite as far as I know. The only option is to insert the right number of ? placeholders into the sql template and then provide an array of values.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 863884805, "label": "Document how to send multiple values for \"Named parameters\" "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1532#issuecomment-982745406", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1532", "id": 982745406, "node_id": "IC_kwDOBm6k_c46k4E-", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-11-30T15:28:57Z", "updated_at": "2021-11-30T15:28:57Z", "author_association": "NONE", "body": "It's a really great API and the documentation is really great too. Honestly, in more than 20 years of professional experience, I haven't worked with any software API that was more of a joy to use. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1065429936, "label": "Use datasette-table Web Component to guide the design of the JSON API for 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1304#issuecomment-988461884", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1304", "id": 988461884, "node_id": "IC_kwDOBm6k_c466rs8", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-12-08T03:20:26Z", "updated_at": "2021-12-08T03:20:26Z", "author_association": "NONE", "body": "The easiest or most straightforward thing to do is to use named parameters like:\r\n\r\n```sql\r\nselect * where key IN (:p1, :p2, :p3)\r\n```\r\n\r\nAnd simply construct the list of placeholders dynamically based on the number of values.\r\n\r\nDoing this is possible with datasette if you forgo \"canned queries\" and just use the raw query endpoint and pass the query sql, along with p1, p2 ... in the request.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 863884805, "label": "Document how to send multiple values for \"Named parameters\" "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1304#issuecomment-988463455", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1304", "id": 988463455, "node_id": "IC_kwDOBm6k_c466sFf", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-12-08T03:23:14Z", "updated_at": "2021-12-08T03:23:14Z", "author_association": "NONE", "body": "I actually think it would be a useful thing to add support for in datasette. 
It wouldn't be difficult to unwind an array of params and add the placeholders automatically.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 863884805, "label": "Document how to send multiple values for \"Named parameters\" "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1528#issuecomment-988468238", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1528", "id": 988468238, "node_id": "IC_kwDOBm6k_c466tQO", "user": {"value": 30934, "label": "20after4"}, "created_at": "2021-12-08T03:35:45Z", "updated_at": "2021-12-08T03:35:45Z", "author_association": "NONE", "body": "FWIW I implemented something similar with a bit of plugin code:\r\n\r\n```python\r\nfrom pathlib import Path\r\nfrom typing import Mapping\r\n\r\nfrom datasette import hookimpl\r\nfrom datasette.app import Datasette\r\n\r\n# stand-in for this plugin's own logging helper\r\nlog = print\r\n\r\n\r\n@hookimpl\r\ndef canned_queries(datasette: Datasette, database: str) -> Mapping[str, dict]:\r\n    # load \"canned queries\" from the filesystem under\r\n    # www/sql/db/query_name.sql\r\n    queries = {}\r\n\r\n    sqldir = Path(__file__).parent.parent / \"sql\"\r\n    if database:\r\n        sqldir = sqldir / database\r\n\r\n    if not sqldir.is_dir():\r\n        return queries\r\n\r\n    for f in sqldir.glob('*.sql'):\r\n        try:\r\n            sql = f.read_text('utf8').strip()\r\n            if not len(sql):\r\n                log(f\"Skipping empty canned query file: {f}\")\r\n                continue\r\n            queries[f.stem] = {\"sql\": sql}\r\n        except OSError as err:\r\n            log(err)\r\n\r\n    return queries\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1060631257, "label": "Add new `\"sql_file\"` key to Canned Queries in metadata?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1722943484", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1722943484, "node_id": "IC_kwDOBm6k_c5msgf8", "user": {"value": 30934, "label": "20after4"}, "created_at": "2023-09-18T08:14:47Z", "updated_at": "2023-09-18T08:14:47Z", "author_association": "NONE", "body": "This is such a well thought out contribution. I don't think I've seen such a thoroughly considered PR on any project in recent memory.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1651082214, "label": "feat: Javascript Plugin API (Custom panels, column menu items with JS actions)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/swarm-to-sqlite/issues/12#issuecomment-941274088", "issue_url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/12", "id": 941274088, "node_id": "IC_kwDODD6af844GrPo", "user": {"value": 33631, "label": "fs111"}, "created_at": "2021-10-12T18:31:57Z", "updated_at": "2021-10-12T18:31:57Z", "author_association": "NONE", "body": "I am running into the same problem. 
Is there any workaround?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 951817328, "label": "403 when getting token"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1574#issuecomment-1008279307", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1574", "id": 1008279307, "node_id": "IC_kwDOBm6k_c48GR8L", "user": {"value": 33631, "label": "fs111"}, "created_at": "2022-01-09T11:26:06Z", "updated_at": "2022-01-09T11:26:06Z", "author_association": "NONE", "body": "@fgregg my thinking was backwards compatibility. I don't know what people do to their builds, I just wanted a smaller image for my use case.\r\n\r\n@simonw any chance to take a look at this? If there is no interest, feel free to close the PR", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1084193403, "label": "introduce new option for datasette package to use a slim base image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1574#issuecomment-1084216224", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1574", "id": 1084216224, "node_id": "IC_kwDOBm6k_c5An9Og", "user": {"value": 33631, "label": "fs111"}, "created_at": "2022-03-31T07:45:25Z", "updated_at": "2022-03-31T07:45:25Z", "author_association": "NONE", "body": "@simonw I like that you want to go \"slim by default\". Do you want another PR for that or should I just wait?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1084193403, "label": "introduce new option for datasette package to use a slim base image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1574#issuecomment-1214765672", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1574", "id": 1214765672, "node_id": "IC_kwDOBm6k_c5IZ9po", "user": {"value": 33631, "label": "fs111"}, "created_at": "2022-08-15T08:49:31Z", "updated_at": "2022-08-15T08:49:31Z", "author_association": "NONE", "body": "closing as this is now the default", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1084193403, "label": "introduce new option for datasette package to use a slim base image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/46", "id": 592999503, "node_id": "MDEyOklzc3VlQ29tbWVudDU5Mjk5OTUwMw==", "user": {"value": 35075, "label": "chrishas35"}, "created_at": "2020-02-29T22:08:20Z", "updated_at": "2020-02-29T22:08:20Z", "author_association": "NONE", "body": "@simonw any thoughts on allow extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the \"value\" column (the documentation uses name), but when you use `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named \"value\". 
I have an existing lookup table that I've populated with columns \"id\" and \"name\" as opposed to \"id\" and \"value\", and it seems I can't use `extracts=`, unless I'm missing something...\r\n\r\nInitial thought on how to do this would be to allow the dictionary value to be a (table name, column name) tuple... so:\r\n```\r\ntable = db.table(\"trees\", extracts={\"species_id\": (\"Species\", \"name\")})\r\n```\r\n\r\nI haven't dug too much into the existing code yet, but does this make sense? Worth doing?\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 471780443, "label": "extracts= option for insert/update/etc"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/89#issuecomment-593122605", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/89", "id": 593122605, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MzEyMjYwNQ==", "user": {"value": 35075, "label": "chrishas35"}, "created_at": "2020-03-01T17:33:11Z", "updated_at": "2020-03-01T17:33:11Z", "author_association": "NONE", "body": "If you're happy with the proposed implementation, I have code & tests written that I'll get ready for a PR.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 573578548, "label": "Ability to customize columns used by extracts= feature"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/249#issuecomment-803502424", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/249", "id": 803502424, "node_id": "MDEyOklzc3VlQ29tbWVudDgwMzUwMjQyNA==", "user": {"value": 36287, "label": "prabhur"}, "created_at": "2021-03-21T02:43:32Z", "updated_at": "2021-03-21T02:43:32Z", "author_association": "NONE", "body": "> Did you run `enable-fts` before you inserted the data?\r\n> \r\n> If so you'll need to run `populate-fts` after the insert to populate the FTS index.\r\n> \r\n> A better solution may be to add `--create-triggers` to the `enable-fts` command to add triggers that will automatically keep the index updated as you insert new records.\r\n\r\nWow. Wasn't expecting a response this quick, especially during a weekend. :-) Sincerely appreciate it.\r\nI tried the `populate-fts` and that did the trick. My bad for not consulting the docs again. I think I forgot to add that step when I automated the workflow.\r\nThanks for the suggestion. I'll close this issue. 
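For anyone else landing here, the triggers variant would be something along these lines (the database, table and column names here are made up):\r\n\r\n```\r\nsqlite-utils enable-fts mydata.db documents title body --create-triggers\r\n```\r\n\r\n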
Have a great weekend and many many thanks for creating this suite of tools around sqlite.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 836963850, "label": "Full text search possibly broken?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1624#issuecomment-1261194164", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1624", "id": 1261194164, "node_id": "IC_kwDOBm6k_c5LLEu0", "user": {"value": 38532, "label": "palfrey"}, "created_at": "2022-09-28T16:54:22Z", "updated_at": "2022-09-28T16:54:22Z", "author_association": "NONE", "body": "https://github.com/simonw/datasette-cors seems to work around this", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1122427321, "label": "Index page `/` has no CORS headers"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/20#issuecomment-633234781", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/20", "id": 633234781, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMzIzNDc4MQ==", "user": {"value": 41439, "label": "dmd"}, "created_at": "2020-05-24T13:56:13Z", "updated_at": "2020-05-24T13:56:13Z", "author_association": "NONE", "body": "As that seems to be closed, can you give a hint on how to make this work?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 613006393, "label": "Ability to serve thumbnailed Apple Photo from its place on disk"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/540#issuecomment-1537744000", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/540", "id": 1537744000, "node_id": "IC_kwDOCGYnMM5bqByA", "user": {"value": 42327, "label": "pquentin"}, "created_at": "2023-05-08T04:56:12Z", "updated_at": "2023-05-08T04:56:12Z", "author_association": "NONE", "body": "Hey @simonw, urllib3 maintainer here :wave:\r\n\r\nSorry for breaking your CI. I understand you may prefer to pin the Python version, but note that specifying just `python: \"3\"` will get you the latest. 
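As a sketch, a minimal `.readthedocs.yml` along those lines might look like this (assuming the v2 config format with the `build.tools` section):\r\n\r\n```yaml\r\nversion: 2\r\nbuild:\r\n  os: ubuntu-22.04\r\n  tools:\r\n    python: \"3\"\r\n```\r\n\r\n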
We use that in urllib3: https://github.com/urllib3/urllib3/blob/main/.readthedocs.yml\r\n\r\nI can open PRs to sqlite-utils / datasette if you're interested", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1699184583, "label": "sphinx.builders.linkcheck build error"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/409#issuecomment-472844001", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/409", "id": 472844001, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3Mjg0NDAwMQ==", "user": {"value": 43100, "label": "Uninen"}, "created_at": "2019-03-14T13:04:20Z", "updated_at": "2019-03-14T13:04:42Z", "author_association": "NONE", "body": "It seems this affects the Datasette Publish site as well: https://github.com/simonw/datasette-publish-support/issues/3", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 408376825, "label": "Zeit API v1 does not work for new users - need to migrate to v2"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1886#issuecomment-1316289392", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1886", "id": 1316289392, "node_id": "IC_kwDOBm6k_c5OdPtw", "user": {"value": 45195, "label": "rtanglao"}, "created_at": "2022-11-16T03:54:17Z", "updated_at": "2022-11-16T03:58:56Z", "author_association": "NONE", "body": "Happy Birthday Datasette!\r\n\r\nThanks Simon!!\r\n\r\nI use datasette on everything, most notably [my flickr metadata SQLite DB](https://www.dropbox.com/s/6j10e2vohp2j5kf/roland2019-2020.db?dl=0) to make art.\r\n\r\nDatasette lite on my 2019 flickr metadata is super helpful too:\r\nhttps://lite.datasette.io/?csv=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-flickr-sqlite-csv%2Fmain%2F2019-roland-flickr-metadata.csv\r\n\r\nEven better, datasette lite on all firefox support questions from 2021: https://lite.datasette.io/?url=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-kits-api3%2Fmain%2FYEARLY_CSV_FILES%2F2021-firefox-sumo-questions.db\r\n\r\nThanks again Simon! So great! 
What a gift to the world!!!!!!\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1447050738, "label": "Call for birthday presents: if you're using Datasette, let us know how you're using it here"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/619#issuecomment-697973420", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/619", "id": 697973420, "node_id": "MDEyOklzc3VlQ29tbWVudDY5Nzk3MzQyMA==", "user": {"value": 45416, "label": "obra"}, "created_at": "2020-09-23T21:07:58Z", "updated_at": "2020-09-23T21:07:58Z", "author_association": "NONE", "body": "I've just run into this after crafting a complex query and discovered that hitting back loses my query.\r\n\r\nEven showing me the whole bad query would be a huge improvement over the current status quo.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 520655983, "label": "\"Invalid SQL\" page should let you edit the SQL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-698110186", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 698110186, "node_id": "MDEyOklzc3VlQ29tbWVudDY5ODExMDE4Ng==", "user": {"value": 45416, "label": "obra"}, "created_at": "2020-09-24T04:49:51Z", "updated_at": "2020-09-24T04:49:51Z", "author_association": "NONE", "body": "As a half-measure, I'd get value out of being able to upload a CSV and have datasette run csv-to-sqlite on it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125561, "label": "Datasette serve should accept paths/URLs to CSVs and other file formats"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-698174957", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 698174957, "node_id": "MDEyOklzc3VlQ29tbWVudDY5ODE3NDk1Nw==", "user": {"value": 45416, "label": "obra"}, "created_at": "2020-09-24T07:42:05Z", "updated_at": "2020-09-24T07:42:05Z", "author_association": "NONE", "body": "Oh. Awesome.\n\nOn Thu, Sep 24, 2020 at 12:28:53AM -0700, Simon Willison wrote:\n> @obra there's a plugin for that! https://github.com/simonw/datasette-upload-csvs\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125561, "label": "Datasette serve should accept paths/URLs to CSVs and other file formats"}, "performed_via_github_app": null}