html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app https://github.com/dogsheep/twitter-to-sqlite/issues/53#issuecomment-748436453,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53,748436453,MDEyOklzc3VlQ29tbWVudDc0ODQzNjQ1Mw==,27,anotherjesse,2020-12-19T07:47:01Z,2020-12-19T07:47:01Z,NONE,"I think this should probably be closed as won't fix. Attempting to make a patch for this, I realized that the since_id would limit to tweets posted since that since_id, not to when they were favorited. So favoriting something older would be missed if you used `--since` with a cron script. Better to just use `--stop_after` set to a couple hundred.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771324837,--since support for favorites, https://github.com/dogsheep/healthkit-to-sqlite/issues/11#issuecomment-711083698,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/11,711083698,MDEyOklzc3VlQ29tbWVudDcxMTA4MzY5OA==,572,jarib,2020-10-17T21:39:15Z,2020-10-17T21:39:15Z,NONE,Nice! Works perfectly. Thanks for the quick response and great tooling in general.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",723838331,export.xml file name varies with different language settings, https://github.com/simonw/datasette/issues/526#issuecomment-810943882,https://api.github.com/repos/simonw/datasette/issues/526,810943882,MDEyOklzc3VlQ29tbWVudDgxMDk0Mzg4Mg==,701,jokull,2021-03-31T10:03:55Z,2021-03-31T10:03:55Z,NONE,"+1 on using nested queries to achieve this! Would be great, as streaming CSV is an amazing feature. Some UX/DX details: I was expecting simply adding `&_stream=on` to custom SQL queries to work, because the docs say > Any Datasette table, view or **custom SQL query** can be exported as CSV. After a bit of testing back and forth I realized streaming only works for full tables. Would love this feature, because I'm using `pandas.read_csv` to paint graphs from custom queries and the graphs are cut off because of the 1000 row limit. 
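One possible workaround until then: page a custom query manually through the JSON API and concatenate the pages in pandas. A rough sketch (the instance URL, query and page size here are placeholders; it also assumes the query tolerates an appended LIMIT/OFFSET):

```python
import pandas as pd
import requests

BASE = 'https://example.com/mydb.json'  # hypothetical Datasette instance
QUERY = 'select id, value from mytable'  # hypothetical query
PAGE = 1000

def fetch_all(query):
    frames, offset = [], 0
    while True:
        sql = f'{query} limit {PAGE} offset {offset}'
        # _shape=array makes Datasette return a plain JSON list of row objects
        rows = requests.get(BASE, params={'sql': sql, '_shape': 'array'}).json()
        if not rows:
            break
        frames.append(pd.DataFrame(rows))
        offset += PAGE
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()

df = fetch_all(QUERY)
```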
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/dogsheep/github-to-sqlite/issues/15#issuecomment-605439685,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15,605439685,MDEyOklzc3VlQ29tbWVudDYwNTQzOTY4NQ==,2029,garethr,2020-03-28T12:17:01Z,2020-03-28T12:17:01Z,NONE,"That looks great, thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",544571092,Assets table with downloads, https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622279374,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33,622279374,MDEyOklzc3VlQ29tbWVudDYyMjI3OTM3NA==,2029,garethr,2020-05-01T07:12:47Z,2020-05-01T07:12:47Z,NONE,"I also go it working with: ```yaml run: echo ${{ secrets.github_token }} | github-to-sqlite auth ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",609950090,Fall back to authentication via ENV, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-927312650,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,927312650,IC_kwDODEm0Qs43RasK,2182,danp,2021-09-26T14:09:51Z,2021-09-26T14:09:51Z,NONE,"Similar trouble with ageinfo using 0.22. Here's what my ageinfo.js file looks like: ``` window.YTD.ageinfo.part0 = [ { ""ageMeta"" : { } } ] ``` Commenting out the registration for ageinfo in archive.py gets my archive to import.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071,Archive import appears to be broken on recent exports, https://github.com/dogsheep/pocket-to-sqlite/issues/11#issuecomment-1221521377,https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/11,1221521377,IC_kwDODLZ_YM5Izu_h,2467,fernand0,2022-08-21T10:51:37Z,2022-08-21T10:51:37Z,NONE,I didn't see there is a PR about this: https://github.com/dogsheep/pocket-to-sqlite/pull/7,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1345452427,"-a option is used for ""--auth"" and for ""--all""", https://github.com/simonw/datasette/issues/2214#issuecomment-1844819002,https://api.github.com/repos/simonw/datasette/issues/2214,1844819002,IC_kwDOBm6k_c5t9bQ6,2874,precipice,2023-12-07T07:36:33Z,2023-12-07T07:36:33Z,NONE,"If I uncheck `expand labels` in the Advanced CSV export dialog, the error does not occur. Re-checking that box and re-running the export does cause the error to occur. 
![CleanShot 2023-12-06 at 23 34 58@2x](https://github.com/simonw/datasette/assets/2874/12c6c241-35ce-4ded-8dc7-fc250d809ed9) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",2029908157,CSV export fails for some `text` foreign key references, https://github.com/simonw/sqlite-utils/issues/269#issuecomment-859940977,https://api.github.com/repos/simonw/sqlite-utils/issues/269,859940977,MDEyOklzc3VlQ29tbWVudDg1OTk0MDk3Nw==,4068,frafra,2021-06-11T22:33:08Z,2021-06-11T22:33:08Z,NONE,"`true` and `false` json values are cast to integer, which is not optimal.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",919250621,bool type not supported, https://github.com/simonw/sqlite-utils/issues/270#issuecomment-860031071,https://api.github.com/repos/simonw/sqlite-utils/issues/270,860031071,MDEyOklzc3VlQ29tbWVudDg2MDAzMTA3MQ==,4068,frafra,2021-06-12T10:00:24Z,2021-06-12T10:00:24Z,NONE,"Sure, I am sorry if my message hasn't been clear enough. I am also a new user :) At the beginning, I just call `sqlite-utils insert ""$db"" ""$table"" ""$jsonfile""` to create the database. sqlite-utils convert JSON values into `TEXT`, when it tries to determine the schema automatically. I then try to transform the table to set `JSON` as type: ``` sqlite-utils transform species.sqlite species --type criteria json Usage: sqlite-utils transform [OPTIONS] PATH TABLE Try 'sqlite-utils transform --help' for help. Error: Invalid value for '--type': 'json' is not one of 'INTEGER', 'TEXT', 'FLOAT', 'BLOB'. ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",919314806,Cannot set type JSON, https://github.com/simonw/sqlite-utils/issues/269#issuecomment-860031217,https://api.github.com/repos/simonw/sqlite-utils/issues/269,860031217,MDEyOklzc3VlQ29tbWVudDg2MDAzMTIxNw==,4068,frafra,2021-06-12T10:01:53Z,2021-06-12T10:01:53Z,NONE,"`sqlite-utils transform` does not allow setting the column type to boolean: ``` Error: Invalid value for '--type': 'bool' is not one of 'INTEGER', 'TEXT', 'FLOAT', 'BLOB'. ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",919250621,bool type not supported, https://github.com/simonw/datasette/issues/1286#issuecomment-860047794,https://api.github.com/repos/simonw/datasette/issues/1286,860047794,MDEyOklzc3VlQ29tbWVudDg2MDA0Nzc5NA==,4068,frafra,2021-06-12T12:36:15Z,2021-06-12T12:36:15Z,NONE,"@mroswell That is a very nice solution. 
I wonder if custom classes, like `col-columnName-value`, could be automatically added to cells when facets on such a column are enabled, to allow custom styling without having to modify templates or add custom JavaScript code.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",849220154,Better default display of arrays of items, https://github.com/simonw/datasette/issues/1375#issuecomment-860548546,https://api.github.com/repos/simonw/datasette/issues/1375,860548546,MDEyOklzc3VlQ29tbWVudDg2MDU0ODU0Ng==,4068,frafra,2021-06-14T09:41:59Z,2021-06-14T09:41:59Z,NONE,"> There is a feature for this at the moment, but it's a little bit hidden: you can use `?_json=col` to tell > Datasette that you would like a specific column to be exported as nested JSON: https://docs.datasette.io/en/stable/json_api.html#special-json-arguments Thanks :) > I considered trying to make this automatic - so it detects columns that appear to contain valid JSON and outputs them as nested objects - but the problem with that is that it can lead to inconsistent results - you might hit the API and find that not every column contains valid JSON (compared to the previous day) resulting in the API returning a string instead of the expected dictionary and breaking your code. If a developer is not sure whether the JSON fields are valid, but retrieves and parses them anyway, they should handle errors too. Handling inconsistent data is necessary due to the nature of SQLite. A global or dataset option to render the data as they have been defined (JSON, boolean, etc.) when requesting JSON could allow the user to download a regular JSON from the browser without having to rely on APIs. I would guess someone could just make a custom template with an extra JSON-parsed download button otherwise :)","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",919508498,JSON export dumps JSON fields as TEXT, https://github.com/simonw/sqlite-utils/issues/270#issuecomment-862574390,https://api.github.com/repos/simonw/sqlite-utils/issues/270,862574390,MDEyOklzc3VlQ29tbWVudDg2MjU3NDM5MA==,4068,frafra,2021-06-16T17:34:49Z,2021-06-16T17:34:49Z,NONE,"Sorry, I got confused because SQLite has a JSON column type, even if it is treated as TEXT, and I thought automatic facets were available for JSON arrays stored as JSON only :)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",919314806,Cannot set type JSON, https://github.com/simonw/sqlite-utils/issues/438#issuecomment-1139379923,https://api.github.com/repos/simonw/sqlite-utils/issues/438,1139379923,IC_kwDOCGYnMM5D6Y7T,4068,frafra,2022-05-27T08:05:01Z,2022-05-27T08:05:01Z,NONE,"I tried to debug it using `pdb`, but it looks like `sqlite-utils` catches the exception, so it is not quick to figure out where the failure is happening.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1250161887,illegal UTF-16 surrogate, https://github.com/simonw/sqlite-utils/issues/438#issuecomment-1139392769,https://api.github.com/repos/simonw/sqlite-utils/issues/438,1139392769,IC_kwDOCGYnMM5D6cEB,4068,frafra,2022-05-27T08:21:53Z,2022-05-27T08:21:53Z,NONE,Arguments were specified in the wrong order. 
`PATH TABLE FILE` can be misleading :),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1250161887,illegal UTF-16 surrogate, https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1139426398,https://api.github.com/repos/simonw/sqlite-utils/issues/439,1139426398,IC_kwDOCGYnMM5D6kRe,4068,frafra,2022-05-27T09:04:05Z,2022-05-27T10:44:54Z,NONE,"This code works: ```python import csv import sqlite_utils db = sqlite_utils.Database(""test.db"") reader = csv.DictReader(open(""csv"", encoding=""utf-16-le"").read().split(""\r\n""), delimiter="";"") db[""test""].insert_all(reader, pk=""Id"") ``` I used `iconv` to change the encoding; sqlite-utils can import the resulting file, even if it stops at 98 %: ``` sqlite-utils insert --csv test test.db clean [------------------------------------] 0% [###################################-] 98% 00:00:00 ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1250495688,Misleading progress bar against utf-16-le CSV input, https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1139484453,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1139484453,IC_kwDOCGYnMM5D6ycl,4068,frafra,2022-05-27T10:20:08Z,2022-05-27T10:20:08Z,NONE,I can confirm. This only happens with sqlite-utils. I am using gnome-terminal with bash.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903,CLI eats my cursor, https://github.com/simonw/datasette/issues/1818#issuecomment-1257290709,https://api.github.com/repos/simonw/datasette/issues/1818,1257290709,IC_kwDOBm6k_c5K8LvV,5363,nelsonjchen,2022-09-25T22:17:06Z,2022-09-25T22:17:06Z,NONE,"I wonder if having an option for displaying the max row id might help too. Not accurate especially if something was deleted, but useful for DBs as a dump. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1384549993,Setting to turn off table row counts entirely, https://github.com/simonw/datasette/issues/1818#issuecomment-1258738740,https://api.github.com/repos/simonw/datasette/issues/1818,1258738740,IC_kwDOBm6k_c5LBtQ0,5363,nelsonjchen,2022-09-26T22:52:45Z,2022-09-26T22:55:57Z,NONE,"thoughts on order of precedence to use: * sqlite-utils count, if present. closest thing to a standard i guess. * row(max_id) if like, the first and/or last x amount of rows ids are all contiguous. kind of a cheap/dumb/imperfect heuristic to see if the table is dump/not dump. if the check passes, still stick on `est.` after the display. * count(*) if enabled in datasette ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1384549993,Setting to turn off table row counts entirely, https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-791053721,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,791053721,MDEyOklzc3VlQ29tbWVudDc5MTA1MzcyMQ==,6213,dsisnero,2021-03-05T00:31:27Z,2021-03-05T00:31:27Z,NONE,I am getting the same thing for US West (N. 
California) us-west-1,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload, https://github.com/simonw/datasette/issues/2054#issuecomment-1499797384,https://api.github.com/repos/simonw/datasette/issues/2054,1499797384,IC_kwDOBm6k_c5ZZReI,6213,dsisnero,2023-04-07T00:46:50Z,2023-04-07T00:46:50Z,NONE,you should have a look at Roda written in ruby . ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1657861026,"Make detailed notes on how table, query and row views work right now", https://github.com/simonw/datasette/issues/1298#issuecomment-1125083348,https://api.github.com/repos/simonw/datasette/issues/1298,1125083348,IC_kwDOBm6k_c5DD2jU,7150,llimllib,2022-05-12T14:43:51Z,2022-05-12T14:43:51Z,NONE,"user report: I found this issue because the first time I tried to use datasette for real, I displayed a large table, and thought there was no horizontal scroll bar at all. I didn't even consider that I had to scroll all the way to the end of the page to find it. Just chipping in to say that this confused me, and I didn't even find the scroll bar until after I saw this issue. I don't know what the right answer is, but IMO the UI should suggest to the user that there is a way to view the data that's hidden to the right.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501,improve table horizontal scroll experience, https://github.com/simonw/datasette/issues/176#issuecomment-359697938,https://api.github.com/repos/simonw/datasette/issues/176,359697938,MDEyOklzc3VlQ29tbWVudDM1OTY5NzkzOA==,7193,gijs,2018-01-23T07:17:56Z,2018-01-23T07:17:56Z,NONE,👍 I'd like this too! ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/sqlite-utils/issues/417#issuecomment-1074256603,https://api.github.com/repos/simonw/sqlite-utils/issues/417,1074256603,IC_kwDOCGYnMM5AB9rb,9954,blaine,2022-03-21T18:19:41Z,2022-03-21T18:19:41Z,NONE,"That makes sense; just a little hint that points folks towards doing the right thing might be helpful! fwiw, the reason I was using jq in the first place was just a quick way to extract one attribute from an actual JSON array. When I initially imported it, I got a table with a bunch of embedded JSON values, rather than a native table, because each array entry had two attributes, one with the data I _actually_ wanted. 
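For comparison, the same pre-processing step in plain Python rather than jq (a sketch; the `data` key and file names here are placeholders for whatever the real attribute was called):

```python
import json
import sqlite_utils

# Pluck the attribute that actually holds the record out of each array entry
records = [entry['data'] for entry in json.load(open('input.json'))]

sqlite_utils.Database('example.db')['mytable'].insert_all(records)
```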
Not sure how common a use-case this is, though (and easily fixed, aside from the jq weirdness!)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1175744654,insert fails on JSONL with whitespace, https://github.com/dogsheep/pocket-to-sqlite/issues/10#issuecomment-1239516561,https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/10,1239516561,IC_kwDODLZ_YM5J4YWR,11887,ashanan,2022-09-07T15:07:38Z,2022-09-07T15:07:38Z,NONE,Thanks!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1246826792,"When running `auth` command, don't overwrite an existing auth.json file", https://github.com/simonw/sqlite-utils/issues/328#issuecomment-925300720,https://api.github.com/repos/simonw/sqlite-utils/issues/328,925300720,IC_kwDOCGYnMM43Jvfw,12752,gravis,2021-09-22T20:21:33Z,2021-09-22T20:21:33Z,NONE,"Wow, that was fast! Thank you!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1004613267,Invalid JSON output when no rows, https://github.com/simonw/datasette/issues/176#issuecomment-617208503,https://api.github.com/repos/simonw/datasette/issues/176,617208503,MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw==,12976,nkirsch,2020-04-21T14:16:24Z,2020-04-21T14:16:24Z,NONE,"@eads I'm interested in helping, if there's still a need...","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/sqlite-utils/issues/474#issuecomment-1229449018,https://api.github.com/repos/simonw/sqlite-utils/issues/474,1229449018,IC_kwDOCGYnMM5JR-c6,14294,hubgit,2022-08-28T12:40:13Z,2022-08-28T12:40:13Z,NONE,"Creating the table before inserting is a useful workaround, thanks. It does require figuring out the `create table` syntax and listing all the fields manually, though, which loses some of the magic of sqlite-utils. I was expecting to find an option like `--headers=foo,bar` (or `--header-row='foo\tbar'`, if that would be easier) - not necessarily that exact syntax, but something that would essentially be treated the same as having a header row in the file.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1353074021,Add an option for specifying column names when inserting CSV data, https://github.com/simonw/sqlite-utils/issues/239#issuecomment-1236200834,https://api.github.com/repos/simonw/sqlite-utils/issues/239,1236200834,IC_kwDOCGYnMM5Jru2C,14294,hubgit,2022-09-03T21:26:32Z,2022-09-03T21:26:32Z,NONE,"I was looking for something like this today, for extracting columns containing objects (and arrays of objects) into separate tables. Would it make sense (especially for the fields containing arrays of objects) to create a one-to-many relationship, where each row of the newly created table would contain the id of the row that originally contained it? 
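To make the one-to-many idea concrete, a hand-rolled sketch of it using the existing Python API (the table and column names are invented, and it assumes the `reviews` column holds JSON arrays of objects):

```python
import json
import sqlite_utils

db = sqlite_utils.Database('books.db')

# Move each embedded review into a child table pointing back at its book
for book in db['books'].rows:
    for review in json.loads(book['reviews'] or '[]'):
        db['reviews'].insert(
            dict(review, book_id=book['id']),
            foreign_keys=[('book_id', 'books')],
        )
```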
If the extracted objects have a unique id and are repeated, it could even create a many-to-many relationship, with a third table for the joins.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",816526538,sqlite-utils extract could handle nested objects, https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-571412923,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16,571412923,MDEyOklzc3VlQ29tbWVudDU3MTQxMjkyMw==,15092,jayvdb,2020-01-07T03:06:46Z,2020-01-07T03:06:46Z,NONE,"I re-tried after doing `auth`, and I get the same result.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546051181,Exception running first command: IndexError: list index out of range, https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-602136481,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16,602136481,MDEyOklzc3VlQ29tbWVudDYwMjEzNjQ4MQ==,15092,jayvdb,2020-03-22T02:08:57Z,2020-03-22T02:08:57Z,NONE,"I'd love to be using your library as a better cached gh layer for a new library I have built, replacing large parts of the very ugly https://github.com/jayvdb/pypidb/blob/master/pypidb/_github.py , and then probably being able to rebuild the setuppy chunk as a feature here at a later stage. I would also need tokenless and netrc support, but I would be happy to add those bits.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546051181,Exception running first command: IndexError: list index out of range, https://github.com/simonw/datasette/issues/1522#issuecomment-974607456,https://api.github.com/repos/simonw/datasette/issues/1522,974607456,IC_kwDOBm6k_c46F1Rg,17906,mrchrisadams,2021-11-20T07:10:11Z,2021-11-20T07:10:11Z,NONE,"As a sanity check, would it be worth trying to push the multi-process container to another knative / cloud run / tekton provider? I have a somewhat similar use case for a future project, so I've been very grateful for you sharing all the progress in this issue. As I understand it, Scaleway also offers a very similar service built on what appear to be many similar components, which might at least show whether it's an issue with more than one knative-based FaaS provider. https://www.scaleway.com/en/serverless-containers/ https://developers.scaleway.com/en/products/containers/api/#main-features ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1058896236,Deploy a live instance of demos/apache-proxy, https://github.com/dogsheep/dogsheep-photos/issues/7#issuecomment-906015471,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/7,906015471,IC_kwDOD079W842ALLv,18232,dkam,2021-08-26T02:01:01Z,2021-08-26T02:01:01Z,NONE,Perceptual hashes might be what you're after: http://phash.org,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",602585497,Integrate image content hashing, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1035717429,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,1035717429,IC_kwDOD079W849u8s1,18504,harperreed,2022-02-11T01:55:38Z,2022-02-11T01:55:38Z,NONE,I would love this merged! 
,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/simonw/datasette/issues/2147#issuecomment-1687433388,https://api.github.com/repos/simonw/datasette/issues/2147,1687433388,IC_kwDOBm6k_c5klDCs,18899,jackowayed,2023-08-22T05:05:33Z,2023-08-22T05:05:33Z,NONE,"Thanks for all this! You're totally right that the ASGI option is doable, if a bit low level and coupled to the current URI design. I'm totally fine with that being the final answer. process_view is interesting and in the general direction of what I had in mind. A somewhat less powerful idea: Is there value in giving a hook for just the query that's about to be run? Maybe I'm thinking a little narrowly about this problem I decided I wanted to solve, but I could see other uses for a hook of the sketch below: ``` def prepare_query(database, table, query): """"""Modify query that is about to be run in some way. Return the (possibly rewritten) query to run, or None to disallow running the query"""""" ``` (Maybe you actually want to return a tuple so there can be an error message when you disallow, or something.) Maybe it's too narrowly useful and some of the other pieces of datasette obviate some of these ideas, but off the cuff I could imagine using it to: * Require a LIMIT. Either fail the query or add the limit if it's not there. * Do logging, like my usecase. * Do other analysis on whether you want to allow the query to run; a linter? query complexity? Definitely feel free to say no, or not now. This is all me just playing around with what datasette and its plugin architecture can do with toy ideas, so don't let me push you to commit to a hook you don't feel confident fits well in the design.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1858228057,Plugin hook for database queries that are run, https://github.com/simonw/datasette/issues/2147#issuecomment-1690955706,https://api.github.com/repos/simonw/datasette/issues/2147,1690955706,IC_kwDOBm6k_c5kye-6,18899,jackowayed,2023-08-24T03:54:35Z,2023-08-24T03:54:35Z,NONE,"That's fair. The best idea I can think of is that if a plugin wanted to limit intensive queries, it could add LIMITs or something. A hook that gives you visibility of queries and maybe the option to reject felt a little more limited than the existing plugin hooks, so I was trying to think of what else one might want to do while looking at to-be-run queries. But without a real motivating example, I see why you don't want to add that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1858228057,Plugin hook for database queries that are run, https://github.com/simonw/sqlite-utils/issues/26#issuecomment-1141711418,https://api.github.com/repos/simonw/sqlite-utils/issues/26,1141711418,IC_kwDOCGYnMM5EDSI6,19304,nileshtrivedi,2022-05-31T06:21:15Z,2022-05-31T06:21:15Z,NONE,"I ran into this. My use case has a JSON file with array of `book` objects with a key called `reviews` which is also an array of objects. My JSON is human-edited and does not specify IDs for either books or reviews. Because sqlite-utils does not support inserting nested objects, I instead have to maintain two separate CSV files with `id` column in `books.csv` and `book_id` column in reviews.csv. 
I think the right way to declare the relationship while inserting a JSON might be to describe the relationship: `sqlite-utils insert data.db books mydata.json --hasmany reviews --hasone author --manytomany tags` This is relying on the assumption that foreign keys can point to `rowid` primary key.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455486286,Mechanism for turning nested JSON into foreign keys / many-to-many, https://github.com/simonw/datasette/issues/153#issuecomment-348252037,https://api.github.com/repos/simonw/datasette/issues/153,348252037,MDEyOklzc3VlQ29tbWVudDM0ODI1MjAzNw==,20264,ftrain,2017-11-30T16:59:00Z,2017-11-30T16:59:00Z,NONE,"WOW! On Thu, Nov 30, 2017 at 11:47 AM, Simon Willison wrote: > Remaining work on this now lives in a milestone: > https://github.com/simonw/datasette/milestone/6","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",276842536,Ability to customize presentation of specific columns in HTML view, https://github.com/simonw/sqlite-utils/issues/54#issuecomment-524300388,https://api.github.com/repos/simonw/sqlite-utils/issues/54,524300388,MDEyOklzc3VlQ29tbWVudDUyNDMwMDM4OA==,20264,ftrain,2019-08-23T12:41:09Z,2019-08-23T12:41:09Z,NONE,Extremely cool and easy to understand. Thank you!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",480961330,"Ability to list views, and to access db[""view_name""].rows / rows_where / etc", https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1669877769,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,1669877769,IC_kwDOD079W85jiFAJ,22996,chrismytton,2023-08-08T15:52:52Z,2023-08-08T15:52:52Z,NONE,"You can also install this with pip using this oneliner: ``` pip install git+https://github.com/RhetTbull/dogsheep-photos.git@update_for_bigsur ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/github-to-sqlite/issues/79#issuecomment-1847317568,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79,1847317568,IC_kwDODFdgUs5uG9RA,23789,nedbat,2023-12-08T14:50:13Z,2023-12-08T14:50:13Z,NONE,Adding `&per_page=100` would reduce the number of API requests by 3x.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1570375808,Deploy demo job is failing due to rate limit, https://github.com/simonw/datasette/issues/991#issuecomment-712855389,https://api.github.com/repos/simonw/datasette/issues/991,712855389,MDEyOklzc3VlQ29tbWVudDcxMjg1NTM4OQ==,24740,furilo,2020-10-20T13:36:41Z,2020-10-20T13:36:41Z,NONE,"Here is one quick sketch (done in Figma :P) for an idea: a possible filter to switch between showing all tables from all databases, or grouping tables by database. 
(the switch is interactive) All tables: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=1%3A2&viewport=536%2C348%2C0.5&scaling=min-zoom Grouped: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=3%3A974&viewport=536%2C348%2C0.5&scaling=min-zoom When only 1 database: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=1%3A162&viewport=536%2C348%2C0.5&scaling=min-zoom If this is useful, I can send some more suggestions/sketches. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",714377268,Redesign application homepage, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791089881,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,791089881,MDEyOklzc3VlQ29tbWVudDc5MTA4OTg4MQ==,28565,maxhawkins,2021-03-05T02:03:19Z,2021-03-05T02:03:19Z,NONE,"I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout. Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-849708617,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,849708617,MDEyOklzc3VlQ29tbWVudDg0OTcwODYxNw==,28565,maxhawkins,2021-05-27T15:01:42Z,2021-05-27T15:01:42Z,NONE,Any updates?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-884672647,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,884672647,IC_kwDODFE5qs40uwiH,28565,maxhawkins,2021-07-22T05:56:31Z,2021-07-22T14:03:08Z,NONE,"How does this commit look? https://github.com/maxhawkins/google-takeout-to-sqlite/commit/72802a83fee282eb5d02d388567731ba4301050d It seems that Takeout's mbox format is pretty simple, so we can get away with just splitting the file on lines beginning with `From `. My commit just splits the file every time a line starts with `From ` and uses `email.message_from_bytes` to parse each chunk. I was able to load a 12GB takeout mbox without the program using more than a couple hundred MB of memory during the import process. It does make us lose the progress bar, but maybe I can add that back in a later commit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885022230,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885022230,IC_kwDODFE5qs40wF4W,28565,maxhawkins,2021-07-22T15:51:46Z,2021-07-22T15:51:46Z,NONE,One thing I noticed is this importer doesn't save attachments along with the body of the emails. 
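Since each chunk is already parsed with `email.message_from_bytes`, collecting them from a parsed message should be straightforward, roughly like this (an untested sketch):

```python
def iter_attachments(message):
    # Walk every MIME part and yield the ones marked as attachments
    for part in message.walk():
        if part.get_content_disposition() == 'attachment':
            yield {
                'filename': part.get_filename(),
                'content_type': part.get_content_type(),
                'payload': part.get_payload(decode=True),  # raw bytes
            }
```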
It would be nice if those got stored as blobs in a separate attachments table so attachments can be included while fetching search results.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885094284,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885094284,IC_kwDODFE5qs40wXeM,28565,maxhawkins,2021-07-22T17:41:32Z,2021-07-22T17:41:32Z,NONE,I added a follow-up commit that deals with emails that don't have a `Date` header: https://github.com/maxhawkins/google-takeout-to-sqlite/commit/4bc70103582c10802c85a523ef1e99a8a2154aa9,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-888075098,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,888075098,IC_kwDODFE5qs407vNa,28565,maxhawkins,2021-07-28T07:18:56Z,2021-07-28T07:18:56Z,NONE,"> I'm not sure why but my most recent import, when displayed in Datasette, looks like this: > > I did some investigation into this issue and made a fix [here](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8ee555c2889a38ff42b95664ee074b4a01a82f06). The problem was that some messages (like gchat logs) don't have a `Message-Id` and we need to use `X-GM-THRID` as the pkey instead. @simonw While looking into this I found something unexpected about how sqlite_utils handles upserts if the pkey column is `None`. When the pkey is NULL I'd expect the function to either use rowid or throw an exception. Instead, it seems upsert_all creates a row where all columns are NULL instead of using the values provided as parameters.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-894581223,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,894581223,IC_kwDODFE5qs41Ujnn,28565,maxhawkins,2021-08-07T00:57:48Z,2021-08-07T00:57:48Z,NONE,"Just added two more fixes: * Added parsing for rfc 2047 encoded unicode headers * Body is now stored as TEXT rather than a BLOB regardless of what order the messages are parsed in. I was able to run this on my Takeout export and everything seems to work fine. @simonw let me know if this looks good to merge.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-896378525,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,896378525,IC_kwDODFE5qs41baad,28565,maxhawkins,2021-08-10T23:28:45Z,2021-08-10T23:28:45Z,NONE,"I added parsing of text/html emails using BeautifulSoup. 
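(For context, tag-stripping with BeautifulSoup typically boils down to something like this:)

```python
from bs4 import BeautifulSoup

def html_to_text(html):
    # Strip tags and collapse whitespace so the body can be indexed for search
    return BeautifulSoup(html, 'html.parser').get_text(separator=' ', strip=True)
```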
Around half of the emails in my archive don't include a text/plain payload so adding html parsing makes a good chunk of them searchable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1003437288,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,1003437288,IC_kwDODFE5qs47zzzo,28565,maxhawkins,2021-12-31T19:06:20Z,2021-12-31T19:06:20Z,NONE,"> @maxhawkins how hard would it be to add an entry to the table that includes the HTML version of the email, if it exists? I just attempted your PR branch on a very small mbox file, and it worked great. My use case is a research project and I need to access more than just the body plain text. Shouldn't be hard. The easiest way is probably to remove the `if body.content_type == ""text/html""` clause from [utils.py:254](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8e6d487b697ce2e8ad885acf613a157bfba84c59#diff-25ad9dd1ced1b8bfc37fda8444819c803232c08891e4af3d4064aa205d8174eaR254) and just return content directly without parsing.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1710380941,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,1710380941,IC_kwDODFE5qs5l8leN,28565,maxhawkins,2023-09-07T15:39:59Z,2023-09-07T15:39:59Z,NONE,"> @maxhawkins curious why you didn't use the stdlib `mailbox` to parse the `mbox` files? Mailbox parses the entire mbox into memory. Using the lower level library lets us stream the emails in one at a time to support larger archives. Both libraries are in the stdlib.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/simonw/datasette/issues/736#issuecomment-620401172,https://api.github.com/repos/simonw/datasette/issues/736,620401172,MDEyOklzc3VlQ29tbWVudDYyMDQwMTE3Mg==,30607,aborruso,2020-04-28T06:09:28Z,2020-04-28T06:09:28Z,NONE,"> Would you mind trying publishing your database using one of the other options - Heroku, Cloud Run or https://fly.io/ - and see if you have the same bug there? It works in heroku, so it might be a bug with datasette-publish-now. Thank you","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",606720674,strange behavior using accented characters, https://github.com/simonw/datasette/issues/735#issuecomment-620401443,https://api.github.com/repos/simonw/datasette/issues/735,620401443,MDEyOklzc3VlQ29tbWVudDYyMDQwMTQ0Mw==,30607,aborruso,2020-04-28T06:10:20Z,2020-04-28T06:10:20Z,NONE,"It works in heroku, so it might be a bug with datasette-publish-now. 
Thank you","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",605806386,"Error when I click on ""View and edit SQL""", https://github.com/simonw/datasette/issues/744#issuecomment-621008152,https://api.github.com/repos/simonw/datasette/issues/744,621008152,MDEyOklzc3VlQ29tbWVudDYyMTAwODE1Mg==,30607,aborruso,2020-04-29T06:05:02Z,2020-04-29T06:05:02Z,NONE,"Hi @simonw , I have installed it and I have the below errors. > Is it possible that your /tmp directory is on a different volume from the template folder? That could cause a problem with the symlinks. No, /tmp folder is in the same volume. Thank you ``` Traceback (most recent call last): File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py"", line 607, in link_or_copy_directory shutil.copytree(src, dst, copy_function=os.link) File ""/usr/lib/python3.7/shutil.py"", line 365, in copytree raise Error(errors) shutil.Error: [('/var/youtubeComunePalermo/processing/./template/base.html', '/tmp/tmpcqv_1i5d/templates/base.html', ""[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/base.html' -> '/tmp/tmpcqv_1i5d/templates/base.html'""), ('/var/youtubeComunePalermo/processing/./template/index.html', '/tmp/tmpcqv_1i5d/templates/index.html', ""[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/index.html' -> '/tmp/tmpcqv_1i5d/templates/index.html'"")] During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""/home/aborruso/.local/bin/datasette"", line 8, in sys.exit(cli()) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 782, in main rv = self.invoke(ctx) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 610, in invoke return callback(*args, **kwargs) File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py"", line 103, in heroku extra_metadata, File ""/usr/lib/python3.7/contextlib.py"", line 112, in __enter__ return next(self.gen) File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py"", line 191, in temporary_heroku_directory os.path.join(tmp.name, ""templates""), File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py"", line 609, in link_or_copy_directory shutil.copytree(src, dst) File ""/usr/lib/python3.7/shutil.py"", line 321, in copytree os.makedirs(dst) File ""/usr/lib/python3.7/os.py"", line 221, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/tmp/tmpcqv_1i5d/templates' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, 
https://github.com/simonw/datasette/issues/744#issuecomment-621011554,https://api.github.com/repos/simonw/datasette/issues/744,621011554,MDEyOklzc3VlQ29tbWVudDYyMTAxMTU1NA==,30607,aborruso,2020-04-29T06:17:26Z,2020-04-29T06:17:26Z,NONE,"A small note: I have no `tmpcqv_1i5d` folder in `/tmp`. It seems to me that it does not create any `/tmp/tmpcqv_1i5d/templates` folder (or any other folder name inside /tmp)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-621030783,https://api.github.com/repos/simonw/datasette/issues/744,621030783,MDEyOklzc3VlQ29tbWVudDYyMTAzMDc4Mw==,30607,aborruso,2020-04-29T07:16:27Z,2020-04-29T07:16:27Z,NONE,"Hi @simonw, it's Debian under Windows Subsystem for Linux ``` PRETTY_NAME=""Pengwin"" NAME=""Pengwin"" VERSION_ID=""10"" VERSION=""10 (buster)"" ID=debian ID_LIKE=debian HOME_URL=""https://github.com/whitewaterfoundry/Pengwin"" SUPPORT_URL=""https://github.com/whitewaterfoundry/Pengwin"" BUG_REPORT_URL=""https://github.com/whitewaterfoundry/Pengwin"" VERSION_CODENAME=buster ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-625060561,https://api.github.com/repos/simonw/datasette/issues/744,625060561,MDEyOklzc3VlQ29tbWVudDYyNTA2MDU2MQ==,30607,aborruso,2020-05-07T06:38:24Z,2020-05-07T06:38:24Z,NONE,"Hi @simonw, I could probably try to do it in Python for Windows. I do not like to do these things in a Windows environment, but the WSL Linux env (in which I do a lot of great things) is probably not an environment that will be tested for datasette. On Windows I shouldn't have any problems. Am I right?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-625066073,https://api.github.com/repos/simonw/datasette/issues/744,625066073,MDEyOklzc3VlQ29tbWVudDYyNTA2NjA3Mw==,30607,aborruso,2020-05-07T06:53:09Z,2020-05-07T06:53:09Z,NONE,"@simonw another error, starting from Windows. 
I run ``` datasette publish heroku -n comunepa --template-dir template commissioniComunePalermo.db ``` And I have ``` Traceback (most recent call last): File ""c:\python37\lib\runpy.py"", line 193, in _run_module_as_main ""__main__"", mod_spec) File ""c:\python37\lib\runpy.py"", line 85, in _run_code exec(code, run_globals) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\Scripts\datasette.exe\__main__.py"", line 9, in File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 782, in main rv = self.invoke(ctx) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 610, in invoke return callback(*args, **kwargs) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py"", line 53, in heroku line.split()[0] for line in check_output([""heroku"", ""plugins""]).splitlines() File ""c:\python37\lib\subprocess.py"", line 395, in check_output **kwargs).stdout File ""c:\python37\lib\subprocess.py"", line 472, in run with Popen(*popenargs, **kwargs) as process: File ""c:\python37\lib\subprocess.py"", line 775, in __init__ restore_signals, start_new_session) File ""c:\python37\lib\subprocess.py"", line 1178, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The specified file could not be found ``` [files.zip](https://github.com/simonw/datasette/files/4591173/files.zip) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-625083715,https://api.github.com/repos/simonw/datasette/issues/744,625083715,MDEyOklzc3VlQ29tbWVudDYyNTA4MzcxNQ==,30607,aborruso,2020-05-07T07:34:18Z,2020-05-07T07:34:18Z,NONE,"In Windows I'm not very strong. I use debian (inside WSL). However these are the possible steps: - I have installed Python 3 for win (I have 3.7.3); - I have installed heroku cli for win64 and logged in; - I have installed datasette running `python -m pip install --upgrade --user datasette`. It's a very basic Python env that I do not use. This time only to reach my goal: try to publish using custom template","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-625091976,https://api.github.com/repos/simonw/datasette/issues/744,625091976,MDEyOklzc3VlQ29tbWVudDYyNTA5MTk3Ng==,30607,aborruso,2020-05-07T07:51:25Z,2020-05-07T07:51:25Z,NONE,"I have installed `heroku plugins:install heroku-builds`, but I have the same error. 
Then I have removed from `datasette\publish\heroku.py` ```python # Check for heroku-builds plugin plugins = [ line.split()[0] for line in check_output([""heroku"", ""plugins""]).splitlines() ] if b""heroku-builds"" not in plugins: click.echo( ""Publishing to Heroku requires the heroku-builds plugin to be installed."" ) click.confirm( ""Install it? (this will run `heroku plugins:install heroku-builds`)"", abort=True, ) call([""heroku"", ""plugins:install"", ""heroku-builds""]) ``` And now I have ``` Traceback (most recent call last): File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py"", line 210, in temporary_heroku_directory yield File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py"", line 96, in heroku list_output = check_output([""heroku"", ""apps:list"", ""--json""]).decode( File ""c:\python37\lib\subprocess.py"", line 395, in check_output **kwargs).stdout File ""c:\python37\lib\subprocess.py"", line 472, in run with Popen(*popenargs, **kwargs) as process: File ""c:\python37\lib\subprocess.py"", line 775, in __init__ restore_signals, start_new_session) File ""c:\python37\lib\subprocess.py"", line 1178, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The specified file could not be found During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""c:\python37\lib\runpy.py"", line 193, in _run_module_as_main ""__main__"", mod_spec) File ""c:\python37\lib\runpy.py"", line 85, in _run_code exec(code, run_globals) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\Scripts\datasette.exe\__main__.py"", line 9, in File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 782, in main rv = self.invoke(ctx) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py"", line 610, in invoke return callback(*args, **kwargs) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py"", line 120, in heroku call([""heroku"", ""builds:create"", ""-a"", app_name, ""--include-vcs-ignore""]) File ""c:\python37\lib\contextlib.py"", line 130, in __exit__ self.gen.throw(type, value, traceback) File ""C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py"", line 213, in temporary_heroku_directory tmp.cleanup() File ""c:\python37\lib\tempfile.py"", line 809, in cleanup _shutil.rmtree(self.name) File ""c:\python37\lib\shutil.py"", line 513, in rmtree return _rmtree_unsafe(path, onerror) File ""c:\python37\lib\shutil.py"", line 401, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File ""c:\python37\lib\shutil.py"", line 399, in _rmtree_unsafe os.rmdir(path) PermissionError: [WinError 32] Unable to access file. 
The file is being used by another process: 'C:\\Users\\aborr\\AppData\\Local\\Temp\\tmpkcxy8i_q' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-632249565,https://api.github.com/repos/simonw/datasette/issues/744,632249565,MDEyOklzc3VlQ29tbWVudDYzMjI0OTU2NQ==,30607,aborruso,2020-05-21T17:47:40Z,2020-05-21T17:47:40Z,NONE,"@simonw can I test it now? What must I do to update it? Thank you","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-632255088,https://api.github.com/repos/simonw/datasette/issues/744,632255088,MDEyOklzc3VlQ29tbWVudDYzMjI1NTA4OA==,30607,aborruso,2020-05-21T17:58:51Z,2020-05-21T17:58:51Z,NONE,"Thank you very much!! I will try it and write to you here","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-632305868,https://api.github.com/repos/simonw/datasette/issues/744,632305868,MDEyOklzc3VlQ29tbWVudDYzMjMwNTg2OA==,30607,aborruso,2020-05-21T19:43:23Z,2020-05-21T19:43:23Z,NONE,"@simonw now I have ``` Traceback (most recent call last): File ""/home/aborruso/.local/bin/datasette"", line 8, in sys.exit(cli()) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 782, in main rv = self.invoke(ctx) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 610, in invoke return callback(*args, **kwargs) File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py"", line 103, in heroku extra_metadata, File ""/usr/lib/python3.7/contextlib.py"", line 112, in __enter__ return next(self.gen) File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py"", line 191, in temporary_heroku_directory os.path.join(tmp.name, ""templates""), File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py"", line 605, in link_or_copy_directory shutil.copytree(src, dst, copy_function=os.link, dirs_exist_ok=True) TypeError: copytree() got an unexpected keyword argument 'dirs_exist_ok' ``` Must I open a new issue? 
Thank you","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-634283355,https://api.github.com/repos/simonw/datasette/issues/744,634283355,MDEyOklzc3VlQ29tbWVudDYzNDI4MzM1NQ==,30607,aborruso,2020-05-26T21:15:34Z,2020-05-26T21:15:34Z,NONE,"> Oh no! It looks like `dirs_exist_ok` is Python 3.8 only. This is a bad fix, it needs to work on older Python's too. Re-opening. Thank you very much","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-634446887,https://api.github.com/repos/simonw/datasette/issues/744,634446887,MDEyOklzc3VlQ29tbWVudDYzNDQ0Njg4Nw==,30607,aborruso,2020-05-27T06:01:28Z,2020-05-27T06:01:28Z,NONE,"Dear @simonw thank you for your time, now IT WORKS!!! I hope that this edit to datasette code is not for an exceptional case (my PC configuration) and that it will be useful to other users. Thank you again!!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/datasette/issues/744#issuecomment-635386935,https://api.github.com/repos/simonw/datasette/issues/744,635386935,MDEyOklzc3VlQ29tbWVudDYzNTM4NjkzNQ==,30607,aborruso,2020-05-28T14:32:53Z,2020-05-28T14:32:53Z,NONE,"Wow, I'm in some way very proud!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",608058890,link_or_copy_directory() error - Invalid cross-device link, https://github.com/simonw/sqlite-utils/issues/69#issuecomment-710768396,https://api.github.com/repos/simonw/sqlite-utils/issues/69,710768396,MDEyOklzc3VlQ29tbWVudDcxMDc2ODM5Ng==,30607,aborruso,2020-10-17T07:46:59Z,2020-10-17T07:46:59Z,NONE,Great @simonw thank you very much,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",534507142,Feature request: enable extensions loading, https://github.com/simonw/sqlite-utils/issues/188#issuecomment-710778368,https://api.github.com/repos/simonw/sqlite-utils/issues/188,710778368,MDEyOklzc3VlQ29tbWVudDcxMDc3ODM2OA==,30607,aborruso,2020-10-17T08:52:58Z,2020-10-17T08:52:58Z,NONE,"I have done a stupid question. If I run ``` sqlite-utils :memory: ""select spatialite_version()"" --load-extension=/usr/local/lib/mod_spatialite.so ``` I have `[{""spatialite_version()"": ""5.0.0""}]` Thank you for this great tool","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",723708310,About loading spatialite, https://github.com/simonw/datasette/issues/1220#issuecomment-778008752,https://api.github.com/repos/simonw/datasette/issues/1220,778008752,MDEyOklzc3VlQ29tbWVudDc3ODAwODc1Mg==,30607,aborruso,2021-02-12T06:37:34Z,2021-02-12T06:37:34Z,NONE,"I have used my path, I'm running it from the folder in wich I have the db. Do I must an absolute path? 
Must I create exactly that folder?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/simonw/datasette/issues/1220#issuecomment-778467759,https://api.github.com/repos/simonw/datasette/issues/1220,778467759,MDEyOklzc3VlQ29tbWVudDc3ODQ2Nzc1OQ==,30607,aborruso,2021-02-12T21:35:17Z,2021-02-12T21:35:17Z,NONE,Thank you,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/simonw/datasette/issues/1845#issuecomment-1279924827,https://api.github.com/repos/simonw/datasette/issues/1845,1279924827,IC_kwDOBm6k_c5MShpb,30636,kindly,2022-10-16T08:54:53Z,2022-10-16T08:54:53Z,NONE,"> It was part of a larger idea I was exploring around ensuring Datasette could be used to start interacting with CSV/JSON data out-of-the-box, without needing to first convert that data into SQLite using separate tools. This would be great. My organization deals with very nested JSON open data and I have been wanting to find a way to hook into datasette so that the analysts do not have to first convert to sqlite. This can kind of be done with datasette-lite. From this random nested JSON API: https://api.nobelprize.org/v1/prize.json You can use the API of https://flatterer.herokuapp.com to return a multi table sqlite database: https://lite.datasette.io/?url=https://flatterer.herokuapp.com/api/convert?output_format=sqlite%26file_url=https://api.nobelprize.org/v1/prize.json This is great and fun, but it would be even better if there were some plugin mechanism through which you could feed a local datasette a nested JSON file directly, possibly hooking into other flattening tools for this.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1410305897,Reconsider the Datasette first-run experience, https://github.com/simonw/datasette/issues/782#issuecomment-782745199,https://api.github.com/repos/simonw/datasette/issues/782,782745199,MDEyOklzc3VlQ29tbWVudDc4Mjc0NTE5OQ==,30665,frankieroberto,2021-02-20T20:32:03Z,2021-02-20T20:32:03Z,NONE,"I think it’s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default. Mainly because it allows you to add extra keys in a backwards-compatible way. It also just seems more expected somehow.
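To illustrate (hypothetical key names, sketch only): an object at the top level can grow new keys without breaking existing consumers, whereas a bare array has nowhere to put them.

```python
# Hypothetical response shapes, for illustration only.
v1 = {'rows': [{'id': 1}, {'id': 2}]}

# A later version can add keys; clients that only read 'rows' keep working.
v2 = {'rows': [{'id': 1}, {'id': 2}], 'next': 'dG9rZW4', 'truncated': False}

# A bare array offers no backwards-compatible place for that extra metadata.
bare = [{'id': 1}, {'id': 2}]
```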
The API design guidance for the UK government also recommends this: https://www.gov.uk/guidance/gds-api-technical-and-data-standards#use-json I also strongly dislike having versioned APIs (eg with a `/v1/` path prefix), as it invariably means that old versions stop working at some point, even though the bit of the API you’re using might not have changed at all.","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 1}",627794879,Redesign default .json format, https://github.com/simonw/datasette/issues/782#issuecomment-782746755,https://api.github.com/repos/simonw/datasette/issues/782,782746755,MDEyOklzc3VlQ29tbWVudDc4Mjc0Njc1NQ==,30665,frankieroberto,2021-02-20T20:44:05Z,2021-02-20T20:44:05Z,NONE,"Minor suggestion: rename `size` query param to `limit`, to better reflect that it’s a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword? I like the idea of specifying a limit of 0 if you don’t want any row data - and returning an empty array under the `rows` key seems fine. Have you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don’t parse the JSON. Could be default (can’t be much bigger with gzip?) or opt-in.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",627794879,Redesign default .json format, https://github.com/simonw/datasette/issues/782#issuecomment-783265830,https://api.github.com/repos/simonw/datasette/issues/782,783265830,MDEyOklzc3VlQ29tbWVudDc4MzI2NTgzMA==,30665,frankieroberto,2021-02-22T10:21:14Z,2021-02-22T10:21:14Z,NONE,"@simonw: > The problem there is that ?_size=x isn't actually doing the same thing as the SQL limit keyword. Interesting! Although I don't think it matters too much what the underlying implementation is - I more meant that `limit` is familiar to developers conceptually as ""up to and including this number, if they exist"", whereas ""size"" is potentially more ambiguous. However, it's probably no big deal either way.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",627794879,Redesign default .json format, https://github.com/simonw/datasette/pull/1204#issuecomment-951731255,https://api.github.com/repos/simonw/datasette/issues/1204,951731255,IC_kwDOBm6k_c44ukQ3,30934,20after4,2021-10-26T09:01:28Z,2021-10-26T09:01:28Z,NONE,"> Writing the tests will be a bit tricky since we need to confirm that the `include_table_top(datasette, database, actor, table)` arguments were all passed correctly but the only thing we get back from the plugin is a list of templates. Maybe encode those values into the template names somehow? Why not return a data structure instead of just a template name? I've already done some custom hacking to modify datasette but the plugin mechanism you are building here would be much cleaner than what I've built.
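For instance (a hypothetical shape, not the actual hook API), the hook could bundle the arguments it received alongside the template names, which the tests could then assert on directly:

```python
# Hypothetical return value for include_table_top(), for illustration only.
def include_table_top(datasette, database, actor, table):
    return {
        'templates': ['include_table_top.html'],
        # Echo the call arguments back so a test can verify them directly.
        'context': {'database': database, 'actor': actor, 'table': table},
    }
```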
I'd be happy to help with testing this PR and fleshing it out further if you are still considering merging this.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",793002853,WIP: Plugin includes, https://github.com/simonw/datasette/issues/878#issuecomment-951740637,https://api.github.com/repos/simonw/datasette/issues/878,951740637,IC_kwDOBm6k_c44umjd,30934,20after4,2021-10-26T09:12:15Z,2021-10-26T09:12:15Z,NONE,"This sounds really ambitious but also really awesome. I like the idea that basically any piece of a page could be selectively replaced. It sort of sounds like a python asyncio version of https://github.com/observablehq/runtime","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648435885,"New pattern for views that return either JSON or HTML, available for plugins", https://github.com/simonw/datasette/issues/1532#issuecomment-981966693,https://api.github.com/repos/simonw/datasette/issues/1532,981966693,IC_kwDOBm6k_c46h59l,30934,20after4,2021-11-29T19:56:52Z,2021-11-29T19:56:52Z,NONE,FWIW I've written some web components that consume the json api and I think it's a really nice way to work with datasette. I like the combination with datasette+sqlite as a back-end feeding data to a front-end that's entirely javascript + html.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1065429936,Use datasette-table Web Component to guide the design of the JSON API for 1.0, https://github.com/simonw/datasette/issues/1304#issuecomment-981980048,https://api.github.com/repos/simonw/datasette/issues/1304,981980048,IC_kwDOBm6k_c46h9OQ,30934,20after4,2021-11-29T20:13:53Z,2021-11-29T20:14:11Z,NONE,There isn't any way to do this with sqlite as far as I know. The only option is to insert the right number of ? placeholders into the sql template and then provide an array of values.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",863884805,"Document how to send multiple values for ""Named parameters"" ", https://github.com/simonw/datasette/issues/1532#issuecomment-982745406,https://api.github.com/repos/simonw/datasette/issues/1532,982745406,IC_kwDOBm6k_c46k4E-,30934,20after4,2021-11-30T15:28:57Z,2021-11-30T15:28:57Z,NONE,"It's a really great API and the documentation is really great too. Honestly, in more than 20 years of professional experience, I haven't worked with any software API that was more of a joy to use. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1065429936,Use datasette-table Web Component to guide the design of the JSON API for 1.0, https://github.com/simonw/datasette/issues/1304#issuecomment-988461884,https://api.github.com/repos/simonw/datasette/issues/1304,988461884,IC_kwDOBm6k_c466rs8,30934,20after4,2021-12-08T03:20:26Z,2021-12-08T03:20:26Z,NONE,"The easiest or most straightforward thing to do is to use named parameters like: ```sql select * from mytable where key IN (:p1, :p2, :p3) ``` And simply construct the list of placeholders dynamically based on the number of values. Doing this is possible with datasette if you forgo ""canned queries"" and just use the raw query endpoint and pass the query sql, along with p1, p2 ...
in the request.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",863884805,"Document how to send multiple values for ""Named parameters"" ", https://github.com/simonw/datasette/issues/1304#issuecomment-988463455,https://api.github.com/repos/simonw/datasette/issues/1304,988463455,IC_kwDOBm6k_c466sFf,30934,20after4,2021-12-08T03:23:14Z,2021-12-08T03:23:14Z,NONE,I actually think it would be a useful thing to add support for in datasette. It wouldn't be difficult to unwind an array of params and add the placeholders automatically.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",863884805,"Document how to send multiple values for ""Named parameters"" ", https://github.com/simonw/datasette/issues/1528#issuecomment-988468238,https://api.github.com/repos/simonw/datasette/issues/1528,988468238,IC_kwDOBm6k_c466tQO,30934,20after4,2021-12-08T03:35:45Z,2021-12-08T03:35:45Z,NONE,"FWIW I implemented something similar with a bit of plugin code: ```python @hookimpl def canned_queries(datasette: Datasette, database: str) -> Mapping[str, str]: # load ""canned queries"" from the filesystem under # www/sql/db/query_name.sql queries = {} sqldir = Path(__file__).parent.parent / ""sql"" if database: sqldir = sqldir / database if not sqldir.is_dir(): return queries for f in sqldir.glob('*.sql'): try: sql = f.read_text('utf8').strip() if not len(sql): log(f""Skipping empty canned query file: {f}"") continue queries[f.stem] = { ""sql"": sql } except OSError as err: log(err) return queries ```","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",1060631257,"Add new `""sql_file""` key to Canned Queries in metadata?", https://github.com/simonw/datasette/pull/2052#issuecomment-1722943484,https://api.github.com/repos/simonw/datasette/issues/2052,1722943484,IC_kwDOBm6k_c5msgf8,30934,20after4,2023-09-18T08:14:47Z,2023-09-18T08:14:47Z,NONE,This is such a well thought out contribution. I don't think I've seen such a thoroughly considered PR on any project in recent memory.,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/dogsheep/swarm-to-sqlite/issues/12#issuecomment-941274088,https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/12,941274088,IC_kwDODD6af844GrPo,33631,fs111,2021-10-12T18:31:57Z,2021-10-12T18:31:57Z,NONE,I am running into the same problem. Is there any workaround?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",951817328,403 when getting token, https://github.com/simonw/datasette/pull/1574#issuecomment-1008279307,https://api.github.com/repos/simonw/datasette/issues/1574,1008279307,IC_kwDOBm6k_c48GR8L,33631,fs111,2022-01-09T11:26:06Z,2022-01-09T11:26:06Z,NONE,"@fgregg my thinking was backwards compatibility. I don't know what people do to their builds, I just wanted a smaller image for my use case. @simonw any chance to take a look at this? 
If there is no interest, feel free to close the PR.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1084193403,introduce new option for datasette package to use a slim base image, https://github.com/simonw/datasette/pull/1574#issuecomment-1084216224,https://api.github.com/repos/simonw/datasette/issues/1574,1084216224,IC_kwDOBm6k_c5An9Og,33631,fs111,2022-03-31T07:45:25Z,2022-03-31T07:45:25Z,NONE,"@simonw I like that you want to go ""slim by default"". Do you want another PR for that or should I just wait?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1084193403,introduce new option for datasette package to use a slim base image, https://github.com/simonw/datasette/pull/1574#issuecomment-1214765672,https://api.github.com/repos/simonw/datasette/issues/1574,1214765672,IC_kwDOBm6k_c5IZ9po,33631,fs111,2022-08-15T08:49:31Z,2022-08-15T08:49:31Z,NONE,closing as this is now the default,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1084193403,introduce new option for datasette package to use a slim base image, https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503,https://api.github.com/repos/simonw/sqlite-utils/issues/46,592999503,MDEyOklzc3VlQ29tbWVudDU5Mjk5OTUwMw==,35075,chrishas35,2020-02-29T22:08:20Z,2020-02-29T22:08:20Z,NONE,"@simonw any thoughts on allowing extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the ""value"" column (the documentation uses name), but when you use the `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named ""value"". I have an existing lookup table that I've populated with columns ""id"" and ""name"" as opposed to ""id"" and ""value"", and it seems I can't use `extracts=`, unless I'm missing something... Initial thought on how to do this would be to allow the dictionary value to be a (table name, column name) tuple... so: ``` table = db.table(""trees"", extracts={""species_id"": (""Species"", ""name"")}) ``` I haven't dug too much into the existing code yet, but does this make sense? Worth doing? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",471780443,extracts= option for insert/update/etc, https://github.com/simonw/sqlite-utils/issues/89#issuecomment-593122605,https://api.github.com/repos/simonw/sqlite-utils/issues/89,593122605,MDEyOklzc3VlQ29tbWVudDU5MzEyMjYwNQ==,35075,chrishas35,2020-03-01T17:33:11Z,2020-03-01T17:33:11Z,NONE,"If you're happy with the proposed implementation, I have code & tests written that I'll get ready for a PR.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",573578548,Ability to customize columns used by extracts= feature, https://github.com/simonw/sqlite-utils/issues/249#issuecomment-803502424,https://api.github.com/repos/simonw/sqlite-utils/issues/249,803502424,MDEyOklzc3VlQ29tbWVudDgwMzUwMjQyNA==,36287,prabhur,2021-03-21T02:43:32Z,2021-03-21T02:43:32Z,NONE,"> Did you run `enable-fts` before you inserted the data? > > If so you'll need to run `populate-fts` after the insert to populate the FTS index.
> > A better solution may be to add `--create-triggers` to the `enable-fts` command to add triggers that will automatically keep the index updated as you insert new records. Wow. Wasn't expecting a response this quick, especially during a weekend. :-) Sincerely appreciate it. I tried the `populate-fts` and that did the trick. My bad for not consulting the docs again. I think I forgot to add that step when I automated the workflow. Thanks for the suggestion. I'll close this issue. Have a great weekend and many many thanks for creating this suite of tools around sqlite.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",836963850,Full text search possibly broken?, https://github.com/simonw/datasette/issues/1624#issuecomment-1261194164,https://api.github.com/repos/simonw/datasette/issues/1624,1261194164,IC_kwDOBm6k_c5LLEu0,38532,palfrey,2022-09-28T16:54:22Z,2022-09-28T16:54:22Z,NONE,https://github.com/simonw/datasette-cors seems to work around this,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1122427321,Index page `/` has no CORS headers, https://github.com/dogsheep/dogsheep-photos/issues/20#issuecomment-633234781,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/20,633234781,MDEyOklzc3VlQ29tbWVudDYzMzIzNDc4MQ==,41439,dmd,2020-05-24T13:56:13Z,2020-05-24T13:56:13Z,NONE,"As that seems to be closed, can you give a hint on how to make this work?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",613006393,Ability to serve thumbnailed Apple Photo from its place on disk, https://github.com/simonw/sqlite-utils/issues/540#issuecomment-1537744000,https://api.github.com/repos/simonw/sqlite-utils/issues/540,1537744000,IC_kwDOCGYnMM5bqByA,42327,pquentin,2023-05-08T04:56:12Z,2023-05-08T04:56:12Z,NONE,"Hey @simonw, urllib3 maintainer here :wave: Sorry for breaking your CI. I understand you may prefer to pin the Python version, but note that specifying just `python: ""3""` will get you the latest. We use that in urllib3: https://github.com/urllib3/urllib3/blob/main/.readthedocs.yml I can open PRs to sqlite-utils / datasette if you're interested","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1699184583,sphinx.builders.linkcheck build error, https://github.com/simonw/datasette/issues/409#issuecomment-472844001,https://api.github.com/repos/simonw/datasette/issues/409,472844001,MDEyOklzc3VlQ29tbWVudDQ3Mjg0NDAwMQ==,43100,Uninen,2019-03-14T13:04:20Z,2019-03-14T13:04:42Z,NONE,It seems this affects the Datasette Publish site as well: https://github.com/simonw/datasette-publish-support/issues/3,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",408376825,Zeit API v1 does not work for new users - need to migrate to v2, https://github.com/simonw/datasette/issues/1886#issuecomment-1316289392,https://api.github.com/repos/simonw/datasette/issues/1886,1316289392,IC_kwDOBm6k_c5OdPtw,45195,rtanglao,2022-11-16T03:54:17Z,2022-11-16T03:58:56Z,NONE,"Happy Birthday Datasette! Thanks Simon!! I use datasette on everything, most notably [my flickr metadata SQLite DB](https://www.dropbox.com/s/6j10e2vohp2j5kf/roland2019-2020.db?dl=0) to make art.
Datasette lite on my 2019 flickr metadata is super helpful too: https://lite.datasette.io/?csv=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-flickr-sqlite-csv%2Fmain%2F2019-roland-flickr-metadata.csv Even better datasette lite on all firefox support questions from 2021: https://lite.datasette.io/?url=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-kits-api3%2Fmain%2FYEARLY_CSV_FILES%2F2021-firefox-sumo-questions.db Thanks again Simon! So great! What a gift to the world!!!!!! ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1447050738,"Call for birthday presents: if you're using Datasette, let us know how you're using it here", https://github.com/simonw/datasette/issues/619#issuecomment-697973420,https://api.github.com/repos/simonw/datasette/issues/619,697973420,MDEyOklzc3VlQ29tbWVudDY5Nzk3MzQyMA==,45416,obra,2020-09-23T21:07:58Z,2020-09-23T21:07:58Z,NONE,"I've just run into this after crafting a complex query and discovered that hitting back loses my query. Even showing me the whole bad query would be a huge improvement over the current status quo.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",520655983,"""Invalid SQL"" page should let you edit the SQL", https://github.com/simonw/datasette/issues/123#issuecomment-698110186,https://api.github.com/repos/simonw/datasette/issues/123,698110186,MDEyOklzc3VlQ29tbWVudDY5ODExMDE4Ng==,45416,obra,2020-09-24T04:49:51Z,2020-09-24T04:49:51Z,NONE,"As a half-measure, I'd get value out of being able to upload a CSV and have datasette run csv-to-sqlite on it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats, https://github.com/simonw/datasette/issues/123#issuecomment-698174957,https://api.github.com/repos/simonw/datasette/issues/123,698174957,MDEyOklzc3VlQ29tbWVudDY5ODE3NDk1Nw==,45416,obra,2020-09-24T07:42:05Z,2020-09-24T07:42:05Z,NONE," Oh. Awesome. On Thu, Sep 24, 2020 at 12:28:53AM -0700, Simon Willison wrote: > @obra there's a plugin for that! https://github.com/simonw/ > datasette-upload-csvs > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub, or unsubscribe.* > -- ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats, https://github.com/simonw/datasette/issues/187#issuecomment-489353316,https://api.github.com/repos/simonw/datasette/issues/187,489353316,MDEyOklzc3VlQ29tbWVudDQ4OTM1MzMxNg==,46059,carsonyl,2019-05-04T18:36:36Z,2019-05-04T18:36:36Z,NONE,"Hi @simonw - I just hit this issue when trying out Datasette after your PyCon talk today. Datasette is pinned to Sanic 0.7.0, but it looks like 0.8.0 added the option to remove the uvloop dependency for Windows by having an environment variable `SANIC_NO_UVLOOP` at install time. 
Maybe that'll be sufficient before a port to Starlette?","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",309033998,Windows installation error, https://github.com/simonw/datasette/issues/633#issuecomment-620841496,https://api.github.com/repos/simonw/datasette/issues/633,620841496,MDEyOklzc3VlQ29tbWVudDYyMDg0MTQ5Ng==,46165,nryberg,2020-04-28T20:37:50Z,2020-04-28T20:37:50Z,NONE,"Using the Heroku web interface, you can set WEB_CONCURRENCY = 1 ![image](https://user-images.githubusercontent.com/46165/80535319-352c8100-8966-11ea-9d4f-df2622ec8bff.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",522334771,"Publish to Heroku is broken: ""WARNING: You must pass the application as an import string to enable 'reload' or 'workers""", https://github.com/dogsheep/evernote-to-sqlite/issues/14#issuecomment-911772943,https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/14,911772943,IC_kwDOEhK-wc42WI0P,46968,step21,2021-09-02T14:53:11Z,2021-09-02T14:53:11Z,NONE,"Additionally, assuming the line numbers match up with the provided enex file, the mentioned line plus one before and after is as follows: ``` ]]>

```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",986829194,xml.etree.ElementTree.Parse Error - mismatched tag, https://github.com/simonw/datasette/issues/186#issuecomment-374872202,https://api.github.com/repos/simonw/datasette/issues/186,374872202,MDEyOklzc3VlQ29tbWVudDM3NDg3MjIwMg==,47107,stefanocudini,2018-03-21T09:07:22Z,2018-03-21T09:07:22Z,NONE,--debug is perfect tnk,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",306811513,proposal new option to disable user agents cache, https://github.com/simonw/datasette/issues/141#issuecomment-346974336,https://api.github.com/repos/simonw/datasette/issues/141,346974336,MDEyOklzc3VlQ29tbWVudDM0Njk3NDMzNg==,50138,janimo,2017-11-26T00:00:35Z,2017-11-26T00:00:35Z,NONE,FWIW I worked around this by setting TMPDIR to ~/tmp before running the command.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275814941,datasette publish can fail if /tmp is on a different device, https://github.com/simonw/datasette/issues/124#issuecomment-346987395,https://api.github.com/repos/simonw/datasette/issues/124,346987395,MDEyOklzc3VlQ29tbWVudDM0Njk4NzM5NQ==,50138,janimo,2017-11-26T06:24:08Z,2017-11-26T06:24:08Z,NONE,"Are there performance gains when using immutable as opposed to read-only? From what I see other processes can still modify the DB when immutable, but there are no change notifications.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125805,Option to open readonly but not immutable, https://github.com/simonw/datasette/issues/124#issuecomment-347123991,https://api.github.com/repos/simonw/datasette/issues/124,347123991,MDEyOklzc3VlQ29tbWVudDM0NzEyMzk5MQ==,50138,janimo,2017-11-27T09:25:15Z,2017-11-27T09:25:15Z,NONE,"That's the only reference to immutable I saw as well, making me think that there may be no perceivable advantages over simply using mode=ro. Since the database is never or seldom updated the change notifications should not impact performance.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125805,Option to open readonly but not immutable, https://github.com/simonw/datasette/issues/1426#issuecomment-974711959,https://api.github.com/repos/simonw/datasette/issues/1426,974711959,IC_kwDOBm6k_c46GOyX,52649,tannewt,2021-11-20T21:11:51Z,2021-11-20T21:11:51Z,NONE,I think another thing would be to make `/pages/robots.txt` work. That way you can use jinja to generate a desired robots.txt. I'm using it to allow the main index and what it links to to be crawled (but not the database pages directly.),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",964322136,"Manage /robots.txt in Datasette core, block robots by default", https://github.com/simonw/datasette/issues/1732#issuecomment-1115542067,https://api.github.com/repos/simonw/datasette/issues/1732,1115542067,IC_kwDOBm6k_c5CfdIz,52649,tannewt,2022-05-03T01:50:44Z,2022-05-03T01:50:44Z,NONE,"I haven’t set one up unfortunately. My time is very limited because we just had a baby. On Mon, May 2, 2022, at 6:42 PM, Simon Willison wrote: > > > Thanks, this definitely sounds like a bug. 
Do you have simple steps to reproduce this? > > > — > Reply to this email directly, view it on GitHub , or unsubscribe . > You are receiving this because you authored the thread.Message ID: ***@***.***> > ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1221849746,Custom page variables aren't decoded, https://github.com/simonw/datasette/issues/2133#issuecomment-1671649530,https://api.github.com/repos/simonw/datasette/issues/2133,1671649530,IC_kwDOBm6k_c5jo1j6,54462,HaveF,2023-08-09T15:41:14Z,2023-08-09T15:41:14Z,NONE,"Yes, using this approach(`datasette install -r requirements.txt`) will result in more consistency. I'm curious about the results of the `datasette plugins --all` command. Where will we use the output of this command? Will it include configuration information for these plugins in the future? If so, will we need to consider the configuration of these plugins in addition to installing them on different computers?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1841501975,[feature request]`datasette install plugins.json` options, https://github.com/simonw/datasette/issues/2133#issuecomment-1672360472,https://api.github.com/repos/simonw/datasette/issues/2133,1672360472,IC_kwDOBm6k_c5jrjIY,54462,HaveF,2023-08-10T00:31:24Z,2023-08-10T00:31:24Z,NONE,"It looks very nice now. Finally, no more manual installation of plugins one by one. Thank you, Simon! ❤️ ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1841501975,[feature request]`datasette install plugins.json` options, https://github.com/simonw/datasette/issues/1171#issuecomment-754911290,https://api.github.com/repos/simonw/datasette/issues/1171,754911290,MDEyOklzc3VlQ29tbWVudDc1NDkxMTI5MA==,59874,rcoup,2021-01-05T21:31:15Z,2021-01-05T21:31:15Z,NONE,"We did this for [Sno](https://sno.earth) under macOS — it's a PyInstaller binary/setup which uses [Packages](http://s.sudre.free.fr/Software/Packages/about.html) for packaging. * [Building & Signing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L67-L95) * [Packaging & Notarizing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L121-L215) * [Github Workflow](https://github.com/koordinates/sno/blob/master/.github/workflows/build.yml#L228-L269) has the CI side of it FYI (if you ever get to it) for Windows you need to get a code signing certificate. And if you want automated CI, you'll want to get an ""EV CodeSigning for HSM"" certificate from GlobalSign, which then lets you put the certificate into Azure Key Vault. Which you can use with [azuresigntool](https://github.com/vcsjones/AzureSignTool) to sign your code & installer. (Non-EV certificates are a waste of time, the user still gets big warnings at install time). 
","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",778450486,GitHub Actions workflow to build and sign macOS binary executables, https://github.com/simonw/datasette/issues/93#issuecomment-344424382,https://api.github.com/repos/simonw/datasette/issues/93,344424382,MDEyOklzc3VlQ29tbWVudDM0NDQyNDM4Mg==,67420,atomotic,2017-11-14T22:42:16Z,2017-11-14T22:42:16Z,NONE,"tried quickly, this seems working: ``` ~ pip3 install pyinstaller ~ pyinstaller -F --add-data /usr/local/lib/python3.6/site-packages/datasette/templates:datasette/templates --add-data /usr/local/lib/python3.6/site-packages/datasette/static:datasette/static /usr/local/bin/datasette ~ du -h dist/datasette 6.8M dist/datasette ~ file dist/datasette dist/datasette: Mach-O 64-bit executable x86_64 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273944952,Package as standalone binary, https://github.com/simonw/datasette/issues/93#issuecomment-344430299,https://api.github.com/repos/simonw/datasette/issues/93,344430299,MDEyOklzc3VlQ29tbWVudDM0NDQzMDI5OQ==,67420,atomotic,2017-11-14T23:06:33Z,2017-11-14T23:06:33Z,NONE,"i will look better tomorrow, it's late i surely made some mistake https://asciinema.org/a/ZyAWbetrlriDadwWyVPUWB94H","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273944952,Package as standalone binary, https://github.com/simonw/datasette/issues/93#issuecomment-344516406,https://api.github.com/repos/simonw/datasette/issues/93,344516406,MDEyOklzc3VlQ29tbWVudDM0NDUxNjQwNg==,67420,atomotic,2017-11-15T08:09:41Z,2017-11-15T08:09:41Z,NONE,actually you can use travis to build for linux/macos and [appveyor](https://www.appveyor.com/) to build for windows.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273944952,Package as standalone binary, https://github.com/dogsheep/dogsheep-photos/pull/36#issuecomment-1006708046,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36,1006708046,IC_kwDOD079W848ASVO,71983,scoates,2022-01-06T16:04:46Z,2022-01-06T16:04:46Z,NONE,"This one got me, today, too. 👍","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",988493790,Correct naming of tool in readme, https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645515103,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47,645515103,MDEyOklzc3VlQ29tbWVudDY0NTUxNTEwMw==,73579,hpk42,2020-06-17T17:30:01Z,2020-06-17T17:30:01Z,NONE,"It's the one with python3.7:: >>> sqlite3.sqlite_version '3.11.0' On Wed, Jun 17, 2020 at 10:24 -0700, Simon Willison wrote: > That means your version of SQLite is old enough that it doesn't support the FTS5 extension. > > Could you share what operating system you're running, and what the output is that you get from running this? > > python -c 'import sqlite3; print(sqlite3.connect("":memory:"").execute(""select sqlite_version()"").fetchone()[0])' > > I can teach this tool to fall back on FTS4 if FTS5 isn't available. > > -- > You are receiving this because you authored the thread. 
> Reply to this email directly or view it on GitHub: > https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645512127 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",639542974,Fall back to FTS4 if FTS5 is not available, https://github.com/simonw/datasette/issues/2145#issuecomment-1685471752,https://api.github.com/repos/simonw/datasette/issues/2145,1685471752,IC_kwDOBm6k_c5kdkII,77071,pkulchenko,2023-08-21T01:07:23Z,2023-08-21T01:07:23Z,NONE,"@simonw, since you're referencing ""rowid"" column by name, I just want to note that there may be an existing rowid column with completely different semantics (https://www.sqlite.org/lang_createtable.html#rowid), which is likely to break this logic. I don't see a good way to detect a proper ""rowid"" name short of checking if there is a field with that name and using the alternative (`_rowid_` or `oid`), which is not ideal, but may work. In terms of the original issue, maybe a way to deal with it is to use rowid by default and then use primary key for WITHOUT ROWID tables (as they are guaranteed to be not null), but I suspect it may require significant changes to the API (and doesn't fully address the issue of what value to pass to indicate NULL when editing records). Would it make sense to generate a random string to indicate NULL values when editing?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1857234285,If a row has a primary key of `null` various things break, https://github.com/simonw/datasette/issues/2143#issuecomment-1690799608,https://api.github.com/repos/simonw/datasette/issues/2143,1690799608,IC_kwDOBm6k_c5kx434,77071,pkulchenko,2023-08-24T00:09:47Z,2023-08-24T00:10:41Z,NONE,"@simonw, FWIW, I do exactly the same thing for one of my projects (both to allow multiple configuration files to be passed on the command line and setting individual values) and it works quite well for me and my users. I even use the same parameter name for both (https://studio.zerobrane.com/doc-configuration#configuration-via-command-line), but I understand why you may want to use different ones for files and individual values. There is one small difference that I accept code snippets, but I don't think it matters much in this case.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1855885427,De-tangling Metadata before Datasette 1.0, https://github.com/simonw/datasette/issues/267#issuecomment-414860009,https://api.github.com/repos/simonw/datasette/issues/267,414860009,MDEyOklzc3VlQ29tbWVudDQxNDg2MDAwOQ==,78156,annapowellsmith,2018-08-21T23:57:51Z,2018-08-21T23:57:51Z,NONE,"Looks to me like hashing, redirects and caching were documented as part of https://github.com/simonw/datasette/commit/788a542d3c739da5207db7d1fb91789603cdd336#diff-3021b0e065dce289c34c3b49b3952a07 - so perhaps this can be closed? 
:tada:","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323716411,"Documentation for URL hashing, redirects and cache policy", https://github.com/simonw/datasette/pull/2155#issuecomment-1737906995,https://api.github.com/repos/simonw/datasette/issues/2155,1737906995,IC_kwDOBm6k_c5nllsz,79087,cadeef,2023-09-27T18:44:02Z,2023-09-27T18:44:02Z,NONE,@simonw Any chance we can get this tiny patch merged for an upcoming release?,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1865572575,Fix hupper.start_reloader entry point, https://github.com/simonw/datasette/issues/838#issuecomment-643083451,https://api.github.com/repos/simonw/datasette/issues/838,643083451,MDEyOklzc3VlQ29tbWVudDY0MzA4MzQ1MQ==,79913,tsibley,2020-06-12T06:04:14Z,2020-06-12T06:04:14Z,NONE,"Hmm, I haven't tried removing `ProxyPassReverse`, but it doesn't touch the HTML, which is the issue I'm seeing. You can read the [documentation here](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassreverse). `ProxyPassReverse` is a standard directive when proxying with Apache. I've used it dozens of times with other applications. Looking a little more at the code, I think the issue here is that the behaviour of `base_url` makes sense when Datasette is _mounted_ at a path within a larger application, but not when HTTP requests are being _proxied_ to it. In a _mount_ situation, it is perfectly fine to construct URLs reusing the domain and path from the request. In a _proxy_ situation, it never is, as the domain and path in the request are not the domain and path that the non-proxy client actually needs to use. That is, links which include the Apache → Datasette request origin, `localhost:8001`, instead of the browser → Apache request origin, `example.com`, will be broken. The tests you pointed to also reflect this in two ways: 1. They strip a leading `http://localhost`, allowing such URLs in the facet links to pass, but inclusion of that in a proxy situation would mean the URL is broken. 2. The test client emits direct ASGI events instead of actual proxied HTTP requests. The headers of these ASGI events don't reflect the way an HTTP proxy works; instead they pass through the original request path which contains `base_url`. This works because Datasette responds to requests equivalently at either `/…` or `/{base_url}/…`, which makes some sense in a _mount_ situation but is unconventional (albeit workable) for a proxied app. Apps that support being proxied automatically support being mounted, but apps that only support being mounted don't automatically support being proxied.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097,Incorrect URLs when served behind a proxy with base_url set, https://github.com/simonw/datasette/issues/1238#issuecomment-790857004,https://api.github.com/repos/simonw/datasette/issues/1238,790857004,MDEyOklzc3VlQ29tbWVudDc5MDg1NzAwNA==,79913,tsibley,2021-03-04T19:06:55Z,2021-03-04T19:06:55Z,NONE,"@rgieseke Ah, that's super helpful. 
Thank you for the workaround for now!","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813899472,Custom pages don't work with base_url setting, https://github.com/simonw/datasette/issues/838#issuecomment-795893813,https://api.github.com/repos/simonw/datasette/issues/838,795893813,MDEyOklzc3VlQ29tbWVudDc5NTg5MzgxMw==,79913,tsibley,2021-03-10T18:43:39Z,2021-03-10T18:43:39Z,NONE,"@simonw Unfortunately this issue as I reported it is not actually solved in version 0.55. Every link which is returned by the `Datasette.absolute_url` method is still wrong, because it uses the request URL as the base. This still includes the suggested facet links and pagination links. What I wrote originally still stands: > Although many of the URLs in the pages are correct (presumably because they either use absolute paths which include `base_url` or relative paths), the faceting and pagination links still use fully-qualified URLs pointing at `http://localhost:8001`. > > I looked into this a little in the source code, and it seems to be an issue anywhere `request.url` or `request.path` is used, as these contain the values for the request between the frontend (Apache) and backend (Datasette) server. Those properties are primarily used via the `path_with_…` family of utility functions and the `Datasette.absolute_url` method. Would you prefer to re-open this issue or have me create a new one? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097,Incorrect URLs when served behind a proxy with base_url set, https://github.com/simonw/datasette/issues/838#issuecomment-795939998,https://api.github.com/repos/simonw/datasette/issues/838,795939998,MDEyOklzc3VlQ29tbWVudDc5NTkzOTk5OA==,79913,tsibley,2021-03-10T19:16:55Z,2021-03-10T19:16:55Z,NONE,"Nod. The problem with the tests is that they're ignoring the origin (hostname, port) of links. In a reverse proxy situation, the frontend request origin is different than the backend request origin. The problem is Datasette generates links with the backend request origin.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097,Incorrect URLs when served behind a proxy with base_url set, https://github.com/simonw/datasette/issues/838#issuecomment-795950636,https://api.github.com/repos/simonw/datasette/issues/838,795950636,MDEyOklzc3VlQ29tbWVudDc5NTk1MDYzNg==,79913,tsibley,2021-03-10T19:24:13Z,2021-03-10T19:24:13Z,NONE,"I think this could be solved by one of: 1. Stop generating absolute URLs, e.g. ones that include an origin. Relative URLs with absolute paths are fine, as long as they take `base_url` into account (as they do now, yay!). 2. Extend `base_url` to include the expected frontend origin, and then use that information when generating absolute URLs. 3. Document which HTTP headers the reverse proxy should set (e.g. the `X-Forwarded-*` family of conventional headers) to pass the frontend origin information to Datasette, and then use that information when generating absolute URLs. 
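As a rough sketch of option 3 (header names assumed; a real deployment would also need the proxy to strip any spoofed inbound `X-Forwarded-*` headers):

```python
# Sketch only: derive the public-facing origin from conventional
# reverse-proxy headers, falling back to the backend request's own values.
def frontend_origin(headers, default_scheme, default_host):
    scheme = headers.get('x-forwarded-proto', default_scheme)
    host = headers.get('x-forwarded-host', default_host)
    return scheme + '://' + host
```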
Option 1 seems like the easiest to me, if you can get away with never having to generate an absolute URL.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097,Incorrect URLs when served behind a proxy with base_url set, https://github.com/simonw/sqlite-utils/issues/8#issuecomment-464341721,https://api.github.com/repos/simonw/sqlite-utils/issues/8,464341721,MDEyOklzc3VlQ29tbWVudDQ2NDM0MTcyMQ==,82988,psychemedia,2019-02-16T12:08:41Z,2019-02-16T12:08:41Z,NONE,We also get an error if a column name contains a `.`,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",403922644,Problems handling column names containing spaces or - , https://github.com/simonw/sqlite-utils/issues/18#issuecomment-480621924,https://api.github.com/repos/simonw/sqlite-utils/issues/18,480621924,MDEyOklzc3VlQ29tbWVudDQ4MDYyMTkyNA==,82988,psychemedia,2019-04-07T19:31:42Z,2019-04-07T19:31:42Z,NONE,"I've just noticed that SQLite lets you IGNORE inserts that collide with a pre-existing key. This can be quite handy if you have a dataset that keeps changing in part, and you don't want to upsert and replace pre-existing PK rows but you do want to ignore collisions to existing PK rows. Do `sqlite_utils` support such (cavalier!) behaviour?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",413871266,.insert/.upsert/.insert_all/.upsert_all should add missing columns, https://github.com/simonw/sqlite-utils/issues/8#issuecomment-482994231,https://api.github.com/repos/simonw/sqlite-utils/issues/8,482994231,MDEyOklzc3VlQ29tbWVudDQ4Mjk5NDIzMQ==,82988,psychemedia,2019-04-14T15:04:07Z,2019-04-14T15:29:33Z,NONE," PLEASE IGNORE THE BELOW... I did a package update and rebuilt the kernel I was working in... may just have been an old version of sqlite_utils, seems to be working now. (Too many containers / too many environments!) Has an issue been reintroduced here with FTS? eg I'm getting an error thrown by spaces in column names here: ``` /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) def enable_fts(self, columns, fts_version=""FTS5""): --> 329 ""Enables FTS on the specified columns"" 330 sql = """""" 331 CREATE VIRTUAL TABLE ""{table}_fts"" USING {fts_version} ( ``` when trying an `insert_all`. Also, if a col has a `.` in it, I seem to get: ``` /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) 327 jsonify_if_needed(record.get(key, None)) for key in all_columns 328 ) --> 329 result = self.db.conn.execute(sql, values) 330 self.db.conn.commit() 331 self.last_id = result.lastrowid OperationalError: near ""."": syntax error ``` (Can't post a worked minimal example right now; racing trying to build something against a live timing screen that will stop until next weekend in an hour or two...) PS Hmmm I did a test and they seem to work; I must be messing up s/where else... 
``` import sqlite3 from sqlite_utils import Database dbname='testingDB_sqlite_utils.db' #!rm $dbname conn = sqlite3.connect(dbname, timeout=10) #Setup database tables c = conn.cursor() setup=''' CREATE TABLE IF NOT EXISTS ""test1"" ( ""NO"" INTEGER, ""NAME"" TEXT ); CREATE TABLE IF NOT EXISTS ""test2"" ( ""NO"" INTEGER, `TIME OF DAY` TEXT ); CREATE TABLE IF NOT EXISTS ""test3"" ( ""NO"" INTEGER, `AVG. SPEED (MPH)` FLOAT ); ''' c.executescript(setup) DB = Database(conn) import pandas as pd df1 = pd.DataFrame({'NO':[1,2],'NAME':['a','b']}) DB['test1'].insert_all(df1.to_dict(orient='records')) df2 = pd.DataFrame({'NO':[1,2],'TIME OF DAY':['early on','late']}) DB['test2'].insert_all(df2.to_dict(orient='records')) df3 = pd.DataFrame({'NO':[1,2],'AVG. SPEED (MPH)':['123.3','123.4']}) DB['test3'].insert_all(df3.to_dict(orient='records')) ``` all seem to work ok. I'm still getting errors in my set up though, which is not too different to the text cases?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",403922644,Problems handling column names containing spaces or - , https://github.com/simonw/sqlite-utils/issues/73#issuecomment-571138093,https://api.github.com/repos/simonw/sqlite-utils/issues/73,571138093,MDEyOklzc3VlQ29tbWVudDU3MTEzODA5Mw==,82988,psychemedia,2020-01-06T13:28:31Z,2020-01-06T13:28:31Z,NONE,"I think I actually had several issues in play... The missing key was one, but I think there is also an issue as per below. For example, in the following: ```python def init_testdb(dbname='test.db'): if os.path.exists(dbname): os.remove(dbname) conn = sqlite3.connect(dbname) db = Database(conn) return conn, db conn, db = init_testdb() c = conn.cursor() c.executescript('CREATE TABLE ""test1"" (""Col1"" TEXT, ""Col2"" TEXT, PRIMARY KEY (""Col1""));') c.executescript('CREATE TABLE ""test2"" (""Col1"" TEXT, ""Col2"" TEXT, PRIMARY KEY (""Col1""));') print('Test 1...') for i in range(3): db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) print('Test 2...') for i in range(3): db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}, {'Col1':'c','Col2':'x'}], pk=('Col1')) print('Done...') --------------------------------------------------------------------------- Test 1... Test 2... 
IndexError: list index out of range --------------------------------------------------------------------------- IndexError Traceback (most recent call last) in 22 print('Test 2...') 23 for i in range(3): ---> 24 db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) 25 db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}, 26 {'Col1':'c','Col2':'x'}], pk=('Col1')) /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts) 1157 alter=alter, 1158 extracts=extracts, -> 1159 upsert=True, 1160 ) 1161 /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert) 1097 # self.last_rowid will be 0 if a ""INSERT OR IGNORE"" happened 1098 if (hash_id or pk) and self.last_rowid: -> 1099 row = list(self.rows_where(""rowid = ?"", [self.last_rowid]))[0] 1100 if hash_id: 1101 self.last_pk = row[hash_id] IndexError: list index out of range ``` the first test works but the second fails. Is the length of the list of items being upserted leaking somewhere?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",545407916,upsert_all() throws issue when upserting to empty table, https://github.com/simonw/sqlite-utils/issues/73#issuecomment-573047321,https://api.github.com/repos/simonw/sqlite-utils/issues/73,573047321,MDEyOklzc3VlQ29tbWVudDU3MzA0NzMyMQ==,82988,psychemedia,2020-01-10T14:02:56Z,2020-01-10T14:09:23Z,NONE,"Hmmm... just tried with installs from pip and the repo (v2.0.0 and v2.0.1) and I get the error each time (start of second run through the second loop). Could it be sqlite3? I'm on 3.30.1. UPDATE: just tried it on jupyter.org/try and I get the error there, too.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",545407916,upsert_all() throws issue when upserting to empty table, https://github.com/simonw/sqlite-utils/issues/73#issuecomment-580745213,https://api.github.com/repos/simonw/sqlite-utils/issues/73,580745213,MDEyOklzc3VlQ29tbWVudDU4MDc0NTIxMw==,82988,psychemedia,2020-01-31T14:02:38Z,2020-01-31T14:21:09Z,NONE,"So the conundrum continues.. The simple test case above now runs, but if I upsert a large number of new records (successfully) and then try to upsert a fewer number of new records to a different table, I get the same error. If I run the same upserts again (which in the first case means there are no new records to add, because they were already added), the second upsert works correctly. 
It feels as if, when the number of items added via an upsert is much greater (>>) than the number of items I try to add in an upsert immediately after, I get the error.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",545407916,upsert_all() throws issue when upserting to empty table, https://github.com/simonw/sqlite-utils/pull/203#issuecomment-1033641009,https://api.github.com/repos/simonw/sqlite-utils/issues/203,1033641009,IC_kwDOCGYnMM49nBwx,82988,psychemedia,2022-02-09T11:06:18Z,2022-02-09T11:06:18Z,NONE,"Is there any progress elsewhere on the handling of compound / composite foreign keys, or is this PR still effectively open?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743384829,changes to allow for compound foreign keys, https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041313679,https://api.github.com/repos/simonw/sqlite-utils/issues/406,1041313679,IC_kwDOCGYnMM4-ES-P,82988,psychemedia,2022-02-16T09:59:51Z,2022-02-16T10:00:10Z,NONE,"The `CustomColumnType()` approach looks good. This pushes you into the mindspace that you are defining and working with a custom column type. When creating the table, you could then error, or at least warn, if someone wasn't setting a column on a `type` or a custom column type, which I guess is where `mypy` comes in?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1128466114,Creating tables with custom datatypes, https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1041325398,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1041325398,IC_kwDOCGYnMM4-EV1W,82988,psychemedia,2022-02-16T10:12:48Z,2022-02-16T10:18:55Z,NONE,"> My hunch is that the case where you want to consider input from more than one column will actually be pretty rare - the only case I can think of where I would want to do that is for latitude/longitude columns Other possible pairs: unconventional date/datetime and timezone pairs eg `2022-02-16::17.00, London`; or more generally, numerical value and unit of measurement pairs (eg if you want to cast into and out of different measurement units using packages like `pint`) or currencies etc. Actually, in that case, I guess you may be presenting things that are unit typed already, and so a conversion would need to parse things into an appropriate, possibly two column `value, unit` format. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,Advanced class-based `conversions=` mechanism, https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041363433,https://api.github.com/repos/simonw/sqlite-utils/issues/406,1041363433,IC_kwDOCGYnMM4-EfHp,82988,psychemedia,2022-02-16T10:57:03Z,2022-02-16T10:57:19Z,NONE,"Wondering if this actually relates to https://github.com/simonw/sqlite-utils/issues/402 ?
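(To make the serialisation idea in the next paragraph concrete — a rough sketch assuming the `pint` package, illustration only:)

```python
from pint import UnitRegistry

ureg = UnitRegistry()

# Store a quantity as a single 'magnitude units' string column...
q = 5.0 * ureg.mile / ureg.hour
stored = f'{q.magnitude} {q.units}'  # e.g. '5.0 mile / hour'

# ...and parse it back into a quantity on the way out of the database.
roundtrip = ureg.Quantity(stored)
assert roundtrip == q
```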
I also wonder if this would be a sensible approach for eg registering `pint` based quantity conversions into and out of the db, perhaps storing the quantity as a serialised `magnitude measurement` single column string?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1128466114,Creating tables with custom datatypes, https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1248440137,https://api.github.com/repos/simonw/sqlite-utils/issues/406,1248440137,IC_kwDOCGYnMM5Kaa9J,82988,psychemedia,2022-09-15T18:13:50Z,2022-09-15T18:13:50Z,NONE,"I was wondering if you have any more thoughts on this? I have a tangible use case now: adding a ""vector"" column to a database to support semantic search using doc2vec embeddings ([example](https://psychemedia.github.io/storynotes/Lang_Doc2Vec.html); note that the `vtfunc` package may no longer be reliable...).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1128466114,Creating tables with custom datatypes, https://github.com/simonw/datasette/issues/1166#issuecomment-783560017,https://api.github.com/repos/simonw/datasette/issues/1166,783560017,MDEyOklzc3VlQ29tbWVudDc4MzU2MDAxNw==,94334,thorn0,2021-02-22T18:00:57Z,2021-02-22T18:13:11Z,NONE,"Hi! I don't think Prettier supports this syntax for globs: `datasette/static/*[!.min].js` Are you sure that works? Prettier uses https://github.com/mrmlnc/fast-glob, which in turn uses https://github.com/micromatch/micromatch, and the docs for these packages don't mention this syntax. As per the docs, square brackets should work as in regexes (`foo-[1-5].js`). Tested it. Apparently, it works as a negated character class in regexes (like `[^.min]`). I wonder where this syntax comes from. Micromatch doesn't support that: ```js micromatch(['static/table.js', 'static/n.js'], ['static/*[!.min].js']); // result: [""static/n.js""] -- brackets are treated like [!.min] in regexes, without negation ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777140799,Adopt Prettier for JavaScript code formatting, https://github.com/simonw/datasette/issues/1143#issuecomment-744618787,https://api.github.com/repos/simonw/datasette/issues/1143,744618787,MDEyOklzc3VlQ29tbWVudDc0NDYxODc4Nw==,114388,yurivish,2020-12-14T18:15:00Z,2020-12-15T02:21:53Z,NONE,"From a quick look at the README, it does seem to do everything I need, thanks! I think the argument for inclusion in core is to lower the chances of unwanted data access. A local server can be accessed by anybody who can make an HTTP request to your computer regardless of CORS rules, but the default `*` rule additionally opens up access to the local instance to any website you visit while it is running. That's probably not what people typically intend, particularly when the data is of a sensitive nature. A default of requiring the user to specify the origin (allowing `*` but encouraging a narrower scope) would solve this problem entirely, I think. 
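A sketch of the narrower default being suggested (hypothetical and framework-agnostic; `ALLOWED_ORIGIN` stands in for a user-supplied setting):

```python
ALLOWED_ORIGIN = 'https://example.com'  # hypothetical user-specified origin

def cors_headers(request_origin):
    # Echo back only an explicitly allowed origin, never a blanket '*'.
    if request_origin == ALLOWED_ORIGIN:
        return {'Access-Control-Allow-Origin': request_origin}
    return {}
```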
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",764059235,"More flexible CORS support in core, to encourage good security practices", https://github.com/simonw/datasette/issues/1989#issuecomment-1402347667,https://api.github.com/repos/simonw/datasette/issues/1989,1402347667,IC_kwDOBm6k_c5TliCT,116795,pax,2023-01-24T17:48:59Z,2023-01-24T17:48:59Z,NONE,"The problem (in my particular use case) with using a VIEW is that I'd need one of the columns to be searchable – but that ([enable-fts](https://github.com/simonw/datasette-search-all)) doesn't work with views :/ __ side-suggestion: I don't know how feasible this might be, but when one column (or table) would be marked as hidden, could the _Download SQLite DB_ link take that into account? 🧐","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1531991339,Suggestion: Hiding columns, https://github.com/simonw/datasette/issues/191#issuecomment-381602005,https://api.github.com/repos/simonw/datasette/issues/191,381602005,MDEyOklzc3VlQ29tbWVudDM4MTYwMjAwNQ==,119974,coleifer,2018-04-16T13:37:32Z,2018-04-16T13:37:32Z,NONE,I don't think it should be too difficult... you can look at what @ghaering did with pysqlite (and similarly what I copied for pysqlite3). You would theoretically take an amalgamation build of Sqlite (all code in a single .c and .h file). The `AmalgamationLibSqliteBuilder` class detects the presence of this amalgamated source file and builds a statically-linked pysqlite.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",310533258,Figure out how to bundle a more up-to-date SQLite, https://github.com/simonw/datasette/issues/191#issuecomment-392828475,https://api.github.com/repos/simonw/datasette/issues/191,392828475,MDEyOklzc3VlQ29tbWVudDM5MjgyODQ3NQ==,119974,coleifer,2018-05-29T15:50:18Z,2018-05-29T15:50:18Z,NONE,"Python standard-library SQLite dynamically links against the system sqlite3. So presumably you installed a more up-to-date sqlite3 somewhere on your `LD_LIBRARY_PATH`. To compile a statically-linked pysqlite you need to include an amalgamation in the project root when building the extension. Read the relevant setup.py.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",310533258,Figure out how to bundle a more up-to-date SQLite, https://github.com/simonw/sqlite-utils/issues/516#issuecomment-1339837520,https://api.github.com/repos/simonw/sqlite-utils/issues/516,1339837520,IC_kwDOCGYnMM5P3ExQ,122043,briandorsey,2022-12-06T19:02:30Z,2022-12-06T19:02:30Z,NONE,"`--verbose` or `--verbosity=ABC` were the flags I looked for. Expected to see them at a global level near `--version`. But only sharing because that's where I looked first, I don't have a strong opinion on the exact wording/location. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1479914599,Feature request: output number of ignored/replaced rows for insert command, https://github.com/simonw/sqlite-utils/issues/516#issuecomment-1339839767,https://api.github.com/repos/simonw/sqlite-utils/issues/516,1339839767,IC_kwDOCGYnMM5P3FUX,122043,briandorsey,2022-12-06T19:04:17Z,2022-12-06T19:04:17Z,NONE,"Current behavior is different when importing via stdin vs. a file. 
Imports from a file give a progress bar. For this new request, I'd love to see total imported and total ignored/replaced in both cases. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1479914599,Feature request: output number of ignored/replaced rows for insert command, https://github.com/simonw/sqlite-utils/issues/516#issuecomment-1339844639,https://api.github.com/repos/simonw/sqlite-utils/issues/516,1339844639,IC_kwDOCGYnMM5P3Ggf,122043,briandorsey,2022-12-06T19:08:13Z,2022-12-06T19:08:13Z,NONE,"Reference: tqdm (https://tqdm.github.io/) shows a progress bar when total is known, and falls back to counting units of work done for streams. File input vs. stdin seems like a similar situation. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1479914599,Feature request: output number of ignored/replaced rows for insert command, https://github.com/simonw/datasette/issues/1886#issuecomment-1313271719,https://api.github.com/repos/simonw/datasette/issues/1886,1313271719,IC_kwDOBm6k_c5ORu-n,124274,lucapette,2022-11-14T08:25:12Z,2022-11-14T08:25:12Z,NONE,"Nothing spectacular yet but I think this falls under ""cool/cute application of datasette"": [improving fakedata performance for fun](https://lucapette.me/writing/improving-fakedata-performance-for-fun/). tl;dr I used datasette to visualize benchmarking data.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1447050738,"Call for birthday presents: if you're using Datasette, let us know how you're using it here", https://github.com/dogsheep/dogsheep-photos/issues/28#issuecomment-751125270,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/28,751125270,MDEyOklzc3VlQ29tbWVudDc1MTEyNTI3MA==,129786,jmelloy,2020-12-24T22:26:22Z,2020-12-24T22:26:22Z,NONE,This comes around if you’ve run the photo export without running an s3 upload. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",624490929,Invalid SQL no such table: main.uploads, https://github.com/simonw/datasette/issues/316#issuecomment-398030903,https://api.github.com/repos/simonw/datasette/issues/316,398030903,MDEyOklzc3VlQ29tbWVudDM5ODAzMDkwMw==,132230,gavinband,2018-06-18T12:00:43Z,2018-06-18T12:00:43Z,NONE,"I should add that I'm using datasette version 0.22, Python 2.7.10 on Mac OS X. Happy to send more info if helpful.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",333238932,datasette inspect takes a very long time on large dbs, https://github.com/simonw/datasette/issues/316#issuecomment-398109204,https://api.github.com/repos/simonw/datasette/issues/316,398109204,MDEyOklzc3VlQ29tbWVudDM5ODEwOTIwNA==,132230,gavinband,2018-06-18T16:12:45Z,2018-06-18T16:12:45Z,NONE,"Hi Simon, Thanks for the response. Ok I'll try running `datasette inspect` up front. In principle the db won't change. However, the site's in development and it's likely I'll need to add views and some auxiliary (smaller) tables as I go along. I will need to be careful with this if it involves an inspect step in each iteration, though. g. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",333238932,datasette inspect takes a very long time on large dbs, https://github.com/simonw/datasette/issues/394#issuecomment-567127981,https://api.github.com/repos/simonw/datasette/issues/394,567127981,MDEyOklzc3VlQ29tbWVudDU2NzEyNzk4MQ==,132978,terrycojones,2019-12-18T17:18:06Z,2019-12-18T17:18:06Z,NONE,"Agreed, this would be nice to have. I'm currently working around it in `nginx` with additional location blocks: ``` location /datasette/ { proxy_pass http://127.0.0.1:8001/; proxy_redirect off; include proxy_params; } location /dna-protein-genome/ { proxy_pass http://127.0.0.1:8001/dna-protein-genome/; proxy_redirect off; include proxy_params; } location /rna-protein-genome/ { proxy_pass http://127.0.0.1:8001/rna-protein-genome/; proxy_redirect off; include proxy_params; } ``` The 2nd and 3rd above are my databases. This works, but I have a small problem with URLs like `/rna-protein-genome?params....` that I could fix with some more nginx munging. I seem to do this sort of thing once every 5 years and then have to look it all up again. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-567128636,https://api.github.com/repos/simonw/datasette/issues/394,567128636,MDEyOklzc3VlQ29tbWVudDU2NzEyODYzNg==,132978,terrycojones,2019-12-18T17:19:46Z,2019-12-18T17:19:46Z,NONE,"Hmmm, wait, maybe my mindless (copy/paste) use of `proxy_redirect` is causing me grief...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-567219479,https://api.github.com/repos/simonw/datasette/issues/394,567219479,MDEyOklzc3VlQ29tbWVudDU2NzIxOTQ3OQ==,132978,terrycojones,2019-12-18T21:24:23Z,2019-12-18T21:24:23Z,NONE,"@simonw What about allowing a base url. The `....` tag has been around forever. Then just use all relative URLs, which I guess is likely what you already do. See https://www.w3schools.com/TAGs/tag_base.asp","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/596#issuecomment-567225156,https://api.github.com/repos/simonw/datasette/issues/596,567225156,MDEyOklzc3VlQ29tbWVudDU2NzIyNTE1Ng==,132978,terrycojones,2019-12-18T21:40:35Z,2019-12-18T21:40:35Z,NONE,"I initially went looking for a way to hide a column completely. Today I found the setting to truncate cells, but it applies to all cells. In my case I have text columns that can have many thousands of characters. I was wondering whether the metadata JSON would be an appropriate place to indicate how columns are displayed (on a col-by-col basis). E.g., I'd like to be able to specify that only 20 chars of a given column be shown, and the font be monospace. 
But maybe I can do that in some other way - I barely know anything about datasette yet, sorry!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",507454958,Handle really wide tables better, https://github.com/simonw/datasette/issues/596#issuecomment-567226048,https://api.github.com/repos/simonw/datasette/issues/596,567226048,MDEyOklzc3VlQ29tbWVudDU2NzIyNjA0OA==,132978,terrycojones,2019-12-18T21:43:13Z,2019-12-18T21:43:13Z,NONE,"Meant to add that of course it would be better not to reinvent CSS (one time was already enough). But one option would be to provide a mechanism to specify a CSS class for a column (a cell, a row...) and let the user give a URL path to a CSS file on the command line.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",507454958,Handle really wide tables better, https://github.com/simonw/datasette/issues/394#issuecomment-602911133,https://api.github.com/repos/simonw/datasette/issues/394,602911133,MDEyOklzc3VlQ29tbWVudDYwMjkxMTEzMw==,132978,terrycojones,2020-03-23T23:22:10Z,2020-03-23T23:22:10Z,NONE,"I just updated #652 to remove a merge conflict. I think it's an easy way to add this functionality. I don't have time to do more though, sorry!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-602916580,https://api.github.com/repos/simonw/datasette/issues/394,602916580,MDEyOklzc3VlQ29tbWVudDYwMjkxNjU4MA==,132978,terrycojones,2020-03-23T23:37:06Z,2020-03-23T23:37:06Z,NONE,"@simonw You're welcome - I was just trying it out back in December as I thought it should work. Now there's a pandemic to work on though.... so no time at all for more at the moment. BTW, I have datasette running on several protein and full (virus) genome databases I build, and it's great - thank you! Hi and best regards to you & Nat :-)","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 1, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-603539349,https://api.github.com/repos/simonw/datasette/issues/394,603539349,MDEyOklzc3VlQ29tbWVudDYwMzUzOTM0OQ==,132978,terrycojones,2020-03-24T22:33:23Z,2020-03-24T22:33:23Z,NONE,"Hi Simon - I'm just (trying, at least) to follow along in the above. I can't try it out now, but I will if no one else gets to it. Sorry I didn't write any tests in the original bit of code I pushed - I was just trying to see if it could work & whether you'd want to maybe head in that direction. Anyway, thank you, I will certainly use this. Comment back here if no one tried it out & I'll make time.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-603849245,https://api.github.com/repos/simonw/datasette/issues/394,603849245,MDEyOklzc3VlQ29tbWVudDYwMzg0OTI0NQ==,132978,terrycojones,2020-03-25T13:48:13Z,2020-03-25T13:48:13Z,NONE,"Great - thanks again. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/596#issuecomment-720741903,https://api.github.com/repos/simonw/datasette/issues/596,720741903,MDEyOklzc3VlQ29tbWVudDcyMDc0MTkwMw==,132978,terrycojones,2020-11-02T21:44:45Z,2020-11-02T21:44:45Z,NONE,Hi & thanks for the note @simonw! I wish I had more time to play with (and contribute to) datasette. I know you don't need me to tell you that it's super cool :-),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",507454958,Handle really wide tables better, https://github.com/simonw/datasette/issues/759#issuecomment-624860451,https://api.github.com/repos/simonw/datasette/issues/759,624860451,MDEyOklzc3VlQ29tbWVudDYyNDg2MDQ1MQ==,133845,Krazybug,2020-05-06T20:03:01Z,2020-05-06T20:04:42Z,NONE,"Thank you. Now it's ok with the url http://localhost:8001/index/summary?_search=language%3Aeng&_sort=title&_searchmode=raw But I'm not able to manage it in the metadata file. Here is mine (note that the sort column is taken into account) Here it is: ``` { ""databases"": { ""index"": { ""tables"": { ""summary"": { ""sort"": ""title"", ""searchmode"": ""raw"" } } } } } ``` Any idea ?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612673948,fts search on a column doesn't work anymore due to escape_fts, https://github.com/simonw/datasette/issues/661#issuecomment-580075725,https://api.github.com/repos/simonw/datasette/issues/661,580075725,MDEyOklzc3VlQ29tbWVudDU4MDA3NTcyNQ==,134771,dvhthomas,2020-01-30T04:17:51Z,2020-01-30T04:17:51Z,NONE,Thanks for the elegant solution to the problem as stated. I'm packaging right now :-),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",555832585,"--port option to expose a port other than 8001 in ""datasette package""", https://github.com/simonw/datasette/issues/675#issuecomment-590405736,https://api.github.com/repos/simonw/datasette/issues/675,590405736,MDEyOklzc3VlQ29tbWVudDU5MDQwNTczNg==,141844,aviflax,2020-02-24T16:06:27Z,2020-02-24T16:06:27Z,NONE,"> So yeah - if you're happy to design this I think it would be worth us adding. Great! I’ll give it a go. > Small design suggestion: allow `--copy` to be applied multiple times… Makes a ton of sense, will do. > Also since Click arguments can take multiple options I don't think you need to have the `:` in there - although if it better matches Docker's own UI it might be more consistent to have it. Great point. I double checked the docs for `docker cp` and in that context the colon is used to delimit a container and a path, while spaces are used to separate the source and target. 
The usage string is: ```text docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|- docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH ``` so in fact it’ll be more consistent to use a space to delimit the source and destination paths, like so: ```shell $ datasette package --copy /the/source/path /the/target/path data.db ``` and I suppose the short-form version of the option should be `cp` like so: ```shell $ datasette package -cp /the/source/path /the/target/path data.db ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",567902704,--cp option for datasette publish and datasette package for shipping additional files and directories, https://github.com/simonw/datasette/issues/675#issuecomment-590593247,https://api.github.com/repos/simonw/datasette/issues/675,590593247,MDEyOklzc3VlQ29tbWVudDU5MDU5MzI0Nw==,141844,aviflax,2020-02-24T23:02:52Z,2020-02-24T23:02:52Z,NONE,"> Design looks great to me. Excellent, thanks! > I'm not keen on two letter short versions (`-cp`) - I'd rather either have a single character or no short form at all. Hmm, well, anyone running `datasette package` is probably at least somewhat familiar with UNIX CLIs… so how about `--cp` as a middle ground? ```shell $ datasette package --cp /the/source/path /the/target/path data.db ``` I think I like it. Easy to remember!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",567902704,--cp option for datasette publish and datasette package for shipping additional files and directories, https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1708945716,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,1708945716,IC_kwDODFE5qs5l3HE0,150855,iloveitaly,2023-09-06T19:12:33Z,2023-09-06T19:12:33Z,NONE,@maxhawkins curious why you didn't use the stdlib `mailbox` to parse the `mbox` files?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1710950671,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,1710950671,IC_kwDODFE5qs5l-wkP,150855,iloveitaly,2023-09-08T01:22:49Z,2023-09-08T01:22:49Z,NONE,"Makes sense, thanks for explaining!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/twitter-to-sqlite/issues/31#issuecomment-1251845216,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/31,1251845216,IC_kwDODEm0Qs5KnaRg,150986,dckc,2022-09-20T05:05:03Z,2022-09-20T05:05:03Z,NONE,"yay! Thanks a bunch for the `twitter-to-sqlite friends` command! The twitter ""Download an archive of your data"" feature doesn't include who I follow, so this is particularly handy. 
The whole Dogsheep thing is great :) I've written about similar things under [cloud-services](https://www.madmode.com/search/label/cloud-services/): - 2021: [Closet Librarian Approach to Cloud Services](https://www.madmode.com/2021/closet-librarian-approach-cloud-services.html) - 2015: [jukekb \- Browse iTunes libraries and upload playlists to Google Music](https://www.madmode.com/2015/jukekb)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",520508502,"""friends"" command (similar to ""followers"")", https://github.com/simonw/datasette/issues/983#issuecomment-752882797,https://api.github.com/repos/simonw/datasette/issues/983,752882797,MDEyOklzc3VlQ29tbWVudDc1Mjg4Mjc5Nw==,154364,dracos,2020-12-31T08:07:59Z,2020-12-31T15:04:32Z,NONE,"If you're using arrow functions, you can presumably use default parameters, not much difference in support. That would save you 9 bytes. But OTOH you need `""use strict"";` to use arrow functions etc, and that's 13 bytes. Your latest 250-byte one, with use strict, gzips to 199 bytes. The following might be 292 bytes, but compresses to 204, basically the same, and works in any browser (well, IE9+) at all: `var datasette=datasette||{};datasette.plugins=function(){var d={};return{register:function(b,c,e){d[b]||(d[b]=[]);d[b].push([c,e])},call:function(b,c){c=c||{};var e=[];(d[b]||[]).forEach(function(a){a=a[0].apply(a[0],a[1].map(function(a){return c[a]}));void 0!==a&&e.push(a)});return e}}}();` Source for that is below; I replaced the [fn,parameters] because closure-compiler includes a polyfill for that, and I ran `closure-compiler --language_out ECMASCRIPT3`: ```js var datasette = datasette || {}; datasette.plugins = (() => { var registry = {}; return { register: (hook, fn, parameters) => { if (!registry[hook]) { registry[hook] = []; } registry[hook].push([fn, parameters]); }, call: (hook, args) => { args = args || {}; var results = []; (registry[hook] || []).forEach((data) => { /* Call with the correct arguments */ var result = data[0].apply(data[0], data[1].map(parameter => args[parameter])); if (result !== undefined) { results.push(result); } }); return results; } }; })(); ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429,JavaScript plugin hooks mechanism similar to pluggy, https://github.com/simonw/datasette/issues/983#issuecomment-752888552,https://api.github.com/repos/simonw/datasette/issues/983,752888552,MDEyOklzc3VlQ29tbWVudDc1Mjg4ODU1Mg==,154364,dracos,2020-12-31T08:33:11Z,2020-12-31T08:34:27Z,NONE,"If you could say that all hook functions had to accept one options parameter (and could use object destructuring if they wished to only see a subset), you could have this, which minifies (to all-browser-JS) to 200 bytes, gzips to 146, and works practically the same: ```js var datasette = datasette || {}; datasette.plugins = (() => { var registry = {}; return { register: (hook, fn) => { registry[hook] = registry[hook] || []; registry[hook].push(fn); }, call: (hook, args) => { var results = (registry[hook] || []).map(fn => fn(args||{})); return results; } }; })(); ``` `var datasette=datasette||{};datasette.plugins=function(){var b={};return{register:function(a,c){b[a]=b[a]||[];b[a].push(c)},call:function(a,c){return(b[a]||[]).map(function(a){return a(c||{})})}}}();` Called the same, definitions tiny bit different: ```js datasette.plugins.register('numbers', ({a, b}) => a + b) 
datasette.plugins.register('numbers', o => o.a * o.b) datasette.plugins.call('numbers', {a: 4, b: 6}) ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429,JavaScript plugin hooks mechanism similar to pluggy, https://github.com/simonw/datasette/issues/1165#issuecomment-753033121,https://api.github.com/repos/simonw/datasette/issues/1165,753033121,MDEyOklzc3VlQ29tbWVudDc1MzAzMzEyMQ==,154364,dracos,2020-12-31T19:33:47Z,2020-12-31T19:33:47Z,NONE,"Sorry to go on about it, but it's my only example ;) And thought it might be of interest/use. Here is FixMyStreet's Cypress workflow https://github.com/mysociety/fixmystreet/blob/master/.github/workflows/cypress.yml with the master script that sets up server etc at https://github.com/mysociety/fixmystreet/blob/master/bin/browser-tests (that has features such as working inside/outside Vagrant, and can do JS code coverage) and then the tests are at https://github.com/mysociety/fixmystreet/tree/master/.cypress/cypress/integration","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",776635426,Mechanism for executing JavaScript unit tests, https://github.com/simonw/datasette/issues/983#issuecomment-753587963,https://api.github.com/repos/simonw/datasette/issues/983,753587963,MDEyOklzc3VlQ29tbWVudDc1MzU4Nzk2Mw==,154364,dracos,2021-01-03T09:02:50Z,2021-01-03T10:00:05Z,NONE,"> but I'm already commited to requiring support for () => {} arrow functions Don't think you are :) (e.g. gzipped, using arrow functions in my example saves 2 bytes over spelling out function). On FMS, past month, looking at popular browsers, looks like we'd have 95.41% arrow support, 94.19% module support, and 4.58% (mostly IE9/IE11/Safari 9) supporting neither.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429,JavaScript plugin hooks mechanism similar to pluggy, https://github.com/simonw/datasette/issues/1298#issuecomment-823064725,https://api.github.com/repos/simonw/datasette/issues/1298,823064725,MDEyOklzc3VlQ29tbWVudDgyMzA2NDcyNQ==,154364,dracos,2021-04-20T07:57:14Z,2021-04-20T07:57:14Z,NONE,"My suggestions, originally made on twitter, but might be better here now: 1. Could have a CSS shadow (one of the comments on https://stackoverflow.com/questions/44793453/how-do-i-add-a-top-and-bottom-shadow-while-scrolling-but-only-when-needed is a codepen for horizontal instead of vertical); 2. Could give the table a max-height (either the window or work out the available space) so that it is both vertically/horizontally scrollable and you don't have to scroll to the bottom in order to see this; 3. On a desktop browser, what I think I'd want is an absolute grid to work with - left query/filters, TR chart (or map), BR table. No problem with scrolling then. 
Here is a mockup I made when this was about the map plugin: ![image](https://user-images.githubusercontent.com/154364/115358389-82c47e00-a1b5-11eb-8a63-0ca14fd23d32.png) ![image](https://user-images.githubusercontent.com/154364/115358454-97087b00-a1b5-11eb-9501-cf884ae72d7c.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501,improve table horizontal scroll experience, https://github.com/simonw/datasette/issues/1298#issuecomment-823102978,https://api.github.com/repos/simonw/datasette/issues/1298,823102978,MDEyOklzc3VlQ29tbWVudDgyMzEwMjk3OA==,154364,dracos,2021-04-20T08:51:23Z,2021-04-20T08:51:23Z,NONE,"2. Max height would still let you scroll the page to underneath the facets to the table, but would mean the table would never take up more than your window size, so the horizontal scrollbar would be visible as soon as the table took up the size of the window. 3. Yes, this wouldn't be for mobile :) It'd be desktop-only styling. On mobile you can scroll much more easily with touch, anyway. In your case, perhaps better would be the whole top half would be facets, bottom left quadrant chart, bottom right table. Depends upon the particular use case, as you say.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501,improve table horizontal scroll experience, https://github.com/simonw/datasette/issues/1362#issuecomment-855428296,https://api.github.com/repos/simonw/datasette/issues/1362,855428296,MDEyOklzc3VlQ29tbWVudDg1NTQyODI5Ng==,154364,dracos,2021-06-06T16:53:20Z,2021-06-06T16:53:20Z,NONE,"> Presumably this would also require adding Content-Security-Policy to the Vary header though, which will have a nasty effect on Cloudflare and Fastly and such like. No, because Vary header is about *request* headers that cause the response to vary, not response headers.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",912864936,Consider using CSP to protect against future XSS, https://github.com/simonw/sqlite-utils/issues/159#issuecomment-1111506339,https://api.github.com/repos/simonw/sqlite-utils/issues/159,1111506339,IC_kwDOCGYnMM5CQD2j,154364,dracos,2022-04-27T21:35:13Z,2022-04-27T21:35:13Z,NONE,"Just stumbled across this, wondering why none of my deletes were working.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",702386948,.delete_where() does not auto-commit (unlike .insert() or .upsert()), https://github.com/simonw/sqlite-utils/issues/159#issuecomment-1493051222,https://api.github.com/repos/simonw/sqlite-utils/issues/159,1493051222,IC_kwDOCGYnMM5Y_idW,154364,dracos,2023-04-01T17:21:05Z,2023-04-01T17:21:05Z,NONE,"In a related issue, nearly a year later I just stumbled across this again, as I wondered why none of my rebuild-fts were rebuilding. It looks like: disable_fts in db.py commits; enable_fts partly commits except the last step (due to executescript committing a pending transaction); rebuild_fts won't commit unless manually done as above with e.g. 
a context manager.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",702386948,.delete_where() does not auto-commit (unlike .insert() or .upsert()), https://github.com/simonw/sqlite-utils/issues/265#issuecomment-1493052396,https://api.github.com/repos/simonw/sqlite-utils/issues/265,1493052396,IC_kwDOCGYnMM5Y_ivs,154364,dracos,2023-04-01T17:27:18Z,2023-04-01T17:27:18Z,NONE,"`enable_fts` is a function in datasette, not in this repo, which doesn't do any escaping of search terms. It sounds like from https://docs.datasette.io/en/stable/full_text_search.html#advanced-sqlite-search-queries you might want to enable raw searching, as otherwise it's disabled and everything is escaped by default.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",907795562,Using enable_fts before search term, https://github.com/simonw/datasette/issues/1519#issuecomment-983890815,https://api.github.com/repos/simonw/datasette/issues/1519,983890815,IC_kwDOBm6k_c46pPt_,157158,phubbard,2021-12-01T17:50:09Z,2021-12-01T17:50:09Z,NONE,"thanks so very much for the prompt attention and fix! Plus, the animated GIF showing the bug is just extra and I love it. Interactions like this are why I love open source.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1058790545,base_url is omitted in JSON and CSV views, https://github.com/simonw/datasette/issues/236#issuecomment-920543967,https://api.github.com/repos/simonw/datasette/issues/236,920543967,IC_kwDOBm6k_c423mLf,164214,sethvincent,2021-09-16T03:19:08Z,2021-09-16T03:19:08Z,NONE,":wave: I just put together a small example using the lambda container image support: https://github.com/sethvincent/datasette-aws-lambda-example It uses mangum and AWS's [python runtime interface client](https://github.com/aws/aws-lambda-python-runtime-interface-client) to handle the lambda event stuff. I'd be happy to help with a publish plugin for AWS lambda as I plan to use this for upcoming projects. The example uses the [serverless](https://www.serverless.com) cli for deployment but there might be a more suitable deployment approach for the plugin. 
It would be cool if users didn't have to install anything additional other than the aws cli and its associated config/credentials setup.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500,datasette publish lambda plugin, https://github.com/dogsheep/healthkit-to-sqlite/issues/9#issuecomment-514745798,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9,514745798,MDEyOklzc3VlQ29tbWVudDUxNDc0NTc5OA==,166463,tholo,2019-07-24T18:25:36Z,2019-07-24T18:25:36Z,NONE,"This is on macOS 10.14.6, with Python 3.7.4, packages in the virtual environment: ``` Package Version ------------------- ------- aiofiles 0.4.0 Click 7.0 click-default-group 1.2.1 datasette 0.29.2 h11 0.8.1 healthkit-to-sqlite 0.3.1 httptools 0.0.13 hupper 1.8.1 importlib-metadata 0.18 Jinja2 2.10.1 MarkupSafe 1.1.1 Pint 0.8.1 pip 19.2.1 pluggy 0.12.0 setuptools 41.0.1 sqlite-utils 1.7 tabulate 0.8.3 uvicorn 0.8.4 uvloop 0.12.2 websockets 7.0 zipp 0.5.2 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",472429048,Too many SQL variables, https://github.com/dogsheep/healthkit-to-sqlite/issues/9#issuecomment-515370687,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9,515370687,MDEyOklzc3VlQ29tbWVudDUxNTM3MDY4Nw==,166463,tholo,2019-07-26T09:01:19Z,2019-07-26T09:01:19Z,NONE,"Yes, that did fix the issue I was seeing — it will now import my complete HealthKit data. Thorsten > On Jul 25, 2019, at 23:07, Simon Willison wrote: > > @tholo this should be fixed in just-released version 0.3.2 - could you run a pip install -U healthkit-to-sqlite and let me know if it works for you now? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub , or mute the thread . > ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",472429048,Too many SQL variables, https://github.com/simonw/datasette/issues/1384#issuecomment-1062124485,https://api.github.com/repos/simonw/datasette/issues/1384,1062124485,IC_kwDOBm6k_c4_TrvF,167160,khusmann,2022-03-08T19:26:32Z,2022-03-08T19:26:32Z,NONE,"Looks like I'm late to the party here, but wanted to join the convo if there's still time before this interface is solidified in v1.0. My plugin use case is for education / social science data, which is meta-data heavy in the documentation of measurement scales, instruments, collection procedures, etc. that I want to connect to columns, tables, and dbs (and render in static pages, but looks like I can do that with the jinja plugin hook). I'm still digging in and I think @brandonrobertz 's approach will work for me at least for now, but I want to bump this thread in the meantime -- are there still plans for an async metadata hook at some point in the future? (or are you considering other directions?)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1065929510,https://api.github.com/repos/simonw/datasette/issues/1384,1065929510,IC_kwDOBm6k_c4_iMsm,167160,khusmann,2022-03-12T17:49:59Z,2022-03-12T17:49:59Z,NONE,"Ok, I'm taking a slightly different approach, which I think is sort of close to the in-memory _metadata table idea. 
I'm using a startup hook to load metadata / other info from the database, which I store in the datasette object for later: ``` @hookimpl def startup(datasette): async def inner(): datasette._mypluginmetadata = # await db query return inner ``` Then, I can use this in other plugins: ``` @hookimpl def render_cell(value, column, table, database, datasette): # use datasette._mypluginmetadata ``` For my app I don't need anything to update dynamically so it's fine to pre-populate everything on startup. It's also good to have things precached especially for a hook like render_cell, which would otherwise require a ton of redundant db queries. Makes me wonder if we could take a sort of similar caching approach with the internal _metadata table. Like have a little watchdog that could query all of the attached dbs for their _metadata tables every 5min or so, which then could be merged into the in memory _metadata table which then could be accessed sync by the plugins, or something like that. For most of the use cases I can think of, live updates don't need to take effect immediately; refreshing a cache every 5min or on some other trigger (adjustable w a config setting) would be just fine. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1065951744,https://api.github.com/repos/simonw/datasette/issues/1384,1065951744,IC_kwDOBm6k_c4_iSIA,167160,khusmann,2022-03-12T19:47:17Z,2022-03-12T19:47:17Z,NONE,"Awesome, thanks @brandonrobertz ! The plugin is close, but looks like it only grabs remote metadata, is that right? Instead what I'm wanting is to grab metadata embedded in the attached databases. Rather than extending that plugin, at this point I've realized I need a lot more flexibility in metadata for my data model (esp around formatting cell values and custom file exports) so rather than extending that I'll continue working on a plugin specific to my app. If I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metadata` is called, right? Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config? > Making the get_metadata async won't improve the situation by itself as only some of the code paths accessing metadata use that hook. The other paths use the internal metadata dict. I agree -- because things like `render_cell` will potentially want to read metadata/config, `get_metadata` should really remain sync and lightweight, which we can do with something like the remote-metadata plugin that could also poll metadata tables in attached databases. That leaves your app, where it sounds like you want changes made by the user in the browser to be immediately reflected, rather than have to wait for the next metadata refresh. In this case I wonder if you could have your app make a sync write to the datasette object so the change would have the immediate effect, but then have a separate async polling mechanism to eventually write that change out to the database for long-term persistence. Then you'd have the best of both worlds, I think?
But probably not worth the trouble if your use cases are small (and/or you're not reading metadata/config from tight loops like render_cell).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1066143991,https://api.github.com/repos/simonw/datasette/issues/1384,1066143991,IC_kwDOBm6k_c4_jBD3,167160,khusmann,2022-03-13T17:13:09Z,2022-03-13T17:13:09Z,NONE,"Thanks for taking the time to reply @brandonrobertz , this is really helpful info. > See ""Many small queries are efficient in sqlite"" for more information on the rationale here. Also note that in the datasette-live-config reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had. Ah, that's nifty! Yeah, then caching on the python side is likely a waste :) I'm new to working with sqlite so this is super good to know that the many-small-queries approach is a common pattern > I tested on very large Datasette deployments (hundreds of DBs, millions of rows). For my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests? I'm less concerned now that I know a little more about sqlite's caching, but that special situation will jump you to a few orders of magnitude above what the sqlite article describes (e.g. 200 vs 20,000 queries+metadata merges for a page displaying 100 rows of a 200 column table). It wouldn't scale with db size as much as # of visible cells being rendered on the page, although they would be identical queries I suppose so will cache well. (If you didn't test this specific situation, no worries -- I'm just trying to calibrate my intuition on this and can do my own benchmarks at some point.) > Simon talked about eventually making something like this a standard feature of Datasette Yeah, getting metadata (and static pages as well for that matter) from internal tables definitely has my vote for including as a standard feature! It's really nice to be able to distribute a single *.db with all the metadata and static pages bundled. My metadata are sufficiently complex/domain specific that it makes sense to continue on my own plugin for now, but I'll be thinking about more general parts I can spin off as possible contributions to liveconfig (if you're open to them) or other plugins in this ecosystem.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1066194130,https://api.github.com/repos/simonw/datasette/issues/1384,1066194130,IC_kwDOBm6k_c4_jNTS,167160,khusmann,2022-03-13T22:23:04Z,2022-03-13T22:23:04Z,NONE,"Ah, sorry, I didn't get what you were saying the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/639#issuecomment-558437707,https://api.github.com/repos/simonw/datasette/issues/639,558437707,MDEyOklzc3VlQ29tbWVudDU1ODQzNzcwNw==,172847,pkoppstein,2019-11-26T03:02:53Z,2019-11-26T03:03:29Z,NONE,"@simonw - Thanks for the reply! 
My reading of the heroku documents is that if one sets things up using git, then one can use ""git push"" (from a {local, GitHub, GitLab} git repository to Heroku) to ""update"" a Heroku deployment, but I'm not sure exactly how this works. However, assuming there is some way to use ""git push"" to update the Heroku deployment, the question becomes how one can do this in conjunction with datasette. Again based on my reading of the heroku documents, it would seem that the following should work (but it doesn't quite): 1) Use datasette to create a deployment (named MYAPP) 2) Put it in maintenance mode 3) heroku git:clone -a MYAPP -- This results in an empty repository (as expected) 4) In another directory, heroku slugs:download -a MYAPP 5) Copy the downloaded slug into the repository 6) Make some change to metadata.json 7) Commit and push it back 8) Take the deployment out of maintenance mode 9) Refresh the deployment Using the heroku console, I've verified that the edits appear on heroku, but somehow they are not reflected in the running app. I'm hopeful that with some small tweak or perhaps the addition of a bit of voodoo, this strategy will work. I think it will be important to get this working for another reason: getting Heroku, Cloudcube, and datasette to work together, to overcome the slug size limitation so that large SQLite databases can be deployed to Heroku using Datasette. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",527670799,updating metadata.json without recreating the app, https://github.com/simonw/datasette/issues/639#issuecomment-558852316,https://api.github.com/repos/simonw/datasette/issues/639,558852316,MDEyOklzc3VlQ29tbWVudDU1ODg1MjMxNg==,172847,pkoppstein,2019-11-26T22:54:23Z,2019-11-26T22:54:23Z,NONE,"@jacobian - Thanks for your help. Having to upload an entire slug each time a small change is needed in `metadata.json` seems no better than the current situation so I probably won't go down that rabbit hole just yet. In any case, the really important goal is moving the SQLite file out of Heroku in a way that the Heroku app can still read it efficiently. Is this possible? Is Cloudcube the right place to start? Is there any alternative? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",527670799,updating metadata.json without recreating the app, https://github.com/simonw/datasette/issues/639#issuecomment-559916057,https://api.github.com/repos/simonw/datasette/issues/639,559916057,MDEyOklzc3VlQ29tbWVudDU1OTkxNjA1Nw==,172847,pkoppstein,2019-11-30T06:08:50Z,2019-11-30T06:08:50Z,NONE,"@simonw, @jacobian - I was able to resolve the metadata.json issue by adding `-m metadata.json` to the Procfile. Now `git push heroku master` picks up the changes, though I have the impression that heroku is doing more work than necessary (e.g. one of the information messages is: `Installing requirements with pip`). I also had to set the environment variable WEB_CONCURRENCY -- I used WEB_CONCURRENCY=1. I am still anxious to know whether it's possible for Datasette on Heroku to access the SQLite file at another location. Cloudcube seems the most promising, and I'm hoping it can be done by tweaking the Procfile suitably, but maybe that's too optimistic? 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",527670799,updating metadata.json without recreating the app, https://github.com/simonw/datasette/issues/176#issuecomment-356161672,https://api.github.com/repos/simonw/datasette/issues/176,356161672,MDEyOklzc3VlQ29tbWVudDM1NjE2MTY3Mg==,173848,yozlet,2018-01-09T02:35:35Z,2018-01-09T02:35:35Z,NONE,"@wulfmann I think I disagree, except I'm not entirely sure what you mean by that first paragraph. The JSON API that Datasette currently exposes is quite different to GraphQL. Furthermore, there's no ""just"" about connecting micro-graphql to a DB; at least, no more ""just"" than adding any other API. You still need to configure the schema, which is exactly the kind of thing that Datasette does for JSON API. This is why I think that GraphQL's a good fit here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/983#issuecomment-706413753,https://api.github.com/repos/simonw/datasette/issues/983,706413753,MDEyOklzc3VlQ29tbWVudDcwNjQxMzc1Mw==,173848,yozlet,2020-10-09T21:41:12Z,2020-10-09T21:41:12Z,NONE,"If you don't mind a somewhat bonkers idea: how about a JS client-side plugin capability that allows any user looking at a Datasette site to pull in external plugins for data manipulation, even if the Datasette owner hasn't added them? (Yes, this may be _much_ too ambitious. If you're remotely interested, maybe fork this discussion to a different issue.) This is some fascinating reading about what JS sandboxing looks like these days: https://www.figma.com/blog/how-we-built-the-figma-plugin-system/","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429,JavaScript plugin hooks mechanism similar to pluggy, https://github.com/simonw/datasette/issues/983#issuecomment-753218817,https://api.github.com/repos/simonw/datasette/issues/983,753218817,MDEyOklzc3VlQ29tbWVudDc1MzIxODgxNw==,173848,yozlet,2020-12-31T22:32:25Z,2020-12-31T22:32:25Z,NONE,"Amazing work! And you've put in far more work than I'd expect to reduce the payload (which is admirable). 
So, to add a plugin with the current design, it goes in (a) the template or (b) a bookmarklet, right?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429,JavaScript plugin hooks mechanism similar to pluggy, https://github.com/simonw/datasette/issues/2189#issuecomment-1728192688,https://api.github.com/repos/simonw/datasette/issues/2189,1728192688,IC_kwDOBm6k_c5nAiCw,173848,yozlet,2023-09-20T17:53:31Z,2023-09-20T17:53:31Z,NONE,"`/me munches popcorn at a furious rate, utterly enthralled`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1901416155,Server hang on parallel execution of queries to named in-memory databases, https://github.com/simonw/datasette/issues/1955#issuecomment-1357084279,https://api.github.com/repos/simonw/datasette/issues/1955,1357084279,IC_kwDOBm6k_c5Q43Z3,178162,andrewdotn,2022-12-19T04:34:16Z,2022-12-19T04:34:16Z,NONE,"You were super-close on the python version of the test here, changing `http` to `https` on 8b73fc6b47dffd8836f5c58aae1e57c1f66a5754 is enough to pass the test: ```diff diff --git a/tests/conftest.py b/tests/conftest.py index 69dee68b4a3f..ba07a11d37f6 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -207,7 +207,7 @@ def ds_localhost_https_server(tmp_path_factory): stderr=subprocess.STDOUT, cwd=tempfile.gettempdir(), ) - wait_until_responds(""http://localhost:8042/"", verify=client_cert) + wait_until_responds(""https://localhost:8042/"", verify=client_cert) # Check it started successfully assert not ds_proc.poll(), ds_proc.stdout.read().decode(""utf-8"") yield ds_proc, client_cert ``` My speculation about what was happening here: when `wait_until_responds()` would time out due to SSL connection problems, because `.terminate()` isn’t in a `finally`, the datasette process wouldn’t get killed. That could (1) hang CI and (2) cause all your future local test runs to mysteriously fail because they’d be secretly talking to that old datasette process still hanging around from a past test run with an old temporary server certificate, and that old server cert wouldn’t validate against your newly-created ca cert. A `finally` for `.terminate()` would help; a fancier version could be a context manager for running the external `datasette` process that could: - ensure the process always exited when no longer needed - if you want to be fancy, call `terminate()`, `wait()` for a short timeout for the process to exit, then try `kill()` and `wait()` again; raise an exception complaining about the seemingly-unkillable process if all that fails - raise an error if the process exited with a non-zero error code; here it’s likely that some `datasette` executions were secretly failing with `[Errno 48] error while attempting to bind on address ('127.0.0.1', 8042): address already in use`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1496652622,"invoke_startup() is not run in some conditions, e.g.
gunicorn/uvicorn workers, breaking lots of things", https://github.com/simonw/datasette/issues/1398#issuecomment-889599513,https://api.github.com/repos/simonw/datasette/issues/1398,889599513,IC_kwDOBm6k_c41BjYZ,192984,aitoehigie,2021-07-30T03:21:49Z,2021-07-30T03:21:49Z,NONE,Does the library support this now?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",947044667,Documentation on using Datasette as a library, https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1419734229,https://api.github.com/repos/simonw/sqlite-utils/issues/524,1419734229,IC_kwDOCGYnMM5Un2zV,193185,cldellow,2023-02-06T20:53:28Z,2023-02-06T21:16:29Z,NONE,"I think it's not currently possible: sqlite-utils requires that it be one of `integer`, `text`, `float`, `blob` ([see code](https://github.com/simonw/sqlite-utils/blob/fc221f9b62ed8624b1d2098e564f525c84497969/sqlite_utils/cli.py#L2266)) IMO, this is a bit of friction and it would be nice if it was more permissive. SQLite permits developers to use any data type when creating a table. For example, this is a perfectly cromulent sqlite session that creates a table with columns of type `baz` and `bar`: ``` sqlite> create table foo(column1 baz, column2 bar); sqlite> .schema foo CREATE TABLE foo(column1 baz, column2 bar); sqlite> select * from pragma_table_info('foo'); cid name type notnull dflt_value pk ---------- ---------- ---------- ---------- ---------- ---------- 0 column1 baz 0 0 1 column2 bar 0 0 ``` The idea is that the application developer will know what meaning to ascribe to those types. For example, I'm working on a plugin to Datasette. Dates are tricky to handle. If you have some existing rows, you can look at the values in them to know how a user is serializing the dates -- as an ISO 8601 string? An RFC 3339 string? With millisecond precision? With timezone offset? But if you don't yet have any rows, you have to guess. If the column is of type `TEXT`, you don't even know that it's meant to hold a date! In this case, my plugin will look to see if the column is of type `DATE` or `DATETIME`, and assume a certain representation when writing. Perhaps there is an argument that sqlite-utils is trying to conform to SQLite's strict mode, and that is why it limits the choices. In strict mode, SQLite requires that the data type be one of `INT`, `INTEGER`, `REAL`, `TEXT`, `BLOB`, `ANY`. But that can't be the case -- sqlite-utils supports `FLOAT`, which is not one of the valid types in strict mode, and it rejects `INT`, `REAL` and `ANY`, which _are_ valid.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1572766460,Transformation type `--type DATETIME`, https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1419740776,https://api.github.com/repos/simonw/sqlite-utils/issues/524,1419740776,IC_kwDOCGYnMM5Un4Zo,193185,cldellow,2023-02-06T20:59:01Z,2023-02-06T20:59:01Z,NONE,"That said, it looks like the check is only enforced at the CLI level. 
If you use the API directly, I think it'll work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1572766460,Transformation type `--type DATETIME`, https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420809773,https://api.github.com/repos/simonw/sqlite-utils/issues/524,1420809773,IC_kwDOCGYnMM5Ur9Yt,193185,cldellow,2023-02-07T13:53:01Z,2023-02-07T13:53:01Z,NONE,"Ah, it looks like that is controlled by this dict: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L178 I suspect you could overwrite the datetime entry to achieve what you want","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1572766460,Transformation type `--type DATETIME`, https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420992261,https://api.github.com/repos/simonw/sqlite-utils/issues/524,1420992261,IC_kwDOCGYnMM5Usp8F,193185,cldellow,2023-02-07T15:45:58Z,2023-02-07T15:45:58Z,NONE,"I'd support that, but I'm not the author of this library. One challenge is that would be a breaking change. Do you see a way to enable it without affecting existing users or bumping the major version number?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1572766460,Transformation type `--type DATETIME`, https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421033725,https://api.github.com/repos/simonw/sqlite-utils/issues/524,1421033725,IC_kwDOCGYnMM5Us0D9,193185,cldellow,2023-02-07T16:12:13Z,2023-02-07T16:12:13Z,NONE,"I think the bigger issue is that `sqlite-utils` mixes mechanism (it implements the [12-step way to alter SQLite tables](https://www.sqlite.org/lang_altertable.html#otheralter)) and policy (it has an opinionated stance on what column types should be used). That might be a design choice to make it accessible to users by providing a reasonable set of defaults, but it doesn't quite fit my use case. It might make sense to extract a separate library that provides just the mechanisms, and then `sqlite-utils` would sit on top of that library with its opinionated set of policies. That would be a very big change, though. I might take a stab at extracting the library, but just for the table schema migration piece, not all the other features that `sqlite-utils` supports. I wouldn't expect `sqlite-utils` to depend on it. Part of my motivation is that I want to provide some other abilities, too, like support for CHECK constraints. I see that the issue in this repo (https://github.com/simonw/sqlite-utils/issues/358) proposes a bunch of short-hand constraints, which I wouldn't want to accidentally expose to people -- I want a layer that is a 1:1 mapping to SQLite.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1572766460,Transformation type `--type DATETIME`, https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421081939,https://api.github.com/repos/simonw/sqlite-utils/issues/524,1421081939,IC_kwDOCGYnMM5Us_1T,193185,cldellow,2023-02-07T16:42:25Z,2023-02-07T16:43:42Z,NONE,"Ha, yes, I might end up making something very niche. That's OK. 
I'm building a UI for [Datasette](https://datasette.io/) that lets users make schema changes, so it's important to me that the tool work in a non-surprising way -- if you ask for a column of type X, you should get type X. If the column or table previously had CHECK constraints, they shouldn't be silently removed. And so on. I had hoped that I could just lean on sqlite-utils, but I think it's a little too surprising.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1572766460,Transformation type `--type DATETIME`, https://github.com/simonw/datasette/issues/1994#issuecomment-1399957897,https://api.github.com/repos/simonw/datasette/issues/1994,1399957897,IC_kwDOBm6k_c5TcamJ,201897,julienma,2023-01-23T08:21:08Z,2023-01-23T08:21:08Z,NONE,"Me too, on an M1. Not sure if it's compatible?","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 1, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1536851861,Stuck on loading screen, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-781451701,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,781451701,MDEyOklzc3VlQ29tbWVudDc4MTQ1MTcwMQ==,203343,Btibert3,2021-02-18T16:06:21Z,2021-02-18T16:06:21Z,NONE,Awesome!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-790198930,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,790198930,MDEyOklzc3VlQ29tbWVudDc5MDE5ODkzMA==,203343,Btibert3,2021-03-04T00:58:40Z,2021-03-04T00:58:40Z,NONE,"I am just seeing this sorry, yes! I will kick the tires later on tonight. My apologies for the delay.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-790934616,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,790934616,MDEyOklzc3VlQ29tbWVudDc5MDkzNDYxNg==,203343,Btibert3,2021-03-04T20:54:44Z,2021-03-04T20:54:44Z,NONE,"Sorry for the delay, I got sidetracked after class last night. I am getting the following error: ``` /content# google-takeout-to-sqlite mbox takeout.db Takeout/Mail/gmail.mbox Usage: google-takeout-to-sqlite [OPTIONS] COMMAND [ARGS]...Try 'google-takeout-to-sqlite --help' for help. Error: No such command 'mbox'. ``` On the box, I installed with pip after cloning: https://github.com/UtahDave/google-takeout-to-sqlite.git","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1002735370,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,1002735370,IC_kwDODFE5qs47xIcK,203343,Btibert3,2021-12-29T18:58:23Z,2021-12-29T18:58:23Z,NONE,"@maxhawkins how hard would it be to add an entry to the table that includes the HTML version of the email, if it exists? I just attempted the PR branch on a very small mbox file, and it worked great.
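On the HTML-version question just above, a minimal standard-library sketch of pulling the `text/html` alternative out of one raw message (the function name and the None fallback are illustrative, not part of the PR):

```python
from email import policy
from email.parser import BytesParser

def html_body(raw_message: bytes):
    # Parsing with the modern policy yields an EmailMessage, whose
    # get_body() can prefer the text/html alternative when one exists.
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    part = msg.get_body(preferencelist=("html",))
    return part.get_content() if part is not None else None
```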
My use case is a research project and I need to access more than just the body plain text.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/simonw/datasette/issues/327#issuecomment-1043609198,https://api.github.com/repos/simonw/datasette/issues/327,1043609198,IC_kwDOBm6k_c4-NDZu,208018,dholth,2022-02-17T23:21:36Z,2022-02-17T23:33:01Z,NONE,"On fly.io. This particular database goes from 1.4GB to 200M. Slower, part of that might be having no `--inspect-file`? ``` $ datasette publish fly ... --generate-dir /tmp/deploy-this ... $ mksquashfs large.db large.squashfs $ rm large.db # don't accidentally put it in the image $ cat Dockerfile FROM python:3.8 COPY . /app WORKDIR /app ENV DATASETTE_SECRET 'xyzzy' RUN pip install -U datasette # RUN datasette inspect large.db --inspect-file inspect-data.json ENV PORT 8080 EXPOSE 8080 CMD mount -o loop -t squashfs large.squashfs /mnt; datasette serve --host 0.0.0.0 -i /mnt/large.db --cors --port $PORT ``` It would also be possible to copy the file onto the ~6GB available on the ephemeral container filesystem on startup. A little against the spirit of the thing? On this example the whole docker image is 2.42 GB and the squashfs version is 1.14 GB.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",335200136,Explore if SquashFS can be used to shrink size of packaged Docker containers, https://github.com/simonw/datasette/issues/327#issuecomment-1043626870,https://api.github.com/repos/simonw/datasette/issues/327,1043626870,IC_kwDOBm6k_c4-NHt2,208018,dholth,2022-02-17T23:37:24Z,2022-02-17T23:37:24Z,NONE,On second thought any kind of quick-to-decompress-on-startup could be helpful if we're paying for the container registry and deployment bandwidth but not ephemeral storage.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",335200136,Explore if SquashFS can be used to shrink size of packaged Docker containers, https://github.com/simonw/datasette/issues/1634#issuecomment-1065334891,https://api.github.com/repos/simonw/datasette/issues/1634,1065334891,IC_kwDOBm6k_c4_f7hr,208018,dholth,2022-03-11T17:38:08Z,2022-03-11T17:38:08Z,NONE,I noticed the image was large when using fly. Is it possible to use a -slim base?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1131295060,Update Dockerfile generated by `datasette publish`, https://github.com/simonw/datasette/pull/1574#issuecomment-1105464661,https://api.github.com/repos/simonw/datasette/issues/1574,1105464661,IC_kwDOBm6k_c5B5A1V,208018,dholth,2022-04-21T16:51:24Z,2022-04-21T16:51:24Z,NONE,"tfw you have more ephemeral storage than upstream bandwidth ``` FROM python:3.10-slim AS base RUN apt update && apt -y install zstd ENV DATASETTE_SECRET 'sosecret' RUN --mount=type=cache,target=/root/.cache/pip pip install -U datasette datasette-pretty-json datasette-graphql ENV PORT 8080 EXPOSE 8080 FROM base AS pack COPY . 
/app WORKDIR /app RUN datasette inspect --inspect-file inspect-data.json RUN zstd --rm *.db FROM base AS unpack COPY --from=pack /app /app WORKDIR /app CMD [""/bin/bash"", ""-c"", ""shopt -s nullglob && zstd --rm -d *.db.zst && datasette serve --host 0.0.0.0 --cors --inspect-file inspect-data.json --metadata metadata.json --create --port $PORT *.db""] ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1084193403,introduce new option for datasette package to use a slim base image, https://github.com/simonw/datasette/issues/409#issuecomment-472875713,https://api.github.com/repos/simonw/datasette/issues/409,472875713,MDEyOklzc3VlQ29tbWVudDQ3Mjg3NTcxMw==,209967,michaelmcandrew,2019-03-14T14:14:39Z,2019-03-14T14:14:39Z,NONE,also linking this zeit issue in case it is helpful: https://github.com/zeit/now-examples/issues/163#issuecomment-440125769,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",408376825,Zeit API v1 does not work for new users - need to migrate to v2, https://github.com/simonw/datasette/issues/417#issuecomment-751504136,https://api.github.com/repos/simonw/datasette/issues/417,751504136,MDEyOklzc3VlQ29tbWVudDc1MTUwNDEzNg==,212369,drewda,2020-12-27T19:02:06Z,2020-12-27T19:02:06Z,NONE,"Very much looking forward to seeing this functionality come together. This is probably out-of-scope for an initial release, but in the future it could be useful to also think of how to run this in a containerized context. For example, an immutable datasette container that points to an S3 bucket of SQLite DBs or CSVs. Or an immutable datasette container pointing to an NFS volume elsewhere on a Kubernetes cluster.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",421546944,Datasette Library, https://github.com/simonw/datasette/issues/185#issuecomment-370461231,https://api.github.com/repos/simonw/datasette/issues/185,370461231,MDEyOklzc3VlQ29tbWVudDM3MDQ2MTIzMQ==,222245,carlmjohnson,2018-03-05T15:43:56Z,2018-03-05T15:44:27Z,NONE,"Yes. I think the simplest implementation is to change lines like ```python metadata = self.ds.metadata.get('databases', {}).get(name, {}) ``` to ```python metadata = { **self.ds.metadata, **self.ds.metadata.get('databases', {}).get(name, {}), } ``` so that specified inner values overwrite outer values, but only if they exist.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",299760684,Metadata should be a nested arbitrary KV store, https://github.com/simonw/datasette/issues/185#issuecomment-376590265,https://api.github.com/repos/simonw/datasette/issues/185,376590265,MDEyOklzc3VlQ29tbWVudDM3NjU5MDI2NQ==,222245,carlmjohnson,2018-03-27T16:32:51Z,2018-03-27T16:32:51Z,NONE,">I think the templates themselves should be able to indicate if they want the inherited values or not. That way we could support arbitrary key/values and avoid the application code having special knowledge of license_url etc.
Yes, you could have `metadata` that works like `metadata` does currently and `inherited_metadata` that works with inheritance.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",299760684,Metadata should be a nested arbitrary KV store, https://github.com/simonw/datasette/issues/185#issuecomment-376592044,https://api.github.com/repos/simonw/datasette/issues/185,376592044,MDEyOklzc3VlQ29tbWVudDM3NjU5MjA0NA==,222245,carlmjohnson,2018-03-27T16:38:23Z,2018-03-27T16:38:23Z,NONE,"It would be nice to also allow arbitrary keys (maybe under a parent key called params or something to prevent conflicts). For our datasette project, we just have a bunch of dictionaries defined in the base template for things like site URL and column humanized names: https://github.com/baltimore-sun-data/salaries-datasette/blob/master/templates/base.html It would be cleaner if this were in the metadata.json.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",299760684,Metadata should be a nested arbitrary KV store, https://github.com/simonw/datasette/issues/185#issuecomment-376614973,https://api.github.com/repos/simonw/datasette/issues/185,376614973,MDEyOklzc3VlQ29tbWVudDM3NjYxNDk3Mw==,222245,carlmjohnson,2018-03-27T17:49:00Z,2018-03-27T17:49:00Z,NONE,"@simonw Other than metadata, the biggest item on wishlist for the salaries project was the ability to reorder by column. Of course, that could be done with a custom SQL query, but we didn't want to have to reimplement all the nav/pagination stuff from scratch. @carolinp, feel free to add your thoughts. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",299760684,Metadata should be a nested arbitrary KV store, https://github.com/simonw/datasette/issues/193#issuecomment-379142500,https://api.github.com/repos/simonw/datasette/issues/193,379142500,MDEyOklzc3VlQ29tbWVudDM3OTE0MjUwMA==,222245,carlmjohnson,2018-04-06T04:05:58Z,2018-04-06T04:05:58Z,NONE,"You could try pulling out a validate query strings method. If it fails validation build the error object from the message. If it passes, you only need to go down a happy path. 
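A quick sketch of that validate-then-happy-path shape (every name here is illustrative; none of this is Datasette internals):

```python
class QueryError(Exception):
    # Carries a user-facing message plus an HTTP status for the error page.
    def __init__(self, message, status=400):
        super().__init__(message)
        self.status = status

def validate_query_string(args):
    # Raise on anything malformed; return the cleaned params otherwise.
    if "_size" in args and not args["_size"].isdigit():
        raise QueryError("_size must be a positive integer")
    return args

def handle(args):
    try:
        params = validate_query_string(args)
    except QueryError as e:
        return {"ok": False, "error": str(e), "status": e.status}
    # Happy path: only validated input reaches this point.
    return {"ok": True, "params": params}
```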
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",310882100,Cleaner mechanism for handling custom errors, https://github.com/simonw/datasette/issues/184#issuecomment-379788103,https://api.github.com/repos/simonw/datasette/issues/184,379788103,MDEyOklzc3VlQ29tbWVudDM3OTc4ODEwMw==,222245,carlmjohnson,2018-04-09T15:15:11Z,2018-04-09T15:15:11Z,NONE,Visit https://salaries.news.baltimoresun.com/salaries/bad-table.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",292011379,500 from missing table name, https://github.com/simonw/datasette/issues/189#issuecomment-379791047,https://api.github.com/repos/simonw/datasette/issues/189,379791047,MDEyOklzc3VlQ29tbWVudDM3OTc5MTA0Nw==,222245,carlmjohnson,2018-04-09T15:23:45Z,2018-04-09T15:23:45Z,NONE,Awesome!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",309471814,Ability to sort (and paginate) by column, https://github.com/simonw/datasette/issues/189#issuecomment-381429213,https://api.github.com/repos/simonw/datasette/issues/189,381429213,MDEyOklzc3VlQ29tbWVudDM4MTQyOTIxMw==,222245,carlmjohnson,2018-04-15T18:54:22Z,2018-04-15T18:54:22Z,NONE,"I think I found a bug. I tried to sort by middle initial in my salaries set, and many middle initials are null. The next_url gets set by Datasette to: http://localhost:8001/salaries-d3a5631/2017+Maryland+state+salaries?_next=None%2C391&_sort=middle_initial But then `None` is interpreted literally and it tries to find a name with the middle initial ""None"" and ends up skipping ahead to O on page 2.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",309471814,Ability to sort (and paginate) by column, https://github.com/simonw/datasette/issues/185#issuecomment-412663658,https://api.github.com/repos/simonw/datasette/issues/185,412663658,MDEyOklzc3VlQ29tbWVudDQxMjY2MzY1OA==,222245,carlmjohnson,2018-08-13T21:04:11Z,2018-08-13T21:04:11Z,NONE,That seems good to me.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",299760684,Metadata should be a nested arbitrary KV store, https://github.com/simonw/datasette/issues/227#issuecomment-439194286,https://api.github.com/repos/simonw/datasette/issues/227,439194286,MDEyOklzc3VlQ29tbWVudDQzOTE5NDI4Ng==,222245,carlmjohnson,2018-11-15T21:20:37Z,2018-11-15T21:20:37Z,NONE,I'm diving back into https://salaries.news.baltimoresun.com and what I really want is the ability to inject the request into my context.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",315960272,prepare_context() plugin hook, https://github.com/simonw/datasette/pull/426#issuecomment-485557574,https://api.github.com/repos/simonw/datasette/issues/426,485557574,MDEyOklzc3VlQ29tbWVudDQ4NTU1NzU3NA==,222245,carlmjohnson,2019-04-22T21:23:22Z,2019-04-22T21:23:22Z,NONE,Can you cut a new release with this?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",431756352,Upgrade to Jinja2==2.10.1, 
https://github.com/simonw/datasette/issues/184#issuecomment-494459264,https://api.github.com/repos/simonw/datasette/issues/184,494459264,MDEyOklzc3VlQ29tbWVudDQ5NDQ1OTI2NA==,222245,carlmjohnson,2019-05-21T16:17:29Z,2019-05-21T16:17:29Z,NONE,"Reopening this because it still raises 500 for incorrect table capitalization. Example: - https://salaries.news.baltimoresun.com/salaries/2018+Maryland+state+salaries/1 200 OK - https://salaries.news.baltimoresun.com/salaries/bad-table/1 400 - https://salaries.news.baltimoresun.com/salaries/2018+maryland+state+salaries/1 500 Internal Error (note lowercase 'm') I think because the table name exists but is not in its canonical form, it triggers a dict lookup error.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",292011379,500 from missing table name, https://github.com/simonw/datasette/issues/782#issuecomment-712569695,https://api.github.com/repos/simonw/datasette/issues/782,712569695,MDEyOklzc3VlQ29tbWVudDcxMjU2OTY5NQ==,222245,carlmjohnson,2020-10-20T03:45:48Z,2020-10-20T03:46:14Z,NONE,"I vote against headers. It has a lot of strikes against it: poor discoverability, new developers often don’t know how to use them, makes CORS harder, makes it hard to use e.g. with JQ, needs ad hoc specification for each bit of metadata, etc. The only advantage of headers is that you don’t need to do .rows, but that’s actually good as a data validation step anyway—if .rows is missing assume there’s an error and do your error handling path instead of parsing the rest.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",627794879,Redesign default .json format, https://github.com/simonw/datasette/issues/983#issuecomment-754210356,https://api.github.com/repos/simonw/datasette/issues/983,754210356,MDEyOklzc3VlQ29tbWVudDc1NDIxMDM1Ng==,222245,carlmjohnson,2021-01-04T20:49:05Z,2021-01-04T20:49:05Z,NONE,"For reasons [I've written about elsewhere](https://blog.carlmjohnson.net/post/2020/time-to-kill-ie11/), I'm in favor of modules. It has several beneficial effects. One, old browsers just ignore it altogether. Two, if you include the same plain script on the page more than once, it will be executed twice, but if you include the same module script on a page twice, it will only execute once. Three, you get a module local namespace, instead of having to use the global window namespace or a function private namespace. OTOH, if you are going to use an old style script, the code from before isn't ideal, because you wipe out your registry if the script is included more than once. Also you may as well use object methods and splat arguments. The event based architecture probably makes more sense though. Just make up some event names prefixed with `datasette:` and listen for them on the root. The only concern with that approach is it can sometimes be tricky to make sure your plugins are run after datasette has run.
Maybe ```js function mycallback(){ // whatever } if (window.datasette) { window.datasette.init(mycallback); } else { document.addEventListener('datasette:init', mycallback); } ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429,JavaScript plugin hooks mechanism similar to pluggy, https://github.com/dogsheep/google-takeout-to-sqlite/issues/2#issuecomment-747130908,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/2,747130908,MDEyOklzc3VlQ29tbWVudDc0NzEzMDkwOA==,231498,khimaros,2020-12-17T00:47:04Z,2020-12-17T00:47:43Z,NONE,"it looks like almost all of the memory consumption is coming from `json.load()`. another direction here may be to use the new ""Semantic Location History"" data which is already broken down by year and month. it also provides much more interesting data, such as estimated address, form of travel, etc.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",769376447,killed by oomkiller on large location-history, https://github.com/dogsheep/github-to-sqlite/issues/64#issuecomment-860895838,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64,860895838,MDEyOklzc3VlQ29tbWVudDg2MDg5NTgzOA==,231498,khimaros,2021-06-14T18:23:21Z,2021-06-14T21:37:35Z,NONE,"i have a basic working version at https://github.com/khimaros/github-to-sqlite this can be tested with `github-to-sqlite events.db khimaros/events` caveat: the GitHub API doesn't seem to provide a complete history of events.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",920636216,"feature: support ""events""", https://github.com/dogsheep/github-to-sqlite/issues/64#issuecomment-861035862,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64,861035862,MDEyOklzc3VlQ29tbWVudDg2MTAzNTg2Mg==,231498,khimaros,2021-06-14T22:29:20Z,2021-06-14T22:29:20Z,NONE,"it looks like the v4 GraphQL API is the only way to get data beyond 90 days from GitHub. this is significant change, but may be worth considering in the future.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",920636216,"feature: support ""events""", https://github.com/dogsheep/github-to-sqlite/issues/64#issuecomment-861087651,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64,861087651,MDEyOklzc3VlQ29tbWVudDg2MTA4NzY1MQ==,231498,khimaros,2021-06-15T00:48:37Z,2021-06-15T00:48:37Z,NONE,"@simonw -- i've created an omega-query that fetched most of what was interesting to me for a single user. found by poking around in the ""Explorer"" tab in https://docs.github.com/en/graphql/overview/explorer note: pagination is still required via `first` and `last` but it seems to allow unlimited history. 
``` query MyQuery { __typename user(login: """") { id pinnedItems(first: 100) { edges { node } } pullRequests(first: 100) { nodes { body title state createdAt } } createdAt issues(first: 100) { pageInfo { endCursor startCursor } nodes { title url createdAt body } } issueComments(first: 100) { edges { node { id updatedAt url body } } } repositories(first: 100) { nodes { createdAt description parent { name } pinnedIssues(first: 100) { edges { node { id } } } pinnedDiscussions(first: 100) { edges { node { id } } } } } starredRepositories(first: 100) { edges { node { id } } } } } ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",920636216,"feature: support ""events""", https://github.com/dogsheep/github-to-sqlite/pull/65#issuecomment-885964242,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65,885964242,IC_kwDODFdgUs40zr3S,231498,khimaros,2021-07-23T23:45:35Z,2021-07-23T23:45:35Z,NONE,@simonw is this PR of interest to you?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923270900,basic support for events, https://github.com/dogsheep/github-to-sqlite/pull/65#issuecomment-1266141699,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65,1266141699,IC_kwDODFdgUs5Ld8oD,231498,khimaros,2022-10-03T22:35:03Z,2022-10-03T22:35:03Z,NONE,"@simonw rebased against latest, please let me know if i should drop this PR.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923270900,basic support for events, https://github.com/simonw/datasette/issues/97#issuecomment-345509500,https://api.github.com/repos/simonw/datasette/issues/97,345509500,MDEyOklzc3VlQ29tbWVudDM0NTUwOTUwMA==,231923,yschimke,2017-11-19T11:26:58Z,2017-11-19T11:26:58Z,NONE,"Specifically docs should make it clearer this file exists https://parlgov.datasettes.com/.json And from that you can build https://parlgov.datasettes.com/parlgov-25f9855.json Then https://parlgov.datasettes.com/parlgov-25f9855/cabinet.json","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274022950,Link to JSON for the list of tables , https://github.com/simonw/datasette/issues/265#issuecomment-392890045,https://api.github.com/repos/simonw/datasette/issues/265,392890045,MDEyOklzc3VlQ29tbWVudDM5Mjg5MDA0NQ==,231923,yschimke,2018-05-29T18:37:49Z,2018-05-29T18:37:49Z,NONE,"Just about to ask for this! 
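Back on the GraphQL pagination note above: a sketch of driving that `first`/cursor loop from Python. The connection and field names follow GitHub's documented v4 schema; the `requests` dependency and the minimal error handling are assumptions of the sketch.

```python
import requests

QUERY = """
query ($login: String!, $cursor: String) {
  user(login: $login) {
    issues(first: 100, after: $cursor) {
      pageInfo { hasNextPage endCursor }
      nodes { title url createdAt }
    }
  }
}
"""

def fetch_all_issues(login, token):
    # Follow endCursor until hasNextPage goes false; each iteration
    # fetches at most 100 nodes, the API's per-page ceiling.
    cursor = None
    while True:
        resp = requests.post(
            "https://api.github.com/graphql",
            json={"query": QUERY, "variables": {"login": login, "cursor": cursor}},
            headers={"Authorization": f"bearer {token}"},
        )
        resp.raise_for_status()
        page = resp.json()["data"]["user"]["issues"]
        yield from page["nodes"]
        if not page["pageInfo"]["hasNextPage"]:
            return
        cursor = page["pageInfo"]["endCursor"]
```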
Move this page https://github.com/simonw/datasette/wiki/Datasettes into a datasette, with some concept of versioning as well.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323677499,Add links to example Datasette instances to appropiate places in docs, https://github.com/simonw/datasette/issues/97#issuecomment-392895733,https://api.github.com/repos/simonw/datasette/issues/97,392895733,MDEyOklzc3VlQ29tbWVudDM5Mjg5NTczMw==,231923,yschimke,2018-05-29T18:51:35Z,2018-05-29T18:51:35Z,NONE,Do you have an existing example with views?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274022950,Link to JSON for the list of tables , https://github.com/dogsheep/twitter-to-sqlite/issues/57#issuecomment-860063190,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/57,860063190,MDEyOklzc3VlQ29tbWVudDg2MDA2MzE5MA==,232237,stiles,2021-06-12T14:46:44Z,2021-06-12T14:46:44Z,NONE,I'm having the same issue (same versions of python and twitter-to-sqlite). It's the `user-timeline` command. Other commands are working. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",907645813,"Error: Use either --since or --since_id, not both", https://github.com/simonw/datasette/issues/1032#issuecomment-712397537,https://api.github.com/repos/simonw/datasette/issues/1032,712397537,MDEyOklzc3VlQ29tbWVudDcxMjM5NzUzNw==,236498,saulpw,2020-10-19T19:37:55Z,2020-10-19T19:37:55Z,NONE,"python-dateutil is awesome, but it can only guess at one date at a time. So if you have a column of dates that are (presumably) in the same format, it can't use the full set of dates to deduce the format. Also, once it has parsed a date, you can't get the format it used, whether to parse or render other dates. These limitations prevent it from being a silver bullet for date parsing, though they're not enough for me to stop using it!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",724878151,Bring date parsing into Datasette core, https://github.com/simonw/sqlite-utils/issues/448#issuecomment-1162498734,https://api.github.com/repos/simonw/sqlite-utils/issues/448,1162498734,IC_kwDOCGYnMM5FSlKu,236907,mungewell,2022-06-22T00:43:45Z,2022-06-22T00:43:45Z,NONE,"Attempted to test on a machine with a new version of Python, but install failed with an error message for the 'click' package. ``` C:\WINDOWS\system32>""c:\Program Files\Python310\python.exe"" Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32 Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> quit() C:\WINDOWS\system32>cd C:\Users\swood\Downloads\sqlite-utils-main-20220621\sqlite-utils-main C:\Users\swood\Downloads\sqlite-utils-main-20220621\sqlite-utils-main>""c:\Program Files\Python310\python.exe"" setup.py install running install running bdist_egg running egg_info ... 
Installed c:\program files\python310\lib\site-packages\click_default_group_wheel-1.2.2-py3.10.egg Searching for click Downloading https://files.pythonhosted.org/packages/3d/da/f3bbf30f7e71d881585d598f67f4424b2cc4c68f39849542e81183218017/click-default-group-wheel-1.2.2.tar.gz#sha256=e90da42d92c03e88a12ed0c0b69c8a29afb5d36e3dc8d29c423ba4219e6d7747 Best match: click default-group-wheel-1.2.2 Processing click-default-group-wheel-1.2.2.tar.gz Writing C:\Users\swood\AppData\Local\Temp\easy_install-aiaj0_eh\click-default-group-wheel-1.2.2\setup.cfg Running click-default-group-wheel-1.2.2\setup.py -q bdist_egg --dist-dir C:\Users\swood\AppData\Local\Temp\easy_install-aiaj0_eh\click-default-group-wheel-1.2.2\egg-dist-tmp-z61a4h8n zip_safe flag not set; analyzing archive contents... removing 'c:\program files\python310\lib\site-packages\click_default_group_wheel-1.2.2-py3.10.egg' (and everything under it) Copying click_default_group_wheel-1.2.2-py3.10.egg to c:\program files\python310\lib\site-packages click-default-group-wheel 1.2.2 is already the active version in easy-install.pth Installed c:\program files\python310\lib\site-packages\click_default_group_wheel-1.2.2-py3.10.egg error: The 'click' distribution was not found and is required by click-default-group-wheel, sqlite-utils ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279144769,Reading rows from a file => AttributeError: '_io.StringIO' object has no attribute 'readinto', https://github.com/simonw/sqlite-utils/issues/448#issuecomment-1162500525,https://api.github.com/repos/simonw/sqlite-utils/issues/448,1162500525,IC_kwDOCGYnMM5FSlmt,236907,mungewell,2022-06-22T00:46:43Z,2022-06-22T00:46:43Z,NONE,"[log.txt](https://github.com/simonw/sqlite-utils/files/8953589/log.txt) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279144769,Reading rows from a file => AttributeError: '_io.StringIO' object has no attribute 'readinto', https://github.com/simonw/datasette/issues/254#issuecomment-388367027,https://api.github.com/repos/simonw/datasette/issues/254,388367027,MDEyOklzc3VlQ29tbWVudDM4ODM2NzAyNw==,247131,philroche,2018-05-11T13:41:46Z,2018-05-11T13:41:46Z,NONE,"An example deployment @ https://datasette-zkcvlwdrhl.now.sh/simplestreams-270f20c/cloudimage?content_id__exact=com.ubuntu.cloud%3Areleased%3Adownload It is not causing errors, more of an inconvenience. I have worked around it using a `like` query instead. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",322283067,Escaping named parameters in canned queries, https://github.com/simonw/datasette/pull/258#issuecomment-390577711,https://api.github.com/repos/simonw/datasette/issues/258,390577711,MDEyOklzc3VlQ29tbWVudDM5MDU3NzcxMQ==,247131,philroche,2018-05-21T07:38:15Z,2018-05-21T07:38:15Z,NONE,"Excellent, I was not aware of the auto redirect to the new hash. My bad This solves my use case. I do agree that your suggested --no-url-hash approach is much neater. 
I will investigate ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",322741659,Add new metadata key persistent_urls which removes the hash from all database urls, https://github.com/simonw/datasette/issues/254#issuecomment-626340387,https://api.github.com/repos/simonw/datasette/issues/254,626340387,MDEyOklzc3VlQ29tbWVudDYyNjM0MDM4Nw==,247131,philroche,2020-05-10T14:54:13Z,2020-05-10T14:54:13Z,NONE,"This has now been resolved and is not present in current version of datasette. Sample query @simonw mentioned now returns as expected. https://aggreg8streams.tinyviking.ie/simplestreams?sql=select+*+from+cloudimage+where+%22content_id%22+%3D+%22com.ubuntu.cloud%3Areleased%3Adownload%22+order+by+id+limit+10","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",322283067,Escaping named parameters in canned queries, https://github.com/simonw/datasette/issues/1497#issuecomment-1387433455,https://api.github.com/repos/simonw/datasette/issues/1497,1387433455,IC_kwDOBm6k_c5Sso3v,270255,adamalton,2023-01-18T17:13:45Z,2023-01-18T17:13:45Z,NONE,"You may have just been talking to yourself here, but I found your documentation of the process incredibly useful when I hit the same error myself. Thanks! 🌈 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1034535001,"Publish to Docker Hub failing with ""libcrypt.so.1: cannot open shared object file""", https://github.com/simonw/datasette/issues/2093#issuecomment-1616195496,https://api.github.com/repos/simonw/datasette/issues/2093,1616195496,IC_kwDOBm6k_c5gVS-o,273509,terinjokes,2023-07-02T00:06:54Z,2023-07-02T00:07:17Z,NONE,"I'm not keen on requiring metadata to be within the database. I commonly have multiple DBs, from various sources, and having one config file to provide the metadata works out very well. I use Datasette with databases where I'm not the original source, needing to mutate them to add a metadata table or sqlite-docs makes me uncomfortable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File", https://github.com/simonw/datasette/issues/1050#issuecomment-718317997,https://api.github.com/repos/simonw/datasette/issues/1050,718317997,MDEyOklzc3VlQ29tbWVudDcxODMxNzk5Nw==,283343,thadk,2020-10-29T02:24:50Z,2020-10-29T02:29:24Z,NONE,"Unsolicited feedback for an unreleased feature of the [current](https://github.com/simonw/datasette/commit/5e0b72247ecab4ce0fcec599b77a83d73a480872) unreleased GitHub version (I casually wanted to access a blob row) – the existing #1036 route doesn't support special characters in database or table names (e.g. `@()` ). Maybe this is motivation for your new idea here. 
Also I got this error/crash with my blob and wasn't able to get the file: https://gist.github.com/thadk/28ac32af0e88747ce9056c90b0b19d34","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",729057388,Switch to .blob render extension for BLOB downloads, https://github.com/simonw/sqlite-utils/issues/227#issuecomment-779785638,https://api.github.com/repos/simonw/sqlite-utils/issues/227,779785638,MDEyOklzc3VlQ29tbWVudDc3OTc4NTYzOA==,295329,camallen,2021-02-16T11:48:03Z,2021-02-16T11:48:03Z,NONE,Thank you @simonw ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",807174161,Error reading csv files with large column data, https://github.com/simonw/datasette/issues/276#issuecomment-744461856,https://api.github.com/repos/simonw/datasette/issues/276,744461856,MDEyOklzc3VlQ29tbWVudDc0NDQ2MTg1Ng==,296686,robintw,2020-12-14T14:04:57Z,2020-12-14T14:04:57Z,NONE,"I'm looking into using datasette with a database with spatialite geometry columns, and came across this issue. Has there been any progress on this since 2018? In one of my tables I'm just storing lat/lon points in a spatialite point geometry, and I've managed to make datasette-cluster-map display the points by extracting the lat and lon in SQL - using something like `select ... ST_X(location) as longitude, ST_Y(location) as latitude from Blah`. Something more 'built-in' would be great though - particularly for the tables I have that store more complex geometries.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/pull/1031#issuecomment-714289680,https://api.github.com/repos/simonw/datasette/issues/1031,714289680,MDEyOklzc3VlQ29tbWVudDcxNDI4OTY4MA==,299380,frankier,2020-10-22T07:23:52Z,2020-10-22T07:23:52Z,NONE,"The bug is that currently when there are databases passed in, but no -i flag, e.g. in configuration directory mode, inclusion in inspect-data.json does not automatically cause databases to be considered immutable, as described in the documentation. The reason is that the -i flag is specified multiple=True, which means when it is not passed in we will get an empty list [], rather than None. So the current code decides that no databases are immutable rather than falling back to inspect-data.json -- as is presumably intended.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",724369025,Fallback to databases in inspect-data.json when no -i options are passed, https://github.com/simonw/datasette/pull/1031#issuecomment-744003454,https://api.github.com/repos/simonw/datasette/issues/1031,744003454,MDEyOklzc3VlQ29tbWVudDc0NDAwMzQ1NA==,299380,frankier,2020-12-13T12:52:56Z,2020-12-13T12:52:56Z,NONE,"Please let me know if there's anything I can do to help get this merged. 
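The `multiple=True` behaviour described above is easy to demonstrate in isolation (a standalone sketch modelled on the `-i` option, not Datasette's actual definition):

```python
import click

@click.command()
@click.option("-i", "--immutable", multiple=True)
def serve(immutable):
    # With multiple=True, click supplies an empty tuple -- not None -- when
    # the option is absent, so a fallback guarded by "immutable is None"
    # never fires, which is exactly the inspect-data.json bug above.
    click.echo(repr(immutable))

if __name__ == "__main__":
    serve()  # invoked with no -i flags this prints ()
```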
This is causing problems for me because it means when I build my Docker image my databases aren't considered immutable, which I would like them to be so that a download link is produced.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",724369025,Fallback to databases in inspect-data.json when no -i options are passed, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-780817596,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,780817596,MDEyOklzc3VlQ29tbWVudDc4MDgxNzU5Ng==,306240,UtahDave,2021-02-17T20:01:35Z,2021-02-17T20:01:35Z,NONE,I've got this almost working. Just needs some polish,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-783688547,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,783688547,MDEyOklzc3VlQ29tbWVudDc4MzY4ODU0Nw==,306240,UtahDave,2021-02-22T21:31:28Z,2021-02-22T21:31:28Z,NONE,"@Btibert3 I've opened a PR with my initial attempt at this. Would you be willing to give this a try? https://github.com/dogsheep/google-takeout-to-sqlite/pull/5","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-783794520,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,783794520,MDEyOklzc3VlQ29tbWVudDc4Mzc5NDUyMA==,306240,UtahDave,2021-02-23T01:13:54Z,2021-02-23T01:13:54Z,NONE,"Also, @simonw I created a test based off the existing tests. I think it's working correctly","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-784638394,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,784638394,MDEyOklzc3VlQ29tbWVudDc4NDYzODM5NA==,306240,UtahDave,2021-02-24T00:36:18Z,2021-02-24T00:36:18Z,NONE,I noticed that @simonw is using black for formatting. I ran black on my additions in this PR.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-790389335,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,790389335,MDEyOklzc3VlQ29tbWVudDc5MDM4OTMzNQ==,306240,UtahDave,2021-03-04T07:32:04Z,2021-03-04T07:32:04Z,NONE,"> The command takes quite a while to start running, presumably because this line causes it to have to scan the WHOLE file in order to generate a count: > > https://github.com/dogsheep/google-takeout-to-sqlite/blob/a3de045eba0fae4b309da21aa3119102b0efc576/google_takeout_to_sqlite/utils.py#L66-L67 > > I'm fine with waiting though. It's not like this is a command people run every day - and without that count we can't show a progress bar, which seems pretty important for a process that takes this long. The wait is from python loading the mbox file. This happens regardless if you're getting the length of the mbox. The mbox module is on the slow side. 
It is possible to do one's own parsing of the mbox, but I kind of wanted to avoid doing that.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-790391711,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,790391711,MDEyOklzc3VlQ29tbWVudDc5MDM5MTcxMQ==,306240,UtahDave,2021-03-04T07:36:24Z,2021-03-04T07:36:24Z,NONE,"> Looks like you're doing this: > > ```python > elif message.get_content_type() == ""text/plain"": > body = message.get_payload(decode=True) > ``` > > So presumably that decodes to a unicode string? > > I imagine the reason the column is a `BLOB` for me is that `sqlite-utils` determines the column type based on the first batch of items - https://github.com/simonw/sqlite-utils/blob/09c3386f55f766b135b6a1c00295646c4ae29bec/sqlite_utils/db.py#L1927-L1928 - and I got unlucky and had something in my first batch that wasn't a unicode string. Ah, that's good to know. I think explicitly creating the tables will be a great improvement. I'll add that. Also, I noticed after I opened this PR that the `message.get_payload()` is being deprecated in favor of `message.get_content()` or something like that. I'll see if that handles the decoding better, too. Thanks for the feedback. I should have time tomorrow to put together some improvements.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791530093,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,791530093,MDEyOklzc3VlQ29tbWVudDc5MTUzMDA5Mw==,306240,UtahDave,2021-03-05T16:28:07Z,2021-03-05T16:28:07Z,NONE,"> I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout. > > Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once? @maxhawkins a limitation of the python mbox module is it loads the entire mbox into memory. I did find another approach to this problem that didn't use the builtin python mbox module and created a generator so that it didn't have to load the whole mbox into memory. I was hoping to use standard library modules, but this might be a good reason to investigate that approach a bit more. My worry is making sure a custom processor handles all the ins and outs of the mbox format correctly. Hm. As I'm writing this, I thought of something. I think I can parse each message one at a time, and then use an mbox function to load each message using the python mbox module. That way the mbox module can still deal with the specifics of the mbox format, but I can use a generator. I'll give that a try. Thanks for the feedback @maxhawkins and @simonw. I'll give that a try. 
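A rough sketch of that one-message-at-a-time idea -- deliberately naive (it ignores mbox ">From " unescaping) and meant only to illustrate the generator shape, not to replace the stdlib module:

```python
import email

def iter_mbox(path):
    # Split the file on "From " separator lines and hand each accumulated
    # chunk to the stdlib email parser, so only one message is buffered
    # in memory at a time.
    buf = []
    with open(path, "rb") as f:
        for line in f:
            if line.startswith(b"From "):
                if buf:
                    yield email.message_from_bytes(b"".join(buf))
                buf = []
            else:
                buf.append(line)
    if buf:
        yield email.message_from_bytes(b"".join(buf))
```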
@simonw can we hold off on merging this until I can test this new approach?","{""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885098025,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885098025,IC_kwDODFE5qs40wYYp,306240,UtahDave,2021-07-22T17:47:50Z,2021-07-22T17:47:50Z,NONE,"Hi @maxhawkins , I'm sorry, I haven't had any time to work on this. I'll have some time tomorrow to test your commits. I think they look great. I'm great with your commits superseding my initial attempt here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/simonw/datasette/issues/1608#issuecomment-1017993482,https://api.github.com/repos/simonw/datasette/issues/1608,1017993482,IC_kwDOBm6k_c48rVkK,316517,astrojuanlu,2022-01-20T22:46:16Z,2022-01-20T22:46:16Z,NONE,Or you can use https://sphinx-version-warning.readthedocs.io/! 😄 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1109808154,Documentation should clarify /stable/ vs /latest/, https://github.com/simonw/datasette/issues/514#issuecomment-539721880,https://api.github.com/repos/simonw/datasette/issues/514,539721880,MDEyOklzc3VlQ29tbWVudDUzOTcyMTg4MA==,319156,ipmb,2019-10-08T22:00:03Z,2019-10-08T22:00:03Z,NONE,"If you are just using Nginx to open a reserved port, systemd can do that on its own. https://www.freedesktop.org/software/systemd/man/systemd.socket.html. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459397625,Documentation with recommendations on running Datasette in production without using Docker, https://github.com/dogsheep/dogsheep-photos/pull/38#issuecomment-1656694854,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/38,1656694854,IC_kwDOD079W85ivyhG,319473,coldclimate,2023-07-29T10:00:45Z,2023-07-29T10:00:45Z,NONE,Ran across https://github.com/dogsheep/dogsheep-photos/issues/33 which is the same subject. My PR just fixes docs,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1827427757,photos-to-sql not found?, https://github.com/dogsheep/dogsheep-photos/pull/38#issuecomment-1656694944,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/38,1656694944,IC_kwDOD079W85ivyig,319473,coldclimate,2023-07-29T10:01:19Z,2023-07-29T10:01:19Z,NONE,Duplicate of https://github.com/dogsheep/dogsheep-photos/pull/36 - closing.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1827427757,photos-to-sql not found?, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1656696679,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,1656696679,IC_kwDOD079W85ivy9n,319473,coldclimate,2023-07-29T10:10:29Z,2023-07-29T10:10:29Z,NONE,"+1 to getting this merged down. For future googlers, I installed by... 
``` git clone git@github.com:RhetTbull/dogsheep-photos.git cd dogsheep-photos git checkout update_for_bigsur python setup.py install ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-1073123231,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,1073123231,IC_kwDOC8tyDs4_9o-f,343884,lchski,2022-03-19T22:39:29Z,2022-03-19T22:39:29Z,NONE,"I have this issue, too, with a fresh export. None of my `Workout` entries in `export.xml` have an `id` key, though [the sample `export.xml` in the tests folder doesn’t either](https://github.com/dogsheep/healthkit-to-sqlite/blob/main/tests/zip_contents/apple_health_export/export.xml#L14-L21), so I don’t think this is the culprit. Indeed, it seems @simonw is using the [`hash_id` function from `sqlite_utils`](https://sqlite-utils.datasette.io/en/stable/python-api.html#setting-an-id-based-on-the-hash-of-the-row-contents), which creates a column (`id`, in this case) based on a hash of the row’s contents. When I run the script, a `workouts` table is created, with one entry: my first workout. No `workout_points` table is created, as [I’d expect from `utils.py`](https://github.com/dogsheep/healthkit-to-sqlite/blob/main/healthkit_to_sqlite/utils.py#L89-L90). I then get essentially the same error as noted in this thread: ```Importing from HealthKit [###################################-] 98% 00:00:01 Traceback (most recent call last): File ""/Users/lchski/.pyenv/versions/3.10.3/bin/healthkit-to-sqlite"", line 8, in sys.exit(cli()) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 1128, in __call__ return self.main(*args, **kwargs) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 1053, in main rv = self.invoke(ctx) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 754, in invoke return __callback(*args, **kwargs) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/healthkit_to_sqlite/cli.py"", line 57, in cli convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/healthkit_to_sqlite/utils.py"", line 34, in convert_xml_to_sqlite workout_to_db(el, db, zipfile) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/healthkit_to_sqlite/utils.py"", line 57, in workout_to_db pk = db[""workouts""].insert(record, alter=True, hash_id=""id"").last_pk File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 2822, in insert return self.insert_all( File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 2950, in insert_all self.insert_chunk( File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 2715, in insert_chunk result = self.db.execute(query, params) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 458, in execute return self.conn.execute(sql, parameters) sqlite3.IntegrityError: UNIQUE constraint failed: workouts.id ``` Are there maybe duplicate workouts in the data, which’d 
cause multiple rows to share the same `id`? It’s strange, though, that no `workout_points` is created at all. Export created from iOS 15.3.1.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-1073139067,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,1073139067,IC_kwDOC8tyDs4_9s17,343884,lchski,2022-03-20T00:54:18Z,2022-03-20T00:54:18Z,NONE,"Update: this appears to be because of running the command twice without clearing the DB in between. Tries to insert a Workout that already exists, causing a collision on the (auto-generated) `id` column. Had a different error with a clean DB, likely due to the workout points format; will make a new issue for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/simonw/sqlite-utils/issues/239#issuecomment-960292442,https://api.github.com/repos/simonw/sqlite-utils/issues/239,960292442,IC_kwDOCGYnMM45POZa,350038,tmaier,2021-11-03T23:28:55Z,2021-11-03T23:28:55Z,NONE,"I am super interested in this feature. After reading the other issues you referenced, I think the right way would be to use the current extract feature and then to use `sqlite-utils convert` to extract the json object into individual columns","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",816526538,sqlite-utils extract could handle nested objects, https://github.com/simonw/sqlite-utils/issues/239#issuecomment-960295228,https://api.github.com/repos/simonw/sqlite-utils/issues/239,960295228,IC_kwDOCGYnMM45PPE8,350038,tmaier,2021-11-03T23:35:37Z,2021-11-03T23:36:50Z,NONE,"I think I only wonder how I would parse the JSON `value` within such a lambda... 
My naive approach would have been `$ sqlite-utils convert demo.db statuses statuses 'return value' --multi`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",816526538,sqlite-utils extract could handle nested objects, https://github.com/simonw/datasette/issues/1522#issuecomment-976023405,https://api.github.com/repos/simonw/datasette/issues/1522,976023405,IC_kwDOBm6k_c46LO9t,360895,steren,2021-11-23T00:08:07Z,2021-11-23T00:08:07Z,NONE,"If you suspect that Cloud Run throttled CPU could be the cause, you can request to have CPU always allocated with `gcloud beta run deploy --no-cpu-throttling` ([read more](https://cloud.google.com/blog/products/serverless/cloud-run-gets-always-on-cpu-allocation)) It could also be the Cloud Run sandbox that somehow gets in the way here, in which case I recommend testing with the second generation execution environment: `gcloud beta run deploy --execution-environment gen2`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1058896236,Deploy a live instance of demos/apache-proxy, https://github.com/simonw/datasette/issues/1479#issuecomment-1709373304,https://api.github.com/repos/simonw/datasette/issues/1479,1709373304,IC_kwDOBm6k_c5l4vd4,363004,hcarter333,2023-09-07T02:14:15Z,2023-09-07T02:14:15Z,NONE,"I ran into the same issue on Windows using `datasette publish cloudrun mydatabase.db --service=my-database` do do a [google cloud publish](https://docs.datasette.io/en/stable/publish.html). @Rik-de-Kort your fix worked perfectly! Thanks! I can always go back and delete the temp directories myself :) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1010112818,"Win32 ""used by another process"" error with datasette publish", https://github.com/simonw/datasette/issues/411#issuecomment-1779267468,https://api.github.com/repos/simonw/datasette/issues/411,1779267468,IC_kwDOBm6k_c5qDXeM,363004,hcarter333,2023-10-25T13:23:04Z,2023-10-25T13:23:04Z,NONE,"Using the [Counties example](https://us-counties.datasette.io/counties/county_for_latitude_longitude?longitude=-122&latitude=37), I was able to pull out the MakePoint method as MakePoint(cast(rm_rnb_history_pres.rx_lng as float), cast(rm_rnb_history_pres.rx_lat as float)) as geometry which worked, giving me a geometry column. ![image](https://github.com/simonw/datasette/assets/363004/6393b712-9e3d-416d-ba37-202934d5f604) gave ![image](https://github.com/simonw/datasette/assets/363004/219db7b2-8107-41b3-a049-ef4d6bd7ac7a) I believe it's the cast to float that does the trick. Prior to using the cast, I also received a 'wrong number of arguments' eror. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",410384988,How to pass named parameter into spatialite MakePoint() function, https://github.com/simonw/datasette/issues/1615#issuecomment-1023997327,https://api.github.com/repos/simonw/datasette/issues/1615,1023997327,IC_kwDOBm6k_c49CPWP,369053,aidansteele,2022-01-28T08:37:36Z,2022-01-28T08:37:36Z,NONE,"Oops, it feels like this should perhaps be migrated to GitHub Discussions - sorry! 
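Picking the `convert --multi` question above back up: the snippet given to `convert` receives each cell as `value`, and the sqlite-utils convert docs show `--import json` making extra modules available, so parsing the JSON first should be all that is missing. A hedged Python-API version, assuming the `statuses` column holds one JSON object per row:

```python
import json

import sqlite_utils

# With multi=True, the dict returned by the function is spread across
# multiple output columns, one per key.
db = sqlite_utils.Database("demo.db")
db["statuses"].convert("statuses", lambda value: json.loads(value), multi=True)
```

The CLI equivalent would presumably be `sqlite-utils convert demo.db statuses statuses 'json.loads(value)' --import json --multi`.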
I don't think I have the ability to do that 😅","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1117132741,Potential simplified publishing mechanism, https://github.com/simonw/datasette/issues/558#issuecomment-511252718,https://api.github.com/repos/simonw/datasette/issues/558,511252718,MDEyOklzc3VlQ29tbWVudDUxMTI1MjcxOA==,380586,0x1997,2019-07-15T01:29:29Z,2019-07-15T01:29:29Z,NONE,"Thanks, the latest version works.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",467218270,Support unicode in url, https://github.com/simonw/datasette/issues/155#issuecomment-347714314,https://api.github.com/repos/simonw/datasette/issues/155,347714314,MDEyOklzc3VlQ29tbWVudDM0NzcxNDMxNA==,388154,wsxiaoys,2017-11-29T00:46:25Z,2017-11-29T00:46:25Z,NONE,"``` CREATE TABLE rhs ( id INTEGER PRIMARY KEY, name TEXT ); CREATE TABLE lhs ( symbol INTEGER PRIMARY KEY, FOREIGN KEY (symbol) REFERENCES rhs(id) ); INSERT INTO rhs VALUES (1, ""foo""); INSERT INTO rhs VALUES (2, ""bar""); INSERT INTO lhs VALUES (1); INSERT INTO lhs VALUES (2); ``` It's expected that in lhs's view, foo / bar should be displayed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",277589569,A primary key column that has foreign key restriction associated won't rendering label column, https://github.com/simonw/datasette/issues/161#issuecomment-350108113,https://api.github.com/repos/simonw/datasette/issues/161,350108113,MDEyOklzc3VlQ29tbWVudDM1MDEwODExMw==,388154,wsxiaoys,2017-12-07T22:02:24Z,2017-12-07T22:02:24Z,NONE,"It's not throwing the validation error anymore, but I still cannot run the following WITH query: ``` WITH RECURSIVE cnt(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM cnt LIMIT 10) SELECT x FROM cnt; ``` I got `near ""WITH"": syntax error`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",278814220,Support WITH query , https://github.com/simonw/datasette/issues/161#issuecomment-350182904,https://api.github.com/repos/simonw/datasette/issues/161,350182904,MDEyOklzc3VlQ29tbWVudDM1MDE4MjkwNA==,388154,wsxiaoys,2017-12-08T06:18:12Z,2017-12-08T06:18:12Z,NONE,"You're right... got this resolved after upgrading the sqlite version. Thank you!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",278814220,Support WITH query , https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1578840450,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1578840450,IC_kwDOCGYnMM5eGzGC,392720,jonafato,2023-06-06T14:09:04Z,2023-06-06T14:09:04Z,NONE,"I also ran into this recently. See below for a patch for one possible solution (tested via ""it works on my machine"", but I don't expect that this behavior would vary a whole lot across terminal emulators and shells). Another possible solution might be to subclass click's `ProgressBar` to keep the logic within the original context manager. Happy to send a PR or for this patch to serve as the basis for a fix that someone else authors.
```patch
diff --git a/sqlite_utils/utils.py b/sqlite_utils/utils.py
index 06c1a4c..530a3a3 100644
--- a/sqlite_utils/utils.py
+++ b/sqlite_utils/utils.py
@@ -147,14 +147,23 @@ def decode_base64_values(doc):
 class UpdateWrapper:
-    def __init__(self, wrapped, update):
+    def __init__(self, wrapped, update, render_finish):
         self._wrapped = wrapped
         self._update = update
+        self._render_finish = render_finish
 
     def __iter__(self):
-        for line in self._wrapped:
-            self._update(len(line))
-            yield line
+        return self
+
+    def __next__(self):
+        try:
+            line = next(self._wrapped)
+        except StopIteration as e:
+            self._render_finish()
+            raise
+
+        self._update(len(line))
+        return line
 
     def read(self, size=-1):
         data = self._wrapped.read(size)
@@ -178,7 +187,7 @@ def file_progress(file, silent=False, **kwargs):
     else:
         file_length = os.path.getsize(file.name)
     with click.progressbar(length=file_length, **kwargs) as bar:
-        yield UpdateWrapper(file, bar.update)
+        yield UpdateWrapper(file, bar.update, bar.render_finish)
 
 
 class Format(enum.Enum):
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903,CLI eats my cursor, https://github.com/simonw/datasette/issues/2001#issuecomment-1403071122,https://api.github.com/repos/simonw/datasette/issues/2001,1403071122,IC_kwDOBm6k_c5ToSqS,406380,gwk,2023-01-25T04:12:41Z,2023-01-25T04:12:41Z,NONE,"@cldellow glad to hear you tried it, as I got grossed out by my own suggestion ;) If you are on macOS I do have one trick for debugging segfaults using lldb.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1553615704,Datasette is not compatible with SQLite's strict quoting compilation option, https://github.com/simonw/datasette/issues/1751#issuecomment-1140321380,https://api.github.com/repos/simonw/datasette/issues/1751,1140321380,IC_kwDOBm6k_c5D9-xk,408765,knutwannheden,2022-05-28T19:52:17Z,2022-05-28T19:52:17Z,NONE,Closing in favor of existing issue #1298.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1251710928,Add scrollbars to table presentation in default layout, https://github.com/simonw/datasette/issues/619#issuecomment-626006493,https://api.github.com/repos/simonw/datasette/issues/619,626006493,MDEyOklzc3VlQ29tbWVudDYyNjAwNjQ5Mw==,412005,davidszotten,2020-05-08T20:29:12Z,2020-05-08T20:29:12Z,NONE,"Just trying out datasette and quite like it, thanks! I found this issue annoying enough to have a go at a fix. Have you any thoughts on a good approach? (I'm happy to dig in myself if you haven't thought about it yet, but wanted to check if you had an idea for how to fix it when you raised the issue)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",520655983,"""Invalid SQL"" page should let you edit the SQL", https://github.com/simonw/datasette/issues/1900#issuecomment-1319596087,https://api.github.com/repos/simonw/datasette/issues/1900,1319596087,IC_kwDOBm6k_c5Op3A3,419145,rdmurphy,2022-11-18T06:16:33Z,2022-11-18T06:16:33Z,NONE,"Interesting! So I tried this locally using your copy of `nps-spatialite.db` and I got the same error.
🤔 ``` ❯ datasette package nps-spatialite.db --spatialite [+] Building 27.5s (10/10) FINISHED => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 622B 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [internal] load metadata for docker.io/library/python:3.11.0-slim-bullseye 0.9s => [internal] load build context 2.3s => => transferring context: 72.38MB 2.3s => CACHED [1/6] FROM docker.io/library/python:3.11.0-slim-bullseye@sha256:1cd45c5dad845af18d71745c017325725dc979571c1bbe625b67e6051533716c 0.0s => [2/6] COPY . /app 0.1s => [3/6] WORKDIR /app 0.0s => [4/6] RUN apt-get update && apt-get install -y python3-dev gcc libsqlite3-mod-spatialite && rm -rf /var/lib/apt/lists/* 18.5s => [5/6] RUN pip install -U datasette 4.9s => ERROR [6/6] RUN datasette inspect nps-spatialite.db --inspect-file inspect-data.json 0.7s ------ > [6/6] RUN datasette inspect nps-spatialite.db --inspect-file inspect-data.json: #10 0.681 Traceback (most recent call last): #10 0.681 File ""/usr/local/bin/datasette"", line 8, in #10 0.681 sys.exit(cli()) #10 0.681 ^^^^^ #10 0.681 File ""/usr/local/lib/python3.11/site-packages/click/core.py"", line 1130, in __call__ #10 0.682 return self.main(*args, **kwargs) #10 0.682 ^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.682 File ""/usr/local/lib/python3.11/site-packages/click/core.py"", line 1055, in main #10 0.682 rv = self.invoke(ctx) #10 0.682 ^^^^^^^^^^^^^^^^ #10 0.682 File ""/usr/local/lib/python3.11/site-packages/click/core.py"", line 1657, in invoke #10 0.682 return _process_result(sub_ctx.command.invoke(sub_ctx)) #10 0.682 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.682 File ""/usr/local/lib/python3.11/site-packages/click/core.py"", line 1404, in invoke #10 0.682 return ctx.invoke(self.callback, **ctx.params) #10 0.682 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.682 File ""/usr/local/lib/python3.11/site-packages/click/core.py"", line 760, in invoke #10 0.682 return __callback(*args, **kwargs) #10 0.682 ^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/cli.py"", line 164, in inspect #10 0.683 inspect_data = loop.run_until_complete(inspect_(files, sqlite_extensions)) #10 0.683 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/asyncio/base_events.py"", line 650, in run_until_complete #10 0.683 return future.result() #10 0.683 ^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/cli.py"", line 179, in inspect_ #10 0.683 counts = await database.table_counts(limit=3600 * 1000) #10 0.683 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/database.py"", line 304, in table_counts #10 0.683 for table in await self.table_names(): #10 0.683 ^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/database.py"", line 342, in table_names #10 0.683 results = await self.execute( #10 0.683 ^^^^^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/database.py"", line 267, in execute #10 0.683 results = await self.execute_fn(sql_operation_in_thread) #10 0.683 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/database.py"", line 213, in execute_fn #10 0.683 return await asyncio.get_event_loop().run_in_executor( #10 0.683 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.683 File 
""/usr/local/lib/python3.11/concurrent/futures/thread.py"", line 58, in run #10 0.683 result = self.fn(*self.args, **self.kwargs) #10 0.683 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/database.py"", line 209, in in_thread #10 0.683 self.ds._prepare_connection(conn, self.name) #10 0.683 File ""/usr/local/lib/python3.11/site-packages/datasette/app.py"", line 593, in _prepare_connection #10 0.683 conn.execute(""SELECT load_extension(?)"", [extension]) #10 0.683 sqlite3.OperationalError: /usr/lib/x86_64-linux-gnu/mod_spatialite.so.so: cannot open shared object file: No such file or directory ------ executor failed running [/bin/sh -c datasette inspect nps-spatialite.db --inspect-file inspect-data.json]: exit code: 1 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1452572348,datasette package --spatialite throws error during build, https://github.com/simonw/datasette/issues/1900#issuecomment-1319639462,https://api.github.com/repos/simonw/datasette/issues/1900,1319639462,IC_kwDOBm6k_c5OqBmm,419145,rdmurphy,2022-11-18T07:24:19Z,2022-11-18T07:24:19Z,NONE,"Is it, uh, possible we are on different architectures? 😅 I'm using an Apple M1 Pro. I jumped into a bash shell of an unmodified `python:3.11.0-slim-bullseye` container and manually ran `apt-get update` and installed `libsqlite3-mod-spatialite`. I don't end up with with `mod_spatialite.so` in `/usr/lib/x86_64-linux-gnu/` — _mine_ is in `/usr/lib/aarch64-linux-gnu/`. [I swapped that directory in here](https://github.com/simonw/datasette/blob/3db37e9a21f774d6c387fd04bf1e4c870554209e/datasette/utils/__init__.py#L407) in a local copy of `datasette` and poof — it worked!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1452572348,datasette package --spatialite throws error during build, https://github.com/simonw/datasette/issues/1900#issuecomment-1319641636,https://api.github.com/repos/simonw/datasette/issues/1900,1319641636,IC_kwDOBm6k_c5OqCIk,419145,rdmurphy,2022-11-18T07:27:26Z,2022-11-18T07:27:26Z,NONE,"Can confirm that my `uname -a` returns something different at the end: ``` root:xnu-8792.41.9~2/RELEASE_ARM64_T6000 arm64 ``` I'm in `arm64` land, you're in `x86_64`. I am admittedly very fuzzy on how this factors into Docker these days. Honestly thought this was one of the things Docker was suppose to help address. 🤔","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1452572348,datasette package --spatialite throws error during build, https://github.com/simonw/datasette/issues/1900#issuecomment-1319664697,https://api.github.com/repos/simonw/datasette/issues/1900,1319664697,IC_kwDOBm6k_c5OqHw5,419145,rdmurphy,2022-11-18T07:59:36Z,2022-11-18T08:00:38Z,NONE,"Okay, my final observations for the night! I've been pushing and pulling the various levers in `utils/__init__.py` to see what makes this work without hard-coding in something for `arm64` and it seems that if I change `/usr/lib/x86_64-linux-gnu/mod_spatialite.so` [here](https://github.com/simonw/datasette/blob/3ecd131e57add427d847b614c920c9624bb2e66b/datasette/utils/__init__.py#L407) to just `mod_spatialite` it's happy. Unfortunately cannot audit that for `x86_64`, but maybe that's a solution that'd be cross-arch compatible? 
It seems like it's the hard-coding of that path that's tripping it up. (It was actually [this comment from back in 2018 in an entirely unrelated repo](https://github.com/pelias/docker/pull/28#issuecomment-433168462) that nudged me to try this, ha.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1452572348,datasette package --spatialite throws error during build, https://github.com/simonw/datasette/issues/1706#issuecomment-1344669160,https://api.github.com/repos/simonw/datasette/issues/1706,1344669160,IC_kwDOBm6k_c5QJgXo,419145,rdmurphy,2022-12-09T19:11:40Z,2022-12-09T19:11:40Z,NONE,"Ah, yes! Was just trying to do this and had the same issue. +1 to this!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1198822563,"[feature] immutable mode for a directory, not just individual sqlite file", https://github.com/simonw/datasette/issues/1775#issuecomment-1233328232,https://api.github.com/repos/simonw/datasette/issues/1775,1233328232,IC_kwDOBm6k_c5Jgxho,428820,johnfelipe,2022-08-31T19:18:47Z,2022-08-31T19:18:47Z,NONE,"I want to contribute if the strings are in a .pot file, or in JSON, yml, yaml, JS or another format.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1323346408,i18n support, https://github.com/simonw/datasette/issues/1775#issuecomment-1233684765,https://api.github.com/repos/simonw/datasette/issues/1775,1233684765,IC_kwDOBm6k_c5JiIkd,428820,johnfelipe,2022-09-01T03:12:50Z,2022-09-01T03:12:50Z,NONE,"I want to begin translating to Spanish (es), including the documentation; if you like I would do a PR asap. i18n support for the user interface, frontend and backend would be a good feature. When you have a .pot file, or a JSON file with all the strings to internationalize, I can translate it","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1323346408,i18n support, https://github.com/simonw/datasette/issues/1775#issuecomment-1233702481,https://api.github.com/repos/simonw/datasette/issues/1775,1233702481,IC_kwDOBm6k_c5JiM5R,428820,johnfelipe,2022-09-01T03:48:27Z,2022-09-01T03:48:27Z,NONE,"I want to do maybe 3 or 4 PRs for your evaluation, and once you allow it I want to support that effort On Wed, Aug 31, 2022, 10:36 p.m., Simon Willison <***@***.***> wrote: > I don't want to start any efforts around documentation translation until > after the Datasette 1.0 release, because I'd like to be confident that > we're not translating documentation that may have some big changes before > Datasette is fully stable!
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1323346408,i18n support, https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1548913065,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1548913065,IC_kwDOCGYnMM5cUomp,433780,chrislkeller,2023-05-16T03:11:03Z,2023-05-16T03:11:52Z,NONE,"Using this thread and some [other resources](https://sqlite-utils.datasette.io/en/stable/cli.html#spatialite-helpers) I managed to cobble together a couple of sqlite-utils lines to add a geometry column for a table that already has a lat/lng column. ``` # add a geometry column sqlite-utils add-geometry-column [db name] [table name] geometry --type POINT --srid 4326 # add a point for each row to geometry column sqlite-utils --load-extension=spatialite [db name] 'update [table name] SET Geometry=MakePoint(longitude, latitude, 4326);' ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,"Make it easier to insert geometries, with documentation and maybe code", https://github.com/simonw/datasette/pull/363#issuecomment-417684877,https://api.github.com/repos/simonw/datasette/issues/363,417684877,MDEyOklzc3VlQ29tbWVudDQxNzY4NDg3Nw==,436032,kevboh,2018-08-31T14:39:45Z,2018-08-31T14:39:45Z,NONE,"It looks like the check passed, not sure why it's showing as running in GH.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",355299310,Search all apps during heroku publish, https://github.com/simonw/datasette/issues/101#issuecomment-344597274,https://api.github.com/repos/simonw/datasette/issues/101,344597274,MDEyOklzc3VlQ29tbWVudDM0NDU5NzI3NA==,450244,eaubin,2017-11-15T13:48:55Z,2017-11-15T13:48:55Z,NONE,This is a duplicate of https://github.com/simonw/datasette/issues/100,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274161964,TemplateAssertionError: no filter named 'tojson', https://github.com/simonw/datasette/issues/1265#issuecomment-803160804,https://api.github.com/repos/simonw/datasette/issues/1265,803160804,MDEyOklzc3VlQ29tbWVudDgwMzE2MDgwNA==,468612,yunzheng,2021-03-19T22:05:12Z,2021-03-19T22:05:12Z,NONE,Wow that was fast! Thanks for this very cool project and quick update! 👍 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",836123030,Support for HTTP Basic Authentication, https://github.com/simonw/sqlite-utils/issues/235#issuecomment-1172697090,https://api.github.com/repos/simonw/sqlite-utils/issues/235,1172697090,IC_kwDOCGYnMM5F5fAC,474467,mdrovdahl,2022-07-01T20:37:40Z,2022-07-01T20:37:55Z,NONE,"I just ran into what appears to be the same issue on a MacBook Pro, M1 Pro. Environment: ``` markd@Marks-MacBook-Pro metabase % python --version Python 3.8.9 markd@Marks-MacBook-Pro metabase % sqlite3 --version 3.37.0 2021-12-09 01:34:53 9ff244ce0739f8ee52a3e9671adb4ee54c83c640b02e3f9d185fd2f9a179aapl markd@Marks-MacBook-Pro metabase % sqlite-utils --version sqlite-utils, version 3.27 markd@Marks-MacBook-Pro metabase % sqlite3 gh.db SQLite version 3.37.0 2021-12-09 01:34:53 Enter "".help"" for usage hints. 
sqlite> .dbconfig defensive off dqs_ddl on dqs_dml on enable_fkey off enable_qpsg off enable_trigger on enable_view on fts3_tokenizer off legacy_alter_table on legacy_file_format off load_extension off no_ckpt_on_close off reset_database off trigger_eqp off trusted_schema on writable_schema off ``` Error ``` markd@Marks-MacBook-Pro metabase % github-to-sqlite repos gh.db a8cteam51 Traceback (most recent call last): File ""/Users/markd/Library/Python/3.8/bin/github-to-sqlite"", line 8, in sys.exit(cli()) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/click/core.py"", line 1130, in __call__ return self.main(*args, **kwargs) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/click/core.py"", line 1055, in main rv = self.invoke(ctx) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/click/core.py"", line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/click/core.py"", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/click/core.py"", line 760, in invoke return __callback(*args, **kwargs) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/github_to_sqlite/cli.py"", line 268, in repos utils.ensure_db_shape(db) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/github_to_sqlite/utils.py"", line 688, in ensure_db_shape ensure_foreign_keys(db) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/github_to_sqlite/utils.py"", line 682, in ensure_foreign_keys db[table].add_foreign_key(column, table2, column2) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/sqlite_utils/db.py"", line 2004, in add_foreign_key self.db.add_foreign_keys([(self.name, column, other_table, other_column)]) File ""/Users/markd/Library/Python/3.8/lib/python/site-packages/sqlite_utils/db.py"", line 1019, in add_foreign_keys cursor.execute( sqlite3.OperationalError: table sqlite_master may not be modified ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810618495,Extract columns cannot create foreign key relation: sqlite3.OperationalError: table sqlite_master may not be modified, https://github.com/simonw/sqlite-utils/issues/235#issuecomment-1172766270,https://api.github.com/repos/simonw/sqlite-utils/issues/235,1172766270,IC_kwDOCGYnMM5F5v4-,474467,mdrovdahl,2022-07-01T22:40:26Z,2022-07-01T22:40:26Z,NONE,"Note, I do not get this issue using my Intel MacBook Pro =/ Environment ``` markd@Marks-MBP metabase % python3 --version Python 3.9.13 markd@Marks-MBP metabase % sqlite3 --version 3.37.0 2021-12-09 01:34:53 9ff244ce0739f8ee52a3e9671adb4ee54c83c640b02e3f9d185fd2f9a179aapl markd@Marks-MBP metabase % sqlite-utils --version sqlite-utils, version 3.27 markd@Marks-MBP metabase % sqlite3 github.db SQLite version 3.37.0 2021-12-09 01:34:53 Enter "".help"" for usage hints. 
sqlite> .dbconfig defensive off dqs_ddl on dqs_dml on enable_fkey off enable_qpsg off enable_trigger on enable_view on fts3_tokenizer off legacy_alter_table on legacy_file_format off load_extension off no_ckpt_on_close off reset_database off trigger_eqp off trusted_schema on writable_schema off ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810618495,Extract columns cannot create foreign key relation: sqlite3.OperationalError: table sqlite_master may not be modified, https://github.com/simonw/sqlite-utils/issues/235#issuecomment-1198414383,https://api.github.com/repos/simonw/sqlite-utils/issues/235,1198414383,IC_kwDOCGYnMM5Hblov,474467,mdrovdahl,2022-07-28T17:10:06Z,2022-07-28T17:10:06Z,NONE,"I was able to fight through this by capturing the SQL commands from the `add_foreign_keys()` function in sqlite-utils and then executing them manually via the sqlite3 client, first setting PRAGMA writable_schema on and then updating the sqlite_master table. Still no clue why they were failing when run in context...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810618495,Extract columns cannot create foreign key relation: sqlite3.OperationalError: table sqlite_master may not be modified, https://github.com/simonw/datasette/issues/1144#issuecomment-744489028,https://api.github.com/repos/simonw/datasette/issues/1144,744489028,MDEyOklzc3VlQ29tbWVudDc0NDQ4OTAyOA==,475613,MarkusH,2020-12-14T14:47:11Z,2020-12-14T14:47:11Z,NONE,"Thanks for opening the issue, @simonw. Let me elaborate on my Tweets. [datasette-chartjs](https://github.com/MarkusH/datasette-chartjs) provides drop down lists to pick the chart visualization (e.g. bar, line, doughnut, pie, ...) as well as the column used for the ""x axis"" (e.g. time). A user can change the values on-demand. The chart will be redrawn w/o querying the database again. However, if a user wants to change the underlying query, they will use the SQL field provided by datasette or any of the other datasette built-in features to amend a query. In order to maintain a user's selections for the plugin, datasette-chartjs copies some parts of [datasette-vega](https://github.com/simonw/datasette-vega) which persist the chosen visualization and column in the hash part of a URL (the stuff behind the `#`). The plugin loads the config from the hash upon initialization on the next page and uses it accordingly. Additionally, datasette-vega and datasette-chartjs need to make sure to include the hash in all links and forms that cause a reload of the page. This is so that the config persists between clicks. This ticket is about moving these parts, which provide the functionality to do so, into datasette. This includes: 1. a way to load config options with a given prefix from the current URL hash 1. a way to update the current URL hash with a new config value or a bunch of config options 1. updating all necessary links and forms on the current page to include the URL hash whenever it's updated 1. to prevent leaking config options to external pages, only ""internal"" links should be updated There's another, optional, feature that we might want to think about during the design phase: the scope of the config. Links within a datasette instance have 1 of 3 scopes: 1. global, for the whole datasette project 1. database, for all tables in a database 1.
table, only for a table within a database. When updating the links and forms as pointed out in 3. above, it might be worth considering which links need to be updated. I could imagine a plugin that wants to persist some setting across all tables within a database but another setting only within a table.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",765637324,JavaScript to help plugins interact with the fragment part of the URL, https://github.com/simonw/datasette/issues/983#issuecomment-753600999,https://api.github.com/repos/simonw/datasette/issues/983,753600999,MDEyOklzc3VlQ29tbWVudDc1MzYwMDk5OQ==,475613,MarkusH,2021-01-03T11:11:21Z,2021-01-03T11:11:21Z,NONE,"With regards to JS/Browser events, given your example of menu items that plugins could add, I could imagine code like this working: ```js // as part of datasette datasette.events.AddMenuItem = 'DatasetteAddMenuItemEvent'; document.addEventListener(datasette.events.AddMenuItem, (e) => { // do whatever is needed to add the menu item. Data comes from `e.detail` alert(e.detail.title + ' ' + e.detail.link); }); // as part of a plugin const event = new CustomEvent(datasette.events.AddMenuItem, {detail: {link: '/foo/bar', title: 'Go somewhere'}}); document.dispatchEvent(event); ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429,JavaScript plugin hooks mechanism similar to pluggy, https://github.com/simonw/datasette/issues/1331#issuecomment-842495820,https://api.github.com/repos/simonw/datasette/issues/1331,842495820,MDEyOklzc3VlQ29tbWVudDg0MjQ5NTgyMA==,475613,MarkusH,2021-05-17T17:18:05Z,2021-05-17T17:18:05Z,NONE,"Wow, you are _fast_! I didn't notice dependabot had opened a PR already. I was about to. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",893537744,Add support for Jinja2 version 3.0, https://github.com/simonw/datasette/issues/1331#issuecomment-842499728,https://api.github.com/repos/simonw/datasette/issues/1331,842499728,MDEyOklzc3VlQ29tbWVudDg0MjQ5OTcyOA==,475613,MarkusH,2021-05-17T17:24:30Z,2021-05-17T17:24:30Z,NONE,"> I wonder if there are any new 3.0 features we should be taking advantage of here that would justify pinning to 3.0 minimum? 
The changelog reads like bug fixes and removal of deprecated parts to me.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",893537744,Add support for Jinja2 version 3.0, https://github.com/simonw/datasette/issues/1331#issuecomment-844970776,https://api.github.com/repos/simonw/datasette/issues/1331,844970776,MDEyOklzc3VlQ29tbWVudDg0NDk3MDc3Ng==,475613,MarkusH,2021-05-20T10:40:25Z,2021-05-20T10:40:25Z,NONE,"Any chance you could push a new datasette release with the updated dependencies in the setup.py, @simonw?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",893537744,Add support for Jinja2 version 3.0, https://github.com/simonw/datasette/issues/949#issuecomment-1791571572,https://api.github.com/repos/simonw/datasette/issues/949,1791571572,IC_kwDOBm6k_c5qyTZ0,498744,mhkeller,2023-11-02T21:36:24Z,2023-11-02T21:36:24Z,NONE,"FWIW, CodeMirror 6 now has this as standard, although if you want table-specific suggestions, you'd have to handle parsing out which table the user is querying yourself.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",684961449,Try out CodeMirror SQL hints, https://github.com/simonw/sqlite-utils/issues/235#issuecomment-1206241356,https://api.github.com/repos/simonw/sqlite-utils/issues/235,1206241356,IC_kwDOCGYnMM5H5chM,503614,lfdebrux,2022-08-05T09:26:15Z,2022-08-05T09:29:42Z,NONE,"I am getting the same error when using github-to-sqlite (which uses sqlite-utils internally). I am also using an M1 MacBook Pro, with macOS Monterey 12.5, with Python 3.10.6 for arm64 installed using pyenv. I have sqlite-utils 3.28 installed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810618495,Extract columns cannot create foreign key relation: sqlite3.OperationalError: table sqlite_master may not be modified, https://github.com/simonw/datasette/issues/1439#issuecomment-1059863997,https://api.github.com/repos/simonw/datasette/issues/1439,1059863997,IC_kwDOBm6k_c4_LD29,505230,karlcow,2022-03-06T00:57:57Z,2022-03-06T00:57:57Z,NONE,"Probably too late… but I have just seen this because http://simonwillison.net/2022/Mar/5/dash-encoding/#atom-everything And it reminded me of comma tools at W3C. http://www.w3.org/,tools Example, the text version of W3C homepage https://www.w3.org/,text > The challenge comes down to telling the difference between the following: > > * `/db/table` - an HTML table page `/db/table` > * `/db/table.csv` - the CSV version of `/db/table` `/db/table,csv` > * `/db/table.csv` - no this one is actually a database table called `table.csv` `/db/table.csv` > * `/db/table.csv.csv` - the CSV version of `/db/table.csv` `/db/table.csv,csv` > * `/db/table.csv.csv.csv` and so on... `/db/table.csv.csv,csv` I haven't checked all the cases in the thread.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",973139047,Rethink how .ext formats (v.s. ?_format=) works before 1.0, https://github.com/simonw/datasette/issues/1210#issuecomment-773977128,https://api.github.com/repos/simonw/datasette/issues/1210,773977128,MDEyOklzc3VlQ29tbWVudDc3Mzk3NzEyOA==,525780,heyarne,2021-02-05T11:30:34Z,2021-02-05T11:30:34Z,NONE,"Thanks for your quick reply! 
Having changed my `metadata.yml`, queries AND database I can't really reproduce it anymore, sorry. But at least I'm happy to say that it works now! :) Thanks again for the super nifty tool, very appreciated.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",796234313,Immutable Database w/ Canned Queries, https://github.com/simonw/datasette/issues/1880#issuecomment-1301043042,https://api.github.com/repos/simonw/datasette/issues/1880,1301043042,IC_kwDOBm6k_c5NjFdi,525934,amitkoth,2022-11-02T18:20:14Z,2022-11-02T18:20:14Z,NONE,"Follow-on question - is all memory use @simonw - for both datasette and SQLite confined to the ""query time"" itself i.e. the memory use is relevant only to a particular transaction or query - and then subsequently released?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1433576351,Datasette with many and large databases > Memory use, https://github.com/simonw/datasette/issues/1880#issuecomment-1317420812,https://api.github.com/repos/simonw/datasette/issues/1880,1317420812,IC_kwDOBm6k_c5Ohj8M,525934,amitkoth,2022-11-16T17:50:29Z,2022-11-16T17:50:29Z,NONE,"I appreciate your response @simonw - thanks! I'll clarify what we need further - let's imagine we have 2000 SQLite databases (for 2000 tenants), but we only want to run _one_ datasette instance for each of those tenants to query/use datasette against their _own_ database only. This means the ""connection"" between datasette and the SQLite database would be dynamic, based on the tenantID that's required on an incoming request. Is there any specific config or other considerations in this use case, to minimize memory use on a single, efficient VM and serve queries to all these tenants? cc @muadham","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1433576351,Datasette with many and large databases > Memory use, https://github.com/simonw/sqlite-utils/issues/487#issuecomment-1242513223,https://api.github.com/repos/simonw/sqlite-utils/issues/487,1242513223,IC_kwDOCGYnMM5KDz9H,540968,ryanfox,2022-09-09T22:01:10Z,2022-09-09T22:01:10Z,NONE,"Perfect. Apologies, I searched but didn’t see that one for some reason. Thank you. On Fri, Sep 9, 2022 at 3:04 PM Simon Willison ***@***.***> wrote: > This isn't supported yet - there's an older issue for that here: > > - #117
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1367835380,Specify foreign key against compound key in other table, https://github.com/simonw/datasette/issues/236#issuecomment-1465208436,https://api.github.com/repos/simonw/datasette/issues/236,1465208436,IC_kwDOBm6k_c5XVU50,545193,sopel,2023-03-12T14:04:15Z,2023-03-12T14:04:15Z,NONE,"I keep coming back to this in search of the related exploration, so I'll just link it now: @simonw has meanwhile researched _how to deploy Datasette to AWS Lambda using function URLs and Mangum_ via https://github.com/simonw/public-notes/issues/6 and concluded _that's everything I need to know in order to build a datasette-publish-lambda plugin_.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500,datasette publish lambda plugin, https://github.com/simonw/datasette/issues/2058#issuecomment-1506203550,https://api.github.com/repos/simonw/datasette/issues/2058,1506203550,IC_kwDOBm6k_c5Zxtee,547438,cephillips,2023-04-13T01:48:21Z,2023-04-13T01:48:21Z,NONE,Really interesting how you are using ChatGPT in this.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1663399821,"500 ""attempt to write a readonly database"" error caused by ""PRAGMA schema_version""", https://github.com/simonw/datasette/pull/1159#issuecomment-759306228,https://api.github.com/repos/simonw/datasette/issues/1159,759306228,MDEyOklzc3VlQ29tbWVudDc1OTMwNjIyOA==,552629,lovasoa,2021-01-13T08:59:31Z,2021-01-13T08:59:31Z,NONE,@simonw: Did you have the time to take a look at this?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",774332247,Improve the display of facets information, https://github.com/simonw/datasette/pull/1159#issuecomment-770865698,https://api.github.com/repos/simonw/datasette/issues/1159,770865698,MDEyOklzc3VlQ29tbWVudDc3MDg2NTY5OA==,552629,lovasoa,2021-02-01T13:42:29Z,2021-02-01T13:42:29Z,NONE,@simonw: Could you have a look at this? I think this really improves readability.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",774332247,Improve the display of facets information, https://github.com/simonw/datasette/pull/1159#issuecomment-804698315,https://api.github.com/repos/simonw/datasette/issues/1159,804698315,MDEyOklzc3VlQ29tbWVudDgwNDY5ODMxNQ==,552629,lovasoa,2021-03-23T07:58:28Z,2021-03-23T07:58:38Z,NONE,@mroswell Did you try it with more columns?
The display is flexible and columns get closer as new ones are added.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",774332247,Improve the display of facets information, https://github.com/simonw/datasette/pull/1159#issuecomment-1100243987,https://api.github.com/repos/simonw/datasette/issues/1159,1100243987,IC_kwDOBm6k_c5BlGQT,552629,lovasoa,2022-04-15T17:24:43Z,2022-04-15T17:24:43Z,NONE,@simonw: do you think this could be merged?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",774332247,Improve the display of facets information, https://github.com/simonw/datasette/pull/1159#issuecomment-1399606104,https://api.github.com/repos/simonw/datasette/issues/1159,1399606104,IC_kwDOBm6k_c5TbEtY,552629,lovasoa,2023-01-22T20:58:10Z,2023-01-22T20:58:10Z,NONE,"great initiative, @cldellow :+1: @simonw, if you want to merge this, that would still be welcome :)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",774332247,Improve the display of facets information, https://github.com/simonw/sqlite-utils/issues/235#issuecomment-1304539296,https://api.github.com/repos/simonw/sqlite-utils/issues/235,1304539296,IC_kwDOCGYnMM5NwbCg,559711,ryascott,2022-11-05T12:40:12Z,2022-11-05T12:40:12Z,NONE,"I had the problem this morning when running: `Python==3.9.6 sqlite3.sqlite_version==3.37.0 sqlite-utils==3.30 ` I upgraded to: `Python ==3.10.8 sqlite3.sqlite_version==3.37.2 sqlite-utils==3.30 ` and the error did not appear anymore. Hope this helps Ryan ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810618495,Extract columns cannot create foreign key relation: sqlite3.OperationalError: table sqlite_master may not be modified, https://github.com/simonw/datasette/issues/596#issuecomment-1238383171,https://api.github.com/repos/simonw/datasette/issues/596,1238383171,IC_kwDOBm6k_c5J0DpD,562352,CharlesNepote,2022-09-06T16:27:25Z,2022-09-06T16:27:25Z,NONE,"Perhaps some ways to address this. 1. Add a **horizontal scrollbar at the top of the table**. There are some solutions here: https://stackoverflow.com/questions/3934271/horizontal-scrollbar-on-top-and-bottom-of-table 2. Use a **fixed table header**. It would be useful when you're lost in the middle of a very big table. Pure CSS solutions seem to exist: https://stackoverflow.com/questions/21168521/table-fixed-header-and-scrollable-body 3. Maybe a possibility to resize columns. Not sure about that because it would be more work not to lose it after each reload. 4. A way to keep **favorite views for each user**. The process would be: I select the column I want or not (with existing ""settings"" icon of each column); then I select a ""favoritize view"" option somewhere; then I can recall all my favorite views from a menu. 
These data could be hosted on the browser.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",507454958,Handle really wide tables better, https://github.com/simonw/datasette/issues/1805#issuecomment-1265161668,https://api.github.com/repos/simonw/datasette/issues/1805,1265161668,IC_kwDOBm6k_c5LaNXE,562352,CharlesNepote,2022-10-03T09:18:05Z,2022-10-03T09:18:05Z,NONE,"> I'm tempted to add `word-wrap: anywhere` only to links that are known to be longer than a certain threshold. Makes sense IMHO.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1363552780,truncate_cells_html does not work for links?, https://github.com/simonw/datasette/issues/1824#issuecomment-1268398461,https://api.github.com/repos/simonw/datasette/issues/1824,1268398461,IC_kwDOBm6k_c5Lmjl9,562352,CharlesNepote,2022-10-05T12:55:05Z,2022-10-05T12:55:05Z,NONE,"Here is some working javascript code. There might be a better solution, I'm not a JS expert. ```javascript var show_hide = document.querySelector("".show-hide-sql > a""); // Hide SQL query if the URL opened with #_hide_sql var hash = window.location.hash; if(hash === ""#_hide_sql"") { hide_sql(); } show_hide.setAttribute(""href"", ""#""); show_hide.addEventListener(""click"", toggle_sql_display); function toggle_sql_display() { if (show_hide.innerText === ""hide"") { hide_sql(); return; } if (show_hide.innerText === ""show"") { show_sql(); return; } } function hide_sql() { sql_element.style.cssText=""display:none""; show_hide.innerHTML = ""show""; show_hide.setAttribute(""href"", ""#_hide_sql""); } function show_sql() { sql_element.style.cssText=""display:block""; show_hide.innerHTML = ""hide""; show_hide.setAttribute(""href"", ""#_show_sql""); } ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1387712501,Convert &_hide_sql=1 to #_hide_sql, https://github.com/simonw/datasette/issues/1860#issuecomment-1292390996,https://api.github.com/repos/simonw/datasette/issues/1860,1292390996,IC_kwDOBm6k_c5NCFJU,562352,CharlesNepote,2022-10-26T17:43:41Z,2022-10-26T17:43:41Z,NONE,"I guess the issue is here: https://github.com/simonw/datasette/blob/9676b2deb07cff20247ba91dad3e84a4ab0b00d1/datasette/utils/__init__.py#L209 Here is a working regexp allowing it: ```diff - re.compile(r""^select\b""), + re.compile(r""^\s*(/\*.+?(?=\*/)\*/\s*)*select""), ``` `^\s*`: the beginning, followed by 0 or more \s characters (spaces, tabs, newlines...) `(/\*.+?(?=\*/)\*/\s*)*`: 0 or more comment blocks beginning with `/*` and ending at the next occurrence of `*/`, each followed by 0 or more \s characters You can play with the regexp here: https://regex101.com/r/aESXDL/3 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1424378012,SQL query field can't begin by a comment, https://github.com/simonw/datasette/issues/1860#issuecomment-1293863145,https://api.github.com/repos/simonw/datasette/issues/1860,1293863145,IC_kwDOBm6k_c5NHsjp,562352,CharlesNepote,2022-10-27T17:43:37Z,2022-10-27T17:43:37Z,NONE,"Sorry I forgot the `-- comments like that`. I'm afraid there is an issue in your regexp, see: https://regex101.com/r/pyubJf/1 I guess I can fix it. 
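In the meantime, here is a minimal Python sketch for checking candidate patterns locally (an illustration only; Python's `re` flavour can differ from regex101 on corner cases, and this is not the exact pattern Datasette ships):
```python
import re

# Candidate pattern allowing -- line comments and /* block */ comments
# before SELECT; DOTALL lets block comments span multiple lines.
candidate = re.compile(r'^\s*((?:--.*?\n\s*)|(?:/\*.*?\*/\s*))*select\b', re.DOTALL)

tests = [
    'select 1',
    '-- a line comment\nselect 1',
    '/* a block\ncomment */ select 1',
    '-- no select here\nupdate t set x = 1',  # must NOT match
]
for sql in tests:
    print(bool(candidate.match(sql)), repr(sql))
```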
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1424378012,SQL query field can't begin by a comment, https://github.com/simonw/datasette/issues/1860#issuecomment-1293912781,https://api.github.com/repos/simonw/datasette/issues/1860,1293912781,IC_kwDOBm6k_c5NH4rN,562352,CharlesNepote,2022-10-27T18:31:15Z,2022-10-27T18:31:15Z,NONE,"Here is my suggestion: `^\s*((?:\-\-.*?\n\s*)|(?:/\*.*?(?=\*/)\*/\s*))*select\b` See the following test: https://regex101.com/r/Doeqqa/1 And here I played all your tests: https://regexr.com/713ir ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1424378012,SQL query field can't begin by a comment, https://github.com/dogsheep/apple-notes-to-sqlite/issues/6#issuecomment-1508784533,https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/6,1508784533,IC_kwDOJHON9s5Z7jmV,579727,sirnacnud,2023-04-14T15:22:09Z,2023-04-14T15:22:09Z,NONE,"Just changing the encoding in `extract_notes` to `utf8` seems to fix it for my titles that were messed up. ![Screen Shot 2023-04-14 at 5 14 18 PM](https://user-images.githubusercontent.com/579727/232086062-e7edc4d1-0880-417a-925b-fd6c65b05155.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1617602868,Character encoding problem, https://github.com/simonw/datasette/issues/942#issuecomment-736173084,https://api.github.com/repos/simonw/datasette/issues/942,736173084,MDEyOklzc3VlQ29tbWVudDczNjE3MzA4NA==,596279,zaneselvans,2020-12-01T02:20:58Z,2020-12-01T02:20:58Z,NONE,"Are there common patterns for storing column-based metadata inside SQLite itself? I know Postgres allows ""comment"" fields, which this is kind of trying to replicate. Should the `units` and `description` and possibly other per-column metadata fields be combined into a single (tabular?) structure, that would be displayed above the data on the table / query results page?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681334912,Support column descriptions in metadata.json, https://github.com/simonw/datasette/issues/942#issuecomment-737428262,https://api.github.com/repos/simonw/datasette/issues/942,737428262,MDEyOklzc3VlQ29tbWVudDczNzQyODI2Mg==,596279,zaneselvans,2020-12-02T18:55:21Z,2020-12-02T18:55:21Z,NONE,"Are you thinking that those metadata tables would be added to the SQLite DB by Datasette, when you tell it to wrap up the database, with the metadata coming from the `metadata.json`? Would it be easy to allow the prepopulation of those tables in the database itself? 
We've been struggling with the best way to make sure that the data is always accompanied by metadata, and baking it all into the database itself would be nice, since then we wouldn't need to worry about separately distributing different files in different contexts.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681334912,Support column descriptions in metadata.json, https://github.com/simonw/datasette/issues/741#issuecomment-806010960,https://api.github.com/repos/simonw/datasette/issues/741,806010960,MDEyOklzc3VlQ29tbWVudDgwNjAxMDk2MA==,596279,zaneselvans,2021-03-24T17:19:42Z,2021-03-24T17:19:42Z,NONE,"Ah, okay so `--extra-options` applies to both `datasette publish` and `datasette package`? There weren't any examples of it being used with `publish` in the docs, so this tripped me up for a bit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",607223136,"Replace ""datasette publish --extra-options"" with ""--setting""", https://github.com/simonw/datasette/issues/942#issuecomment-898032118,https://api.github.com/repos/simonw/datasette/issues/942,898032118,IC_kwDOBm6k_c41huH2,596279,zaneselvans,2021-08-12T23:12:00Z,2021-08-12T23:12:00Z,NONE,"This looks awesome. We'll definitely make extensive use of this feature! On Thu, Aug 12, 2021 at 5:52 PM Simon Willison ***@***.***> wrote: > I like this. Need to solve for mobile though where the cog menu isn't > visible - I think I'll do that with a definition list at the top of the > page.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681334912,Support column descriptions in metadata.json, https://github.com/simonw/datasette/issues/260#issuecomment-1051473892,https://api.github.com/repos/simonw/datasette/issues/260,1051473892,IC_kwDOBm6k_c4-rDfk,596279,zaneselvans,2022-02-26T02:24:15Z,2022-02-26T02:24:15Z,NONE,"Is there already functionality that can be used to validate the `metadata.json` file? Is there a JSON Schema that defines it? Or a validation that's available via datasette with Python? 
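Something like this minimal sketch is the kind of check we are hoping for (the third-party `jsonschema` package and the hand-written schema here are my own assumptions; as far as I know Datasette does not publish an official schema):
```python
import json
from jsonschema import validate  # assumption: the 'jsonschema' package

# Hand-written subset of the metadata structure we depend on (illustrative only)
schema = {
    'type': 'object',
    'properties': {
        'title': {'type': 'string'},
        'databases': {'type': 'object'},
    },
}

with open('metadata.json') as f:
    validate(json.load(f), schema)  # raises ValidationError if invalid
```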
We're working on [automatically building the metadata](https://github.com/catalyst-cooperative/pudl/pull/1479) in CI and when we deploy to cloud run, and it would be nice to be able to check whether the metadata we're outputting is valid in our tests.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323223872,Validate metadata.json on startup, https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059652834,https://api.github.com/repos/simonw/sqlite-utils/issues/412,1059652834,IC_kwDOCGYnMM4_KQTi,596279,zaneselvans,2022-03-05T02:14:40Z,2022-03-05T02:14:40Z,NONE,"We do a lot of `df.to_sql()` to write into sqlite, mostly in [this module](https://github.com/catalyst-cooperative/pudl/blob/main/src/pudl/load.py#L25)","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1160182768,Optional Pandas integration, https://github.com/simonw/datasette/issues/260#issuecomment-1235079469,https://api.github.com/repos/simonw/datasette/issues/260,1235079469,IC_kwDOBm6k_c5JndEt,596279,zaneselvans,2022-09-02T05:24:59Z,2022-09-02T05:24:59Z,NONE,@zschira is working with Pydantic while converting between and validating JSON frictionless datapackage descriptors that annotate an SQLite DB ([extracted from FERC's XBRL data](https://github.com/catalyst-cooperative/ferc-xbrl-extractor)) and the Datasette YAML metadata [so we can publish them with Datasette](https://github.com/catalyst-cooperative/pudl/pull/1831). Maybe there's some overlap? We've been loving Pydantic.,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 1}",323223872,Validate metadata.json on startup, https://github.com/simonw/datasette/issues/782#issuecomment-782756398,https://api.github.com/repos/simonw/datasette/issues/782,782756398,MDEyOklzc3VlQ29tbWVudDc4Mjc1NjM5OA==,601316,simonrjones,2021-02-20T22:05:48Z,2021-02-20T22:05:48Z,NONE,"> I think it’s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default. I agree it is more predictable if the top level item is an object with a rows or data object that contains an array of data, which then allows for other top-level meta data. I can see the argument for removing this and just using an array for convenience - but I think that's OK as an option (as you have now). Rather than have lots of top-level keys you could have a ""meta"" object to contain non-data stuff. You could use something like ""links"" for API endpoint URLs (or use a standard like HAL). Which would then leave the top level a bit cleaner - if that's what you want. Have you had much feedback from users who use the Datasette API a lot?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",627794879,Redesign default .json format, https://github.com/simonw/datasette/issues/283#issuecomment-789680230,https://api.github.com/repos/simonw/datasette/issues/283,789680230,MDEyOklzc3VlQ29tbWVudDc4OTY4MDIzMA==,605492,justinpinkney,2021-03-03T12:28:42Z,2021-03-03T12:28:42Z,NONE,"One note on using this pragma: I got an error on starting datasette `no such table: pragma_database_list`. 
I diagnosed this to an older version of sqlite3 (3.14.2) and upgrading to a newer version (3.34.2) fixed the issue.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325958506,Support cross-database joins, https://github.com/simonw/datasette/issues/176#issuecomment-431867885,https://api.github.com/repos/simonw/datasette/issues/176,431867885,MDEyOklzc3VlQ29tbWVudDQzMTg2Nzg4NQ==,634572,eads,2018-10-22T15:24:57Z,2018-10-22T15:24:57Z,NONE,"I'd like this as well. It would let me access Datasette-driven projects from GatsbyJS the same way I can access Postgres DBs via Hasura. While I don't see SQLite replacing Postgres for the 50m row datasets I sometimes have to work with, there's a whole class of smaller datasets that are great with Datasette but would currently have to find another option.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-548508237,https://api.github.com/repos/simonw/datasette/issues/176,548508237,MDEyOklzc3VlQ29tbWVudDU0ODUwODIzNw==,634572,eads,2019-10-31T18:25:44Z,2019-10-31T18:25:44Z,NONE,"👋 I'd be interested in building this out in Q1 or Q2 of 2020 if nobody has tackled it by then. I would love to integrate Datasette into @thechicagoreporter's practice, but we're also fully committed to GraphQL moving forward.","{""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 2, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/dogsheep/pocket-to-sqlite/issues/9#issuecomment-774730656,https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9,774730656,MDEyOklzc3VlQ29tbWVudDc3NDczMDY1Ng==,635179,merwok,2021-02-07T18:45:04Z,2021-02-07T18:45:04Z,NONE,"That URL uses TLS 1.3, but maybe only if the client supports it. It could be your Python version or your SSL library that’s not recent enough.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",801780625,SSL Error, https://github.com/simonw/sqlite-utils/issues/416#issuecomment-1116684581,https://api.github.com/repos/simonw/sqlite-utils/issues/416,1116684581,IC_kwDOCGYnMM5Cj0El,638427,mattkiefer,2022-05-03T21:36:49Z,2022-05-03T21:36:49Z,NONE,"Thanks for addressing this @simonw! 
However, I just reinstalled sqlite-utils 3.26.1 and get a `ParserError: Unknown string format: None`: ``` sqlite-utils --version sqlite-utils, version 3.26.1 ``` ``` sqlite-utils convert idfpr.db license ""Original Issue Date"" ""r.parsedate(value)"" Traceback (most recent call last): File ""/home/matt/.local/lib/python3.9/site-packages/sqlite_utils/db.py"", line 2514, in convert_value return fn(v) File """", line 2, in fn File ""/home/matt/.local/lib/python3.9/site-packages/sqlite_utils/recipes.py"", line 19, in parsedate parser.parse(value, dayfirst=dayfirst, yearfirst=yearfirst) File ""/usr/lib/python3/dist-packages/dateutil/parser/_parser.py"", line 1374, in parse return DEFAULTPARSER.parse(timestr, **kwargs) File ""/usr/lib/python3/dist-packages/dateutil/parser/_parser.py"", line 649, in parse raise ParserError(""Unknown string format: %s"", timestr) dateutil.parser._parser.ParserError: Unknown string format: None Traceback (most recent call last): File ""/home/matt/.local/bin/sqlite-utils"", line 8, in sys.exit(cli()) File ""/usr/lib/python3/dist-packages/click/core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""/usr/lib/python3/dist-packages/click/core.py"", line 782, in main rv = self.invoke(ctx) File ""/usr/lib/python3/dist-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/usr/lib/python3/dist-packages/click/core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/usr/lib/python3/dist-packages/click/core.py"", line 610, in invoke return callback(*args, **kwargs) File ""/home/matt/.local/lib/python3.9/site-packages/sqlite_utils/cli.py"", line 2707, in convert db[table].convert( File ""/home/matt/.local/lib/python3.9/site-packages/sqlite_utils/db.py"", line 2530, in convert self.db.execute(sql, where_args or []) File ""/home/matt/.local/lib/python3.9/site-packages/sqlite_utils/db.py"", line 463, in execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: user-defined function raised exception ``` I definitely have some invalid data in the db. Happy to send a copy if it's helpful.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1173023272,Options for how `r.parsedate()` should handle invalid dates, https://github.com/simonw/datasette/issues/1217#issuecomment-774528913,https://api.github.com/repos/simonw/datasette/issues/1217,774528913,MDEyOklzc3VlQ29tbWVudDc3NDUyODkxMw==,639730,virtadpt,2021-02-06T19:23:41Z,2021-02-06T19:23:41Z,NONE,I've had a lot of success running it as an OpenFaaS lambda.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",802513359,Possible to deploy as a python app (for Rstudio connect server)?, https://github.com/simonw/datasette/issues/1886#issuecomment-1314223118,https://api.github.com/repos/simonw/datasette/issues/1886,1314223118,IC_kwDOBm6k_c5OVXQO,639730,virtadpt,2022-11-14T18:51:20Z,2022-11-14T18:51:20Z,NONE,I use Datasette to analyze blocklists by using csv-to-sqlite to pull their contents into a database and Datasette to look around through them. 
I also use its REST API to query said database as part of filtering out garbage from domains found in those blocklists.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1447050738,"Call for birthday presents: if you're using Datasette, let us know how you're using it here", https://github.com/simonw/datasette/issues/272#issuecomment-400571521,https://api.github.com/repos/simonw/datasette/issues/272,400571521,MDEyOklzc3VlQ29tbWVudDQwMDU3MTUyMQ==,647359,tomchristie,2018-06-27T07:30:07Z,2018-06-27T07:30:07Z,NONE,"I’m up for helping with this. Looks like you’d need static files support, which I’m planning on adding a component for. Anything else obviously missing? For a quick overview it looks very doable - the test client ought to mean your test cases stay roughly the same. Are you using any middleware or other components from the Sanic ecosystem? Do you use cookies or sessions at all?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324188953,Port Datasette to ASGI, https://github.com/simonw/datasette/issues/272#issuecomment-404514973,https://api.github.com/repos/simonw/datasette/issues/272,404514973,MDEyOklzc3VlQ29tbWVudDQwNDUxNDk3Mw==,647359,tomchristie,2018-07-12T13:38:24Z,2018-07-12T13:38:24Z,NONE,"Okay. I reckon the latest version should have all the kinds of components you'd need: Recently added ASGI components for Routing and Static Files support, as well as making a few tweaks to make sure requests and responses are instantiated efficiently. Don't have any redirect-to-slash / redirect-to-non-slash stuff out of the box yet, which it looks like you might miss.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324188953,Port Datasette to ASGI, https://github.com/simonw/datasette/issues/272#issuecomment-418695115,https://api.github.com/repos/simonw/datasette/issues/272,418695115,MDEyOklzc3VlQ29tbWVudDQxODY5NTExNQ==,647359,tomchristie,2018-09-05T11:21:25Z,2018-09-05T11:21:25Z,NONE,"Some notes: * Starlette just got a bump to 0.3.0 - there's some renamings in there. It's got enough functionality now that you can treat it either as a framework or as a toolkit. Either way the component design is all just *here's an ASGI app* all the way through. * Uvicorn got a bump to 0.3.3 - Removed some cyclical references that were causing garbage collection to impact performance. Ought to be a decent speed bump. * Wrt. passing config - Either use a single envvar that points to a config, or use multiple envvars for the config. 
Uvicorn could get a flag to read a `.env` file, but I don't see ASGI itself having a specific interface there.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324188953,Port Datasette to ASGI, https://github.com/simonw/datasette/issues/272#issuecomment-494297022,https://api.github.com/repos/simonw/datasette/issues/272,494297022,MDEyOklzc3VlQ29tbWVudDQ5NDI5NzAyMg==,647359,tomchristie,2019-05-21T08:39:17Z,2019-05-21T08:39:17Z,NONE,"Useful context stuff: > ASGI decodes %2F encoded slashes in URLs automatically `raw_path` for ASGI looks to be under consideration: https://github.com/django/asgiref/issues/87 > uvicorn doesn't support Python 3.5 That was an issue specifically against the <=3.5.2 minor point releases of Python, now resolved: https://github.com/encode/uvicorn/issues/330 👍 > Starlette for things like form parsing - but it's 3.6+ only! Yeah - the bits that require 3.6 are anywhere with the ""async for"" syntax. If it wasn't for that I'd downport it, but that one's a pain. It's the one bit of syntax to watch out for if you're looking to bring any bits of implementation across to Datasette. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324188953,Port Datasette to ASGI, https://github.com/simonw/datasette/issues/537#issuecomment-513279397,https://api.github.com/repos/simonw/datasette/issues/537,513279397,MDEyOklzc3VlQ29tbWVudDUxMzI3OTM5Nw==,647359,tomchristie,2019-07-19T15:47:57Z,2019-07-19T15:48:09Z,NONE,"The middleware implementation there works okay with a router nested inside if the scope is *mutated*. (Ie. ""endpoint"" doesn't need to exist at the point that the middleware starts running, but if it *has* been made available by the time an exception is thrown, then it can be used.) Starlette's usage of ""endpoint"" there is unilateral, rather than something I've discussed against the ASGI spec - certainly it's important for any monitoring ASGI middleware to be able to have some kind of visibility onto some limited subset of routing information, and `""endpoint""` in the scope referencing some routed-to callable seemed general enough to be useful. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",463544206,"Populate ""endpoint"" key in ASGI scope", https://github.com/simonw/datasette/issues/537#issuecomment-513442743,https://api.github.com/repos/simonw/datasette/issues/537,513442743,MDEyOklzc3VlQ29tbWVudDUxMzQ0Mjc0Mw==,647359,tomchristie,2019-07-20T06:50:47Z,2019-07-20T06:50:47Z,NONE,"Right now the spec does say “copy the scope, rather than mutate it” https://asgi.readthedocs.io/en/latest/specs/main.html#middleware I wouldn’t be surprised if there’s room for discussion on evolving the exact language there. 
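In code terms the strict reading is just this (a minimal sketch of an ASGI middleware that copies the scope instead of mutating it):
```python
class EndpointMiddleware:
    def __init__(self, app, endpoint):
        self.app = app
        self.endpoint = endpoint

    async def __call__(self, scope, receive, send):
        # Build a new scope dict rather than writing into the one we were handed.
        scope = dict(scope, endpoint=self.endpoint)
        await self.app(scope, receive, send)
```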
There’s obvs a nice element to the strictness there, tho practically I’m not sure it’s something that implementations will follow, and it’s not something that Starlette chooses to abide by.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",463544206,"Populate ""endpoint"" key in ASGI scope", https://github.com/simonw/datasette/pull/595#issuecomment-541664602,https://api.github.com/repos/simonw/datasette/issues/595,541664602,MDEyOklzc3VlQ29tbWVudDU0MTY2NDYwMg==,647359,tomchristie,2019-10-14T13:03:10Z,2019-10-14T13:03:10Z,NONE,"🤷‍♂️ @stonebig's suggestion would be the best I got too, *if* you want to support 3.5->3.8. It's either that, or hold off on 3.8 support until you're ready to go to 3.6->3.8. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",506300941,bump uvicorn to 0.9.0 to be Python-3.8 friendly, https://github.com/simonw/datasette/pull/595#issuecomment-552327079,https://api.github.com/repos/simonw/datasette/issues/595,552327079,MDEyOklzc3VlQ29tbWVudDU1MjMyNzA3OQ==,647359,tomchristie,2019-11-11T07:34:27Z,2019-11-11T07:34:27Z,NONE,"> Glitch has been upgraded to Python 3.7. Whoop! 🥳 ✨ ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",506300941,bump uvicorn to 0.9.0 to be Python-3.8 friendly, https://github.com/simonw/datasette/issues/144#issuecomment-346427794,https://api.github.com/repos/simonw/datasette/issues/144,346427794,MDEyOklzc3VlQ29tbWVudDM0NjQyNzc5NA==,649467,mhalle,2017-11-22T17:55:45Z,2017-11-22T17:55:45Z,NONE,"Thanks. There is a way to use pip to grab apsw, which also lets you configure it (flags to build extensions, use an internal sqlite, etc). Don't know how that works as a dependency for another package, though. On November 22, 2017 11:38:06 AM EST, Simon Willison wrote: >I have a solution for FTS already, but I'm interested in apsw as a >mechanism for allowing custom virtual tables to be written in Python >(pysqlite only lets you write custom functions) > >Not having PyPI support is pretty tough though. I'm planning a >plugin/extension system which would be ideal for things like an >optional apsw mode, but that's a lot harder if apsw isn't in PyPI. > >-- >You are receiving this because you authored the thread. >Reply to this email directly or view it on GitHub: >https://github.com/simonw/datasette/issues/144#issuecomment-346405660 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",276091279,apsw as alternative sqlite3 binding (for full text search), https://github.com/simonw/datasette/issues/1003#issuecomment-706302863,https://api.github.com/repos/simonw/datasette/issues/1003,706302863,MDEyOklzc3VlQ29tbWVudDcwNjMwMjg2Mw==,649467,mhalle,2020-10-09T17:17:06Z,2020-10-09T17:17:06Z,NONE,"I agree on the descriptive and python-consistent naming. 
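For instance (a minimal sketch; `from_json` here is just the proposed name, and registration is one line regardless of what it ends up being called):
```python
import json
from jinja2 import Environment

env = Environment()
env.filters['from_json'] = json.loads

tmpl = env.from_string('{{ payload | from_json | length }}')
print(tmpl.render(payload='[1, 2, 3]'))  # prints 3
```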
There is already a tojson, but frankly I find the ""to"" and ""from"" confusing in a text templating language where what's a string and what's data isn't 100% transparent.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",718238967,from_json jinja2 filter, https://github.com/simonw/sqlite-utils/issues/171#issuecomment-714219725,https://api.github.com/repos/simonw/sqlite-utils/issues/171,714219725,MDEyOklzc3VlQ29tbWVudDcxNDIxOTcyNQ==,649467,mhalle,2020-10-22T04:38:35Z,2020-10-22T04:38:35Z,NONE,"Thanks. As I said, I think the result (being able to query tree structures like ancestors and descendants) is more important than the implementation, and I agree that this particular sqlite extension is too obscure. Just providing an sqlite utility to build or rebuild a transitive closure table might be more generically useful. I find that hierarchical data shows up pretty frequently in some data science problems.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",707407567,Idea: transitive closure tables for tree structures, https://github.com/simonw/sqlite-utils/issues/220#issuecomment-761015218,https://api.github.com/repos/simonw/sqlite-utils/issues/220,761015218,MDEyOklzc3VlQ29tbWVudDc2MTAxNTIxOA==,649467,mhalle,2021-01-15T15:40:08Z,2021-01-15T15:40:08Z,NONE,"Makes sense. If you're coming from the sqlite3 side of things, rather than the datasette side, wanting the fts methods to work for views makes more sense. sqlite3 allows fts5 tables on views, so I was looking for CLI functionality to build the fts virtual tables. Ultimately, though, sharing fts virtual tables across tables and derivative views is likely more efficient. Maybe an explicit error message like, ""fts is not supported for views"" rather than just throwing an exception that the method doesn't exist might be helpful. Not critical though. Thanks.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",783778672,Better error message for *_fts methods against views, https://github.com/simonw/sqlite-utils/issues/220#issuecomment-783662968,https://api.github.com/repos/simonw/sqlite-utils/issues/220,783662968,MDEyOklzc3VlQ29tbWVudDc4MzY2Mjk2OA==,649467,mhalle,2021-02-22T20:44:51Z,2021-02-22T20:44:51Z,NONE,"Actually, coming back to this, I have a clearer use case for enabling fts generation for views: making it easier to bring in text from lookup tables and other joins. The datasette documentation describes populating an fts table like so: ``` INSERT INTO ""items_fts"" (rowid, name, description, category_name) SELECT items.rowid, items.name, items.description, categories.name FROM items JOIN categories ON items.category_id=categories.id; ``` Alternatively if you have fts support in sqlite_utils for views (which sqlite and fts5 support), you can do the same thing just by creating a view that captures the above joins as columns, then creating an fts table from that view. Such an fts table can be created using sqlite_utils, where one created with your method can't. The resulting fts table can then be used by a whole family of related tables and views in the manner you described earlier in this issue. 
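To make that concrete, here is a sketch of the shape I mean (invented schema; an FTS5 table with a view as external content, which has to be populated by hand since views can't carry the usual sync triggers):
```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, category_id INTEGER);
CREATE VIEW items_view AS
  SELECT items.id AS rowid, items.name AS name, categories.name AS category_name
  FROM items JOIN categories ON items.category_id = categories.id;
CREATE VIRTUAL TABLE items_fts USING fts5(
  name, category_name, content='items_view', content_rowid='rowid'
);
-- rebuild the index manually whenever the underlying tables change:
INSERT INTO items_fts(rowid, name, category_name)
  SELECT rowid, name, category_name FROM items_view;
''')
```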
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",783778672,Better error message for *_fts methods against views, https://github.com/simonw/datasette/issues/268#issuecomment-789409126,https://api.github.com/repos/simonw/datasette/issues/268,789409126,MDEyOklzc3VlQ29tbWVudDc4OTQwOTEyNg==,649467,mhalle,2021-03-03T03:57:15Z,2021-03-03T03:58:40Z,NONE,"In FTS5, I think doing an FTS search is actually much easier than doing a join against the main table like datasette does now. In fact, FTS5 external content tables provide a transparent interface back to the original table or view. Here's what I'm currently doing: * build a view that joins whatever tables I want and rename the columns to non-joiny names (e.g, `chapter.name AS chapter_name` in the view where needed) * Create an FTS5 table with `content=""viewname""` * As described in the ""external content tables"" section (https://www.sqlite.org/fts5.html#external_content_tables), sql queries can be made directly to the FTS table, which behind the covers makes select calls to the content table when the content of the original columns are needed. * In addition, you get ""rank"" and ""bm25()"" available to you when you select on the _fts table. Unfortunately, datasette doesn't currently seem happy being coerced into doing a real query on an fts5 table. This works: ```select col1, col2, col3 from table_fts where coll1=""value"" and table_fts match escape_fts(""search term"") order by rank``` But this doesn't work in the datasette SQL query interface: ```select col1, col2, col3 from table_fts where coll1=""value"" and table_fts match escape_fts(:search) order by rank``` (the ""search"" input text field doesn't show up) For what datasette is doing right now, I think you could just use contentless fts5 tables (`content=""""`), since all you care about is the rowid since all you're doing a subselect to get the rowid anyway. In fts5, that's just a contentless table. I guess if you want to follow this suggestion, you'd need a somewhat different code path for fts5. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323718842,Mechanism for ranking results from SQLite full-text search, https://github.com/simonw/datasette/issues/268#issuecomment-790257263,https://api.github.com/repos/simonw/datasette/issues/268,790257263,MDEyOklzc3VlQ29tbWVudDc5MDI1NzI2Mw==,649467,mhalle,2021-03-04T03:20:23Z,2021-03-04T03:20:23Z,NONE,"It's kind of an ugly hack, but you can try out what using the fts5 table as an actual datasette-accessible table looks like without changing any datasette code by creating yet another view on top of the fts5 table: `create view proxyview as select *, rank, table_fts as fts from table_fts;` That's now visible from datasette, just like any other view, but you can use `fts match escape_fts(search_string) order by rank`. This is only good as a proof of concept because you're inefficiently going from view -> fts5 external content table -> view -> data table. 
However, it does show it works.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323718842,Mechanism for ranking results from SQLite full-text search, https://github.com/simonw/datasette/issues/670#issuecomment-696163452,https://api.github.com/repos/simonw/datasette/issues/670,696163452,MDEyOklzc3VlQ29tbWVudDY5NjE2MzQ1Mg==,652285,snth,2020-09-21T14:46:10Z,2020-09-21T14:46:10Z,NONE,I'm currently using PostgREST to serve OpenAPI APIs off Postgresql databases. I would like to try out datasette once this becomes available on Postgres.,"{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",564833696,Prototoype for Datasette on PostgreSQL, https://github.com/simonw/datasette/issues/2087#issuecomment-1636134091,https://api.github.com/repos/simonw/datasette/issues/2087,1636134091,IC_kwDOBm6k_c5hhWzL,653549,adarshp,2023-07-14T17:02:03Z,2023-07-14T17:02:03Z,NONE,"@asg017 - the docs say that the autodetection only occurs in configuration directory mode. I for one would also be interested in the `--settings settings.json` feature. For context, I am developing a large database for use with Datasette, but the database lives in a different network volume than my source code, since the volume in which my source code lives is aggressively backed up, while the location where the database lives is meant for temporary files and is not as aggressively backed up (since the backups would get unreasonably large).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1765870617,`--settings settings.json` option, https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-777951854,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,777951854,MDEyOklzc3VlQ29tbWVudDc3Nzk1MTg1NA==,675335,leafgarland,2021-02-12T03:54:39Z,2021-02-12T03:54:39Z,NONE,"I think that is a typo in the docs, you can use > dogsheep-photos apple-photos photos.db","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729,photo-to-sqlite: command not found, https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778014990,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,778014990,MDEyOklzc3VlQ29tbWVudDc3ODAxNDk5MA==,675335,leafgarland,2021-02-12T06:54:14Z,2021-02-12T06:54:14Z,NONE,"Ahh, that might be because macOS Big Sur has changed the structure of the photos db. Might need to wait for a later release, there is a PR which adds support for Big Sur. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729,photo-to-sqlite: command not found, https://github.com/simonw/datasette/pull/868#issuecomment-663924220,https://api.github.com/repos/simonw/datasette/issues/868,663924220,MDEyOklzc3VlQ29tbWVudDY2MzkyNDIyMA==,702729,joshmgrant,2020-07-26T01:29:00Z,2020-07-26T01:29:00Z,NONE,"Ok, so it's been a while but I think I'm making progress. The good news: I have come up with a configuration change for running the tests on Windows on GitHub Actions. The bad news: there's a bunch of extraneous errors on the Windows case. I *think* this is due to Windows file IO and sqlite in a lot of cases, so I'm working through it. 
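The classic shape of the problem, as a generic illustration (not code from this PR): on Windows an open sqlite3 connection keeps the database file locked, so cleanup fails unless the connection is closed first.
```python
import os
import sqlite3
import tempfile

fd, path = tempfile.mkstemp(suffix='.db')
os.close(fd)
conn = sqlite3.connect(path)
conn.execute('CREATE TABLE t (id INTEGER)')
conn.close()  # without this, os.remove raises PermissionError on Windows
os.remove(path)
```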
I will eventually clean up this PR.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",646448486,initial windows ci setup, https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-691501132,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50,691501132,MDEyOklzc3VlQ29tbWVudDY5MTUwMTEzMg==,706257,bcongdon,2020-09-12T14:48:10Z,2020-09-12T14:48:10Z,NONE,"This seems to be an issue even with larger values of `--stop_after`: ``` $ twitter-to-sqlite favorites twitter.db --stop_after=2000 Importing favorites [####################################] 198 $ ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",698791218,"favorites --stop_after=N stops after min(N, 200)", https://github.com/simonw/datasette/issues/1727#issuecomment-1111448928,https://api.github.com/repos/simonw/datasette/issues/1727,1111448928,IC_kwDOBm6k_c5CP11g,716529,glyph,2022-04-27T20:27:05Z,2022-04-27T20:27:05Z,NONE,"You don't want to re-use an SQLite connection from multiple threads anyway: https://www.sqlite.org/threadsafe.html Multiple connections can operate on the file in parallel, but a single connection can't: > Multi-thread. In this mode, SQLite can be safely used by multiple threads **provided that no single database connection is used simultaneously in two or more threads**. (emphasis mine)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1217759117,Research: demonstrate if parallel SQL queries are worthwhile, https://github.com/simonw/datasette/issues/1727#issuecomment-1111451790,https://api.github.com/repos/simonw/datasette/issues/1727,1111451790,IC_kwDOBm6k_c5CP2iO,716529,glyph,2022-04-27T20:30:33Z,2022-04-27T20:30:33Z,NONE,"> I should try seeing what happens with WAL mode enabled. I've only skimmed above but it looks like you're doing mainly read-only queries? WAL mode is about better interactions between writers & readers, primarily.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1217759117,Research: demonstrate if parallel SQL queries are worthwhile, https://github.com/simonw/datasette/issues/120#issuecomment-355487646,https://api.github.com/repos/simonw/datasette/issues/120,355487646,MDEyOklzc3VlQ29tbWVudDM1NTQ4NzY0Ng==,723567,nickdirienzo,2018-01-05T07:10:12Z,2018-01-05T07:10:12Z,NONE,"Ah, glad I found this issue. I have private data that I'd like to share to a few different people. Personally, a shared username and password would be sufficient for me, more-or-less Basic Auth. Do you have more complex requirements in mind? I'm not sure if ""plugin"" means ""build a plugin"" or ""find a plugin"" or something else entirely. FWIW, I stumbled upon [sanic-auth](https://github.com/pyx/sanic-auth) which looks like a new project to bring some interfaces around auth to sanic, similar to Flask. Alternatively, it shouldn't be too bad to add in Basic Auth. If we went down that route, that would probably be best built as a separate package for sanic that `datasette` brings in. 
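The check itself is tiny; the open question is where it hooks in (a sketch, nothing sanic- or datasette-specific):
```python
import base64

def check_basic_auth(headers, username, password):
    # headers: mapping of lower-cased header names to values
    expected = 'Basic ' + base64.b64encode(
        f'{username}:{password}'.encode()).decode()
    return headers.get('authorization') == expected
```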
What are your thoughts around this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275087397,Plugin that adds an authentication layer of some sort, https://github.com/simonw/sqlite-utils/issues/431#issuecomment-1126295407,https://api.github.com/repos/simonw/sqlite-utils/issues/431,1126295407,IC_kwDOCGYnMM5DIedv,738408,rafguns,2022-05-13T17:47:32Z,2022-05-13T17:47:32Z,NONE,"I'd be happy to write a PR for this, if you think it's worth having.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1227571375,Allow making m2m relation of a table to itself, https://github.com/simonw/sqlite-utils/issues/431#issuecomment-1164460052,https://api.github.com/repos/simonw/sqlite-utils/issues/431,1164460052,IC_kwDOCGYnMM5FaEAU,738408,rafguns,2022-06-23T14:12:51Z,2022-06-23T14:12:51Z,NONE,"Yeah, I think I prefer your suggestion: it seems cleaner than my initial `left_name=`/`right_name=` idea. Perhaps one downside is that it's less obvious what the role of each field is: in this example, is `people_id_1` a reference to parent or child?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1227571375,Allow making m2m relation of a table to itself, https://github.com/simonw/datasette/issues/1175#issuecomment-762488336,https://api.github.com/repos/simonw/datasette/issues/1175,762488336,MDEyOklzc3VlQ29tbWVudDc2MjQ4ODMzNg==,758858,hannseman,2021-01-18T21:59:28Z,2021-01-18T22:00:31Z,NONE,"I encountered your issue when trying to find a solution and came up with the following, maybe it can help. ```python import logging.config from typing import Tuple import structlog import uvicorn from example.config import settings shared_processors: Tuple[structlog.types.Processor, ...] 
= ( structlog.contextvars.merge_contextvars, structlog.stdlib.add_logger_name, structlog.stdlib.add_log_level, structlog.processors.TimeStamper(fmt=""iso""), ) logging_config = { ""version"": 1, ""disable_existing_loggers"": False, ""formatters"": { ""json"": { ""()"": structlog.stdlib.ProcessorFormatter, ""processor"": structlog.processors.JSONRenderer(), ""foreign_pre_chain"": shared_processors, }, ""console"": { ""()"": structlog.stdlib.ProcessorFormatter, ""processor"": structlog.dev.ConsoleRenderer(), ""foreign_pre_chain"": shared_processors, }, **uvicorn.config.LOGGING_CONFIG[""formatters""], }, ""handlers"": { ""default"": { ""level"": ""DEBUG"", ""class"": ""logging.StreamHandler"", ""formatter"": ""json"" if not settings.debug else ""console"", }, ""uvicorn.access"": { ""level"": ""INFO"", ""class"": ""logging.StreamHandler"", ""formatter"": ""access"", }, ""uvicorn.default"": { ""level"": ""INFO"", ""class"": ""logging.StreamHandler"", ""formatter"": ""default"", }, }, ""loggers"": { """": {""handlers"": [""default""], ""level"": ""INFO""}, ""uvicorn.error"": { ""handlers"": [""default"" if not settings.debug else ""uvicorn.default""], ""level"": ""INFO"", ""propagate"": False, }, ""uvicorn.access"": { ""handlers"": [""default"" if not settings.debug else ""uvicorn.access""], ""level"": ""INFO"", ""propagate"": False, }, }, } def setup_logging() -> None: structlog.configure( processors=[ structlog.stdlib.filter_by_level, *shared_processors, structlog.stdlib.PositionalArgumentsFormatter(), structlog.processors.StackInfoRenderer(), structlog.processors.format_exc_info, structlog.stdlib.ProcessorFormatter.wrap_for_formatter, ], context_class=dict, logger_factory=structlog.stdlib.LoggerFactory(), wrapper_class=structlog.stdlib.AsyncBoundLogger, cache_logger_on_first_use=True, ) logging.config.dictConfig(logging_config) ``` And then it'll be run on the startup event: ```python @app.on_event(""startup"") async def startup_event() -> None: setup_logging() ``` It depends on a setting called `debug` which controls if it should output the regular uvicorn logging or json. ","{""total_count"": 15, ""+1"": 7, ""-1"": 0, ""laugh"": 1, ""hooray"": 1, ""confused"": 0, ""heart"": 5, ""rocket"": 1, ""eyes"": 0}",779156520,Use structlog for logging, https://github.com/simonw/sqlite-utils/issues/205#issuecomment-742299584,https://api.github.com/repos/simonw/sqlite-utils/issues/205,742299584,MDEyOklzc3VlQ29tbWVudDc0MjI5OTU4NA==,765871,kaihendry,2020-12-10T07:24:22Z,2020-12-10T07:24:22Z,NONE,Bumping to ubuntu-20.04 appears to have solved my syntax error. 🤷,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",760960559,"sqlite3.OperationalError: near ""("": syntax error", https://github.com/simonw/datasette/issues/1618#issuecomment-1028294089,https://api.github.com/repos/simonw/datasette/issues/1618,1028294089,IC_kwDOBm6k_c49SoXJ,770231,strada,2022-02-02T19:42:03Z,2022-02-02T19:42:03Z,NONE,"Thanks for looking into this. It might have been nice if `explain` surfaced these function calls. Looks like `explain query plan` does, but only for basic queries. 
``` sqlite-utils fixtures.db 'explain query plan select * from pragma_function_list(), pragma_database_list(), pragma_module_list()' -t id parent notused detail ---- -------- --------- ------------------------------------------------ 4 0 0 SCAN pragma_function_list VIRTUAL TABLE INDEX 0: 8 0 0 SCAN pragma_database_list VIRTUAL TABLE INDEX 0: 12 0 0 SCAN pragma_module_list VIRTUAL TABLE INDEX 0: ``` ``` sqlite-utils fixtures.db 'explain query plan select * from pragma_function_list() as fl, pragma_database_list() as dl, pragma_module_list() as ml' -t id parent notused detail ---- -------- --------- ------------------------------ 4 0 0 SCAN fl VIRTUAL TABLE INDEX 0: 8 0 0 SCAN dl VIRTUAL TABLE INDEX 0: 12 0 0 SCAN ml VIRTUAL TABLE INDEX 0: ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1121121305,"Reconsider policy on blocking queries containing the string ""pragma""", https://github.com/simonw/sqlite-utils/issues/420#issuecomment-1082476727,https://api.github.com/repos/simonw/sqlite-utils/issues/420,1082476727,IC_kwDOCGYnMM5AhUi3,770231,strada,2022-03-29T23:52:38Z,2022-03-29T23:52:38Z,NONE,"@simonw Thanks for looking into it and documenting the solution! ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1178546862,Document how to use a `--convert` function that runs initialization code first, https://github.com/simonw/sqlite-utils/pull/337#issuecomment-962259527,https://api.github.com/repos/simonw/sqlite-utils/issues/337,962259527,IC_kwDOCGYnMM45WupH,771193,urbas,2021-11-05T22:33:02Z,2021-11-05T22:33:02Z,NONE,"Smokes, it looks like there was a bug in click 8.0.2 (fixed in 8.0.3: https://github.com/pallets/click/issues/2089). Meaning this PR is not needed. Closing.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1046271107,Default values for `--attach` and `--param` options, https://github.com/simonw/datasette/issues/1775#issuecomment-1426158181,https://api.github.com/repos/simonw/datasette/issues/1775,1426158181,IC_kwDOBm6k_c5VAXJl,805751,metamoof,2023-02-10T18:04:40Z,2023-02-10T18:04:40Z,NONE,"Is this where we talk about i18n of results? Or is that a separate thread. e.g. Having `country_long` show `España` in the Spanish version of the [global power plants demo site](https://global-power-plants.datasettes.com/) instead of `Spain`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1323346408,i18n support, https://github.com/simonw/datasette/issues/1261#issuecomment-803499509,https://api.github.com/repos/simonw/datasette/issues/1261,803499509,MDEyOklzc3VlQ29tbWVudDgwMzQ5OTUwOQ==,812795,brimstone,2021-03-21T02:06:43Z,2021-03-21T02:06:43Z,NONE,I can confirm 0.9.2 fixes the problem. 
Thanks for the fast response!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",832092321,Some links aren't properly URL encoded., https://github.com/simonw/sqlite-utils/issues/289#issuecomment-866241836,https://api.github.com/repos/simonw/sqlite-utils/issues/289,866241836,MDEyOklzc3VlQ29tbWVudDg2NjI0MTgzNg==,857609,adamchainz,2021-06-22T18:44:36Z,2021-06-22T18:44:36Z,NONE,Great!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",925677191,Mypy fixes for rows_from_file(), https://github.com/simonw/datasette/issues/352#issuecomment-584203999,https://api.github.com/repos/simonw/datasette/issues/352,584203999,MDEyOklzc3VlQ29tbWVudDU4NDIwMzk5OQ==,870184,xrotwang,2020-02-10T16:18:58Z,2020-02-10T16:18:58Z,NONE,"I don't want to re-open this issue, but I'm wondering whether it would be possible to include the full row for which a specific cell is to be rendered in the hook signature. My use case is rows where custom rendering would need access to multiple values (specifically, rows containing the constituents of interlinear glossed text (IGT) in separate columns, see https://github.com/cldf/cldf/tree/master/components/examples). I could probably cobble this together with custom SQL and the sql-to-html plugin. But having a full row within a `render_cell` implementation seems a lot simpler.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",345821500,render_cell(value) plugin hook, https://github.com/simonw/datasette/issues/667#issuecomment-585109972,https://api.github.com/repos/simonw/datasette/issues/667,585109972,MDEyOklzc3VlQ29tbWVudDU4NTEwOTk3Mg==,870184,xrotwang,2020-02-12T09:21:22Z,2020-02-12T09:21:22Z,NONE,"I think I found a better way to implement my use case: I wrap the `datasette serve` call into my own cli, which - creates the SQLite from CSV data - writes `metadata.json` for datasette - determines suitable config like `max_page_size` - then calls `datasette serve`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",562787785,Allow injecting configuration data from plugins, https://github.com/simonw/datasette/issues/667#issuecomment-585285753,https://api.github.com/repos/simonw/datasette/issues/667,585285753,MDEyOklzc3VlQ29tbWVudDU4NTI4NTc1Mw==,870184,xrotwang,2020-02-12T16:18:22Z,2020-02-12T16:18:22Z,NONE,"@simonw fwiw, here's the plugin I implemented to support CLDF datasets: https://github.com/cldf/datasette-cldf/blob/master/README.md It's a bit of a hybrid in that it does both, building the SQLite database **and** extending datasette by exploiting what we know about the data format - so it may not be worth listing it with the other plugins. Having tools like datasette available definitely helps selling people on package formats like CLDF (or CSVW), many thanks for this!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",562787785,Allow injecting configuration data from plugins, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-811362316,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,811362316,MDEyOklzc3VlQ29tbWVudDgxMTM2MjMxNg==,871250,PabloLerma,2021-03-31T19:14:39Z,2021-03-31T19:14:39Z,NONE,👋 could I help somehow for this to be merged? 
As Big Sur is going to be more used as time goes on, I think it would be nice to merge and publish a new version. Nice work!","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/simonw/datasette/pull/578#issuecomment-541837823,https://api.github.com/repos/simonw/datasette/issues/578,541837823,MDEyOklzc3VlQ29tbWVudDU0MTgzNzgyMw==,887095,heussd,2019-10-14T18:19:42Z,2019-10-14T18:19:42Z,NONE,"My use case was: I wanted to use datasette on a Raspberry Pi. `docker pull datasetteproject/datasette` pulled the official image, which then failed to execute because it is not ARM ready. Building my own took quite some time (~60 minutes via Qemu on Intel i5). You are right, the build method is quite new and I would not be surprised if the syntax / command will change in future. The outcome however, a Docker multi-architecture manifest, is aligned with Docker's strategy on how to tackle multiple architectures: transparently, on the registry-side. I just thought it would be nice to have the official image ready for multiple architectures. But I fully understand if the current methods feel too experimental to be mergable...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",499954048,Added support for multi arch builds, https://github.com/simonw/datasette/issues/123#issuecomment-882096402,https://api.github.com/repos/simonw/datasette/issues/123,882096402,IC_kwDOBm6k_c40k7kS,921217,RayBB,2021-07-18T18:07:29Z,2021-07-18T18:07:29Z,NONE,"I also love the idea for this feature and wonder if it could work without having to download the whole database into memory at once if it's a rather large db. Obviously this could be slower but could have many use cases. My comment is partially inspired by this post about streaming sqlite dbs from github pages or such https://news.ycombinator.com/item?id=27016630 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats, https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-877805513,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,877805513,MDEyOklzc3VlQ29tbWVudDg3NzgwNTUxMw==,956433,Mjboothaus,2021-07-11T14:03:01Z,2021-07-11T14:03:01Z,NONE,"Hi Simon -- just experimenting with your excellent software! Up to this point in time I have been using the (paid) [HealthFit App](https://apps.apple.com/au/app/healthfit/id1202650514) to export my workouts from my Apple Watch, one walk at a time into either .GPX or .FIT format and then using another library to suck it into Python and eventually here to my ""Emmaus Walking"" app: https://share.streamlit.io/mjboothaus/emmaus_walking/emmaus_walking/app.py I just used `healthkit-to-sqlite` to convert my export.zip file and it all ""just worked"". I did notice the issue with various numeric fields being stored in the SQLite db as TEXT for now and just thought I'd flag it - but you've already self-reported this issue. Keep up the great work! I was curious if you have any thoughts about periodically exporting ""export.zip"" and how to just update the SQLite file instead of re-creating it each time. Hopefully Apple will give some thought to managing this data in a more sensible fashion as it grows over time. 
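Something like an upsert on a stable key might cover the incremental update (a sketch; the column names here are hypothetical):
```python
import sqlite_utils

db = sqlite_utils.Database('healthkit.db')
new_records = [{'id': 'abc-123', 'duration': 31.5}]  # parsed from a fresh export.zip
db['workouts'].upsert_all(new_records, pk='id')  # insert new rows, update existing ones
```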
Ideally one could pull it from iCloud (where it is allegedly being backed up). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text", https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-877874117,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,877874117,MDEyOklzc3VlQ29tbWVudDg3Nzg3NDExNw==,956433,Mjboothaus,2021-07-11T23:03:37Z,2021-07-11T23:03:37Z,NONE,P.s. wondering if you have explored using the spatialite functionality with the location data in workouts?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text", https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-1163917719,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,1163917719,IC_kwDOC8tyDs5FX_mX,956433,Mjboothaus,2022-06-23T04:35:02Z,2022-06-23T04:35:02Z,NONE,In terms of unique identifiers - could you use values stored in `HKMetadataKeySyncIdentifier`?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text", https://github.com/dogsheep/healthkit-to-sqlite/issues/24#issuecomment-1464786643,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/24,1464786643,IC_kwDOC8tyDs5XTt7T,956433,Mjboothaus,2023-03-11T02:01:27Z,2023-03-11T02:01:27Z,NONE,Thanks for reporting this and providing a solution -- I was puzzled by this error when I revisited my walking data and experienced this issues. I haven't tried the fix yet.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1515883470,DOC: xml.etree.ElementTree.ParseError due to healthkit version 12 , https://github.com/dogsheep/healthkit-to-sqlite/issues/24#issuecomment-1464796494,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/24,1464796494,IC_kwDOC8tyDs5XTwVO,956433,Mjboothaus,2023-03-11T02:23:42Z,2023-03-11T02:23:42Z,NONE,@simonw - maybe put in some error handling to trap for poorly formed XML (from Apple engineers) so that it suggests that there are problems with export.zip rather than odd looking Python errors :),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1515883470,DOC: xml.etree.ElementTree.ParseError due to healthkit version 12 , https://github.com/simonw/datasette/issues/1619#issuecomment-1455196849,https://api.github.com/repos/simonw/datasette/issues/1619,1455196849,IC_kwDOBm6k_c5WvIqx,969875,BryantD,2023-03-05T20:29:55Z,2023-03-05T20:30:14Z,NONE,"I have this same issue, which is happening with both json links and facets. It is not happening with column sort links in the gear popup menus, but it is happening with the sort arrow that results after you use one of those links. 
I'm using Apache as a proxy to Datasette; the relevant configs are: ``` ProxyPass /datasette/ http://127.0.0.1:8000/datasette/ nocanon ProxyPreserveHost on ``` ``` { ""base_url"": ""/datasette/"" } ``` If it would be useful to get a look at the running installation via the Web, Simon, let me know.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1121583414,JSON link on row page is 404 if base_url setting is used, https://github.com/simonw/datasette/issues/1590#issuecomment-1010556333,https://api.github.com/repos/simonw/datasette/issues/1590,1010556333,IC_kwDOBm6k_c48O92t,1001306,eelkevdbos,2022-01-12T02:03:59Z,2022-01-12T02:03:59Z,NONE,"Thank you for the quick reply! Just a quick observation, I am running this locally without a proxy, whereas your fly example seems to be running behind an apache proxy (if the name is accurate). Can it be that the apache proxy strips the prefix before it passes on the request to the daphne backend?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1099723916,Table+query JSON and CSV links broken when using `base_url` setting, https://github.com/simonw/datasette/issues/1590#issuecomment-1010559681,https://api.github.com/repos/simonw/datasette/issues/1590,1010559681,IC_kwDOBm6k_c48O-rB,1001306,eelkevdbos,2022-01-12T02:10:20Z,2022-01-12T02:10:20Z,NONE,"In my example, path matching happens at the application layer (being the Django channels URLRouter). That might be a somewhat exotic solution that would normally be solved by a proxy like Apache or Nginx. However, in my specific use case, this is a ""feature"" enabling me to do simple management of databases and metadata from within a Django admin app instance mapped in that same router.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1099723916,Table+query JSON and CSV links broken when using `base_url` setting, https://github.com/simonw/sqlite-utils/issues/159#issuecomment-802032152,https://api.github.com/repos/simonw/sqlite-utils/issues/159,802032152,MDEyOklzc3VlQ29tbWVudDgwMjAzMjE1Mg==,1025224,limar,2021-03-18T15:42:52Z,2021-03-18T15:42:52Z,NONE,"I confirm the bug. Happens for me in version 3.6. I use the call to delete all the records: `table.delete_where()` This does not delete anything. I see that `delete()` method DOES use context manager `with self.db.conn:` which should help. You may want to align the code of both methods.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",702386948,.delete_where() does not auto-commit (unlike .insert() or .upsert()), https://github.com/simonw/sqlite-utils/pull/203#issuecomment-774217792,https://api.github.com/repos/simonw/sqlite-utils/issues/203,774217792,MDEyOklzc3VlQ29tbWVudDc3NDIxNzc5Mg==,1049910,drkane,2021-02-05T18:44:13Z,2021-02-05T18:44:13Z,NONE,"Thanks for looking at this - home schooling kids has prevented me from replying. I'd struggled with how to adapt the API for the foreign keys too - I definitely tried the String/Tuple approach. I hadn't considered the breaking changes that would introduce though. I can take a look at this and try and make the change - see which of your options works best. 
I've got a workaround for the use-case I was looking at this for, so it wouldn't be a problem for me if it was put on the back burner until a hypothetical v4.0 anyway. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743384829,changes to allow for compound foreign keys, https://github.com/simonw/datasette/issues/401#issuecomment-455520561,https://api.github.com/repos/simonw/datasette/issues/401,455520561,MDEyOklzc3VlQ29tbWVudDQ1NTUyMDU2MQ==,1055831,dazzag24,2019-01-18T11:48:13Z,2019-01-18T11:48:13Z,NONE,"Thanks. I'll take a look at your changes. I must admit I was struggling to see how to pass info from the python code in __init__.py into the javascript document.addEventListener function.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",400229984,How to pass configuration to plugins?, https://github.com/simonw/datasette/issues/411#issuecomment-519065799,https://api.github.com/repos/simonw/datasette/issues/411,519065799,MDEyOklzc3VlQ29tbWVudDUxOTA2NTc5OQ==,1055831,dazzag24,2019-08-07T12:00:36Z,2019-08-07T12:00:36Z,NONE,"Hi, Apologies for the long delay. I tried your suggested escaping approach: `SELECT a.pos AS rank, b.id, b.name, b.country, b.latitude AS latitude, b.longitude AS longitude, a.distance / 1000.0 AS dist_km FROM KNN AS a LEFT JOIN airports AS b ON (b.rowid = a.fid)WHERE f_table_name = 'airports' AND ref_geometry = MakePoint(:Long || "", "" || :Lat) AND max_items = 6; ` and it returns this error: `wrong number of arguments to function MakePoint()` Anything else you suggest I try? Thanks","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",410384988,How to pass named parameter into spatialite MakePoint() function, https://github.com/simonw/datasette/issues/657#issuecomment-575321322,https://api.github.com/repos/simonw/datasette/issues/657,575321322,MDEyOklzc3VlQ29tbWVudDU3NTMyMTMyMg==,1055831,dazzag24,2020-01-16T20:01:43Z,2020-01-16T20:01:43Z,NONE,"I have successfully tested datasette using a parquet VIRTUAL TABLE. In the first terminal: ```datasette airports.db --load-extension=libparquet``` In another terminal I load the same sqlite db file using the sqlite3 cli client. ```$ sqlite3 airports.db``` and then load the parquet extension and create the virtual table. ``` sqlite> .load /home/darreng/metars/libparquet sqlite> CREATE VIRTUAL TABLE mytable USING parquet('/home/xx/data.parquet'); ``` Now the parquet virtual table is usable by the datasette web UI. It's not an ideal solution, but it is proof that datasette works with the parquet extension.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",548591089,Allow creation of virtual tables at startup, https://github.com/simonw/datasette/issues/657#issuecomment-576759416,https://api.github.com/repos/simonw/datasette/issues/657,576759416,MDEyOklzc3VlQ29tbWVudDU3Njc1OTQxNg==,1055831,dazzag24,2020-01-21T16:20:19Z,2020-01-21T16:20:19Z,NONE,"Hi, I've completed some changes to my fork of datasette that allow it to automatically create the parquet virtual table when you supply it with a filename that has the "".parquet"" extension. I had to figure out how to make the ""CREATE VIRTUAL TABLE"" statement only be applied to the fake in memory parquet database and not to any others that were also being loaded. 
Thus it supports mixed mode databases, e.g. ``` datasette my_test.parquet normal_sqlite_file.db --load-extension=libparquet.so --load-extension=mod_spatialite.so ``` Please see my changes here: https://github.com/dazzag24/datasette/commit/8e18394353114f17291fd1857073b1e0485a1faf Thanks ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",548591089,Allow creation of virtual tables at startup, https://github.com/simonw/datasette/issues/327#issuecomment-584657949,https://api.github.com/repos/simonw/datasette/issues/327,584657949,MDEyOklzc3VlQ29tbWVudDU4NDY1Nzk0OQ==,1055831,dazzag24,2020-02-11T14:21:15Z,2020-02-11T14:21:15Z,NONE,See https://github.com/simonw/datasette/issues/657 and my changes that allow datasette to load parquet files ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",335200136,Explore if SquashFS can be used to shrink size of packaged Docker containers, https://github.com/simonw/datasette/issues/506#issuecomment-500238035,https://api.github.com/repos/simonw/datasette/issues/506,500238035,MDEyOklzc3VlQ29tbWVudDUwMDIzODAzNQ==,1059677,Gagravarr,2019-06-09T19:21:18Z,2019-06-09T19:21:18Z,NONE,"If you don't mind calling out to Java, then Apache Tika is able to tell you what a load of ""binary stuff"" is, plus render it to XHTML where possible. There's a python wrapper around the Apache Tika server, but for a more typical datasette usecase you'd probably just want to grab the Tika CLI jar, and call it with `--detect` and/or `--xhtml` to process the unknown binary blob","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",453846217,Option to display binary data, https://github.com/simonw/sqlite-utils/issues/297#issuecomment-1732018273,https://api.github.com/repos/simonw/sqlite-utils/issues/297,1732018273,IC_kwDOCGYnMM5nPIBh,1108600,radusuciu,2023-09-22T20:49:51Z,2023-09-22T20:49:51Z,NONE,This would be awesome to have for multi-gig tsv and csv files! I'm currently looking at a 10 hour countdown for one such import. Not a problem because I'm lazy and happy to let it run and check on it tomorrow.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944846776,Option for importing CSV data using the SQLite .import mechanism, https://github.com/simonw/datasette/issues/1255#issuecomment-812680519,https://api.github.com/repos/simonw/datasette/issues/1255,812680519,MDEyOklzc3VlQ29tbWVudDgxMjY4MDUxOQ==,1111743,jungle-boogie,2021-04-02T19:37:57Z,2021-04-02T19:37:57Z,NONE,"Hello, I'm also experiencing a timeout in my environment. I don't know if it's because I need more indexes or a more powerful system. My data has 1,271,111 rows and when I try to create a facet, there's a timeout. I've tried this on two different columns that should significantly filter down data: `CITY` and `PARTY_REG`. Simon's johns_hopkins_csse_daily_reports has more rows and it's set up with two facets on load. He does have four indexes created, though. Do I need more indexes? I have one simple one so far: ``` CREATE INDEX [idx_party_reg] ON [county_active] ([PARTY_REG]); ``` I'm running Datasette 0.56 installed via pip with Python 3.7.3. 
`4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux` ``` $ cat /etc/os-release PRETTY_NAME=""Debian GNU/Linux 10 (buster)"" NAME=""Debian GNU/Linux"" VERSION_ID=""10"" VERSION=""10 (buster)"" VERSION_CODENAME=buster ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",826700095,Facets timing out but work when filtering, https://github.com/simonw/datasette/issues/1255#issuecomment-812710120,https://api.github.com/repos/simonw/datasette/issues/1255,812710120,MDEyOklzc3VlQ29tbWVudDgxMjcxMDEyMA==,1111743,jungle-boogie,2021-04-02T20:50:08Z,2021-04-02T20:50:08Z,NONE,"Hello again, I was able to get my facets running with this `settings.json`, which was lifted from one of Simon's datasettes and slightly modified. ``` { ""default_page_size"": 100, ""max_returned_rows"": 1000, ""num_sql_threads"": 3, ""sql_time_limit_ms"": 9000, ""default_facet_size"": 10, ""facet_time_limit_ms"": 9000, ""facet_suggest_time_limit_ms"": 500, ""hash_urls"": false, ""allow_facet"": true, ""suggest_facets"": false, ""default_cache_ttl"": 5, ""default_cache_ttl_hashed"": 31536000, ""cache_size_kb"": 0, ""allow_csv_stream"": true, ""max_csv_mb"": 100, ""truncate_cells_html"": 2048, ""template_debug"": false, ""base_url"": ""/"" } ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",826700095,Facets timing out but work when filtering, https://github.com/simonw/datasette/issues/1245#issuecomment-812711365,https://api.github.com/repos/simonw/datasette/issues/1245,812711365,MDEyOklzc3VlQ29tbWVudDgxMjcxMTM2NQ==,1111743,jungle-boogie,2021-04-02T20:53:35Z,2021-04-02T20:53:35Z,NONE,"Yes, I agree. Alternatively, maybe the header could be at the top and bottom, above the next page button. Maybe even have the header 50 records down?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",817544251,"Sticky table column headers would be useful, especially on the query page", https://github.com/simonw/datasette/issues/916#issuecomment-812742462,https://api.github.com/repos/simonw/datasette/issues/916,812742462,MDEyOklzc3VlQ29tbWVudDgxMjc0MjQ2Mg==,1111743,jungle-boogie,2021-04-02T22:37:27Z,2021-04-02T22:37:27Z,NONE,"Yes, this would be nice! I'm using Datasette v0.56 and don't see a previous page button.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",672421411,"Support reverse pagination (previous page, has-previous-items)", https://github.com/simonw/datasette/issues/2058#issuecomment-1507264934,https://api.github.com/repos/simonw/datasette/issues/2058,1507264934,IC_kwDOBm6k_c5Z1wmm,1138559,esagara,2023-04-13T16:35:21Z,2023-04-13T16:35:21Z,NONE,"I tried deploying the fix you submitted, but still getting the same errors. 
I can paste the logs here if needed, but I really don't see anything new in them.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1663399821,"500 ""attempt to write a readonly database"" error caused by ""PRAGMA schema_version""", https://github.com/dogsheep/dogsheep-photos/issues/35#issuecomment-813249000,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/35,813249000,MDEyOklzc3VlQ29tbWVudDgxMzI0OTAwMA==,1151557,ligurio,2021-04-05T07:37:57Z,2021-04-05T07:37:57Z,NONE,"There are trained ML models used in Photoprism: - https://dl.photoprism.org/tensorflow/nasnet.zip - https://dl.photoprism.org/tensorflow/nsfw.zip","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",842695374,Support to annotate photos on other than macOS OSes, https://github.com/dogsheep/dogsheep.github.io/pull/6#issuecomment-1021264135,https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/6,1021264135,IC_kwDODMzF1s4830EH,1151557,ligurio,2022-01-25T14:52:40Z,2022-01-25T14:52:40Z,NONE,"@simonw, could you review?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",842765105,Add testres-db tool, https://github.com/simonw/sqlite-utils/issues/510#issuecomment-1318431389,https://api.github.com/repos/simonw/sqlite-utils/issues/510,1318431389,IC_kwDOCGYnMM5Olaqd,1176293,ar-jan,2022-11-17T10:36:28Z,2022-11-17T10:36:28Z,NONE,The virtual table's _config `version: 4` seems to indicate FTS5.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1434911255,Cannot enable FTS5 despite it being available, https://github.com/simonw/sqlite-utils/issues/510#issuecomment-1320394127,https://api.github.com/repos/simonw/sqlite-utils/issues/510,1320394127,IC_kwDOCGYnMM5Os52P,1176293,ar-jan,2022-11-18T18:37:51Z,2022-11-18T18:37:51Z,NONE,"I guess it is not incorrect when it says the version is `4`, though it is confusing. Maybe it doesn't even refer to FTS4/FTS5 versions, but something else? In any case, it's not related to sqlite-utils, but SQLite itself.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1434911255,Cannot enable FTS5 despite it being available, https://github.com/simonw/datasette/issues/1706#issuecomment-1325164933,https://api.github.com/repos/simonw/datasette/issues/1706,1325164933,IC_kwDOBm6k_c5O_GmF,1176293,ar-jan,2022-11-23T14:34:54Z,2022-11-23T14:34:54Z,NONE,This would be helpful.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1198822563,"[feature] immutable mode for a directory, not just individual sqlite file", https://github.com/simonw/sqlite-utils/issues/488#issuecomment-1364141224,https://api.github.com/repos/simonw/sqlite-utils/issues/488,1364141224,IC_kwDOCGYnMM5RTySo,1176293,ar-jan,2022-12-23T17:38:55Z,2022-12-23T17:38:55Z,NONE,"> text columns containing empty strings should not be rewritten to null. 
I would actually appreciate an option to do just that for text columns as well.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1373224657,`sqlite-utils transform` should set empty strings to null when converting text columns to integer/float, https://github.com/simonw/datasette/issues/2035#issuecomment-1460808028,https://api.github.com/repos/simonw/datasette/issues/2035,1460808028,IC_kwDOBm6k_c5XEilc,1176293,ar-jan,2023-03-08T20:14:47Z,2023-03-08T20:14:47Z,NONE,"+1, I have been wishing for this feature (also for use with template-sql). It was requested before here #1304.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1615692818,Potential feature: special support for `?a=1&a=2` on the query page, https://github.com/simonw/sqlite-utils/issues/228#issuecomment-1001115286,https://api.github.com/repos/simonw/sqlite-utils/issues/228,1001115286,IC_kwDOCGYnMM47q86W,1206106,agguser,2021-12-26T07:01:31Z,2021-12-26T07:01:31Z,NONE,"`--no-headers` does not work? ``` $ echo 'a,1\nb,2' | sqlite-utils memory --no-headers -t - 'select * from stdin' a 1 --- --- b 2 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",807437089,--no-headers option for CSV and TSV, https://github.com/simonw/datasette/issues/858#issuecomment-792129022,https://api.github.com/repos/simonw/datasette/issues/858,792129022,MDEyOklzc3VlQ29tbWVudDc5MjEyOTAyMg==,1219001,robroc,2021-03-07T00:23:34Z,2021-03-07T00:23:34Z,NONE,@smithdc1 Can you tell us what you did to get it to publish in Windows? What commands did you pass?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642388564,publish heroku does not work on Windows 10, https://github.com/simonw/datasette/issues/858#issuecomment-792308036,https://api.github.com/repos/simonw/datasette/issues/858,792308036,MDEyOklzc3VlQ29tbWVudDc5MjMwODAzNg==,1219001,robroc,2021-03-07T16:41:54Z,2021-03-07T16:41:54Z,NONE,"Apologies if I sound dense but I don't see where you would pass 'shell=True'. I'm using the CLI installed via pip. On Sun., Mar. 7, 2021, 2:15 a.m. David Smith, wrote: > To get it to work I had to: > > - > > add shell=true to the various commands in datasette > - > > use the name argument of the publish command. ( > https://docs.datasette.io/en/stable/publish.html) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642388564,publish heroku does not work on Windows 10, https://github.com/simonw/datasette/issues/1196#issuecomment-819775388,https://api.github.com/repos/simonw/datasette/issues/1196,819775388,MDEyOklzc3VlQ29tbWVudDgxOTc3NTM4OA==,1219001,robroc,2021-04-14T19:28:38Z,2021-04-14T19:28:38Z,NONE,@QAInsights I'm having a similar problem when publishing to Cloud Run on Windows. It's not able to access certain packages in my conda environment where Datasette is installed. Can you explain how you got it to work in WSL? Were you able to access the .db file in the Windows file system? 
Thank you.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",791237799,Access Denied Error in Windows, https://github.com/dogsheep/hacker-news-to-sqlite/pull/6#issuecomment-1489110168,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/6,1489110168,IC_kwDODtX3eM5YwgSY,1231935,xavdid,2023-03-29T18:36:16Z,2023-03-29T18:36:16Z,NONE,@simonw can you take a look when you have a chance?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1641117021,Add permalink virtual field to items table, https://github.com/simonw/datasette/issues/1989#issuecomment-1491357104,https://api.github.com/repos/simonw/datasette/issues/1989,1491357104,IC_kwDOBm6k_c5Y5E2w,1231935,xavdid,2023-03-31T06:17:23Z,2023-03-31T06:18:05Z,NONE,"I'm running into a similar use case as pax above: I made a `nice` view that just has the data I'm interested in (which doesn't include the `id`, since it's not important in this case). But, by excluding `id` from the view, I can't do FTS queries against it because the view has no `id` field to tie to `rowid`: ``` ERROR: conn=, sql = 'select time, text, permalink, num_children from nice where id in (select rowid from items_fts where items_fts match :search) limit 101', params = {'search': 'whatever'}: no such column: id ``` It works fine when I include `id` in my view, but now my `nice` view is cluttered up. Would be great to hide it permanently in the `config.json`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1531991339,Suggestion: Hiding columns, https://github.com/simonw/sqlite-utils/issues/496#issuecomment-1532481862,https://api.github.com/repos/simonw/sqlite-utils/issues/496,1532481862,IC_kwDOCGYnMM5bV9FG,1231935,xavdid,2023-05-03T05:53:26Z,2023-05-03T05:53:26Z,NONE,"Would love to put our heads together for improvements here. I _think_ anything that is `argname=DEFAULT` needs to be typed as `argname: str | Default = DEFAULT` (replacing `str` with the appropriate type(s)). We may be able to get clever and tie the types to that key in the `_defaults` dict (definitely possible in TypeScript, but I'm less familiar with advanced Python types). Right now, all args are typed as `Default`, which means all callers get type errors. As for table/view, given that they don't have the same methods, it would be nice to be able to get one or the other specifically. 
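A rough sketch of the union-typing idea (the names below are invented for illustration, not sqlite-utils' actual internals; `str | Default` needs Python 3.10+, hence `Union` here):

```py
from typing import Union

class Default:
    # Sentinel type standing in for 'no value passed, use the default'
    pass

DEFAULT = Default()

def table(name: str, pk: Union[str, Default] = DEFAULT) -> str:
    # Typing pk as Union[str, Default] rather than bare Default means
    # a caller passing a plain str is no longer flagged as a type error.
    if isinstance(pk, Default):
        pk = 'id'  # assumed fallback, purely for the example
    return f'table {name!r} with pk={pk!r}'

print(table('users'))            # uses the default
print(table('users', pk='uid'))  # plain str now type-checks
```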
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1393202060,devrel/python api: Pylance type hinting, https://github.com/simonw/sqlite-utils/issues/538#issuecomment-1538975545,https://api.github.com/repos/simonw/sqlite-utils/issues/538,1538975545,IC_kwDOCGYnMM5buuc5,1231935,xavdid,2023-05-08T20:06:35Z,2023-05-08T20:06:35Z,NONE,"perfect, thank you!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1695428235,`table.upsert_all` fails to write rows when `not_null` is present, https://github.com/simonw/sqlite-utils/issues/554#issuecomment-1557607516,https://api.github.com/repos/simonw/sqlite-utils/issues/554,1557607516,IC_kwDOCGYnMM5c1zRc,1231935,xavdid,2023-05-22T17:18:33Z,2023-05-22T17:18:33Z,NONE,"Oh and for context - this goes away if I use `.upsert` instead of `insert(..., ignore=True)`, but I don't want to update the value if it's written, just do an insert if it's new. The code is basically: ```py def save_items(table, items): db[""users""].insert(build_user(items[0]), pk=""id"",ignore=True) db[table].insert_all(items) if comments := fetch_comments(): save_items('comments', comments) if posts := fetch_posts(): save_items('posts', posts) ``` So either `comments` or `post` could create the relevant user if those items exist. In cases where they _both_ exist, I get this error. I need the `pk` because either call could create the table.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1720096994,"`IndexError` when doing `.insert(..., pk='id')` after `insert_all`", https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-798436026,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,798436026,MDEyOklzc3VlQ29tbWVudDc5ODQzNjAyNg==,1234956,n8henrie,2021-03-13T14:23:16Z,2021-03-13T14:23:16Z,NONE,"This PR allows my import to succeed. It looks like some events don't have an `id`, but do have `HKExternalUUID` (which gets turned into `metadata_HKExternalUUID`), so I use this as a fallback. If a record has neither of these, I changed it to just print the record (for debugging) and `return`. For some odd reason this ran fine at first, and now (after removing the generated db and trying again) I'm getting a different error (duplicate column name). Looks like it may have run when I had two successive runs without remembering to delete the db in between. Will try to refactor.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-798468572,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,798468572,MDEyOklzc3VlQ29tbWVudDc5ODQ2ODU3Mg==,1234956,n8henrie,2021-03-13T14:47:31Z,2021-03-13T14:47:31Z,NONE,"Ok, new PR works. I'm not `git` enough so I just force-pushed over the old one. I still end up with a lot of activities that are missing an `id` and therefore skipped (since this is used as the primary key). 
For example: ``` {'workoutActivityType': 'HKWorkoutActivityTypeRunning', 'duration': '35.31666666666667', 'durationUnit': 'min', 'totalDistance': '4.010870267636999', 'totalDistanceUnit': 'mi', 'totalEnergyBurned': '660.3516235351562', 'totalEnergyBurnedUnit': 'Cal', 'sourceName': 'Strava', 'sourceVersion': '22810', 'creationDate': '2020-07-16 13:38:26 -0700', 'startDate': '2020-07-16 06:38:26 -0700', 'endDate': '2020-07-16 07:13:45 -0700'} ``` I also end up with some unhappy characters (in the skipped events), such as: `'sourceName': 'Nathan’s Apple\xa0Watch',`. But it's successfully making it through the file, and the resulting db opens in datasette, so I'd call that progress.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/simonw/datasette/issues/2143#issuecomment-1691094870,https://api.github.com/repos/simonw/datasette/issues/2143,1691094870,IC_kwDOBm6k_c5kzA9W,1238873,rclement,2023-08-24T06:43:40Z,2023-08-24T06:43:40Z,NONE,"If I may, the ""path-like"" configuration is great but one thing that would be even greater: allowing the same configuration to be provided using environment variables. For instance: ``` datasette -s plugins.datasette-complex-plugin.configs '{""foo"": [1,2,3], ""bar"": ""baz""}' ``` could also be provided using: ``` export DS_PLUGINS_DATASETTE-COMPLEX-PLUGIN_CONFIGS='{""foo"": [1,2,3], ""bar"": ""baz""}' datasette ``` (I do not like mixing `-` and `_` in env vars but I do not have a better easily reversible example at the moment) FYI, you could take some inspiration from another great open source data project, Metabase: https://www.metabase.com/docs/latest/configuring-metabase/config-file https://www.metabase.com/docs/latest/configuring-metabase/environment-variables","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1855885427,De-tangling Metadata before Datasette 1.0, https://github.com/simonw/datasette/issues/1218#issuecomment-784157345,https://api.github.com/repos/simonw/datasette/issues/1218,784157345,MDEyOklzc3VlQ29tbWVudDc4NDE1NzM0NQ==,1244799,soobrosa,2021-02-23T12:12:17Z,2021-02-23T12:12:17Z,NONE,"Topline this fixed the same problem for me. ``` brew install python@3.7 ln -s /usr/local/opt/python@3.7/bin/python3.7 /usr/local/opt/python/bin/python3.7 pip3 uninstall -y numpy pip3 uninstall -y setuptools pip3 install setuptools pip3 install numpy pip3 install datasette-publish-fly ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803356942, /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory, https://github.com/simonw/datasette/issues/1479#issuecomment-929927144,https://api.github.com/repos/simonw/datasette/issues/1479,929927144,IC_kwDOBm6k_c43bY_o,1244799,soobrosa,2021-09-29T07:49:40Z,2021-09-29T07:49:40Z,NONE,"My search yielded these four entries: https://github.com/simonw/datasette/issues?q=PermissionError%3A+%5BWinError+32%5D+ Maybe this is the closest hit? 
https://github.com/simonw/datasette/issues/744 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1010112818,"Win32 ""used by another process"" error with datasette publish", https://github.com/simonw/datasette/issues/417#issuecomment-751127384,https://api.github.com/repos/simonw/datasette/issues/417,751127384,MDEyOklzc3VlQ29tbWVudDc1MTEyNzM4NA==,1279360,dyllan-to-you,2020-12-24T22:56:48Z,2020-12-24T22:56:48Z,NONE,"Instead of scanning the directory every 10s, have you considered listening for the native system events to notify you of updates? I think Python has a nice module to do this for you called [watchdog](https://pypi.org/project/watchdog/)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",421546944,Datasette Library, https://github.com/simonw/datasette/issues/1276#issuecomment-811209922,https://api.github.com/repos/simonw/datasette/issues/1276,811209922,MDEyOklzc3VlQ29tbWVudDgxMTIwOTkyMg==,1314318,justinallen,2021-03-31T16:27:26Z,2021-03-31T16:27:26Z,NONE,Fantastic. Thank you! ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841456306,"Invalid SQL: ""no such table: pragma_database_list"" on database page", https://github.com/simonw/datasette/issues/2027#issuecomment-1440762383,https://api.github.com/repos/simonw/datasette/issues/2027,1440762383,IC_kwDOBm6k_c5V4EoP,1350673,dmick,2023-02-22T20:35:16Z,2023-02-22T20:35:16Z,NONE,"Was that query to me, @gk7279? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1590183272,"How to redirect from ""/"" to a specific db/table", https://github.com/simonw/datasette/issues/2030#issuecomment-1440814680,https://api.github.com/repos/simonw/datasette/issues/2030,1440814680,IC_kwDOBm6k_c5V4RZY,1350673,dmick,2023-02-22T21:22:42Z,2023-02-22T21:22:42Z,NONE,"@gk7279, you had asked in a separate bug about how to redirect web servers in general. The datasette docs actually have pretty good information on this for both nginx and apache2: https://docs.datasette.io/en/stable/deploying.html#running-datasette-behind-a-proxy ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1594383280,How to use Datasette with apache webserver on GCP?, https://github.com/simonw/datasette/issues/2027#issuecomment-1459455356,https://api.github.com/repos/simonw/datasette/issues/2027,1459455356,IC_kwDOBm6k_c5W_YV8,1350673,dmick,2023-03-08T04:42:22Z,2023-03-08T04:42:22Z,NONE,"I managed to make it work by using nginx's 'exact match' (=) combined with 'prefix match'; that is, match explicitly on `/`, and redirect to `//`, and then have the normal `proxy_pass` for the unadorned (prefix-matching) `/`. ``` location = / { return 302 //
; } location / { proxy_pass http://127.0.0.1:8001/; proxy_set_header Host $host; } ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1590183272,"How to redirect from ""/"" to a specific db/table", https://github.com/simonw/datasette/issues/236#issuecomment-1033772902,https://api.github.com/repos/simonw/datasette/issues/236,1033772902,IC_kwDOBm6k_c49nh9m,1376648,jordaneremieff,2022-02-09T13:40:52Z,2022-02-09T13:40:52Z,NONE,"Hi @simonw, I've received some inquiries over the last year or so about Datasette and how it might be supported by [Mangum](https://github.com/jordaneremieff/mangum). I maintain Mangum which is, as far as I know, the only project that provides support for ASGI applications in AWS Lambda. If there is anything that I can help with here, please let me know because I think what Datasette provides to the community (even beyond OSS) is noble and worthy of special consideration.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500,datasette publish lambda plugin, https://github.com/simonw/datasette/pull/529#issuecomment-505424665,https://api.github.com/repos/simonw/datasette/issues/529,505424665,MDEyOklzc3VlQ29tbWVudDUwNTQyNDY2NQ==,1383872,nathancahill,2019-06-25T12:35:07Z,2019-06-25T12:35:07Z,NONE,"Oops, wrote this late last night, didn't see you'd already worked on the issue.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",460396952,Use keyed rows - fixes #521, https://github.com/simonw/datasette/issues/522#issuecomment-506000023,https://api.github.com/repos/simonw/datasette/issues/522,506000023,MDEyOklzc3VlQ29tbWVudDUwNjAwMDAyMw==,1383872,nathancahill,2019-06-26T18:48:53Z,2019-06-26T18:48:53Z,NONE,Reference implementation from Requests: https://github.com/kennethreitz/requests/blob/3.0/requests/structures.py#L14,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459622390,Handle case-insensitive headers in a nicer way, https://github.com/simonw/datasette/issues/1258#issuecomment-807459633,https://api.github.com/repos/simonw/datasette/issues/1258,807459633,MDEyOklzc3VlQ29tbWVudDgwNzQ1OTYzMw==,1385831,wdccdw,2021-03-25T20:48:33Z,2021-03-25T20:49:34Z,NONE,"What about allowing default parameters when defining the query in metadata.yml? Something like: ``` databases: fec: queries: search_by_name: params: - q default-param-values: q: ""text to search"" sql: |- SELECT... ``` For now, I'm using a custom database-.html file that hardcodes a default param in the link, but I'd rather not customize the template just for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",828858421,Allow canned query params to specify default values, https://github.com/simonw/datasette/issues/394#issuecomment-602904184,https://api.github.com/repos/simonw/datasette/issues/394,602904184,MDEyOklzc3VlQ29tbWVudDYwMjkwNDE4NA==,1448859,betatim,2020-03-23T23:03:42Z,2020-03-23T23:03:42Z,NONE,"On mybinder.org we allow access to arbitrary processes listening on a port inside the container via a [reverse proxy](https://github.com/jupyterhub/jupyter-server-proxy). 
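A minimal sketch of the link-generation problem this creates (the helper below is hypothetical, not Datasette code; the prefix value is the example used later in this comment):

```py
# Sketch only: behind a path-rewriting reverse proxy, every generated
# link must be built under the proxy's prefix, or the browser requests
# it at the proxy root and gets a 404.
PROXY_PREFIX = '/something/random/proxy/datasette/'  # example value

def url_for(path: str) -> str:
    # Join the prefix and the app-relative path with exactly one slash.
    return PROXY_PREFIX.rstrip('/') + '/' + path.lstrip('/')

print(url_for('mydb/mytable'))
# -> /something/random/proxy/datasette/mydb/mytable
```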
This means we need support for a proxy prefix as the proxy ends up running at a URL like `/something/random/proxy/datasette/...` An example that shows the problem is https://github.com/psychemedia/jupyterserverproxy-datasette-demo. Launch directly into a datasette instance on mybinder.org with https://mybinder.org/v2/gh/psychemedia/jupyterserverproxy-datasette-demo/master?urlpath=datasette then try to follow links inside the UI.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/sqlite-utils/issues/97#issuecomment-614073859,https://api.github.com/repos/simonw/sqlite-utils/issues/97,614073859,MDEyOklzc3VlQ29tbWVudDYxNDA3Mzg1OQ==,1448859,betatim,2020-04-15T14:29:30Z,2020-04-15T14:29:30Z,NONE,"Woah! Thanks a lot. Next time I will add a more obvious/explicit ""if you like this idea let me know I'd love to work on it to get my feet wet here"" :D","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",593751293,"Adding a ""recreate"" flag to the `Database` constructor", https://github.com/simonw/sqlite-utils/issues/441#issuecomment-1155515426,https://api.github.com/repos/simonw/sqlite-utils/issues/441,1155515426,IC_kwDOCGYnMM5E38Qi,1448859,betatim,2022-06-14T17:53:43Z,2022-06-14T17:53:43Z,NONE,"That would be handy (additional where filters), but I think the trick with the `with` statement is already an order of magnitude better than what I had thought of, so my problem is solved by it (plus I got to learn about `with` today!)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1257724585,Combining `rows_where()` and `search()` to limit which rows are searched, https://github.com/simonw/sqlite-utils/issues/206#issuecomment-955365098,https://api.github.com/repos/simonw/sqlite-utils/issues/206,955365098,IC_kwDOCGYnMM448bbq,1449512,dufferzafar,2021-10-30T15:49:19Z,2021-10-30T15:49:19Z,NONE,"@simonw Hey! JSON parsing for me is failing and I'm getting this same error, but I feel that my JSON is correct. How can I debug this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",761915790,sqlite-utils should suggest --csv if JSON parsing fails, https://github.com/simonw/sqlite-utils/issues/206#issuecomment-955370190,https://api.github.com/repos/simonw/sqlite-utils/issues/206,955370190,IC_kwDOCGYnMM448crO,1449512,dufferzafar,2021-10-30T15:52:16Z,2021-10-30T15:52:16Z,NONE,@simonw That was working fine. It turned out that I had to use the `--nl` flag since I was passing in JSONL.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",761915790,sqlite-utils should suggest --csv if JSON parsing fails, https://github.com/simonw/datasette/issues/1253#issuecomment-955384545,https://api.github.com/repos/simonw/datasette/issues/1253,955384545,IC_kwDOBm6k_c448gLh,1449512,dufferzafar,2021-10-30T16:00:42Z,2021-10-30T16:00:42Z,NONE,"Yeah, I was pressing Ctrl + Enter as well. Came here to open this issue and found out Shift + Enter works. 
@simonw Any way to configure this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",826064552,"Capture ""Ctrl + Enter"" or ""⌘ + Enter"" to send SQL query?", https://github.com/simonw/datasette/issues/1646#issuecomment-1172930092,https://api.github.com/repos/simonw/datasette/issues/1646,1172930092,IC_kwDOBm6k_c5F6X4s,1473102,mustafa0x,2022-07-02T17:12:18Z,2022-07-02T17:12:18Z,NONE,"I'm affected by this as well. Would be nice to be able to pass in an extension, e.g. `--extension=sqlite3`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1157182254,Configuration directory mode does not pick up other file extensions than .db, https://github.com/simonw/datasette/issues/1771#issuecomment-1356657451,https://api.github.com/repos/simonw/datasette/issues/1771,1356657451,IC_kwDOBm6k_c5Q3PMr,1473102,mustafa0x,2022-12-18T04:04:32Z,2022-12-18T04:04:32Z,NONE,"the problem is: ``` .select-wrapper select:focus { outline: none; } ``` I sometimes add this JS: ``` window.addEventListener('keydown', function check_tab(e) { if (e.key === 'Tab') { document.documentElement.classList.add('user-is-tabbing') window.removeEventListener('keydown', check_tab) } }) ``` and then in the CSS, using an `html.user-is-tabbing` selector, I undo any outlines I removed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1306984363,minor a11y: