id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 488874815,MDU6SXNzdWU0ODg4NzQ4MTU=,5,Write tests that simulate the Twitter API,9599,open,0,,,1,2019-09-03T23:55:35Z,2019-09-03T23:56:28Z,,MEMBER,,I can use betamax for this: https://pypi.org/project/betamax/,206156866,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/5/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 496415321,MDU6SXNzdWU0OTY0MTUzMjE=,1,Figure out some interesting example SQL queries,9599,open,0,,,9,2019-09-20T15:28:07Z,2021-05-03T03:46:23Z,,MEMBER,,My knowledge of genetics has left me short here. I'd love to be able to provide some interesting example SELECT queries - maybe one that spots if you are [likely to have red hair?](https://www.snpedia.com/index.php/Rs1805007),209590345,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 503243784,MDU6SXNzdWU1MDMyNDM3ODQ=,3,Extract images into separate tables,9599,open,0,,,1,2019-10-07T05:43:01Z,2020-09-01T06:17:45Z,,MEMBER,,"As already done with authors. Slightly harder because images do not have a universally unique ID. Also need to figure out what to do about there being columns for both `image` and `images`. ",213286752,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/3/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 505673645,MDU6SXNzdWU1MDU2NzM2NDU=,16,Do a better job with archived direct message threads,9599,open,0,,,0,2019-10-11T06:55:21Z,2019-10-11T06:55:27Z,,MEMBER,,https://github.com/dogsheep/twitter-to-sqlite/blob/fb2698086d766e0333a55bb73435e7283feeb438/twitter_to_sqlite/archive.py#L98-L99,206156866,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/16/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 530491074,MDU6SXNzdWU1MzA0OTEwNzQ=,14,Command for importing events,9599,open,0,,,3,2019-11-29T21:28:58Z,2020-04-14T19:38:34Z,,MEMBER,,"Eg from https://api.github.com/users/simonw/events Docs here: https://developer.github.com/v3/activity/events/#list-events-performed-by-a-user",207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/14/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 599776345,MDU6SXNzdWU1OTk3NzYzNDU=,24,Feature idea: github-to-sqlite everything ...,9599,open,0,,,0,2020-04-14T18:34:00Z,2020-04-14T18:34:00Z,,MEMBER,,"At the moment if you want to pull all your repos, issues, issue comments etc you have to do it with a sequence of separate commands. 
Consider adding an `everything` or `all` command which fetches everything that the tool knows how to fetch, and is designed to be run on a cron in a way that fetches just new stuff each time.",207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/24/reactions"", ""total_count"": 7, ""+1"": 7, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 602533300,MDU6SXNzdWU2MDI1MzMzMDA=,1,Import photo metadata from Apple Photos into SQLite,9599,open,0,,5324096,8,2020-04-18T19:23:26Z,2020-05-04T02:41:40Z,,MEMBER,,"Faces, albums, locations, that kind of thing.",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/1/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 602533481,MDU6SXNzdWU2MDI1MzM0ODE=,3,"Import EXIF data into SQLite - lens used, ISO, aperture etc",9599,open,0,,5324096,2,2020-04-18T19:24:31Z,2021-10-05T12:38:24Z,,MEMBER,,,256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/3/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 602585497,MDU6SXNzdWU2MDI1ODU0OTc=,7,Integrate image content hashing,9599,open,0,,,2,2020-04-19T00:36:58Z,2021-08-26T02:01:01Z,,MEMBER,,To spot duplicate images (where the file content differs such that the sha256 is no longer a match) it would be useful to calculate and store perceptual hashes of some sort.,256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/7/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",, 602619330,MDU6SXNzdWU2MDI2MTkzMzA=,45,Use raise_for_status() everywhere,9599,open,0,,,1,2020-04-19T04:38:28Z,2020-04-19T04:39:22Z,,MEMBER,,"I keep seeing errors which I think are caused by authentication or rate limit problems but which appear to be unexpected JSON responses - presumably because they are actually error messages. 
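A minimal sketch of that pattern, assuming the tool's `requests` session (the helper name here is hypothetical, not the tool's actual code):

```python
import requests

def fetch_json(session: requests.Session, url: str) -> dict:
    response = session.get(url)
    # Fail loudly on 4xx/5xx instead of passing an error payload
    # downstream as if it were a normal JSON response
    response.raise_for_status()
    return response.json()
```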
Recent example: https://github.com/simonw/jsk-fellows-on-twitter/runs/598892575 Using `response.raise_for_status()` everywhere will make these errors less confusing.",206156866,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/45/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 606033104,MDU6SXNzdWU2MDYwMzMxMDQ=,12,"If less than 500MB, show size in MB not GB",9599,open,0,,,1,2020-04-24T04:35:01Z,2020-04-24T04:35:25Z,,MEMBER,,"Just saw this: ``` Uploading 0.05 GB ```",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/12/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 607888367,MDU6SXNzdWU2MDc4ODgzNjc=,13,Also upload movie files,9599,open,0,,,2,2020-04-27T22:11:25Z,2020-04-28T00:39:45Z,,MEMBER,,"The `upload` command currently only handles static images: https://github.com/dogsheep/photos-to-sqlite/blob/d939455af00e07866686457ee2fcb9b2d1b7194e/photos_to_sqlite/utils.py#L26-L33 Need to cover movies taken by my phone and DSLR too.",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/13/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 608512747,MDU6SXNzdWU2MDg1MTI3NDc=,14,Annotate photos using the Google Cloud Vision API,9599,open,0,,,5,2020-04-28T18:09:03Z,2020-04-28T18:19:06Z,,MEMBER,,"It can detect faces, run OCR, do image labeling (it knows what a lemur is!) and do object localization where it identifies objects and returns bounding polygons for them.",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/14/reactions"", ""total_count"": 3, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",, 612287234,MDU6SXNzdWU2MTIyODcyMzQ=,16,"Import machine-learning detected labels (dog, llama etc) from Apple Photos",9599,open,0,,,13,2020-05-05T02:45:43Z,2020-05-05T05:38:16Z,,MEMBER,,"Follow-on from #1. Apple Photos runs some very sophisticated machine learning on-device to figure out if photos are of dogs, llamas and so on. I really want to extract those labels out into my own database.",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16/reactions"", ""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 1, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 612860758,MDU6SXNzdWU2MTI4NjA3NTg=,18,Switch CI solution to GitHub Actions with a macOS runner,9599,open,0,,,1,2020-05-05T20:03:50Z,2020-05-05T23:49:18Z,,MEMBER,,Refs #17.,256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/18/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 615626118,MDU6SXNzdWU2MTU2MjYxMTg=,22,Try out ExifReader,9599,open,0,,,4,2020-05-11T06:32:13Z,2020-05-14T05:59:53Z,,MEMBER,,"https://pypi.org/project/ExifReader/ New fork that should be able to handle EXIF in HEIC files. 
Forked here: https://github.com/ianare/exif-py/issues/102#issuecomment-626376522 Refs #3 ",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 621323348,MDU6SXNzdWU2MjEzMjMzNDg=,24,Configurable URL for images,9599,open,0,,,1,2020-05-19T22:25:56Z,2020-05-20T06:00:29Z,,MEMBER,,"This is hard-coded at the moment, which is bad: https://github.com/dogsheep/photos-to-sqlite/blob/d5d69b9019703c47bc251444838578dd752801e2/photos_to_sqlite/cli.py#L269-L272",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/24/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 621486115,MDU6SXNzdWU2MjE0ODYxMTU=,27,photos_with_apple_metadata view should include labels,9599,open,0,,,0,2020-05-20T06:06:17Z,2020-05-20T06:06:17Z,,MEMBER,,"https://dogsheep-photos.dogsheep.net/public/photos_with_apple_metadata?place_city=New+Orleans&_facet=place_city&_facet_array=albums&_facet_array=persons Here's one way to add that:
```sql
select
  rowid,
  photo,
  (
    select json_group_array(
      json_object(
        'label', normalized_string,
        'href', '/photos/labelled?_hide_sql=1&label=' || normalized_string
      )
    )
    from labels
    where labels.uuid = photos_with_apple_metadata.uuid
  ) as labels,
  date,
```",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/27/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 673602857,MDU6SXNzdWU2NzM2MDI4NTc=,9,Define a view that displays photos correctly,9599,open,0,,,0,2020-08-05T14:53:39Z,2020-08-05T14:53:39Z,,MEMBER,,"The `photos` table stores data like this:

id | createdAt | source | prefix | suffix | width | height | visibility | created ▲ | user
-- | -- | -- | -- | -- | -- | -- | -- | -- | --
5e12c9708506bc000840262a | January 06, 2020 - 05:45:20 UTC | Swarm for iOS 1 | https://fastly.4sqi.net/img/general/ | /15889193_AXxGk4I1nbzUZuyYqObgbXdJNyEHiwj6AUDq0tPZWtw.jpg | 1920 | 1440 | public | 2020-01-06T05:45:20 | 15889193

The photo URL can be derived from those pieces - define a SQL view which does that (using `datasette-json-html` to display the pictures)",205429375,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/9/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 689848827,MDU6SXNzdWU2ODk4NDg4Mjc=,6,ISO timestamps,9599,open,0,,,0,2020-09-01T06:16:42Z,2020-09-01T06:16:42Z,,MEMBER,,"The `time_added`, `time_updated` and `time_read` columns currently store data like this: September 19, 2019 - 00:30:30 UTC Should use ISO instead, e.g. 
`2020-07-26T01:05:24+00:00`",213286752,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/6/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 689850810,MDU6SXNzdWU2ODk4NTA4MTA=,6,Set up a demo instance,9599,open,0,,,0,2020-09-01T06:20:24Z,2020-09-01T06:20:24Z,,MEMBER,,"Once I've got the Datasette plugin to a state where it's worth building a demo: #3 I can use data from my public https://github-to-sqlite.dogsheep.net/ demo plus the Pocket data subset I use for the demo in https://github.com/dogsheep/pocket-to-sqlite/issues/5 - I could pull in the https://dogsheep-photos.dogsheep.net/ photos data too.",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/6/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 692202408,MDU6SXNzdWU2OTIyMDI0MDg=,12,Idea: maps and GeoJSON support,9599,open,0,,,0,2020-09-03T18:47:10Z,2020-09-04T01:45:03Z,,MEMBER,,"It would be cool if the `display_sql` could return a column populated with GeoJSON which would then automatically be displayed on a map in the results (or maybe default JS would look for a `class=""geojson""` element output by the `display` template) - à la https://github.com/simonw/datasette-leaflet-geojson Then I could render workout routes on a map, or Swarm checkin points.",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/12/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 694136490,MDU6SXNzdWU2OTQxMzY0OTA=,15,Add a bunch of config examples,9599,open,0,,,1,2020-09-05T17:58:43Z,2020-09-18T23:17:39Z,,MEMBER,,I can bring these over from my personal Dogsheep.,197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/15/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 694493566,MDU6SXNzdWU2OTQ0OTM1NjY=,16,Timeline view,9599,open,0,,,3,2020-09-06T19:13:58Z,2020-09-21T02:42:29Z,,MEMBER,,Ability to browse (and facet) by date.,197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/16/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 695553522,MDU6SXNzdWU2OTU1NTM1MjI=,18,Deleted records stay in the search index,9599,open,0,,,2,2020-09-08T05:14:23Z,2020-09-08T05:15:51Z,,MEMBER,,"Here's why: https://github.com/dogsheep/dogsheep-beta/blob/24f7898d41a39218058f174c75ba62f7c0fcfff6/dogsheep_beta/utils.py#L44-L53 That should probably do `DELETE FROM index1.search_index WHERE [table] = ?` first.",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/18/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 695556681,MDU6SXNzdWU2OTU1NTY2ODE=,19,Figure out incremental re-indexing,9599,open,0,,,2,2020-09-08T05:23:31Z,2020-09-08T05:27:07Z,,MEMBER,,"As tables get bigger, reindexing everything on a schedule (essentially recreating the entire index from scratch) will start to become a performance bottleneck.",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/19/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 703216044,MDU6SXNzdWU3MDMyMTYwNDQ=,49,Feature: gists and starred gists,9599,open,0,,,0,2020-09-17T02:30:52Z,2020-09-17T02:30:52Z,,MEMBER,,https://developer.github.com/v3/gists/#list-starred-gists,207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/49/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 703218448,MDU6SXNzdWU3MDMyMTg0NDg=,51,Documentation for twitter-to-sqlite fetch,9599,open,0,,,0,2020-09-17T02:38:10Z,2020-09-17T02:38:10Z,,MEMBER,,"It's mentioned in passing in the README but it deserves its own section:
```
$ twitter-to-sqlite fetch \
    ""https://api.twitter.com/1.1/account/verify_credentials.json"" \
    | grep '""id""' | head -n 1
```",206156866,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/51/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 703218756,MDU6SXNzdWU3MDMyMTg3NTY=,50,Commands for making authenticated API calls,9599,open,0,,,7,2020-09-17T02:39:07Z,2020-10-19T05:01:29Z,,MEMBER,,"Similar to `twitter-to-sqlite fetch`, see https://github.com/dogsheep/twitter-to-sqlite/issues/51",207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/50/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 703246031,MDU6SXNzdWU3MDMyNDYwMzE=,51,github-to-sqlite should handle rate limits better,9599,open,0,,,4,2020-09-17T04:01:50Z,2022-10-14T16:34:07Z,,MEMBER,,From #50 - right now it will crash with an error if it hits the rate limit. 
Since the rate limit information (including reset time) is available in the headers, it could automatically sleep and try again instead.,207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 705215230,MDU6SXNzdWU3MDUyMTUyMzA=,26,Pagination,9599,open,0,,,7,2020-09-21T00:14:37Z,2020-09-21T02:55:54Z,,MEMBER,,Useful for #16 (timeline view) since you can now filter to just the items on a specific day - but if there are more than 50 items you can't see them all.,197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/26/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 709789634,MDU6SXNzdWU3MDk3ODk2MzQ=,27,Sort order is not persisted by facet filter links,9599,open,0,,,0,2020-09-27T18:22:07Z,2020-09-27T18:22:07Z,,MEMBER,,A link to `/-/beta?category=1&timestamp__date=2018-08-01&q=swedish` should be to `/-/beta?category=1&timestamp__date=2018-08-01&q=swedish&sort=newest`,197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/27/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 718934942,MDU6SXNzdWU3MTg5MzQ5NDI=,1,Documentation on how to use this with Datasette,9599,open,0,,,1,2020-10-11T21:56:27Z,2020-10-11T22:14:00Z,,MEMBER,,In particular how to use `datasette-render-images` to see the images.,303218369,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/1/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 718938889,MDU6SXNzdWU3MTg5Mzg4ODk=,5,Figure out how to display images from tags inline in Datasette,9599,open,0,,,6,2020-10-11T22:17:03Z,2020-10-16T20:16:28Z,,MEMBER,,"Relates to #1. Evernote XML looks like this: ```xml
<en-note>
  <div>This note includes two images.</div>
  <div>The Python logo</div>
  <en-media hash=""..."" type=""image/png"" />
  <div>The Evernote logo</div>
  <en-media hash=""..."" type=""image/png"" />
</en-note>
``` That hash is the md5 we use to store resources. It should be possible to turn these into embedded image tags, especially if done in conjunction with the https://github.com/simonw/datasette-media plugin.",303218369,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/5/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 724759588,MDU6SXNzdWU3MjQ3NTk1ODg=,29,Add search highlighting snippets,9599,open,0,,,5,2020-10-19T16:00:48Z,2021-08-26T20:23:11Z,,MEMBER,,Like on https://til.simonwillison.net/til/search?q=Snippet,197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/29/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",, 727848625,MDU6SXNzdWU3Mjc4NDg2MjU=,12,"Some workout columns should be float, not text",9599,open,0,,,4,2020-10-23T02:47:02Z,2022-06-23T04:35:02Z,,MEMBER,,"Columns `duration`, `totalDistance` and `totalEnergyBurned` should be converted to float. https://github.com/dogsheep/healthkit-to-sqlite/blob/71e36e1cf034b96de2a8e6652265d782d3fdf63b/healthkit_to_sqlite/utils.py#L50-L57",197882382,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 753000405,MDU6SXNzdWU3NTMwMDA0MDU=,53,Command for fetching file contents,9599,open,0,,,1,2020-11-29T20:31:04Z,2020-11-30T00:36:09Z,,MEMBER,,"Something like this: github-to-sqlite files github.db simonw/datasette This would fetch all files from the `main` branch into a `files` table. Additional options could handle things like pulling files from a branch or tag, or just pulling files that match a specific glob or that exist in a specific directory.",207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/53/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 821841046,MDU6SXNzdWU4MjE4NDEwNDY=,6,Upgrade to latest sqlite-utils,9599,open,0,,,1,2021-03-04T07:21:54Z,2021-03-04T07:22:51Z,,MEMBER,,This is pinned to v1 at the moment.,206649770,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/6/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 836923194,MDU6SXNzdWU4MzY5MjMxOTQ=,32,JSON API for search results,9599,open,0,,,0,2021-03-20T22:21:36Z,2021-03-20T22:21:36Z,,MEMBER,,Refs https://github.com/simonw/datasette/issues/878,197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/32/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 897212458,MDU6SXNzdWU4OTcyMTI0NTg=,63,Ability to fetch commits from branches other than the default,9599,open,0,,,0,2021-05-20T17:58:08Z,2021-05-20T17:58:08Z,,MEMBER,,This tool is currently almost entirely ignorant of the concept of branches. 
One example: you can't retrieve commits from any branch other than the default (usually main).,207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/63/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 952179830,MDU6SXNzdWU5NTIxNzk4MzA=,2,Command for fetching Hacker News threads from the search API,9599,open,0,,,4,2021-07-25T02:00:45Z,2021-07-25T03:12:57Z,,MEMBER,,"I want to be able to fetch every item for a domain, e.g. https://news.ycombinator.com/from?site=simonwillison.net",248903544,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 952189173,MDU6SXNzdWU5NTIxODkxNzM=,3,Use HN algolia endpoint to retrieve trees,9599,open,0,,,3,2021-07-25T03:35:27Z,2021-07-25T18:41:17Z,,MEMBER,,"The `trees` command currently has to make a request for every single comment. Algolia have an endpoint that bundles the entire thread together into a single request. `https://hn.algolia.com/api/v1/items/ID` Here's an example that loads quickly, with about 50 comments: https://hn.algolia.com/api/v1/items/27941108 It doesn't appear to use pagination at all - if a thread is big then the response is big. I ran this search to find some stories with more than 1000 comments: https://hn.algolia.com/api/v1/search?tags=story&numericFilters=num_comments%3E=1000 Here's one: https://news.ycombinator.com/item?id=25015967 with 4759 comments. Hitting the API takes 41s and returns 3.7 MB of JSON!
```
wget 'https://hn.algolia.com/api/v1/items/25015967'  0.03s user 0.04s system 0% cpu 41.368 total
/tmp % ls -lah 25015967
-rw-r--r--  1 simon  wheel   3.7M Jul 24 20:31 25015967
```",248903544,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/3/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 975166271,MDU6SXNzdWU5NzUxNjYyNzE=,20,Add index on workout_points.date,9599,open,0,,,2,2021-08-20T01:08:04Z,2021-08-20T01:12:48Z,,MEMBER,,"Sorting that by date makes sense for seeing most recent points, and my DB has 2.5m points in so it's an expensive sort!",197882382,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/20/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1071071397,I_kwDODFdgUs4_10Cl,69,View that combines issues and issue comments,9599,open,0,,,1,2021-12-04T00:34:33Z,2021-12-04T00:34:52Z,,MEMBER,,I want to see a reverse chronologically ordered interface onto both issues and comments - essentially a unified log of comments and issues opened across one or multiple projects.,207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/69/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1570375808,I_kwDODFdgUs5dmgiA,79,Deploy demo job is failing due to rate limit,9599,open,0,,,2,2023-02-03T20:05:01Z,2023-12-08T14:50:15Z,,MEMBER,,https://github.com/dogsheep/github-to-sqlite/actions/runs/4080058087/jobs/7032116511,207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1616429236,I_kwDOJHON9s5gWMC0,4,Support incremental updates,9599,open,0,,,2,2023-03-09T05:14:00Z,2023-03-09T18:20:56Z,,MEMBER,,"Running this script can take several hours against a large notes database. Would be neat if it could run against just the notes that have been modified since it last ran. Could pull the max `updated` date and then keep on looping until it finds one modified before then. Problem is I don't actually know what order it iterates over the notes in.",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/4/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1616440856,I_kwDOJHON9s5gWO4Y,5,Configure full text search,9599,open,0,,,0,2023-03-09T05:20:46Z,2023-03-09T05:20:46Z,,MEMBER,,"FTS would be useful. Maybe even extract the plain text from the notes to make that index easier to create, rather than creating it against the HTML. Can use the `plaintext` property for that.",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/5/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1617602868,I_kwDOJHON9s5gaqk0,6,Character encoding problem,9599,open,0,,,2,2023-03-09T16:44:34Z,2023-04-14T15:22:09Z,,MEMBER,,"I ran against a recent note with this in it: > Or just ""Actions ⚙️ "" And got back: > `Actions ‚öôÔ∏è` Pasting that into https://ftfy.vercel.app/?s=Actions+%E2%80%9A%C3%B6%C3%B4%C3%94%E2%88%8F%C3%A8+ gives this:
```python
s = 'Actions â\x80\x9aöôÃ\x94â\x88\x8fè'
s = s.encode('latin-1')
s = s.decode('utf-8')
s = s.encode('macroman')
s = s.decode('utf-8')
print(s)
```
",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/6/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1617938730,I_kwDOJHON9s5gb8kq,9,"Default to just storing plaintext, store HTML if `--html` is passed",9599,open,0,,,0,2023-03-09T20:19:06Z,2023-03-09T20:19:06Z,,MEMBER,,"The full `body` version of the notes can get HUGE, due to embedded images. It turns out for my own purposes I'm usually happy with just the `plaintext` version. 
I'm tempted to say you don't get HTML unless you pass a `--html` option.",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/9/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1618130434,I_kwDOJHON9s5gcrYC,11,Implement a SQL view to make it easier to query files in a nested folder,9599,open,0,,,3,2023-03-09T23:19:28Z,2023-03-09T23:24:01Z,,MEMBER,,"Working with nested data in SQL is tricky - can I make it easier with a view or canned query?",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/11/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1888477283,I_kwDOC8SPRc5wj-Bj,38,Run `rebuild_fts` after building the index,9599,open,0,,,0,2023-09-08T23:17:45Z,2023-09-08T23:17:45Z,,MEMBER,,"In:
- https://github.com/simonw/datasette.io/issues/152#issuecomment-1712323347

This turned out to be the fix:
```bash
dogsheep-beta index dogsheep-index.db templates/dogsheep-beta.yml
sqlite-utils rebuild-fts dogsheep-index.db
```",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/38/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 267515678,MDU6SXNzdWUyNjc1MTU2Nzg=,3,"Make individual column values addressable, with smart content types",9599,open,0,,,1,2017-10-23T01:11:32Z,2017-12-10T03:11:58Z,,OWNER,,"Some SQLite databases embed images in columns. It would be cool if these had URLs.

/database-name-7sha256/table-name/compound-pk/column
/database-name-7sha256/table-name/compound-pk/column.json
/database-name-7sha256/table-name/compound-pk/column.png
/database-name-7sha256/table-name/compound-pk/column.gif
/database-name-7sha256/table-name/compound-pk/column.txt

The one without an explicit file extension auto-detects the correct extension.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/3/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 268110769,MDU6SXNzdWUyNjgxMTA3Njk=,33,Use locust for benchmarking and load tests,9599,open,0,,,0,2017-10-24T17:00:09Z,2017-12-10T03:12:16Z,,OWNER,,"https://github.com/locustio/locust Needed for #32 ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/33/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 274615452,MDU6SXNzdWUyNzQ2MTU0NTI=,111,Add “updated” to metadata,9599,open,0,,,12,2017-11-16T18:22:20Z,2021-09-21T22:48:27Z,,OWNER,,"To give an indication as to when the data was last updated. This should be a field in the metadata that is then shown on the index page and in the footer, if it is set. 
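A sketch of how that might look in metadata.json (the `updated` key is the proposal here, not an existing Datasette setting, and the values are made up):

```json
{
    ""title"": ""My example database"",
    ""updated"": ""2021-09-21""
}
```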
Also support setting it using an option to “datasette publish” and “datasette package” - which can either be a string or the magic string “today” to set it to today’s date: datasette publish file.db --updated=today",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/111/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275125561,MDU6SXNzdWUyNzUxMjU1NjE=,123,Datasette serve should accept paths/URLs to CSVs and other file formats,9599,open,0,,,9,2017-11-19T02:05:48Z,2021-07-19T00:04:32Z,,OWNER,,"This would remove the csvs-to-sqlite step which I end up using for almost everything. I'm hesitant to introduce pandas as a required dependency though since it requires compiling numpy. Could build it so this option is only available if you have pandas installed.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/123/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",, 275159710,MDU6SXNzdWUyNzUxNTk3MTA=,128,"Every visualization should have an ""embed"" button",9599,open,0,,,0,2017-11-19T13:38:13Z,2019-05-13T18:33:51Z,,OWNER,,"At least for the first round of visualizations, any time you construct one using the UI the result should include an ""embed this"" button that returns source code to copy and paste These examples should use unpkg.com (or similar) URLs with SRI hashes, eg https://www.srihash.org - and should load data from the datasette JSON API.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/128/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275415799,MDU6SXNzdWUyNzU0MTU3OTk=,137,Ability to combine multiple SQL queries on a single graph,9599,open,0,,,1,2017-11-20T16:26:57Z,2019-05-13T18:33:51Z,,OWNER,,This would make visualizations significantly more powerful. The interesting challenge will be around the URL design. 
It would be useful to be able to combine either multiple explicit SQL queries or multiple queries based on the filter string parameters passed to one or more table views.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/137/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275755475,MDU6SXNzdWUyNzU3NTU0NzU=,140,Heatmap visualization plugin,9599,open,0,,,2,2017-11-21T15:34:23Z,2019-05-13T18:33:51Z,,OWNER,,Could use https://github.com/scottbedard/svelte-heatmap,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/140/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 288438570,MDU6SXNzdWUyODg0Mzg1NzA=,179,More metadata options for template authors ,9599,open,0,,,2,2018-01-14T20:51:04Z,2019-05-13T18:33:33Z,,OWNER,,See this thread on Twitter: https://twitter.com/simonw/status/952637152797458432,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/179/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 309047460,MDU6SXNzdWUzMDkwNDc0NjA=,188,Ability to bundle metadata and templates inside the SQLite file,9599,open,0,,,4,2018-03-27T16:42:07Z,2020-12-04T17:18:34Z,,OWNER,,"One of the nicest qualities of SQLite as a data format is that you get a single file which you can then backup or share with other people. Datasette breaks this a little once you start including custom metadata.json or template files and CSS. It would be cool if there was an optional mechanism for baking that extra configuration into the SQLite file itself. That way entire datasette mini-applications (including canned queries and custom HTML and CSS) could be constructed as single .db files. Since datasette configuration is all file-based, one way to achieve that would be to support a ""datasette_files"" table which, if present, is used to search for file contents by path. This is in line with the philosophy described by https://www.sqlite.org/appfileformat.html ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/188/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 312395790,MDU6SXNzdWUzMTIzOTU3OTA=,197,Ability to sort by more than one column,9599,open,0,,,0,2018-04-09T05:13:30Z,2018-07-10T17:45:37Z,,OWNER,,"Split off from #189. I'd like to support ""sort by X descending, then by Y ascending if there are dupes for X"" as well. Suggested syntax for that: ?_sort_desc=X&_sort=Y - we currently only allow one argument to be sent. 
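Under the hood each sort parameter would presumably just extend the ORDER BY clause - a sketch, with a hypothetical table name:

```sql
-- what ?_sort_desc=X&_sort=Y could compile to:
select * from mytable order by X desc, Y asc
```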
We should allow as many arguments as there are columns, for example: ?_sort=department&_sort_desc=precinct&_sort=age&_sort_desc=size",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/197/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 312396095,MDU6SXNzdWUzMTIzOTYwOTU=,198,Ability to sort with nulls last,9599,open,0,,,0,2018-04-09T05:15:40Z,2018-07-10T17:45:37Z,,OWNER,,"Split off from #189 Here's how to do that in SQL: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+rowid%2C+*+from+%5Bnfl-wide-receivers%2Fadvanced-historical%5D%0D%0Aorder+by+case+when+career_ranypa+is+null+then+1+else+0+end%2C+career_ranypa%2C+rowid order by case when career_ranypa is null then 1 else 0 end, career_ranypa",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/198/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 314771615,MDU6SXNzdWUzMTQ3NzE2MTU=,218,"Support custom unit display in order to handle ""$10,000""",9599,open,0,,,0,2018-04-16T18:39:31Z,2018-07-10T17:45:38Z,,OWNER,,"I tried to get Datasette to display `$10,000` using the new units support but we currently only display units as a suffix: https://github.com/simonw/datasette/blob/10a34f995c70daa37a8a2aa02c3135a4b023a24c/datasette/app.py#L563-L572 It would be neat if there was a mechanism for specifying a custom unit display - maybe something like this: ``` { ""custom_units"": { ""us_dollar"": { ""unit"": ""us_dollar = [] = $"", ""format"": ""${:,}"" } } } ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/218/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 316621102,MDU6SXNzdWUzMTY2MjExMDI=,235,Add limit on the size in KB of data returned from a single query,9599,open,0,,,2,2018-04-22T23:01:15Z,2018-04-24T00:30:02Z,,OWNER,,"Datasette limits the number of rows returned to 1,000 and limits the time spent executing a SQL query to 1000ms - and both of these limits can be customized. It does not have a limit on the size of the response returned. It's possible to compose maliciously large SQL responses in a small number of rows using mechanisms like the `group_concat()` aggregate function. It would be good to avoid malicious SQL creating 100MB+ responses and potentially crashing the server. I think the easiest place to implement that is here: https://github.com/simonw/datasette/blob/f3f42957128c1e7ece584d45d9167f2ac003a3b8/datasette/app.py#L175-L190 Currently we use `cursor.fetchmany()` to fetch up to 1,001 rows at once. Instead, we could switch to iterating through `cursor.fetchone()` (or just using `for row in cursor`) and keeping a running tally of the size of the response as we go - maybe just using `rough_response_size += len(str(row))`. If that goes above a certain threshold we can terminate the response with an error, like we do with timelimits. The bigger challenge here is understanding how well this approach works and what impact it will have on overall Datasette performance. 
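A rough sketch of that running tally, with hypothetical names and threshold (not Datasette's actual implementation):

```python
class ResponseTooLargeError(Exception):
    pass

def fetch_rows_capped(cursor, max_bytes=1024 * 1024):
    # Iterate row by row instead of cursor.fetchmany(), keeping a
    # rough running total of the response size as we go
    rows = []
    rough_response_size = 0
    for row in cursor:
        rough_response_size += len(str(row))
        if rough_response_size > max_bytes:
            # terminate with an error, as with the existing time limits
            raise ResponseTooLargeError()
        rows.append(row)
    return rows
```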
I think I need #33 for this.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/235/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 317001500,MDU6SXNzdWUzMTcwMDE1MDA=,236,datasette publish lambda plugin,9599,open,0,,,11,2018-04-23T22:10:30Z,2023-03-12T14:04:15Z,,OWNER,,"Refs #217 - create a publish plugin that can deploy to AWS Lambda. https://docs.aws.amazon.com/lambda/latest/dg/limits.html says lambda packages can be up to 50 MB, so this would only work with smaller databases (the command can check the filesize before attempting to package and deploy it). Lambdas do get a 512 MB `/tmp` directory too, so for larger databases the function could start and then download up to 512MB from an S3 bucket - so the plugin could take an optional S3 bucket to write to and know how to upload the `.db` file there and then have the lambda download it on startup.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/236/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 318490133,MDU6SXNzdWUzMTg0OTAxMzM=,241,Default datasette logging format should be JSON,9599,open,0,,,0,2018-04-27T17:32:48Z,2018-07-10T17:45:40Z,,OWNER,,"Structured logs are better. Datasette should default to outputting its HTTP access log lines as newline-delimited JSON instead of the Sanic default format it uses at the moment. For improved greppability these logs should have keys ordered in a consistent way. Python's JSON module can do this with ordered dictionaries.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/241/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 320132682,MDU6SXNzdWUzMjAxMzI2ODI=,250,Setup some issue templates,9599,open,0,,,0,2018-05-04T01:49:07Z,2018-05-04T01:49:07Z,,OWNER,,"https://twitter.com/left_pad/status/99216385740464537 I like the idea of using these to help people understand some of the ways I want to use issues.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/250/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323223872,MDU6SXNzdWUzMjMyMjM4NzI=,260,Validate metadata.json on startup,9599,open,0,,,7,2018-05-15T13:42:56Z,2023-06-21T12:51:22Z,,OWNER,,"It's easy to misspell the name of a database or table and then be puzzled when the metadata settings silently fail. To avoid this, let's sanity check the provided metadata.json on startup and quit with a useful error message if we find any obvious mistakes.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/260/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323658641,MDU6SXNzdWUzMjM2NTg2NDE=,262,Add ?_extra= mechanism for requesting extra properties in JSON,9599,open,0,,3268330,27,2018-05-16T14:55:42Z,2023-03-29T06:22:22Z,,OWNER,,"Datasette views currently work by creating a set of data that should be returned as JSON, then defining an additional, optional `template_data()` function which is called if the view is being rendered as HTML. 
This `template_data()` function calculates extra template context variables which are necessary for the HTML view but should not be included in the JSON. Example of how that is used today: https://github.com/simonw/datasette/blob/2b79f2bdeb1efa86e0756e741292d625f91cb93d/datasette/views/table.py#L672-L704 With features like Facets in #255 I'm beginning to want to move more items into the `template_data()` - in the case of facets it's the `suggested_facets` array. This saves that feature from being calculated (involving several SQL queries) for the JSON case where it is unlikely to be used. But... as an API user, I want to still optionally be able to access that information. Solution: Add a `?_extra=suggested_facets&_extra=table_metadata` argument which can be used to optionally request additional blocks to be added to the JSON API. Then redefine as many of the current `template_data()` features as extra arguments instead, and teach Datasette to return certain extras by default when rendering templates. This could allow the JSON representation to be slimmed down further (removing e.g. the `table_definition` and `view_definition` keys) while still making that information available to API users who need it.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/262/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323718842,MDU6SXNzdWUzMjM3MTg4NDI=,268,Mechanism for ranking results from SQLite full-text search,9599,open,0,,,12,2018-05-16T17:36:40Z,2022-01-13T22:21:28Z,,OWNER,,This isn't particularly straight-forward - all the more reason for Datasette to implement it for you. This article is helpful: http://charlesleifer.com/blog/using-sqlite-full-text-search-with-python/,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/268/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 326599525,MDU6SXNzdWUzMjY1OTk1MjU=,286,Database hash should include current datasette version,9599,open,0,,,2,2018-05-25T17:03:42Z,2018-05-25T17:07:36Z,,OWNER,,"Right now deploying a new version of datasette doesn't invalidate existing URLs, so users may still see a cached copy of the old templates. We can fix this by including the current datasette version in the input to the hash function (which is currently just the database file contents).",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/286/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 326778161,MDU6SXNzdWUzMjY3NzgxNjE=,290,Consider increasing the default for num_sql_threads (currently 3),9599,open,0,,,0,2018-05-27T00:52:41Z,2018-05-27T00:52:41Z,,OWNER,,"I ran a very rough micro-benchmark on the new `num_sql_threads` config option (added in #285) datasette --config num_sql_threads:1 fivethirtyeight.db Then ab -n 100 -c 10 'http://127.0.0.1:8011/fivethirtyeight-2628db9/twitter-ratio%2Fsenators'

| Number of threads | Requests/second |
|---|---|
| 1 | 4.57 |
| 3 | 9.77 |
| 10 | 13.53 |
| 20 | 15.24 |
| 50 | 8.21 |

This was on my early 2018 OS X laptop. Need to benchmark in other common environments before making a decision on changing the default. 
That said, the default of 3 was a number I plucked out of thin air.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/290/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 327365110,MDU6SXNzdWUzMjczNjUxMTA=,294,inspect should record column types,9599,open,0,,,7,2018-05-29T15:10:41Z,2019-06-28T16:45:28Z,,OWNER,,"For each table we want to know the columns, their order and what type they are. I'm going to break with SQLite defaults a little on this one and allow datasette to define additional types - to start with just a `geometry` type for columns that are detected as SpatiaLite geometries. Possible JSON design: ""columns"": [{ ""name"": ""title"", ""type"": ""text"" }, ...] Refs #276",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/294/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 327395270,MDU6SXNzdWUzMjczOTUyNzA=,296,Per-database and per-table /-/ URL namespace,9599,open,0,,,3,2018-05-29T16:23:13Z,2019-06-28T16:46:34Z,,OWNER,,"Initially this will be for subsets of `/-/inspect` and `/-/metadata` but it will also give us a URL namespace for future features like `/-/facet` (expanded list of a specific facet, linked to from `...`) and `/-/graph` To start: * `/dbname/-/inspect` * `/dbname/-/metadata` * `/dbname/tablename/-/inspect` * `/dbname/tablename/-/metadata` This means we will no longer allow databases or tables to have the name `""-""` - I think that's OK We will continue to support rows with a primary key of `""-""` at the following URL: * `/dbname/tablename/-`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/296/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 328155946,MDU6SXNzdWUzMjgxNTU5NDY=,301,"--spatialite option for ""datasette publish heroku""",9599,open,0,,,1,2018-05-31T14:13:09Z,2022-01-20T21:28:50Z,,OWNER,,Split off from #243. Need to figure out how to install and configure SpatiaLite on Heroku.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/301/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 335200136,MDU6SXNzdWUzMzUyMDAxMzY=,327,Explore if SquashFS can be used to shrink size of packaged Docker containers,9599,open,0,,,4,2018-06-24T18:15:16Z,2022-02-17T23:37:24Z,,OWNER,,"Inspired by this article: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html#sqlite-database-indexed--squashed https://en.wikipedia.org/wiki/SquashFS is ""a compressed read-only file system for Linux"" - which means it could be a really nice fit for Datasette and its read-only SQLite databases. 
It would be interesting to explore a Dockerfile recipe that used SquashFS to compress the SQLite database file that was bundled up by `datasette package` and friends.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/327/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 344654623,MDU6SXNzdWUzNDQ2NTQ2MjM=,347,"Rename ""datasette package"" to ""datasette publish docker""",9599,open,0,,,0,2018-07-26T00:42:46Z,2018-07-26T00:42:46Z,,OWNER,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/347/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 346026869,MDU6SXNzdWUzNDYwMjY4Njk=,354,Handle many-to-many relationships,9599,open,0,,,0,2018-07-31T04:03:13Z,2020-11-24T19:51:18Z,,OWNER,,This is a master tracking ticket for various many-2-many features.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/354/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 346027040,MDU6SXNzdWUzNDYwMjcwNDA=,355,Table view should support filtering via many-to-many relationships,9599,open,0,,,10,2018-07-31T04:04:16Z,2019-05-23T06:04:03Z,,OWNER,,Parent: #354 ,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/355/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 348043884,MDU6SXNzdWUzNDgwNDM4ODQ=,357,Plugin hook for loading metadata.json,9599,open,0,,,6,2018-08-06T19:00:01Z,2020-06-21T22:19:58Z,,OWNER,,"For https://github.com/simonw/russian-ira-facebook-ads-datasette/tree/af6d956995e14afd585c35a6a06bb01da32043ba I wrote a script to convert YAML to JSON because YAML is a better format for embedding multi-line HTML descriptions and canned SQL statements. Example yaml metadata file: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/af6d956995e14afd585c35a6a06bb01da32043ba/russian-ads-metadata.yaml It would be useful if Datasette could be fed a YAML file directly: datasette -m metadata.yaml Question is... should this be a native feature (hence adding a YAML dependency) or should it be handled by a `datasette-metadata-yaml` plugin, using a new plugin hook for loading metadata? If so, what would other use-cases for that plugin hook be?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/357/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 400340905,MDU6SXNzdWU0MDAzNDA5MDU=,402,Use SQLITE_DBCONFIG_DEFENSIVE plus other recommendations from SQLite security docs,9599,open,0,,,3,2019-01-17T15:52:28Z,2019-01-17T16:15:21Z,,OWNER,,"> Was just having a skim through the datasette source. Given that the vuln impacts shadow tables, wasn't sure whether these are also covered by the immutable flag. 
Latest release introduced a SQLITE_DBCONFIG_DEFENSIVE flag that they recommend setting: https://sqlite.org/security.html https://twitter.com/ignoredambience/status/1085926961413869568",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/402/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 421546944,MDU6SXNzdWU0MjE1NDY5NDQ=,417,Datasette Library,9599,open,0,,,12,2019-03-15T14:30:22Z,2020-12-29T14:34:50Z,,OWNER,,"The ability to run Datasette in a mode where it automatically picks up new (or modified) files in a directory tree without needing to restart the server. Suggested command: datasette library /path/to/mydbs/",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/417/reactions"", ""total_count"": 8, ""+1"": 8, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 426722204,MDU6SXNzdWU0MjY3MjIyMDQ=,423,?_search_col=X not reflected correctly in the UI,9599,open,0,,,0,2019-03-28T21:48:19Z,2020-11-03T19:01:59Z,,OWNER,,"e.g. https://latest.datasette.io/fixtures/searchable?_search_text1=barry ![2019-03-28 at 2 47 PM](https://user-images.githubusercontent.com/9599/55195035-84ebb800-5168-11e9-910b-fc9868bcd93e.png) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/423/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 443021509,MDU6SXNzdWU0NDMwMjE1MDk=,461,Paginate + search for databases/tables on the homepage,9599,open,0,,3268330,4,2019-05-11T18:05:34Z,2020-12-17T22:14:46Z,,OWNER,,Split out from #460 - in order to support large numbers of connected databases the homepage needs to be paginated.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/461/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 447408527,MDU6SXNzdWU0NDc0MDg1Mjc=,483,Option to facet by date using month or year,9599,open,0,,,5,2019-05-23T01:25:29Z,2019-05-29T21:38:27Z,,OWNER,,"Facet by date (from #481) can take datetimes and facet them by the day component. https://latest.datasette.io/fixtures/facetable?_facet_date=created I'd like to also be able to facet by month or year. I'm not sure what the best way to achieve this is. Could be two more Facet classes (YearFacet and MonthFacet) but I think it might be nicer if the existing DateFacet could take an optional argument that changed its behaviour. But... if I do that, do I expose it in the UI somewhere or is it only available to URL-hackers?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/483/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 447451492,MDU6SXNzdWU0NDc0NTE0OTI=,484,Mechanism for displaying summary of m2m relationships in rows on table view,9599,open,0,,,1,2019-05-23T05:02:41Z,2019-05-23T06:34:05Z,,OWNER,,"Part of #354 (m2m support) It would be fantastic if rows that are part of a m2m relationship could display it in an additional column in the table view. 
It might look something like this: https://russian-ira-facebook-ads.datasettes.com/russian-ads-919cbfd/display_ads?_search=black+lives+matter That example [was achieved](https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/daf51a8c50a78e8bc7971c211005fd85e66ccf64/russian-ads-metadata.yaml#L72-L77) using a custom SQL query and [datasette-json-html](https://github.com/simonw/datasette-json-html) - but I'd like this to be a built-in feature instead.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/484/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 447469253,MDU6SXNzdWU0NDc0NjkyNTM=,485,Improvements to table label detection ,9599,open,0,9599,,10,2019-05-23T06:19:49Z,2022-10-03T00:04:42Z,,OWNER,,"Label detection doesn't work if the primary key is called pk rather than id, so this page doesn't work: https://latest.datasette.io/fixtures/roadside_attraction_characteristics Code is here: https://github.com/simonw/datasette/blob/cccea85be6aaaeadb31f3b588ec7f732628815f5/datasette/app.py#L644-L653",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/485/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 449445715,MDU6SXNzdWU0NDk0NDU3MTU=,491,Figure out how to use Firebase with cloudrun to enable vanity URLs and CDN caching,9599,open,0,,,0,2019-05-28T19:48:06Z,2019-05-28T19:48:35Z,,OWNER,,"It looks like Firebase can solve a couple of problems with the existing `datasette publish cloudrun` hosting mechanism: * The URLs it produces aren't pretty enough. Firebase offers more control over vanity URLs. * CDN caching (as seen in `datasette publish now`) is great for improving performance and saving money on Cloud Run execution time. https://firebase.google.com/docs/hosting/cloud-run looks like it can help with both of these. Lots of interesting questions: * Should this be a new `datasette publish firebase` command or should it instead be implemented as additional custom options to `datasette publish cloudrun`? * How much harder does it become to do account setup? * How much will this option cost users?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/491/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 450032134,MDU6SXNzdWU0NTAwMzIxMzQ=,495,facet_m2m gets confused by multiple relationships,9599,open,0,,,2,2019-05-29T21:37:28Z,2020-12-17T05:08:22Z,,OWNER,,"I got this for a database I was playing with: I think this is because of these three tables: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/495/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 455486286,MDU6SXNzdWU0NTU0ODYyODY=,26,Mechanism for turning nested JSON into foreign keys / many-to-many,9599,open,0,,,14,2019-06-13T00:52:06Z,2022-06-29T23:35:29Z,,OWNER,,"The GitHub JSON APIs have a really interesting convention with respect to related objects. 
Consider https://api.github.com/repos/simonw/sqlite-utils/issues - here's a truncated subset:

```json
{
  ""id"": 449818897,
  ""node_id"": ""MDU6SXNzdWU0NDk4MTg4OTc="",
  ""number"": 24,
  ""title"": ""Additional Column Constraints?"",
  ""user"": {
    ""login"": ""IgnoredAmbience"",
    ""id"": 98555,
    ""node_id"": ""MDQ6VXNlcjk4NTU1"",
    ""avatar_url"": ""https://avatars0.githubusercontent.com/u/98555?v=4"",
    ""gravatar_id"": """"
  },
  ""labels"": [
    {
      ""id"": 993377884,
      ""node_id"": ""MDU6TGFiZWw5OTMzNzc4ODQ="",
      ""url"": ""https://api.github.com/repos/simonw/sqlite-utils/labels/enhancement"",
      ""name"": ""enhancement"",
      ""color"": ""a2eeef"",
      ""default"": true
    }
  ],
  ""state"": ""open""
}
```

The `user` column lists a complete user. The `labels` column has a list of labels. Since both `user` and `labels` have populated `id` fields, this is actually enough information for us to create records for them AND set up the corresponding foreign key (for `user`) and m2m relationships (for `labels`). It would be really neat if `sqlite-utils` had some kind of mechanism for correctly processing these kinds of patterns. Thanks to `jq` there's not much need for extra customization of the shape here - if we support a narrowly defined structure, users can use `jq` to reshape arbitrary JSON to match.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/26/reactions"", ""total_count"": 4, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
455852801,MDU6SXNzdWU0NTU4NTI4MDE=,507,Every datasette plugin on the ecosystem page should have a screenshot,9599,open,0,,,4,2019-06-13T17:02:51Z,2020-09-17T02:47:35Z,,OWNER,,https://github.com/simonw/datasette/blob/master/docs/ecosystem.rst,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/507/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
456569067,MDU6SXNzdWU0NTY1NjkwNjc=,510,Ability to facet by delimiter (e.g. comma separated fields),9599,open,0,9599,,1,2019-06-15T19:34:41Z,2019-07-08T15:44:51Z,,OWNER,,"E.g.
if a field contains ""Tags,With,Commas"", be able to facet them in the same way as `_facet_array=` lets you facet `[""Tags"", ""With"", ""Commas""]`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/510/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
456578474,MDU6SXNzdWU0NTY1Nzg0NzQ=,511,Get Datasette tests passing on Windows in GitHub Actions,9599,open,0,,,13,2019-06-15T21:41:58Z,2021-07-11T17:23:05Z,,OWNER,,"This should almost happen as a side-effect of moving from Sanic to Uvicorn during the port to ASGI: #272

Additional steps:

- test it manually
- update documentation
- set up some form of Windows CI",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/511/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
459469278,MDU6SXNzdWU0NTk0NjkyNzg=,515,Try shrinking official image with docker-slim,9599,open,0,,,0,2019-06-22T12:25:37Z,2019-06-22T12:25:37Z,,OWNER,,"This looks really promising: https://github.com/docker-slim/docker-slim

If it can shave substantial size from our official container reliably, we could add it to the automated build process.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/515/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
459509126,MDU6SXNzdWU0NTk1MDkxMjY=,516,Enforce import sort order with isort,9599,open,0,,,8,2019-06-22T20:35:50Z,2023-08-23T02:15:36Z,,OWNER,,"I want to use isort to order imports. A few steps here:

- [x] Add a .isort.cfg file (see below)
- [x] Use `isort -rc` to reformat existing code
- [ ] Commit this change
- [x] Add a unit test that ensures future changes remain isort compatible",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/516/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
459622390,MDU6SXNzdWU0NTk2MjIzOTA=,522,Handle case-insensitive headers in a nicer way,9599,open,0,,,1,2019-06-23T21:56:34Z,2019-06-26T18:48:53Z,,OWNER,,Spun out from https://github.com/simonw/datasette/pull/518#discussion_r296486289,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/522/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
460095928,MDU6SXNzdWU0NjAwOTU5Mjg=,528,Establish a pattern for Datasette plugins built on top of Pandas,9599,open,0,,,0,2019-06-24T21:05:52Z,2019-06-24T21:05:52Z,,OWNER,,"The Pandas ecosystem is huge, varied and full of tools that are really good at doing interesting analysis on top of tabular data.

Pandas should not be a dependency of Datasette core, but I think there is a lot of potential in having plugins which use Pandas to apply interesting analysis to data sucked out of Datasette's SQLite tables.

One example ([thanks, Tony](https://twitter.com/psychemedia/status/1143259809715752962)): https://github.com/ResidentMario/missingno could form the basis of a fantastic plugin for getting a high-level overview of how complete each column in a table is.
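A minimal sketch of what such a plugin might do internally - hypothetical file and table names, not an existing plugin hook - is a pandas round-trip over one of Datasette's SQLite tables:

```python
import sqlite3

import missingno as msno
import pandas as pd

# Load a Datasette-served SQLite table into a DataFrame
# (fixtures.db and facetable are placeholder names).
conn = sqlite3.connect('fixtures.db')
df = pd.read_sql_query('select * from facetable', conn)

# missingno renders a matrix visualisation of missing values;
# a plugin could save this and serve it alongside the table page.
ax = msno.matrix(df)
ax.get_figure().savefig('completeness.png')
```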
Some thought is needed here about what shape these kinds of plugins might take, and what plugin hooks they would use.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/528/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
462117311,MDU6SXNzdWU0NjIxMTczMTE=,531,/database/-/inspect,9599,open,0,,,1,2019-06-28T16:33:41Z,2019-07-08T15:43:57Z,,OWNER,,"Build `/database/-/inspect` which shows tables, columns, column types and foreign keys. It won't show table counts. Or maybe it will include them optionally, but only for `-i` databases, in a special area of the JSON reserved for immutable-only inspect details.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/465#issuecomment-506797086_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/531/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
463492815,MDU6SXNzdWU0NjM0OTI4MTU=,534,500 error on m2m facet detection,9599,open,0,,,1,2019-07-03T00:42:42Z,2020-12-17T05:08:22Z,,OWNER,,"This may help debug:

```
diff --git a/datasette/facets.py b/datasette/facets.py
index 76d73e5..07a4034 100644
--- a/datasette/facets.py
+++ b/datasette/facets.py
@@ -499,11 +499,14 @@ class ManyToManyFacet(Facet):
                 ""outgoing""
             ]
             if len(other_table_outgoing_foreign_keys) == 2:
-                destination_table = [
-                    t
-                    for t in other_table_outgoing_foreign_keys
-                    if t[""other_table""] != self.table
-                ][0][""other_table""]
+                try:
+                    destination_table = [
+                        t
+                        for t in other_table_outgoing_foreign_keys
+                        if t[""other_table""] != self.table
+                    ][0][""other_table""]
+                except IndexError:
+                    import pdb; pdb.pm()
                 # Only suggest if it's not selected already
                 if (""_facet_m2m"", destination_table) in args:
                     continue
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/534/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
463544206,MDU6SXNzdWU0NjM1NDQyMDY=,537,"Populate ""endpoint"" key in ASGI scope",9599,open,0,,,12,2019-07-03T04:54:47Z,2019-07-22T06:03:18Z,,OWNER,,"This is a trick used by Starlette so that other layers of ASGI middleware can see which route was selected. They added it here: https://github.com/encode/starlette/commit/34d0097feb6f057bd050d5057df5a2f96b97384e

If Datasette supports it as well, we can benefit from it if we integrate this sentry_asgi middleware (probably as a `datasette-sentry` plugin): https://github.com/encode/sentry-asgi/blob/c6a42d44d31f85885b79e4ee898683ecf8104971/sentry_asgi/middleware.py#L34-L35",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/537/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
464987783,MDExOlB1bGxSZXF1ZXN0Mjk1MTI3MjEz,546,Facet by delimiter,9599,open,0,,,2,2019-07-07T20:06:05Z,2019-11-18T23:46:01Z,,OWNER,simonw/datasette/pulls/546,Refs #510,107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/546/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
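The delimiter splitting behind #510 and this pull request can be illustrated in plain SQLite with a recursive CTE - a hypothetical sketch (table and column names invented here), not the actual implementation:

```sql
-- Facet counts for a comma-separated tags column on a hypothetical items table
with recursive split(item_id, tag, rest) as (
  select id, '', tags || ',' from items
  union all
  select item_id,
         substr(rest, 1, instr(rest, ',') - 1),
         substr(rest, instr(rest, ',') + 1)
  from split
  where rest != ''
)
select tag, count(*) as n
from split
where tag != ''
group by tag
order by n desc;
```

Each recursive step peels the next delimited value off the front of `rest`, so a value like `Tags,With,Commas` contributes one row per tag to the facet counts.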