{"id": 565064079, "node_id": "MDExOlB1bGxSZXF1ZXN0Mzc1MTgwODMy", "number": 672, "title": "--dirs option for scanning directories for SQLite databases", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 15, "created_at": "2020-02-14T02:25:52Z", "updated_at": "2020-03-27T01:03:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/672", "body": "Refs #417.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/672/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 567902704, "node_id": "MDU6SXNzdWU1Njc5MDI3MDQ=", "number": 675, "title": "--cp option for datasette publish and datasette package for shipping additional files and directories", "user": {"value": 141844, "label": "aviflax"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 12, "created_at": "2020-02-19T22:55:56Z", "updated_at": "2020-12-28T18:49:21Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I\u2019m working on integrating Datasette into a documentation-oriented publishing workflow internally in my company, and in order to deploy the Docker image created by `datasette package` I need to add an additional file to the image \u2014 in my case, it\u2019s a sort of a deployment directive. I\u2019ve worked out a way to do this after the image has been created, but it\u2019s convoluted and brittle.\r\n\r\nSo it\u2019d be excellent if there was an additional option for this command, something like, like, `--copy`.\r\n\r\nI\u2019d envision it looking something like:\r\n\r\n```shell\r\n$ datasette package --copy /the/source/path:/the/target/path data.db\r\n```\r\n\r\nI\u2019d be happy to help design, specify, implement, and test this feature, if you\u2019d be interested.\r\n\r\nThanks for the fantastic tools!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/675/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 573578548, "node_id": "MDU6SXNzdWU1NzM1Nzg1NDg=", "number": 89, "title": "Ability to customize columns used by extracts= feature", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-03-01T16:54:48Z", "updated_at": "2020-10-16T19:17:50Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "@simonw any thoughts on allow extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the \"value\" column (the documentation uses name), but when you use `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named \"value\". 
I have an existing lookup table that I've populated with columns \"id\" and \"name\" as opposed to \"id\" and \"value\", and it seems I can't use `extracts=`, unless I'm missing something...\r\n\r\nInitial thought on how to do this would be to allow the dictionary value to be a (table name, column name) tuple... so:\r\n```\r\ntable = db.table(\"trees\", extracts={\"species_id\": (\"Species\", \"name\")})\r\n```\r\n\r\nI haven't dug too much into the existing code yet, but does this make sense? Worth doing?\r\n\r\n_Originally posted by @chrishas35 in https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503_", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/89/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 574021194, "node_id": "MDU6SXNzdWU1NzQwMjExOTQ=", "number": 691, "title": "--reload should reload server if code in --plugins-dir changes", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-03-02T14:42:21Z", "updated_at": "2020-06-14T02:35:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/691/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 574035432, "node_id": "MDU6SXNzdWU1NzQwMzU0MzI=", "number": 692, "title": "is_hidden_table context variable on table.html page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-03-02T15:03:25Z", "updated_at": "2020-03-02T15:03:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It's useful to know if a table is hidden when rendering that page. `datasette-configure-fts` for example may want to disallow enabling search on hidden tables.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/692/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 581795570, "node_id": "MDU6SXNzdWU1ODE3OTU1NzA=", "number": 93, "title": "Support more string values for types in .add_column()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-03-15T19:32:49Z", "updated_at": "2020-09-24T20:36:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://sqlite-utils.readthedocs.io/en/2.4.2/python-api.html#adding-columns says:\r\n> SQLite types you can specify are \"TEXT\", \"INTEGER\", \"FLOAT\" or \"BLOB\".\r\n\r\nAs discovered in #92, this isn't the right list of values. 
I should expand this to match https://www.sqlite.org/datatype3.html", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/93/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 593006814, "node_id": "MDU6SXNzdWU1OTMwMDY4MTQ=", "number": 715, "title": "Refactor duplicate cell display logic", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-03T00:58:11Z", "updated_at": "2020-04-03T00:58:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The logic for rendering cells in table view and in database (or canned query) view is currently very similar:\r\n\r\nhttps://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/base.py#L514-L539\r\n\r\nCompared with:\r\n\r\nhttps://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/table.py#L104-L195\r\n\r\nI'll be changing this a bit in #698 but I should still try to clean this up further in the future.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/715/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 594237015, "node_id": "MDU6SXNzdWU1OTQyMzcwMTU=", "number": 718, "title": "Plugin idea: datasette-redirects", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-05T03:41:38Z", "updated_at": "2023-08-30T22:17:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just had to write a one-off custom plugin to redirect niche-musems.com to www.niche-museums.com (https://github.com/simonw/museums/issues/21) - it would be great if this kind of thing could be handled by a configurable plugin.\r\n\r\nhttps://github.com/simonw/museums/blob/6b1faf00c463b2228860d4d62d104b11935e01b1/plugins/redirect_www.py", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/718/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "reopened"} {"id": 599776345, "node_id": "MDU6SXNzdWU1OTk3NzYzNDU=", "number": 24, "title": "Feature idea: github-to-sqlite everything ...", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-14T18:34:00Z", "updated_at": "2020-04-14T18:34:00Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "At the moment if you want to pull all your repos, issues, issue comments etc you have to do it with a sequence of separate commands.\r\n\r\nConsider adding an `everything` or `all` command which fetches everything that the tool knows 
how to fetch, and is designed to be run on a cron in a way that fetches just new stuff each time.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/24/reactions\", \"total_count\": 7, \"+1\": 7, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 602533300, "node_id": "MDU6SXNzdWU2MDI1MzMzMDA=", "number": 1, "title": "Import photo metadata from Apple Photos into SQLite", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 5324096, "label": "Apple Photos online and securely browsable"}, "comments": 8, "created_at": "2020-04-18T19:23:26Z", "updated_at": "2020-05-04T02:41:40Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Faces, albums, locations, that kind of thing.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 602533481, "node_id": "MDU6SXNzdWU2MDI1MzM0ODE=", "number": 3, "title": "Import EXIF data into SQLite - lens used, ISO, aperture etc", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 5324096, "label": "Apple Photos online and securely browsable"}, "comments": 2, "created_at": "2020-04-18T19:24:31Z", "updated_at": "2021-10-05T12:38:24Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/3/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 602585497, "node_id": "MDU6SXNzdWU2MDI1ODU0OTc=", "number": 7, "title": "Integrate image content hashing", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-04-19T00:36:58Z", "updated_at": "2021-08-26T02:01:01Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "To spot duplicate images (where the file content differs such that the sha256 is no longer a match) it would be useful to calculate and store perceptual hashes of some sort.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/7/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 602619330, "node_id": "MDU6SXNzdWU2MDI2MTkzMzA=", "number": 45, "title": "Use raise_for_status() everywhere", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, 
"milestone": null, "comments": 1, "created_at": "2020-04-19T04:38:28Z", "updated_at": "2020-04-19T04:39:22Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "I keep seeing errors which I think are caused by authentication or rate limit problems but which appear to be unexpected JSON responses - presumably because they are actually an error message.\r\n\r\nRecent example: https://github.com/simonw/jsk-fellows-on-twitter/runs/598892575\r\n\r\nUsing `response.raise_for_status()` everywhere will make these errors less confusing.", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/45/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 606033104, "node_id": "MDU6SXNzdWU2MDYwMzMxMDQ=", "number": 12, "title": "If less than 500MB, show size in MB not GB", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-04-24T04:35:01Z", "updated_at": "2020-04-24T04:35:25Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Just saw this:\r\n```\r\nUploading 0.05 GB\r\n```", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 607223136, "node_id": "MDU6SXNzdWU2MDcyMjMxMzY=", "number": 741, "title": "Replace \"datasette publish --extra-options\" with \"--setting\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 9, "created_at": "2020-04-27T04:29:04Z", "updated_at": "2022-05-12T19:21:16Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "See https://github.com/simonw/datasette-publish-now/issues/9#issuecomment-618155764 - the `--extra-options` mechanism is in practice just used to set `--config` options in data that you publish, but that means you end up with pretty messy looking commands:\r\n\r\n datasette publish my.db --extra-options=\"--config default_page_size:50 --config sql_time_limit_ms:3500\"\r\n\r\nA neater design would be to support `--config` as an option for `datasette publish` directly:\r\n\r\n datasette publish my.db --config default_page_size:50 --config sql_time_limit_ms:3500\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/741/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 607888367, "node_id": "MDU6SXNzdWU2MDc4ODgzNjc=", "number": 13, "title": "Also upload movie files", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": 
"2020-04-27T22:11:25Z", "updated_at": "2020-04-28T00:39:45Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The `upload` command currently only handles static images:\r\n\r\nhttps://github.com/dogsheep/photos-to-sqlite/blob/d939455af00e07866686457ee2fcb9b2d1b7194e/photos_to_sqlite/utils.py#L26-L33\r\n\r\nNeed to cover movies taken by my phone and DSLR too.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/13/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 608512747, "node_id": "MDU6SXNzdWU2MDg1MTI3NDc=", "number": 14, "title": "Annotate photos using the Google Cloud Vision API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2020-04-28T18:09:03Z", "updated_at": "2020-04-28T18:19:06Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "It can detect faces, run OCR, do image labeling (it knows what a lemur is!) and do object localization where it identifies objects and returns bounding polygons for them.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/14/reactions\", \"total_count\": 3, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 612287234, "node_id": "MDU6SXNzdWU2MTIyODcyMzQ=", "number": 16, "title": "Import machine-learning detected labels (dog, llama etc) from Apple Photos", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 13, "created_at": "2020-05-05T02:45:43Z", "updated_at": "2020-05-05T05:38:16Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Follow-on from #1. Apple Photos runs some very sophisticated machine learning on-device to figure out if photos are of dogs, llamas and so on. I really want to extract those labels out into my own database.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 1, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 612382643, "node_id": "MDU6SXNzdWU2MTIzODI2NDM=", "number": 758, "title": "Question: Access to immutable database-path", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2020-05-05T07:01:18Z", "updated_at": "2020-05-28T08:23:27Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi Simon\r\n\r\nIs there anywhere in the app-context where one can access the hashed urlpath of the database? Currently it's included in the template-context (`databases[0][\"path\")` when rendering urls of the database (eg. 
`/db-44b06v9/cases`...), but where can I find the hashed url when rendering the index-page? I'm trying to avoid redirects. Thanks!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/758/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 612860758, "node_id": "MDU6SXNzdWU2MTI4NjA3NTg=", "number": 18, "title": "Switch CI solution to GitHub Actions with a macOS runner", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-05T20:03:50Z", "updated_at": "2020-05-05T23:49:18Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Refs #17.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/18/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 613422636, "node_id": "MDU6SXNzdWU2MTM0MjI2MzY=", "number": 760, "title": "Way of seeing full schema for a database", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-05-06T15:46:08Z", "updated_at": "2020-05-06T23:49:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I find myself wanting to quickly figure out all of the BLOB columns in a database.\r\n\r\nA `/-/schema` page showing the full schema (actually since it's per-database probably `/dbname/-/schema` or `/-/schema/dbname`) would be really handy.\r\n\r\nIt would need to be carefully constructed from various queries against `sqlite_master` - just doing `select * from sqlite_master where type='table'` isn't quite enough because I also want to show indexes, triggers etc.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/760/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 613491342, "node_id": "MDU6SXNzdWU2MTM0OTEzNDI=", "number": 762, "title": "Experiment with PRAGMA hard_heap_limit ", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-06T17:33:23Z", "updated_at": "2020-05-07T03:08:44Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This was added in SQLite 2020-01-22 (3.31.0): https://www.sqlite.org/changes.html#version_3_31_0\r\n\r\n> Add the [sqlite3_hard_heap_limit64()](https://www.sqlite.org/c3ref/hard_heap_limit64.html) interface and the corresponding [PRAGMA hard_heap_limit](https://www.sqlite.org/pragma.html#pragma_hard_heap_limit) command. 
\r\n\r\nThis sounds like it could be a nice extra safety measure.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/762/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 615626118, "node_id": "MDU6SXNzdWU2MTU2MjYxMTg=", "number": 22, "title": "Try out ExifReader", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-05-11T06:32:13Z", "updated_at": "2020-05-14T05:59:53Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://pypi.org/project/ExifReader/\r\n\r\nNew fork that should be able to handle EXIF in HEIC files.\r\n\r\nForked here: https://github.com/ianare/exif-py/issues/102#issuecomment-626376522\r\n\r\nRefs #3 ", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 616087149, "node_id": "MDU6SXNzdWU2MTYwODcxNDk=", "number": 765, "title": "publish heroku should default to currently tagged version", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-11T18:24:06Z", "updated_at": "2020-05-11T18:25:43Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Had a report that deploying to Heroku was using the previously installed version of Datasette, not the latest.\r\n\r\nCould be because of this:\r\n\r\nhttps://github.com/simonw/datasette/blob/af6c6c5d6f929f951c0e63bfd1c82e37a071b50f/datasette/publish/heroku.py#L172-L179\r\n\r\nHeroku documentation recommends pinning to specific versions https://devcenter.heroku.com/articles/python-pip\r\n\r\nSo... we could ensure we default to an install value of `[\"datasette>=current_tag\"]`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/765/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 617323873, "node_id": "MDU6SXNzdWU2MTczMjM4NzM=", "number": 766, "title": "Enable wildcard-searches by default", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-05-13T10:14:48Z", "updated_at": "2021-03-05T16:35:21Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi Simon.\r\n\r\nIt seems that datasette currently has wildcard-searches disabled by default (along with the boolean search-options, NEAR-queries and more, and despite the docs). 
If I try out the search-url provided in the [docs](https://datasette.readthedocs.io/en/stable/full_text_search.html#the-table-page-and-table-view-api) (https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort), it does not handle wildcard-searches, and I'm unable to make it work on my datasette-instance.\r\n\r\nI would argue that wildcard-searches are such a standard kind of query that they should be enabled by default. Requiring \"_searchmode=raw\" when using prefix-searches seems unnecessary. Plus: What happens to non-ascii searches when using \"_searchmode=raw\"? Is the \"escape_fts\"-function from datasette.utils ignored?\r\n\r\n\r\nThanks!\r\n\r\n/Claus", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/766/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 621323348, "node_id": "MDU6SXNzdWU2MjEzMjMzNDg=", "number": 24, "title": "Configurable URL for images", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-19T22:25:56Z", "updated_at": "2020-05-20T06:00:29Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "This is hard-coded at the moment, which is bad:\r\nhttps://github.com/dogsheep/photos-to-sqlite/blob/d5d69b9019703c47bc251444838578dd752801e2/photos_to_sqlite/cli.py#L269-L272", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/24/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 621486115, "node_id": "MDU6SXNzdWU2MjE0ODYxMTU=", "number": 27, "title": "photos_with_apple_metadata view should include labels", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-20T06:06:17Z", "updated_at": "2020-05-20T06:06:17Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://dogsheep-photos.dogsheep.net/public/photos_with_apple_metadata?place_city=New+Orleans&_facet=place_city&_facet_array=albums&_facet_array=persons\r\n\r\nHere's one way to add that:\r\n```sql\r\n select\r\n rowid,\r\n photo,\r\n (\r\n select\r\n json_group_array(\r\n json_object(\r\n 'label',\r\n normalized_string,\r\n 'href',\r\n '/photos/labelled?_hide_sql=1&label=' || normalized_string\r\n )\r\n )\r\n from\r\n labels\r\n where\r\n labels.uuid = photos_with_apple_metadata.uuid\r\n ) as labels,\r\n date,\r\n```", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/27/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 624490929, "node_id": "MDU6SXNzdWU2MjQ0OTA5Mjk=", "number": 28, "title": "Invalid SQL no such table: main.uploads", "user": {"value": 41439, 
"label": "dmd"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-25T21:25:39Z", "updated_at": "2020-12-24T22:26:22Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "http://127.0.0.1:8001/photos/photos_with_apple_metadata gives \"Invalid SQL no such table: main.uploads\"", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/28/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 626211658, "node_id": "MDU6SXNzdWU2MjYyMTE2NTg=", "number": 778, "title": "Ability to configure keyset pagination for views and queries", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-28T04:48:56Z", "updated_at": "2020-10-02T02:26:25Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Currently views offer pagination, but it uses offset/limit - e.g. https://latest.datasette.io/fixtures/paginated_view?_next=100\r\n\r\nThis means pagination will perform poorly on deeper pages.\r\n\r\nIf a view is based on a table that has a primary key it should be possible to configure efficient keyset pagination that works the same way that table pagination works.\r\n\r\nThis may be as simple as configuring a column that can be treated as a \"primary key\" for the purpose of pagination using `metadata.json` - or with a `?_view_pk=colname` querystring argument.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/778/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 626582657, "node_id": "MDU6SXNzdWU2MjY1ODI2NTc=", "number": 779, "title": "Make human_description_en explicitly available to output renderers", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-28T14:59:54Z", "updated_at": "2020-05-28T14:59:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "`datasette-atom` uses this:\r\n\r\nhttps://github.com/simonw/datasette-atom/blob/df98a6c43a443224b6cd232f84703ec297ef046b/datasette_atom/__init__.py#L36-L37\r\n```python\r\n if data.get(\"human_description_en\"):\r\n title += \": \" + data[\"human_description_en\"]\r\n```\r\nIt's a nice way to generate a useful title for a filtered table.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/779/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 626593402, "node_id": "MDU6SXNzdWU2MjY1OTM0MDI=", "number": 780, "title": "Internals documentation for datasette.metadata() method", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, 
"assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2020-05-28T15:14:22Z", "updated_at": "2022-03-15T20:50:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/40885ef24e32d91502b6b8bbad1c7376f50f2830/datasette/app.py#L297-L328", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/780/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 628156527, "node_id": "MDU6SXNzdWU2MjgxNTY1Mjc=", "number": 789, "title": "Mechanism for enabling pluggy tracing", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-06-01T05:10:14Z", "updated_at": "2020-06-01T05:11:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Could be useful for debugging plugins: https://pluggy.readthedocs.io/en/latest/#call-tracing\r\n\r\nI tried this out by adding these two lines in `plugins.py`:\r\n```python\r\npm = pluggy.PluginManager(\"datasette\")\r\npm.add_hookspecs(hookspecs)\r\n# Added these:\r\npm.trace.root.setwriter(print)\r\npm.enable_tracing()\r\n```\r\nOutput looked something like this:\r\n```\r\nINFO: 127.0.0.1:52724 - \"GET /-/-/static/app.css HTTP/1.1\" 404 Not Found\r\n actor_from_request [hook]\r\n datasette: \r\n request: \r\n\r\n finish actor_from_request --> [] [hook]\r\n\r\n extra_body_script [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n view_name: json_data\r\n datasette: \r\n\r\n finish extra_body_script --> [] [hook]\r\n\r\n extra_template_vars [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n view_name: json_data\r\n request: \r\n datasette: \r\n\r\n finish extra_template_vars --> [] [hook]\r\n\r\n extra_css_urls [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n datasette: \r\n\r\n finish extra_css_urls --> [] [hook]\r\n\r\n extra_js_urls [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n datasette: \r\n\r\n finish extra_js_urls --> [] [hook]\r\n\r\nINFO: 127.0.0.1:52724 - \"GET /-/actor HTTP/1.1\" 200 OK\r\n actor_from_request [hook]\r\n datasette: \r\n request: \r\n\r\n finish actor_from_request --> [] [hook]\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/789/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 628572716, "node_id": "MDU6SXNzdWU2Mjg1NzI3MTY=", "number": 791, "title": "Tutorial: building a something-interesting with writable canned queries", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-06-01T16:32:05Z", "updated_at": "2020-10-10T23:34:42Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Initial idea: TODO list, as a tutorial for #698 writable canned queries.", "repo": {"value": 107914493, "label": 
"datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/791/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 629473827, "node_id": "MDU6SXNzdWU2Mjk0NzM4Mjc=", "number": 5, "title": "Set up a demo", "user": {"value": 26745575, "label": "harryvederci"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-06-02T19:56:49Z", "updated_at": "2020-09-01T06:18:43Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "First off, thanks for open sourcing this application! This is a suggestion to increase the amount of people that would make use of it: an example in the readme file would help.\r\n\r\nCurrently, users have to clone the app, install it, authorize through pocket, run a command, an then find out if this application does what they hope it does.\r\n\r\nAnother possibility is to add a file `example-output.db`, containing one (mock) Pocket article.\r\n\r\nKeep up the good work!", "repo": {"value": 213286752, "label": "pocket-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 632724154, "node_id": "MDU6SXNzdWU2MzI3MjQxNTQ=", "number": 805, "title": "Writable canned queries live demo on Glitch", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 11, "created_at": "2020-06-06T20:52:13Z", "updated_at": "2020-07-01T22:44:01Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Needs to run somewhere with a mutable disk drive, so not Cloud Run or Heroku or Vercel.\r\n\r\nI think I'll put it on Glitch.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/805/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 634663505, "node_id": "MDU6SXNzdWU2MzQ2NjM1MDU=", "number": 815, "title": "Group permission checks by request on /-/permissions debug page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2020-06-08T14:25:23Z", "updated_at": "2020-12-17T22:06:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Now that we're making a LOT more permission checks (on the DB index page we do a check for every listed table for example) the `/-/permissions` page gets filled up pretty quickly.\r\n\r\nCan make this more readable by grouping permission checks by request. 
Show the most recent request at the top of the page, with the permission checks within each request sorted chronologically, most recent last.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/815/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 636511683, "node_id": "MDU6SXNzdWU2MzY1MTE2ODM=", "number": 830, "title": "Redesign register_facet_classes plugin hook", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2020-06-10T20:03:27Z", "updated_at": "2021-12-16T19:58:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Nothing uses this plugin hook yet, so the design is not yet proven.\r\n\r\nI'm going to build a real plugin against it and use that process to inform any design changes that may need to be made.\r\n\r\nI'll add a warning about this to the documentation.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/830/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 638238548, "node_id": "MDU6SXNzdWU2MzgyMzg1NDg=", "number": 845, "title": "Code coverage should ignore files in .coveragerc", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-06-13T21:45:42Z", "updated_at": "2020-06-13T21:46:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I'm not sure why this is, but the code coverage I have running in a GitHub Action doesn't take my `.coveragerc` file into account. 
It should:\r\n\r\nhttps://github.com/simonw/datasette/blob/cf7a2bdb404734910ec07abc7571351a2d934828/.github/workflows/test-coverage.yml#L31-L35\r\n\r\nHere's the bit that's ignored:\r\n\r\nhttps://github.com/simonw/datasette/blob/cf7a2bdb404734910ec07abc7571351a2d934828/.coveragerc#L1-L2\r\n\r\nAs a result my coverage score is 84%, when it should be 92%:\r\n```\r\n2020-06-13T21:41:18.4404252Z ----------- coverage: platform linux, python 3.8.3-final-0 -----------\r\n2020-06-13T21:41:18.4404570Z Name Stmts Miss Cover\r\n2020-06-13T21:41:18.4404971Z --------------------------------------------------------\r\n2020-06-13T21:41:18.4405227Z datasette/__init__.py 3 0 100%\r\n2020-06-13T21:41:18.4405441Z datasette/__main__.py 3 3 0%\r\n2020-06-13T21:41:18.4405668Z datasette/_version.py 279 279 0%\r\n2020-06-13T21:41:18.4405921Z datasette/actor_auth_cookie.py 20 0 100%\r\n2020-06-13T21:41:18.4406135Z datasette/app.py 499 27 95%\r\n2020-06-13T21:41:18.4406343Z datasette/cli.py 162 45 72%\r\n2020-06-13T21:41:18.4406553Z datasette/database.py 236 17 93%\r\n2020-06-13T21:41:18.4406761Z datasette/default_permissions.py 40 0 100%\r\n2020-06-13T21:41:18.4406975Z datasette/facets.py 210 24 89%\r\n2020-06-13T21:41:18.4407186Z datasette/filters.py 122 7 94%\r\n2020-06-13T21:41:18.4407394Z datasette/hookspecs.py 34 0 100%\r\n2020-06-13T21:41:18.4407600Z datasette/inspect.py 36 23 36%\r\n2020-06-13T21:41:18.4407807Z datasette/plugins.py 34 6 82%\r\n2020-06-13T21:41:18.4408014Z datasette/publish/__init__.py 0 0 100%\r\n2020-06-13T21:41:18.4408240Z datasette/publish/cloudrun.py 57 2 96%\r\n2020-06-13T21:41:18.4408786Z datasette/publish/common.py 19 1 95%\r\n2020-06-13T21:41:18.4409029Z datasette/publish/heroku.py 97 13 87%\r\n2020-06-13T21:41:18.4409243Z datasette/renderer.py 63 4 94%\r\n2020-06-13T21:41:18.4409450Z datasette/sql_functions.py 5 0 100%\r\n2020-06-13T21:41:18.4410480Z datasette/tracer.py 87 16 82%\r\n2020-06-13T21:41:18.4410972Z datasette/utils/__init__.py 504 31 94%\r\n2020-06-13T21:41:18.4411755Z datasette/utils/asgi.py 264 24 91%\r\n2020-06-13T21:41:18.4412173Z datasette/utils/shutil_backport.py 44 44 0%\r\n2020-06-13T21:41:18.4412822Z datasette/version.py 4 0 100%\r\n2020-06-13T21:41:18.4413562Z datasette/views/__init__.py 0 0 100%\r\n2020-06-13T21:41:18.4414276Z datasette/views/base.py 288 19 93%\r\n2020-06-13T21:41:18.4414579Z datasette/views/database.py 120 2 98%\r\n2020-06-13T21:41:18.4414860Z datasette/views/index.py 57 2 96%\r\n2020-06-13T21:41:18.4415379Z datasette/views/special.py 72 16 78%\r\n2020-06-13T21:41:18.4418994Z datasette/views/table.py 418 18 96%\r\n2020-06-13T21:41:18.4428811Z --------------------------------------------------------\r\n2020-06-13T21:41:18.4430394Z TOTAL 3777 623 84%\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/845/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 639542974, "node_id": "MDU6SXNzdWU2Mzk1NDI5NzQ=", "number": 47, "title": "Fall back to FTS4 if FTS5 is not available", "user": {"value": 73579, "label": "hpk42"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-06-16T10:11:23Z", "updated_at": "2020-06-17T20:13:48Z", "closed_at": null, "author_association": "NONE", "pull_request": null, 
"body": "got this with version 0.21.1 from pypi. twitter-to-sqlite auth worked but then \"twitter-to-sqlite user-timeline USER.db\" produced a tracekback ending in \"no such module: FTS5\". ", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 639993467, "node_id": "MDU6SXNzdWU2Mzk5OTM0Njc=", "number": 850, "title": "Proof of concept for Datasette on AWS Lambda with EFS", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 25, "created_at": "2020-06-16T21:48:31Z", "updated_at": "2020-06-16T23:52:16Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://aws.amazon.com/about-aws/whats-new/2020/06/aws-lambda-support-for-amazon-elastic-file-system-now-generally-/\r\n\r\nIf Datasette can run on Lambda with access to EFS it could both read AND write large databases there.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/850/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 642296989, "node_id": "MDU6SXNzdWU2NDIyOTY5ODk=", "number": 856, "title": "Consider pagination of canned queries", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-06-20T03:15:59Z", "updated_at": "2021-05-21T14:22:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The new `canned_queries()` plugin hook from #852 combined with plugins like https://github.com/simonw/datasette-saved-queries could mean that some installations end up with hundreds or even thousands of canned queries. 
I should consider pagination or some other way of ensuring that this doesn't cause performance problems for Datasette.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/856/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 642388564, "node_id": "MDU6SXNzdWU2NDIzODg1NjQ=", "number": 858, "title": "publish heroku does not work on Windows 10", "user": {"value": 870912, "label": "simonlau"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-06-20T14:40:28Z", "updated_at": "2021-06-10T17:44:09Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When executing \"datasette publish heroku schools.db\" on Windows 10, I get the following error\r\n\r\n```shell\r\n File \"c:\\users\\dell\\.virtualenvs\\sec-schools-jn-cwk8z\\lib\\site-packages\\datasette\\publish\\heroku.py\", line 54, in heroku\r\n line.split()[0] for line in check_output([\"heroku\", \"plugins\"]).splitlines()\r\n File \"c:\\python38\\lib\\subprocess.py\", line 411, in check_output\r\n return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,\r\n File \"c:\\python38\\lib\\subprocess.py\", line 489, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"c:\\python38\\lib\\subprocess.py\", line 854, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"c:\\python38\\lib\\subprocess.py\", line 1307, in _execute_child\r\n hp, ht, pid, tid = _winapi.CreateProcess(executable, args,\r\nFileNotFoundError: [WinError 2] The system cannot find the file specified\r\n```\r\n\r\nChanging https://github.com/simonw/datasette/blob/55a6ffb93c57680e71a070416baae1129a0243b8/datasette/publish/heroku.py#L54\r\n\r\nto \r\n\r\n```python\r\nline.split()[0] for line in check_output([\"heroku\", \"plugins\"], shell=True).splitlines()\r\n```\r\n\r\nas well as the other `check_output()` and `call()` within the same file leads me to another recursive error about temp files", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/858/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 642572841, "node_id": "MDU6SXNzdWU2NDI1NzI4NDE=", "number": 859, "title": "Database page loads too slowly with many large tables (due to table counts)", "user": {"value": 3243482, "label": "abdusco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 21, "created_at": "2020-06-21T14:23:17Z", "updated_at": "2021-08-25T21:59:55Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hey,\r\nI have a database that I save in HTML from couple of web scrapers. There are around 200k+, 50+ rows in a couple of tables, with sqlite file weighing around 600MB.\r\n\r\nThe app runs on a VPS with 2 core CPU, 4GB RAM and refreshing database page regularly takes more than 10 seconds. 
I was suspecting that counting tables was the culprit, but manually running `select count(*) from table_name` for the largest table finishes under a second.\r\n\r\nI've looked at the source code. There's a check for index page for mutable databases larger than 100MB\r\nhttps://github.com/simonw/datasette/blob/799c5d53570d773203527f19530cf772dc2eeb24/datasette/views/index.py#L15\r\n\r\nbut this check is not performed for database page. \r\nI've manually crippled `Database::table_counts` method\r\n```py\r\nasync def table_counts(self, limit=10):\r\n if not self.is_mutable and self.cached_table_counts is not None:\r\n return self.cached_table_counts\r\n # Try to get counts for each table, $limit timeout for each count\r\n counts = {}\r\n for table in await self.table_names():\r\n try:\r\n # table_count = (\r\n # await self.execute(\r\n # \"select count(*) from [{}]\".format(table),\r\n # custom_time_limit=limit,\r\n # )\r\n # ).rows[0][0]\r\n counts[table] = 10 # table_count\r\n # In some cases I saw \"SQL Logic Error\" here in addition to\r\n # QueryInterrupted - so we catch that too:\r\n except (QueryInterrupted, sqlite3.OperationalError, sqlite3.DatabaseError):\r\n counts[table] = None\r\n if not self.is_mutable:\r\n self.cached_table_counts = counts\r\n return counts\r\n```\r\n\r\nnow the page loads in <100ms.\r\n\r\nIs it possible to apply size check on database page too?\r\n\r\n
\r\n\r\n/-/versions output\r\n\r\n
\r\n{\r\n    \"python\": {\r\n        \"version\": \"3.8.0\",\r\n        \"full\": \"3.8.0 (default, Oct 28 2019, 16:14:01) \\n[GCC 8.3.0]\"\r\n    },\r\n    \"datasette\": {\r\n        \"version\": \"0.44\"\r\n    },\r\n    \"asgi\": \"3.0\",\r\n    \"uvicorn\": \"0.11.5\",\r\n    \"sqlite\": {\r\n        \"version\": \"3.22.0\",\r\n        \"fts_versions\": [\r\n            \"FTS5\",\r\n            \"FTS4\",\r\n            \"FTS3\"\r\n        ],\r\n        \"extensions\": {\r\n            \"json1\": null\r\n        },\r\n        \"compile_options\": [\r\n            \"COMPILER=gcc-7.4.0\",\r\n            \"ENABLE_COLUMN_METADATA\",\r\n            \"ENABLE_DBSTAT_VTAB\",\r\n            \"ENABLE_FTS3\",\r\n            \"ENABLE_FTS3_PARENTHESIS\",\r\n            \"ENABLE_FTS3_TOKENIZER\",\r\n            \"ENABLE_FTS4\",\r\n            \"ENABLE_FTS5\",\r\n            \"ENABLE_JSON1\",\r\n            \"ENABLE_LOAD_EXTENSION\",\r\n            \"ENABLE_PREUPDATE_HOOK\",\r\n            \"ENABLE_RTREE\",\r\n            \"ENABLE_SESSION\",\r\n            \"ENABLE_STMTVTAB\",\r\n            \"ENABLE_UNLOCK_NOTIFY\",\r\n            \"ENABLE_UPDATE_DELETE_LIMIT\",\r\n            \"HAVE_ISNAN\",\r\n            \"LIKE_DOESNT_MATCH_BLOBS\",\r\n            \"MAX_SCHEMA_RETRY=25\",\r\n            \"MAX_VARIABLE_NUMBER=250000\",\r\n            \"OMIT_LOOKASIDE\",\r\n            \"SECURE_DELETE\",\r\n            \"SOUNDEX\",\r\n            \"TEMP_STORE=1\",\r\n            \"THREADSAFE=1\"\r\n        ]\r\n    }\r\n}\r\n
\r\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/859/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 643510821, "node_id": "MDU6SXNzdWU2NDM1MTA4MjE=", "number": 862, "title": "Set an upper limit on total facet suggestion time for a page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-06-23T03:57:55Z", "updated_at": "2020-06-23T03:58:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "If a table has 100 columns the facet suggestion code will currently run 100 times, taking a max of `facet_suggest_time_limit_ms` which defaults to 50ms per column:\r\n\r\nhttps://github.com/simonw/datasette/blob/000528192eaf891118932250141dabe7a1561ece/datasette/facets.py#L142-L162\r\n\r\nSo for 100 columns, that's 100 * 50ms = 5s total time that might be spent attempting to calculate facets on a large table!\r\n\r\nI should implement a hard upper limit on the total amount of time taken suggesting facets - probably of around 500ms. If it takes longer than that the remaining columns will not be considered.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/862/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 644161221, "node_id": "MDU6SXNzdWU2NDQxNjEyMjE=", "number": 117, "title": "Support for compound (composite) foreign keys", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-06-23T21:33:42Z", "updated_at": "2020-06-23T21:40:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It turns out SQLite supports composite foreign keys: https://www.sqlite.org/foreignkeys.html#fk_composite\r\n\r\nTheir example looks like this:\r\n```sql\r\nCREATE TABLE album(\r\n albumartist TEXT,\r\n albumname TEXT,\r\n albumcover BINARY,\r\n PRIMARY KEY(albumartist, albumname)\r\n);\r\n\r\nCREATE TABLE song(\r\n songid INTEGER,\r\n songartist TEXT,\r\n songalbum TEXT,\r\n songname TEXT,\r\n FOREIGN KEY(songartist, songalbum) REFERENCES album(albumartist, albumname)\r\n);\r\n```\r\n\r\nHere's what that looks like in sqlite-utils:\r\n\r\n```\r\nIn [1]: import sqlite_utils \r\n\r\nIn [2]: import sqlite3 \r\n\r\nIn [3]: conn = sqlite3.connect(\":memory:\") \r\n\r\nIn [4]: conn \r\nOut[4]: \r\n\r\nIn [5]: conn.executescript(\"\"\" \r\n ...: CREATE TABLE album( \r\n ...: albumartist TEXT, \r\n ...: albumname TEXT, \r\n ...: albumcover BINARY, \r\n ...: PRIMARY KEY(albumartist, albumname) \r\n ...: ); \r\n ...: \r\n ...: CREATE TABLE song( \r\n ...: songid INTEGER, \r\n ...: songartist TEXT, \r\n ...: songalbum TEXT, \r\n ...: songname TEXT, \r\n ...: FOREIGN KEY(songartist, songalbum) REFERENCES album(albumartist, albumname) \r\n ...: ); \r\n ...: \"\"\") \r\nOut[5]: \r\n\r\nIn [6]: db = sqlite_utils.Database(conn) \r\n\r\nIn [7]: db.tables \r\nOut[7]: \r\n[,\r\n
 <Table song (songid, songartist, songalbum, songname)>]\r\n\r\nIn [8]: db.tables[0].foreign_keys \r\nOut[8]: []\r\n\r\nIn [9]: db.tables[1].foreign_keys \r\nOut[9]: \r\n[ForeignKey(table='song', column='songartist', other_table='album', other_column='albumartist'),\r\n ForeignKey(table='song', column='songalbum', other_table='album', other_column='albumname')]\r\n```\r\nThe table appears to have two separate foreign keys, when actually it has a single compound (composite) foreign key.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/117/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 646448486, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQwNzM1ODE0", "number": 868, "title": "initial windows ci setup", "user": {"value": 702729, "label": "joshmgrant"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-06-26T18:49:13Z", "updated_at": "2021-07-10T23:41:43Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/868", "body": "Picking up the work done on #557 with a new PR. Seeing if I can get this working.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/868/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 646737558, "node_id": "MDU6SXNzdWU2NDY3Mzc1NTg=", "number": 870, "title": "Refactor default views to use register_routes", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-06-27T18:53:12Z", "updated_at": "2022-03-15T20:07:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It would be much cleaner if Datasette's default views were all registered using the new `register_routes()` plugin hook. 
Could dramatically reduce the code in `datasette/app.py`.\r\n\r\n> The ideal fix here would be to rework my `BaseView` subclass mechanism to work with `register_routes()` so that those views don't have any special privileges above plugin-provided views.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/864#issuecomment-648580556_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/870/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 647095487, "node_id": "MDU6SXNzdWU2NDcwOTU0ODc=", "number": 873, "title": "\"datasette -p 0 --root\" gives the wrong URL", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 14, "created_at": "2020-06-29T04:03:06Z", "updated_at": "2020-08-18T17:26:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "```\r\n$ datasette -p 0 --root\r\nhttp://127.0.0.1:0/-/auth-token?token=2d498c...\r\n```\r\nThe port is incorrect.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/873/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 648435885, "node_id": "MDU6SXNzdWU2NDg0MzU4ODU=", "number": 878, "title": "New pattern for views that return either JSON or HTML, available for plugins", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 26, "created_at": "2020-06-30T19:26:13Z", "updated_at": "2022-03-19T16:19:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Can be part of #870 - refactoring existing views to use `register_routes()`.\r\n\r\n> I'm going to put the new `check_permissions()` method on `BaseView` as well. If I want that method to be available to plugins I can do so by turning that `BaseView` class into a documented API that plugins are encouraged to use themselves.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/832#issuecomment-651995453_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/878/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 648659536, "node_id": "MDU6SXNzdWU2NDg2NTk1MzY=", "number": 881, "title": "Figure out why restore_working_directory is needed in some places", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-01T04:19:25Z", "updated_at": "2020-07-01T04:19:25Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is a frustrating workaround. 
I have a `restore_working_directory` fixture that I wrote to solve errors that look like this:\r\n```\r\n/Users/simon/Dropbox/Development/datasette/tests/test_publish_cloudrun.py:148: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py:112: in __enter__\r\n return next(self.gen)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <click.testing.CliRunner object at 0x...>\r\n\r\n @contextlib.contextmanager\r\n def isolated_filesystem(self):\r\n \"\"\"A context manager that creates a temporary folder and changes\r\n the current working directory to it for isolated filesystem tests.\r\n \"\"\"\r\n> cwd = os.getcwd()\r\nE FileNotFoundError: [Errno 2] No such file or directory\r\n```\r\nHere's an example of it in use: removing the `restore_working_directory` argument from this function causes the failure. https://github.com/simonw/datasette/blob/549b1c2063db48c4622ee5c7b478a1e3cbc1ac07/tests/test_plugins.py#L689-L690\r\n\r\nI'd like to not have to do this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/881/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 648749062, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQyNTA1MDg4", "number": 883, "title": "Skip counting hidden tables", "user": {"value": 3243482, "label": "abdusco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-07-01T07:38:08Z", "updated_at": "2020-07-02T00:25:44Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/883", "body": "Potential fix for https://github.com/simonw/datasette/issues/859.\r\n\r\nDisabling table counts for hidden tables speeds up the database page quite a bit. 
In my setup it reduced load time by about two-thirds (~300ms -> ~90ms)", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/883/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 649429772, "node_id": "MDU6SXNzdWU2NDk0Mjk3NzI=", "number": 886, "title": "Reconsider how _actor_X magic parameter deals with missing values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-07-02T00:00:38Z", "updated_at": "2020-09-11T21:35:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I had to build a custom `_actorornull` prefix for [datasette-saved-queries](https://github.com/simonw/datasette-saved-queries/blob/37c00e56ac398e1f9aa342d30357de013a9b37b4/datasette_saved_queries/__init__.py):\r\n```python\r\ndef actorornull(key, request):\r\n if request.actor is None:\r\n return None\r\n return request.actor.get(key)\r\n\r\n\r\n@hookimpl\r\ndef register_magic_parameters():\r\n return [\r\n (\"actorornull\", actorornull),\r\n ]\r\n```\r\nMaybe the `actor` magic in Datasette core should do that out of the box?\r\n\r\nhttps://github.com/simonw/datasette/blob/f1f581b7ffcd5d8f3ae6c1c654d813a6641410eb/datasette/default_magic_parameters.py#L14-L17\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/886/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 652961907, "node_id": "MDU6SXNzdWU2NTI5NjE5MDc=", "number": 121, "title": "Improved (and better documented) support for transactions", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-07-08T04:56:51Z", "updated_at": "2020-09-24T20:36:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655283393_\r\n\r\nWe should put some thought into how this library supports and encourages smart use of transactions.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/121/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 655974395, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQ4MzU1Njgw", "number": 30, "title": "Handle empty bucket on first upload. 
Allow specifying the endpoint_url for services other than S3 (like b2 and digitalocean spaces)", "user": {"value": 110038, "label": "scanner"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-13T16:15:26Z", "updated_at": "2020-07-13T16:15:26Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep-photos/pulls/30", "body": "Finally got around to trying dogsheep-photos but I want to use backblaze's b2 service instead of AWS S3.\r\nHad to add a way to optionally specify the endpoint_url to connect to. Then with the bucket being empty the initial key retrieval would fail. There's probably a better way to see that the bucket is empty than doing a test inside the paginator loop.\r\n\r\nAlso, there's probably a better way to specify the endpoint_url, as we get and test for it twice using the same code in two different places, but I did not want to spend too much time worrying about it.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/30/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 657572753, "node_id": "MDU6SXNzdWU2NTc1NzI3NTM=", "number": 894, "title": "?sort=colname~numeric to sort by column cast to real", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 21, "created_at": "2020-07-15T18:47:48Z", "updated_at": "2021-08-20T02:07:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "If a text column actually contains numbers, being able to \"sort by column, treated as numeric\" would be really useful.\r\n\r\nProbably depends on column actions enabled by #690", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/894/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 659873662, "node_id": "MDU6SXNzdWU2NTk4NzM2NjI=", "number": 898, "title": "datasette.utils.testing module", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-07-18T03:53:24Z", "updated_at": "2020-07-18T03:57:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The unit tests for plugins could benefit from reusing code from Datasette's own testing fixtures, e.g.:\r\n> I may need to borrow this function from Datasette for the tests:\r\n> https://github.com/simonw/datasette/blob/1f6a134369e6a7efaae9db469f15b1dd2b7f3709/tests/fixtures.py#L836-L851\r\n> \r\n> It's not importable (it lives in `fixtures.py` and not in the `datasette` package that gets packaged for PyPI) - maybe I should fix that in Datasette by adding a `from datasette.utils.testing` module.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-update-api/issues/4#issuecomment-660419182_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": 
"{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/898/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 664485022, "node_id": "MDU6SXNzdWU2NjQ0ODUwMjI=", "number": 46, "title": "Feature: pull request reviews and comments", "user": {"value": 1326704, "label": "bhrutledge"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2020-07-23T13:43:45Z", "updated_at": "2022-12-20T14:40:15Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi there! I saw your [presentation at Boston Python](https://www.meetup.com/bostonpython/events/271887195). I'm already a light user of Datasette (thank you!), but wasn't aware of this project.\r\n\r\nI've been working on a \"pull request dashboard\" to get a comprehensive view of the state of open PR's, esp. related to reviews (i.e., pending, approved, changes requested). Currently it's a CLI command, but I thought a Datasette UI might be fun.\r\n\r\nI see that PR's are available from the `issues` command, but I don't see reviews anywhere. From the [API docs](https://docs.github.com/en/rest/reference/pulls#reviews), it looks like there are separate endpoints for those (as well as pull requests in general). What do you think about adding that? Would you accept a PR? Any sense of the level of effort?", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/46/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 664793260, "node_id": "MDU6SXNzdWU2NjQ3OTMyNjA=", "number": 2, "title": "Yak shave", "user": {"value": 145425, "label": "ekg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-23T22:04:18Z", "updated_at": "2020-07-23T22:04:18Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Just a quick note... The 23andme data is not exactly your genome, but a SNP chip of your genome. It's \"some of your genotypes.\" Or about 0.1% of your genome. Nice work in any case! It deserves to be liberated!!!!!", "repo": {"value": 209590345, "label": "genome-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/2/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 668064026, "node_id": "MDU6SXNzdWU2NjgwNjQwMjY=", "number": 911, "title": "Rethink the --name option to \"datasette publish\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2020-07-29T18:49:49Z", "updated_at": "2020-07-29T18:49:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "`--name` works inconsistently across the different publish providers - on Cloud Run you should use `--service` instead for example. 
Need to review it across all of them and either remove it or clarify what it does.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/911/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 670209331, "node_id": "MDU6SXNzdWU2NzAyMDkzMzE=", "number": 913, "title": "Mechanism for passing additional options to `datasette my.db` that affect plugins", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2020-07-31T20:38:26Z", "updated_at": "2021-01-04T20:04:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> It's a shame there's no obvious mechanism for passing additional options to `datasette my.db` that affect how plugins work.\r\n>\r\n>The only way I can think of at the moment is via environment variables:\r\n>\r\n> DATASETTE_INSERT_UNSAFE=1 datasette my.db\r\n>\r\n>This will have to do for the moment - it's ugly enough that people will at least know they are doing something unsafe, which is the goal here.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-insert/issues/15#issuecomment-667346438_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/913/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 672421411, "node_id": "MDU6SXNzdWU2NzI0MjE0MTE=", "number": 916, "title": "Support reverse pagination (previous page, has-previous-items)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-08-04T00:32:06Z", "updated_at": "2021-04-03T23:43:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I need this for `datasette-graphql` for full compatibility with the way Relay likes to paginate - using cursors for paginating backwards as well as for paginating forwards.\r\n\r\n> This may be the kick I need to get Datasette pagination to work in reverse too.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-graphql/issues/2#issuecomment-668305853_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/916/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 673602857, "node_id": "MDU6SXNzdWU2NzM2MDI4NTc=", "number": 9, "title": "Define a view that displays photos correctly", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-08-05T14:53:39Z", "updated_at": "2020-08-05T14:53:39Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The `photos` table stores data like this:\r\n\r\nid | createdAt | source | prefix | 
suffix | width | height | visibility | created\u00a0\u25b2 | user\r\n-- | -- | -- | -- | -- | -- | -- | -- | -- | --\r\n5e12c9708506bc000840262a | January 06, 2020 - 05:45:20 UTC | Swarm for iOS\u00a01 | https://fastly.4sqi.net/img/general/ | /15889193_AXxGk4I1nbzUZuyYqObgbXdJNyEHiwj6AUDq0tPZWtw.jpg | 1920 | 1440 | public | 2020-01-06T05:45:20 | 15889193\r\n\r\nThe photo URL can be derived from those pieces - define a SQL view which does that (using `datasette-json-html` to display the pictures)", "repo": {"value": 205429375, "label": "swarm-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/9/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 675594325, "node_id": "MDU6SXNzdWU2NzU1OTQzMjU=", "number": 917, "title": "Idea: \"datasette publish\" option for \"only if the data has changed\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-08-08T21:58:27Z", "updated_at": "2020-08-08T21:58:27Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is a pattern I often find myself needing. I usually implement this in GitHub Actions like this:\r\n\r\nhttps://github.com/simonw/covid-19-datasette/blob/efa01c39abc832b8641fc2a92840cc3acae2fb08/.github/workflows/scheduled.yml#L52-L63\r\n\r\n```yaml\r\n - name: Set variables to decide if we should deploy\r\n id: decide_variables\r\n run: |-\r\n echo \"##[set-output name=latest;]$(datasette inspect covid.db | jq '.covid.hash' -r)\"\r\n echo \"##[set-output name=deployed;]$(curl -s https://covid-19.datasettes.com/-/databases.json | jq '.[0].hash' -r)\"\r\n - name: Set up Cloud Run\r\n if: github.event_name == 'workflow_dispatch' || steps.decide_variables.outputs.latest != steps.decide_variables.outputs.deployed\r\n uses: GoogleCloudPlatform/github-actions/setup-gcloud@master\r\n```\r\nThis is pretty fiddly. 
It might be good for `datasette publish` to grow a helper option that does effectively this - hashes the databases (and the `metadata.json`) and compares them to the deployed version.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/917/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 675753042, "node_id": "MDU6SXNzdWU2NzU3NTMwNDI=", "number": 131, "title": "sqlite-utils insert: options for column types", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2020-08-09T18:59:11Z", "updated_at": "2022-03-15T13:21:42Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The `insert` command currently results in string types for every column - at least when used against CSV or TSV inputs.\r\n\r\nIt would be useful if you could do the following:\r\n\r\n- automatically detects the column types based on eg the first 1000 records\r\n- explicitly state the rule for specific columns\r\n\r\n`--detect-types` could work for the former - or it could do that by default and allow opt-out using `--no-detect-types`\r\n\r\nFor specific columns maybe this:\r\n\r\n sqlite-utils insert db.db images images.tsv \\\r\n --tsv \\\r\n -c id int \\\r\n -c score float", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/131/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 678760988, "node_id": "MDU6SXNzdWU2Nzg3NjA5ODg=", "number": 932, "title": "End-user documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 6, "created_at": "2020-08-13T22:04:39Z", "updated_at": "2022-03-08T15:20:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette's documentation is aimed at people who install and configure it.\r\n\r\nWhat about end users of preconfigured and deployed Datasette instances?\r\n\r\nSomething that can be linked to from the Datasette UI would be really useful.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/932/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 687694947, "node_id": "MDU6SXNzdWU2ODc2OTQ5NDc=", "number": 954, "title": "Remove old register_output_renderer dict mechanism in Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2020-08-28T04:04:23Z", "updated_at": "2020-08-28T04:56:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Documentation says that the 
old dictionary mechanism will be deprecated by 1.0:\r\n> \r\n> https://github.com/simonw/datasette/blob/799ecae94824640bdff21f86997f69844048d5c3/docs/plugin_hooks.rst#L460\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/953#issuecomment-682312494_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/954/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 688351054, "node_id": "MDU6SXNzdWU2ODgzNTEwNTQ=", "number": 140, "title": "Idea: insert-files mechanism for adding extra columns with fixed values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-08-28T20:57:36Z", "updated_at": "2022-03-20T19:45:45Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Say for example you want to populate a `file_type` column with the value `gif`. That could work like this:\r\n\r\n```\r\nsqlite-utils insert-files gifs.db images *.gif \\\r\n -c path -c md5 -c last_modified:mtime \\\r\n -c file_type:text:gif --pk=path\r\n```\r\nSo a column defined as a `text` column with a value that follows a second colon.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/140/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 688352145, "node_id": "MDU6SXNzdWU2ODgzNTIxNDU=", "number": 141, "title": "insert-files support for compressed values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-08-28T20:59:46Z", "updated_at": "2020-09-24T20:36:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The `sqlar` format supports this, it would be useful if `insert-files` could support this too.\r\n\r\nhttps://www.sqlite.org/sqlar.html", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/141/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 688670158, "node_id": "MDU6SXNzdWU2ODg2NzAxNTg=", "number": 147, "title": "SQLITE_MAX_VARS maybe hard-coded too low", "user": {"value": 96218, "label": "simonwiles"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-08-30T07:26:45Z", "updated_at": "2021-02-15T21:27:55Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I came across this while about to open an issue and PR against the documentation for `batch_size`, which is a bit incomplete.\r\n\r\nAs mentioned in #145, while:\r\n\r\n> [`SQLITE_MAX_VARIABLE_NUMBER`](https://www.sqlite.org/limits.html#max_variable_number) ... 
defaults to 999 for SQLite versions prior to 3.32.0 (2020-05-22) or 32766 for SQLite versions after 3.32.0.\r\n\r\nit is common that it is increased at compile time. Debian-based systems, for example, seem to ship with a version of sqlite compiled with SQLITE_MAX_VARIABLE_NUMBER set to 250,000, and I believe this is the case for homebrew installations too.\r\n\r\nIn working to understand what `batch_size` was actually doing and why, I realized that by setting `SQLITE_MAX_VARS` in `db.py` to match the value my sqlite was compiled with (I'm on Debian), I was able to decrease the time to `insert_all()` my test data set (~128k records across 7 tables) from ~26.5s to ~3.5s. Given that this is about 0.05% of my total dataset, this is time I am keen to save...\r\n\r\nUnfortunately, it seems that `sqlite3` in the python standard library doesn't expose the `get_limit()` C API (even though `pysqlite` used to), so it's hard to know what value sqlite has been compiled with (note that this could mean, I suppose, that it's less than 999, and even hardcoding `SQLITE_MAX_VARS` to the conservative default might not be adequate. It can also be lowered -- but not raised -- at runtime). The best I could come up with is `echo \"\" | sqlite3 -cmd \".limits variable_number\"` (only available in `sqlite >= 2015-05-07 (3.8.10)`).\r\n\r\nObviously this couldn't be relied upon in `sqlite_utils`, but I wonder what your opinion would be about exposing `SQLITE_MAX_VARS` as a user-configurable parameter (with suitable \"here be dragons\" warnings)? I'm going to go ahead and monkey-patch it for my purposes in any event, but it seems like it might be worth considering.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/147/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 689848827, "node_id": "MDU6SXNzdWU2ODk4NDg4Mjc=", "number": 6, "title": "ISO timestamps", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-01T06:16:42Z", "updated_at": "2020-09-01T06:16:42Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The `time_added`, `time_updated` and `time_read` columns currently store data like this:\r\n\r\n September 19, 2019 - 00:30:30 UTC\r\n\r\nShould use ISO instead, e.g. 
`2020-07-26T01:05:24+00:00`", "repo": {"value": 213286752, "label": "pocket-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 689850810, "node_id": "MDU6SXNzdWU2ODk4NTA4MTA=", "number": 6, "title": "Set up a demo instance", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-01T06:20:24Z", "updated_at": "2020-09-01T06:20:24Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Once I've got the Datasette plugin to a state where it's worth building a demo: #3\r\n\r\nI can use data from my public https://github-to-sqlite.dogsheep.net/ demo plus the Pocket data subset I use for the demo in https://github.com/dogsheep/pocket-to-sqlite/issues/5 - I could pull in the https://dogsheep-photos.dogsheep.net/ photos data too.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 691537426, "node_id": "MDU6SXNzdWU2OTE1Mzc0MjY=", "number": 959, "title": "Internals API idea: results.dicts in addition to results.rows", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-03T00:50:17Z", "updated_at": "2020-09-03T00:50:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just wrote this code:\r\n```python\r\n results = await database.execute(SEARCH_SQL, {\"query\": query})\r\n return [dict(r) for r in results.rows]\r\n```\r\nHow about having `results.dicts` as a utility property that does that?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/959/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 692202408, "node_id": "MDU6SXNzdWU2OTIyMDI0MDg=", "number": 12, "title": "Idea: maps and GeoJSON support", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-03T18:47:10Z", "updated_at": "2020-09-04T01:45:03Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "It would be cool if the `display_sql` could return a column populated with GeoJSON which would then automatically be displayed on a map in the results (or maybe default JS would look for a `class=\"geojson\"` element output by the `display` template) - ala https://github.com/simonw/datasette-leaflet-geojson\r\n\r\nThen I could render workout routes on a map, or Swarm checkin points.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": 
null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 694136490, "node_id": "MDU6SXNzdWU2OTQxMzY0OTA=", "number": 15, "title": "Add a bunch of config examples", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-09-05T17:58:43Z", "updated_at": "2020-09-18T23:17:39Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "I can bring these over from my personal Dogsheep.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/15/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 694493566, "node_id": "MDU6SXNzdWU2OTQ0OTM1NjY=", "number": 16, "title": "Timeline view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-09-06T19:13:58Z", "updated_at": "2020-09-21T02:42:29Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Ability to browse (and facet) by date.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 695441530, "node_id": "MDU6SXNzdWU2OTU0NDE1MzA=", "number": 154, "title": "OperationalError: cannot change into wal mode from within a transaction", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-09-07T23:42:44Z", "updated_at": "2020-09-07T23:47:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I'm getting this error when running:\r\n\r\n sqlite-utils enable-wal beta.db\r\n\r\n`OperationalError: cannot change into wal mode from within a transaction`\r\n\r\nI'm worried that maybe that's because of this new code from #152:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/deb2eb013ff85bbc828ebc244a9654f0d9c3139e/sqlite_utils/db.py#L128-L129", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/154/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 695553522, "node_id": "MDU6SXNzdWU2OTU1NTM1MjI=", "number": 18, "title": "Deleted records stay in the search index", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-09-08T05:14:23Z", "updated_at": "2020-09-08T05:15:51Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, 
"body": "Here's why: https://github.com/dogsheep/dogsheep-beta/blob/24f7898d41a39218058f174c75ba62f7c0fcfff6/dogsheep_beta/utils.py#L44-L53\r\n\r\nThat should probably do `DELETE FROM index1.search_index WHERE [table] = ?` first.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/18/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 695556681, "node_id": "MDU6SXNzdWU2OTU1NTY2ODE=", "number": 19, "title": "Figure out incremental re-indexing", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-09-08T05:23:31Z", "updated_at": "2020-09-08T05:27:07Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "As tables get bigger reindexing everything on a schedule (essentially recreating the entire index from scratch) will start to become a performance bottleneck.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/19/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 696908389, "node_id": "MDU6SXNzdWU2OTY5MDgzODk=", "number": 961, "title": "Verification checks for metadata.json on startup", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-09-09T15:21:53Z", "updated_at": "2020-09-09T15:24:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I lost a bunch of time yesterday trying to figure out why a Datasette instance wasn't starting up - it turned out it was because I had a `facets:` reference that mentioned a column that did not exist.\r\n\r\nCatching these on startup would be good.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/961/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 697162939, "node_id": "MDU6SXNzdWU2OTcxNjI5Mzk=", "number": 20, "title": "Add more tags so people can find your project.", "user": {"value": 7902810, "label": "ran88dom99"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-09T21:14:09Z", "updated_at": "2020-09-09T21:14:09Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "quantified-self habit-tracking google-fit time-tracking wearables quantifiedself \r\nfor example", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/20/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 1, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", 
"draft": null, "state_reason": null} {"id": 698791218, "node_id": "MDU6SXNzdWU2OTg3OTEyMTg=", "number": 50, "title": "favorites --stop_after=N stops after min(N, 200)", "user": {"value": 370930, "label": "mikepqr"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-09-11T03:38:14Z", "updated_at": "2020-09-13T05:11:14Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "For any number greater than 200, `favorites --stop_after` stops after getting 200 tweets, e.g.\r\n```\r\n$ twitter-to-sqlite favorites tweets.db --stop_after=300\r\nImporting favorites [####################################] 199\r\n$\r\n```\r\nI don't _think_ this is a limitation of the API (if you omit `--stop_after` you get some very large number, possibly all of them), so I _think_ this is a bug.", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 702386948, "node_id": "MDU6SXNzdWU3MDIzODY5NDg=", "number": 159, "title": ".delete_where() does not auto-commit (unlike .insert() or .upsert())", "user": {"value": 11712349, "label": "spdkils"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2020-09-16T01:55:52Z", "updated_at": "2023-04-01T17:21:05Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When you use the delete_where() function on a table, it never commits....\r\n\r\nIs that intentional?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/159/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 703216044, "node_id": "MDU6SXNzdWU3MDMyMTYwNDQ=", "number": 49, "title": "Feature: gists and starred gists", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-17T02:30:52Z", "updated_at": "2020-09-17T02:30:52Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://developer.github.com/v3/gists/#list-starred-gists", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/49/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 703218448, "node_id": "MDU6SXNzdWU3MDMyMTg0NDg=", "number": 51, "title": "Documentation for twitter-to-sqlite fetch", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-17T02:38:10Z", "updated_at": "2020-09-17T02:38:10Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "It's mentioned in passing in the README but it 
deserves its own section:\r\n```\r\n$ twitter-to-sqlite fetch \\\r\n \"https://api.twitter.com/1.1/account/verify_credentials.json\" \\\r\n | grep '\"id\"' | head -n 1\r\n```", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/51/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 703218756, "node_id": "MDU6SXNzdWU3MDMyMTg3NTY=", "number": 50, "title": "Commands for making authenticated API calls", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-09-17T02:39:07Z", "updated_at": "2020-10-19T05:01:29Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Similar to `twitter-to-sqlite fetch`, see https://github.com/dogsheep/twitter-to-sqlite/issues/51", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/50/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 703246031, "node_id": "MDU6SXNzdWU3MDMyNDYwMzE=", "number": 51, "title": "github-to-sqlite should handle rate limits better", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-09-17T04:01:50Z", "updated_at": "2022-10-14T16:34:07Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "From #50 - right now it will crash with an error if it hits the rate limit. 
Since the rate limit information (including reset time) is available in the headers it could automatically sleep and try again instead.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 705215230, "node_id": "MDU6SXNzdWU3MDUyMTUyMzA=", "number": 26, "title": "Pagination", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-09-21T00:14:37Z", "updated_at": "2020-09-21T02:55:54Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Useful for #16 (timeline view) since you can now filter to just the items on a specific day - but if there are more than 50 items you can't see them all.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/26/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 705840673, "node_id": "MDU6SXNzdWU3MDU4NDA2NzM=", "number": 972, "title": "Support faceting against arbitrary SQL queries", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-09-21T19:00:43Z", "updated_at": "2021-12-15T18:02:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> ... 
support for running facets against arbitrary custom SQL queries is half-done in that facets now execute against wrapped subqueries as-of ea66c45df96479ef66a89caa71fff1a97a862646\r\n> \r\n> https://github.com/simonw/datasette/blob/ea66c45df96479ef66a89caa71fff1a97a862646/datasette/facets.py#L192-L200\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/971#issuecomment-696307922_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/972/reactions\", \"total_count\": 3, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 3, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 706001517, "node_id": "MDU6SXNzdWU3MDYwMDE1MTc=", "number": 163, "title": "Idea: conversions= could take Python functions", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-09-22T00:37:12Z", "updated_at": "2021-12-20T00:56:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Right now you use `conversions=` like this:\r\n\r\n```python\r\ndb[\"example\"].insert({\r\n \"name\": \"The Bigfoot Discovery Museum\"\r\n}, conversions={\"name\": \"upper(?)\"})\r\n```\r\nHow about if you could optionally provide a Python function (or a lambda) like this?\r\n```python\r\ndb[\"example\"].insert({\r\n \"name\": \"The Bigfoot Discovery Museum\"\r\n}, conversions={\"name\": lambda s: s.upper()})\r\n```\r\nThis would work by creating a random name for that function, registering it (similar to #162), executing the SQL and then un-registering the custom function at the end.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/163/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 707849175, "node_id": "MDU6SXNzdWU3MDc4NDkxNzU=", "number": 974, "title": "static assets and favicon aren't cached by the browser", "user": {"value": 45416, "label": "obra"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-09-24T04:44:55Z", "updated_at": "2022-01-13T22:21:28Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Using datasette to solve some frustrating problems with our fulfillment provider today, I was surprised to see repeated requests for assets under /-/static and the favicon. 
While it won't likely be a huge performance bottleneck, I bet datasette would feel a bit zippier if you had Uvicorn serving up some caching-related headers telling the browser it was safe to cache static assets.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/974/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 709789634, "node_id": "MDU6SXNzdWU3MDk3ODk2MzQ=", "number": 27, "title": "Sort order is not persisted by facet filter links", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-27T18:22:07Z", "updated_at": "2020-09-27T18:22:07Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "A link to `/-/beta?category=1&timestamp__date=2018-08-01&q=swedish` should be to `/-/beta?category=1&timestamp__date=2018-08-01&q=swedish&sort=newest`", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/27/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 712202333, "node_id": "MDU6SXNzdWU3MTIyMDIzMzM=", "number": 982, "title": "SQL editor should allow execution of write queries, if you have permission", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-09-30T19:04:35Z", "updated_at": "2022-01-13T22:21:29Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The `datasette-write` plugin provides this at the moment https://github.com/simonw/datasette-write - but it feels like it should be a built-in capability, protected by a default permission.\r\n\r\nUI concept: if you have write permission then the existing SQL editor gets an \"execute write\" checkbox underneath it.\r\n\r\nJavaScript can spot if you appear to be trying to execute an UPDATE or INSERT or DELETE query and check that checkbox for you.\r\n\r\nIf you link to a query page with a non-SELECT then that query will be displayed in the box ready for you to POST submit it. 
The page will also then get \"cannot be embedded\" headers to protect against clickjacking.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/982/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 712260429, "node_id": "MDU6SXNzdWU3MTIyNjA0Mjk=", "number": 983, "title": "JavaScript plugin hooks mechanism similar to pluggy", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 47, "created_at": "2020-09-30T20:32:43Z", "updated_at": "2021-01-25T04:43:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> It would be neat to provide a JavaScript plugin hook that plugins can use to add their own options to this menu. No idea what that would look like though.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/981#issuecomment-701616922_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/983/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 712368432, "node_id": "MDU6SXNzdWU3MTIzNjg0MzI=", "number": 984, "title": "Review accessibility of new column action menus", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-09-30T23:56:44Z", "updated_at": "2020-10-01T00:01:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Feature added in #981", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/984/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 712984738, "node_id": "MDU6SXNzdWU3MTI5ODQ3Mzg=", "number": 987, "title": "Documented HTML hooks for JavaScript plugin authors", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-10-01T16:10:14Z", "updated_at": "2021-01-25T04:00:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In #981 I added `data-column=` attributes to the `
<th>` on the table page. These should become part of Datasette's documented API so JavaScript plugin authors can use them to derive things about the tables shown on a page (`datasette-cluster-map` uses them as of https://github.com/simonw/datasette-cluster-map/issues/18).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/987/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 714377268, "node_id": "MDU6SXNzdWU3MTQzNzcyNjg=", "number": 991, "title": "Redesign application homepage", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-10-04T18:48:45Z", "updated_at": "2021-01-26T19:06:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Most Datasette instances only host a single database, but the current homepage design assumes that it should leave plenty of space for multiple databases:\r\n\r\n[screenshot: Datasette_Fixtures__fixtures]\r\n\r\nReconsider this design - should the default show more information?\r\n\r\nThe Covid-19 Datasette homepage looks particularly sparse, I think: https://covid-19.datasettes.com/\r\n\r\n[screenshot: COVID-19_cases__using_data_from_Johns_Hopkins_CSSE__the_New_York_Times_and_the_LA_Times__covid]", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/991/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 718238967, "node_id": "MDU6SXNzdWU3MTgyMzg5Njc=", "number": 1003, "title": "from_json jinja2 filter", "user": {"value": 649467, "label": "mhalle"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-10-09T15:30:58Z", "updated_at": "2020-10-09T17:17:07Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When JSON fields are rendered in a jinja2 template, it is handy to be able to manipulate them as data (e.g., iterate over an array of values). \r\n\r\nAnsible has a \"from_json\" function, which just calls json.loads. It's trivial as a datasette plugin, but it seems generally useful.
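A minimal sketch of such a plugin, using Datasette's documented `prepare_jinja2_environment` plugin hook (the filter registration shown here is illustrative, not an existing implementation):

```python
import json

from datasette import hookimpl

# Register a from_json filter so templates can do e.g.
# {% for tag in row["tags"] | from_json %} ... {% endfor %}
@hookimpl
def prepare_jinja2_environment(env):
    env.filters["from_json"] = json.loads
```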
Does it make sense to add it directly into the app?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1003/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 718272593, "node_id": "MDU6SXNzdWU3MTgyNzI1OTM=", "number": 1007, "title": "set-env and add-path commands have been deprecated", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-10-09T16:21:18Z", "updated_at": "2020-10-09T16:23:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/\r\n\r\n> Starting today runner version 2.273.5 will begin to warn you if you use the `add-path` or `set-env` commands. We are monitoring telemetry for the usage of these commands and plan to fully disable them in the future.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1007/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 718395987, "node_id": "MDExOlB1bGxSZXF1ZXN0NTAwNzk4MDkx", "number": 1008, "title": "Add json_loads and json_dumps jinja2 filters", "user": {"value": 649467, "label": "mhalle"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-10-09T20:11:34Z", "updated_at": "2020-12-15T02:30:28Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1008", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1008/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 718540751, "node_id": "MDU6SXNzdWU3MTg1NDA3NTE=", "number": 1012, "title": "For 1.0 update trove classifier in setup.py", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2020-10-10T05:52:08Z", "updated_at": "2021-11-16T13:18:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": " Development Status :: 5 - Production/Stable", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1012/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
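Returning to the `conversions=` idea in sqlite-utils #163 above: the register/execute/un-register dance it describes can be sketched with the standard library `sqlite3` module directly. The helper name below is hypothetical, not sqlite-utils API:

```python
import secrets
import sqlite3

def insert_with_conversion(conn, table, column, value, fn):
    # Hypothetical helper: register fn under a random one-off name,
    # apply it inside the INSERT, then un-register it again.
    fn_name = "convert_" + secrets.token_hex(8)
    conn.create_function(fn_name, 1, fn)
    try:
        conn.execute(
            "INSERT INTO [{}] ([{}]) VALUES ({}(?))".format(table, column, fn_name),
            [value],
        )
    finally:
        conn.create_function(fn_name, 1, None)  # passing None removes the function

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (name TEXT)")
insert_with_conversion(conn, "example", "name", "The Bigfoot Discovery Museum", str.upper)
print(conn.execute("SELECT name FROM example").fetchone()[0])
# -> THE BIGFOOT DISCOVERY MUSEUM
```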