{"id": 1570375808, "node_id": "I_kwDODFdgUs5dmgiA", "number": 79, "title": "Deploy demo job is failing due to rate limit", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-02-03T20:05:01Z", "updated_at": "2023-12-08T14:50:15Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://github.com/dogsheep/github-to-sqlite/actions/runs/4080058087/jobs/7032116511", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 564833696, "node_id": "MDU6SXNzdWU1NjQ4MzM2OTY=", "number": 670, "title": "Prototoype for Datasette on PostgreSQL", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 15, "created_at": "2020-02-13T17:17:55Z", "updated_at": "2023-11-17T15:32:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I thought this would never happen, but now that I'm deep in the weeds of running SQLite in production for Datasette Cloud I'm starting to reconsider my policy of only supporting SQLite.\r\n\r\nSome of the factors making me think PostgreSQL support could be worth the effort:\r\n- Serverless. I'm getting increasingly excited about writable-database use-cases for Datasette. If it could talk to PostgreSQL then users could easily deploy it on Heroku or other serverless providers that can talk to a managed RDS-style PostgreSQL.\r\n- Existing databases. Plenty of organizations have PostgreSQL databases. They can export to SQLite using [db-to-sqlite](https://github.com/simonw/db-to-sqlite) but that's a pretty big barrier to getting started - being able to run `datasette postgresql://connection-string` and start trying it out would be a massively better experience.\r\n- Data size. I keep running into use-cases where I want to run Datasette against many GBs of data. SQLite can do this but PostgreSQL is much more optimized for large data, especially given the existence of tools like Citus.\r\n- Marketing. Convincing people to trust their data to SQLite is potentially a big barrier to adoption. Even if I've convinced myself it's trustworthy I still have to convince everyone else.\r\n- It might not be that hard? If this required a ground-up rewrite it wouldn't be worth the effort, but I have a hunch that it may not be too hard - most of the SQL in Datasette should work on both databases since it's almost all portable SELECT statements. If Datasette did DML this would be a lot harder, but it doesn't.\r\n- Plugins! 
This feels like a natural surface for a plugin - at which point people could add MySQL support and suchlike in the future.\r\n\r\nThe above reasons feel strong enough to justify a prototype.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/670/reactions\", \"total_count\": 19, \"+1\": 14, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 5, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978023780, "node_id": "I_kwDOBm6k_c515j9k", "number": 2205, "title": "request.post_vars() method obliterates form keys with multiple values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 3, "created_at": "2023-11-05T23:25:08Z", "updated_at": "2023-11-06T04:10:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139\r\n\r\nIn GET requests you can do `?foo=1&foo=2` - you can do the same in POST requests, but the `dict()` call here eliminates those duplicates.\r\n\r\nYou can't even try calling `post_body()` and implement your own custom parsing because of:\r\n- #2204", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2205/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978022687, "node_id": "I_kwDOBm6k_c515jsf", "number": 2204, "title": "request.post_body() can only be called once", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-05T23:22:03Z", "updated_at": "2023-11-05T23:23:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This code here:\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L127-L135\r\n\r\nIt consumes the messages, which means if you try to call it a second time you won't be able to get at the body.\r\n\r\nThis is efficient - we don't end up with a `request` object property with potentially megabytes of content that we never look at again - but it's inconvenient for cases like middleware or functions where we don't know if the body has been consumed yet or not.\r\n\r\nPotential solution: set `request._body` the first time it is called, and return that on subsequent calls.\r\n\r\nPotential optimization: only do this for bodies that are shorter than a certain threshold - maybe 1MB - and raise an exception if you attempt to call `post_body()` multiple times against one of those larger bodies.\r\n\r\nI'm a bit nervous about that option though, since it could result in errors that don't show up in testing but do show up in production.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2204/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, 
\"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1977155641, "node_id": "I_kwDOCGYnMM512QA5", "number": 601, "title": "Move plugin directory into documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-04T04:07:52Z", "updated_at": "2023-11-04T04:07:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/sqlite-utils-plugins should be in the official documentation.\r\n\r\nI can use the same pattern as https://llm.datasette.io/en/stable/plugins/directory.html\r\n\r\nhttps://til.simonwillison.net/readthedocs/stable-docs", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/601/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1940346034, "node_id": "I_kwDOBm6k_c5zp1Sy", "number": 2199, "title": "Detailed upgrade instructions for metadata.yaml -> datasette.yaml", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 7, "created_at": "2023-10-12T16:21:25Z", "updated_at": "2023-10-12T22:08:42Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> `Exception: Datasette no longer accepts plugin configuration in --metadata. Move your \"plugins\" configuration blocks to a separate file - we suggest calling that datasette..json - and start Datasette with datasette -c datasette..json. See https://docs.datasette.io/en/latest/configuration.html for more details.`\r\n>\r\n> I think we should link directly to documentation that tells people how to perform this upgrade.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/2190#issuecomment-1759947021_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2199/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 777333388, "node_id": "MDU6SXNzdWU3NzczMzMzODg=", "number": 1168, "title": "Mechanism for storing metadata in _metadata tables", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 21, "created_at": "2021-01-01T18:47:27Z", "updated_at": "2023-09-28T18:29:05Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "_Original title: Perhaps metadata should all live in a `_metadata` in-memory database_\r\n\r\nInspired by #1150 - metadata should be exposed as an API, and for large Datasette instances that API may need to be paginated. So why not expose it through an in-memory database table?\r\n\r\nOne catch to this: plugins. #860 aims to add a plugin hook for metadata. 
But if the metadata comes from an in-memory table, how do the plugins interact with it?\r\n\r\nThe need to paginate over metadata does make a plugin hook that returns metadata for an individual table seem less wise, since we don't want to have to do 10,000 plugin hook invocations to show a list of all metadata.\r\n\r\nIf those plugins write directly to the in-memory table how can their contributions survive the server restarting?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1168/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 944846776, "node_id": "MDU6SXNzdWU5NDQ4NDY3NzY=", "number": 297, "title": "Option for importing CSV data using the SQLite .import mechanism", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 23, "created_at": "2021-07-14T22:36:41Z", "updated_at": "2023-09-22T20:49:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "As seen in https://til.simonwillison.net/sqlite/import-csv - `.mode csv` and then `.import school.csv schools` is hugely faster than importing via `sqlite-utils insert` and doing the work in Python - but it can only be implemented by shelling out to the `sqlite3` CLI tool, it's not functionality that is exposed to the Python `sqlite3` module.\r\n\r\nAn option to use this would be useful - maybe something like this:\r\n\r\n sqlite-utils insert blah.db blah blah.csv --fast", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/297/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1907765514, "node_id": "I_kwDOBm6k_c5xtjEK", "number": 2195, "title": "`datasette publish` needs support for the new config/metadata split", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2023-09-21T21:08:12Z", "updated_at": "2023-09-21T22:57:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> ... 
which raises the challenge that `datasette publish` doesn't yet know what to do with a config file!\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/2194#issuecomment-1730259871_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2195/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 787098345, "node_id": "MDU6SXNzdWU3ODcwOTgzNDU=", "number": 1191, "title": "Ability for plugins to collaborate when adding extra HTML to blocks in default templates", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 12, "created_at": "2021-01-15T18:18:51Z", "updated_at": "2023-09-18T06:55:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Sometimes a plugin may want to add content to an existing default template - for example `datasette-search-all` adds a new search box at the top of `index.html`. I also want `datasette-upload-csvs` to add a CTA on the `database.html` page: https://github.com/simonw/datasette-upload-csvs/issues/18\r\n\r\nCurrently plugins can do this by providing a new version of the `index.html` template - but if multiple plugins try to do that only one of them will succeed.\r\n\r\nIt would be better if there were known areas of those templates which plugins could add additional content to, such that multiple plugins can use the same spot.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1191/reactions\", \"total_count\": 4, \"+1\": 4, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1898927976, "node_id": "I_kwDOBm6k_c5xL1do", "number": 2186, "title": "Mechanism for register_output_renderer hooks to access full count", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2023-09-15T18:57:54Z", "updated_at": "2023-09-15T19:27:59Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The cause of this bug:\r\n- https://github.com/simonw/datasette-export-notebook/issues/17\r\n\r\nIs that `datasette-export-notebook` was consulting `data[\"filtered_table_rows_count\"]` in the render output plugin function in order to show the total number of rows that would be exported.\r\n\r\nThat field is no longer available by default - the `\"count\"` field is only available if `?_extra=count` was passed.\r\n\r\nIt would be useful if plugins like this could access the total count on demand, should they need to.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2186/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1895266807, 
"node_id": "I_kwDOBm6k_c5w93n3", "number": 2184, "title": "Design decision - should configuration be exposed at /-/config ?", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-13T21:07:08Z", "updated_at": "2023-09-13T21:07:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> This made me think. That `{\"$env\": \"ENV_VAR\"}` hack was introduced back here:\r\n>\r\n> - https://github.com/simonw/datasette/issues/538\r\n>\r\n> The problem it was solving was that metadata was visible to everyone with access to the instance at `/-/metadata` but plugins clearly needed a way to set secret settings.\r\n>\r\n> Now that this stuff is moving to config, we have some decisions to make:\r\n>\r\n> 1. Add `/-/config` to let people see the configuration of their instance, and keep the `$env` trick for secret settings.\r\n> 2. Say all configuration aside from metadata is secret and make `$env` optional or ditch it entirely.\r\n> 3. Allow plugins to announce which of their configuration options are secret so we can automatically redact them from `/-/config`\r\n>\r\n> I've found `/-/metadata` extraordinarily useful as a user of Datasette - it really helps me understand exactly what's going on if I run into any problems with a plugin, if I can quickly check what the settings look like.\r\n>\r\n> So I'm leaning towards option 1 or 3.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/pull/2183#discussion_r1325076924_\r\n\r\nAlso refs:\r\n- #2093", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2184/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1891614971, "node_id": "I_kwDOCGYnMM5wv8D7", "number": 594, "title": "Represent compound foreign keys in table.foreign_keys output", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-09-12T03:48:24Z", "updated_at": "2023-09-12T03:51:13Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Given this schema:\r\n```sql\r\nCREATE TABLE departments (\r\n campus_name TEXT NOT NULL,\r\n dept_code TEXT NOT NULL,\r\n dept_name TEXT,\r\n PRIMARY KEY (campus_name, dept_code)\r\n);\r\nCREATE TABLE courses (\r\n course_code TEXT PRIMARY KEY,\r\n course_name TEXT,\r\n campus_name TEXT NOT NULL,\r\n dept_code TEXT NOT NULL,\r\n FOREIGN KEY (campus_name, dept_code) REFERENCES departments(campus_name, dept_code)\r\n);\r\n```\r\nThe output of `db[\"courses\"].foreign_keys` right now is:\r\n```\r\n[ForeignKey(table='courses', column='campus_name', other_table='departments', other_column='campus_name'),\r\n ForeignKey(table='courses', column='dept_code', other_table='departments', other_column='dept_code')]\r\n```\r\nWhich suggests two normal foreign keys, not one compound foreign key.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/594/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, 
\"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1876353656, "node_id": "I_kwDOBm6k_c5v1uJ4", "number": 2168, "title": "Consider a request/response wrapping hook slightly higher level than asgi_wrapper()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2023-08-31T21:42:04Z", "updated_at": "2023-09-10T17:54:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "There's a long justification for why this might be needed here:\r\n- https://github.com/simonw/datasette-auth-tokens/issues/10#issuecomment-1701820001\r\n\r\nShort version: it would be neat if it was possible to stash some data on the `request` object such that a later plugin/middleware-type-thing could use that to influence the final returned response - similar to the kinds of things you can do with Django middleware.\r\n\r\nThe `asgi_wrapper()` mechanism doesn't have access to the request or response objects - it gets `scope` and can mess around with `receive` and `send`, but those are pretty low-level primitives.\r\n\r\nSince Datasette has well-defined `request` and `response` objects now it might be nice to have a middleware layer that can manipulate those directly.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2168/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1888477283, "node_id": "I_kwDOC8SPRc5wj-Bj", "number": 38, "title": "Run `rebuild_fts` after building the index", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-08T23:17:45Z", "updated_at": "2023-09-08T23:17:45Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "In:\r\n- https://github.com/simonw/datasette.io/issues/152#issuecomment-1712323347\r\n\r\nThis turned out to be the fix:\r\n\r\n```bash\r\ndogsheep-beta index dogsheep-index.db templates/dogsheep-beta.yml\r\nsqlite-utils rebuild-fts dogsheep-index.db\r\n```", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/38/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1884408624, "node_id": "I_kwDOBm6k_c5wUcsw", "number": 2177, "title": "Move schema tables from _internal to _catalog", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-09-06T16:58:33Z", "updated_at": "2023-09-06T17:04:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This came up in discussion over:\r\n- https://github.com/simonw/datasette/pull/2174\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2177/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, 
\"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1879214365, "node_id": "I_kwDOCGYnMM5wAokd", "number": 590, "title": "Ability to tell if a Database is an in-memory one", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-09-03T19:50:15Z", "updated_at": "2023-09-03T19:50:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Currently the constructor accepts `memory=True` or `memory_name=...` and uses those to create a connection, but does not record what those values were:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1260bdc7bfe31c36c272572c6389125f8de6ef71/sqlite_utils/db.py#L307-L349\r\n\r\nThis makes it hard to tell if a database object is to an in-memory or a file-based database, which is sometimes useful to know.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/590/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1879209560, "node_id": "I_kwDOCGYnMM5wAnZY", "number": 589, "title": "Mechanism for de-registering registered SQL functions", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-09-03T19:32:39Z", "updated_at": "2023-09-03T19:36:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I used a custom SQL function in a migration script and then realized that it should be de-registered before the end of the script to avoid leaking into the calling code.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/589/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1875739055, "node_id": "I_kwDOBm6k_c5vzYGv", "number": 2167, "title": "Document return type of await ds.permission_allowed()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-31T15:14:23Z", "updated_at": "2023-08-31T15:14:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The return type isn't documented here: https://github.com/simonw/datasette/blob/4c3ef033110407f3b3dbce501659d523724985e0/docs/internals.rst#L327-L350\r\n\r\nOn inspecting the code I'm not 100% sure if it's possible for this. method to return `None`, or if it can only return `True` or `False`. 
Need to confirm that.\r\n\r\nhttps://github.com/simonw/datasette/blob/4c3ef033110407f3b3dbce501659d523724985e0/datasette/app.py#L822C15-L853", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2167/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 594237015, "node_id": "MDU6SXNzdWU1OTQyMzcwMTU=", "number": 718, "title": "Plugin idea: datasette-redirects", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-05T03:41:38Z", "updated_at": "2023-08-30T22:17:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just had to write a one-off custom plugin to redirect niche-musems.com to www.niche-museums.com (https://github.com/simonw/museums/issues/21) - it would be great if this kind of thing could be handled by a configurable plugin.\r\n\r\nhttps://github.com/simonw/museums/blob/6b1faf00c463b2228860d4d62d104b11935e01b1/plugins/redirect_www.py", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/718/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "reopened"} {"id": 1865649347, "node_id": "I_kwDOBm6k_c5vM4zD", "number": 2156, "title": "datasette -s/--setting option for setting nested configuration options", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2023-08-24T18:09:27Z", "updated_at": "2023-08-28T19:33:05Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> I've been thinking about what it might look like to allow command-line arguments to be used to define _any_ of the configuration options in `datasette.yml`, as alternative and more convenient syntax.\r\n>\r\n> Here's what I've come up with:\r\n> ```\r\n> datasette \\\r\n> -s settings.sql_time_limit_ms 1000 \\\r\n> -s plugins.datasette-auth-tokens.manage_tokens true \\\r\n> -s plugins.datasette-auth-tokens.manage_tokens_database tokens \\\r\n> mydatabase.db tokens.db\r\n> ```\r\n> Which would be equivalent to `datasette.yml` containing this:\r\n> ```yaml\r\n> plugins:\r\n> datasette-auth-tokens:\r\n> manage_tokens: true\r\n> manage_tokens_database: tokens\r\n> settings:\r\n> sql_time_limit_ms: 1000\r\n> ```\r\nMore details in https://github.com/simonw/datasette/issues/2143#issuecomment-1690792514\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2156/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1868713944, "node_id": "I_kwDOCGYnMM5vYk_Y", "number": 588, "title": "`table.get(column=value)` option for retrieving things not by their primary key", "user": {"value": 9599, "label": "simonw"}, "state": "open", 
"locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-08-28T00:41:23Z", "updated_at": "2023-08-28T00:41:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This came up working on this feature:\r\n- https://github.com/simonw/llm/pull/186\r\n\r\nI have a table with this schema:\r\n```sql\r\nCREATE TABLE [collections] (\r\n [id] INTEGER PRIMARY KEY,\r\n [name] TEXT,\r\n [model] TEXT\r\n);\r\nCREATE UNIQUE INDEX [idx_collections_name]\r\n ON [collections] ([name]);\r\n```\r\nSo the primary key is an integer (because it's going to have a huge number of rows foreign key related to it, and I don't want to store a larger text value thousands of times), but there is a unique constraint on the `name` - that would be the primary key column if not for all of those foreign keys.\r\n\r\nProblem is, fetching the collection by name is actually pretty inconvenient.\r\n\r\nFetch by numeric ID:\r\n\r\n```python\r\ntry:\r\n table[\"collections\"].get(1)\r\nexcept NotFoundError:\r\n # It doesn't exist\r\n```\r\nFetching by name:\r\n```python\r\ndef get_collection(db, collection):\r\n rows = db[\"collections\"].rows_where(\"name = ?\", [collection])\r\n try:\r\n return next(rows)\r\n except StopIteration:\r\n raise NotFoundError(\"Collection not found: {}\".format(collection))\r\n```\r\nIt would be neat if, for columns where we know that we should always get 0 or one result, we could do this instead:\r\n```python\r\ntry:\r\n collection = table[\"collections\"].get(name=\"entries\")\r\nexcept NotFoundError:\r\n # It doesn't exist\r\n```\r\nThe existing `.get()` method doesn't have any non-positional arguments, so using `**kwargs` like that should work:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1260bdc7bfe31c36c272572c6389125f8de6ef71/sqlite_utils/db.py#L1495", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/588/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 459509126, "node_id": "MDU6SXNzdWU0NTk1MDkxMjY=", "number": 516, "title": "Enforce import sort order with isort", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2019-06-22T20:35:50Z", "updated_at": "2023-08-23T02:15:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I want to use isort to order imports. 
A few steps here:\r\n\r\n- [x] Add a .isort.cfg file (see below)\r\n- [x] Use `isort -rc` to reformat existing code\r\n- [ ] Commit this change\r\n- [x] Add a unit test that ensures future changes remain isort compatible", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/516/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1857234285, "node_id": "I_kwDOBm6k_c5usyVt", "number": 2145, "title": "If a row has a primary key of `null` various things break", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 23, "created_at": "2023-08-18T20:06:28Z", "updated_at": "2023-08-21T17:30:01Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Stumbled across this while experimenting with `datasette-write-ui`. The error I got was a 500 on the `/db` page:\r\n\r\n> `'NoneType' object has no attribute 'encode'`\r\n\r\nTracked it down to this code, which assembles the URL for a row page:\r\n\r\nhttps://github.com/simonw/datasette/blob/943df09dcca93c3b9861b8c96277a01320db8662/datasette/utils/__init__.py#L120-L134\r\n\r\nThat's because `tilde_encode` can't handle `None`: https://github.com/simonw/datasette/blob/943df09dcca93c3b9861b8c96277a01320db8662/datasette/utils/__init__.py#L1175-L1178\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2145/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1856075668, "node_id": "I_kwDOCGYnMM5uoXeU", "number": 586, "title": ".transform() fails to drop column if table is part of a view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-08-18T05:25:22Z", "updated_at": "2023-08-18T06:13:47Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I got this error trying to drop a column from a table that was part of a SQL view:\r\n\r\n> error in view plugins: no such table: main.pypi_releases\r\n\r\nUpon further investigation I found that this pattern seemed to fix it:\r\n```python\r\ndef transform_the_table(conn):\r\n # Run this in a transaction:\r\n with conn:\r\n # We have to read all the views first, because we need to drop and recreate them\r\n db = sqlite_utils.Database(conn)\r\n views = {v.name: v.schema for v in db.views if table.lower() in v.schema.lower()}\r\n for view in views.keys():\r\n db[view].drop()\r\n db[table].transform(\r\n types=types,\r\n rename=rename,\r\n drop=drop,\r\n column_order=[p[0] for p in order_pairs],\r\n )\r\n # Now recreate the views\r\n for name, schema in views.items():\r\n db.create_view(name, schema)\r\n```\r\nSo grab a copy of any view that might reference this table, start a transaction, drop those views, run the transform, recreate the views again.\r\n\r\n> I wonder if this should become an option in `sqlite-utils`? Maybe a `recreate_views=True` argument for `table.tranform(...)`? 
Should it be opt-in or opt-out?\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-edit-schema/issues/35#issuecomment-1683370548_\r\n ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/586/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1838469176, "node_id": "I_kwDOBm6k_c5tlNA4", "number": 2127, "title": "Context base class to support documenting the context", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2023-08-07T00:01:02Z", "updated_at": "2023-08-10T01:30:25Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This idea first came up here:\r\n- https://github.com/simonw/datasette/issues/2112#issuecomment-1652751140\r\n\r\nIf `datasette.render_template(...)` takes an optional `Context` subclass as an alternative to a context dictionary, I could then use dataclasses to define the context made available to specific templates - which then gives me something I can use to help document what they are.\r\n\r\nAlso refs:\r\n- https://github.com/simonw/datasette/issues/1510", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2127/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1843821954, "node_id": "I_kwDOBm6k_c5t5n2C", "number": 2137, "title": "Redesign row default JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2023-08-09T18:49:11Z", "updated_at": "2023-08-09T19:02:47Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This URL here:\r\n\r\nhttps://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables\r\n\r\n```json\r\n{\r\n \"database\": \"fixtures\",\r\n \"table\": \"simple_primary_key\",\r\n \"rows\": [\r\n {\r\n \"id\": \"1\",\r\n \"content\": \"hello\"\r\n }\r\n ],\r\n \"columns\": [\r\n \"id\",\r\n \"content\"\r\n ],\r\n \"primary_keys\": [\r\n \"id\"\r\n ],\r\n \"primary_key_values\": [\r\n \"1\"\r\n ],\r\n \"units\": {},\r\n \"foreign_key_tables\": [\r\n {\r\n \"other_table\": \"foreign_key_references\",\r\n \"column\": \"id\",\r\n \"other_column\": \"foreign_key_with_blank_label\",\r\n \"count\": 0,\r\n \"link\": \"/fixtures/foreign_key_references?foreign_key_with_blank_label=1\"\r\n },\r\n {\r\n \"other_table\": \"foreign_key_references\",\r\n \"column\": \"id\",\r\n \"other_column\": \"foreign_key_with_label\",\r\n \"count\": 1,\r\n \"link\": \"/fixtures/foreign_key_references?foreign_key_with_label=1\"\r\n },\r\n {\r\n \"other_table\": \"complex_foreign_keys\",\r\n \"column\": \"id\",\r\n \"other_column\": \"f3\",\r\n \"count\": 1,\r\n \"link\": \"/fixtures/complex_foreign_keys?f3=1\"\r\n },\r\n {\r\n \"other_table\": \"complex_foreign_keys\",\r\n \"column\": \"id\",\r\n \"other_column\": 
\"f2\",\r\n \"count\": 0,\r\n \"link\": \"/fixtures/complex_foreign_keys?f2=1\"\r\n },\r\n {\r\n \"other_table\": \"complex_foreign_keys\",\r\n \"column\": \"id\",\r\n \"other_column\": \"f1\",\r\n \"count\": 1,\r\n \"link\": \"/fixtures/complex_foreign_keys?f1=1\"\r\n }\r\n ],\r\n \"query_ms\": 4.226590999678592,\r\n \"source\": \"tests/fixtures.py\",\r\n \"source_url\": \"https://github.com/simonw/datasette/blob/main/tests/fixtures.py\",\r\n \"license\": \"Apache License 2.0\",\r\n \"license_url\": \"https://github.com/simonw/datasette/blob/main/LICENSE\",\r\n \"ok\": true,\r\n \"truncated\": false\r\n}\r\n```\r\n\r\nThat `?_extras=` should be `?_extra=` - plus the row JSON should be redesigned to fit the new default JSON representation.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2137/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1822939274, "node_id": "I_kwDOBm6k_c5sp9iK", "number": 2113, "title": "Implement and document extras for the new query view page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 3, "created_at": "2023-07-26T18:24:01Z", "updated_at": "2023-08-09T17:35:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "- #2109 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2113/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1840417903, "node_id": "I_kwDOBm6k_c5tsoxv", "number": 2131, "title": "Refactor code that supports templates_considered comment", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2023-08-08T01:28:36Z", "updated_at": "2023-08-09T15:27:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I ended up duplicating it here: https://github.com/simonw/datasette/blob/7532feb424b1dce614351e21b2265c04f9669fe2/datasette/views/database.py#L164-L167\r\n\r\nI think it should move to `datasette.render_template()` - and maybe have a renamed template variable too.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2131/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1840324765, "node_id": "I_kwDOBm6k_c5tsSCd", "number": 2129, "title": "CSV ?sql= should indicate errors", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2023-08-07T23:13:04Z", "updated_at": "2023-08-08T02:02:21Z", "closed_at": null, "author_association": "OWNER", 
"pull_request": null, "body": "> https://latest.datasette.io/_memory.csv?sql=select+blah is a blank page right now:\r\n\r\n```bash\r\ncurl -I 'https://latest.datasette.io/_memory.csv?sql=select+blah'\r\n```\r\n```\r\nHTTP/2 200 \r\naccess-control-allow-origin: *\r\naccess-control-allow-headers: Authorization, Content-Type\r\naccess-control-expose-headers: Link\r\naccess-control-allow-methods: GET, POST, HEAD, OPTIONS\r\naccess-control-max-age: 3600\r\ncontent-type: text/plain; charset=utf-8\r\nx-databases: _memory, _internal, fixtures, fixtures2, extra_database, ephemeral\r\ndate: Mon, 07 Aug 2023 23:12:15 GMT\r\nserver: Google Frontend\r\n```\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/2118#issuecomment-1668688947_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2129/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1818838294, "node_id": "I_kwDOCGYnMM5saUUW", "number": 578, "title": "Plugin hook for adding new output formats", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2023-07-24T17:29:18Z", "updated_at": "2023-08-07T15:41:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> What would it take to add a format hook? I'm still thinking about my GIS workflow, and being able to do `sqlite-utils query ... --geojson` would be nice. It's the one place my Datasette workflow is messy, having to do `datasette . 
--get /path/to/query.geojson --setting max_rows_returned 10000 --load-extension spatialite`.\r\n> I know the current pattern is `--csv`, but maybe `--format geojson` is more future-proof.\r\n\r\nhttps://discord.com/channels/823971286308356157/997738192360964156/1133076679011602432", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/578/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1823428714, "node_id": "I_kwDOBm6k_c5sr1Bq", "number": 2120, "title": "Add __all__ to datasette/__init__.py", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-27T01:07:10Z", "updated_at": "2023-07-27T01:07:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Currently looks like this: https://github.com/simonw/datasette/blob/08181823990a71ffa5a1b57b37259198eaa43e06/datasette/__init__.py#L1-L6\r\n\r\nAdding `__all__ = [\"Permission\", \"Forbidden\"...]` would let me get rid of those `# noqa` comments.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2120/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1808215339, "node_id": "I_kwDOBm6k_c5rxy0r", "number": 2104, "title": "Tables starting with an underscore should be treated as hidden", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-07-17T17:13:53Z", "updated_at": "2023-07-18T22:41:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Plugins can then take advantage of this pattern, for example:\r\n- https://github.com/simonw/datasette-auth-tokens/pull/8", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2104/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1808116827, "node_id": "I_kwDOBm6k_c5rxaxb", "number": 2103, "title": "data attribute on Datasette tables exposing the primary key of the row", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-17T16:18:25Z", "updated_at": "2023-07-17T16:18:25Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Maybe put it on the `` but probably better to go on the `td.type-pk`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2103/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 
null, "state_reason": null} {"id": 1765870617, "node_id": "I_kwDOBm6k_c5pQQwZ", "number": 2087, "title": "`--settings settings.json` option", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-06-20T17:48:45Z", "updated_at": "2023-07-14T17:02:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://discord.com/channels/823971286308356157/823971286941302908/1120705940728066080\r\n\r\n> May I add a request to the whole metadata / settings ? Allow to pass `--settings path/to/settings.json` instead of having to rely exclusively on directory mode to centralize settings (this would reflect the behavior of providing metadata)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2087/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1803264272, "node_id": "I_kwDOBm6k_c5re6EQ", "number": 2101, "title": "alter: true support for JSON write API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-07-13T15:24:11Z", "updated_at": "2023-07-13T15:24:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Requested here: https://discord.com/channels/823971286308356157/823971286941302908/1129034187073134642\r\n\r\n> The former datasette-insert plugin had an option `?alter=1` to auto-add new columns. Does the JSON write API also have this?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2101/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1784794489, "node_id": "I_kwDOCGYnMM5qYc15", "number": 562, "title": "Explore the intersection between sqlite-utils and dataclasses", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-07-02T19:23:08Z", "updated_at": "2023-07-02T19:26:39Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Aside: this makes me think it might be cool if `sqlite-utils` had a way of working with dataclasses rather than just dicts, and knew how to create a SQLite table to match a dataclass and maybe how to code-generate dataclasses for a specific table schema (dynamically or even using code-generation that can be written to disk, for better editor integrations).\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/llm/issues/65#issuecomment-1616742529_\r\n ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/562/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1781005740, "node_id": "I_kwDOBm6k_c5qJ_2s", 
"number": 2090, "title": "Adopt ruff for linting", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-06-29T14:56:43Z", "updated_at": "2023-06-29T15:05:04Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://beta.ruff.rs/docs/", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2090/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1054244712, "node_id": "I_kwDOBm6k_c4-1n9o", "number": 1510, "title": "Datasette 1.0 documented template context (maybe via API docs)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:23:58Z", "updated_at": "2023-06-28T02:05:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 323223872, "node_id": "MDU6SXNzdWUzMjMyMjM4NzI=", "number": 260, "title": "Validate metadata.json on startup", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2018-05-15T13:42:56Z", "updated_at": "2023-06-21T12:51:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It's easy to misspell the name of a database or table and then be puzzled when the metadata settings silently fail.\r\n\r\nTo avoid this, let's sanity check the provided metadata.json on startup and quit with a useful error message if we find any obvious mistakes.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/260/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1764792125, "node_id": "I_kwDOBm6k_c5pMJc9", "number": 2086, "title": "Show information on startup in directory configuration mode", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-20T07:13:33Z", "updated_at": "2023-06-20T07:13:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://discord.com/channels/823971286308356157/823971286941302908/1120516587036889098\r\n\r\n> One thing that would be helpful would be message at launch indicating a metadata.json is getting picked up. 
I'm using directory mode and was editing the wrong file for awhile before I realize nothing I was doing was having any effect.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2086/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1124731464, "node_id": "I_kwDOCGYnMM5DCgpI", "number": 399, "title": "Make it easier to insert geometries, with documentation and maybe code", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 25, "created_at": "2022-02-05T00:11:26Z", "updated_at": "2023-05-16T03:11:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In playing with the new SpatiaLite helpers from #385 I noticed that actually populating geometry columns is still a little bit tricky. Here's what I ended up doing:\r\n\r\n```python\r\nimport httpx, sqlite_utils\r\ndb = sqlite_utils.Database(\"/tmp/spatial.db\")\r\nattractions = httpx.get(\"https://latest.datasette.io/fixtures/roadside_attractions.json?_shape=array\").json()\r\ndb[\"attractions\"].insert_all(attractions, pk=\"pk\")\r\n\r\n# Schema of that table is now:\r\n# CREATE TABLE [attractions] (\r\n# [pk] INTEGER PRIMARY KEY,\r\n# [name] TEXT,\r\n# [address] TEXT,\r\n# [latitude] FLOAT,\r\n# [longitude] FLOAT\r\n# )\r\n\r\ndb.init_spatialite()\r\ndb[\"attractions\"].add_geometry_column(\"point\", \"POINT\")\r\n\r\ndb.execute(\"\"\"\r\n update attractions set point = GeomFromText(\r\n 'POINT(' || longitude || ' ' || latitude || ')', 4326\r\n )\r\n\"\"\")\r\n```\r\nThat last line took some figuring out - especially the need for the SRID of `4326`, without which I got this error:\r\n\r\n> `IntegrityError: attractions.point violates Geometry constraint [geom-type or SRID not allowed]`\r\n\r\nIt would be good to both document this in more detail, but ideally also to come up with a more obvious pattern for inserting common types of spatial data.\r\n\r\nAlso related:\r\n- #398\r\n- #79", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/399/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1708030220, "node_id": "I_kwDOBm6k_c5lznkM", "number": 2073, "title": "Faceting doesn't work against integer columns in views", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-05-12T18:20:10Z", "updated_at": "2023-05-12T18:24:07Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spotted this issue here: https://til.simonwillison.net/datasette/baseline\r\n\r\nI had to do this workaround:\r\n```sql\r\ncreate view baseline as select\r\n _key,\r\n spec,\r\n '' || json_extract(status, '$.is_baseline') as is_baseline,\r\n json_extract(status, '$.since') as baseline_since,\r\n json_extract(status, '$.support.chrome') as baseline_chrome,\r\n json_extract(status, '$.support.edge') as baseline_edge,\r\n json_extract(status, '$.support.firefox') 
as baseline_firefox,\r\n json_extract(status, '$.support.safari') as baseline_safari,\r\n compat_features,\r\n caniuse,\r\n usage_stats,\r\n status\r\nfrom\r\n [index]\r\n```\r\nI think the core issue here is that, against a table, `select * from x where integer_column = '1'` works correctly, due to some kind of column type conversion mechanism... but this mechanism doesn't work against views.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2073/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1700936245, "node_id": "I_kwDOCGYnMM5lYjo1", "number": 542, "title": "Remove `skip_false=True` and `--no-skip-false` in `sqlite-utils` 4.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 9374594, "label": "4.0 backwards incomatible changes"}, "comments": 1, "created_at": "2023-05-08T21:04:28Z", "updated_at": "2023-05-08T21:07:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Following:\r\n- #527\r\n\r\nThe only reason I didn't remove this mis-feature entirely is that it represents a backwards incompatible change. I'll make that change in 4.0.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/542/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1700840265, "node_id": "I_kwDOCGYnMM5lYMNJ", "number": 541, "title": "Get tests to pass with `pytest -Werror`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-05-08T19:57:23Z", "updated_at": "2023-05-08T19:59:35Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by:\r\n- #534", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/541/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1617602868, "node_id": "I_kwDOJHON9s5gaqk0", "number": 6, "title": "Character encoding problem", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-03-09T16:44:34Z", "updated_at": "2023-04-14T15:22:09Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "I ran against a recent note with this in it:\r\n\r\n> Or just \"Actions \u2699\ufe0f \"\r\n\r\nAnd got back:\r\n\r\n\"image\"\r\n\r\n> `Actions \u201a\u00f6\u00f4\u00d4\u220f\u00e8`\r\n\r\nPasting that into https://ftfy.vercel.app/?s=Actions+%E2%80%9A%C3%B6%C3%B4%C3%94%E2%88%8F%C3%A8+ gives this:\r\n\r\n```python\r\ns = 'Actions \u00e2\\x80\\x9a\u00c3\u00b6\u00c3\u00b4\u00c3\\x94\u00e2\\x88\\x8f\u00c3\u00a8'\r\ns = s.encode('latin-1')\r\ns = s.decode('utf-8')\r\ns = 
s.encode('macroman')\r\ns = s.decode('utf-8')\r\nprint(s)\r\n```\r\n\"image\"\r\n", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1663399821, "node_id": "I_kwDOBm6k_c5jJXeN", "number": 2058, "title": "500 \"attempt to write a readonly database\" error caused by \"PRAGMA schema_version\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2023-04-11T23:57:50Z", "updated_at": "2023-04-13T16:35:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I've not been able to replicate this myself yet, but I've seen log files from a user affected by it.\r\n\r\n```\r\nFile \"/usr/local/lib/python3.11/site-packages/datasette/views/base.py\", line 89, in dispatch_request\r\nawait self.ds.refresh_schemas()\r\nFile \"/usr/local/lib/python3.11/site-packages/datasette/app.py\", line 371, in refresh_schemas\r\nawait self._refresh_schemas()\r\nFile \"/usr/local/lib/python3.11/site-packages/datasette/app.py\", line 386, in _refresh_schemas\r\nschema_version = (await db.execute(\"PRAGMA schema_version\")).first()[0]\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/datasette/database.py\", line 267, in execute\r\nresults = await self.execute_fn(sql_operation_in_thread)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/datasette/database.py\", line 213, in execute_fn\r\nreturn await asyncio.get_event_loop().run_in_executor(\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/concurrent/futures/thread.py\", line 58, in run\r\nresult = self.fn(*self.args, **self.kwargs)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/datasette/database.py\", line 211, in in_thread\r\nreturn fn(conn)\r\n^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/datasette/database.py\", line 237, in sql_operation_in_thread\r\ncursor.execute(sql, params if params is not None else {})\r\nsqlite3.OperationalError: attempt to write a readonly database\r\n```\r\nThat's running the official Datasette Docker image on https://fly.io/ - it's causing 500 errors on every page of their site.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2058/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1665510265, "node_id": "I_kwDOBm6k_c5jRat5", "number": 2060, "title": "Clean up a bunch of warnings from ruff", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-04-13T01:23:02Z", "updated_at": "2023-04-13T01:23:02Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "See:\r\n- #2056\r\n\r\n`ruff` spots a bunch of warnings about things like unused variables - would be good 
to clean up as many of these as possible.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2060/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1657861026, "node_id": "I_kwDOBm6k_c5i0POi", "number": 2054, "title": "Make detailed notes on how table, query and row views work right now", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 13, "created_at": "2023-04-06T18:21:09Z", "updated_at": "2023-04-07T20:14:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Research to help influence the following:\r\n- #2049 \r\n- #2053 \r\n- #2050 \r\n- #262 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2054/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1646734246, "node_id": "I_kwDOBm6k_c5iJyum", "number": 2049, "title": "Custom SQL queries should use new JSON ?_extra= format", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 4, "created_at": "2023-03-30T00:42:53Z", "updated_at": "2023-04-05T23:29:27Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Related:\r\n- #262\r\n\r\nI've made the change to the table view, now I need the new format to work for arbitrary SQL queries too.\r\n\r\nNote that this incorporates both arbitrary SQL queries and canned queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2049/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1649791661, "node_id": "I_kwDOBm6k_c5iVdKt", "number": 2050, "title": "Row page JSON should use new ?_extra= format", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2023-03-31T17:56:53Z", "updated_at": "2023-03-31T17:59:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://latest.datasette.io/fixtures/facetable/2.json\r\n\r\nRelated:\r\n- #2049\r\n- #1709 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2050/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1649793525, "node_id": "I_kwDOBm6k_c5iVdn1", "number": 2051, "title": "`?_extra=row_urls` for table pages", "user": {"value": 9599, "label": "simonw"}, "state": 
"open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-31T17:58:36Z", "updated_at": "2023-03-31T17:58:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Provides URLs to the JSON version of those rows. Maybe it persists the `?_shape=` option too? Not sure about that.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2051/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1615692818, "node_id": "I_kwDOBm6k_c5gTYQS", "number": 2035, "title": "Potential feature: special support for `?a=1&a=2` on the query page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 14, "created_at": "2023-03-08T18:05:03Z", "updated_at": "2023-03-31T16:09:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "From a discussion on Discord: https://discord.com/channels/823971286308356157/996877076982415491/1082789517062320138\r\n\r\nThe key idea is to make it easier for people to implement `where id in (...)` that's populated from query string arguments.\r\n\r\nWhat if you could add `?id=11&id=32&id=62` to the URL and have that made available as a list that can be used in the query?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2035/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 323658641, "node_id": "MDU6SXNzdWUzMjM2NTg2NDE=", "number": 262, "title": "Add ?_extra= mechanism for requesting extra properties in JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 27, "created_at": "2018-05-16T14:55:42Z", "updated_at": "2023-03-29T06:22:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette views currently work by creating a set of data that should be returned as JSON, then defining an additional, optional `template_data()` function which is called if the view is being rendered as HTML.\r\n\r\nThis `template_data()` function calculates extra template context variables which are necessary for the HTML view but should not be included in the JSON.\r\n\r\nExample of how that is used today: https://github.com/simonw/datasette/blob/2b79f2bdeb1efa86e0756e741292d625f91cb93d/datasette/views/table.py#L672-L704\r\n\r\nWith features like Facets in #255 I'm beginning to want to move more items into the `template_data()` - in the case of facets it's the `suggested_facets` array. This saves that feature from being calculated (involving several SQL queries) for the JSON case where it is unlikely to be used.\r\n\r\nBut... 
as an API user, I want to still optionally be able to access that information.\r\n\r\nSolution: Add a `?_extra=suggested_facets&_extra=table_metadata` argument which can be used to optionally request additional blocks to be added to the JSON API.\r\n\r\nThen redefine as many of the current `template_data()` features as extra arguments instead, and teach Datasette to return certain extras by default when rendering templates.\r\n\r\nThis could allow the JSON representation to be slimmed down further (removing e.g. the `table_definition` and `view_definition` keys) while still making that information available to API users who need it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/262/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1641013220, "node_id": "I_kwDOBm6k_c5hz9_k", "number": 2045, "title": "First column on a view page has no facet option in cog menu", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2023-03-26T18:02:47Z", "updated_at": "2023-03-26T18:02:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "e.g. first column on this page - cog menu has no option to facet.\r\n\r\nhttps://datasette.io/content/tools\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2045/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1121583414, "node_id": "I_kwDOBm6k_c5C2gE2", "number": 1619, "title": "JSON link on row page is 404 if base_url setting is used", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-02-02T07:09:53Z", "updated_at": "2023-03-24T15:38:04Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "On my local environment:\r\n\r\n datasette fixtures.db -p 3344 --setting base_url /foo/bar/\r\n\r\nThen hit http://127.0.0.1:3344/foo/bar/fixtures/table%2Fwith%2Fslashes.csv/3\r\n\r\n\"image\"\r\n\r\nBut... 
that `json` link goes here, which is a 404:\r\n\r\nhttp://127.0.0.1:3344/foo/bar/foo/bar/fixtures/table%2Fwith%2Fslashes.csv/3?_format=json", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1619/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1636616315, "node_id": "I_kwDOBm6k_c5hjMh7", "number": 2042, "title": "Gather feedback on new ?_extra= design", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-22T23:07:43Z", "updated_at": "2023-03-22T23:08:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Now that I've landed:\r\n- #1999\r\n\r\nSee also:\r\n- #262\r\n\r\nI want to get some feedback from people on the design of the new `?_extra=` feature, before freezing it into Datasette 1.0.\r\n\r\nThe big change is that the default JSON representation is now MUCH slimmer - it only gives you keys for `\"next\"` and `\"rows\"`, where rows is a list of JSON objects (not a list of arrays as was previously the default) - for example https://latest.datasette.io/fixtures/sortable.json\r\n\r\nIf you want extra stuff you can ask for it with the new `?_extra=` parameter - e.g. https://latest.datasette.io/fixtures/sortable.json?_extra=columns&_extra=suggested_facets\r\n\r\nYou can use `?_extra=extras` to see a list of available extras: https://latest.datasette.io/fixtures/sortable.json?_extra=extras\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2042/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 317001500, "node_id": "MDU6SXNzdWUzMTcwMDE1MDA=", "number": 236, "title": "datasette publish lambda plugin", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 11, "created_at": "2018-04-23T22:10:30Z", "updated_at": "2023-03-12T14:04:15Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Refs #217 - create a publish plugin that can deploy to AWS Lambda.\r\n\r\nhttps://docs.aws.amazon.com/lambda/latest/dg/limits.html says lambda packages can be up to 50 MB, so this would only work with smaller databases (the command can check the filesize before attempting to package and deploy it).\r\n\r\nLambdas do get a 512 MB `/tmp` directory too, so for larger databases the function could start and then download up to 512MB from an S3 bucket - so the plugin could take an optional S3 bucket to write to and know how to upload the `.db` file there and then have the lambda download it on startup.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/236/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 
1618249044, "node_id": "I_kwDOBm6k_c5gdIVU", "number": 2038, "title": "Consider a `strict_templates` setting", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-03-10T02:09:13Z", "updated_at": "2023-03-10T02:11:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "A setting which turns on Jinja strict mode, so any templates that access undefined variables raise a hard error.\r\n\r\nPrototype here:\r\n```diff\r\ndiff --git a/datasette/app.py b/datasette/app.py\r\nindex 40416713..1428a3f0 100644\r\n--- a/datasette/app.py\r\n+++ b/datasette/app.py\r\n@@ -200,6 +200,7 @@ SETTINGS = (\r\n \"Allow display of SQL trace debug information with ?_trace=1\",\r\n ),\r\n Setting(\"base_url\", \"/\", \"Datasette URLs should use this base path\"),\r\n+ Setting(\"strict_templates\", False, \"Raise errors for undefined template variables\"),\r\n )\r\n _HASH_URLS_REMOVED = \"The hash_urls setting has been removed, try the datasette-hashed-urls plugin instead\"\r\n OBSOLETE_SETTINGS = {\r\n@@ -399,11 +400,14 @@ class Datasette:\r\n ),\r\n ]\r\n )\r\n+ env_extras = {}\r\n+ if self.setting(\"strict_templates\"):\r\n+ env_extras[\"undefined\"] = StrictUndefined\r\n self.jinja_env = Environment(\r\n loader=template_loader,\r\n autoescape=True,\r\n enable_async=True,\r\n- undefined=StrictUndefined,\r\n+ **env_extras,\r\n )\r\n self.jinja_env.filters[\"escape_css_string\"] = escape_css_string\r\n self.jinja_env.filters[\"quote_plus\"] = urllib.parse.quote_plus\r\n```\r\nExplored this idea a bit in:\r\n- #1999", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2038/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1618130434, "node_id": "I_kwDOJHON9s5gcrYC", "number": 11, "title": "Implement a SQL view to make it easier to query files in a nested folder", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-03-09T23:19:28Z", "updated_at": "2023-03-09T23:24:01Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Working with nested data in SQL is tricky, can I make it easier with a view or canned query?", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/11/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1617938730, "node_id": "I_kwDOJHON9s5gb8kq", "number": 9, "title": "Default to just storing plaintext, store HTML if `--html` is passed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-09T20:19:06Z", "updated_at": "2023-03-09T20:19:06Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The full `body` version of the notes can get HUGE, due to embedded images. 
It turns out for my own purposes I'm usually happy with just the `plaintext` version.\r\n\r\nI'm tempted to say you don't get HTML unless you pass a `--html` option.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/9/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1616429236, "node_id": "I_kwDOJHON9s5gWMC0", "number": 4, "title": "Support incremental updates", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-03-09T05:14:00Z", "updated_at": "2023-03-09T18:20:56Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Running this script can take several hours against a large notes database.\r\n\r\nWould be neat if it could run against just the notes that have been modified since it last ran. Could pull the max `updated` date and then keep on looping until it finds one modified before then.\r\n\r\nProblem is I don't actually know what order it iterates over the notes in.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/4/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1616440856, "node_id": "I_kwDOJHON9s5gWO4Y", "number": 5, "title": "Configure full text search", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-09T05:20:46Z", "updated_at": "2023-03-09T05:20:46Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "FTS would be useful.\r\n\r\nMaybe even extract the plain text from the notes to make that index easier to create, rather than creating it against the HTML. 
Can use the `plaintext` property for that.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1579973223, "node_id": "I_kwDOBm6k_c5eLHpn", "number": 2024, "title": "Mention WAL mode in documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-02-10T16:11:10Z", "updated_at": "2023-02-10T16:11:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It's not currently obvious from the docs how you can ensure that Datasette runs well in situations where other processes may update the underlying SQLite files.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2024/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1573424830, "node_id": "I_kwDOBm6k_c5dyI6-", "number": 2019, "title": "Refactor out the keyset pagination code", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 14, "created_at": "2023-02-06T23:04:00Z", "updated_at": "2023-02-08T01:40:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "While working on:\r\n- #1999\r\n\r\nI noticed that some of the most complex code in the existing table view is the code that implements keyset pagination:\r\n\r\nhttps://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/views/table.py#L417-L493\r\n\r\nExtracting that into a utility function would simplify that code a lot.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2019/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 473288428, "node_id": "MDExOlB1bGxSZXF1ZXN0MzAxNDgzNjEz", "number": 564, "title": "First proof-of-concept of Datasette Library", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-07-26T10:22:26Z", "updated_at": "2023-02-07T15:14:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/564", "body": "Refs #417. 
Run it like this:\r\n\r\n datasette -d ~/Library\r\n\r\nUses a new plugin hook - available_databases()\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/564/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 1, "state_reason": null} {"id": 1564774831, "node_id": "I_kwDOBm6k_c5dRJGv", "number": 2012, "title": "Missing space in database summary", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-31T18:01:13Z", "updated_at": "2023-01-31T18:01:13Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spotted this on an instance index page:\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2012/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1564769997, "node_id": "I_kwDOBm6k_c5dRH7N", "number": 2011, "title": "Applied facet did not result in an \"x\" icon to dismiss it", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-01-31T17:57:44Z", "updated_at": "2023-01-31T17:58:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "![CleanShot 2023-01-31 at 09 55 56@2x](https://user-images.githubusercontent.com/9599/215843684-1761a230-d490-4f87-be6d-186319366794.png)\r\n\r\nThat's against this data https://data.sfgov.org/City-Management-and-Ethics/Supplier-Contracts/cqi5-hm2d imported using https://datasette.io/plugins/datasette-socrata\r\n\r\nIt's for `Contract Type` of `Non-Purchasing Contract (Rents, etc.)` - so possible that some of the spaces or punctuation in either the name or the value tripped up the code that decides if the X icon should be displayed.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2011/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1563264257, "node_id": "I_kwDOBm6k_c5dLYUB", "number": 2010, "title": "Row page should default to card view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2023-01-30T21:49:37Z", "updated_at": "2023-01-30T21:52:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette currently uses the same table layout on the row pages as it does on the table pages:\r\n\r\nhttps://datasette.io/content/pypi_packages?_sort=name&name__exact=datasette-column-inspect\r\n\r\n\"image\"\r\n\r\nhttps://datasette.io/content/pypi_packages/datasette-column-inspect\r\n\r\n\"image\"\r\n\r\nIf you shrink down to mobile width you get this instead, on both of those 
pages:\r\n\r\n\"image\"\r\n\r\nI think that view, which I think of as the \"card view\", is plain better if you're looking at just a single row - and it (or a variant of it) should be the default presentation on the row page.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2010/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1186696202, "node_id": "I_kwDOBm6k_c5Gu4wK", "number": 1696, "title": "Show foreign key label when filtering", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-30T16:18:54Z", "updated_at": "2023-01-29T20:56:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "For example here:\r\n\r\n\"image\"\r\n\r\n3 corresponds to \"Human Related: Other\" - it would be neat to display this in this area of the page somehow.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1696/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1558644003, "node_id": "I_kwDOBm6k_c5c5wUj", "number": 2006, "title": "Teach `datasette publish` to pin to `datasette<1.0` in a 0.x release", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2023-01-26T19:17:40Z", "updated_at": "2023-01-26T19:20:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just realized that when I ship Datasette 1.0 there may be automated deployments out there which could deploy the 1.0 version by accident, potentially breaking any customizations that aren't compatible with the 1.0 changes.\r\n\r\nI can hopefully help avoid that by shipping one last entry in the `0.x` series that ensures `datasette publish` pins to `<1.0` when it installs Datasette itself.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2006/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1557507274, "node_id": "I_kwDOBm6k_c5c1azK", "number": 2005, "title": "`extra_template_vars` should be OK to return `None`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-01-26T01:40:45Z", "updated_at": "2023-01-26T01:41:50Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Got this exception and had to make sure it always returned `{}`:\r\n\r\n```\r\n File \".../python3.11/site-packages/datasette/app.py\", line 1049, in render_template\r\n assert isinstance(extra_vars, dict), \"extra_vars is of type {}\".format(\r\nAssertionError: extra_vars is of type 
\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2005/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 743371103, "node_id": "MDU6SXNzdWU3NDMzNzExMDM=", "number": 1099, "title": "Support linking to compound foreign keys", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2020-11-15T23:23:17Z", "updated_at": "2023-01-25T00:58:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Reported as a bug in #1098 because they caused 500 errors - but it would be even better if Datasette could hyperlink to related rows via compound foreign keys.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1099/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1554032168, "node_id": "I_kwDOBm6k_c5coKYo", "number": 2002, "title": "Document how actors are displayed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-24T00:08:49Z", "updated_at": "2023-01-24T00:08:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/e4ebef082de90db4e1b8527abc0d582b7ae0bc9d/datasette/utils/__init__.py#L1052-L1056\r\n\r\nThis logic should be reflected in the documentation on https://docs.datasette.io/en/stable/authentication.html#actors", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2002/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 749283032, "node_id": "MDU6SXNzdWU3NDkyODMwMzI=", "number": 1101, "title": "register_output_renderer() should support streaming data", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 13, "created_at": "2020-11-24T02:17:09Z", "updated_at": "2023-01-21T22:07:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> I'd like to implement this by first extending the `register_output_renderer()` hook to support streaming huge responses, then switching CSV to use the plugin hook in addition to TSV using it.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1096#issuecomment-732542285_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1101/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", 
"draft": null, "state_reason": null} {"id": 1551113681, "node_id": "I_kwDOBm6k_c5cdB3R", "number": 1998, "title": "`datasette --version` should also show the SQLite version", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-01-20T16:11:30Z", "updated_at": "2023-01-20T18:19:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Idea came up here: https://discord.com/channels/823971286308356157/823971286941302908/1066026473003159783", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1998/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1529707837, "node_id": "I_kwDOBm6k_c5bLX09", "number": 1988, "title": "Reconsider pattern where plugins could break existing template context", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2023-01-11T21:13:43Z", "updated_at": "2023-01-11T21:25:05Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> I hadn't run into an issue with plugins like `datasette-template-sql` interfering with the existing context for other features before! Definitely not a good thing.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-write/issues/6#issuecomment-1379490596_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1988/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1525815985, "node_id": "I_kwDOBm6k_c5a8hqx", "number": 1983, "title": "Make CustomJSONEncoder a documented public API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-01-09T15:27:05Z", "updated_at": "2023-01-09T15:35:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It's used by `datasette-geojson` here: https://github.com/eyeseast/datasette-geojson/commit/902bf135a5a33a0dc8264673d00a59a67cb05152", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1983/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1426080014, "node_id": "I_kwDOBm6k_c5VAEEO", "number": 1867, "title": "/db/table/-/rename API (also allows atomic replace)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2022-10-27T18:13:23Z", "updated_at": "2023-01-09T15:34:12Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> There's one catch with 
batched inserts: if your CLI tool fails half way through you could end up with a partially populated table - since a bunch of batches will have succeeded first.\r\n>\r\n> ...\r\n>\r\n> If people care about that kind of thing they could always push all of their inserts to a table called `_tablename` and then atomically rename that once they've uploaded all of the data (assuming I provide an atomic-rename-this-table mechanism).\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1866#issuecomment-1293893789_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1867/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1082584499, "node_id": "I_kwDOBm6k_c5Ahu2z", "number": 1558, "title": "Redesign `facet_results` JSON structure prior to Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-12-16T19:45:10Z", "updated_at": "2023-01-09T15:31:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Decision: as an initial fix I'm going to de-duplicate those keys by using `tags__array` etc - with a `_2` on the end if that key is already used.\r\n>\r\n> I'll open a separate issue to redesign this better for Datasette 1.0.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/625#issuecomment-996130862_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1558/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1524983536, "node_id": "I_kwDOBm6k_c5a5Wbw", "number": 1981, "title": "Canned query field labels truncated", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-01-09T06:04:24Z", "updated_at": "2023-01-09T06:05:44Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Eg here on mobile: https://timezones.datasette.io/timezones/by_point?longitude=-0.1406632&latitude=50.8246776\r\n\r\n![107A1894-D1DA-4158-9EA3-40C840DD10E3](https://user-images.githubusercontent.com/9599/211248895-c922ce61-95d3-47ca-9314-dcff7c86afab.jpeg)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1981/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1524867951, "node_id": "I_kwDOBm6k_c5a46Nv", "number": 1980, "title": "\"Cannot sort table by id\" when sortable_columns is used", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-01-09T03:21:33Z", "updated_at": "2023-01-09T03:23:53Z", 
"closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I had an instance with this in `metadata.yml`:\r\n\r\n```yaml\r\ndatabases:\r\n timezones:\r\n tables:\r\n timezones:\r\n sortable_columns:\r\n - tzid\r\n```\r\nWhen I clicked on the \"Apply\" button here:\r\n\r\n\"image\"\r\n\r\nIt sent me to `/timezones/timezones?_sort=id&id__exact=133` with the error message:\r\n\r\n> 500: Cannot sort table by id", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1980/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1516815571, "node_id": "I_kwDOBm6k_c5aaMTT", "number": 1975, "title": "_col=id can cause id column to export twice in CSV export", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-03T00:25:15Z", "updated_at": "2023-01-03T00:25:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://datasette.simonwillison.net/simonwillisonblog/blog_entry.csv?_col=id&_col=title&_col=body&_labels=on&_size=1\r\n\r\n```csv\r\nid,id,title,body\r\n1,1,WaSP Phase II,\"

The Web Standards project has launched Phase II.

\"\r\n```\r\nThat should not have two `id` columns.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1975/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1115435536, "node_id": "I_kwDOBm6k_c5CfDIQ", "number": 1614, "title": "Try again with SQLite codemirror support", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-26T20:05:20Z", "updated_at": "2022-12-23T21:27:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I tried and failed to implement autocomplete a while ago. Relevant code:\r\n\r\nhttps://github.com/codemirror/legacy-modes/blob/8f36abca5f55024258cd23d9cfb0203d8d244f0d/mode/sql.js#L335\r\n\r\nSounds like upgrading to CodeMirror 6 ASAP would be worthwhile since it has better accessibility and touch screen support: https://codemirror.net/6/", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1614/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1373224657, "node_id": "I_kwDOCGYnMM5R2b7R", "number": 488, "title": "`sqlite-utils transform` should set empty strings to null when converting text columns to integer/float", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-09-14T15:51:30Z", "updated_at": "2022-12-23T17:38:55Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "```\r\n/tmp % echo \"id,age,weight\\n1,3,2.5\\n2,,\" | sqlite-utils insert test.db test - --csv\r\n/tmp % sqlite-utils schema test.db \r\nCREATE TABLE [test] (\r\n [id] TEXT,\r\n [age] TEXT,\r\n [weight] TEXT\r\n);\r\n/tmp % sqlite-utils transform test.db test --type age integer --type weight float \r\n/tmp % sqlite-utils schema test.db \r\nCREATE TABLE \"test\" (\r\n [id] TEXT,\r\n [age] INTEGER,\r\n [weight] FLOAT\r\n);\r\n/tmp % sqlite-utils rows test.db test\r\n[{\"id\": \"1\", \"age\": 3, \"weight\": 2.5},\r\n {\"id\": \"2\", \"age\": \"\", \"weight\": \"\"}]\r\n```\r\nIt would be neat if this resulted in the following instead:\r\n```\r\n {\"id\": \"2\", \"age\": null, \"weight\": null}\r\n```\r\nRelated Discord discussion: https://discord.com/channels/823971286308356157/823971286941302908/1019635490833567794", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/488/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1447050738, "node_id": "I_kwDOBm6k_c5WQD3y", "number": 1886, "title": "Call for birthday presents: if you're using Datasette, let us know how you're using it here", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": 
null, "milestone": null, "comments": 13, "created_at": "2022-11-13T19:25:51Z", "updated_at": "2022-12-18T17:34:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette is 5 years old today. To celebrate, I'm asking the community for birthday presents:\r\n\r\nhttps://simonwillison.net/2022/Nov/13/datasette-birthday/\r\n\r\n> To celebrate this open source project\u2019s birthday, I\u2019ve decided to try something new: I\u2019m going to ask for birthday presents.\r\n> \r\n> An aspect of Datastte\u2019s marketing that I\u2019ve so far neglected is social proof. I think it\u2019s time to change that: I know people are using the software to do cool things, but this often happens behind closed doors.\r\n> \r\n> For Datastte\u2019s birthday, I\u2019m looking for endorsements and case studies and just general demonstrations that show how people are using it do so cool stuff.\r\n> \r\n> So: if you\u2019ve used Datasette to solve a problem, and you\u2019re willing to publicize it, please give us the gift of your endorsement!\r\n> \r\n> [...]\r\n> \r\n> Add a comment to [this issue thread](https://github.com/simonw/datasette/issues/1886) describing what you\u2019re doing. Just a few sentences is fine\u2014though a screenshot or even a link to a live instance would be even better", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1886/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 2, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1501778647, "node_id": "I_kwDOBm6k_c5Zg1LX", "number": 1964, "title": "Cog menu is not keyboard accessible (also no ARIA)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-12-18T06:36:28Z", "updated_at": "2022-12-18T06:37:28Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This menu here: https://latest.datasette.io/fixtures/attraction_characteristic\r\n\r\nYou can tab to it (see the outline) and hit space or enter to open it, but you can't then navigate the items in the open menu using the keyboard.\r\n\r\n![cog-menu](https://user-images.githubusercontent.com/9599/208284973-2a04cdab-ed95-4316-979c-67fe5f7787db.gif)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1964/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1408757705, "node_id": "I_kwDOBm6k_c5T9-_J", "number": 1843, "title": "Intermittent \"Too many open files\" error running tests", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 16, "created_at": "2022-10-14T04:45:01Z", "updated_at": "2022-12-17T22:02:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Partial stack trace from one of them:\r\n```\r\n/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.10/site-packages/jinja2/loaders.py:200: in get_source\r\n f = open_if_exists(filename)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nfilename = '/Users/simon/Dropbox/Development/datasette/datasette/templates/error.html', mode = 'rb'\r\n\r\n def open_if_exists(filename: str, mode: str = \"rb\") -> t.Optional[t.IO]:\r\n \"\"\"Returns a file descriptor for the filename if that file exists,\r\n otherwise ``None``.\r\n \"\"\"\r\n if not os.path.isfile(filename):\r\n return None\r\n \r\n> return open(filename, mode)\r\nE OSError: [Errno 24] Too many open files: '/Users/simon/Dropbox/Development/datasette/datasette/templates/error.html'\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1843/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "reopened"} {"id": 1500636982, "node_id": "I_kwDOBm6k_c5Zcec2", "number": 1962, "title": "Alternative, async-friendly pattern for `make_app_client()` and similar - fully retire `TestClient`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-12-16T17:56:51Z", "updated_at": "2022-12-16T21:55:29Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In this issue I replaced a whole bunch of places that used the non-async `app_client` fixture with an async `ds_client` fixture instead:\r\n- #1959\r\n\r\nBut I didn't get everything, and a lot of tests are still using the old `TestClient` mechanism as a result.\r\n\r\nThe main work here is replacing all of the `app_client_...` fixtures which use variants on the default client - and changing the tests that call `make_app_client()` to do something else instead.\r\n\r\nThis requires some careful thought. 
I need to come up with a really nice pattern for creating variants on the `ds_client` default fixture - and do so in a way that minimizes the number of open files, refs:\r\n\r\n- #1843", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1962/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1493471221, "node_id": "I_kwDOBm6k_c5ZBI_1", "number": 1949, "title": "`.json` errors should be returned as JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 10, "created_at": "2022-12-13T06:14:12Z", "updated_at": "2022-12-15T00:46:27Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Eg the error in this issue:\r\n- #1945 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1949/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1490576818, "node_id": "I_kwDOBm6k_c5Y2GWy", "number": 1943, "title": "`/-/permissions` should list available permissions", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2022-12-11T23:38:03Z", "updated_at": "2022-12-15T00:41:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Idea: a `/-/permissions` introspection endpoint for listing registered permissions\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1939#issuecomment-1345691103_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1943/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1497577017, "node_id": "I_kwDOBm6k_c5ZQzY5", "number": 1957, "title": "Reconsider row value truncation on query page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-12-14T23:49:47Z", "updated_at": "2022-12-14T23:50:50Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Consider this example: https://ripgrep.datasette.io/repos?sql=select+json_group_array%28full_name%29+from+repos\r\n\r\n```sql\r\nselect json_group_array(full_name) from repos\r\n```\r\n\r\n![CleanShot 2022-12-14 at 15 48 32@2x](https://user-images.githubusercontent.com/9599/207739709-8177f683-f938-49a1-8225-42791fad88fe.png)\r\n\r\nMy intention here was to get a string of JSON I can copy and paste elsewhere - see: https://til.simonwillison.net/sqlite/compare-before-after-json\r\n\r\nThe truncation isn't helping here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", 
"active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1957/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 855296937, "node_id": "MDU6SXNzdWU4NTUyOTY5Mzc=", "number": 1295, "title": "Errors should have links to further information", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-04-11T12:39:12Z", "updated_at": "2022-12-14T23:28:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by this tweet:\r\nhttps://twitter.com/willmcgugan/status/1381186384510255104\r\n\r\n> While I am thinking about faqs. I\u2019d also like to add short URLs to Rich exceptions.\r\n>\r\n> I loath cryptic error messages, and I\u2019ve created a fair few myself. In Rich I\u2019ve tried to make them as plain English as possible. But...\r\n>\r\n> would be great if every error message linked to a page that explains the error in detail and offers fixes.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1295/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1495716243, "node_id": "I_kwDOBm6k_c5ZJtGT", "number": 1952, "title": "Improvements to /-/create-token restrictions interface", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2022-12-14T05:22:39Z", "updated_at": "2022-12-14T05:23:13Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> It would be neat not to show write permissions against immutable databases too - and not hard from a performance perspective since it doesn't involve hundreds more permission checks.\r\n>\r\n> That will need permissions to grow a flag for if they need a mutable database though, which is a bigger job.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1947#issuecomment-1350414402_\r\n\r\nAlso, DO show the `_memory` database there if Datasette was started in `--crossdb` mode.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1952/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1468689139, "node_id": "I_kwDOBm6k_c5Ximrz", "number": 1914, "title": "Finalize design of JSON for Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 1, "created_at": "2022-11-29T20:59:10Z", "updated_at": "2022-12-13T06:15:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Tracking issue.\r\n\r\n- [ ] #1709\r\n- [ ] #1729\r\n- [ ] #1875", "repo": {"value": 107914493, "label": "datasette"}, "type": 
"issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1914/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1468495358, "node_id": "I_kwDOBm6k_c5Xh3X-", "number": 1910, "title": "Check incoming column types on various write APIs", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 0, "created_at": "2022-11-29T18:09:10Z", "updated_at": "2022-12-13T05:29:09Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> I do think this needs type checking - I just tried and you really can send a string to an integer column and have it work, which feels bad.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1863#issuecomment-1331089156_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1910/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1428630253, "node_id": "I_kwDOBm6k_c5VJyrt", "number": 1873, "title": "Ensure insert API has good tests for rowid and compound primark key tables", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 11, "created_at": "2022-10-30T06:22:17Z", "updated_at": "2022-12-13T05:29:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Following:\r\n- #1866\r\n\r\nI need to design and implement various edge-cases or primary keys:\r\n\r\n- Table without an auto-incrementing primary key\r\n- Table with compound primary keys\r\n- Table with just a `rowid`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1873/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "reopened"} {"id": 1430797211, "node_id": "I_kwDOBm6k_c5VSDub", "number": 1875, "title": "Figure out design for JSON errors (consider RFC 7807)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 7, "created_at": "2022-11-01T03:14:15Z", "updated_at": "2022-12-13T05:29:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://datatracker.ietf.org/doc/draft-ietf-httpapi-rfc7807bis/ is a brand new standard.\r\n\r\nSince I need a neat, predictable format for my JSON errors, maybe I should use this one?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1875/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, 
\"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1447465004, "node_id": "I_kwDOBm6k_c5WRpAs", "number": 1889, "title": "Ability to create new tokens via the API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 0, "created_at": "2022-11-14T06:21:36Z", "updated_at": "2022-12-13T05:29:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Refs:\r\n- #1850\r\n\r\nInitially I decided that the API shouldn't be able to create new tokens at all - I don't like the idea of an API token holder creating themselves additional tokens.\r\n\r\nThen I realized that two of the API features are specifically more useful if you can generate fresh tokens via the API:\r\n\r\n- Tokes that expire after a time limit are MUCH more useful if they can be automatically generated\r\n- Likewise, tokens that are restricted to a subset of permissions (see #1855) make more sense to be generated like this, especially in conjunction with expiry times", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1889/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1216436131, "node_id": "I_kwDOBm6k_c5IgVej", "number": 1721, "title": "Implement plugin hooks: `register_table_extras`, `register_row_extras`, `register_query_extras`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 0, "created_at": "2022-04-26T20:21:49Z", "updated_at": "2022-12-13T05:29:07Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Designed in:\r\n- #1720\r\n\r\nPart of:\r\n- #262\r\n- #1709", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1721/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}