{"id": 1054244712, "node_id": "I_kwDOBm6k_c4-1n9o", "number": 1510, "title": "Datasette 1.0 documented template context (maybe via API docs)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:23:58Z", "updated_at": "2023-06-28T02:05:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1054243511, "node_id": "I_kwDOBm6k_c4-1nq3", "number": 1509, "title": "Datasette 1.0 JSON API (and documentation)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:22:45Z", "updated_at": "2022-03-15T20:38:56Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The new JSON API in a stable, documented form.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1509/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1059509927, "node_id": "I_kwDOBm6k_c4_Jtan", "number": 1525, "title": "\"Links from other tables\" broken for columns starting with underscore", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-11-21T22:55:08Z", "updated_at": "2021-11-30T06:39:01Z", "closed_at": "2021-11-30T06:34:35Z", "author_association": "OWNER", "pull_request": null, "body": "Same bug as #1506, this time it's this link or the row page:\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1525/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1060631257, "node_id": "I_kwDOBm6k_c4_N_LZ", "number": 1528, "title": "Add new `\"sql_file\"` key to Canned Queries in metadata?", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-11-22T21:58:01Z", "updated_at": "2022-06-10T03:23:08Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Currently for canned queries, you have to inline SQL in your `metadata.yaml` like so:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql: |-\r\n select neighborhood, facet_cities.name, state\r\n from facetable\r\n join facet_cities on facetable.city_id = 
facet_cities.id\r\n where neighborhood like '%' || :text || '%'\r\n order by neighborhood\r\n title: Search neighborhoods\r\n```\r\n\r\nThis works fine, but for a few reasons, I usually have my canned queries already written in separate `.sql` files. I'd like to re-use those instead of re-writing them. \r\n\r\nSo, I'd like to see a new `\"sql_file\"` key that works like so:\r\n\r\n`metadata.yaml`:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql_file: neighborhood_search.sql\r\n title: Search neighborhoods\r\n```\r\n`neighborhood_search.sql`:\r\n```sql\r\nselect neighborhood, facet_cities.name, state\r\nfrom facetable\r\njoin facet_cities on facetable.city_id = facet_cities.id\r\nwhere neighborhood like '%' || :text || '%'\r\norder by neighborhood\r\n```\r\n\r\nBoth of these would work in the exact same way, where Datasette would open + include `neighborhood_search.sql` on startup. \r\n\r\n\r\nA few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml:\r\n\r\n- Keeping SQL in standalone SQL files means syntax highlighting and other text editor integrations in my code\r\n- Multiline strings in yaml, while functional, are a tad cumbersome and are hard to edit\r\n- Works well with other tools (can pipe `.sql` files into the `sqlite3` CLI, or use them with other SQLite clients more easily)\r\n- Typically my canned queries are quite long compared to everything else in my metadata.yaml, so I'd love to separate them where possible\r\n\r\nLet me know if this is a feature you'd like to see; I can try to send up a PR if this sounds right!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1528/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1066288689, "node_id": "I_kwDOBm6k_c4_jkYx", "number": 1538, "title": "Research pattern for re-registering existing Click tools with register_commands", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-11-29T17:09:47Z", "updated_at": "2021-11-29T17:32:44Z", "closed_at": "2021-11-29T17:27:16Z", "author_association": "OWNER", "pull_request": null, "body": "Building a Datasette plugin that imports an existing Click CLI tool and re-registers it is proving hard - Click doesn't really want you to do that. I tried this:\r\n```python\r\nfrom datasette import hookimpl\r\nfrom git_history.cli import file as git_history_file\r\n\r\n\r\n@hookimpl\r\ndef register_commands(cli):\r\n cli.command(name=\"git-history\")(git_history_file.callback)\r\n```\r\nBut when I run this:\r\n```\r\n % datasette git-history --help \r\nUsage: datasette git-history [OPTIONS]\r\n\r\n Analyze the history of a specific file and write it to SQLite\r\n\r\nOptions:\r\n --help Show this message and exit.\r\n```\r\nThe options are all missing - which means that the command doesn't actually work.
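\r\n\r\nOne pattern that might preserve them (an untested sketch - `Group.add_command()` attaches an existing `Command` object wholesale instead of rebuilding one from its bare callback):\r\n```python\r\nfrom datasette import hookimpl\r\nfrom git_history.cli import file as git_history_file\r\n\r\n\r\n@hookimpl\r\ndef register_commands(cli):\r\n    # Register the existing Command object directly rather than wrapping\r\n    # its .callback, so its options/arguments should come along with it\r\n    cli.add_command(git_history_file, name=\"git-history\")\r\n```\r\n\r\n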
Will need to research this pattern separately.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/git-history/issues/21#issuecomment-981835305_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1538/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1978023780, "node_id": "I_kwDOBm6k_c515j9k", "number": 2205, "title": "request.post_vars() method obliterates form keys with multiple values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 3, "created_at": "2023-11-05T23:25:08Z", "updated_at": "2023-11-06T04:10:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139\r\n\r\nIn GET requests you can do `?foo=1&foo=2` - you can do the same in POST requests, but the `dict()` call here eliminates those duplicates.\r\n\r\nYou can't even try calling `post_body()` and implement your own custom parsing because of:\r\n- #2204", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2205/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1075893249, "node_id": "I_kwDOBm6k_c5AINQB", "number": 1545, "title": "Custom pages don't work on windows", "user": {"value": 559711, "label": "ryascott"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-09T18:53:05Z", "updated_at": "2022-02-03T02:08:31Z", "closed_at": "2022-02-03T01:58:35Z", "author_association": "NONE", "pull_request": null, "body": "It seems that custom pages don't work when put in templates/pages\r\n\r\nTo reproduce on datasette version 0.59.4 using PowerShell on Windows 10 with Python 3.10.0\r\n\r\n mkdir -p templates/pages\r\n\r\n echo \"hello world\" >> templates/pages/about.html\r\n\r\nStart datasette\r\n \r\n datasette --template-dir templates/\r\n\r\nNavigate to [http://127.0.0.1:8001/about](url) and receive:\r\n \r\n Error 404:\r\n Database not found: about\r\n\r\n\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1545/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1077893013, "node_id": "I_kwDOBm6k_c5AP1eV", "number": 1551, "title": "`keep_blank_values=True` when parsing `request.args`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 3, "created_at": "2021-12-12T19:53:07Z", "updated_at": "2022-01-13T22:26:04Z", "closed_at": "2021-12-12T20:02:01Z", "author_association": "OWNER",
"pull_request": null, "body": "This code in `TableView` wouldn't be necessary: https://github.com/simonw/datasette/blob/492f9835aa7e90540dd0c6324282b109f73df71b/datasette/views/table.py#L396-L399\r\n\r\nIf that happened here instead: https://github.com/simonw/datasette/blob/492f9835aa7e90540dd0c6324282b109f73df71b/datasette/utils/asgi.py#L98-L100\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1518#issuecomment-991827468_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1551/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1079111498, "node_id": "I_kwDOBm6k_c5AUe9K", "number": 1553, "title": "if csv export is truncated in non streaming mode set informative response header", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-13T22:50:44Z", "updated_at": "2021-12-16T19:17:28Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "streaming mode is currently not enabled for custom queries, so the queries will be truncated to max row limit.\r\n\r\nit would be great if a response is truncated that an header signalling that was set in the header.\r\n\r\ni need to write some pagination code for getting full results back for a custom query and it would make the code much better if i could reliably known when there is nothing more to limit/offset ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1553/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1082584499, "node_id": "I_kwDOBm6k_c5Ahu2z", "number": 1558, "title": "Redesign `facet_results` JSON structure prior to Datasette 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-12-16T19:45:10Z", "updated_at": "2023-01-09T15:31:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Decision: as an initial fix I'm going to de-duplicate those keys by using `tags__array` etc - with a `_2` on the end if that key is already used.\r\n>\r\n> I'll open a separate issue to redesign this better for Datasette 1.0.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/625#issuecomment-996130862_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1558/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1082765654, "node_id": "I_kwDOBm6k_c5AibFW", "number": 1561, "title": "add hash id to \"_memory\" url if hashed url mode is turned on and crossdb is also turned on", "user": {"value": 536941, "label": 
"fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-17T00:45:12Z", "updated_at": "2022-03-19T04:45:40Z", "closed_at": "2022-03-19T04:45:40Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. \r\n\r\nOne way that it could work is to have the _memory hash be a hash of all the individual databases.\r\n\r\nOtherwise, crossdb queries can get quit out of data if using aggressive caching.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1561/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1097101917, "node_id": "I_kwDOBm6k_c5BZHJd", "number": 1588, "title": "`explain query plan select` is too strict about whitespace", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 3, "created_at": "2022-01-09T04:22:42Z", "updated_at": "2022-01-13T22:28:19Z", "closed_at": "2022-01-13T20:35:05Z", "author_association": "OWNER", "pull_request": null, "body": "`explain query plan select * from facetable` is allowed: https://latest.datasette.io/fixtures?sql=explain+query+plan+select+*+from+facetable\r\n\r\nBut... `explain query plan select * from facetable` (with two spaces before the `select`) returns a \"Statement must be a SELECT\" error: https://latest.datasette.io/fixtures?sql=explain+query+plan++select+*+from+facetable", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1588/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1102359726, "node_id": "I_kwDOBm6k_c5BtKyu", "number": 1594, "title": "Add a CLI reference page to the docs, inspired by sqlite-utils", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 3, "created_at": "2022-01-13T20:55:08Z", "updated_at": "2022-01-13T22:28:22Z", "closed_at": "2022-01-13T21:38:48Z", "author_association": "OWNER", "pull_request": null, "body": "Thought of this while posting this comment: https://github.com/simonw/datasette/issues/1591#issuecomment-1012506595\r\n\r\nI added https://sqlite-utils.datasette.io/en/stable/cli-reference.html to `sqlite-utils` in https://github.com/simonw/sqlite-utils/issues/383 and I _really_ like it - it's a page showing the `--help` output of every CLI command for that tool.\r\n\r\nIt's maintained using `cog`. 
One of the benefits is that I get a free commit history of changes to `--help` at https://github.com/simonw/sqlite-utils/commits/main/docs/cli-reference.rst", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1594/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1121618041, "node_id": "I_kwDOBm6k_c5C2oh5", "number": 1620, "title": "Link: rel=\"alternate\" to JSON for queries too", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-02-02T08:02:42Z", "updated_at": "2022-02-02T21:53:02Z", "closed_at": "2022-02-02T21:33:00Z", "author_association": "OWNER", "pull_request": null, "body": "Following:\r\n- #1533\r\n\r\nI implemented it for tables and rows but I should have done queries as well.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1620/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1108846067, "node_id": "I_kwDOBm6k_c5CF6Xz", "number": 1606, "title": "Tests failing against Python 3.6", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-01-20T04:22:44Z", "updated_at": "2022-01-20T04:36:42Z", "closed_at": "2022-01-20T04:36:42Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/runs/4877484366\r\n\r\n```\r\nE File \"/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/uvicorn/server.py\", line 67, in run\r\nE return asyncio.run(self.serve(sockets=sockets))\r\nE AttributeError: module 'asyncio' has no attribute 'run'\r\n```\r\nI think this may mean `uvicorn` has dropped support for Python 3.6.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1606/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1157182254, "node_id": "I_kwDOBm6k_c5E-TMu", "number": 1646, "title": "Configuration directory mode does not pick up other file extensions than .db", "user": {"value": 15640196, "label": "dnsos"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-03-02T13:15:23Z", "updated_at": "2022-10-07T23:06:17Z", "closed_at": "2022-10-07T23:03:35Z", "author_association": "NONE", "pull_request": null, "body": "Hello, I've been trying to run Datasette with the [configuration directory mode](https://docs.datasette.io/en/stable/settings.html#configuration-directory-mode) with a structure such as this one:\r\n\r\n```plain\r\nsome-directory/\r\n example.sqlite3\r\n another-example.db\r\n one-more.custom\r\n [...]\r\n```\r\n\r\n(In my scenario I can't just change 
the filename extension without other problems arising)\r\n\r\nNow databases with the `.sqlite3` or the custom filename extension are ignored by Datasette in this case. I'm aware that the docs state that a `.db` extension is required, but I was wondering if there is a reason for restricting this, or whether any workaround is available? When I run `datasette example.sqlite3` or `datasette one-more.custom` the databases are served by Datasette without a problem. \r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1646/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1174162781, "node_id": "I_kwDOBm6k_c5F_E1d", "number": 1666, "title": "Refactor URL routing to enable testing", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-03-19T03:52:29Z", "updated_at": "2022-03-19T16:32:03Z", "closed_at": "2022-03-19T16:32:03Z", "author_association": "OWNER", "pull_request": null, "body": "I ran into some bugs earlier with URL routing - having more robust testing around this (especially since they are defined using regular expressions) would be really useful.\r\n\r\n- A utility function that resolves a path against a list of routes and returns the match\r\n- Make the routes and regular expressions available from a private Datasette method\r\n- Add tests that exercise them\r\n\r\nRelated:\r\n- #1660", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1666/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1174302994, "node_id": "I_kwDOBm6k_c5F_nES", "number": 1667, "title": "Make route matched pattern groups more consistent", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-03-19T16:32:35Z", "updated_at": "2022-03-19T20:37:42Z", "closed_at": "2022-03-19T20:37:41Z", "author_association": "OWNER", "pull_request": null, "body": "> ... highlights how inconsistent the way the capturing works is.
Especially `as_format` which can be `None` or `\"\"` or `.json` or `json` or not used at all in the case of `TableView`.\r\n\r\nhttps://github.com/simonw/datasette/blob/764738dfcb16cd98b0987d443f59d5baa9d3c332/tests/test_routes.py#L12-L36\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1666#issuecomment-1073039670_\r\n\r\nPart of:\r\n- #1660", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1667/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1175690070, "node_id": "I_kwDOBm6k_c5GE5tW", "number": 1676, "title": "Reconsider ensure_permissions() logic, can it be less confusing?", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-03-21T17:14:57Z", "updated_at": "2022-12-02T01:23:40Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Updated documentation: https://github.com/simonw/datasette/blob/e627510b760198ccedba9e5af47a771e847785c9/docs/internals.rst#await-ensure_permissionsactor-permissions\r\n>\r\n>> This method allows multiple permissions to be checked at once. It raises a `datasette.Forbidden` exception if any of the checks are denied before one of them is explicitly granted.\r\n>> \r\n>> This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if either one of the following checks returns `True` or not a single one of them returns `False`:\r\n>\r\n> That's pretty hard to understand! I'm going to open a separate issue to reconsider if this is a useful enough abstraction given how confusing it is.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1675#issuecomment-1074177827_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1676/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1181432624, "node_id": "I_kwDOBm6k_c5Gazsw", "number": 1688, "title": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-03-26T01:17:44Z", "updated_at": "2022-03-27T01:01:14Z", "closed_at": "2022-03-26T21:34:47Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm trying to make a small plugin that depends on static assets, by following the guide [here](https://docs.datasette.io/en/stable/writing_plugins.html#writing-one-off-plugins). I made a `plugins/` directory with `datasette_nteract_data_explorer.py`. \r\n\r\nI am trying to follow the example of `datasette_vega`, and serve static assets.
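\r\n\r\nFor reference, my rough understanding of the packaged layout that `datasette_vega` relies on (this may not be exact):\r\n\r\n```\r\ndatasette_vega/\r\n    __init__.py   # declares extra_js_urls / extra_css_urls hooks\r\n    static/       # served at /-/static-plugins/datasette_vega/ once installed\r\n```\r\n\r\n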
I created a `statics/` directory within `plugins/` to serve my JS and CSS.\r\n\r\nhttps://github.com/simonw/datasette-vega/blob/00de059ab1ef77394ba9f9547abfacf966c479c4/datasette_vega/__init__.py#L13\r\n\r\nUnfortunately, datasette doesn't seem to be able to find my assets.\r\n\r\nInput:\r\n\r\n```bash\r\ndatasette ~/Library/Safari/History.db --plugins-dir=plugins/\r\n```\r\n![Image 2022-03-25 at 9 18 17 PM](https://user-images.githubusercontent.com/9020979/160218979-a3ff474b-5255-4a76-85d1-6f90ab2e3b44.jpg)\r\n\r\nOutput:\r\n\r\n![Image 2022-03-25 at 9 11 00 PM](https://user-images.githubusercontent.com/9020979/160218733-ca5144cf-f23f-43d8-a8d3-e3a871e57f3a.jpg)\r\n\r\n\r\n\r\n\r\nI suspect this issue might go away if I move away from \"one-off\" plugin mode, but it's been a while since I created a new python package so I'm not sure how much work there is to go between \"one off\" and \"packaged for PyPI\". I'd like to try to avoid needing to repackage a new `tar.gz` file and/or reinstall my library repeatedly when developing new python code.\r\n\r\n1. Is there a way to serve static assets when using the `plugins/` directory method instead of installing plugins as a new python package?\r\n2. If not, is there a way I can work on developing a plugin without creating and repackaging tar.gz files after every change, or is that the recommended path? \r\n\r\nThanks for your help!\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1688/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1202227104, "node_id": "I_kwDOBm6k_c5HqIeg", "number": 1712, "title": "Make \"<Binary: 2,427,344 bytes>\" easier to read", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-04-12T18:17:07Z", "updated_at": "2022-04-12T19:12:22Z", "closed_at": "2022-04-12T18:44:20Z", "author_association": "OWNER", "pull_request": null, "body": "`Binary: 2,427,344 bytes` would be nicer - even better, include a tooltip showing that size translated using this function: https://github.com/simonw/datasette/blob/138e4d9a53e3982137294ba383303c3a848cfca4/datasette/utils/__init__.py#L837-L846\r\n\r\n![CleanShot 2022-04-12 at 11 15 04@2x](https://user-images.githubusercontent.com/9599/163027324-b0b6092e-6e11-438b-8077-789025d0bb37.png)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1712/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1223241647, "node_id": "I_kwDOBm6k_c5I6S-v", "number": 1734, "title": "Remove python-baseconv dependency", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-05-02T19:08:37Z", "updated_at": "2022-05-02T23:25:49Z", "closed_at": "2022-05-02T19:39:20Z", "author_association": "OWNER", "pull_request": null, "body": "> I was going to vendor `baseconv.py`, but then I reconsidered - what if there are plugins out there that expect
`import baseconv` to work because they have depended on Datasette?\r\n>\r\n> I used https://cs.github.com/ and as far as I can tell there aren't any!\r\n>\r\n> So I'm going to remove that dependency and work out a smarter way to do this - probably by providing a utility function within Datasette itself.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1733#issuecomment-1115258737_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1734/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1214859703, "node_id": "I_kwDOBm6k_c5IaUm3", "number": 1719, "title": "Refactor `RowView` and remove `RowTableShared`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-04-25T18:06:24Z", "updated_at": "2022-12-01T21:15:19Z", "closed_at": "2022-04-25T18:33:44Z", "author_association": "OWNER", "pull_request": null, "body": "> The `RowTableShared` class is making this a whole lot more complicated.\r\n> \r\n> I'm going to split the `RowView` view out into an entirely separate `views/row.py` module.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1715#issuecomment-1108875068_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1719/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1216619276, "node_id": "I_kwDOBm6k_c5IhCMM", "number": 1724, "title": "?_trace=1 doesn't work on Global Power Plants demo", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-04-27T00:15:02Z", "updated_at": "2022-04-27T06:15:14Z", "closed_at": "2022-04-27T00:18:30Z", "author_association": "OWNER", "pull_request": null, "body": "https://global-power-plants.datasettes.com/global-power-plants/global-power-plants?_trace=1 is not showing the trace JSON at the bottom of the page.\r\n\r\nConfirmed that `trace_debug` is `true` on https://global-power-plants.datasettes.com/-/settings\r\n\r\nPossibly related:\r\n\r\n- https://github.com/simonw/datasette-total-page-time/issues/1", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1724/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1362363685, "node_id": "I_kwDOBm6k_c5RNAUl", "number": 1800, "title": "Remove upper bound dependencies as a default policy", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-09-05T18:23:45Z", "updated_at": "2022-09-05T18:39:52Z", "closed_at": "2022-09-05T18:35:41Z", "author_association": "OWNER", 
"pull_request": null, "body": "https://iscinumpy.dev/post/bound-version-constraints/ has convinced me not to use upper bound dependencies unless I'm certain they are needed.\r\n\r\nRelevant PR:\r\n- https://github.com/simonw/datasette/pull/1799\r\n\r\nAlso:\r\n\r\nhttps://github.com/simonw/datasette/blob/ba35105eee2d3ba620e4f230028a02b2e2571df2/setup.py#L45-L46\r\n\r\nhttps://github.com/simonw/datasette/blob/ba35105eee2d3ba620e4f230028a02b2e2571df2/setup.py#L48-L49\r\n\r\nhttps://github.com/simonw/datasette/blob/ba35105eee2d3ba620e4f230028a02b2e2571df2/setup.py#L51-L55\r\n\r\nhttps://github.com/simonw/datasette/blob/ba35105eee2d3ba620e4f230028a02b2e2571df2/setup.py#L57-L59\r\n\r\nhttps://github.com/simonw/datasette/blob/ba35105eee2d3ba620e4f230028a02b2e2571df2/setup.py#L75-L78\r\n\r\nhttps://github.com/simonw/datasette/blob/ba35105eee2d3ba620e4f230028a02b2e2571df2/setup.py#L81-L82\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1800/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1385026210, "node_id": "I_kwDOBm6k_c5SjdKi", "number": 1819, "title": "Preserve query on timeout", "user": {"value": 2182, "label": "danp"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-09-25T13:32:31Z", "updated_at": "2022-09-26T23:16:15Z", "closed_at": "2022-09-26T23:06:06Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If a query hits the timeout it shows a message like:\r\n\r\n> SQL query took too long. The time limit is controlled by the [sql_time_limit_ms](https://docs.datasette.io/en/stable/settings.html#sql-time-limit-ms) configuration option.\r\n\r\nBut the query is lost. Hitting the browser back button shows the query _before_ the one that errored.\r\n\r\nIt would be nice if the query that errored was preserved for more tweaking. 
This would make it similar to how \"invalid syntax\" works since #1346 / #619.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1819/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1386854246, "node_id": "I_kwDOBm6k_c5Sqbdm", "number": 1822, "title": "Switch to keyword-only arguments for a bunch of internal methods", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-09-26T23:20:38Z", "updated_at": "2022-09-27T00:44:04Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is a good idea, and one that needs to happen before Datasette 1.0:\r\n\r\n> While you are adding features, would you be future-proofing your APIs if you switched over some arguments over to keyword-only arguments or would that be too disruptive?\r\n>\r\n> Thinking out loud:\r\n>\r\n> ```\r\n> async def render_template( \r\n> self, templates, *, context=None, plugin_context=None, request=None, view_name=None \r\n> ): \r\n> ```\r\n_Originally posted by @jefftriplett in https://github.com/simonw/datasette/issues/1817#issuecomment-1256781274_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1822/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1388631785, "node_id": "I_kwDOBm6k_c5SxNbp", "number": 1826, "title": "render_cell documentation example doesn't match the method signature", "user": {"value": 66709385, "label": "pjamargh"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-09-28T02:37:59Z", "updated_at": "2022-09-28T04:30:28Z", "closed_at": "2022-09-28T04:05:16Z", "author_association": "NONE", "pull_request": null, "body": "Open Datasette stable doc at https://docs.datasette.io/en/stable/plugin_hooks.html?highlight=render_cell#render-cell-row-value-column-table-database-datasette\r\n\r\nrender_cell plugin hook method signature is `render_cell(row, value, column, table, database, datasette)`, the example shown inline uses `render_cell(value)`.\r\n\r\n![image](https://user-images.githubusercontent.com/66709385/192674691-34265b81-6cdd-41d2-8424-aa12f8bc8c94.png)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1826/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1422973111, "node_id": "I_kwDOBm6k_c5U0Ni3", "number": 1854, "title": "Flaky test: test_serve_localhost_http", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-10-25T19:37:35Z", "updated_at": "2022-10-25T19:53:02Z", "closed_at": 
"2022-10-25T19:53:02Z", "author_association": "OWNER", "pull_request": null, "body": "Failing on Python 3.10 at the moment: https://github.com/simonw/datasette/actions/runs/3323629947/jobs/5494340302", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1854/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1423336122, "node_id": "I_kwDOBm6k_c5U1mK6", "number": 1856, "title": "allow_signed_tokens setting for disabling API signed token mechanism", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 8658075, "label": "Datasette 1.0a0"}, "comments": 3, "created_at": "2022-10-26T02:20:55Z", "updated_at": "2022-11-15T19:57:05Z", "closed_at": "2022-10-26T02:58:35Z", "author_association": "OWNER", "pull_request": null, "body": "Had some design thoughts here: https://github.com/simonw/datasette/issues/1852#issuecomment-1291272280\r\n\r\nI liked this option the most:\r\n\r\n --setting allow_create_tokens off", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1856/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1423369494, "node_id": "I_kwDOBm6k_c5U1uUW", "number": 1859, "title": "datasette create-token CLI command", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 8658075, "label": "Datasette 1.0a0"}, "comments": 3, "created_at": "2022-10-26T03:12:59Z", "updated_at": "2022-11-15T19:59:00Z", "closed_at": "2022-10-26T04:31:39Z", "author_association": "OWNER", "pull_request": null, "body": "The CLI equivalent of the `/-/create-token` page.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1859/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1410305897, "node_id": "I_kwDOBm6k_c5UD49p", "number": 1845, "title": "Reconsider the Datasette first-run experience", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-10-15T22:21:31Z", "updated_at": "2022-10-16T08:54:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Had a really interesting conversation today about how hard it is to get from \"I installed Datasette\" to \"I've done something useful with it\": https://news.ycombinator.com/item?id=33216789#33218590\r\n\r\nSpending some time focusing on that first-run experience feels very worthwhile.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1845/reactions\", \"total_count\": 0, 
\"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1420090659, "node_id": "I_kwDOBm6k_c5UpN0j", "number": 1848, "title": "Private database page should show padlock on every table", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-10-24T02:28:38Z", "updated_at": "2022-10-24T02:50:29Z", "closed_at": "2022-10-24T02:42:34Z", "author_association": "OWNER", "pull_request": null, "body": "Following:\r\n- #1829\r\n\r\nhttps://latest.datasette.io/_internal looks like this:\r\n\r\n\"image\"\r\n\r\nBut those queries and tables are private too, and should also show the padlock icon.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1848/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1426253476, "node_id": "I_kwDOBm6k_c5VAuak", "number": 1869, "title": "Release 0.63", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-10-27T20:53:01Z", "updated_at": "2022-10-27T22:24:38Z", "closed_at": "2022-10-27T22:11:33Z", "author_association": "OWNER", "pull_request": null, "body": "Most of the release notes are already written:\r\n- https://github.com/simonw/datasette/releases/tag/0.63a0\r\n- https://github.com/simonw/datasette/releases/tag/0.63a1", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1869/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1428560020, "node_id": "I_kwDOBm6k_c5VJhiU", "number": 1872, "title": "SITE-BUSTING ERROR: \"render_template() called before await ds.invoke_startup()\"", "user": {"value": 192568, "label": "mroswell"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-10-30T02:28:39Z", "updated_at": "2022-10-30T06:26:01Z", "closed_at": "2022-10-30T06:26:01Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "1. My https://list.saferdisinfectants.org/disinfectants/listN page (linked from https://SaferDisinfectants.org ) has been running beautifully for a year and a half, including a GitHub Actions workflow that's been routinely updating the database.\r\n\r\n2. I received a recent report that the list page is down. I don't know when it went down, but the content is replaced with: \"render_template() called before await ds.invoke_startup()\"\r\n\r\n3. The local datasette repo runs without incident.\r\n\r\n4. The site is hosted on vercel, linked to my github repo. Perhaps some vercel changes were made, but not by anyone on our side. 
Here is a screenshot of the current project settings:\r\n\r\n(screenshot)\r\n\r\nHere is a screenshot of the latest deployment status:\r\n(screenshot)\r\n\r\nThis is my repository:\r\nhttps://github.com/mroswell/list-N\r\n(I notice: datasette==0.59 in my requirements.txt file)\r\n\r\nBecause it's been a long while since I actively worked on this or any other datasette project, I forget a lot of what I knew at one point. Perhaps some configuration file could be missing? Or perhaps I just need to know the right incantation to add to that vercel settings page. \r\n\r\nHelp is welcome as the nonprofit org is soon hosting its annual conference, and we'd love to have the page working again.\r\n\r\n\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1872/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1450312343, "node_id": "I_kwDOBm6k_c5WcgKX", "number": 1892, "title": "Merge 1.0-dev branch back to main", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 8658075, "label": "Datasette 1.0a0"}, "comments": 3, "created_at": "2022-11-15T20:04:25Z", "updated_at": "2022-11-29T19:40:23Z", "closed_at": "2022-11-29T19:40:23Z", "author_association": "OWNER", "pull_request": null, "body": "I'm committed enough to the 1.0 work now that I'm ready for the `main` branch to reflect that instead.\r\n\r\nIf I need to make any dot-releases against 0.63 I can do those from a branch.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1892/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1470320227, "node_id": "I_kwDOBm6k_c5Xo05j", "number": 1923, "title": "latest.datasette.io Cloud Run deploys failing", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-11-30T22:49:34Z", "updated_at": "2022-11-30T23:04:56Z", "closed_at": "2022-11-30T23:04:56Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/actions/runs/3587402085/jobs/6038106719v\r\n\r\n```\r\nWarning: \"service_account_key\" has been deprecated. Please switch to using google-github-actions/auth which supports both Workload Identity Federation and Service Account Key JSON authentication. For more details, see https://github.com/google-github-actions/setup-gcloud#authorization\r\nError: google-github-actions/setup-gcloud failed with: failed to execute command `gcloud --quiet auth activate-service-account *** --key-file -`: /opt/hostedtoolcache/gcloud/275.0.0/x64/lib/googlecloudsdk/core/console/console_io.py:544: SyntaxWarning: \"is\" with a literal.
Did you mean \"==\"?\r\n if answer is None or (answer is '' and default is not None):\r\nERROR: gcloud failed to load: module 'collections' has no attribute 'MutableMapping'\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1923/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1525815985, "node_id": "I_kwDOBm6k_c5a8hqx", "number": 1983, "title": "Make CustomJSONEncoder a documented public API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-01-09T15:27:05Z", "updated_at": "2023-01-09T15:35:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It's used by `datasette-geojson` here: https://github.com/eyeseast/datasette-geojson/commit/902bf135a5a33a0dc8264673d00a59a67cb05152", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1983/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1515185383, "node_id": "I_kwDOBm6k_c5aT-Tn", "number": 1971, "title": "Upgrade for Sphinx 6.0 (once Furo has support for it)", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-12-31T19:04:35Z", "updated_at": "2023-01-10T02:02:34Z", "closed_at": "2023-01-10T02:02:34Z", "author_association": "OWNER", "pull_request": null, "body": "A deployment of #1967 to ReadTheDocs just failed like this: https://readthedocs.org/projects/datasette/builds/19045460/\r\n\r\n```\r\nRunning Sphinx v6.0.0\r\nmaking output directory... done\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nbuilding [html]: targets for 28 source files that are out of date\r\nupdating environment: [new config] 28 added, 0 changed, 0 removed\r\nreading sources... [ 3%] authentication\r\nreading sources... [ 7%] binary_data\r\nreading sources... 
[ 10%] changelog\r\n\r\nTraceback (most recent call last):\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 299, in next_line\r\n self.line = self.input_lines[self.line_offset]\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 1136, in __getitem__\r\n return self.data[i]\r\nIndexError: list index out of range\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 226, in run\r\n self.next_line()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 302, in next_line\r\n raise EOFError\r\nEOFError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/cmd/build.py\", line 281, in build_main\r\n app.build(args.force_all, args.filenames)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/application.py\", line 344, in build\r\n self.builder.build_update()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py\", line 310, in build_update\r\n self.build(to_build,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py\", line 326, in build\r\n updated_docnames = set(self.read())\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py\", line 433, in read\r\n self._read_serial(docnames)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py\", line 454, in _read_serial\r\n self.read_doc(docname)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py\", line 510, in read_doc\r\n publisher.publish()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/core.py\", line 224, in publish\r\n self.document = self.reader.read(self.source, self.parser,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/io.py\", line 103, in read\r\n self.parse()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/readers/__init__.py\", line 76, in parse\r\n self.parser.parse(self.input, document)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/parsers.py\", line 78, in parse\r\n self.statemachine.run(inputlines, document, inliner=self.inliner)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 169, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset,\r\n File 
\"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 3024, in text\r\n self.section(title.lstrip(), source, style, lineno + 1, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 325, in section\r\n self.new_subsection(title, lineno, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 391, in new_subsection\r\n newabsoffset = self.nested_parse(\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 279, in nested_parse\r\n state_machine.run(block, input_offset, memo=self.memo,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 195, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 2785, in underline\r\n self.section(title, source, style, lineno - 1, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 325, in section\r\n self.new_subsection(title, lineno, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 391, in new_subsection\r\n newabsoffset = self.nested_parse(\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 279, in nested_parse\r\n state_machine.run(block, input_offset, memo=self.memo,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 195, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n File 
\"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 1273, in bullet\r\n i, blank_finish = self.list_item(match.end())\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 1295, in list_item\r\n self.nested_parse(indented, input_offset=line_offset,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 279, in nested_parse\r\n state_machine.run(block, input_offset, memo=self.memo,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 195, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py\", line 239, in run\r\n result = state.eof(context)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 2725, in eof\r\n self.blank(None, context, None)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 2716, in blank\r\n paragraph, literalnext = self.paragraph(\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 416, in paragraph\r\n textnodes, messages = self.inline_text(text, lineno)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 425, in inline_text\r\n nodes, messages = self.inliner.parse(text, lineno,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 649, in parse\r\n before, inlines, remaining, sysmessages = method(self, match,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 792, in interpreted_or_phrase_ref\r\n nodelist, messages = self.interpreted(rawsource, escaped, role,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py\", line 889, in interpreted\r\n nodes, messages2 = role_fn(role, rawsource, text, lineno, self)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/ext/extlinks.py\", line 101, in role\r\n title = caption % part\r\nTypeError: not all arguments converted during string formatting\r\n\r\nException occurred:\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/ext/extlinks.py\", line 101, in role\r\n title = caption % part\r\nTypeError: not all arguments converted during string formatting\r\nThe full traceback has been saved in /tmp/sphinx-err-kq7ylgqo.log, if you want to report the issue to the developers.\r\nPlease also report this if it was a user error, so that a better error message can be provided next time.\r\nA bug report can be filed in the tracker at . Thanks! 
\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1971/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1531991339, "node_id": "I_kwDOBm6k_c5bUFUr", "number": 1989, "title": "Suggestion: Hiding columns", "user": {"value": 116795, "label": "pax"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-01-13T09:33:32Z", "updated_at": "2023-03-31T06:18:05Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "As there's the possibility of [hiding tables](https://docs.datasette.io/en/stable/metadata.html#hiding-tables) - I've run into the **need of hiding specific columns** - data that's either not relevant for public or can't be shown due to privacy reasons. ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1989/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1615891776, "node_id": "I_kwDOBm6k_c5gUI1A", "number": 2037, "title": "Test failure: FAILED tests/test_cli.py::test_install_requirements - FileNotFoundError", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-03-08T20:30:06Z", "updated_at": "2023-03-09T22:33:39Z", "closed_at": "2023-03-09T22:33:39Z", "author_association": "OWNER", "pull_request": null, "body": "> FAILED tests/test_cli.py::test_install_requirements - FileNotFoundError: [Errno 2] No such file or directory\r\n\r\nFrom https://github.com/simonw/datasette/actions/runs/4348548218/jobs/7597208191\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n__________________________ test_install_requirements ___________________________\r\n\r\nrun_module = \r\n\r\n @mock.patch(\"datasette.cli.run_module\")\r\n def test_install_requirements(run_module):\r\n runner = CliRunner()\r\n> with runner.isolated_filesystem():\r\n\r\n/home/runner/work/datasette/datasette/tests/test_cli.py:184: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/contextlib.py:119: in __enter__\r\n return next(self.gen)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = , temp_dir = None\r\n\r\n @contextlib.contextmanager\r\n def isolated_filesystem(\r\n self, temp_dir: t.Optional[t.Union[str, os.PathLike]] = None\r\n ) -> t.Iterator[str]:\r\n \"\"\"A context manager that creates a temporary directory and\r\n changes the current working directory to it. This isolates tests\r\n that affect the contents of the CWD to prevent them from\r\n interfering with each other.\r\n \r\n :param temp_dir: Create the temporary directory under this\r\n directory. If given, the created directory is not removed\r\n when exiting.\r\n \r\n .. 
versionchanged:: 8.0\r\n Added the ``temp_dir`` parameter.\r\n \"\"\"\r\n> cwd = os.getcwd()\r\nE FileNotFoundError: [Errno 2] No such file or directory\r\n\r\n/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/click/testing.py:466: FileNotFoundError\r\n```\r\nNot sure why it only affected the \"[Calculate test coverage](https://github.com/simonw/datasette/actions/workflows/test-coverage.yml)\" one.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2037/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1781022369, "node_id": "I_kwDOBm6k_c5qKD6h", "number": 2091, "title": "Drop support for Python 3.7", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-06-29T15:06:38Z", "updated_at": "2023-08-23T18:18:18Z", "closed_at": "2023-08-23T18:18:18Z", "author_association": "OWNER", "pull_request": null, "body": "It's EOL now, as of 2023-06-27 (two days ago): https://devguide.python.org/versions/\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2091/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1811824307, "node_id": "I_kwDOBm6k_c5r_j6z", "number": 2105, "title": "When reverse proxying datasette with nginx a URL element gets erroneously added", "user": {"value": 2235371, "label": "aki-k"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-07-19T12:16:53Z", "updated_at": "2023-07-21T21:17:09Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I use this nginx config:\r\n```\r\n location /datasette-llm {\r\n return 302 /datasette-llm/;\r\n }\r\n\r\n location /datasette-llm/ {\r\n proxy_set_header Upgrade $http_upgrade;\r\n proxy_set_header Connection \"Upgrade\";\r\n proxy_http_version 1.1;\r\n proxy_set_header X-Real-IP $remote_addr;\r\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\r\n proxy_set_header X-Forwarded-Proto https;\r\n proxy_set_header X-Forwarded-Host $http_host;\r\n proxy_set_header Host $host;\r\n proxy_max_temp_file_size 0;\r\n proxy_pass http://127.0.0.1:8001/datasette-llm/;\r\n proxy_redirect http:// https://;\r\n proxy_buffering off;\r\n proxy_request_buffering off;\r\n proxy_set_header Origin '';\r\n client_max_body_size 0;\r\n auth_basic \"datasette-llm\";\r\n auth_basic_user_file /etc/nginx/custom-userdb;\r\n }\r\n```\r\nThen I start datasette with this command:\r\n```\r\ndatasette serve --setting base_url /datasette-llm/ $(llm logs path)\r\n```\r\nEverything else works right, except the links in \"This data as json, CSV\".\r\nThey get an extra URL element \"datasette-llm\" like this:\r\n\r\nhttps://192.168.1.3:5432/datasette-llm/datasette-llm/logs.json?sql=select+*+from+_llm_migrations\r\n\r\nhttps://192.168.1.3:5432/datasette-llm/datasette-llm/logs.csv?sql=select+*+from+_llm_migrations&_size=max\r\n\r\nWhen I remove that extra \"datasette-llm\" from 
the URL, those links work too.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2105/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1816857442, "node_id": "I_kwDOBm6k_c5sSwti", "number": 2106, "title": "`datasette install -e` option", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-07-22T18:33:42Z", "updated_at": "2023-07-26T18:28:33Z", "closed_at": "2023-07-22T18:42:54Z", "author_association": "OWNER", "pull_request": null, "body": "As seen in LLM and now in `sqlite-utils` too:\r\n- https://github.com/simonw/sqlite-utils/issues/570\r\n\r\nUseful for developing plugins, see tutorial at https://llm.datasette.io/en/stable/plugins/tutorial-model-plugin.html", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2106/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1822939274, "node_id": "I_kwDOBm6k_c5sp9iK", "number": 2113, "title": "Implement and document extras for the new query view page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 3, "created_at": "2023-07-26T18:24:01Z", "updated_at": "2023-08-09T17:35:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "- #2109 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2113/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1822940263, "node_id": "I_kwDOBm6k_c5sp9xn", "number": 2114, "title": "Implement canned queries against new query JSON work", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 9700784, "label": "Datasette 1.0a3"}, "comments": 3, "created_at": "2023-07-26T18:24:50Z", "updated_at": "2023-08-09T15:26:58Z", "closed_at": "2023-08-09T15:26:57Z", "author_association": "OWNER", "pull_request": null, "body": "- #2109 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2114/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1822949756, "node_id": "I_kwDOBm6k_c5sqAF8", "number": 2116, "title": "Turn DatabaseDownload into an async view function", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 9700784, "label": "Datasette 1.0a3"}, "comments": 3, 
"created_at": "2023-07-26T18:31:59Z", "updated_at": "2023-07-26T18:44:00Z", "closed_at": "2023-07-26T18:44:00Z", "author_association": "OWNER", "pull_request": null, "body": "A minor refactor, but it is a good starting point for this new branch. Refs:\r\n- #2109", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2116/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1823393475, "node_id": "I_kwDOBm6k_c5srsbD", "number": 2119, "title": "database color shows only on index page, not other pages", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2023-07-27T00:19:39Z", "updated_at": "2023-08-11T05:25:45Z", "closed_at": "2023-08-11T05:16:24Z", "author_association": "OWNER", "pull_request": null, "body": "I think this has been a bug for a long time.\r\n\r\nhttps://latest.datasette.io/ currently shows:\r\n\r\n\"image\"\r\n\r\nThose colors are based on a hash of the database name. But when you click through to https://latest.datasette.io/fixtures\r\n\r\n\"image\"\r\n\r\nIt's red on all sub-pages too.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2119/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1843600087, "node_id": "I_kwDOBm6k_c5t4xrX", "number": 2135, "title": "Release notes for 1.0a3", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 9700784, "label": "Datasette 1.0a3"}, "comments": 3, "created_at": "2023-08-09T16:09:26Z", "updated_at": "2023-08-09T19:17:07Z", "closed_at": "2023-08-09T19:17:06Z", "author_association": "OWNER", "pull_request": null, "body": "118 commits! 
https://github.com/simonw/datasette/compare/1.0a2...26be9f0445b753fb84c802c356b0791a72269f25", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2135/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1838266862, "node_id": "I_kwDOBm6k_c5tkbnu", "number": 2126, "title": "Permissions in metadata.yml / metadata.json", "user": {"value": 36199671, "label": "ctsrc"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-08-06T16:24:10Z", "updated_at": "2023-08-11T05:52:30Z", "closed_at": "2023-08-11T05:52:29Z", "author_association": "NONE", "pull_request": null, "body": "https://docs.datasette.io/en/latest/authentication.html#other-permissions-in-metadata says the following:\r\n\r\n> For all other permissions, you can use one or more \"permissions\" blocks in your metadata.\r\n\r\n> To grant access to the permissions debug tool to all signed in users you can grant permissions-debug to any actor with an id matching the wildcard * by adding this at the root of your metadata:\r\n\r\n```yaml\r\npermissions:\r\n debug-menu:\r\n id: '*'\r\n```\r\n\r\nI tried this.\r\n\r\nMy `metadata.yml` file looks like:\r\n\r\n```yaml\r\npermissions:\r\n debug-menu:\r\n id: '*'\r\n permissions-debug:\r\n id: '*'\r\nplugins:\r\n datasette-auth-passwords:\r\n myuser_password_hash:\r\n $env: \"PASSWORD_HASH_MYUSER\"\r\n```\r\n\r\nAnd then I run\r\n\r\n```zsh\r\ndatasette -m metadata.yml tiddlywiki.db --root\r\n```\r\n\r\nAnd I open a session for the \"root\" user of datasette with the link given.\r\n\r\nI open a private browser session and log in as \"myuser\" from http://127.0.0.1:8001/-/login\r\n\r\nThen I check http://127.0.0.1:8001/-/actor which confirms that I am logged in as the \"myuser\" actor\r\n\r\n```json\r\n{\r\n \"actor\": {\r\n \"id\": \"myuser\"\r\n }\r\n}\r\n```\r\n\r\nIn the session where I am logged in as \"myuser\" I then try to go to http://127.0.0.1:8001/-/permissions\r\n\r\nBut all I get there as the logged in user \"myuser\" is\r\n\r\n> Forbidden\r\n>\r\n> Permission denied\r\n\r\nAnd then if I check the http://127.0.0.1:8001/-/permissions as the datasette \"root\" user from another browser session, I see:\r\n\r\n> permissions-debug checked at 2023-08-06T16:22:58.997841 \u2717 (used default)\r\n>\r\n> Actor: {\"id\": \"myuser\"}\r\n\r\nIt seems that in spite of having tried to give the `permissions-debug` permission to the \"myuser\" user in my `metadata.yml` file, datasette does not agree that \"myuser\" has permission `permissions-debug`.\r\n\r\nWhat do I need to do differently so that my \"myuser\" user is able to access http://127.0.0.1:8001/-/permissions ?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2126/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1838469176, "node_id": "I_kwDOCGYnMM5tlNA4", "number": 2127, "title": "Context base class to support documenting the context", "user": {"value": 9599, "label": "simonw"}, "state": 
"open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2023-08-07T00:01:02Z", "updated_at": "2023-08-10T01:30:25Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This idea first came up here:\r\n- https://github.com/simonw/datasette/issues/2112#issuecomment-1652751140\r\n\r\nIf `datasette.render_template(...)` takes an optional `Context` subclass as an alternative to a context dictionary, I could then use dataclasses to define the context made available to specific templates - which then gives me something I can use to help document what they are.\r\n\r\nAlso refs:\r\n- https://github.com/simonw/datasette/issues/1510", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2127/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1865869205, "node_id": "I_kwDOBm6k_c5vNueV", "number": 2157, "title": "Proposal: Make the `_internal` database persistent, customizable, and hidden", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-08-24T20:54:29Z", "updated_at": "2023-08-31T02:45:56Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "The current `_internal` database is used by Datasette core to cache info about databases/tables/columns/foreign keys of databases in a Datasette instance. It's a temporary database created at startup, that can only be seen by the root user. See an [example `_internal` DB here](https://latest.datasette.io/_internal), after [logging in as root](https://latest.datasette.io/login-as-root).\r\n\r\nThe current `_internal` database has a few rough edges:\r\n\r\n- It's part of `datasette.databases`, so many plugins have to specifically exclude `_internal` from their queries [examples here](https://github.com/search?q=datasette+hookimpl+%22_internal%22+language%3APython+-path%3Adatasette%2F&ref=opensearch&type=code)\r\n- It's only used by Datasette core and can't be used by plugins or 3rd parties\r\n- It's created from scratch at startup and stored in memory. Why is fine, the performance is great, but persistent storage would be nice.\r\n\r\nAdditionally, it would be really nice if plugins could use this `_internal` database to store their own configuration, secrets, and settings. For example:\r\n\r\n- `datasette-auth-tokens` [creates a `_datasette_auth_tokens` table](https://github.com/simonw/datasette-auth-tokens/blob/main/datasette_auth_tokens/__init__.py#L15) to store auth token metadata. 
This could be moved into the `_internal` database to avoid writing to the guest database\r\n- `datasette-socrata` [creates a `socrata_imports`](https://github.com/simonw/datasette-socrata/blob/1409aa9b4d2fc3aff286b52e73af33b5786d56d0/datasette_socrata/__init__.py#L190-L198) table, which also can be in `_internal`\r\n- `datasette-upload-csvs` [creates a `_csv_progress_`](https://github.com/simonw/datasette-upload-csvs/blob/main/datasette_upload_csvs/__init__.py#L154) table, which can be in `_internal`\r\n- `datasette-write-ui` wants to have the ability for users to toggle whether a table appears editable, which can be either in `datasette.yaml` or on-the-fly by storing config in `_internal`\r\n\r\n\r\nIn general, these are specific features that Datasette plugins would have access to if there was a central internal database they could read/write to:\r\n\r\n- **Dynamic configuration**. Changing the `datasette.yaml` file works, but restarting the server every time is tedious. Plugins can define their own configuration table in `_internal`, and could read/write to it to store configuration based on user actions (cell menu click, API access, etc.)\r\n- **Caching**. If a plugin or Datasette Core needs to cache some expensive computation, they can store it inside `_internal` (possibly as a temporary table) instead of managing their own caching solution.\r\n- **Audit logs**. If a plugin performs some sensitive operations, they can log usage info to `_internal` for others to audit later. \r\n- **Long running process status**. Many plugins (`datasette-upload-csvs`, `datasette-litestream`, `datasette-socrata`) perform tasks that run for a really long time, and want to give continuous status updates to the user. They can store this info inside `_internal`\r\n- **Safer authentication**. Passwords and authentication plugins usually store credentials/hashed secrets in configuration files or environment variables, which can be difficult to handle. Now, they can store them in `_internal` \r\n\r\n## Proposal\r\n\r\n- We remove `_internal` from the [`datasette.databases`](https://docs.datasette.io/en/latest/internals.html#databases) property.\r\n- We add a new `datasette.get_internal_db()` method that returns the `_internal` database, for plugins to use\r\n- We add a new `--internal internal.db` flag. If provided, then the `_internal` DB will be sourced from that file, and further updates will be persisted to that file (instead of an in-memory database)\r\n- When creating internal.db, create a new `_datasette_internal` table to mark it as a \"datasette internal database\"\r\n- In `datasette serve`, we check for the existence of the `_datasette_internal` table. If it exists, we assume the user provided that file in error and raise an error. This is to limit the chance that someone accidentally publishes their internal database to the internet. We could optionally add a `--unsafe-allow-internal` flag (or database plugin) that allows someone to do this if they really want to.\r\n\r\n\r\n## New features unlocked with this\r\n\r\nThese features don't really need a standardized `_internal` table per se (plugins could currently configure their own long-term storage features if they really wanted to), but it would make it much simpler to create these kinds of features with a persistent application database.\r\n\r\n- **`datasette-comments`** : A plugin for commenting on rows or specific values in a database. 
Comment contents + threads + email notification info can be stored in `_internal`\r\n- **Bookmarks**: bookmarked SQL queries could be stored in `_internal`, as could a URL link shortener\r\n- **Webhooks**: If a plugin wants to either consume a webhook or create a new one, they can store hashed credentials/API endpoints in `_internal`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2157/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1930008379, "node_id": "I_kwDOBm6k_c5zCZc7", "number": 2197, "title": "click-default-group-wheel dependency conflict", "user": {"value": 1176293, "label": "ar-jan"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-10-06T11:49:20Z", "updated_at": "2023-10-12T21:53:17Z", "closed_at": "2023-10-12T21:53:17Z", "author_association": "NONE", "pull_request": null, "body": "I upgraded my dependencies, then ran into this problem running `datasette inspect`:\r\n\r\n> env/lib/python3.9/site-packages/datasette/cli.py\", line 6, in \r\n> from click_default_group import DefaultGroup\r\n> ModuleNotFoundError: No module named 'click_default_group'\r\n\r\nTurns out the released version of datasette still depends on `click-default-group-wheel`, so `click-default-group` doesn't get installed/recognized:\r\n\r\n```\r\n$ virtualenv venv\r\n$ source venv/bin/activate\r\n$ pip install datasette\r\n$ pip list | grep click-default-group\r\nclick-default-group 1.2.4\r\nclick-default-group-wheel 1.2.3\r\n$ python -c \"from click_default_group import DefaultGroup\"\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\nModuleNotFoundError: No module named 'click_default_group'\r\n$ pip install --force-reinstall click-default-group\r\n...\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed.\r\nThis behaviour is the source of the following dependency conflicts.\r\ndatasette 0.64.4 requires click-default-group-wheel>=1.2.2, which is not installed.\r\nSuccessfully installed click-8.1.7 click-default-group-1.2.4\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2197/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1004613267, "node_id": "I_kwDOCGYnMM474S6T", "number": 328, "title": "Invalid JSON output when no rows", "user": {"value": 12752, "label": "gravis"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-09-22T18:37:26Z", "updated_at": "2021-09-22T20:21:34Z", "closed_at": "2021-09-22T20:20:18Z", "author_association": "NONE", "pull_request": null, "body": "`sqlite-utils query` generates JSON output with the result from the query:\r\n\r\n```json\r\n[{...},{...}]\r\n```\r\nIf no rows are returned by the query, I'm expecting an empty JSON array:\r\n\r\n```json\r\n[]\r\n```\r\n\r\nBut actually I'm getting an empty string. 
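A quick way to demonstrate this from Python (a minimal sketch; `my.db` is a hypothetical database path, and any query matching zero rows will do):\r\n\r\n```python\r\nimport json\r\nimport subprocess\r\n\r\n# Run a query that matches no rows and capture stdout\r\nresult = subprocess.run(\r\n    [\"sqlite-utils\", \"query\", \"my.db\", \"select 1 where 1 = 0\"],\r\n    capture_output=True,\r\n    text=True,\r\n)\r\nprint(repr(result.stdout))  # currently '' rather than '[]'\r\n# json.loads(result.stdout) raises JSONDecodeError on the empty output\r\n```\r\n\r\n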
To be consistent, the output should be `[]` when the request succeeds (return code == `0`).", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/328/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1071531082, "node_id": "I_kwDOCGYnMM4_3kRK", "number": 349, "title": "A way of creating indexes on newly created tables", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-05T18:56:12Z", "updated_at": "2021-12-07T01:04:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I'm writing code for https://github.com/simonw/git-history/issues/33 that creates a table inside a loop:\r\n\r\n```python\r\nitem_pk = db[item_table].lookup(\r\n {\"_item_id\": item_id},\r\n item_to_insert,\r\n column_order=(\"_id\", \"_item_id\"),\r\n pk=\"_id\",\r\n)\r\n```\r\nI need to look things up by `_item_id` on this table, which means I need an index on that column (the table can get very big).\r\n\r\nBut there's no mechanism in SQLite utils to detect if the table was created for the first time and add an index to it. And I don't want to run `CREATE INDEX IF NOT EXISTS` every time through the loop.\r\n\r\nThis should work like the `foreign_keys=` mechanism.\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/349/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1072435124, "node_id": "I_kwDOCGYnMM4_7A-0", "number": 350, "title": "Optional caching mechanism for table.lookup()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-06T17:54:25Z", "updated_at": "2021-12-06T17:56:57Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by work on `git-history` where I used this pattern:\r\n```python\r\n column_name_to_id = {}\r\n\r\n def column_id(column):\r\n if column not in column_name_to_id:\r\n id = db[\"columns\"].lookup(\r\n {\"namespace\": namespace_id, \"name\": column},\r\n foreign_keys=((\"namespace\", \"namespaces\", \"id\"),),\r\n )\r\n column_name_to_id[column] = id\r\n return column_name_to_id[column]\r\n```\r\nIf you're going to be doing a large number of `table.lookup(...)` calls and you know that no other script will be modifying the database at the same time you can presumably get a big speedup using a Python in-memory cache - maybe even a LRU one to avoid memory bloat.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/350/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1097087280, "node_id": "I_kwDOCGYnMM5BZDkw", "number": 368, 
"title": "Offer `python -m sqlite_utils` as an alternative to `sqlite-utils`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7558727, "label": "3.21"}, "comments": 3, "created_at": "2022-01-09T02:29:30Z", "updated_at": "2022-01-10T19:27:20Z", "closed_at": "2022-01-09T02:40:50Z", "author_association": "OWNER", "pull_request": null, "body": "> Add this to `sqlite_utils/cli.py`:\r\n>\r\n> ```python\r\n> if __name__ == \"__main__\":\r\n> cli()\r\n> ```\r\n> Now the tool can be run using `python -m sqlite_utils.cli --help`\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/364#issuecomment-1008214998_", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/368/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1097135732, "node_id": "I_kwDOCGYnMM5BZPZ0", "number": 373, "title": "List `--fmt` options in the docs ", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7558727, "label": "3.21"}, "comments": 3, "created_at": "2022-01-09T08:22:11Z", "updated_at": "2022-01-10T19:27:24Z", "closed_at": "2022-01-09T17:49:00Z", "author_association": "OWNER", "pull_request": null, "body": "https://sqlite-utils.datasette.io/en/stable/cli.html#table-formatted-output currently cheats and tells the user to run `--help` - can fix this using `cog`. ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/373/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1097251014, "node_id": "I_kwDOCGYnMM5BZrjG", "number": 375, "title": "`sqlite-utils bulk` command", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7558727, "label": "3.21"}, "comments": 3, "created_at": "2022-01-09T17:12:38Z", "updated_at": "2022-01-11T02:12:58Z", "closed_at": "2022-01-11T02:10:55Z", "author_association": "OWNER", "pull_request": null, "body": "The `.executemany()` method is a very efficient way to execute the same SQL query against a huge list of parameters.\r\n\r\n`sqlite-utils insert` supports a bunch of ways of loading a list of dictionaries - from CSV, TSV, JSON, newline JSON and more thanks to:\r\n- #361\r\n\r\nWhat if you could load a list of dictionaries and provide a SQL query with `:named` parameters that correspond to keys in those dictionaries instead?\r\n\r\nThis would need to be a new command - I thought about adding a `--sql` option to `insert` but that doesn't make sense as that command already requires a table name.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/375/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 
null, "state_reason": "completed"} {"id": 1123903919, "node_id": "I_kwDOCGYnMM5C_Wmv", "number": 397, "title": "Support IF NOT EXISTS for table creation", "user": {"value": 738408, "label": "rafguns"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-02-04T07:41:15Z", "updated_at": "2022-02-06T01:30:46Z", "closed_at": "2022-02-06T01:29:01Z", "author_association": "NONE", "pull_request": null, "body": "Currently, I have a bunch of code that looks like this:\r\n\r\n```python\r\nsubjects = db[\"subjects\"] if db[\"subjects\"].exists() else db[\"subjects\"].create({\r\n ...\r\n})\r\n```\r\nIt would be neat if sqlite-utils could simplify that by supporting `CREATE TABLE IF NOT EXISTS`, so that I'd be able to write, e.g.\r\n\r\n```python\r\nsubjects = db[\"subjects\"].create({...}, if_not_exists=True)\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/397/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1149661489, "node_id": "I_kwDOCGYnMM5EhnEx", "number": 409, "title": "`with db:` for transactions", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-02-24T19:22:06Z", "updated_at": "2022-10-01T03:42:50Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This can be a documented wrapper around `with db.conn:`.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/409/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1175744654, "node_id": "I_kwDOCGYnMM5GFHCO", "number": 417, "title": "insert fails on JSONL with whitespace", "user": {"value": 9954, "label": "blaine"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-03-21T17:58:14Z", "updated_at": "2022-03-25T21:19:06Z", "closed_at": "2022-03-25T21:17:13Z", "author_association": "NONE", "pull_request": null, "body": "Any JSON that is newline-delimited and has whitespace (newlines) between the start of a JSON object and an attribute fails due to a parse error.\r\n\r\ne.g. given the valid JSONL:\r\n\r\n```{\r\n \"attribute\": \"value\"\r\n}\r\n{\r\n \"attribute\": \"value2\"\r\n}\r\n```\r\n\r\nI would expect that `sqlite-utils insert --nl my.db mytable file.jsonl` would properly import the data into `mytable`. However, the following error is thrown instead:\r\n\r\n`json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)`\r\n\r\nIt makes sense that since the file is intended to be newline separated, the thing being parsed is \"{\" (which obviously fails), however the default newline-separated output of `jq` isn't compact. Using `jq -c` avoids this problem, but the fix is unintuitive and undocumented.\r\n\r\nProposed solutions:\r\n1. 
Default to a \"loose\" newline-separated parse; this could be implemented internally as [the equivalent of] a `jq -c` filter ahead of the insert step.\r\n2. Catch the JSONDecodeError (or pre-empt it in the case of a record === \"{\\n\") and give the user a \"it looks like your json isn't _actually_ newline-delimited; try running it through `jq -c` instead\" error message.\r\n\r\nIt might just have been too early in the morning when I was playing with this, but running pipes of data through sqlite-utils without the 'knack' of it led to some false starts.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/417/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1227571375, "node_id": "I_kwDOCGYnMM5JK0Cv", "number": 431, "title": "Allow making m2m relation of a table to itself", "user": {"value": 738408, "label": "rafguns"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-05-06T08:30:43Z", "updated_at": "2022-06-23T14:12:51Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I am building a database, in which one of the tables has a many-to-many relationship to itself. As far as I can see, this is not (yet) possible using `.m2m()` in sqlite-utils. This may be a bit of a niche use case, so feel free to close this issue if you feel it would introduce too much complexity compared to the benefits.\r\n\r\nExample: suppose I have a table of people, and I want to store the information that John and Mary have two children, Michael and Suzy. It would be neat if I could do something like this:\r\n\r\n```python\r\nfrom sqlite_utils import Database\r\n\r\ndb = Database(memory=True)\r\ndb[\"people\"].insert({\"name\": \"John\"}, pk=\"name\").m2m(\r\n \"people\", [{\"name\": \"Michael\"}, {\"name\": \"Suzy\"}], m2m_table=\"parent_child\", pk=\"name\"\r\n)\r\ndb[\"people\"].insert({\"name\": \"Mary\"}, pk=\"name\").m2m(\r\n \"people\", [{\"name\": \"Michael\"}, {\"name\": \"Suzy\"}], m2m_table=\"parent_child\", pk=\"name\"\r\n)\r\n```\r\n\r\nBut if I do that, the many-to-many table `parent_child` has only one column:\r\n```\r\nCREATE TABLE [parent_child] (\r\n [people_id] TEXT REFERENCES [people]([name]),\r\n PRIMARY KEY ([people_id], [people_id])\r\n)\r\n```\r\n\r\nThis could be solved by adding one or two keyword_arguments to `.m2m()`, e.g. 
`.m2m(..., left_name=None, right_name=None)` or `.m2m(..., names=(None, None))`.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/431/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1243151184, "node_id": "I_kwDOCGYnMM5KGPtQ", "number": 434, "title": "`detect_fts()` identifies the wrong table if tables have names that are subsets of each other", "user": {"value": 559711, "label": "ryascott"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-05-20T13:28:31Z", "updated_at": "2022-06-14T23:24:09Z", "closed_at": "2022-06-14T23:24:09Z", "author_association": "NONE", "pull_request": null, "body": "Windows 10\r\nPython 3.9.6\r\n\r\nWhen I was running a full text search through the Python library, I noticed that the query was being run on a different full text search table than the one I was trying to search.\r\n\r\nI took a look at the following function\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/841ad44bacaff05ec79ef78166d12e80c82ba6d7/sqlite_utils/db.py#L2213\r\n\r\nand noticed:\r\n\r\n```python\r\nsql LIKE '%VIRTUAL TABLE%USING FTS%content=%{table}%'\r\n```\r\n\r\nMy database contains tables with similar names and %{table}% was matching another table that ended differently in its name.\r\nI have included a sample test that shows this occurring:\r\n\r\nI search for Marsupials in db[\"books\"] and The Clue of the Broken Blade is returned. \r\n\r\nThis occurs since the search for Marsupials was \"successfully\" done against db[\"booksb\"] and rowid 1 is returned. \"The Clue of the Broken Blade\" has a rowid of 1 in db[\"books\"] and this is what is returned from the search.\r\n\r\n```python\r\ndef test_fts_search_with_similar_table_names(fresh_db):\r\n db = Database(memory=True)\r\n db[\"books\"].insert_all(\r\n [\r\n {\r\n \"title\": \"The Clue of the Broken Blade\",\r\n \"author\": \"Franklin W. 
Dixon\",\r\n },\r\n {\r\n \"title\": \"Habits of Australian Marsupials\",\r\n \"author\": \"Marlee Hawkins\",\r\n },\r\n ]\r\n )\r\n db[\"booksb\"].insert(\r\n {\r\n \"title\": \"Habits of Australian Marsupials\",\r\n \"author\": \"Marlee Hawkins\",\r\n }\r\n )\r\n\r\n db[\"booksb\"].enable_fts([\"title\", \"author\"])\r\n db[\"books\"].enable_fts([\"title\", \"author\"])\r\n\r\n\r\n query = \"Marsupials\"\r\n\r\n assert [\r\n { \"rowid\": 1,\r\n \"title\": \"Habits of Australian Marsupials\",\r\n \"author\": \"Marlee Hawkins\",\r\n },\r\n ] == list(db[\"books\"].search(query))\r\n```\r\n\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/434/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1277295119, "node_id": "I_kwDOCGYnMM5MIfoP", "number": 445, "title": "`sqlite_utils.utils.TypeTracker` should be a documented API", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-06-20T19:08:28Z", "updated_at": "2022-06-20T19:49:02Z", "closed_at": "2022-06-20T19:46:58Z", "author_association": "OWNER", "pull_request": null, "body": "I've used it in a couple of external places now:\r\n\r\n- https://github.com/simonw/datasette-socrata/blob/32fb256a461bf0e790eca10bdc7dd9d96c20f7c4/datasette_socrata/__init__.py#L264-L280\r\n- https://github.com/simonw/datasette-lite/blob/caa8eade10f0321c64f9f65c4561186f02d57c5b/webworker.js#L55-L64\r\n\r\nRefs:\r\n- https://github.com/simonw/datasette-lite/issues/32", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/445/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1278571700, "node_id": "I_kwDOCGYnMM5MNXS0", "number": 447, "title": "Incorrect syntax highlighting in docs CLI reference", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-06-21T14:53:10Z", "updated_at": "2022-06-21T18:48:47Z", "closed_at": "2022-06-21T18:48:46Z", "author_association": "OWNER", "pull_request": null, "body": "https://sqlite-utils.datasette.io/en/stable/cli-reference.html#insert\r\n\r\n![CE020DDA-27FB-49C3-9EA6-37457DC4C321](https://user-images.githubusercontent.com/9599/174830380-06530537-b870-41c0-a8af-03c7fa720c6f.jpeg)\r\n\r\nIt looks like Python keywords are being incorrectly highlighted here.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/447/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1353074021, "node_id": "I_kwDOCGYnMM5QpkVl", "number": 474, "title": "Add an option for specifying column names when inserting CSV data", "user": {"value": 14294, "label": 
"hubgit"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-08-27T15:29:59Z", "updated_at": "2022-08-31T03:42:36Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "https://sqlite-utils.datasette.io/en/stable/cli.html#csv-files-without-a-header-row\r\n\r\n> The first row of any CSV or TSV file is expected to contain the names of the columns in that file.\r\n\r\n> If your file does not include this row, you can use the `--no-headers` option to specify that the tool should not use that fist row as headers.\r\n\r\n> If you do this, the table will be created with column names called `untitled_1` and `untitled_2` and so on. You can then rename them using the `sqlite-utils transform ... --rename` command.\r\n\r\nIt would be nice to be able to specify the column names when importing CSV/TSV without a header row, via an extra command line option.\r\n\r\n(renaming a column of a large table can take a long time, which makes it an inconvenient workaround)", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/474/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1386562662, "node_id": "I_kwDOCGYnMM5SpURm", "number": 493, "title": "Tiny typographical error in install/uninstall docs", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-09-26T19:00:42Z", "updated_at": "2022-10-25T21:31:15Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Added in:\r\n- #483\r\n\r\nI don't know how to fix this in Sphinx: I'm getting this: https://sqlite-utils.datasette.io/en/latest/cli.html#cli-install\r\n\r\n> The [insert \u2013convert](https://sqlite-utils.datasette.io/en/latest/cli.html#cli-insert-convert) and [query \u2013functions](https://sqlite-utils.datasette.io/en/latest/cli.html#cli-query-functions) options\r\n\r\n\"image\"\r\n\r\nBut I want it to display `insert --convert` and not `insert \u2013convert` there.\r\n\r\nHere's the code: https://github.com/simonw/sqlite-utils/blob/85247038f70d7eb2f3e272cfeaa4c44459cafba8/docs/cli.rst#L2125", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/493/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1392690202, "node_id": "I_kwDOCGYnMM5TAsQa", "number": 495, "title": "Support JSON values returned from .convert() functions", "user": {"value": 649467, "label": "mhalle"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-09-30T16:33:49Z", "updated_at": "2022-10-25T21:23:37Z", "closed_at": "2022-10-25T21:23:28Z", "author_association": "NONE", "pull_request": null, "body": "When using the convert function on a JSON column, the result of the conversion function must be a string. 
If the return value is either a dict (object) or a list (array), the convert call will error out with an unhelpful user-defined function exception. \r\n\r\nIt makes sense that since the original column value was a string and required conversion to data structures, the result should be converted back into a JSON string as well. However, other functions auto-convert to JSON string representation, so the fact that convert doesn't could be surprising.\r\n\r\nAt least the documentation should note this requirement, because the sqlite error messages won't readily reveal the issue.\r\n\r\nIf only sqlite's JSON column type meant something :)", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/495/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1429029604, "node_id": "I_kwDOCGYnMM5VLULk", "number": 506, "title": "Make `cursor.rowcount` accessible (wontfix)", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-10-30T21:51:55Z", "updated_at": "2022-11-01T17:37:47Z", "closed_at": "2022-11-01T17:37:13Z", "author_association": "OWNER", "pull_request": null, "body": "In building this Datasette feature on top of `sqlite-utils` I thought it might be useful to expose the number of rows that had been affected by a bulk insert or update - the `cursor.rowcount`:\r\n\r\n- https://github.com/simonw/datasette/issues/1866\r\n\r\nThis isn't currently exposed by `sqlite-utils`.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/506/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1434911255, "node_id": "I_kwDOCGYnMM5VhwIX", "number": 510, "title": "Cannot enable FTS5 despite it being available", "user": {"value": 1176293, "label": "ar-jan"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-11-03T16:03:49Z", "updated_at": "2022-11-18T18:37:52Z", "closed_at": "2022-11-17T10:36:28Z", "author_association": "NONE", "pull_request": null, "body": "When I do `sqlite-utils enable-fts my.db table_name column_name` (with or without `--fts5`), I get an FTS4 virtual table instead of the expected FTS5.\r\n\r\nFTS5 is however available and Python/SQLite versions do not seem to be the issue. 
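A quick check from the same environment confirms FTS5 is compiled in (a minimal sketch):\r\n\r\n```python\r\nimport sqlite3\r\n\r\nconn = sqlite3.connect(\":memory:\")\r\n# PRAGMA compile_options returns one row per compile-time option\r\noptions = [row[0] for row in conn.execute(\"PRAGMA compile_options\")]\r\nprint(\"ENABLE_FTS5\" in options)  # True here\r\n```\r\n\r\n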
I can manually create the FTS5 virtual table, and then Datasette also works with it from this same Python environment.\r\n\r\n`>>> sqlite3.version`\r\n`2.6.0`\r\n`>>> sqlite3.sqlite_version`\r\n`3.39.4`\r\n\r\n`PRAGMA compile_options;` includes `ENABLE_FTS5`.\r\n\r\n`sqlite-utils, version 3.30`.\r\n\r\nAny ideas what's happening and how to fix?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1450952393, "node_id": "I_kwDOCGYnMM5We8bJ", "number": 512, "title": "mypy failures in CI", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-11-16T06:22:48Z", "updated_at": "2022-11-16T07:49:51Z", "closed_at": "2022-11-16T07:49:50Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/sqlite-utils/actions/runs/3472012235 failed on Python 3.11:\r\n\r\nTruncated output:\r\n```\r\nsqlite_utils/db.py:2467: note: PEP 484 prohibits implicit Optional. Accordingly, mypy has changed its default to no_implicit_optional=True\r\nsqlite_utils/db.py:2467: note: Use https://github.com/hauntsaninja/no_implicit_optional to automatically upgrade your codebase\r\nsqlite_utils/db.py:2530: error: Incompatible default for argument \"where\" (default has type \"None\", argument has type \"str\") [assignment]\r\nsqlite_utils/db.py:2530: note: PEP 484 prohibits implicit Optional. Accordingly, mypy has changed its default to no_implicit_optional=True\r\nsqlite_utils/db.py:2530: note: Use https://github.com/hauntsaninja/no_implicit_optional to automatically upgrade your codebase\r\nsqlite_utils/db.py:2658: error: Argument 1 to \"count_where\" of \"Queryable\" has incompatible type \"Optional[str]\"; expected \"str\" [arg-type]\r\nFound 23 errors in 1 file (checked 51 source files)\r\n```\r\nBest look at https://github.com/hauntsaninja/no_implicit_optional", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/512/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1516644980, "node_id": "I_kwDOCGYnMM5aZip0", "number": 520, "title": "rows_from_file() raises confusing error if file-like object is not in binary mode", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-01-02T19:00:14Z", "updated_at": "2023-05-08T22:08:07Z", "closed_at": "2023-05-08T22:08:07Z", "author_association": "OWNER", "pull_request": null, "body": "I got this error:\r\n\r\n```\r\n File \"/Users/simon/Dropbox/Development/openai-to-sqlite/openai_to_sqlite/cli.py\", line 27, in embeddings\r\n rows, _ = rows_from_file(input)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/simon/.local/share/virtualenvs/openai-to-sqlite-jt4obeb2/lib/python3.11/site-packages/sqlite_utils/utils.py\", line 305, in rows_from_file\r\n first_bytes = buffered.peek(2048).strip()\r\n 
^^^^^^^^^^^^^^^^^^^\r\n```\r\nFrom this code:\r\n```python\r\n\r\n@cli.command()\r\n@click.argument(\r\n \"db_path\",\r\n type=click.Path(file_okay=True, dir_okay=False, allow_dash=False),\r\n)\r\n@click.option(\r\n \"-i\",\r\n \"--input\",\r\n type=click.File(\"r\"),\r\n default=\"-\",\r\n)\r\ndef embeddings(db_path, input):\r\n \"Store embeddings for one or more text documents\"\r\n click.echo(\"Here is some output\")\r\n db = sqlite_utils.Database(db_path)\r\n rows, _ = rows_from_file(input)\r\n print(list(rows))\r\n```\r\nThe error went away when I changed it to `type=click.File(\"rb\")`.\r\n\r\nThis should either be called out in the documentation or `rows_from_file()` should be fixed to handle text-mode files in addition to binary files.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/520/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1718595700, "node_id": "I_kwDOCGYnMM5mb7B0", "number": 550, "title": "AttributeError: 'EntryPoints' object has no attribute 'get' for flake8 on Python 3.7", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-05-21T18:24:39Z", "updated_at": "2023-05-21T18:42:25Z", "closed_at": "2023-05-21T18:41:58Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/sqlite-utils/actions/runs/5039064797/jobs/9036965488\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/bin/flake8\", line 8, in \r\n sys.exit(main())\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/cli.py\", line 22, in main\r\n app.run(argv)\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py\", line 363, in run\r\n self._run(argv)\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py\", line 350, in _run\r\n self.initialize(argv)\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py\", line 330, in initialize\r\n self.find_plugins(config_finder)\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py\", line 153, in find_plugins\r\n self.check_plugins = plugin_manager.Checkers(local_plugins.extension)\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/plugins/manager.py\", line 357, in __init__\r\n self.namespace, local_plugins=local_plugins\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/plugins/manager.py\", line 238, in __init__\r\n self._load_entrypoint_plugins()\r\n File \"/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/plugins/manager.py\", line 254, in _load_entrypoint_plugins\r\n eps = importlib_metadata.entry_points().get(self.namespace, ())\r\nAttributeError: 'EntryPoints' object has no attribute 'get'\r\nError: Process completed with exit code 1.\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/sqlite-utils/issues/550/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1816918185, "node_id": "I_kwDOCGYnMM5sS_ip", "number": 574, "title": "`prepare_connection()` plugin hook", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-07-22T22:52:47Z", "updated_at": "2023-07-22T23:13:14Z", "closed_at": "2023-07-22T22:59:10Z", "author_association": "OWNER", "pull_request": null, "body": "> Splitting off an issue for `prepare_connection()` since Alex got the PR in seconds before I shipped 3.34!\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/567#issuecomment-1646686424_\r\n ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/574/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1816852402, "node_id": "I_kwDOCGYnMM5sSvey", "number": 569, "title": "register_command plugin hook", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-07-22T18:17:27Z", "updated_at": "2023-07-22T19:19:35Z", "closed_at": "2023-07-22T19:19:35Z", "author_association": "OWNER", "pull_request": null, "body": "> I'm going to start by adding the `register_command` hook using the exact same pattern as Datasette and LLM.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/567#issuecomment-1646643450_\r\n ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/569/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1856075668, "node_id": "I_kwDOCGYnMM5uoXeU", "number": 586, "title": ".transform() fails to drop column if table is part of a view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-08-18T05:25:22Z", "updated_at": "2023-08-18T06:13:47Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I got this error trying to drop a column from a table that was part of a SQL view:\r\n\r\n> error in view plugins: no such table: main.pypi_releases\r\n\r\nUpon further investigation I found that this pattern seemed to fix it:\r\n```python\r\ndef transform_the_table(conn):\r\n # Run this in a transaction:\r\n with conn:\r\n # We have to read all the views first, because we need to drop and recreate them\r\n db = sqlite_utils.Database(conn)\r\n views = {v.name: v.schema for v in db.views if table.lower() in v.schema.lower()}\r\n for view in views.keys():\r\n db[view].drop()\r\n db[table].transform(\r\n types=types,\r\n rename=rename,\r\n drop=drop,\r\n column_order=[p[0] for p in order_pairs],\r\n )\r\n # Now recreate the views\r\n for name, schema in 
views.items():\r\n db.create_view(name, schema)\r\n```\r\nSo grab a copy of any view that might reference this table, start a transaction, drop those views, run the transform, then recreate the views.\r\n\r\n> I wonder if this should become an option in `sqlite-utils`? Maybe a `recreate_views=True` argument for `table.transform(...)`? Should it be opt-in or opt-out?\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-edit-schema/issues/35#issuecomment-1683370548_\r\n ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/586/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1857851384, "node_id": "I_kwDOCGYnMM5uvI_4", "number": 587, "title": "New .add_foreign_key() can break if PRAGMA legacy_alter_table=ON and there's an invalid foreign key reference", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-08-19T20:01:26Z", "updated_at": "2023-08-19T20:04:33Z", "closed_at": "2023-08-19T20:04:32Z", "author_association": "OWNER", "pull_request": null, "body": "Extremely detailed story of how I got to this point:\r\n\r\n- https://github.com/simonw/llm/issues/162\r\n\r\nSteps to reproduce (only if that pragma is on though):\r\n```bash\r\npython -c '\r\nimport sqlite_utils\r\ndb = sqlite_utils.Database(memory=True)\r\ndb.execute(\"\"\"\r\nCREATE TABLE \"logs\" (\r\n [id] INTEGER PRIMARY KEY,\r\n [model] TEXT,\r\n [prompt] TEXT,\r\n [system] TEXT,\r\n [prompt_json] TEXT,\r\n [options_json] TEXT,\r\n [response] TEXT,\r\n [response_json] TEXT,\r\n [reply_to_id] INTEGER,\r\n [chat_id] INTEGER REFERENCES [log]([id]),\r\n [duration_ms] INTEGER,\r\n [datetime_utc] TEXT\r\n);\r\n\"\"\")\r\ndb[\"logs\"].add_foreign_key(\"reply_to_id\", \"logs\", \"id\")\r\n'\r\n```\r\nThis succeeds in some environments, fails in others.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/587/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1879209560, "node_id": "I_kwDOCGYnMM5wAnZY", "number": 589, "title": "Mechanism for de-registering registered SQL functions", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-09-03T19:32:39Z", "updated_at": "2023-09-03T19:36:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I used a custom SQL function in a migration script and then realized that it should be de-registered before the end of the script to avoid leaking into the calling code.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/589/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": 
null} {"id": 1246826792, "node_id": "I_kwDODLZ_YM5KUREo", "number": 10, "title": "When running `auth` command, don't overwrite an existing auth.json file", "user": {"value": 11887, "label": "ashanan"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-05-24T16:42:20Z", "updated_at": "2022-09-07T15:07:38Z", "closed_at": "2022-08-22T16:17:19Z", "author_association": "NONE", "pull_request": null, "body": "Ran the `auth` command in the same directory I'd previously set up an auth.json file for `twitter-to-sqlite` and it was completely overwritten. Not the biggest issue, but still unexpected. Ideally, for me, the keys would just be added to the existing file, but getting a warning and a chance to back out would be a good solution as well.", "repo": {"value": 213286752, "label": "pocket-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/10/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1345452427, "node_id": "I_kwDODLZ_YM5QMfmL", "number": 11, "title": "-a option is used for \"--auth\" and for \"--all\"", "user": {"value": 2467, "label": "fernand0"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-08-21T10:50:48Z", "updated_at": "2022-08-21T21:11:57Z", "closed_at": "2022-08-21T21:11:57Z", "author_association": "NONE", "pull_request": null, "body": "I'm not sure which option is best, instead of -a -all.", "repo": {"value": 213286752, "label": "pocket-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/11/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1616347574, "node_id": "I_kwDOJHON9s5gV4G2", "number": 1, "title": "Initial proof of concept with ChatGPT", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-03-09T03:44:39Z", "updated_at": "2023-03-09T03:51:55Z", "closed_at": "2023-03-09T03:51:55Z", "author_association": "MEMBER", "pull_request": null, "body": "I'm using ChatGPT to figure out enough AppleScript to get at my notes data.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1618130434, "node_id": "I_kwDOJHON9s5gcrYC", "number": 11, "title": "Implement a SQL view to make it easier to query files in a nested folder", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-03-09T23:19:28Z", "updated_at": "2023-03-09T23:24:01Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Working with nested data in SQL is tricky, can I make it easier with 
a view or canned query?", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/11/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 322741659, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg3NzcwMzQ1", "number": 258, "title": "Add new metadata key persistent_urls which removes the hash from all database urls", "user": {"value": 247131, "label": "philroche"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-05-14T09:39:18Z", "updated_at": "2018-05-21T07:38:15Z", "closed_at": "2018-05-21T07:38:15Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/258", "body": "Add new metadata key \"persistent_urls\" which removes the hash from all database urls when set to \"true\"\r\n\r\nThis PR is just to gauge if this, or something like it, is something you would consider merging?\r\n\r\nI understand the reason why the substring of the hash is included in the url but\r\nthere are some use cases where the urls should persist across deployments. For bookmarks\r\nfor example or for scripts that use the JSON API.\r\n\r\nThis is the initial commit for this feature. Tests and documentation updates to follow.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/258/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 313494458, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxMDMzMDI0", "number": 200, "title": "Hide Spatialite system tables", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-04-11T21:26:58Z", "updated_at": "2018-04-12T21:34:48Z", "closed_at": "2018-04-12T21:34:48Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/200", "body": "They were getting on my nerves.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/200/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 314319372, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxNjQyMTE0", "number": 205, "title": "Support filtering with units and more", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-04-14T10:47:51Z", "updated_at": "2018-04-14T15:24:04Z", "closed_at": "2018-04-14T15:24:04Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/205", "body": "The first commit:\r\n* Adds units to exported JSON\r\n* Adds units key to metadata skeleton\r\n* Adds some docs for units\r\n\r\nThe second commit adds filtering by units by the first method I mentioned in 
#203:\r\n![image](https://user-images.githubusercontent.com/45057/38767463-7193be16-3fd9-11e8-8a5f-ac4159415c6d.png)\r\n\r\n[Try it here](https://wtr-api.herokuapp.com/wtr-663ea99/license_frequency?frequency__gt=50GHz&height__lt=50ft). I think it integrates pretty neatly.\r\n\r\nThe third commit adds support for registering custom units with Pint from metadata.json. Probably pretty niche, but I need decibels!", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/205/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 403499298, "node_id": "MDExOlB1bGxSZXF1ZXN0MjQ3OTIzMzQ3", "number": 404, "title": "Experiment: run Jinja in async mode", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-01-27T00:28:44Z", "updated_at": "2019-11-12T05:02:18Z", "closed_at": "2019-11-12T05:02:13Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/404", "body": "See http://jinja.pocoo.org/docs/2.10/api/#async-support\r\n\r\nTests all pass. Have not checked performance difference yet.\r\n\r\nCreating pull request to run tests in Travis. This is not ready to merge - I'm not yet sure if this is a good idea.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/404/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 452901999, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjg1Njk4MzEw", "number": 501, "title": "Test against Python 3.8-dev using Travis", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-06-06T08:37:53Z", "updated_at": "2019-11-11T03:23:29Z", "closed_at": "2019-11-11T03:23:29Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/501", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/501/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 505818256, "node_id": "MDExOlB1bGxSZXF1ZXN0MzI3MTcyNTQ1", "number": 590, "title": "Handle spaces in DB names", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-10-11T12:18:22Z", "updated_at": "2019-11-04T23:16:31Z", "closed_at": "2019-11-04T23:16:30Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/590", "body": "Closes #503", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/590/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, 
\"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 499954048, "node_id": "MDExOlB1bGxSZXF1ZXN0MzIyNTI5Mzgx", "number": 578, "title": "Added support for multi arch builds", "user": {"value": 887095, "label": "heussd"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-09-29T18:43:03Z", "updated_at": "2019-11-13T19:13:15Z", "closed_at": "2019-11-13T19:13:15Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/578", "body": "Minor changes in Dockerfile and new Makefile to support Docker multi architecture builds. `make`will build one image per architecture and push them as one Docker manifest to Docker Hub. Feel free to change `IMAGE_NAME ` to `datasetteproject/datasette` to update your official Docker Hub image(s).", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/578/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 509535510, "node_id": "MDExOlB1bGxSZXF1ZXN0MzMwMDc2MjYz", "number": 602, "title": "Offer to format readonly SQL", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-10-20T02:29:32Z", "updated_at": "2019-11-04T07:29:33Z", "closed_at": "2019-11-04T02:39:56Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/602", "body": "Following discussion in #601, this PR adds a \"Format SQL\" button to\r\nread-only SQL (if the SQL actually differs from the formatting result).\r\n\r\nIt also removes a console error on readonly SQL queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/602/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 562085508, "node_id": "MDExOlB1bGxSZXF1ZXN0MzcyNzYzOTA2", "number": 666, "title": "Use inspect-file, if possible, for total row count", "user": {"value": 13896256, "label": "kevindkeogh"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-02-08T22:10:35Z", "updated_at": "2020-03-09T02:47:15Z", "closed_at": "2020-02-25T20:19:29Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/666", "body": "For large tables, counting the number of rows in the table can take a\r\nsignficant amount of time. 
Instead, where an inspect-file is provided\r\nfor an immutable database, look up the row-count for a plain count(*).", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/666/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 585597133, "node_id": "MDExOlB1bGxSZXF1ZXN0MzkxOTI0NTA5", "number": 703, "title": "WIP implementation of writable canned queries", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-03-21T22:23:51Z", "updated_at": "2020-06-03T00:08:14Z", "closed_at": "2020-06-02T23:57:35Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/703", "body": "Refs #698.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/703/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 1, "state_reason": null} {"id": 607107849, "node_id": "MDExOlB1bGxSZXF1ZXN0NDA5MTUzODcw", "number": 739, "title": "Configuration directory mode", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-04-26T20:37:46Z", "updated_at": "2020-04-27T16:30:25Z", "closed_at": "2020-04-27T16:30:25Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/739", "body": "Refs #731\r\n\r\nTODO:\r\n\r\n- [x] Decide how to combine explicit command-line options with items detected from the directory structure\r\n- [x] Add unit tests\r\n- [x] Implement `inspect-data.json` mechanism for populating `immutables`\r\n- [x] Add documentation", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/739/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 598891570, "node_id": "MDExOlB1bGxSZXF1ZXN0NDAyNjQ1OTg0", "number": 725, "title": "Update aiofiles requirement from ~=0.4.0 to >=0.4,<0.6", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-04-13T13:32:47Z", "updated_at": "2020-05-04T18:16:54Z", "closed_at": "2020-05-04T16:17:49Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/725", "body": "Updates the requirements on [aiofiles](https://github.com/Tinche/aiofiles) to permit the latest version.\n
\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
\nDependabot commands and options\n
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language\n- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language\n- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language\n- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language\n- `@dependabot badge me` will comment on this PR with code to add a \"Dependabot enabled\" badge to your readme\n\nAdditionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):\n- Update frequency (including time of day and day of week)\n- Pull request limits (per update run and/or open at any time)\n- Out-of-range updates (receive only lockfile updates, if desired)\n- Security updates (receive only security updates, if desired)\n\n\n\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/725/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 646448486, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQwNzM1ODE0", "number": 868, "title": "initial windows ci setup", "user": {"value": 702729, "label": "joshmgrant"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-06-26T18:49:13Z", "updated_at": "2021-07-10T23:41:43Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/868", "body": "Picking up the work done on #557 with a new PR. Seeing if I can get this working.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/868/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 688386219, "node_id": "MDExOlB1bGxSZXF1ZXN0NDc1NjY1OTg0", "number": 142, "title": "insert_all(..., alter=True) should work for new columns introduced after the first 100 records", "user": {"value": 96218, "label": "simonwiles"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-08-28T22:22:57Z", "updated_at": "2020-08-30T07:28:23Z", "closed_at": "2020-08-28T22:30:14Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/142", "body": "Closes #139.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/142/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 726910999, "node_id": "MDExOlB1bGxSZXF1ZXN0NTA3OTAzMzky", "number": 1040, "title": "/db/table/-/blob/pk/column.blob download URL", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6026070, "label": "0.51"}, "comments": 3, "created_at": "2020-10-21T22:39:15Z", "updated_at": "2020-10-24T23:09:20Z", "closed_at": "2020-10-24T23:09:19Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1040", "body": "Refs #1036. 
Still needs:\r\n\r\n- [x] Comprehensive tests across all of the code branches, plus permissions\r\n- [x] A bit more refactoring to share logic cleanly with `RowView`\r\n- ~~A configuration option to disable this feature (probably)~~", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1040/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 735663855, "node_id": "MDExOlB1bGxSZXF1ZXN0NTE1MDE0ODgz", "number": 195, "title": "table.search() improvements plus sqlite-utils search command", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-11-03T22:02:08Z", "updated_at": "2020-11-06T18:30:49Z", "closed_at": "2020-11-06T18:30:42Z", "author_association": "OWNER", "pull_request": "simonw/sqlite-utils/pulls/195", "body": "Refs #192. Still needs tests.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/195/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 729818242, "node_id": "MDExOlB1bGxSZXF1ZXN0NTEwMjM1OTA5", "number": 189, "title": "Allow iterables other than Lists in m2m records", "user": {"value": 35681, "label": "adamwolf"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-10-26T18:47:44Z", "updated_at": "2020-10-27T16:28:37Z", "closed_at": "2020-10-27T16:24:21Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/189", "body": "I was playing around with sqlite-utils, creating a Roam Research dogsheep-style importer for Datasette, and ran into a slight snag.\r\n\r\nI wanted to use a generator to add an order column in an importer. It looked something like:\r\n\r\n```\r\ndef order_generator(iterable, attr=None):\r\n if attr is None:\r\n attr = \"order\"\r\n order: int = 0\r\n\r\n for i in iterable:\r\n i[attr] = order\r\n order += 1\r\n yield i\r\n```\r\n\r\nWhen I used this with `insert_all` and other things, it worked fine--but it didn't work as the `records` argument to `m2m`. I dug into it, and sqlite-utils is explicitly checking if the records argument is a list or a tuple. I flipped the check upside down, and now it checks if the argument is a mapping. If it's a mapping, it wraps it in a list, otherwise it leaves it alone.\r\n\r\n(I get that it might not really make sense to put the order column on the second table. I changed my import schema a bit, and no longer have a real example, but maybe this change still makes sense.)\r\n\r\nThe automated tests still pass, but I did not add any new ones.\r\n\r\nLet me know what you think! 
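For illustration, here is a hypothetical sketch (table names and records are invented) of what this change enables - passing a generator straight through to `m2m`:\r\n\r\n```python\r\nimport sqlite_utils\r\n\r\ndb = sqlite_utils.Database(memory=True)\r\n\r\ndef order_generator(iterable, attr=\"order\"):\r\n    # Tag each record with an incrementing order column as it streams through\r\n    for order, record in enumerate(iterable):\r\n        record[attr] = order\r\n        yield record\r\n\r\ntags = ({\"name\": name} for name in [\"daily\", \"todo\"])\r\n# Previously records had to be a list or tuple; with this change any\r\n# iterable of mappings works, so the generator passes straight through.\r\ndb[\"pages\"].insert({\"id\": 1, \"title\": \"Home\"}, pk=\"id\").m2m(\"tags\", order_generator(tags), pk=\"name\")\r\n```\r\n\r\n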
I'm really loving Datasette and its ecosystem; thanks for everything!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/189/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}