{"id": 1200649889, "node_id": "I_kwDOBm6k_c5HkHah", "number": 1710, "title": "Guide for plugin authors to upgrade their plugins for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-04-11T22:58:25Z", "updated_at": "2022-04-11T23:04:01Z", "closed_at": "2022-04-11T23:03:25Z", "author_association": "OWNER", "pull_request": null, "body": "I'll also encourage testing against both Datasette 0.x and Datasette 1.0 using a GitHub Actions matrix.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1710/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1200224939, "node_id": "I_kwDOBm6k_c5Hifqr", "number": 1707, "title": "[feature] expanded detail page", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-04-11T16:29:17Z", "updated_at": "2022-04-11T16:33:00Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Right now, if click on the detail page for a row you get the info for the row and links to related tables:\r\n![Screenshot 2022-04-11 at 12-27-26 lm20 filing](https://user-images.githubusercontent.com/536941/162786802-90ac1a71-4624-47c4-ae55-b783f4f6c92d.png)\r\n\r\nIt would be very cool if there was an option to expand the rows of the related tables from within this detail view.\r\n\r\nIf you had that then datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidate detail view about what you know about that entity.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1707/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1193090967, "node_id": "I_kwDOBm6k_c5HHR-X", "number": 1699, "title": "Proposal: datasette query", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2022-04-05T12:36:43Z", "updated_at": "2022-04-11T01:32:12Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I started sketching out a plugin to add a `datasette query` subcommand to export data from the command line. This is based on discussions in #1356 and #1605. Before I get too far down this rabbit hole, I figure it's worth getting some feedback here (unless this should happen in `Discussions`). Here's what I'm thinking:\r\n\r\nAt its most basic, it will write the results of a query to STDOUT.\r\n\r\n```sh\r\ndatasette query -d data.db 'select * from data' > results.json\r\n```\r\n\r\nThis isn't much improvement over using [sqlite-utils](https://github.com/simonw/sqlite-utils). 
To make better use of datasette and its ecosystem, run `datasette query` using a canned query defined in a `metadata.yml` file.\r\n\r\nFor example, using the metadata file from [alltheplaces-datasette](https://github.com/eyeseast/alltheplaces-datasette/blob/main/metadata.yml):\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query -d alltheplaces.db -m metadata.yml count_by_spider\r\n```\r\n\r\nThat query would be good to get as CSV, and we can auto-discover metadata and databases in the current directory:\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query count_by_spider -f csv\r\n```\r\n\r\nIn this case, `count_by_spider` is a canned query defined on the `alltheplaces` database. If the same query is defined on multiple databases or it's otherwise unclear which database `query` should use, pass the `-d` or `--database` option.\r\n\r\nIf a query takes parameters, I can pass them in at runtime, using the `--param` or `-p` option:\r\n\r\n```sh\r\ndatasette query -d data.db -p value something 'select * from neighborhoods where some_column = :value'\r\n```\r\n\r\nI'm very interested in feedback on this, including whether it should be a plugin or in Datasette core. (I don't have a strong opinion about this, but I'm prototyping it as a plugin to start.)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1699/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1197925865, "node_id": "I_kwDOBm6k_c5HZuXp", "number": 1704, "title": "File PRs against incompatible plugins pinning to datasette<1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2022-04-08T23:15:30Z", "updated_at": "2022-04-08T23:15:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "As part of the preparation for the 1.0 release, test all existing known plugins against the alpha.\r\n\r\nFor any that break, submit a PR suggesting they pin to a version <1.0 - and include a link to the documentation on how to upgrade the plugin for 1.0.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1704/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1184850675, "node_id": "PR_kwDOBm6k_c41OrWq", "number": 1694, "title": "Update click requirement from <8.1.0,>=7.1.1 to >=7.1.1,<8.2.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-29T13:11:23Z", "updated_at": "2022-04-08T23:05:10Z", "closed_at": "2022-04-08T23:05:09Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1694", "body": "Updates the requirements on [click](https://github.com/pallets/click) to permit the latest version.\n
\nRelease notes\n\nSourced from click's releases.\n\n8.1.0\n\nThis is a feature release, which includes new features and removes previously deprecated features. The 8.1.x branch is now the supported bugfix branch, the 8.0.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as pip-tools to pin all dependencies and control upgrades.\n\nChangelog\n\nSourced from click's changelog.\n\nVersion 8.1.0\n\nReleased 2022-03-28\n\n... (truncated)\n\nCommits\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\nDependabot commands and options\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1694/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1197298420, "node_id": "PR_kwDOBm6k_c4132NJ", "number": 1703, "title": "Update beautifulsoup4 requirement from <4.11.0,>=4.8.1 to >=4.8.1,<4.12.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-04-08T13:08:53Z", "updated_at": "2022-04-08T22:51:05Z", "closed_at": "2022-04-08T22:51:05Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1703", "body": "Updates the requirements on [beautifulsoup4](https://www.crummy.com/software/BeautifulSoup/bs4/) to permit the latest version.\n\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
Dependabot commands and options\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1703/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1196327155, "node_id": "I_kwDOBm6k_c5HToDz", "number": 1702, "title": "Be more consistent with column quoting", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-04-07T16:59:20Z", "updated_at": "2022-04-07T16:59:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This tutorial made me notice that Datasette is pretty inconsistent with how column quoting works: https://datasette.io/tutorials/learn-sql\r\n\r\nIt has examples of each of `\"table_name\"` and `[table_name]` and `table_name`, and it uses single quoted values too.\r\n\r\nDatasette should generate SQL as consistently as possible to support learners.\r\n\r\nThat tutorial should also provide a tiny bit of extra information about what's going on here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1702/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1194790504, "node_id": "I_kwDOBm6k_c5HNw5o", "number": 1701, "title": "Use + for spaces instead of ~20", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2022-04-06T15:40:48Z", "updated_at": "2022-04-06T15:55:10Z", "closed_at": "2022-04-06T15:55:05Z", "author_association": "OWNER", "pull_request": null, "body": "Tilde encoding introduced in #1657 means that database files with spaces in the name - e.g. 
the Apple Mail `Envelope Index` database - end up with URLs like this:\r\n\r\n http://127.0.0.1:8001/Envelope~20Index\r\n\r\nI think this would be prettier:\r\n\r\n http://127.0.0.1:9933/Envelope+Index", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1701/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1077620955, "node_id": "I_kwDOBm6k_c5AOzDb", "number": 1549, "title": "Redesign CSV export to improve usability", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2021-12-11T19:02:12Z", "updated_at": "2022-04-04T11:17:13Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "*Original title: Set content type for CSV so that browsers will attempt to download instead of opening in the browser*\r\n\r\nRight now, if the user clicks on the CSV related to a table or a query, the response header for the content type is \r\n\r\n\"content-type: text/plain; charset=utf-8\"\r\n\r\nMost browsers will try to open a file with this content-type in the browser. \r\n\r\nThis is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program they next need to save the page through their browser.\r\n\r\nIt would be great if the response headers could be something like \r\n\r\n```\r\nContent-Type: text/csv\r\nContent-Disposition: attachment; filename=MyVerySpecial.csv\r\n```\r\n\r\nwhich would lead browsers to open a download dialog.\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1549/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1190828163, "node_id": "I_kwDOBm6k_c5G-piD", "number": 1698, "title": "Add a warning about bots and Cloud Run", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-04-03T05:57:17Z", "updated_at": "2022-04-03T06:10:24Z", "closed_at": "2022-04-03T06:10:24Z", "author_association": "OWNER", "pull_request": null, "body": "Recommend the https://github.com/simonw/datasette-block-robots plugin if you are going to run a large database in Cloud Run (one with a lot of rows).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1698/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1189113609, "node_id": "I_kwDOBm6k_c5G4G8J", "number": 1697, "title": "`Request.fake(..., url_vars={})`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, 
"comments": 1, "created_at": "2022-04-01T01:48:40Z", "updated_at": "2022-04-01T02:02:18Z", "closed_at": "2022-04-01T02:02:10Z", "author_association": "OWNER", "pull_request": null, "body": "I just created an alternative `.fake()` method because I wanted to fake the `url_vars` captured in the route as well:\r\n```python\r\nfrom datasette.utils.asgi import Request\r\nclass Request(Request):\r\n\r\n @classmethod\r\n def fake(cls, path_with_query_string, method=\"GET\", scheme=\"http\", url_vars=None):\r\n \"\"\"Useful for constructing Request objects for tests\"\"\"\r\n path, _, query_string = path_with_query_string.partition(\"?\")\r\n scope = {\r\n \"http_version\": \"1.1\",\r\n \"method\": method,\r\n \"path\": path,\r\n \"raw_path\": path_with_query_string.encode(\"latin-1\"),\r\n \"query_string\": query_string.encode(\"latin-1\"),\r\n \"scheme\": scheme,\r\n \"type\": \"http\",\r\n }\r\n if url_vars:\r\n scope[\"url_route\"] = {\r\n \"kwargs\": url_vars\r\n }\r\n return cls(scope, None)\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1697/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1182227211, "node_id": "I_kwDOBm6k_c5Gd1sL", "number": 1692, "title": "[plugins][feature request]: Support additional script tag attributes when loading custom JS", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-27T01:16:03Z", "updated_at": "2022-03-30T06:14:51Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "## Motivation\r\n\r\n- The build system for my new [plugin](https://github.com/hydrosquall/datasette-nteract-data-explorer) has two output JS files, one for browsers that support ES modules, one for browsers that don't. At present, I'm only passing one of them into Datasette.\r\n- I'd like to specify the non-es-module script as a fallback for older browsers. I don't want to load it by default, because browsers will only need one, and it's heavy, so for now I'm only supporting modern browsers. \r\n\r\nTo be able to support legacy browsers without slowing down users with modern browsers, I would like to be able to set additional HTML attributes on the tag fallback script, `nomodule` and `defer`. My injected scripts should look something like this:\r\n\r\n```html\r\n\r\n\r\n```\r\n\r\n## Proposal\r\n\r\nTo achieve this, I propose additional optional properties to the API accepted by the `extra_js_urls` hook and custom JS field the `metadata.json` [described here](https://docs.datasette.io/en/stable/custom_templates.html#custom-css-and-javascript). 
\r\n\r\nUnder this API, I'd write something like this to get the above HTML rendered in Datasette.\r\n\r\n```json\r\n{\r\n \"extra_js_urls\": [\r\n {\r\n \"url\": \"/index.my-es-module-bundle.js\",\r\n \"module\": true\r\n },\r\n {\r\n \"url\": \"/index.my-legacy-fallback-bundle.js\",\r\n \"nomodule\": \"\",\r\n \"defer\": true\r\n }\r\n ]\r\n}\r\n```\r\n\r\n## Resources\r\n\r\n- [MDN on the script tag](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script)\r\n - There may be other properties that could be added that are potentially valuable, like `async` or `referrerpolicy`, but I don't have an immediate need for those.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1692/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1185868354, "node_id": "I_kwDOBm6k_c5GrupC", "number": 1695, "title": "Option to un-filter facet not shown for `?col__exact=value`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-30T04:44:02Z", "updated_at": "2022-03-30T04:46:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spotted this on a page with `COUNTY__exact=Lee` in the URL:\r\n\r\n![CleanShot 2022-03-29 at 21 41 46@2x](https://user-images.githubusercontent.com/9599/160752849-a9039343-3770-4655-920b-f19e25687a57.png)\r\n\r\nWith `COUNTY=Lee` you get this instead:\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1695/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1181432624, "node_id": "I_kwDOBm6k_c5Gazsw", "number": 1688, "title": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-03-26T01:17:44Z", "updated_at": "2022-03-27T01:01:14Z", "closed_at": "2022-03-26T21:34:47Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm trying to make a small plugin that depends on static assets, by following the guide [here](https://docs.datasette.io/en/stable/writing_plugins.html#writing-one-off-plugins). I made a `plugins/` directory with `datasette_nteract_data_explorer.py`. \r\n\r\nI am trying to follow the example of `datasette_vega`, and to serve static assets. 
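One-off plugins loaded via `--plugins-dir` appear not to get the static-folder treatment that packaged plugins get, but a rough workaround is to serve the files yourself from a registered route. This is a hedged sketch only -- the route path, folder layout and content-type map are assumptions, not a documented static mechanism -- using Datasette's `register_routes()` plugin hook:

```python
# Hedged sketch: serve a folder of assets from a one-off plugin by
# registering a route. Route path and layout are illustrative only.
from pathlib import Path

from datasette import hookimpl
from datasette.utils.asgi import Response

STATIC_DIR = (Path(__file__).parent / "statics").resolve()  # hypothetical layout
CONTENT_TYPES = {".js": "application/javascript", ".css": "text/css"}


async def serve_static(request):
    name = request.url_vars["name"]
    path = (STATIC_DIR / name).resolve()
    # Refuse paths that escape the assets folder, or that don't exist
    if STATIC_DIR not in path.parents or not path.is_file():
        return Response.text("Not found", status=404)
    return Response(
        path.read_bytes(),
        content_type=CONTENT_TYPES.get(path.suffix, "application/octet-stream"),
    )


@hookimpl
def register_routes():
    # Each route is a (regex, view function) pair
    return [(r"^/-/my-plugin-static/(?P<name>[^/]+)$", serve_static)]
```

Dropped next to the one-off plugin file, something like this would expose `statics/app.js` at `/-/my-plugin-static/app.js` during development, sidestepping the packaging question the issue below goes on to ask about.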
I created a `statics/` directory within `plugins/` to serve my JS and CSS.\r\n\r\nhttps://github.com/simonw/datasette-vega/blob/00de059ab1ef77394ba9f9547abfacf966c479c4/datasette_vega/__init__.py#L13\r\n\r\nUnfortunately, datasette doesn't seem to be able to find my assets.\r\n\r\nInput:\r\n\r\n```bash\r\ndatasette ~/Library/Safari/History.db --plugins-dir=plugins/\r\n```\r\n![Image 2022-03-25 at 9 18 17 PM](https://user-images.githubusercontent.com/9020979/160218979-a3ff474b-5255-4a76-85d1-6f90ab2e3b44.jpg)\r\n\r\nOutput:\r\n\r\n![Image 2022-03-25 at 9 11 00 PM](https://user-images.githubusercontent.com/9020979/160218733-ca5144cf-f23f-43d8-a8d3-e3a871e57f3a.jpg)\r\n\r\n\r\n\r\n\r\nI suspect this issue might go away if I move away from \"one-off\" plugin mode, but it's been a while since I created a new python package so I'm not sure how much work there is to go between \"one off\" and \"packaged for PyPI\". I'd like to try to avoid needing to repackage a new `tar.gz` file and/or reinstall my library repeatedly when developing new python code.\r\n\r\n1. Is there a way to serve static assets when using the `plugins/` directory method instead of installing plugins as a new python package?\r\n2. If not, is there a way I can work on developing a plugin without creating and repackaging tar.gz files after every change, or is that the recommended path? \r\n\r\nThanks for your help!\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1688/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1182143895, "node_id": "I_kwDOBm6k_c5GdhWX", "number": 1691, "title": "Bug in pytest-httpx example", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-26T22:45:30Z", "updated_at": "2022-03-26T22:46:09Z", "closed_at": "2022-03-26T22:46:09Z", "author_association": "OWNER", "pull_request": null, "body": "https://docs.datasette.io/en/0.61.1/testing_plugins.html#testing-outbound-http-calls-with-pytest-httpx says:\r\n\r\n```python\r\nasync def test_outbound_http_call(httpx_mock):\r\n httpx_mock.add_response(\r\n url='https://www.example.com/',\r\n data='Hello world',\r\n )\r\n```\r\nThat's wrong - `data=` should be `text=`.\r\n\r\nhttps://github.com/Colin-b/pytest_httpx/blob/v0.20.0/README.md#reply-with-custom-body", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1691/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1182141761, "node_id": "I_kwDOBm6k_c5Gdg1B", "number": 1690, "title": "Idea: `datasette.set_actor_cookie(response, actor)`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-26T22:41:52Z", "updated_at": "2022-03-26T22:43:00Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just wrote this code in a plugin and it felt like it could benefit from an abstraction: 
https://github.com/simonw/datasette-auth0/blob/152e6eb21e96e9b73bd9c205f9749a1297d0ef0b/datasette_auth0/__init__.py#L79-L92\r\n\r\n```python\r\n redirect_response = Response.redirect(\"/\")\r\n expires_at = int(time.time()) + (24 * 60 * 60)\r\n redirect_response.set_cookie(\r\n \"ds_actor\",\r\n datasette.sign(\r\n {\r\n \"a\": profile_response.json(),\r\n \"e\": baseconv.base62.encode(expires_at),\r\n },\r\n \"actor\",\r\n ),\r\n )\r\n return redirect_response\r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1690/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1182065616, "node_id": "I_kwDOBm6k_c5GdOPQ", "number": 1689, "title": "datasette.add_message() documentation is incorrect", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-26T20:49:42Z", "updated_at": "2022-03-26T21:35:57Z", "closed_at": "2022-03-26T20:51:21Z", "author_association": "OWNER", "pull_request": null, "body": "https://docs.datasette.io/en/0.61.1/internals.html#add-message-request-message-message-type-datasette-info says:\r\n\r\n`.add_message(request, message, message_type=datasette.INFO)`\r\n\r\nBut in the code it's:\r\n\r\nhttps://github.com/simonw/datasette/blob/6b99e4a66ba0ed8fca8ee41ceb7206928b60d5d1/datasette/app.py#L582", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1689/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1181364043, "node_id": "I_kwDOBm6k_c5Gai9L", "number": 1687, "title": "Make show_json.html or a similar mechanism stable for plugins", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-25T23:42:45Z", "updated_at": "2022-03-25T23:42:45Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I used `show_json.html` in the new `datasette-packages` plugin, which means it will break if that template changes:\r\n- https://github.com/simonw/datasette-packages/issues/3\r\n\r\nIt would be useful if it (or something like it) was documented and stable for plugins to use.\r\n\r\nAlso relevant:\r\n- #878", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1687/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1175744654, "node_id": "I_kwDOCGYnMM5GFHCO", "number": 417, "title": "insert fails on JSONL with whitespace", "user": {"value": 9954, "label": "blaine"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-03-21T17:58:14Z", "updated_at": "2022-03-25T21:19:06Z", "closed_at": "2022-03-25T21:17:13Z", "author_association": 
"NONE", "pull_request": null, "body": "Any JSON that is newline-delimited and has whitespace (newlines) between the start of a JSON object and an attribute fails due to a parse error.\r\n\r\ne.g. given the valid JSONL:\r\n\r\n```{\r\n \"attribute\": \"value\"\r\n}\r\n{\r\n \"attribute\": \"value2\"\r\n}\r\n```\r\n\r\nI would expect that `sqlite-utils insert --nl my.db mytable file.jsonl` would properly import the data into `mytable`. However, the following error is thrown instead:\r\n\r\n`json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)`\r\n\r\nIt makes sense that since the file is intended to be newline separated, the thing being parsed is \"{\" (which obviously fails), however the default newline-separated output of `jq` isn't compact. Using `jq -c` avoids this problem, but the fix is unintuitive and undocumented.\r\n\r\nProposed solutions:\r\n1. Default to a \"loose\" newline-separated parse; this could be implemented internally as [the equivalent of] a `jq -c` filter ahead of the insert step.\r\n2. Catch the JSONDecodeError (or pre-empt it in the case of a record === \"{\\n\") and give the user a \"it looks like your json isn't _actually_ newline-delimited; try running it through `jq -c` instead\" error message.\r\n\r\nIt might just have been too early in the morning when I was playing with this, but running pipes of data through sqlite-utils without the 'knack' of it led to some false starts.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/417/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1181236173, "node_id": "I_kwDOCGYnMM5GaDvN", "number": 422, "title": "Reconsider not running convert functions against null values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-25T20:22:40Z", "updated_at": "2022-03-25T20:23:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just got caught out by the fact that `None` values are not processed by the `.convert()` mechanism https://github.com/simonw/sqlite-utils/blob/0b7b80bd40fe86e4d66a04c9f607d94991c45c0b/sqlite_utils/db.py#L2504-L2510\r\n\r\nI had run this code while working on #420 and I wasn't sure why it didn't work:\r\n\r\n```\r\n$ sqlite-utils add-column content.db articles score float\r\n$ sqlite-utils convert content.db articles score '\r\nimport random\r\nrandom.seed(10)\r\n\r\ndef convert(value):\r\n global random\r\n return random.random()\r\n'\r\n```\r\nThe reason it didn't work is that the newly added `score` column was full of `null` values.\r\n\r\nI fixed it by doing this instead:\r\n\r\n $ sqlite-utils add-column content.db articles score float --not-null-default 1.0\r\n\r\nBut this indicates to me that the design of `convert()` here may be incorrect.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/422/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, 
"state_reason": null} {"id": 1181037277, "node_id": "I_kwDOBm6k_c5GZTLd", "number": 1686, "title": "heroku bails if app name specifed in datasette publish is the same as existing app", "user": {"value": 2115933, "label": "tlongers"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-25T17:10:34Z", "updated_at": "2022-03-25T17:10:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Seem that `heroku` does not accept an app overwrite triggered by specifying the app name using `datasette publish`, as below:\r\n\r\n```\r\ndatasette publish heroku some.db --name \"jazzy-name\" \r\n```\r\n\r\nThe resulting error has the below traceback:\r\n\r\n```\r\nCreating jazzy-name... !\r\n \u25b8 Name jazzy-name is already taken\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/bin/datasette\", line 33, in \r\n sys.exit(load_entry_point('datasette==0.60.1', 'console_scripts', 'datasette')())\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/opt/homebrew/Cellar/datasette/0.60.1/libexec/lib/python3.10/site-packages/datasette/publish/heroku.py\", line 127, in heroku\r\n create_output = check_output(cmd).decode(\"utf8\")\r\n File \"/opt/homebrew/Cellar/python@3.10/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py\", line 420, in check_output\r\n return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,\r\n File \"/opt/homebrew/Cellar/python@3.10/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py\", line 524, in run\r\n raise CalledProcessError(retcode, process.args,\r\nsubprocess.CalledProcessError: Command '['heroku', 'apps:create', 'jazzy-name', '--json']' returned non-zero exit status 1.\r\n```\r\n\r\nIt's a solid failsafe, but does `datasette publish` have a way to force an overwrite?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1686/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1178456794, "node_id": "I_kwDOCGYnMM5GPdLa", "number": 418, "title": "Add generated files to .gitignore", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-23T17:48:12Z", "updated_at": 
"2022-03-24T21:01:44Z", "closed_at": "2022-03-24T21:01:44Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I end up with these in my local directory:\r\n\r\n\t.hypothesis/\r\n\tPipfile\r\n\tPipfile.lock\r\n\tpyproject.toml\r\n\r\nMight as well gitignore them.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/418/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1178484369, "node_id": "PR_kwDOCGYnMM405rPe", "number": 419, "title": "Ignore common generated files", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-23T18:06:22Z", "updated_at": "2022-03-24T21:01:44Z", "closed_at": "2022-03-24T21:01:44Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/419", "body": "Closes #418 \r\n\r\nThis adds four files to `.gitignore`:\r\n\r\n\t.hypothesis/\r\n\tPipfile\r\n\tPipfile.lock\r\n\tpyproject.toml\r\n\r\nThose are all generated in the course of development and testing.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/419/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1179998071, "node_id": "I_kwDOBm6k_c5GVVd3", "number": 1684, "title": "Mechanism for disabling faceting on large tables only", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-24T20:06:11Z", "updated_at": "2022-03-24T20:13:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Forest turned off faceting on https://labordata.bunkum.us/ because it was causing performance problems on some of the huge tables - but it would be nice if it could still be an option on smaller tables such as https://labordata.bunkum.us/voluntary_recognitions-4421085/voluntary_recognitions\r\n\r\nOne option: a new setting that automatically disables faceting (and facet suggestion) for tables that have either more than X rows or that are so big that the count could not be completed within the time limit.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1684/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1179928510, "node_id": "I_kwDOBm6k_c5GVEe-", "number": 1683, "title": "allow_facet: False should be respected by column cog menu", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-24T19:05:06Z", "updated_at": "2022-03-24T19:16:36Z", "closed_at": "2022-03-24T19:16:36Z", "author_association": "OWNER", "pull_request": null, "body": "The column cog menu currently shows 
\"Facet by this\" even if faceting is disabled for the Datasette instance.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1683/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1089529555, "node_id": "I_kwDOBm6k_c5A8ObT", "number": 1581, "title": "when hashed urls are turned on, the _memory db has improperly long-lived cache expiry", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-28T00:05:48Z", "updated_at": "2022-03-24T04:08:18Z", "closed_at": "2022-03-24T04:08:18Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "if hashed_urls are on, then a -000 suffix is added to the `_memory` database, and the cache settings are set just as if it was a normal hashed database.\r\n\r\nin particular, this header is set:\r\n\r\n`cache-control: max-age=31536000`\r\n\r\nthis is not appropriate because the `_memory-000` database isn't really hashed based on the contents of the databases (see #1561).\r\n\r\nEither the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1581/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1090055810, "node_id": "PR_kwDOBm6k_c4wWDxH", "number": 1582, "title": "don't set far expiry if hash is '000'", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-28T18:16:13Z", "updated_at": "2022-03-24T04:07:58Z", "closed_at": "2022-03-24T04:07:58Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1582", "body": "This will close #1581.\r\n\r\nI couldn't find any unit tests related to the testing hashed urls, and I know that you want to break that code out of the core application (#1561), so I'm not quite sure what you would like me to for testing.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1582/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1173828092, "node_id": "PR_kwDOBm6k_c40q1eP", "number": 1665, "title": "Pin setup-gcloud to v0 instead of master", "user": {"value": 408570, "label": "sethvargo"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-18T17:17:22Z", "updated_at": "2022-03-23T19:31:10Z", "closed_at": "2022-03-23T17:55:39Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/1665", "body": "setup-gcloud will be updating the branch name from master to main in\r\na future release. 
Even though GitHub will establish redirects, this\r\nwill break any GitHub Actions workflows that pin to master. This PR\r\nupdates your GitHub Actions workflows to pin to v0, which is the\r\nrecommended best practice.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1665/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1178521513, "node_id": "I_kwDOBm6k_c5GPs-p", "number": 1682, "title": "SQL queries against databases with different routes are broken", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-23T18:42:57Z", "updated_at": "2022-03-23T18:48:16Z", "closed_at": "2022-03-23T18:48:16Z", "author_association": "OWNER", "pull_request": null, "body": "500 error on https://datasette-hashed-urls-preview.vercel.app/fixtures-09f8f95?sql=select+*+from+facetable\r\n\r\nHere's the trace:\r\n```\r\n File \"/Users/simon/.local/share/virtualenvs/datasette-hashed-urls-ssI2fO50/lib/python3.10/site-packages/datasette/views/database.py\", line 54, in data\r\n return await QueryView(self.ds).data(\r\n File \"/Users/simon/.local/share/virtualenvs/datasette-hashed-urls-ssI2fO50/lib/python3.10/site-packages/datasette/views/database.py\", line 232, in data\r\n self.ds.get_database(database), sql\r\n File \"/Users/simon/.local/share/virtualenvs/datasette-hashed-urls-ssI2fO50/lib/python3.10/site-packages/datasette/app.py\", line 401, in get_database\r\n return self.databases[name]\r\nKeyError: 'fixtures-aa7318b'\r\n```\r\nIt looks like this is a Datasette bug, which is frustrating because I just shipped Datasette 0.61 five minutes ago!\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-hashed-urls/issues/13#issuecomment-1076693667_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1682/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1174423568, "node_id": "I_kwDOBm6k_c5GAEgQ", "number": 1670, "title": "Ship Datasette 0.61", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2022-03-20T02:47:54Z", "updated_at": "2022-03-23T18:32:32Z", "closed_at": "2022-03-23T18:32:03Z", "author_association": "OWNER", "pull_request": null, "body": "Let the alpha bake for a while, since #1668 is a big last-minute change.\r\n\r\nAfter shipping, release a new `datasette-hashed-urls` that depends on it, also this:\r\n\r\n- https://github.com/simonw/datasette-hashed-urls/issues/11", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1670/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1177101697, "node_id": 
"I_kwDOBm6k_c5GKSWB", "number": 1681, "title": "Potential bug in numeric handling where_clause for filters", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-22T17:43:50Z", "updated_at": "2022-03-22T17:49:09Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Note that Datasette does already have special logic to convert parameters to integers for numeric comparisons like `>`:\r\n>\r\n> https://github.com/simonw/datasette/blob/c4c9dbd0386e46d2bf199f0ed34e4895c98cb78c/datasette/filters.py#L203-L212\r\n> \r\n> Though... it looks like there's a bug in that? It doesn't account for `float` values - `\"3.5\".isdigit()` return `False` - probably for the best, because `int(3.5)` would break that value anyway.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1671#issuecomment-1075432283_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1681/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174655187, "node_id": "I_kwDOBm6k_c5GA9DT", "number": 1671, "title": "Filters fail to work correctly against calculated numeric columns returned by SQL views because type affinity rules do not apply", "user": {"value": 9308268, "label": "rayvoelker"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-03-20T19:17:24Z", "updated_at": "2022-03-22T17:43:12Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I found a strange behavior, and I'm not sure if it's related to views and boolean values perhaps, or if there's something else weird going on here, but I'll provide an example that may help show what I'm seeing happen.\r\n\r\n```bash\r\n#!/bin/bash\r\n\r\necho \"\\\"id\\\",\\\"expiration_date\\\"\r\n0,2018-01-04\r\n1,2019-01-05\r\n2,2020-01-06\r\n3,2021-01-07\r\n4,2022-01-08\r\n5,2023-01-09\r\n6,2024-01-10\r\n7,2025-01-11\r\n8,2026-01-12\r\n9,2027-01-13\r\n\" > test.csv\r\ncsvs-to-sqlite test.csv test.db\r\nsqlite-utils create-view --replace test.db test_view \"select id, expiration_date, case when julianday('NOW') >= julianday(expiration_date) then 1 else 0 end as has_expired FROM test\"\r\n```\r\n\r\n```bash\r\ndatasette test.db\r\n```\r\n![image](https://user-images.githubusercontent.com/9308268/159178745-9c6152f7-eac6-4bf9-bef5-a2d63d3ee13f.png)\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/159178824-c8952137-270c-42a4-ad1c-f6ad2c51e499.png)\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/159178877-23e00b36-443a-43ef-83e5-e0bdddd3fdcd.png)\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/159178918-65922cc7-2514-4735-a72d-4904b99976d4.png)\r\n\r\nThanks again and let me know if you want me to provide anything else!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1671/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 
340396247, "node_id": "MDU6SXNzdWUzNDAzOTYyNDc=", "number": 339, "title": "Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way", "user": {"value": 12617395, "label": "bsilverm"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-07-11T20:38:06Z", "updated_at": "2022-03-21T22:22:40Z", "closed_at": "2022-03-21T22:22:34Z", "author_association": "NONE", "pull_request": null, "body": "Is it possible to configure the sql_time_limit_ms beyond 60 seconds? It seems queries are still timing out at 60 seconds when sql_time_limit_ms is set to 180000. We have a very large data set and often encounter timeouts when testing new queries from the datasette UI. We are optimizing our database as much as we can, but still may require more than 60 seconds for complex queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/339/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 324835838, "node_id": "MDU6SXNzdWUzMjQ4MzU4Mzg=", "number": 276, "title": "Handle spatialite geometry columns better", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 21, "created_at": "2018-05-21T08:46:55Z", "updated_at": "2022-03-21T22:22:20Z", "closed_at": "2022-03-21T22:22:20Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'd like to see spatialite geometry columns rendered more sensibly - at the moment they come through as well-known-binary unless you use custom SQL, and WKB isn't of much use to anyone on the web.\r\n\r\nIn HTML: they should be shown either as simple lat/long (if it's just a point, for example), or as a sensible placeholder if they're more complex geometries.\r\n\r\nIn JSON: they should be GeoJSON geometries, (which means they can be automatically fed into a leaflet map with no further messing around).\r\n\r\nIn CSV: they should be WKT.\r\n\r\nI briefly wondered if this should go into a plugin, but I suspect it needs hooking in at a deeper level than the plugin architecture will support any time soon.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/276/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 953218043, "node_id": "MDU6SXNzdWU5NTMyMTgwNDM=", "number": 1403, "title": "Labels explaining what hidden tables are for", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-07-26T19:29:22Z", "updated_at": "2022-03-21T22:20:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "A reasonable question: \"What are those hidden tables for?\"\r\n\r\nThis could be answered by adding a small piece of explanatory text to each table - based on if it's related to FTS or to SpatiaLite or configured to be hidden for some other reason.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, 
"performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1403/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1175854982, "node_id": "I_kwDOBm6k_c5GFh-G", "number": 1679, "title": "Research: how much overhead does the n=1 time limit have?", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 11, "created_at": "2022-03-21T19:27:46Z", "updated_at": "2022-03-21T21:55:57Z", "closed_at": "2022-03-21T21:55:56Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/1a7750eb29fd15dd2eea3b9f6e33028ce441b143/datasette/utils/__init__.py#L181-L200\r\n\r\n```python\r\n@contextmanager\r\ndef sqlite_timelimit(conn, ms):\r\n deadline = time.perf_counter() + (ms / 1000)\r\n # n is the number of SQLite virtual machine instructions that will be\r\n # executed between each check. It's hard to know what to pick here.\r\n # After some experimentation, I've decided to go with 1000 by default and\r\n # 1 for time limits that are less than 50ms\r\n n = 1000\r\n if ms < 50:\r\n n = 1\r\n\r\n def handler():\r\n if time.perf_counter() >= deadline:\r\n return 1\r\n\r\n conn.set_progress_handler(handler, n)\r\n try:\r\n yield\r\n finally:\r\n conn.set_progress_handler(None, n)\r\n```\r\nHow often do I set a time limit of 50 or less? How much slower does it go thanks to this code?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1679/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1175894898, "node_id": "I_kwDOBm6k_c5GFrty", "number": 1680, "title": "Consider simplifying permissions for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2022-03-21T20:17:29Z", "updated_at": "2022-03-21T20:17:29Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Permission checks right now can express one of three opinions:\r\n\r\n- `False` means \"so not grant this permisson\"\r\n- `True` means \"grant this permission\"\r\n- `None` means \"I have no opinion\"\r\n\r\nBut... there's also a concept of a \"default\" for a given permission check, which might be `False` or `True`.\r\n\r\nI worry this is too complicated. Could this be simplified before 1.0? 
In particular the default concept.\r\n\r\nSee also:\r\n- #1676 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1680/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1170144879, "node_id": "I_kwDOBm6k_c5Fvv5v", "number": 1660, "title": "Refactor and simplify Datasette routing and views", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 8, "created_at": "2022-03-15T19:56:56Z", "updated_at": "2022-03-21T19:19:12Z", "closed_at": "2022-03-21T19:19:01Z", "author_association": "OWNER", "pull_request": null, "body": "While working on:\n- https://github.com/simonw/datasette/issues/1657\n- https://github.com/simonw/datasette/issues/1439\n\nIt became very clear that the least maintainable part of Datasette at the moment is the way routing to the database, table and row views work - in particular the subclassing mechanism with BaseView and DataView, but also the complex variety of ways in which the URL routes capture different named regular expression groups.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1660/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1175715988, "node_id": "I_kwDOBm6k_c5GFACU", "number": 1678, "title": "Make `check_visibility()` a documented API", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2022-03-21T17:30:34Z", "updated_at": "2022-03-21T19:04:03Z", "closed_at": "2022-03-21T19:01:46Z", "author_association": "OWNER", "pull_request": null, "body": "Spotted this while working on:\r\n- #1677\r\n\r\nhttps://github.com/simonw/datasette/blob/e627510b760198ccedba9e5af47a771e847785c9/datasette/utils/__init__.py#L1005-L1021", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1678/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1175694248, "node_id": "I_kwDOBm6k_c5GE6uo", "number": 1677, "title": "Remove `check_permission()` from `BaseView`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2022-03-21T17:18:18Z", "updated_at": "2022-03-21T18:45:04Z", "closed_at": "2022-03-21T18:45:03Z", "author_association": "OWNER", "pull_request": null, "body": "Follow-on from:\r\n- #1675\r\n\r\nRefs:\r\n- #1660", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/1677/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1175648453, "node_id": "I_kwDOBm6k_c5GEvjF", "number": 1675, "title": "Extract out `check_permissions()` from `BaseView", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-03-21T16:39:46Z", "updated_at": "2022-03-21T17:14:31Z", "closed_at": "2022-03-21T17:13:21Z", "author_association": "OWNER", "pull_request": null, "body": "> I'm going to refactor this stuff out and document it so it can be easily used by plugins:\r\n\r\nhttps://github.com/simonw/datasette/blob/4a4164b81191dec35e423486a208b05a9edc65e4/datasette/views/base.py#L69-L103\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1660#issuecomment-1074136176_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1675/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 780153562, "node_id": "MDU6SXNzdWU3ODAxNTM1NjI=", "number": 1177, "title": "Ability to stream all rows as newline-delimited JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2021-01-06T07:10:48Z", "updated_at": "2022-03-21T15:08:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Yet another use-case for this: I want to be able to stream newline-delimited JSON in order to better import into Pandas:\r\n> \r\n> pandas.read_json(\"https://latest.datasette.io/fixtures/compound_three_primary_keys.json?_shape=array&_nl=on\", lines=True)\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1101#issuecomment-755128038_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1177/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1091819089, "node_id": "I_kwDOCGYnMM5BE9ZR", "number": 360, "title": "MemoryError", "user": {"value": 559453, "label": "nzaar9"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-01T13:39:17Z", "updated_at": "2022-03-21T04:22:46Z", "closed_at": "2022-03-21T04:22:46Z", "author_association": "NONE", "pull_request": null, "body": "HI, when dealing with large json file (~170GB) i got the following error \r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/sqlite-utils\", line 8, in \r\n sys.exit(cli())\r\n File \"/usr/lib/python3/dist-packages/click/core.py\", line 1126, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/click/core.py\", line 1051, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/lib/python3/dist-packages/click/core.py\", line 1657, in invoke\r\n return 
_process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/lib/python3/dist-packages/click/core.py\", line 1393, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/lib/python3/dist-packages/click/core.py\", line 752, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/sqlite_utils/cli.py\", line 1300, in memory\r\n rows, format_used = rows_from_file(csv_fp, format=format, encoding=encoding)\r\n File \"/usr/local/lib/python3.9/dist-packages/sqlite_utils/utils.py\", line 185, in rows_from_file\r\n return rows_from_file(buffered, format=Format.JSON)\r\n File \"/usr/local/lib/python3.9/dist-packages/sqlite_utils/utils.py\", line 156, in rows_from_file\r\n decoded = json.load(fp)\r\n File \"/usr/lib/python3.9/json/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\nMemoryError\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/360/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1171599874, "node_id": "I_kwDOCGYnMM5F1TIC", "number": 415, "title": "Convert with `--multi` and `--dry-run` flag does not work", "user": {"value": 3976183, "label": "dotcs"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-16T21:59:46Z", "updated_at": "2022-03-21T04:18:24Z", "closed_at": "2022-03-21T04:18:24Z", "author_association": "NONE", "pull_request": null, "body": "It's not possible to combine `--multi` and `--dry-run` flag in the `convert` command.\r\n\r\nLet's first create a simple database from JSON string\r\n\r\n```console\r\n$ echo '[{\"foo\": \"abc\"}]' | sqlite-utils insert demo.db demo -\r\n$ sqlite-utils query demo.db \"SELECT * FROM demo\" \r\n[{\"foo\": \"abc\"}]\r\n```\r\n\r\nand then try to convert the \"foo\" column with a static value \"bar\" (see docs [Converting a column into multiple columns](https://sqlite-utils.datasette.io/en/stable/cli.html#converting-a-column-into-multiple-columns))\r\n\r\n```console\r\n$ sqlite-utils convert demo.db demo foo '{\"foo\": \"bar\"}' --multi --dry-run\r\nTraceback (most recent call last):\r\n File \"/home/dotcs/anaconda3/envs/tools/bin/sqlite-utils\", line 8, in \r\n sys.exit(cli())\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/sqlite_utils/cli.py\", line 2686, in convert\r\n for row in db.conn.execute(sql, where_args).fetchall():\r\nsqlite3.OperationalError: user-defined function raised exception\r\n```\r\n\r\nBut without 
the `--dry-run` flag it does work as expected:\r\n\r\n```console\r\n$ sqlite-utils convert demo.db demo foo '{\"foo\": \"bar\"}' --multi\r\n$ sqlite-utils query demo.db \"SELECT * FROM demo\" \r\n[{\"foo\": \"bar\"}]\r\n```\r\n\r\n```console\r\n$ sqlite-utils --version\r\nsqlite-utils, version 3.25.1\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/415/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1174717287, "node_id": "I_kwDOBm6k_c5GBMNn", "number": 1674, "title": "Tweak design of /.json", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2022-03-20T22:58:01Z", "updated_at": "2022-03-20T22:58:40Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://latest.datasette.io/.json\r\n\r\nCurrently:\r\n```json\r\n{\r\n \"_memory\": {\r\n \"name\": \"_memory\",\r\n \"hash\": null,\r\n \"color\": \"a6c7b9\",\r\n \"path\": \"/_memory\",\r\n \"tables_and_views_truncated\": [],\r\n \"tables_and_views_more\": false,\r\n \"tables_count\": 0,\r\n \"table_rows_sum\": 0,\r\n \"show_table_row_counts\": false,\r\n \"hidden_table_rows_sum\": 0,\r\n \"hidden_tables_count\": 0,\r\n \"views_count\": 0,\r\n \"private\": false\r\n },\r\n \"fixtures\": {\r\n \"name\": \"fixtures\",\r\n \"hash\": \"645005884646eb941c89997fbd1c0dd6be517cb1b493df9816ae497c0c5afbaa\",\r\n \"color\": \"645005\",\r\n \"path\": \"/fixtures\",\r\n \"tables_and_views_truncated\": [\r\n {\r\n \"name\": \"compound_three_primary_keys\",\r\n \"columns\": [\r\n \"pk1\",\r\n \"pk2\",\r\n \"pk3\",\r\n \"content\"\r\n ],\r\n \"primary_keys\": [\r\n \"pk1\",\r\n \"pk2\",\r\n \"pk3\"\r\n ],\r\n \"count\": 1001,\r\n \"hidden\": false,\r\n \"fts_table\": null,\r\n \"num_relationships_for_sorting\": 0,\r\n \"private\": false\r\n },\r\n```\r\nAs-of this issue the `\"path\"` key is confusing, it doesn't match what https://latest.datasette.io/-/databases returns:\r\n\r\n- #1668", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1674/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 910088936, "node_id": "MDU6SXNzdWU5MTAwODg5MzY=", "number": 1355, "title": "datasette --get should efficiently handle streaming CSV", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-06-03T04:40:40Z", "updated_at": "2022-03-20T22:38:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It would be great if you could use `datasette --get` to run queries that return streaming CSV data without running out of RAM.\r\n\r\nCurrent implementation looks like it loads the entire result into memory first: https://github.com/simonw/datasette/blob/f78ebdc04537a6102316d6dbbf6c887565806078/datasette/cli.py#L546-L552", "repo": {"value": 107914493, "label": "datasette"}, "type": 
"issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1355/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174708375, "node_id": "I_kwDOBm6k_c5GBKCX", "number": 1673, "title": "Streaming CSV spends a lot of time in `table_column_details`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-20T22:25:28Z", "updated_at": "2022-03-20T22:34:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "At least I think it does. I tried running `py-spy top -p $PID` against a Datasette process that was trying to do:\r\n\r\n datasette covid.db --get '/covid/ny_times_us_counties.csv?_size=10&_stream=on'\r\n\r\nWhile investigating:\r\n- #1355\r\n\r\nAnd spotted this:\r\n```\r\ndatasette covid.db --get /covid/ny_times_us_counties.csv?_size=10&_stream=on' (python v3.10.2)\r\nTotal Samples 5800\r\nGIL: 71.00%, Active: 98.00%, Threads: 4\r\n\r\n %Own %Total OwnTime TotalTime Function (filename:line) \r\n 8.00% 8.00% 4.32s 4.38s sql_operation_in_thread (datasette/database.py:212)\r\n 5.00% 5.00% 3.77s 3.93s table_column_details (datasette/utils/__init__.py:614)\r\n 6.00% 6.00% 3.72s 3.72s _worker (concurrent/futures/thread.py:81)\r\n 7.00% 7.00% 2.98s 2.98s _read_from_self (asyncio/selector_events.py:120)\r\n 5.00% 6.00% 2.35s 2.49s detect_fts (datasette/utils/__init__.py:571)\r\n 4.00% 4.00% 1.34s 1.34s _write_to_self (asyncio/selector_events.py:140)\r\n```\r\nRelevant code: https://github.com/simonw/datasette/blob/798f075ef9b98819fdb564f9f79c78975a0f71e8/datasette/utils/__init__.py#L609-L625\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1673/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174697144, "node_id": "I_kwDOBm6k_c5GBHS4", "number": 1672, "title": "Refactor CSV handling code out of DataView", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2022-03-20T21:47:00Z", "updated_at": "2022-03-20T21:52:39Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> I think the way to get rid of most of the remaining complexity in `DataView` is to refactor how CSV stuff works - pulling it in line with other export factors and extracting the streaming mechanism. 
Opening a fresh issue for that.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1660#issuecomment-1073355032_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1672/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 688351054, "node_id": "MDU6SXNzdWU2ODgzNTEwNTQ=", "number": 140, "title": "Idea: insert-files mechanism for adding extra columns with fixed values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-08-28T20:57:36Z", "updated_at": "2022-03-20T19:45:45Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Say for example you want to populate a `file_type` column with the value `gif`. That could work like this:\r\n\r\n```\r\nsqlite-utils insert-files gifs.db images *.gif \\\r\n -c path -c md5 -c last_modified:mtime \\\r\n -c file_type:text:gif --pk=path\r\n```\r\nSo a column defined as a `text` column with a value that follows a second colon.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/140/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174306154, "node_id": "I_kwDOBm6k_c5F_n1q", "number": 1668, "title": "Introduce concept of a database `route`, separate from its name", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 20, "created_at": "2022-03-19T16:48:28Z", "updated_at": "2022-03-20T16:43:16Z", "closed_at": "2022-03-20T16:43:16Z", "author_association": "OWNER", "pull_request": null, "body": "Some issues came up in the new `datasette-hashed-urls` plugin relating to the way it renames databases on startup to achieve unique URLs that depend on the database SHA-256 content:\r\n\r\n- https://github.com/simonw/datasette-hashed-urls/issues/10\r\n- https://github.com/simonw/datasette-hashed-urls/issues/9\r\n- https://github.com/simonw/datasette-hashed-urls/issues/8\r\n\r\nAll three of these could be addressed by making the \"path\" concept for a database (the `/foo` bit where it is served) work independently of the database's name, which would be used for default display and also as the alias when configuring cross-database aliases.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1668/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1123393829, "node_id": "I_kwDODFE5qs5C9aEl", "number": 10, "title": "sqlite3.OperationalError: no such table: main.my_activity", "user": {"value": 69208826, "label": "glxblt14"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": 
"2022-02-03T17:59:29Z", "updated_at": "2022-03-20T02:38:07Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hello,\r\nWhen i run the command `google-takeout-to-sqlite my-activity db.db takeout-20220203T174446Z-001.zip`, i get this error :\r\n```\r\nTraceback (most recent call last):\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\julie\\AppData\\Local\\Programs\\Python\\Python39-32\\Scripts\\google-takeout-to-sqlite.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\click\\core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\click\\core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\click\\core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\click\\core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\click\\core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\google_takeout_to_sqlite\\cli.py\", line 31, in my_activity\r\n utils.save_my_activity(db, zf)\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\google_takeout_to_sqlite\\utils.py\", line 19, in save_my_activity\r\n db[\"my_activity\"].create_index([\"time\"])\r\n File \"c:\\users\\julie\\appdata\\local\\programs\\python\\python39-32\\lib\\site-packages\\sqlite_utils\\db.py\", line 629, in create_index\r\n self.db.conn.execute(sql)\r\nsqlite3.OperationalError: no such table: main.my_activity\r\n```\r\nThank you for your help\r\nSorry for my bad English\r\nEDIT: i used the json format", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/10/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1174404647, "node_id": "I_kwDOBm6k_c5F__4n", "number": 1669, "title": "Release 0.61 alpha", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-20T00:35:35Z", "updated_at": "2022-03-20T01:24:36Z", "closed_at": "2022-03-20T01:24:36Z", "author_association": "OWNER", "pull_request": null, "body": "> I'm going to release this as a 0.61 alpha so I can more easily depend on it from `datasette-hashed-urls`.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1668#issuecomment-1073136896_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, 
"performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1669/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1174302994, "node_id": "I_kwDOBm6k_c5F_nES", "number": 1667, "title": "Make route matched pattern groups more consistent", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-03-19T16:32:35Z", "updated_at": "2022-03-19T20:37:42Z", "closed_at": "2022-03-19T20:37:41Z", "author_association": "OWNER", "pull_request": null, "body": "> ... highlights how inconsistent the way the capturing works is. Especially `as_format` which can be `None` or `\"\"` or `.json` or `json` or not used at all in the case of `TableView`.\r\n\r\nhttps://github.com/simonw/datasette/blob/764738dfcb16cd98b0987d443f59d5baa9d3c332/tests/test_routes.py#L12-L36\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1666#issuecomment-1073039670_\r\n\r\nPart of:\r\n- #1660", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1667/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1174162781, "node_id": "I_kwDOBm6k_c5F_E1d", "number": 1666, "title": "Refactor URL routing to enable testing", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2022-03-19T03:52:29Z", "updated_at": "2022-03-19T16:32:03Z", "closed_at": "2022-03-19T16:32:03Z", "author_association": "OWNER", "pull_request": null, "body": "I ran into some bugs earlier with URL routing - having more robust testing around this (especially since they are defined using regular expressions) would be really useful.\r\n\r\n- A utility function that resolves a path against a list of reflexes and returns the match\r\n- Make the routes and regular expressions available from a private Datasette method\r\n- Add tests that exercise them\r\n\r\nRelated:\r\n- #1660", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1666/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 648435885, "node_id": "MDU6SXNzdWU2NDg0MzU4ODU=", "number": 878, "title": "New pattern for views that return either JSON or HTML, available for plugins", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 26, "created_at": "2020-06-30T19:26:13Z", "updated_at": "2022-03-19T16:19:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Can be part of #870 - refactoring existing views to use `register_routes()`.\r\n\r\n> I'm going to put the new `check_permissions()` method on 
`BaseView` as well. If I want that method to be available to plugins I can do so by turning that `BaseView` class into a documented API that plugins are encouraged to use themselves.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/832#issuecomment-651995453_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/878/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 531755959, "node_id": "MDU6SXNzdWU1MzE3NTU5NTk=", "number": 647, "title": "Move hashed URL mode out to a plugin", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 9, "created_at": "2019-12-03T06:29:03Z", "updated_at": "2022-03-19T11:56:05Z", "closed_at": "2022-03-15T23:13:06Z", "author_association": "OWNER", "pull_request": null, "body": "They used to be the default until #418. Since making them optional I haven't felt the need to use them even once.\r\n\r\nThat suggests to me that they should be removed. I think their effect could be entirely handled by an ASGI wrapping plugin.\r\n\r\nhttps://datasette.readthedocs.io/en/0.32/performance.html#hashed-url-mode", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/647/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 810397025, "node_id": "MDU6SXNzdWU4MTAzOTcwMjU=", "number": 1228, "title": "500 error caused by faceting if a column called `n` exists", "user": {"value": 7107523, "label": "Kabouik"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-02-17T17:41:20Z", "updated_at": "2022-03-19T06:44:40Z", "closed_at": "2022-03-19T01:38:04Z", "author_association": "NONE", "pull_request": null, "body": "I recently discovered `datasette` thanks to your great talk at FOSDEM and would like to use it for some projects. However, when trying to use it on databases created from some csv or tsv files, I am sometimes getting this issue when going to http://127.0.0.1:8001/databasetest/databasetest and I don't exactly understand what it refers to.\r\n\r\nSo far, I couldn't find anything relevant when reviewing the raw text files that could explain this issue, nor could I find something obvious between the files that generate this issue and those that don't. 
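One plausible reproduction of this class of failure - an assumption based on the traceback that follows, not a confirmed diagnosis - is a table column literally named `n` clashing with a `count(*) as n` alias in the facet-suggestion SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("create table t (n text)")
conn.executemany("insert into t values (?)", [("x",), ("x",)])
# Both result columns are named "n"; row["n"] resolves to the first one,
# the text column, so comparing it against an integer raises a TypeError.
row = conn.execute("select n, count(*) as n from t group by 1").fetchone()
try:
    row["n"] > 1
except TypeError as e:
    print(e)  # '>' not supported between instances of 'str' and 'int'
```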
Does the error ring a bell and, if so, could you please point me to the right direction?\r\n\r\n```\r\n$ datasette databasetest.db \r\nINFO: Started server process [1408482]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nINFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)\r\nINFO: 127.0.0.1:56394 - \"GET / HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56394 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56396 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56398 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nTraceback (most recent call last):\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/app.py\", line 1099, in route_path\r\n response = await view(request, send)\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 147, in view\r\n request, **request.scope[\"url_route\"][\"kwargs\"]\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 121, in dispatch_request\r\n return await handler(request, *args, **kwargs)\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 260, in get\r\n request, database, hash, correct_hash_provided, **kwargs\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 434, in view_get\r\n request, database, hash, **kwargs\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/table.py\", line 782, in data\r\n suggested_facets.extend(await facet.suggest())\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/facets.py\", line 168, in suggest\r\n and any(r[\"n\"] > 1 for r in distinct_values)\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/facets.py\", line 168, in \r\n and any(r[\"n\"] > 1 for r in distinct_values)\r\nTypeError: '>' not supported between instances of 'str' and 'int'\r\nINFO: 127.0.0.1:56402 - \"GET /databasetest/databasetest HTTP/1.1\" 500 Internal Server Error\r\nINFO: 127.0.0.1:56402 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET / HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56406 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /databasetest HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56406 - \"GET /-/static/codemirror-5.57.0.min.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56410 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56414 - \"GET /-/static/codemirror-5.57.0-sql.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56412 - \"GET /-/static/codemirror-5.57.0.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static/sql-formatter-2.3.3.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /databasetest?sql=select+*+from+databasetest HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56410 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56412 - \"GET 
/-/static/codemirror-5.57.0.min.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET /-/static/sql-formatter-2.3.3.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56406 - \"GET /-/static/codemirror-5.57.0.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56414 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static/codemirror-5.57.0-sql.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56410 - \"GET /databasetest.json?sql=select+*+from+databasetest&_shape=array&_shape=array HTTP/1.1\" 200 OK\r\n^CINFO: Shutting down\r\nINFO: Waiting for application shutdown.\r\nINFO: Application shutdown complete.\r\nINFO: Finished server process [1408482]\r\n```\r\n\r\nNote that there is no error if I go to http://127.0.0.1:8001/databasetest and then click on `Run SQL`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1228/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1082765654, "node_id": "I_kwDOBm6k_c5AibFW", "number": 1561, "title": "add hash id to \"_memory\" url if hashed url mode is turned on and crossdb is also turned on", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-17T00:45:12Z", "updated_at": "2022-03-19T04:45:40Z", "closed_at": "2022-03-19T04:45:40Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. 
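For illustration, a minimal sketch of the approach suggested next - hypothetical, not the plugin's actual implementation:

```python
import hashlib

def memory_hash(database_hashes):
    # Combine the content hashes of every attached database into one stable
    # digest, so the /_memory URL changes whenever any database changes.
    combined = hashlib.sha256()
    for h in sorted(database_hashes):  # sort so attachment order is irrelevant
        combined.update(h.encode("utf-8"))
    return combined.hexdigest()

print(memory_hash(["hash-of-db-one", "hash-of-db-two"])[:12])
```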
\r\n\r\nOne way that it could work is to have the _memory hash be a hash of all the individual databases.\r\n\r\nOtherwise, crossdb queries can get quite out of date if using aggressive caching.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1561/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1161584460, "node_id": "I_kwDOBm6k_c5FPF9M", "number": 1651, "title": "Get rid of the no-longer necessary ?_format=json hack for tables called x.json", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 8, "created_at": "2022-03-07T15:40:42Z", "updated_at": "2022-03-19T04:04:50Z", "closed_at": "2022-03-15T18:25:42Z", "author_association": "OWNER", "pull_request": null, "body": "Tidy up from:\r\n- #1439", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1651/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1169840669, "node_id": "I_kwDOBm6k_c5Fulod", "number": 1658, "title": "Revert main to version that passes tests", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 1, "created_at": "2022-03-15T15:37:02Z", "updated_at": "2022-03-19T04:04:50Z", "closed_at": "2022-03-15T15:42:58Z", "author_association": "OWNER", "pull_request": null, "body": "> I've made a real mess of this. 
I'm going to revert Datasette`main` back to the last commit that passed the tests and try this again in a branch.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1657#issuecomment-1068125636_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1658/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1170554975, "node_id": "I_kwDOBm6k_c5FxUBf", "number": 1663, "title": "Document the internals that were used in datasette-hashed-urls", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2022-03-16T05:17:08Z", "updated_at": "2022-03-19T04:04:50Z", "closed_at": "2022-03-17T21:32:38Z", "author_association": "OWNER", "pull_request": null, "body": "The https://github.com/simonw/datasette-hashed-urls used a couple of currently undocumented features:\r\n- `db.hash`\r\n- `Datasette(..., immutables=[...])`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1663/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1108235694, "node_id": "I_kwDOBm6k_c5CDlWu", "number": 1603, "title": "A proper favicon", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 19, "created_at": "2022-01-19T15:24:55Z", "updated_at": "2022-03-19T04:04:49Z", "closed_at": "2022-01-20T06:07:31Z", "author_association": "OWNER", "pull_request": null, "body": "Tips here: https://adamj.eu/tech/2022/01/18/how-to-add-a-favicon-to-your-django-site/ - I think a PNG served at `/favicon.ico` is the best option, since safari doesn't support SVG yet.\r\n\r\nRelevant code: https://github.com/simonw/datasette/blob/cb29119db9115b1f40de2fb45263ed77e3bfbb3e/datasette/app.py#L182-L183\r\n\r\nI can reuse the icon for https://datasette.io/desktop", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1603/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1114147905, "node_id": "I_kwDOBm6k_c5CaIxB", "number": 1612, "title": "Move canned queries closer to the SQL input area", "user": {"value": 639012, "label": "jsfenfen"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2022-01-25T17:06:39Z", "updated_at": "2022-03-19T04:04:49Z", "closed_at": "2022-01-25T18:34:21Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "*Original title: Consider placing example queries above the sql input?*\r\n\r\nHi! Have been enjoying deploying ad hoc datasettes for collaborators to pick over! 
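For context, the canned "example queries" discussed below are defined in metadata - e.g. a minimal `metadata.json` with hypothetical database and query names:

```json
{
    "databases": {
        "mydatabase": {
            "queries": {
                "count_by_type": {
                    "sql": "select type, count(*) as n from records group by type",
                    "title": "Count rows by type"
                }
            }
        }
    }
}
```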
\r\n\r\nI keep finding myself manually \"fixing\" the database.html template so that the \"example queries\" (canned queries) appear directly *over* the sql box? So they are sorta more a suggestion for collaborators who aren't inclined to write their own queries? \r\n\r\nMy sense is any time I go to the trouble of writing canned queries my users should see 'em? \r\n\r\n(( I have also considered a client-side reactive-ish option where selecting a query just places the raw SQL in the box and doesn't execute it, but this seems to end up being an inconvenience, rather than a teaching tool. )) \r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1612/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1122413719, "node_id": "I_kwDOBm6k_c5C5qyX", "number": 1621, "title": "Test against Python 3.11 dev version", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2022-02-02T21:38:57Z", "updated_at": "2022-03-19T04:04:49Z", "closed_at": "2022-02-02T21:58:54Z", "author_association": "OWNER", "pull_request": null, "body": "To avoid another surprise like we got with 3.10: https://simonwillison.net/2021/Oct/9/finding-and-reporting-a-bug/\r\n\r\nFrom a quick GitHub code search it looks like `3.11-dev` should work: https://cs.github.com/urllib3/urllib3/blob/7bec77e81aa0a194c98381053225813f5347c9d2/.github/workflows/ci.yml#L60", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1621/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1122416919, "node_id": "I_kwDOBm6k_c5C5rkX", "number": 1623, "title": "/-/patterns returns link: alternate JSON header to 404", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2022-02-02T21:42:49Z", "updated_at": "2022-03-19T04:04:49Z", "closed_at": "2022-02-02T21:48:56Z", "author_association": "OWNER", "pull_request": null, "body": "Bug from:\r\n- #1620\r\n\r\n```\r\n% curl -s -I 'https://latest.datasette.io/-/patterns' | grep link\r\nlink: https://latest.datasette.io/-/patterns.json; rel=\"alternate\"; type=\"application/json+datasette\"\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1623/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1126604194, "node_id": "I_kwDOBm6k_c5DJp2i", "number": 1632, "title": "datasette one.db one.db opens database twice, as one and one_2", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, 
"label": "Datasette 1.0"}, "comments": 6, "created_at": "2022-02-07T23:14:47Z", "updated_at": "2022-03-19T04:04:49Z", "closed_at": "2022-02-07T23:50:01Z", "author_association": "OWNER", "pull_request": null, "body": "> ```\r\n> % mkdir /tmp/data\r\n> % cp ~/Dropbox/Development/datasette/fixtures.db /tmp/data \r\n> % datasette /tmp/data/*.db /tmp/data/created.db --create -p 8852\r\n> ...\r\n> INFO: Uvicorn running on http://127.0.0.1:8852 (Press CTRL+C to quit)\r\n> ^CINFO: Shutting down\r\n> % datasette /tmp/data/*.db /tmp/data/created.db --create -p 8852\r\n> ...\r\n> INFO: 127.0.0.1:49533 - \"GET / HTTP/1.1\" 200 OK\r\n> ```\r\n> The first time I ran Datasette I got two databases - `fixtures` and `created`\r\n> \r\n> BUT... when I ran Datasette the second time it looked like this:\r\n> \r\n> \"image\"\r\n> \r\n> This is the same result you get if you run:\r\n> \r\n> datasette /tmp/data/fixtures.db /tmp/data/created.db /tmp/data/created.db\r\n> \r\n> This is caused by this Datasette issue:\r\n> - https://github.com/simonw/datasette/issues/509\r\n> \r\n> So... either I teach Datasette to de-duplicate multiple identical file paths passed to the command, or I can't use `/data/*.db` in the `Dockerfile` here and I need to go back to other solutions for the challenge described in this comment: https://github.com/simonw/datasette-publish-fly/pull/12#issuecomment-1031971831\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette-publish-fly/pull/12#issuecomment-1032029874_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1632/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1170497629, "node_id": "I_kwDOBm6k_c5FxGBd", "number": 1662, "title": "[feature request] Publish to fully static website", "user": {"value": 32609395, "label": "contrun"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-16T03:32:28Z", "updated_at": "2022-03-19T00:42:23Z", "closed_at": "2022-03-19T00:42:23Z", "author_association": "NONE", "pull_request": null, "body": "It seems currently all datasette publish requires a real backend server which is able to query the database and send results back to the frontend. There are a few projects to on-demand download a portion of data from the database from a sqlite lite database url, and present it directly to the user. These methods leverages web assembly under the hood. I think datasette is a perfect use case for this technology. 
Below are a few examples of querying sqlite database from frontend directly.\r\n\r\n* [Using sqlite3 as a notekeeping document graph with automatic reference indexing](https://epilys.github.io/bibliothecula/notekeeping.html)\r\n* [Hosting SQLite databases on Github Pages - (or any static file hoster) - phiresky's blog](https://phiresky.github.io/blog/2021/hosting-sqlite-databases-on-github-pages/)\r\n* [Static torrent website with peer-to-peer queries over BitTorrent on 2M records](https://boredcaveman.xyz/post/0x2_static-torrent-website-p2p-queries.html)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1662/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1170355774, "node_id": "I_kwDOBm6k_c5FwjY-", "number": 1661, "title": "Remove Hashed URL mode", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 10, "created_at": "2022-03-15T23:13:56Z", "updated_at": "2022-03-19T00:37:37Z", "closed_at": "2022-03-19T00:37:36Z", "author_association": "OWNER", "pull_request": null, "body": "It's now handled by a plugin instead:\r\n- #647\r\n- https://github.com/simonw/datasette-hashed-urls/issues/3\r\n\r\nhttps://github.com/simonw/datasette-hashed-urls\r\n\r\nSub-tasks:\r\n\r\n- [x] Remove hashed URL mode implementation\r\n- [x] Update documentation\r\n- [x] Ensure `--setting hash_urls 1` shows a useful message", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1661/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1173017980, "node_id": "PR_kwDOBm6k_c40oRq-", "number": 1664, "title": "Remove hashed URL mode", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-03-17T23:19:10Z", "updated_at": "2022-03-19T00:12:04Z", "closed_at": "2022-03-19T00:12:04Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1664", "body": "Refs #1661.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1664/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1065429936, "node_id": "I_kwDOBm6k_c4_gSuw", "number": 1532, "title": "Use datasette-table Web Component to guide the design of the JSON API for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2021-11-28T20:37:18Z", "updated_at": "2022-03-16T20:13:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I realized that one of the reasons I'm having trouble committing to nailing down the 
JSON API for 1.0 is that I don't use it much myself - I use the `?_shape=array` one quite often, but I don't have any projects that are using the default, more fully-featured API.\r\n\r\nAs an experiment I built a Web Component for embedding Datasette tables on pages - https://github.com/simonw/datasette-table - and I think it's actually going to be a really useful tool for helping me dog food the v1.0 API design.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1532/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 973139047, "node_id": "MDU6SXNzdWU5NzMxMzkwNDc=", "number": 1439, "title": "Rethink how .ext formats (v.s. ?_format=) works before 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 48, "created_at": "2021-08-17T23:32:51Z", "updated_at": "2022-03-15T20:51:26Z", "closed_at": "2022-03-15T20:51:26Z", "author_association": "OWNER", "pull_request": null, "body": "Datasette currently has surprising special behaviour if a table name ends in `.csv` - which can happen when a tool like `csvs-to-sqlite` creates tables that match the filename that they were imported from.\r\n\r\nhttps://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv illustrates this behaviour: it links to `.csv` and `.json` that look like this:\r\n\r\n- https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=json\r\n- https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=csv&_size=max\r\n\r\nWhere normally Datasette would add the `.csv` or `.json` extension to the path component of the URL (as seen on other pages such as https://latest.datasette.io/fixtures/facet_cities) here the [path_with_format() function](https://github.com/simonw/datasette/blob/adb5b70de5cec3c3dd37184defe606a082c232cf/datasette/utils/__init__.py#L710) notices that there is already a `.` in the path and instead adds `?_format=csv` to the query string.\r\n\r\nThe problem with this mechanism is that it's pretty surprising. Anyone writing external code to Datasette who wants to get back the `.csv` or `.json` version given the URL to a table page will need to know about and implement this behaviour themselves. 
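A simplified sketch of the decision being described - illustrative, not the verbatim `path_with_format()` implementation:

```python
def path_with_format(path, fmt):
    # If the final path component already contains a dot (e.g. a table
    # named "mytable.csv"), fall back to a ?_format= query parameter...
    if "." in path.split("/")[-1]:
        return f"{path}?_format={fmt}"
    # ...otherwise append the extension to the path as usual.
    return f"{path}.{fmt}"

print(path_with_format("/fixtures/facet_cities", "json"))
# -> /fixtures/facet_cities.json
print(path_with_format("/fixtures/table%2Fwith%2Fslashes.csv", "json"))
# -> /fixtures/table%2Fwith%2Fslashes.csv?_format=json
```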
That's likely to cause all kinds of bugs in the future.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1439/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 531502365, "node_id": "MDU6SXNzdWU1MzE1MDIzNjU=", "number": 646, "title": "Make database level information from metadata.json available in the index.html template", "user": {"value": 18017473, "label": "lagolucas"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2019-12-02T19:55:10Z", "updated_at": "2022-03-15T20:50:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Did a search on the issues here and didn't find anything related to what I want.\r\n\r\nI want to have information that is on the database level of the JSON like title, source and source_url, and use it on the index page.\r\n\r\nI tried some small tweaks on the python and html files, but failed to get that result.\r\n\r\nIs there a way? Thanks!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/646/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 626593402, "node_id": "MDU6SXNzdWU2MjY1OTM0MDI=", "number": 780, "title": "Internals documentation for datasette.metadata() method", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2020-05-28T15:14:22Z", "updated_at": "2022-03-15T20:50:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/40885ef24e32d91502b6b8bbad1c7376f50f2830/datasette/app.py#L297-L328", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/780/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1054243511, "node_id": "I_kwDOBm6k_c4-1nq3", "number": 1509, "title": "Datasette 1.0 JSON API (and documentation)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-11-15T23:22:45Z", "updated_at": "2022-03-15T20:38:56Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The new JSON API in a stable, documented form.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1509/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, 
"state_reason": null} {"id": 646737558, "node_id": "MDU6SXNzdWU2NDY3Mzc1NTg=", "number": 870, "title": "Refactor default views to use register_routes", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-06-27T18:53:12Z", "updated_at": "2022-03-15T20:07:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It would be much cleaner if Datasette's default views were all registered using the new `register_routes()` plugin hook. Could dramatically reduce the code in `datasette/app.py`.\r\n\r\n> The ideal fix here would be to rework my `BaseView` subclass mechanism to work with `register_routes()` so that those views don't have any special privileges above plugin-provided views.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/864#issuecomment-648580556_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/870/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1058072543, "node_id": "I_kwDOBm6k_c4_EOff", "number": 1518, "title": "Complete refactor of TableView and table.html template", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 45, "created_at": "2021-11-19T02:55:16Z", "updated_at": "2022-03-15T18:35:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split from #878. The current `TableView` class is by far the most complex part of Datasette, and the most difficult to work on: https://github.com/simonw/datasette/blob/0.59.2/datasette/views/table.py\r\n\r\nIn #878 I started exploring a new pattern for building views. In doing so it became clear that `TableView` is the first beast that I need to slay - if I can refactor that into something neat the pattern for building other views will emerge as a natural consequence.\r\n\r\nI've been trying to build this as a `register_routes()` plugin, as originally suggested in #870 - though unfortunately it looks like those plugins can't replace existing Datasette default views at the moment, see #1517. 
[UPDATE: I was wrong about this, plugins can override default views just fine]\r\n\r\nI also know that I want to have a fully documented template context for `table.html` as a major step on the way to Datasette 1.0, see #1510.\r\n\r\nAll of this adds up to the `TableView` refactor being a major project that will unblock a whole flurry of other things - so I'm going to work on that in this separate issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1518/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1168995756, "node_id": "I_kwDOBm6k_c5FrXWs", "number": 1657, "title": "Tilde encoding: use ~ instead of - for dash-encoding", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 12, "created_at": "2022-03-14T22:55:17Z", "updated_at": "2022-03-15T18:25:11Z", "closed_at": "2022-03-15T18:01:58Z", "author_association": "OWNER", "pull_request": null, "body": "Refs #1439", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1657/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1168357113, "node_id": "PR_kwDOBm6k_c40ZRDA", "number": 1656, "title": "Update pytest requirement from <7.1.0,>=5.2.2 to >=5.2.2,<7.2.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-14T13:11:53Z", "updated_at": "2022-03-15T18:03:03Z", "closed_at": "2022-03-15T18:03:02Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1656", "body": "Updates the requirements on [pytest](https://github.com/pytest-dev/pytest) to permit the latest version.\n
Release notes

Sourced from pytest's releases.

## pytest 7.1.0 (2022-03-13)

### Breaking Changes

- #8838: As per our policy, the following features have been deprecated in the 6.X series and are now removed:
  - `pytest._fillfuncargs` function.
  - `pytest_warning_captured` hook - use `pytest_warning_recorded` instead.
  - `-k -foobar` syntax - use `-k 'not foobar'` instead.
  - `-k foobar:` syntax.
  - `pytest.collect` module - import from `pytest` directly.

  For more information consult Deprecations and Removals in the docs.

- #9437: Dropped support for Python 3.6, which reached end-of-life at 2021-12-23.

### Improvements

- #5192: Fixed test output for some data types where `-v` would show less information. Also, when showing diffs for sequences, `-q` would produce full diffs instead of the expected diff.
- #9362: pytest now avoids specialized assert formatting when it is detected that the default `__eq__` is overridden in attrs or dataclasses.
- #9536: When `-vv` is given on command line, show skipping and xfail reasons in full instead of truncating them to fit the terminal width.
- #9644: More information about the location of resources that led Python to raise `ResourceWarning` can now be obtained by enabling `tracemalloc`. See the resource-warnings section of the docs for more information.
- #9678: More types are now accepted in the `ids` argument to `@pytest.mark.parametrize`. Previously only `str`, `float`, `int` and `bool` were accepted; now `bytes`, `complex`, `re.Pattern`, `Enum` and anything with a `__name__` are also accepted.
- #9692: `pytest.approx` now raises a `TypeError` when given an unordered sequence (such as `set`). Note that this implies that custom classes which only implement `__iter__` and `__len__` are no longer supported as they don't guarantee order.

### Bug Fixes

- #8242: The deprecation of raising `unittest.SkipTest` to skip collection of tests during the pytest collection phase is reverted - this is now a supported feature again.
- #9493: Symbolic link components are no longer resolved in conftest paths. This means that if a conftest appears twice in collection tree, using symlinks, it will be executed twice.

... (truncated)

Commits

- 1dbffcc [pre-commit.ci] auto fixes from pre-commit.com hooks
- d53a5fb Prepare release version 7.1.0
- d306ec0 Update upcoming trainings (#9744)
- 3e4c14b Merge pull request #9751 from fabianegli/main
- 7f924b1 Fix typo in deprecation documentation
- 4a8f8ad build(deps): Bump django from 4.0.2 to 4.0.3 in /testing/plugins_integration ...
- c0fd2d8 build(deps): Bump pytest-asyncio from 0.18.1 to 0.18.2 in /testing/plugins_in...
- 843e018 Merge pull request #9732 from nicoddemus/9730-toml-failure
- bc43d66 [automated] Update plugin list (#9733)
- e38d1ca Improve error message for malformed pyproject.toml files
- Additional commits viewable in compare view

---

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1656/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1169895600, "node_id": "PR_kwDOBm6k_c40eW7C", "number": 1659, "title": "Tilde encoding", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-15T16:19:07Z", "updated_at": "2022-03-15T18:01:58Z", "closed_at": "2022-03-15T18:01:57Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1659", "body": "Refs #1657", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1659/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 675753042, "node_id": "MDU6SXNzdWU2NzU3NTMwNDI=", "number": 131, "title": "sqlite-utils insert: options for column types", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2020-08-09T18:59:11Z", "updated_at": "2022-03-15T13:21:42Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The `insert` command currently results in string types for every column - at least when used against CSV or TSV inputs.\r\n\r\nIt would be useful if you could do the following:\r\n\r\n- automatically detects the column types based on eg the first 1000 records\r\n- explicitly state the rule for specific columns\r\n\r\n`--detect-types` could work for the former - or it could do that by default and allow opt-out using `--no-detect-types`\r\n\r\nFor specific columns maybe this:\r\n\r\n sqlite-utils insert db.db images images.tsv \\\r\n --tsv \\\r\n -c id int \\\r\n -c score float", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/131/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 930807135, "node_id": "MDU6SXNzdWU5MzA4MDcxMzU=", "number": 1384, "title": "Plugin hook for dynamic metadata", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 22, "created_at": "2021-06-26T22:36:03Z", "updated_at": "2022-03-14T00:36:42Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "@brandonrobertz contributed an implementation of this in PR #1368, which I just merged. 
Opening this ticket to track further work on this before it goes out in a Datasette release (likely preceded by an alpha).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1384/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1145882578, "node_id": "I_kwDOCGYnMM5ETMfS", "number": 408, "title": "`deterministic=True` fails on versions of SQLite prior to 3.8.3", "user": {"value": 24938923, "label": "learning4life"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2022-02-21T14:36:43Z", "updated_at": "2022-03-13T16:54:09Z", "closed_at": "2022-03-02T00:38:11Z", "author_association": "NONE", "pull_request": null, "body": "Hi, love your work.\r\n\r\nI am unable to lookup indexes in a database using sqlite-utils:\r\n\r\n`\r\nsqlite-utils indexes city_spec.db --table`\r\n\r\nor\r\n\r\n`sqlite-utils indexes city_spec.db MyTable\r\n`\r\n\r\n**Software**\r\nsqlite-utils, version 3.24\r\nsqlite3 --version: 3.36.0 \r\n\r\n**Output:**\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/app-root/bin/sqlite-utils\", line 8, in \r\n sys.exit(cli())\r\n File \"/opt/app-root/lib64/python3.8/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/opt/app-root/lib64/python3.8/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/click/decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/cli.py\", line 2123, in indexes\r\n ctx.invoke(\r\n File \"/opt/app-root/lib64/python3.8/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/cli.py\", line 1624, in query\r\n db.register_fts4_bm25()\r\n File \"/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py\", line 403, in register_fts4_bm25\r\n self.register_function(rank_bm25, deterministic=True)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py\", line 399, in register_function\r\n register(fn)\r\n File \"/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py\", line 392, in register\r\n self.conn.create_function(name, arity, fn, **kwargs)\r\nsqlite3.NotSupportedError: deterministic=True requires SQLite 3.8.3 or higher\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/408/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} 
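The `deterministic=True` failure in sqlite-utils #408 above suggests a try/except fallback pattern. This is a minimal sketch of that pattern using Python's standard `sqlite3` module, not the exact fix that shipped in sqlite-utils:

```python
import sqlite3


def to_upper(value):
    return value.upper()


def register(conn, fn, arity):
    # Sketch of a fallback for the error above: deterministic=True requires
    # SQLite >= 3.8.3 (and the keyword argument itself requires Python >= 3.8),
    # so retry without it if registration fails.
    try:
        conn.create_function(fn.__name__, arity, fn, deterministic=True)
    except (sqlite3.NotSupportedError, TypeError):
        conn.create_function(fn.__name__, arity, fn)


conn = sqlite3.connect(":memory:")
register(conn, to_upper, 1)
print(conn.execute("SELECT to_upper('hello')").fetchone())  # ('HELLO',)
```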
{"id": 1088816961, "node_id": "I_kwDODEm0Qs5A5gdB", "number": 62, "title": "KeyError: 'created_at' for private accounts?", "user": {"value": 6764957, "label": "swyxio"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-12-26T17:51:51Z", "updated_at": "2022-03-12T02:36:32Z", "closed_at": "2022-02-24T18:10:18Z", "author_association": "NONE", "pull_request": null, "body": "hey Simon!\r\n\r\ni was running `twitter-to-sqlite user-timeline twitter.db` for [my private alt](https://twitter.com/swyxio) and ran into this error:\r\n\r\n
\r\n\r\n\r\n![image](https://user-images.githubusercontent.com/6764957/147416165-46b69c30-100a-406f-8534-8612b75547ae.png)\r\n\r\n\r\n\r\n\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/Users/swyx/Work/datasette/env/bin/twitter-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/twitter_to_sqlite/cli.py\", line 291, in user_timeline\r\n profile = utils.get_profile(db, session, **kwargs)\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/twitter_to_sqlite/utils.py\", line 133, in get_profile\r\n save_users(db, [profile])\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/twitter_to_sqlite/utils.py\", line 453, in save_users\r\n transform_user(user)\r\n File \"/Users/swyx/Work/datasette/env/lib/python3.9/site-packages/twitter_to_sqlite/utils.py\", line 285, in transform_user\r\n user[\"created_at\"] = parser.parse(user[\"created_at\"])\r\nKeyError: 'created_at'\r\n```\r\n\r\n
\r\n\r\n\r\nthis looks awfully like #37 but it can't be, because i'm authed into my account and obviously i have perms to read my own account. wonder if there's any diagnostic methods i should apply here? just filing an issue for others to find while i investigate.", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/62/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1160034488, "node_id": "I_kwDOCGYnMM5FJLi4", "number": 411, "title": "Support for generated columns", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-03-04T20:41:33Z", "updated_at": "2022-03-11T22:32:43Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "This is a fairly new feature -- SQLite version 3.31.0 (2020-01-22) -- that I, admittedly, haven't gotten to work yet. But it looks _incredibly_ useful: https://dgl.cx/2020/06/sqlite-json-support\r\n\r\nI'm not sure if this is an option on `add-column` or a separate command like `add-generated-column`. Either way, it needs an argument to populate it. It could be something like this:\r\n\r\n```sh\r\nsqlite-utils add-column data.db table-name generated --as 'json_extract(data, \"$.field\")' --virtual\r\n```\r\n\r\nMore here: https://www.sqlite.org/gencol.html", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/411/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1166731361, "node_id": "I_kwDOCGYnMM5Fiuhh", "number": 414, "title": "I forgot to include the changelog in the 3.25.1 release", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-03-11T18:32:36Z", "updated_at": "2022-03-11T18:40:39Z", "closed_at": "2022-03-11T18:40:39Z", "author_association": "OWNER", "pull_request": null, "body": "I pushed a release for https://github.com/simonw/sqlite-utils/releases/tag/3.25.1 but forgot to include the release notes in `docs/changelog.rst`\r\n\r\nThis means https://sqlite-utils.datasette.io/en/stable/changelog.html isn't showing them.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/414/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1166587040, "node_id": "I_kwDOCGYnMM5FiLSg", "number": 413, "title": "Display autodoc type information more legibly", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-03-11T15:58:20Z", "updated_at": "2022-03-11T18:07:10Z", "closed_at": "2022-03-11T18:07:10Z", "author_association": "OWNER", 
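The generated-columns proposal in sqlite-utils #411 above boils down to a single `ALTER TABLE` statement. A minimal sketch using the standard `sqlite3` module, assuming SQLite 3.31.0 or newer compiled with JSON support; this is the SQL the proposed flags might emit, not the implemented command:

```python
import sqlite3

# Requires SQLite >= 3.31.0 (generated columns) with the JSON1 functions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (data TEXT)")
conn.execute(
    # Note: only VIRTUAL generated columns can be added via ALTER TABLE;
    # STORED columns must be declared in the original CREATE TABLE.
    "ALTER TABLE places ADD COLUMN field TEXT "
    "GENERATED ALWAYS AS (json_extract(data, '$.field')) VIRTUAL"
)
conn.execute("""INSERT INTO places (data) VALUES ('{"field": "value"}')""")
print(conn.execute("SELECT field FROM places").fetchone())  # ('value',)
```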
"pull_request": null, "body": "https://sqlite-utils.datasette.io/en/3.25/reference.html#sqlite_utils.db.Table.insert looks like this at the moment:\r\n\r\n\"image\"\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/413/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1131295060, "node_id": "I_kwDOBm6k_c5DbjFU", "number": 1634, "title": "Update Dockerfile generated by `datasette publish`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2022-02-11T00:07:26Z", "updated_at": "2022-03-11T17:38:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The generated `Dockerfile` currently looks something like this:\r\n```Dockerfile\r\nFROM python:3.8\r\nCOPY . /app\r\nWORKDIR /app\r\n\r\nENV DATASETTE_SECRET 'edab49cbc5d5f6f33238f54852037e3fee710821960b73edd2ce743454182ae2'\r\nRUN pip install -U datasette datasette-auth-passwords datasette-tiddlywiki datasette-graphql\r\nRUN datasette inspect fixtures.db other.db --inspect-file inspect-data.json\r\nENV PORT 8080\r\nEXPOSE 8080\r\nCMD datasette serve --host 0.0.0.0 -i fixtures.db -i other.db --cors --inspect-file inspect-data.json --metadata metadata.json --create --port $PORT /data/*.db\r\n```\r\nThis is still on Python 3.8, and it generates a pretty large image compared to the `Dockerfile` used for https://hub.docker.com/datasetteproject/datasette - https://github.com/simonw/datasette/blob/0.60.2/Dockerfile\r\n\r\nHere's the code that generates it: https://github.com/simonw/datasette/blob/7d24fd405f3c60e4c852c5d746c91aa2ba23cf5b/datasette/utils/__init__.py#L389-L400", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1634/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 2, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 678760988, "node_id": "MDU6SXNzdWU2Nzg3NjA5ODg=", "number": 932, "title": "End-user documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 6, "created_at": "2020-08-13T22:04:39Z", "updated_at": "2022-03-08T15:20:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette's documentation is aimed at people who install and configure it.\r\n\r\nWhat about end users of preconfigured and deployed Datasette instances?\r\n\r\nSomething that can be linked to from the Datasette UI would be really useful.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/932/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1154399841, "node_id": "I_kwDOBm6k_c5Ezr5h", "number": 1645, 
"title": "Sensible `cache-control` headers for static assets, including those served by plugins", "user": {"value": 697092, "label": "curiousleo"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 4, "created_at": "2022-02-28T18:12:03Z", "updated_at": "2022-03-08T02:59:29Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "## What I'm seeing\r\n\r\nWith `default_cache_ttl = 86400`, I see the following:\r\n\r\nA table view returns `Cache-control: max-age=86400`:\r\n\r\n![Screenshot_20220228_190000](https://user-images.githubusercontent.com/697092/156034352-4d64683e-39c8-49af-81df-0217a5957bbd.png)\r\n\r\nA static asset returns no `Cache-control` header:\r\n\r\n![Screenshot_20220228_185933](https://user-images.githubusercontent.com/697092/156034363-d0b03cc2-5889-4ed2-b601-8c1846b8469a.png)\r\n\r\n## What I expected to see\r\n\r\nI expected the static asset to return a `Cache-control` header indicating that this response can be cached.\r\n\r\n## Why this matters\r\n\r\nI'm productionising a Datasette deployment right now and was looking into putting it behind a Varnish instance. I was surprised to see requests for static assets being served from Datasette rather than Varnish, this is what led me to look more closely at the response headers.\r\n\r\nWhile Datasette serves those static assets pretty quickly, I don't see why Datasette should serve them. By their nature, static assets like images and JS files are very cacheable, so it should be easy to serve them from a cache like Varnish.\r\n\r\n(Note that Varnish can easily be configured to override this header, enabling caching for static assets. But it would be better if this override was not necessary.)\r\n\r\n## Discussion\r\n\r\nIt seems clear to me that serving static assets without a `Cache-control` header is not ideal.\r\n\r\nI see two options here:\r\n\r\nA. Static assets use the same logic as table / SQL views to set the `Cache-control` header based on `default_cache_ttl`.\r\nB. 
An additional setting for static assets is introduced (`default_static_cache_ttl`, say).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1645/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1161969891, "node_id": "I_kwDOBm6k_c5FQkDj", "number": 1654, "title": "Adopt a code of conduct", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-03-07T22:00:24Z", "updated_at": "2022-03-07T22:19:35Z", "closed_at": "2022-03-07T22:19:35Z", "author_association": "OWNER", "pull_request": null, "body": "This is long overdue, especially given the size of the project now.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1654/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1161937073, "node_id": "I_kwDOBm6k_c5FQcCx", "number": 1653, "title": "Mechanism to default a table to sorting by multiple columns", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-07T21:20:11Z", "updated_at": "2022-03-07T21:23:39Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "### Discussed in https://github.com/simonw/datasette/discussions/1652\r\n\r\n
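Option A from #1645 above could be prototyped as ASGI middleware wrapped around Datasette. This is a hypothetical sketch, not a Datasette patch: the class name is made up, and the `/-/static/` prefix check is an assumption about where the relevant assets are mounted:

```python
class StaticCacheControl:
    # Hypothetical ASGI middleware sketching option A: attach a
    # Cache-Control header to static asset responses.
    def __init__(self, app, max_age=86400):
        self.app = app
        self.max_age = max_age

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http" or not scope["path"].startswith("/-/static/"):
            return await self.app(scope, receive, send)

        async def send_with_header(message):
            if message["type"] == "http.response.start":
                headers = list(message.get("headers", []))
                headers.append((b"cache-control", b"max-age=%d" % self.max_age))
                message = dict(message, headers=headers)
            await send(message)

        await self.app(scope, receive, send_with_header)
```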
\r\n\r\nOriginally posted by **zaneselvans** March 7, 2022\r\nIt's easy to tell datasette to sort tables using a single column, as [described in the docs](https://docs.datasette.io/en/stable/metadata.html#setting-a-default-sort-order):\r\n\r\n```yaml\r\ndatabases:\r\n ferc1:\r\n tables:\r\n f1_edcfu_epda:\r\n sort: created_time\r\n```\r\n\r\nBut is there some way to tell it to sort using a composite key, like you would in an `ORDER BY` clause instead? For example, the way it's being done **[in this query](https://data.catalyst.coop/ferc1?sql=select%0D%0A++rowid%2C%0D%0A++respondent_id%2C%0D%0A++report_year%2C%0D%0A++spplmnt_num%2C%0D%0A++row_number%2C%0D%0A++row_seq%2C%0D%0A++row_prvlg%2C%0D%0A++acct_num%2C%0D%0A++depr_plnt_base%2C%0D%0A++est_avg_srvce_lf%2C%0D%0A++net_salvage%2C%0D%0A++apply_depr_rate%2C%0D%0A++mrtlty_crv_typ%2C%0D%0A++avg_remaining_lf%2C%0D%0A++report_prd%0D%0Afrom%0D%0A++f1_edcfu_epda%0D%0Awhere%0D%0A++respondent_id+%3D+210%0D%0A++AND+report_year+%3D+2020%0D%0Aorder+by%0D%0A++report_year%2C+report_prd%2C+respondent_id%2C+spplmnt_num%2C+row_number%0D%0Alimit%0D%0A++1000)** on our Datasette?\r\n\r\n```sql\r\nSELECT\r\n respondent_id,\r\n report_year,\r\n spplmnt_num,\r\n row_number,\r\n row_seq,\r\n row_prvlg,\r\n acct_num,\r\n depr_plnt_base,\r\n est_avg_srvce_lf,\r\n net_salvage,\r\n apply_depr_rate,\r\n mrtlty_crv_typ,\r\n avg_remaining_lf,\r\n report_prd\r\nFROM\r\n f1_edcfu_epda\r\nWHERE\r\n respondent_id = 210\r\n AND report_year = 2020\r\nORDER BY\r\n report_year, report_prd, respondent_id, spplmnt_num, row_number\r\nLIMIT\r\n 1000\r\n```\r\n\r\nThe problem here is that by default it's using `rowid` (the SQLite assigned autoincrementing integer key) to order the records, but the table **should** have a natural composite primary key. The original database that this data is being migrated from doesn't enforce unique primary keys, so there are dupes, and we don't want to drop those rows. The records are somehow getting jumbled in the database (the `rowid` ordering isn't lined up with the expected ordering based on the composite primary key, though it's close) and this jumbling is confusing to users who expect to see the data ordered based on the natural primary key.\r\n\r\nI've tried setting the `sort` metadata parameter to a list of column names, a tuple of column names, a quoted string of comma-separated column names, a quoted string of a tuple of column names...\r\n\r\n```yaml\r\ndatabases:\r\n ferc1:\r\n tables:\r\n f1_edcfu_epda:\r\n sort: \"(report_year, report_prd, respondent_id, spplmnt_num, row_number)\"\r\n```\r\n\r\nand they all give me server errors like:\r\n\r\n```\r\nCannot sort table by (report_year, report_prd, respondent_id, spplmnt_num, row_number)\r\n```
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1653/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1160750713, "node_id": "I_kwDOBm6k_c5FL6Z5", "number": 1650, "title": "Implement redirects from old % encoding to new dash encoding", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2022-03-06T23:40:02Z", "updated_at": "2022-03-07T19:26:15Z", "closed_at": "2022-03-07T19:26:14Z", "author_association": "OWNER", "pull_request": null, "body": "> One big advantage to this scheme is that redirecting old links to `%2F` pages (e.g. https://fivethirtyeight.datasettes.com/fivethirtyeight/twitter-ratio%2Fsenators) is easy - if you see a `%` in the `raw_path`, redirect to that page with the `%` replaced by `-`.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1439#issuecomment-1060044007_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1650/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1160432941, "node_id": "PR_kwDOBm6k_c4z_p6S", "number": 1648, "title": "Use dash encoding for table names and row primary keys in URLs", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-03-05T19:50:45Z", "updated_at": "2022-03-07T15:38:30Z", "closed_at": "2022-03-07T15:38:30Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1648", "body": "Refs #1439.\r\n\r\n- [x] Build `dash_encode` / `dash_decode` functions\r\n- [x] Use dash encoding for row primary keys\r\n- [x] Use dash encoding for `?_next=` pagination tokens\r\n- [x] Use dash encoding for table names in URLs\r\n- [x] Use dash encoding for database name\r\n- ~~Implement redirects from previous `%` URLs that replace those with `-`~~ - separate issue: #1650", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1648/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1160677684, "node_id": "PR_kwDOBm6k_c40AW_v", "number": 1649, "title": "Add /opt/homebrew to where spatialite extension can be found", "user": {"value": 2182, "label": "danp"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-03-06T18:09:35Z", "updated_at": "2022-03-06T22:46:00Z", "closed_at": "2022-03-06T19:39:15Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1649", "body": "Helps homebrew on Apple Silicon setups find spatialite without needing\r\na full path.\r\n\r\nSimilar to #1114", "repo": {"value": 107914493, "label": "datasette"}, 
"type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1649/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1149729902, "node_id": "PR_kwDOCGYnMM4zbaJy", "number": 410, "title": "Correct spelling mistakes (found with codespell)", "user": {"value": 3818, "label": "EdwardBetts"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-02-24T20:44:18Z", "updated_at": "2022-03-06T08:48:29Z", "closed_at": "2022-03-01T21:05:29Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/410", "body": null, "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/410/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1098275181, "node_id": "PR_kwDOBm6k_c4wwNCl", "number": 1589, "title": "Typo in docs about default redirect status code", "user": {"value": 3556, "label": "davidbgk"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-10T19:14:36Z", "updated_at": "2022-03-06T02:27:49Z", "closed_at": "2022-03-06T01:58:32Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1589", "body": null, "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1589/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1108084641, "node_id": "PR_kwDOBm6k_c4xQ0uZ", "number": 1602, "title": "Update pytest-timeout requirement from <2.1,>=1.4.2 to >=1.4.2,<2.2", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-19T13:11:50Z", "updated_at": "2022-03-06T01:41:50Z", "closed_at": "2022-03-06T01:41:49Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1602", "body": "Updates the requirements on [pytest-timeout](https://github.com/pytest-dev/pytest-timeout) to permit the latest version.\n
Commits

- 8e4800e Fixup readme
- dc1efca 2.1.0 release
- dd9d608 Add custom hooks specifications for overriding setup_timeout and teardown_tim...
- ed8ecd6 module names, they're difficult
- 3ab4319 Add changelog
- 4f7ebae Replace deprecated py.io.get_terminal_width() with shutil.get_terminal_size()...
- b8a2fa6 Prep release
- 951972d Update changelog
- 748a9c3 Making detection of whether a debugger is currently attached more flexible. (...
- f8a46a1 Github removed the git protocol (#112)
- Additional commits viewable in compare view
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1602/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1112633417, "node_id": "PR_kwDOBm6k_c4xfryi", "number": 1610, "title": "Update asgiref requirement from <3.5.0,>=3.2.10 to >=3.2.10,<3.6.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-01-24T13:14:18Z", "updated_at": "2022-03-06T01:30:27Z", "closed_at": "2022-03-06T01:30:27Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1610", "body": "Updates the requirements on [asgiref](https://github.com/django/asgiref) to permit the latest version.\n
Changelog

Sourced from asgiref's changelog.

## 3.5.0 (2022-01-22)

- Python 3.6 is no longer supported, and asyncio calls have been changed to use only the modern versions of the APIs as a result
- Several causes of RuntimeErrors in cases where an event loop was assigned to a thread but not running
- Speed improvements in the Local class

## 3.4.1 (2021-07-01)

- Fixed an issue with the deadlock detection where it had false positives during exception handling.

## 3.4.0 (2021-06-27)

- Calling sync_to_async directly from inside itself (which causes a deadlock when in the default, thread-sensitive mode) now has deadlock detection.
- asyncio usage has been updated to use the new versions of get_event_loop, ensure_future, wait and gather, avoiding deprecation warnings in Python 3.10. Python 3.6 installs continue to use the old versions; this is only for 3.7+
- sync_to_async and async_to_sync now have improved type hints that pass through the underlying function type correctly.
- All Websocket* types are now spelled WebSocket, to match our specs and the official spelling. The old names will work until release 3.5.0, but will raise deprecation warnings.
- The typing for WebSocketScope and HTTPScope's extensions key has been fixed.

## 3.3.4 (2021-04-06)

- The async_to_sync type error is now a warning due to the high false negative rate when trying to detect coroutine-returning callables in Python.

## 3.3.3 (2021-04-06)

... (truncated)

Commits

- 8b61513 Releasing 3.5.0
- b2e1c9d Fixed pytest_asyncio deprecation warning.
- 2eda551 Added testing for Python 3.10.
- 02fecb6 Drop Python 3.6 (#307)
- 6689c0a Added stacklevel to warning in AsyncToSync.
- 4364f9b Changed how StatelessServer handles event loops
- 7bc055c Update implementations.rst (#295)
- c758984 Move current_task import choice to module definition time
- dfe87b2 Fixed #292: Use get_event_loop in class-level code
- b3a65e3 Removed class variable which has been unused since a0bbe90
- Additional commits viewable in compare view
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1610/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1124191982, "node_id": "PR_kwDOBm6k_c4yFTCp", "number": 1629, "title": "Update pytest requirement from <6.3.0,>=5.2.2 to >=5.2.2,<7.1.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-02-04T13:14:10Z", "updated_at": "2022-03-06T01:30:06Z", "closed_at": "2022-03-06T01:30:06Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1629", "body": "Updates the requirements on [pytest](https://github.com/pytest-dev/pytest) to permit the latest version.\n
Release notes

Sourced from pytest's releases.

## pytest 7.0.0 (2022-02-03)

(Please see the full set of changes for this release also in the 7.0.0rc1 notes below)

### Deprecations

- #9488: If custom subclasses of nodes like `pytest.Item` override the `__init__` method, they should take `**kwargs`. See the uncooperative-constructors-deprecated section of the docs for details.

  Note that a deprecation warning is only emitted when there is a conflict in the arguments pytest expected to pass. This deprecation was already part of pytest 7.0.0rc1 but wasn't documented.

### Bug Fixes

- #9355: Fixed error message prints function decorators when using assert in Python 3.8 and above.
- #9396: Ensure `pytest.Config.inifile` is available during the `pytest_cmdline_main` hook (regression during 7.0.0rc1).

### Improved Documentation

- #9404: Added extra documentation on alternatives to common misuses of `pytest.warns(None)` ahead of its deprecation.
- #9505: Clarify where the configuration files are located. To avoid confusion the documentation mentions that the configuration file is located in the root of the repository.

### Trivial/Internal Changes

- #9521: Add test coverage to assertion rewrite path.

## pytest 7.0.0rc1 (2021-12-06)

### Breaking Changes

- #7259: The `Node.reportinfo()` function first return value type has been expanded from `py.path.local | str` to `os.PathLike[str] | str`.

  Most plugins which refer to `reportinfo()` only define it as part of a custom `pytest.Item` implementation. Since `py.path.local` is a `os.PathLike[str]`, these plugins are unaffected.

  Plugins and users which call `reportinfo()`, use the first return value and interact with it as a `py.path.local`, would need to adjust by calling `py.path.local(fspath)`. Although preferably, avoid the legacy `py.path.local` and use `pathlib.Path`, or use `item.location` or `item.path`, instead.

  Note: pytest was not able to provide a deprecation period for this change.

... (truncated)

Commits

- 3554b83 Add note to changelog
- 6ea7f99 Prepare release version 7.0.0
- 737b220 [7.0.x] releasing: Add template for major releases (#9597)
- 7fa3972 [7.0.x] releasing: Always set doc_version (#9590)
- b304499 [7.0.x] Make 'warnings' and 'deselected' in assert_outcomes optional (#9566)
- f17525d [7.0.x] doc: Add ellipsis to warning usecase list (#9562)
- 0a7be97 ci: Bump up timeout (#9565)
- c17908c [7.0.x] doc: Recategorize 7.0.0 changelog items (#9564)
- ab549bb [7.0.x] Add missing cooperative constructor changelog (#9563)
- 4b1707f [7.0.x] Autouse linearization graph (#9557)
- Additional commits viewable in compare view
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1629/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1125973221, "node_id": "PR_kwDOBm6k_c4yK44E", "number": 1631, "title": "Update pytest-asyncio requirement from <0.17,>=0.10 to >=0.10,<0.19", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-02-07T13:13:19Z", "updated_at": "2022-03-06T01:29:54Z", "closed_at": "2022-03-06T01:29:53Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1631", "body": "Updates the requirements on [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio) to permit the latest version.\n
Release notes

Sourced from pytest-asyncio's releases.

## pytest-asyncio 0.18.0

pytest-asyncio: pytest support for asyncio

pytest-asyncio is an Apache2 licensed library, written in Python, for testing asyncio code with pytest.

asyncio code is usually written in the form of coroutines, which makes it slightly more difficult to test using normal testing tools. pytest-asyncio provides useful fixtures and markers to make testing easier.

```python
@pytest.mark.asyncio
async def test_some_asyncio_code():
    res = await library.do_something()
    assert b"expected result" == res
```

pytest-asyncio has been strongly influenced by pytest-tornado.

### Features

- fixtures for creating and injecting versions of the asyncio event loop
- fixtures for injecting unused tcp/udp ports
- pytest markers for treating tests as asyncio coroutines
- easy testing with non-default event loops
- support for `async def` fixtures and async generator fixtures
- support auto mode to handle all async fixtures and tests automatically by asyncio; provide strict mode if a test suite should work with different async frameworks simultaneously, e.g. asyncio and trio.

### Installation

... (truncated)
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1631/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}