{"id": 273595473, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUyMzYwNzQw", "number": 81, "title": ":fire: Removes DS_Store", "user": {"value": 50527, "label": "jefftriplett"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-13T22:07:52Z", "updated_at": "2017-11-14T02:24:54Z", "closed_at": "2017-11-13T22:16:55Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/81", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/81/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 273775212, "node_id": "MDU6SXNzdWUyNzM3NzUyMTI=", "number": 88, "title": "Add NHS England Hospitals example to wiki", "user": {"value": 15543, "label": "tomdyson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2017-11-14T12:29:10Z", "updated_at": "2021-03-22T23:46:36Z", "closed_at": "2017-11-14T22:54:06Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "https://nhs-england-hospitals.now.sh\r\n\r\nand an associated map visualisation:\r\n\r\nhttp://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/\r\n\r\nDatasette is wonderful!\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/88/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273816720, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUyNTIyNzYy", "number": 89, "title": "SQL syntax highlighting with CodeMirror", "user": {"value": 15543, "label": "tomdyson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-14T14:43:33Z", "updated_at": "2017-11-15T02:03:01Z", "closed_at": "2017-11-15T02:03:01Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/89", "body": "Addresses #13 \r\n\r\nFuture enhancements could include autocompletion of table and column names, e.g. 
with\r\n\r\n```javascript\r\nextraKeys: {\"Ctrl-Space\": \"autocomplete\"},\r\nhintOptions: {tables: {\r\n users: [\"name\", \"score\", \"birthDate\"],\r\n countries: [\"name\", \"population\", \"size\"]\r\n }}\r\n```\r\n\r\n(see https://codemirror.net/doc/manual.html#addon_sql-hint and source at http://codemirror.net/mode/sql/)", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/89/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 273961179, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUyNjMxNTcw", "number": 94, "title": "Initial add simple prod ready Dockerfile refs #57", "user": {"value": 247192, "label": "macropin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-14T22:09:09Z", "updated_at": "2017-11-15T03:08:04Z", "closed_at": "2017-11-15T03:08:04Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/94", "body": "Multi-stage build based off official python:3.6-slim\r\n\r\nExample usage:\r\n```\r\ndocker run --rm -t -i -p 9000:8001 -v $(pwd)/db:/db datasette datasette serve /db/chinook.db\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/94/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 274284246, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUyODcwMDMw", "number": 104, "title": "[WIP] Add publish to heroku support", "user": {"value": 21148, "label": "jacobian"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2017-11-15T19:56:22Z", "updated_at": "2017-11-21T20:55:05Z", "closed_at": "2017-11-21T20:55:05Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/104", "body": "\r\n\r\nRefs #90 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/104/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 274343647, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUyOTE0NDgw", "number": 107, "title": "add support for ?field__isnull=1", "user": {"value": 3433657, "label": "raynae"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2017-11-15T23:36:36Z", "updated_at": "2017-11-17T15:12:29Z", "closed_at": "2017-11-17T13:29:22Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/107", "body": "Is this what you had in mind for [this issue](https://github.com/simonw/datasette/issues/64)?", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/107/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, 
\"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 274733145, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUzMjAxOTQ1", "number": 114, "title": "Add spatialite, switch to debian and local build", "user": {"value": 54999, "label": "ingenieroariel"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-17T02:37:09Z", "updated_at": "2017-11-17T03:50:52Z", "closed_at": "2017-11-17T03:50:52Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/114", "body": "Improves the Dockerfile to support spatial datasets, work with the local datasette code (Friendly with git tags and Dockerhub) and moves to slim debian, a small image easy to extend via apt packages for sqlite.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/114/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 274877366, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUzMzA2ODgy", "number": 115, "title": "Add keyboard shortcut to execute SQL query", "user": {"value": 198537, "label": "rgieseke"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-17T14:13:33Z", "updated_at": "2017-11-17T15:16:34Z", "closed_at": "2017-11-17T14:22:56Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/115", "body": "Very cool tool, thanks a lot!\r\n\r\nThis PR adds a `Shift-Enter` short cut to execute the SQL query. I used CodeMirrors keyboard handling.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/115/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 274900388, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUzMzI0MzAx", "number": 117, "title": "Don't prevent tabbing to `Run SQL` button", "user": {"value": 198537, "label": "rgieseke"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-17T15:27:50Z", "updated_at": "2017-11-19T20:30:24Z", "closed_at": "2017-11-18T00:53:43Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/117", "body": "Mentioned in #115 \r\n\r\nHere you go!", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/117/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 275814941, "node_id": "MDU6SXNzdWUyNzU4MTQ5NDE=", "number": 141, "title": "datasette publish can fail if /tmp is on a different device", "user": {"value": 21148, "label": "jacobian"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2949431, "label": "Custom templates edition"}, "comments": 5, "created_at": "2017-11-21T18:28:05Z", "updated_at": "2020-04-29T03:27:54Z", "closed_at": "2017-12-08T16:06:36Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": 
"`datasette publish` uses hard links to avoid copying the db into a tmp directory. This can fail if `/tmp` is on another device, because hardlinks can't cross devices. You'll see something like this:\r\n\r\n```\r\n$ datasette publish heroku whatever.db\r\n...\r\nOSError: [Errno 18] Invalid cross-device link: '/mnt/c/Users/jacob/c/datasette/whatever.db' -> '/tmp/tmpvxq2yof6/whatever.db'\r\n```\r\n[In my case this is failing because I'm on a Windows machine, using WSL, so my code's on a different virtual filesystem from the Linux subsystem, Because Reasons.]\r\n\r\nI'm not sure if it's possible to detect this (can you figure out which device `/tmp` is on?), or what the fallback should be (soft link? copy?).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/141/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 286938589, "node_id": "MDU6SXNzdWUyODY5Mzg1ODk=", "number": 177, "title": "Publishing to Heroku - metadata file not uploaded?", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-01-09T01:04:31Z", "updated_at": "2018-01-25T16:45:32Z", "closed_at": "2018-01-25T16:45:32Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Trying to run *datasette* (version 0.14) on Heroku with a `metadata.json` doesn't seem to be picking up the `metadata.json` file? \r\n\r\nOn a Mac with dodgy `tar` support:\r\n\r\n```\r\n \u25b8 Couldn't detect GNU tar. 
Builds could fail due to decompression errors\r\n \u25b8 See\r\n \u25b8 https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive\r\n \u25b8 Please install it, or specify the '--tar' option\r\n \u25b8 Falling back to node's built-in compressor\r\n```\r\n\r\nCould that be causing the issue?\r\n\r\nAlso, I'm not seeing custom query links anywhere obvious when I run the metadata file with a local *datasette* server?\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/177/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 287240246, "node_id": "MDExOlB1bGxSZXF1ZXN0MTYxOTgyNzEx", "number": 178, "title": "If metadata exists, add it to heroku launch command", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-01-09T21:42:21Z", "updated_at": "2018-01-15T09:42:46Z", "closed_at": "2018-01-14T21:05:16Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/178", "body": "The heroku build doesn't seem to make use of any provided `metadata.json` file.\r\n\r\nAdd the `--metadata` switch to the Heroku web launch command if a `metadata.json` file is available.\r\n\r\nAddresses: https://github.com/simonw/datasette/issues/177", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/178/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 289375133, "node_id": "MDExOlB1bGxSZXF1ZXN0MTYzNTIzOTc2", "number": 180, "title": "make html title more readable in query template", "user": {"value": 56477, "label": "ryanpitts"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-01-17T18:56:03Z", "updated_at": "2018-04-03T16:03:38Z", "closed_at": "2018-04-03T15:24:05Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/180", "body": "tiny tweak to make this easier to visually parse\u2014I think it matches your style in other templates", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/180/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 291451116, "node_id": "MDExOlB1bGxSZXF1ZXN0MTY1MDI5ODA3", "number": 182, "title": "Add db filesize next to download link", "user": {"value": 3433657, "label": "raynae"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-01-25T04:58:56Z", "updated_at": "2019-03-22T13:50:57Z", "closed_at": "2019-02-06T04:59:38Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/182", "body": "Took a stab at #172, will this do the trick?", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", 
"active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/182/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 291639118, "node_id": "MDU6SXNzdWUyOTE2MzkxMTg=", "number": 183, "title": "Custom Queries - escaping strings", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-01-25T16:49:13Z", "updated_at": "2019-06-24T06:45:07Z", "closed_at": "2019-06-24T06:45:07Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If a SQLite table column name contains spaces, they are usually referred to in double quotes:\r\n\r\n`SELECT * FROM mytable WHERE \"gappy column name\"=\"my value\";`\r\n\r\nIn the JSON metadata file, this is passed by escaping the double quotes:\r\n\r\n`\"queries\": {\"my query\": \"SELECT * FROM mytable WHERE \\\"gappy column name\\\"=\\\"my value\\\";\"}`\r\n\r\nWhen specifying a custom query in `metadata.json` using double quotes, these are then rendered in the *datasette* query box using single quotes:\r\n\r\n`SELECT * FROM mytable WHERE 'gappy column name'='my value';`\r\n\r\nwhich does not work.\r\n\r\nAlternatively, a valid custom query can be passed using backticks (\\`) to quote the column name and single (unescaped) quotes for the matched value:\r\n\r\n``\"queries\": {\"my query\": \"SELECT * FROM mytable WHERE `gappy column name`='my value';\"}``\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/183/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 313494458, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxMDMzMDI0", "number": 200, "title": "Hide Spatialite system tables", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-04-11T21:26:58Z", "updated_at": "2018-04-12T21:34:48Z", "closed_at": "2018-04-12T21:34:48Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/200", "body": "They were getting on my nerves.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/200/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 313785206, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxMjQ3NTY4", "number": 202, "title": "Raise 404 on nonexistent table URLs", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-12T15:47:06Z", "updated_at": "2018-04-13T19:22:56Z", "closed_at": "2018-04-13T18:19:15Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/202", "body": "Currently they just 500. 
Also cleaned the logic up a bit, I hope I didn't miss anything.\r\n\r\nThis is issue #184.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/202/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 313837303, "node_id": "MDU6SXNzdWUzMTM4MzczMDM=", "number": 203, "title": "Support for units", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-04-12T18:24:28Z", "updated_at": "2018-04-16T21:59:17Z", "closed_at": "2018-04-16T21:59:17Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be nice to be able to attach a unit to a column in the metadata, and have it rendered with that unit (and SI prefix) when it's displayed.\r\n\r\nIt would also be nice to support entering the prefixes in variables when querying.\r\n\r\nWith my radio licensing app I've put all frequencies in Hz. It's easy enough to special-case the row rendering to add the SI prefixes, but it's pretty unusable when querying by that field.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/203/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 314256802, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxNjAwOTI2", "number": 204, "title": "Initial units support", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-13T21:32:49Z", "updated_at": "2018-04-14T09:44:33Z", "closed_at": "2018-04-14T03:32:54Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/204", "body": "Add support for specifying units for a column in metadata.json and rendering them on display using [pint](https://pint.readthedocs.io/en/latest/).\r\n\r\nExample table metadata:\r\n```json\r\n \"license_frequency\": {\r\n \"units\": {\r\n \"frequency\": \"Hz\",\r\n \"channel_width\": \"Hz\",\r\n \"height\": \"m\",\r\n \"antenna_height\": \"m\",\r\n \"azimuth\": \"degrees\"\r\n }\r\n }\r\n```\r\n\r\n[Example result](https://wtr-api.herokuapp.com/wtr-663ea99/license_frequency/1)\r\n\r\nThis works surprisingly well! 
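(For a sense of what pint is doing here, a minimal sketch with a stock UnitRegistry -- not the actual rendering code in this PR:)\r\n\r\n```python\r\nimport pint\r\n\r\nureg = pint.UnitRegistry()\r\nfreq = 2450000000 * ureg.Hz\r\n# to_compact() picks a sensible SI prefix; \"~P\" is pint's short pretty format\r\nprint(\"{:~P}\".format(freq.to_compact()))  # -> 2.45 GHz\r\n```\r\n\r\n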
I'd like to add support for using units when querying but this PR is pretty usable as-is.\r\n\r\n(Pint doesn't seem to support decibels though - it thinks they're decibytes - which is an annoying omission.)\r\n\r\n(ref ticket #203)", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/204/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 314319372, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxNjQyMTE0", "number": 205, "title": "Support filtering with units and more", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-04-14T10:47:51Z", "updated_at": "2018-04-14T15:24:04Z", "closed_at": "2018-04-14T15:24:04Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/205", "body": "The first commit:\r\n* Adds units to exported JSON\r\n* Adds units key to metadata skeleton\r\n* Adds some docs for units\r\n\r\nThe second commit adds filtering by units by the first method I mentioned in #203:\r\n![image](https://user-images.githubusercontent.com/45057/38767463-7193be16-3fd9-11e8-8a5f-ac4159415c6d.png)\r\n\r\n[Try it here](https://wtr-api.herokuapp.com/wtr-663ea99/license_frequency?frequency__gt=50GHz&height__lt=50ft). I think it integrates pretty neatly.\r\n\r\nThe third commit adds support for registering custom units with Pint from metadata.json. Probably pretty niche, but I need decibels!", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/205/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 314323977, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxNjQ0ODA1", "number": 206, "title": "Fix sqlite error when loading rows with no incoming FKs", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-14T12:08:17Z", "updated_at": "2018-04-14T14:32:42Z", "closed_at": "2018-04-14T14:24:25Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/206", "body": "This fixes `ERROR: conn=, sql\r\n= 'select ', params = {'id': '1'}` caused by an invalid query loading incoming FKs when none exist.\r\n\r\nThe error was ignored due to async but it still got printed to the console.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/206/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 314329002, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxNjQ3NzE3", "number": 207, "title": "Link foreign keys which don't have labels", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-04-14T13:27:14Z", "updated_at": "2018-04-14T15:00:00Z", "closed_at": 
"2018-04-14T15:00:00Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/207", "body": "This renders unlabeled FKs as simple links. I can't see why this would cause any major problems.\r\n\r\n![image](https://user-images.githubusercontent.com/45057/38768722-ea15a000-3fef-11e8-8664-ffd7aa4894ea.png)\r\n\r\nAlso includes bonus fixes for two minor issues:\r\n\r\n* In foreign key link hrefs the primary key was escaped using HTML escaping rather than URL escaping. This broke some non-integer PKs.\r\n* Print tracebacks to console when handling 500 errors.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/207/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 314340944, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxNjU0ODM5", "number": 208, "title": "Return HTTP 405 on InvalidUsage rather than 500", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-14T16:12:50Z", "updated_at": "2018-04-14T18:00:39Z", "closed_at": "2018-04-14T18:00:39Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/208", "body": "This also stops it filling up the logs. This happens for HEAD requests at the moment - which perhaps should be handled better, but that's a different issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/208/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 314455877, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxNzIzMzAz", "number": 209, "title": " Don't duplicate simple primary keys in the link column", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2018-04-15T21:56:15Z", "updated_at": "2018-04-18T08:40:37Z", "closed_at": "2018-04-18T01:13:04Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/209", "body": "When there's a simple (single-column) primary key, it looks weird to duplicate it in the link column.\r\n\r\nThis change removes the second PK column and treats the link column as if it were the PK column from a header/sorting perspective. \r\n\r\nThis might make it a bit more difficult to tell what the link for the row is, I'm not sure yet. I feel like the alternative is to change the link column to just have the text \"view\" or something, instead of repeating the PK. 
(I doubt it makes much more sense with compound PKs.)\r\n\r\nBonus change in this PR: fix urlencoding of links in the displayed HTML.\r\n\r\nBefore:\r\n![image](https://user-images.githubusercontent.com/45057/38783830-e2ababb4-40ff-11e8-97fb-25e286a8c920.png)\r\n\r\nAfter:\r\n![image](https://user-images.githubusercontent.com/45057/38783835-ebf6b48e-40ff-11e8-8c47-6a864cf21ccc.png)", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/209/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 316365426, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgzMTM1NjA0", "number": 232, "title": "Fix a typo", "user": {"value": 45281, "label": "lsb"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-04-20T18:20:04Z", "updated_at": "2018-04-21T00:19:08Z", "closed_at": "2018-04-21T00:19:08Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/232", "body": "It looks like this was the only instance of it: https://github.com/simonw/datasette/search?utf8=%E2%9C%93&q=SOLite&type=", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/232/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 324835838, "node_id": "MDU6SXNzdWUzMjQ4MzU4Mzg=", "number": 276, "title": "Handle spatialite geometry columns better", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 21, "created_at": "2018-05-21T08:46:55Z", "updated_at": "2022-03-21T22:22:20Z", "closed_at": "2022-03-21T22:22:20Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'd like to see spatialite geometry columns rendered more sensibly - at the moment they come through as well-known-binary unless you use custom SQL, and WKB isn't of much use to anyone on the web.\r\n\r\nIn HTML: they should be shown either as simple lat/long (if it's just a point, for example), or as a sensible placeholder if they're more complex geometries.\r\n\r\nIn JSON: they should be GeoJSON geometries, (which means they can be automatically fed into a leaflet map with no further messing around).\r\n\r\nIn CSV: they should be WKT.\r\n\r\nI briefly wondered if this should go into a plugin, but I suspect it needs hooking in at a deeper level than the plugin architecture will support any time soon.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/276/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 324836533, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg5MzE4NDUz", "number": 277, "title": "Refactor inspect logic", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": 
"2018-05-21T08:49:31Z", "updated_at": "2018-05-22T16:07:24Z", "closed_at": "2018-05-22T14:03:07Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/277", "body": "This pulls the logic for inspect out into a new file which makes it a bit easier to understand.\r\n\r\nThis was going to be the first part of an implementation for #276, but it seems like that might take a while so I'm going to PR a few bits of refactoring individually.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/277/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 325352370, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg5NzA3Mzc0", "number": 279, "title": "Add version number support with Versioneer", "user": {"value": 198537, "label": "rgieseke"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-05-22T15:39:45Z", "updated_at": "2018-05-22T19:35:23Z", "closed_at": "2018-05-22T19:35:22Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/279", "body": "I think that's all for getting Versioneer support, I've been happily using it in a couple of projects ... \r\n\r\n```\r\nIn [2]: datasette.__version__\r\nOut[2]: '0.22+3.g6e12445'\r\n```\r\nRepo:\r\nhttps://github.com/warner/python-versioneer\r\n\r\nVersioneer Licence:\r\nPublic Domain (CC0-1.0)\r\n\r\nCloses #273\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/279/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 325373747, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg5NzIzNzE2", "number": 280, "title": "Build Dockerfile with recent Sqlite + Spatialite", "user": {"value": 565628, "label": "r4vi"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-05-22T16:33:50Z", "updated_at": "2018-06-28T11:26:23Z", "closed_at": "2018-05-23T17:43:35Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/280", "body": "This solves #278 without bloating the Dockerfile too much, the image size is now\r\n495MB (original was ~240MB) but it could be reduced significantly if we only\r\ncopied the output of the compilation of spatialite and friends to\r\n/usr/local/lib, instead of the entirety of it however that will take more time.\r\n\r\nIn the python code change references to `import sqlite3` to `import pysqlite3`\r\nand it should use the compiled version of sqlite3.23.1. 
You don't need to\r\ntry/except because pysqlite3 falls back to builtin sqlite3 if there is no\r\ncompiled version.\r\n\r\n```bash\r\n $ docker run --rm -it datasette spatialite\r\n SpatiaLite version ..: 4.4.0-RC0\tSupported Extensions:\r\n - 'VirtualShape'\t[direct Shapefile access]\r\n - 'VirtualDbf'\t\t[direct DBF access]\r\n - 'VirtualXL'\t\t[direct XLS access]\r\n - 'VirtualText'\t\t[direct CSV/TXT access]\r\n - 'VirtualNetwork'\t[Dijkstra shortest path]\r\n - 'RTree'\t\t[Spatial Index - R*Tree]\r\n - 'MbrCache'\t\t[Spatial Index - MBR cache]\r\n - 'VirtualSpatialIndex'\t[R*Tree metahandler]\r\n - 'VirtualElementary'\t[ElemGeoms metahandler]\r\n - 'VirtualKNN'\t[K-Nearest Neighbors metahandler]\r\n - 'VirtualXPath'\t[XML Path Language - XPath]\r\n - 'VirtualFDO'\t\t[FDO-OGR interoperability]\r\n - 'VirtualGPKG'\t[OGC GeoPackage interoperability]\r\n - 'VirtualBBox'\t\t[BoundingBox tables]\r\n - 'SpatiaLite'\t\t[Spatial SQL - OGC]\r\n PROJ.4 version ......: Rel. 4.9.3, 15 August 2016\r\n GEOS version ........: 3.5.1-CAPI-1.9.1 r4246\r\n TARGET CPU ..........: x86_64-linux-gnu\r\n the SPATIAL_REF_SYS table already contains some row(s)\r\n SQLite version ......: 3.23.1\r\n Enter \".help\" for instructions\r\n SQLite version 3.23.1 2018-04-10 17:39:29\r\n Enter \".help\" for instructions\r\n Enter SQL statements terminated with a \";\"\r\n spatialite>\r\n```\r\n\r\n```bash\r\n$ docker run --rm -it datasette python -c \"import pysqlite3; print(pysqlite3.sqlite_version)\"\r\n3.23.1\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/280/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 336924199, "node_id": "MDU6SXNzdWUzMzY5MjQxOTk=", "number": 330, "title": "Limit text display in cells containing large amounts of text", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-06-29T09:15:22Z", "updated_at": "2018-07-24T04:53:20Z", "closed_at": "2018-07-10T16:20:48Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "The default preview of a database shows all columns (is the row count limited?) which is fine in many cases but can take a long time to load / offer a large overhead if the table is a SpatiaLite table containing geometry columns that include large shapefiles.\r\n\r\nWould it make sense to have a setting that can limit the amount of text displayed in any given cell in the table preview, or (less useful?) suppress (with notification) the display of overlong columns unless enabled by the user?\r\n\r\nAn issue then arises if a user does want to see all the text in a cell:\r\n\r\n 1) for a particular cell;\r\n 2) for every cell in the table;\r\n 3) for all cells in a particular column or columns\r\n\r\n(I haven't checked but what if a column contains e.g. raw image data? Does this display as raw data? Or can this be rendered in a context aware way as an image preview? 
I guess a custom template would be one way to do that?)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/330/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 336936010, "node_id": "MDU6SXNzdWUzMzY5MzYwMTA=", "number": 331, "title": "Datasette throws error when loading spatialite db without extension loaded", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-29T09:51:14Z", "updated_at": "2022-01-20T21:29:40Z", "closed_at": "2018-07-10T15:13:36Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When starting datasette on a SpatiaLite database *without* loading the SpatiaLite extension (using eg `--load-extension=/usr/local/lib/mod_spatialite.dylib`) an error is thrown and the server fails to start:\r\n\r\n```\r\ndatasette -p 8003 adminboundaries.db \r\nServe! files=('adminboundaries.db',) on port 8003\r\nTraceback (most recent call last):\r\n File \"/Users/ajh59/anaconda3/bin/datasette\", line 11, in <module>\r\n sys.exit(cli())\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/cli.py\", line 552, in serve\r\n ds.inspect()\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/app.py\", line 273, in inspect\r\n \"tables\": inspect_tables(conn, self.metadata.get(\"databases\", {}).get(name, {}))\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/inspect.py\", line 79, in inspect_tables\r\n \"PRAGMA table_info({});\".format(escape_sqlite(table))\r\nsqlite3.OperationalError: no such module: VirtualSpatialIndex\r\n``` \r\n\r\nIt would be nice to trap this and return a message saying something like:\r\n\r\n```\r\nIt looks like you're trying to load a SpatiaLite database? 
Make sure you load in the SpatiaLite extension when starting datasette.\r\n\r\nRead more: https://datasette.readthedocs.io/en/latest/spatialite.html\r\n```\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/331/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 341229113, "node_id": "MDU6SXNzdWUzNDEyMjkxMTM=", "number": 344, "title": "datasette publish heroku fails without name provided", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-07-14T11:15:56Z", "updated_at": "2018-07-14T13:00:48Z", "closed_at": "2018-07-14T13:00:48Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It fails with the following JSON traceback if the `-n` option isn't provided, despite the fact that the command line help says that's not needed for heroku publishes.\r\n\r\n
```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasette\", line 11, in <module>\r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/datasette/cli.py\", line 265, in publish\r\n app_name = json.loads(create_output)[\"name\"]\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py\", line 354, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py\", line 339, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py\", line 357, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n```\r\n\r\n
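The proximate failure, per the traceback: `publish` does `app_name = json.loads(create_output)[\"name\"]`, and `json.loads` on empty output raises exactly this error. A guard along these lines would surface a real message instead (a sketch only -- the wording is mine):\r\n\r\n```python\r\nimport json\r\n\r\nimport click\r\n\r\ndef parse_app_name(create_output):\r\n    # heroku wrote nothing parseable to stdout, so fail with a helpful hint\r\n    if not create_output.strip():\r\n        raise click.ClickException(\"heroku apps:create produced no output; try passing -n/--name\")\r\n    return json.loads(create_output)[\"name\"]\r\n```\r\n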
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/344/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 341235633, "node_id": "MDExOlB1bGxSZXF1ZXN0MjAxNDUxMzMy", "number": 345, "title": "Allow app names for `datasette publish heroku`", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-07-14T13:12:34Z", "updated_at": "2018-07-14T14:09:54Z", "closed_at": "2018-07-14T14:04:44Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/345", "body": "Lets you supply the `-n` parameter for Heroku deploys, which also lets you update existing Heroku deployments.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/345/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 361764460, "node_id": "MDExOlB1bGxSZXF1ZXN0MjE2NjUxMzE3", "number": 365, "title": "fix small doc typo", "user": {"value": 418191, "label": "jaywgraves"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-09-19T14:02:02Z", "updated_at": "2019-12-19T02:30:33Z", "closed_at": "2018-09-19T17:15:43Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/365", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/365/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 369716228, "node_id": "MDU6SXNzdWUzNjk3MTYyMjg=", "number": 366, "title": "Default built image size over Zeit Now 100MiB limit", "user": {"value": 416374, "label": "gfrmin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-10-12T21:27:17Z", "updated_at": "2018-11-05T06:23:32Z", "closed_at": "2018-11-05T06:23:32Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Using `dataset publish now` with no other custom options on a small (43KB) sqlite database leads to the error \"The built image size (373.5M) exceeds the 100MiB limit\". 
I think this is because of a recent Zeit change: https://github.com/zeit/now-cli/issues/1523", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/366/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 374675798, "node_id": "MDExOlB1bGxSZXF1ZXN0MjI2MzE0ODYy", "number": 367, "title": "Mark codemirror files as vendored", "user": {"value": 48517, "label": "jaap3"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-10-27T18:41:25Z", "updated_at": "2019-05-03T21:12:09Z", "closed_at": "2019-05-03T21:11:20Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/367", "body": "GitHub lists datasette as a JavaScript project, primarily because of the vendored codemirror files. This is somewhat confusing when you're looking for datasette, knowing it's written in Python.\r\n\r\nLuckily it's possible to exclude certain files from GitHub's code statistics: https://github.com/github/linguist#using-gitattributes", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/367/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 374676773, "node_id": "MDExOlB1bGxSZXF1ZXN0MjI2MzE1NTEz", "number": 368, "title": "Update installation instructions", "user": {"value": 48517, "label": "jaap3"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-10-27T18:52:31Z", "updated_at": "2019-05-03T18:18:43Z", "closed_at": "2019-05-03T18:18:42Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/368", "body": "I was writing this as a response to your tweet, but decided I might just make it a pull request.\r\n\r\nI feel like it might be confusing to those unfamiliar with Python's `-m` flag and the built-in `venv` module to omit the space between the flag and its argument. 
By adding a space and prefixing the second occurrence of `venv` with a `./` it's maybe a bit clearer what the arguments are and what they do.\r\n\r\nBy also using `python3 -m pip` it becomes even clearer that `-m` is a special flag that makes the python executable do neat things.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/368/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 377156339, "node_id": "MDU6SXNzdWUzNzcxNTYzMzk=", "number": 371, "title": "datasette publish digitalocean plugin", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-11-04T14:07:41Z", "updated_at": "2021-01-04T20:14:28Z", "closed_at": "2021-01-04T20:14:28Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Provide support for launching `datasette` on Digital Ocean.\r\n\r\nExample: [Deploy Docker containers into Digital Ocean](https://blog.machinebox.io/deploy-machine-box-in-digital-ocean-385265fbeafd).\r\n\r\nDigital Ocean also has a preconfigured VM running Docker that can be launched from the command line via the Digital Ocean API: [Docker One-Click Application](https://www.digitalocean.com/docs/one-clicks/docker/).\r\n\r\nRelated:\r\n- Launching containers in Digital Ocean servers running docker: [How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-provision-and-manage-remote-docker-hosts-with-docker-machine-on-ubuntu-16-04)\r\n- [How To Use Doctl, the Official DigitalOcean Command-Line Client](https://www.digitalocean.com/community/tutorials/how-to-use-doctl-the-official-digitalocean-command-line-client)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/371/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 377266351, "node_id": "MDU6SXNzdWUzNzcyNjYzNTE=", "number": 373, "title": "Views should be shown on root/index page along with tables", "user": {"value": 416374, "label": "gfrmin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4305096, "label": "0.28"}, "comments": 1, "created_at": "2018-11-05T06:28:41Z", "updated_at": "2019-05-16T00:29:22Z", "closed_at": "2019-05-16T00:29:22Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the moment the number of views is given on a datasette \"homepage\", but not links to any views themselves", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/373/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 386459810, "node_id": "MDExOlB1bGxSZXF1ZXN0MjM1MTk0Mjg2", "number": 390, "title": "tiny typo in customization docs", 
"user": {"value": 418191, "label": "jaywgraves"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-12-01T13:44:42Z", "updated_at": "2019-12-19T02:30:35Z", "closed_at": "2018-12-16T21:32:56Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/390", "body": "was looking to add some custom templates to my use of datasette and saw this small typo.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/390/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 398559195, "node_id": "MDU6SXNzdWUzOTg1NTkxOTU=", "number": 400, "title": "datasette publish cloudrun plugin", "user": {"value": 10352819, "label": "rprimet"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-01-12T14:35:11Z", "updated_at": "2019-05-03T16:57:35Z", "closed_at": "2019-05-03T16:57:35Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Google announced that they may launch a simple service for running Docker containers (previously serverless containers, now called \"cloud run\" -- link to alpha [here](https://services.google.com/fb/forms/serverlesscontainers/)). If/when this happens, it might be a good fit for publishing datasettes? (at least using the current version, manually publishing a datasette seems relatively painless).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/400/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 405801771, "node_id": "MDExOlB1bGxSZXF1ZXN0MjQ5NjgwOTQ0", "number": 9, "title": ":pencil: Updates my_database.py to my_database.db", "user": {"value": 50527, "label": "jefftriplett"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-02-01T17:35:43Z", "updated_at": "2019-02-24T03:55:04Z", "closed_at": "2019-02-24T03:55:04Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/9", "body": "I noticed that both `.py` and `.db` were used in the docs and assumed you'd prefer `.db`. 
", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/9/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 413887019, "node_id": "MDExOlB1bGxSZXF1ZXN0MjU1NzI1MDU3", "number": 413, "title": "Update spatialite.rst", "user": {"value": 28597217, "label": "joelondon"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-02-25T00:08:35Z", "updated_at": "2019-03-15T05:06:45Z", "closed_at": "2019-03-15T05:06:45Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/413", "body": "a line of sql added to create the idx_ in the python recipe", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/413/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 415575624, "node_id": "MDU6SXNzdWU0MTU1NzU2MjQ=", "number": 414, "title": "datasette requires specific version of Click", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-02-28T11:24:59Z", "updated_at": "2019-03-15T04:42:13Z", "closed_at": "2019-03-15T04:42:13Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Is `datasette` beholden to version `click==6.7`?\r\n\r\nCurrent release is at 7.0. 
Can the requirement be liberalised, eg to `>=6.7`?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/414/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 427429265, "node_id": "MDExOlB1bGxSZXF1ZXN0MjY2MDM1Mzgy", "number": 424, "title": "Column types in inspected metadata", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-03-31T18:46:33Z", "updated_at": "2019-04-29T18:30:50Z", "closed_at": "2019-04-29T18:30:46Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/424", "body": "This PR does two things:\r\n\r\n* Adds the sqlite column type for each column to the inspected table info.\r\n* Stops binary columns from being rendered to HTML, unless a plugin handles it.\r\n\r\nThere's a bit more detail in the changeset descriptions.\r\n\r\nThese changes are intended as a precursor to a plugin which adds first-class support for Spatialite geographic primitives, and perhaps more useful geo-stuff.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/424/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 432870248, "node_id": "MDU6SXNzdWU0MzI4NzAyNDg=", "number": 431, "title": "Datasette doesn't reload when database file changes", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-04-13T16:50:43Z", "updated_at": "2019-05-02T05:13:55Z", "closed_at": "2019-05-02T05:13:54Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "My understanding of the `--reload` option was that if the database file changed `datasette` would automatically reload.\r\n\r\nI'm running on a Mac and from the `datasette` UI queries don't seem to be picking up data in a newly changed db (I checked the db timestamp - it certainly updated).\r\n\r\nI was also expecting to see some sort of log statement in the datasette logging to say that it had detected a file change and restarted, but don't see anything there?\r\n\r\nWill try to check on an Ubuntu box when I get a chance to see if this is a Mac thing.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/431/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 434321685, "node_id": "MDExOlB1bGxSZXF1ZXN0MjcxMzM4NDA1", "number": 434, "title": "\"datasette publish cloudrun\" command to publish to Google Cloud Run", "user": {"value": 10352819, "label": "rprimet"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2019-04-17T14:41:18Z", "updated_at": "2019-05-03T21:50:44Z", "closed_at": "2019-05-03T13:59:02Z", 
"author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/434", "body": "This is a very rough draft to start a discussion on a possible datasette cloud run publish plugin (see issue #400).\r\n\r\nThe main change was to dynamically set the listening port in `make_dockerfile` to satisfy cloud run's [requirements](https://cloud.google.com/run/docs/reference/container-contract).\r\n\r\nThis was done by running `datasette` through `sh` to get environment variable substitution. Not sure if that's the right approach?\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/434/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 438048318, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc0MTc0NjE0", "number": 437, "title": "Add inspect and prepare_sanic hooks", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-28T11:53:34Z", "updated_at": "2019-06-24T16:38:57Z", "closed_at": "2019-06-24T16:38:56Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/437", "body": "This adds two new plugin hooks:\r\n\r\nThe `inspect` hook allows plugins to add data to the inspect dictionary.\r\n\r\nThe `prepare_sanic` hook allows plugins to hook into the web router. I've attached a warning to this hook in the docs in light of #272 but I want this hook now...\r\n\r\nOn quick inspection, I don't think it's worthwhile to try and make this hook independent of the web framework (but it looks like Starlette would make the hook implementation a bit nicer).\r\n\r\nRef #14", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/437/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 438200529, "node_id": "MDU6SXNzdWU0MzgyMDA1Mjk=", "number": 438, "title": "Plugins are loaded when running pytest", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-29T08:25:58Z", "updated_at": "2019-05-02T05:09:18Z", "closed_at": "2019-05-02T05:09:11Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If I have a datasette plugin installed on my system, its hooks are called when running the main datasette tests. 
This is probably undesirable, especially with the inspect hook in #437, as the plugin may rely on inspected state that the tests don't know about.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/438/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 438240541, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc0MzEzNjI1", "number": 439, "title": "[WIP] Add primary key to the extra_body_script hook arguments", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-29T10:08:23Z", "updated_at": "2019-05-01T09:58:32Z", "closed_at": "2019-05-01T09:58:30Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/439", "body": "This allows the row to be identified on row pages. The context here is that I want to access the row's data to plot it on a map.\r\n\r\nI considered passing the entire template context through to the hook function. This would expose the actual row data and potentially avoid a further fetch request in JS, but it does make the plugin API a lot more leaky. \r\n\r\n(At any rate, using the selected row data is tricky in my case because of Spatialite's infuriating custom binary representation...)", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/439/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 438259941, "node_id": "MDU6SXNzdWU0MzgyNTk5NDE=", "number": 440, "title": "Plugin hook for additional data export formats", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-04-29T11:01:39Z", "updated_at": "2019-05-01T23:01:57Z", "closed_at": "2019-05-01T23:01:57Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be nice to have a simple way for plugins to provide additional data export formats. Might require a bit of work on the internals. I can work around this at a lower level with the `prepare_sanic` hook from #437 in the mean time.\r\n\r\nI guess plugins should be able to register a function which takes a row or list of rows and returns the rendered data. 
They'll also need to provide a file extension and probably a Content-Type.\r\n\r\nDatasette could then automatically include this format in the list of export formats on each page.\r\n\r\nLooks like this is related to #119.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/440/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 438437973, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc0NDY4ODM2", "number": 441, "title": "Add register_output_renderer hook", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2019-04-29T18:03:21Z", "updated_at": "2019-05-01T23:01:57Z", "closed_at": "2019-05-01T23:01:57Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/441", "body": "This changeset refactors out the JSON renderer and then adds a hook and\r\ndispatcher system to allow custom output renderers to be registered.\r\n\r\nThe CSV output renderer is untouched because supporting streaming\r\nrenderers through this system would be significantly more complex, and\r\nprobably not worthwhile.\r\n\r\nWe can't simply allow hooks to be called at request time because we need\r\na list of supported file extensions when the request is being routed in\r\norder to resolve ambiguous database/table names. So, renderers need to\r\nbe registered at startup.\r\n\r\nI've tried to make this API independent of Sanic's request/response\r\nobjects so that this can remain stable during the switch to ASGI. I'm\r\nusing dictionaries to keep it simple and to make adding additional\r\noptions in the future easy.\r\n\r\nFixes #440", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/441/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 438450757, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc0NDc4NzYx", "number": 442, "title": "Suppress rendering of binary data", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-29T18:36:41Z", "updated_at": "2019-05-03T18:26:48Z", "closed_at": "2019-05-03T16:44:49Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/442", "body": "Binary columns (including spatialite geographies) get shown as ugly\r\nbinary strings in the HTML by default. Nobody wants to see that mess.\r\n\r\nShow the size of the column in bytes instead. 
If you want to decode\r\nthe binary data, you can use a plugin to do it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/442/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 439480260, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1Mjc1NjEw", "number": 443, "title": "Pass view_name to extra_body_script hook", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-02T08:38:36Z", "updated_at": "2019-05-03T13:12:20Z", "closed_at": "2019-05-03T13:12:20Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/443", "body": "At the moment it's not easy to tell whether the hook is being called\r\nin (for example) the row or table view, as in both cases the\r\n`database` and `table` parameters are provided.\r\n\r\nThis passes the `view_name` added in #441 to the `extra_body_script`\r\nhook.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/443/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 439487648, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1MjgxMzA3", "number": 444, "title": "Add a max-line-length setting for flake8", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-02T08:58:57Z", "updated_at": "2019-05-04T09:44:48Z", "closed_at": "2019-05-03T13:11:28Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/444", "body": "This stops my automatic editor linting from flagging lines which are too\r\nlong. It's been lingering in my checkout for ages.\r\n\r\n160 is an arbitrary large number - we could alter it if we have any\r\nopinions (but I find the line length limit to be my least favourite part\r\nof PEP8).", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/444/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 440304714, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1OTA5MTk3", "number": 450, "title": "Coalesce hidden table count to 0", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-05-04T09:37:10Z", "updated_at": "2019-05-11T18:10:09Z", "closed_at": "2019-05-11T18:10:09Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/450", "body": "For some reason I'm hitting a `None` here with a FTS table. 
I'm not\r\nentirely sure why but this makes the logic work the same as with\r\nnon-hidden tables.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/450/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 442327592, "node_id": "MDU6SXNzdWU0NDIzMjc1OTI=", "number": 456, "title": "Installing installs the tests package", "user": {"value": 7725188, "label": "hellerve"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-05-09T16:35:16Z", "updated_at": "2020-07-24T20:39:54Z", "closed_at": "2020-07-24T20:39:54Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Because `setup.py` uses `find_packages` and `tests` is on the top-level, `pip install datasette` will install a top-level package called `tests`, which is probably not desired behavior.\r\n\r\nThe offending line is here:\r\nhttps://github.com/simonw/datasette/blob/bfa2ae0d16d39bb82dbe4da4f3fdc3c7f6257418/setup.py#L40\r\n\r\nAnd only `pip uninstall datasette` with a conflicting package would warn you by default; apparently another package had the same problem, which is why I get this message when uninstalling:\r\n\r\n```\r\n$ pip uninstall datasette\r\nUninstalling datasette-0.27:\r\n Would remove:\r\n /usr/local/bin/datasette\r\n /usr/local/lib/python3.7/site-packages/datasette-0.27.dist-info/*\r\n /usr/local/lib/python3.7/site-packages/datasette/*\r\n /usr/local/lib/python3.7/site-packages/tests/*\r\n Would not remove (might be manually added):\r\n [ .. snip .. ]\r\nProceed (y/n)? 
\r\n```\r\n\r\nThis should be a relatively simple fix, and I could drop a PR if desired!\r\n\r\nCheers", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/456/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 442402832, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc3NTI0MDcy", "number": 458, "title": "setup: add tests to package exclusion", "user": {"value": 7725188, "label": "hellerve"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-05-09T19:47:21Z", "updated_at": "2020-07-21T01:14:42Z", "closed_at": "2019-05-10T01:54:51Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/458", "body": "This PR fixes #456 by adding `tests` to the package exclusion list.\r\n\r\nCheers", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/458/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 445873563, "node_id": "MDExOlB1bGxSZXF1ZXN0MjgwMjA0Mjc2", "number": 479, "title": "doc typo fix", "user": {"value": 98555, "label": "IgnoredAmbience"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-05-19T22:54:25Z", "updated_at": "2019-05-20T16:42:29Z", "closed_at": "2019-05-20T16:42:29Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/479", "body": "Fix typo in performance doc page", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/479/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 445875242, "node_id": "MDExOlB1bGxSZXF1ZXN0MjgwMjA1NTAy", "number": 480, "title": "Split pypi and docker travis tasks", "user": {"value": 813732, "label": "glasnt"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4471010, "label": "Datasette 0.29"}, "comments": 1, "created_at": "2019-05-19T23:14:37Z", "updated_at": "2019-07-07T20:03:20Z", "closed_at": "2019-07-07T20:03:20Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/480", "body": "Resolves #478 \r\n\r\nThis *should* work, but because this is a change that'll only really be testable on a) this repo, b) master branch, this might fail fast if I didn't get the configurations right. \r\n\r\n\r\nLooking at #478 it should just be as simple as splitting out the docker and pypi processes into separate jobs, but it might end up being more complicated than that, depending on what pre-processes the pypi deployment needs, and how travisci treats deployment steps without scripts in general. 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/480/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 451705509, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjg0NzQzNzk0", "number": 500, "title": "Fix typo in install step: should be install -e", "user": {"value": 32314, "label": "tmcw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-06-03T21:50:51Z", "updated_at": "2019-06-11T18:48:43Z", "closed_at": "2019-06-11T18:48:40Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/500", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/500/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 459627549, "node_id": "MDU6SXNzdWU0NTk2Mjc1NDk=", "number": 523, "title": "Show total/unfiltered row count when filtering", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-06-23T22:56:48Z", "updated_at": "2019-06-24T01:38:14Z", "closed_at": "2019-06-24T01:38:14Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When I'm seeing a filtered view of a table, I'd like to be able to see something like '2 rows where status != \"closed\" (of 1000 total)' to have a context for the data I'm seeing \u2013 e.g. currently my database is being filled by an importer, so this information would be super helpful.\r\n\r\nSince this information would be a performance hit, maybe something like '12 rows where status != \"closed\" (of ??? total)' with lazy-loading on-click(?) could be applied (Or via a \"How many total?\" tooltip, or \u2026)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/523/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 465728430, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk1NzExNTA0", "number": 554, "title": "Fix static mounts using relative paths and prevent traversal exploits", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2019-07-09T11:32:02Z", "updated_at": "2019-07-11T16:29:26Z", "closed_at": "2019-07-11T16:13:19Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/554", "body": "While debugging why my static mounts using a relative path (`--static mystatic:rel/path/to/dir`) not working, I noticed that the requests fail no matter what, returning 404 errors. \r\n\r\nThe reason is that datasette tries to prevent traversal exploits by checking if the path is relative to its registered directory. 
This check fails when the mount is a relative directory, because `/abs/dir/file` is obviously not under `dir/file`. \r\n\r\nhttps://github.com/simonw/datasette/blob/81fa8b6cdc5457b42a224779e5291952314e8d20/datasette/utils/asgi.py#L303-L306\r\n\r\nThis also has the consequence of returning any requested file, because when `/abs/dir/../../evil.file` resolves, `aiofiles` happily returns it to the client after it resolves the path itself. The solution is to make sure we're checking the relativity of paths after they're fully resolved.\r\n\r\nI've implemented the mentioned changes and also updated the tests.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/554/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 465731062, "node_id": "MDU6SXNzdWU0NjU3MzEwNjI=", "number": 555, "title": "Static mounts with relative paths not working", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-07-09T11:38:35Z", "updated_at": "2019-07-11T16:13:22Z", "closed_at": "2019-07-11T16:13:22Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Datasette fails to serve files from static mounts that are created using relative paths `datasette --static mystatic:rel/path/to/static/dir`. \r\nI've explained the problem and the solution in the pull request: https://github.com/simonw/datasette/pull/554", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/555/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 465773546, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk1NzQ4MjY4", "number": 556, "title": "Add support for running datasette as a module", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-07-09T13:13:30Z", "updated_at": "2019-07-11T16:07:45Z", "closed_at": "2019-07-11T16:07:44Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/556", "body": "This PR allows running datasette using the `python -m datasette` command in addition to just running the executable.\r\n\r\nThis is quite useful when debugging a plugin in a project, because IDEs like PyCharm can easily start a debug session when datasette is run as a module, in contrast to trying to attach a debugger to a running process.\r\n\r\n![image](https://user-images.githubusercontent.com/3243482/60890448-fc4ede80-a263-11e9-8b42-d2a3db8d1a59.png)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/556/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 469828961, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk4OTYyNTUx", "number": 561, "title": "Fix typos", "user": 
{"value": 15278512, "label": "minho42"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-07-18T15:13:35Z", "updated_at": "2019-07-26T10:25:45Z", "closed_at": "2019-07-26T10:25:45Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/561", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/561/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 471292050, "node_id": "MDU6SXNzdWU0NzEyOTIwNTA=", "number": 563, "title": "incorrect json url for row-level data?", "user": {"value": 10352819, "label": "rprimet"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-07-22T19:59:38Z", "updated_at": "2019-10-21T02:03:09Z", "closed_at": "2019-10-21T02:03:09Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "While visiting [this example page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/people/uk.org.publicwhip%2Fperson%2F10001) (linked from Datasette documentation), manually clicking on [the link](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/people/uk.org.publicwhip%2Fperson%2F10001?_format=json) (\"This data as .json\") to the json data results in an error 500 `data() got an unexpected keyword argument 'as_format'`\r\n\r\nThe [JSON page linked to from the documentation](https://register-of-members-interests.datasettes.com/regmem-d22c12c/people/uk.org.publicwhip%2Fperson%2F10001.json) however is correct (the page address ends in `.json` rather than using a query string `?format=json`)\r\n\r\nThis particular datasette demo page is now a few versions behind, but I was able to reproduce the issue using v0.29.2 and a downloaded copy of the demo database (and also with the current HEAD).\r\n\r\nHere is a stack trace:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 101, in __call__\r\n return await view(new_scope, receive, send)\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 173, in view\r\n request, **scope[\"url_route\"][\"kwargs\"]\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/views/base.py\", line 267, in get\r\n request, database, hash, correct_hash_provided, **kwargs\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/views/base.py\", line 399, in view_get\r\n request, database, hash, **kwargs\r\nTypeError: data() got an unexpected keyword argument 'as_format'\r\n\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/563/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 487847945, "node_id": "MDExOlB1bGxSZXF1ZXN0MzEzMDA3NDgz", "number": 56, "title": "Escape the table name in populate_fts and search.", "user": {"value": 49260, "label": "amjith"}, "state": "closed", 
"locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-09-01T06:29:05Z", "updated_at": "2019-09-02T17:23:21Z", "closed_at": "2019-09-02T17:23:21Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/56", "body": "The table names weren't escaped using double quotes in the populate_fts method. \r\n\r\nReproducible case: \r\n```\r\n>>> import sqlite_utils\r\n>>> db = sqlite_utils.Database(\"abc.db\")\r\n>>> db[\"http://example.com\"].insert_all([\r\n... {\"id\": 1, \"age\": 4, \"name\": \"Cleo\"},\r\n... {\"id\": 2, \"age\": 2, \"name\": \"Pancakes\"}\r\n... ], pk=\"id\")\r\n\r\n>>> db[\"http://example.com\"].enable_fts([\"name\"])\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n db[\"http://example.com\"].enable_fts([\"name\"])\r\n File \"/home/amjith/.virtualenvs/itsysearch/lib/python3.7/site-packages/sqlite_utils/db.py\", l\r\nine 705, in enable_fts\r\n self.populate_fts(columns)\r\n File \"/home/amjith/.virtualenvs/itsysearch/lib/python3.7/site-packages/sqlite_utils/db.py\", l\r\nine 715, in populate_fts\r\n self.db.conn.executescript(sql)\r\nsqlite3.OperationalError: unrecognized token: \":\"\r\n>>> \r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/56/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 487987958, "node_id": "MDExOlB1bGxSZXF1ZXN0MzEzMTA1NjM0", "number": 57, "title": "Add triggers while enabling FTS", "user": {"value": 49260, "label": "amjith"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2019-09-02T04:23:40Z", "updated_at": "2019-09-03T01:03:59Z", "closed_at": "2019-09-02T23:42:29Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/57", "body": "This adds the option for a user to set up triggers in the database to keep their FTS table in sync with the parent table. \r\n\r\nRef: https://sqlite.org/fts5.html#external_content_and_contentless_tables\r\n\r\nI would prefer to make the creation of triggers the default behavior, but that will break existing usage where people have been calling `populate_fts` after inserting new rows.\r\n\r\nI am happy to make changes to the PR as you see fit. ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/57/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 488293926, "node_id": "MDU6SXNzdWU0ODgyOTM5MjY=", "number": 58, "title": "Support enabling FTS on views", "user": {"value": 49260, "label": "amjith"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-09-02T18:56:36Z", "updated_at": "2020-10-16T18:39:36Z", "closed_at": "2020-10-16T18:39:31Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Right now enable_fts() is only implemented for Table(). Technically sqlite supports enabling fts on views. 
But it requires deeper thought since views don't have `rowid` and the current implementation of enable_fts() relies on the presence of a `rowid` column. \r\n\r\nIt is possible to provide an alternative rowid using the `content_rowid` option to the FTS5() function. \r\n\r\nRef: https://sqlite.org/fts5.html#fts5_table_creation_and_initialization\r\n\r\n> The \"content_rowid\" option, used to set the rowid field of an external content table. \r\n\r\nThis will further complicate the `enable_fts()` function by adding an extra argument. I'm wondering if that is outside the scope of this tool or should I work on that feature and send a PR? ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/58/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 492153532, "node_id": "MDU6SXNzdWU0OTIxNTM1MzI=", "number": 573, "title": "Exposing Datasette via Jupyter-server-proxy", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-09-11T10:32:36Z", "updated_at": "2020-03-26T09:41:30Z", "closed_at": "2020-03-26T09:41:30Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It is possible to expose a running `datasette` service in a Jupyter environment such as a MyBinder environment using the [`jupyter-server-proxy`](https://github.com/jupyterhub/jupyter-server-proxy).\r\n\r\nFor example, using [this demo Binder](https://mybinder.org/v2/gh/binder-examples/r/master?filepath=index.ipynb) which has the server proxy installed, we can then upload a simple test database from the notebook homepage, from a Jupyter terminal install datasette and set it running against the test db on eg port 8001 and then view it via the path `proxy/8001`.\r\n\r\nClicking links results in 404s though because the `datasette` links aren't relative to the current path?\r\n\r\n![image](https://user-images.githubusercontent.com/82988/64689964-44b69280-d487-11e9-8f9f-3681422bcc9f.png)\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/573/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 505814865, "node_id": "MDExOlB1bGxSZXF1ZXN0MzI3MTY5NzQ4", "number": 589, "title": "Display metadata footer on custom SQL queries", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-11T12:10:28Z", "updated_at": "2019-10-14T08:58:23Z", "closed_at": "2019-10-14T03:53:22Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/589", "body": "Closes #408", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/589/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 
0, "state_reason": null} {"id": 505818256, "node_id": "MDExOlB1bGxSZXF1ZXN0MzI3MTcyNTQ1", "number": 590, "title": "Handle spaces in DB names", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-10-11T12:18:22Z", "updated_at": "2019-11-04T23:16:31Z", "closed_at": "2019-11-04T23:16:30Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/590", "body": "Closes #503", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/590/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 505837199, "node_id": "MDExOlB1bGxSZXF1ZXN0MzI3MTg4MDg3", "number": 591, "title": "Sort databases on homepage by argument order", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-10-11T12:57:38Z", "updated_at": "2019-10-14T08:57:50Z", "closed_at": "2019-10-14T03:52:34Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/591", "body": "Closes #585", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/591/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 505950145, "node_id": "MDExOlB1bGxSZXF1ZXN0MzI3Mjc5ODE4", "number": 592, "title": "Offer SQL formatting", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-10-11T16:35:49Z", "updated_at": "2019-10-14T08:57:12Z", "closed_at": "2019-10-14T03:46:13Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/592", "body": "SQL code will be formatted on page load, and can additionally be formatted by clicking the \"Format SQL\" button. 
Closes #136", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/592/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 509535510, "node_id": "MDExOlB1bGxSZXF1ZXN0MzMwMDc2MjYz", "number": 602, "title": "Offer to format readonly SQL", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-10-20T02:29:32Z", "updated_at": "2019-11-04T07:29:33Z", "closed_at": "2019-11-04T02:39:56Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/602", "body": "Following discussion in #601, this PR adds a \"Format SQL\" button to\r\nread-only SQL (if the SQL actually differs from the formatting result).\r\n\r\nIt also removes a console error on readonly SQL queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/602/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 509612217, "node_id": "MDExOlB1bGxSZXF1ZXN0MzMwMTI5MzU4", "number": 603, "title": "always pop as_format off args dict", "user": {"value": 6025893, "label": "chris48s"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-10-20T15:44:22Z", "updated_at": "2019-10-30T19:12:22Z", "closed_at": "2019-10-21T02:03:09Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/603", "body": "closes #563", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/603/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 518725064, "node_id": "MDU6SXNzdWU1MTg3MjUwNjQ=", "number": 29, "title": "`import` command fails on empty files", "user": {"value": 21148, "label": "jacobian"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2019-11-06T20:34:26Z", "updated_at": "2019-11-09T20:33:38Z", "closed_at": "2019-11-09T19:36:36Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If a file in the export is empty (in my case it was `account-suspensions.js`), `twitter-to-sqlite import` fails:\r\n\r\n```\r\n$ twitter-to-sqlite import twitter.db ~/Downloads/twitter-2019-11-06-926f4f3be4b3b1fcb1aa387c40cd14f7c8aaf9bbcdb2d78ac14d9989add501bb.zip\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/bin/twitter-to-sqlite\", line 10, in \r\n sys.exit(cli())\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 764, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 717, 
in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 956, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 555, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/twitter_to_sqlite/cli.py\", line 627, in import_\r\n archive.import_from_file(db, filename, content)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/twitter_to_sqlite/archive.py\", line 224, in import_from_file\r\n db[table_name].upsert_all(rows, hash_id=\"pk\")\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/sqlite_utils/db.py\", line 1113, in upsert_all\r\n extracts=extracts,\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/sqlite_utils/db.py\", line 980, in insert_all\r\n first_record = next(records)\r\nStopIteration\r\n```\r\n\r\nThis appears to be because `db.upsert_all` is called with no rows -- I think? \r\n\r\nI hacked around this by modifying `import_from_file` to have an `if rows:` clause:\r\n\r\n```\r\n for table, rows in to_insert.items():\r\n if rows:\r\n table_name = \"archive_{}\".format(table.replace(\"-\", \"_\"))\r\n ...\r\n```\r\n\r\nI'm happy to work up a real PR if that's the right approach, but I'm not sure it is.", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/29/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 518739697, "node_id": "MDU6SXNzdWU1MTg3Mzk2OTc=", "number": 30, "title": "`followers` fails because `transform_user` is called twice", "user": {"value": 21148, "label": "jacobian"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-11-06T20:44:52Z", "updated_at": "2019-11-09T20:15:28Z", "closed_at": "2019-11-09T19:55:52Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Trying to run `twitter-to-sqlite followers` errors out:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/bin/twitter-to-sqlite\", line 10, in <module>\r\n sys.exit(cli())\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 764, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 717, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 1137, in 
invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 956, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/click/core.py\", line 555, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/twitter_to_sqlite/cli.py\", line 130, in followers\r\n go(bar.update)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/twitter_to_sqlite/cli.py\", line 116, in go\r\n utils.save_users(db, [profile])\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/twitter_to_sqlite/utils.py\", line 302, in save_users\r\n transform_user(user)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/twitter_to_sqlite/utils.py\", line 181, in transform_user\r\n user[\"created_at\"] = parser.parse(user[\"created_at\"])\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py\", line 1374, in parse\r\n return DEFAULTPARSER.parse(timestr, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py\", line 646, in parse\r\n res, skipped_tokens = self._parse(timestr, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py\", line 725, in _parse\r\n l = _timelex.split(timestr) # Splits the timestr into tokens\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py\", line 207, in split\r\n return list(cls(s))\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jkm-dogsheep-ezLnyXZS-py3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py\", line 76, in __init__\r\n '{itype}'.format(itype=instream.__class__.__name__))\r\nTypeError: Parser must be a string or character stream, not datetime\r\n```\r\n\r\nThis appears to be because https://github.com/dogsheep/twitter-to-sqlite/blob/master/twitter_to_sqlite/cli.py#L111 calls `transform_user`, and then https://github.com/dogsheep/twitter-to-sqlite/blob/master/twitter_to_sqlite/cli.py#L116 calls `transform_user` again, which fails because the user is already transformed.\r\n\r\nI was able to work around this by commenting out https://github.com/dogsheep/twitter-to-sqlite/blob/master/twitter_to_sqlite/cli.py#L116. 
\r\n\r\nShall I work up a patch for that, or is there a better approach?", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/30/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 519979091, "node_id": "MDExOlB1bGxSZXF1ZXN0MzM4NjQ3Mzc4", "number": 1, "title": "Add parkrun-to-sqlite", "user": {"value": 1101318, "label": "mrw34"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-11-08T12:05:32Z", "updated_at": "2020-10-12T00:35:16Z", "closed_at": "2020-10-12T00:35:16Z", "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/1", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 521923131, "node_id": "MDExOlB1bGxSZXF1ZXN0MzQwMjExMTQ5", "number": 631, "title": "bugfix issue 572", "user": {"value": 3683993, "label": "qwo"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-11-13T02:46:50Z", "updated_at": "2019-11-13T04:28:43Z", "closed_at": "2019-11-13T04:28:42Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/631", "body": "closes bugfix issue #572 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/631/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 530513784, "node_id": "MDExOlB1bGxSZXF1ZXN0MzQ3MTc5MDgx", "number": 644, "title": "Validate metadata json on startup", "user": {"value": 6025893, "label": "chris48s"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-11-30T00:32:15Z", "updated_at": "2021-07-28T17:58:45Z", "closed_at": "2021-07-28T17:58:45Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/644", "body": "This PR adds a sanity check which builds up a marshmallow schema on-the-fly based on the structure of the database(s) on startup and then validates the metadata json against it.\r\n\r\nIn case of invalid data, this will raise with a descriptive error e.g:\r\n\r\n```\r\nmarshmallow.exceptions.ValidationError: {'databases': {'fixtures': {'tables': {'not_a_table': ['Unknown field.']}}}}\r\n```\r\n\r\nCloses #260\r\n\r\n---\r\n\r\nThis was intended to be fairly self-contained, but then while I was working on it, I hit some problems getting the tests to pass in the context of the test suite as a whole. My tests passed in isolation, but then failed while doing a full test suite run. 
That's when the worms started coming out of the can :bug: After some sleuthing, it turned out this was essentially the result of several issues intersecting:\r\n\r\n* There are certain events in the application lifecycle where the metadata schema can be modified after it is loaded e.g: https://github.com/simonw/datasette/blob/a562f2965552fb2dbbbd74df245c9965ee23d886/datasette/app.py#L299-L320 This means that sometimes what goes in isn't always exactly what comes out when you call `/-/metadata`.\r\n* Because the test fixtures use session scope for performance reasons if one unit test performs an action which mutates the metadata, that can impact on other unit tests which run after it using the same fixture.\r\n* Because the `self._metadata` property was being set with a simple assignment `self._metadata = metadata`, that created an object reference to the test fixture data, so operating on `self._metadata` was actually modifying the test fixture `METADATA` meaning that depending on when it was loaded in the test suite lifecycle, `METADATA` had different content, which was somewhat unexpected.\r\n\r\nAs such, I've added some band-aids in 3552024 and 6859fd8:\r\n* Switching the metadata object to a `deepcopy` of the input prevents us directly mutating the input fixture.\r\n* I've switched some of the tests to use a fixture with function scope instead of session scope so we're working on a clean copy that hasn't been mutated by other tests where necessary but keeping session scope in most cases for performance.\r\n* I haven't really addressed the fact that sometimes the metadata object gets mutated in place, so the object that is served from `/-/metadata` isn't necessarily always exactly the same as the file you fed into it on init. I'm not sure how much of a problem that is. The way the tests were written makes me think it was unexpected, but getting into it feels like too much scope creep for this PR so its probably best addressed as another issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/644/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 541331755, "node_id": "MDExOlB1bGxSZXF1ZXN0MzU2MDA0MjQy", "number": 653, "title": "allow leading comments in SQL input field", "user": {"value": 418191, "label": "jaywgraves"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2019-12-21T14:19:52Z", "updated_at": "2020-02-05T02:35:41Z", "closed_at": "2020-02-05T02:13:25Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/653", "body": "this changes the SQL validation to allow for lines that are commented out\r\n\r\nmy main use case for this is that I like to write a succession of queries when trying to solve a problem.\r\nIn most native SQL clients there is a key binding that will run just the current highlighted query or the program is smart enough to run just the query that the cursor is in if it's properly delimited with a ';'.\r\nTypically my workflow will start with a single simple query and I'll copy/paste it to a new query below when I want to make big changes while debugging. 
This makes it easy to go back to a working version above when the query doesn't work.\r\nSince datasette sends the whole query to the DB I have to comment out the older queries by prefixing each line with `--`. This gets caught by the validators when I use my typical strategy of copy/pasting each successive query below the last one. \r\nso this is just a simple fix to allow for a query to be sent to the DB with leading comments.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/653/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 543355051, "node_id": "MDExOlB1bGxSZXF1ZXN0MzU3NjQwMTg2", "number": 6, "title": "don't break if source is missing", "user": {"value": 78035, "label": "mfa"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-12-29T10:46:47Z", "updated_at": "2020-03-28T02:28:11Z", "closed_at": "2020-03-28T02:28:11Z", "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/swarm-to-sqlite/pulls/6", "body": "broke for me. very old checkins in 2010 had no source set.", "repo": {"value": 205429375, "label": "swarm-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 543717994, "node_id": "MDExOlB1bGxSZXF1ZXN0MzU3OTc0MzI2", "number": 3, "title": "Add todoist-to-sqlite", "user": {"value": 706257, "label": "bcongdon"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-12-30T04:02:59Z", "updated_at": "2020-10-12T00:35:58Z", "closed_at": "2020-10-12T00:35:57Z", "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/3", "body": "Really enjoying getting into the dogsheep/datasette ecosystem. 
I made a downloader for Todoist, and I think/hope others might find this useful", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/3/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 546078359, "node_id": "MDExOlB1bGxSZXF1ZXN0MzU5ODIyNzcz", "number": 75, "title": "Explicitly include tests and docs in sdist", "user": {"value": 15092, "label": "jayvdb"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-01-07T04:53:20Z", "updated_at": "2020-01-31T00:21:27Z", "closed_at": "2020-01-31T00:21:27Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/75", "body": "Also exclude 'tests' from runtime installation.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/75/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 546961357, "node_id": "MDU6SXNzdWU1NDY5NjEzNTc=", "number": 656, "title": "Display of the column definitions", "user": {"value": 6371750, "label": "JBPressac"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-01-08T16:16:53Z", "updated_at": "2020-01-20T14:17:11Z", "closed_at": "2020-01-20T14:14:33Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hello,\r\nIs the nice display of headers and definitions at the top of https://fivethirtyeight.datasettes.com/fivethirtyeight-ac35616/antiquities-act%2Factions_under_antiquities_act configured in the metadata.json file?\r\nThank you,", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/656/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 552773632, "node_id": "MDExOlB1bGxSZXF1ZXN0MzY1MjE4Mzkx", "number": 660, "title": "gcloud run is now GA, s/beta//", "user": {"value": 813732, "label": "glasnt"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-01-21T10:08:38Z", "updated_at": "2020-01-22T03:41:09Z", "closed_at": "2020-01-21T23:28:12Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/660", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/660/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 558715564, "node_id": "MDExOlB1bGxSZXF1ZXN0MzcwMDI0Njk3", "number": 4, "title": "Add beeminder-to-sqlite", "user": {"value": 706257, "label": "bcongdon"}, "state": "closed", "locked": 0, "assignee": null, 
"milestone": null, "comments": 0, "created_at": "2020-02-02T15:51:36Z", "updated_at": "2020-10-12T00:36:16Z", "closed_at": "2020-10-12T00:36:16Z", "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/4", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/4/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 562085508, "node_id": "MDExOlB1bGxSZXF1ZXN0MzcyNzYzOTA2", "number": 666, "title": "Use inspect-file, if possible, for total row count", "user": {"value": 13896256, "label": "kevindkeogh"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-02-08T22:10:35Z", "updated_at": "2020-03-09T02:47:15Z", "closed_at": "2020-02-25T20:19:29Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/666", "body": "For large tables, counting the number of rows in the table can take a\r\nsignficant amount of time. Instead, where an inspect-file is provided\r\nfor an immutable database, look up the row-count for a plain count(*).", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/666/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 563348959, "node_id": "MDExOlB1bGxSZXF1ZXN0MzczNzc1Nzg4", "number": 669, "title": "fix db-to-sqlite command in ecosystem doc page", "user": {"value": 883348, "label": "adipasquale"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-02-11T17:05:41Z", "updated_at": "2020-02-22T02:32:18Z", "closed_at": "2020-02-22T02:32:17Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/669", "body": "the `--connection` parameter has become positional", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/669/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 589801352, "node_id": "MDExOlB1bGxSZXF1ZXN0Mzk1MjU4Njg3", "number": 96, "title": "Add type conversion for Panda's Timestamp", "user": {"value": 32605365, "label": "b0b5h4rp13"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-03-29T14:13:09Z", "updated_at": "2020-03-31T04:40:49Z", "closed_at": "2020-03-31T04:40:48Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/96", "body": "Add type conversion for Panda's Timestamp, if Panda library is present in system\r\n(thanks for this project, I was about to do the same thing from scratch)", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/96/reactions\", 
\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 594553553, "node_id": "MDExOlB1bGxSZXF1ZXN0Mzk5MTY2NDMz", "number": 719, "title": "asgi: check raw_path is not None", "user": {"value": 193185, "label": "cldellow"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-04-05T16:53:58Z", "updated_at": "2020-05-04T17:14:26Z", "closed_at": "2020-05-04T17:14:26Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/719", "body": "The ASGI spec\r\n(https://asgi.readthedocs.io/en/latest/specs/www.html#http) seems to imply that `None` is a valid value, so we need to check the value itself, not just whether the key is present.\r\n\r\nIn particular, the [mangum](https://github.com/erm/mangum) adapter passes `None` for this key's value. This change permits mangum to be used to front datasette in Amazon API Gateway + AWS Lambda deployments.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/719/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 596245802, "node_id": "MDExOlB1bGxSZXF1ZXN0NDAwNTc4OTc5", "number": 720, "title": "Update beautifulsoup4 requirement from ~=4.8.1 to >=4.8.1,<4.10.0", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-08T01:24:38Z", "updated_at": "2020-05-04T17:14:51Z", "closed_at": "2020-05-04T17:14:46Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/720", "body": "Updates the requirements on [beautifulsoup4](http://www.crummy.com/software/BeautifulSoup/bs4/) to permit the latest version.\n\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n**Note:** This repo was added to Dependabot recently, so you'll receive a maximum of 5 PRs for your first few update runs. Once an update run creates fewer than 5 PRs we'll remove that limit.\n\nYou can always request more updates by clicking `Bump now` in your [Dependabot dashboard](https://app.dependabot.com).\n\n
\n**Dependabot commands and options**\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language\n- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language\n- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language\n- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language\n- `@dependabot badge me` will comment on this PR with code to add a \"Dependabot enabled\" badge to your readme\n\nAdditionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):\n- Update frequency (including time of day and day of week)\n- Pull request limits (per update run and/or open at any time)\n- Out-of-range updates (receive only lockfile updates, if desired)\n- Security updates (receive only security updates, if desired)\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/720/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 596245923, "node_id": "MDExOlB1bGxSZXF1ZXN0NDAwNTc5MDc3", "number": 721, "title": "Update pytest requirement from ~=5.2.2 to >=5.2.2,<5.5.0", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-08T01:25:04Z", "updated_at": "2020-05-04T17:13:49Z", "closed_at": "2020-05-04T17:13:41Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/721", "body": "Updates the requirements on [pytest](https://github.com/pytest-dev/pytest) to permit the latest version.\n
\n**Release notes**\n\n*Sourced from pytest's releases.*\n\n**5.4.1**\n\npytest 5.4.1 (2020-03-13)\n\n**Bug Fixes**\n\n- #6909: Revert the change introduced by #6330, which required all arguments to `@pytest.mark.parametrize` to be explicitly defined in the function signature. The intention of the original change was to remove what was expected to be an unintended/surprising behavior, but it turns out many people relied on it, so the restriction has been reverted.\n- #6910: Fix crash when plugins return an unknown stats while using the `--reportlog` option.\n\n**Changelog**\n\n*Sourced from pytest's changelog.*\n\n**Commits**\n\n- 3d0f3ba Preparing release version 5.4.1\n- b9e2cd0 Merge pull request #6914 from nicoddemus/revert-6330\n- a84fcbf Revert \"[parametrize] enforce explicit argnames declaration (#6330)\"\n- 59c1bfa Merge pull request #6913 from nicoddemus/backport-6910\n- 3267f64 Merge pull request #6910 from nicoddemus/resultlog-logreport\n- c9fd1bd Preparing release version 5.4.0\n- 93aa988 Merge pull request #6901 from RonnyPfannschmidt/regendoc-fix-simple\n- 7996724 Merge pull request #6902 from RoyalTS/filterwarnings-docfix\n- 90ee8a7 docfix\n- 378a75d run and fix tox -e regen to prepare 5.4\n- Additional commits viewable in compare view\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/721/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 596246006, "node_id": "MDExOlB1bGxSZXF1ZXN0NDAwNTc5MTM2", "number": 722, "title": "Update jinja2 requirement from ~=2.10.3 to >=2.10.3,<2.12.0", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-08T01:25:24Z", "updated_at": "2020-05-04T17:13:26Z", "closed_at": "2020-05-04T17:13:16Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/722", "body": "Updates the requirements on [jinja2](https://github.com/pallets/jinja) to permit the latest version.\n
\n**Release notes**\n\n*Sourced from jinja2's releases.*\n\n**2.11.1**\n\nThis fixes an issue in async environments when indexing the result of an attribute lookup, like `{{ data.items[1:] }}`.\n\n**Changelog**\n\n*Sourced from jinja2's changelog.*\n\n**Version 2.11.1**\n\nReleased 2020-01-30\n\n- Fix a bug that prevented looking up a key after an attribute (`{{ data.items[1:] }}`) in an async template. 1141\n\n**Version 2.11.0**\n\nReleased 2020-01-27\n\n- Drop support for Python 2.6, 3.3, and 3.4. This will be the last version to support Python 2.7 and 3.5.\n- Added a new `ChainableUndefined` class to support getitem and getattr on an undefined object. 977\n- Allow `{%+` syntax (with NOP behavior) when `lstrip_blocks` is disabled. 748\n- Added a `default` parameter for the `map` filter. 557\n- Exclude environment globals from `meta.find_undeclared_variables`. 931\n- Float literals can be written with scientific notation, like `2.56e-3`. 912, 922\n- Int and float literals can be written with the '_' separator for legibility, like `12_345`. 923\n- Fix a bug causing deadlocks in `LRUCache.setdefault`. 1000\n- The `trim` filter takes an optional string of characters to trim. 828\n- A new `jinja2.ext.debug` extension adds a `{% debug %}` tag to quickly dump the current context and available filters and tests. 174, 798, 983\n- Lexing templates with large amounts of whitespace is much faster. 857, 858\n- Parentheses around comparisons are preserved, so `{{ 2 * (3 < 5) }}` outputs \"2\" instead of \"False\". 755, 938\n- Add new `boolean`, `false`, `true`, `integer` and `float` tests. 824\n- The environment's `finalize` function is only applied to the output of expressions (constant or not), not static template data. 63\n- When providing multiple paths to `FileSystemLoader`, a template can have the same name as a directory. 821\n- Always return `Undefined` when omitting the `else` clause in a `{{ 'foo' if bar }}` expression, regardless of the environment's `undefined` class. Omitting the `else` clause is a valid shortcut and should not raise an error when using `StrictUndefined`. 710, 1079\n- Fix behavior of `loop` control variables such as `length` and `revindex0` when looping over a generator. 459, 751, 794, 993\n- Async support is only loaded the first time an environment enables it, in order to avoid a slow initial import. 765\n- In async environments, the `|map` filter will await the filter call if needed. 913\n- In `for` loops that access `loop` attributes, the iterator is not advanced ahead of the current iteration unless `length`, `revindex`, `nextitem`, or `last` are accessed. This makes it less likely to break `groupby` results. 555, 1101\n- In async environments, the `loop` attributes `length` and `revindex` work for async iterators. 1101\n- In async environments, values from attribute/property access will be awaited if needed. 1101\n- `PackageLoader` doesn't depend on setuptools or pkg_resources. 970\n- `PackageLoader` has limited support for PEP 420 namespace packages. 1097\n- Support `os.PathLike` objects in `FileSystemLoader` and `ModuleLoader`. 870\n- `NativeTemplate` correctly handles quotes between expressions. `\"'{{ a }}', '{{ b }}'\"` renders as the tuple `('1', '2')` rather than the string `'1, 2'`. 1020\n- Creating a `NativeTemplate` directly creates a `NativeEnvironment` instead of a default `Environment`. 1091\n- After calling `LRUCache.copy()`, the copy's queue methods point to the correct queue. 843\n- Compiling templates always writes UTF-8 instead of defaulting to the system encoding. 889\n- The `|wordwrap` filter treats existing newlines as separate paragraphs to be wrapped individually, rather than creating short intermediate lines. 175\n- Add `break_on_hyphens` parameter to the `|wordwrap` filter. 550\n- Cython compiled functions decorated as context functions will be passed the context. 1108\n- When chained comparisons of constants are evaluated at compile time, the result follows Python's behavior of returning `False` if any comparison returns `False`, rather than only the last one. 1102\n- Tracebacks for exceptions in templates show the correct line numbers and source for Python >= 3.7. 1104\n- Tracebacks for template syntax errors in Python 3 no longer show internal compiler frames. 763\n- Add a `DerivedContextReference` node that can be used by extensions to get the current context and local variables such as `loop`. 860\n- Constant folding during compilation is applied to some node types that were previously overlooked. 733\n- `TemplateSyntaxError.source` is not empty when raised from an included template. 457\n\n... (truncated)\n\n**Commits**\n\n- b85283e release version 2.11.1\n- 3d5bfc6 Merge pull request #1143 from pallets/bugfix/attribute-access\n- d61c1ea add changelog\n- 15d7e61 Added regression test for slicing of attributes\n- 05dee9b Fix attribute access in async code. Fixes #1141\n- bbdafe3 release version 2.11.0\n- 9ff27f6 add python 3.8 classifier, clean up changelog\n- d312609 isolate bytecode cache tests\n- 9849979 import Markup from markupsafe, fix flake8 import warnings\n- c6d864c increment bytecode cache version\n- Additional commits viewable in compare view\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/722/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 598891570, "node_id": "MDExOlB1bGxSZXF1ZXN0NDAyNjQ1OTg0", "number": 725, "title": "Update aiofiles requirement from ~=0.4.0 to >=0.4,<0.6", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-04-13T13:32:47Z", "updated_at": "2020-05-04T18:16:54Z", "closed_at": "2020-05-04T16:17:49Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/725", "body": "Updates the requirements on [aiofiles](https://github.com/Tinche/aiofiles) to permit the latest version.\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/725/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 600120439, "node_id": "MDU6SXNzdWU2MDAxMjA0Mzk=", "number": 726, "title": "Foreign key: case of a link to the associated row not displayed", "user": {"value": 6371750, "label": "JBPressac"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-04-15T08:31:27Z", "updated_at": "2020-04-27T22:05:47Z", "closed_at": "2020-04-27T22:05:46Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hello,\r\nI use Datasette to publish TSV files linked together by foreign keys declared thanks to sqlite-utils. In one table, [prelib_personne](http://crbc-dataset.huma-num.fr/prelib/prelib_personne), the foreign keys are properly displayed as a link to the associated row (for instance ville_naissance_id is properly linked to prelib_ville). But every link to the foreign key prelib_oeuvre.id fails. For instance, [prelib_ecritoeuvre](http://crbc-dataset.huma-num.fr/prelib/prelib_ecritoeuvre) has links to prelib_personne but none to prelib_oeuvre, despite the schema:\r\n\r\nCREATE TABLE \"prelib_ecritoeuvre\" (\r\n\"id\" INTEGER,\r\n \"fonction_id\" INTEGER,\r\n \"oeuvre_id\" INTEGER,\r\n \"personne_id\" INTEGER\r\n ,PRIMARY KEY ([id]),\r\n FOREIGN KEY(fonction_id) REFERENCES prelib_fonctionecritoeuvre(id),\r\n FOREIGN KEY(personne_id) REFERENCES prelib_personne(id),\r\n FOREIGN KEY(oeuvre_id) REFERENCES prelib_oeuvre(id)\r\n);\r\n\r\nWould you have any clue to help investigate the reason for this problem?\r\nThanks,", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/726/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 603242257, "node_id": "MDExOlB1bGxSZXF1ZXN0NDA2MDY3MDE5", "number": 728, "title": "Update mergedeep requirement from ~=1.1.1 to >=1.1.1,<1.4.0", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-20T13:33:23Z", "updated_at": "2020-05-04T16:45:58Z", "closed_at": "2020-05-04T16:45:49Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/728", "body": "Updates the requirements on [mergedeep](https://github.com/clarketm/mergedeep) to permit the latest version.\n
\n**Commits**\n\n- 3d6e7b4 v1.3.0 - support additive merging of Counter types\n- 56a258a v1.2.1 - tidy docs and variable names\n- 61ab213 v1.2.0 - support both TYPESAFE_REPLACE and TYPESAFE_ADDITIVE merge strategies...\n- b331bb5 cleanup Makefile\n- 6f577bf officially label support for python3.8\n- 84faf37 use pipenv for managing dev dependencies\n- 3a8761a Update README.md\n- See full diff in compare view\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/728/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 604001627, "node_id": "MDExOlB1bGxSZXF1ZXN0NDA2Njc3MjA1", "number": 730, "title": "Update pytest-asyncio requirement from ~=0.10.0 to >=0.10,<0.12", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-04-21T13:32:35Z", "updated_at": "2020-05-04T13:27:24Z", "closed_at": "2020-05-04T13:27:23Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/730", "body": "Updates the requirements on [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio) to permit the latest version.\n
\n**Commits**\n\n- 1026c39 0.11.0\n- ab2b140 Test on Python 3.8, drop 3.3 and 3.4\n- 6397a22 plugin: Use pytest 5.4.0 new Function API\n- 21a0f94 Replace yield_fixture() by fixture()\n- 964b295 Added min hypothesis version so that bugfix for https://github.com/Hypothesis...\n- 4a11a20 Add max supported pytest version to < 5.4.0 to prevent fails until #141 is fi...\n- b305594 Change event_loop to module scope in hypothesis tests, fixing #145.\n- d5a0f47 Enable test_subprocess to be run on win, by changing to ProactorEventLoop in ...\n- d07cd2d Fix required pytest version\n- 86cd9a6 Handle BaseExceptions from loop.run_until_complete (#126)\n- Additional commits viewable in compare view\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/730/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}