{"id": 273775212, "node_id": "MDU6SXNzdWUyNzM3NzUyMTI=", "number": 88, "title": "Add NHS England Hospitals example to wiki", "user": {"value": 15543, "label": "tomdyson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2017-11-14T12:29:10Z", "updated_at": "2021-03-22T23:46:36Z", "closed_at": "2017-11-14T22:54:06Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "https://nhs-england-hospitals.now.sh\r\n\r\nand an associated map visualisation:\r\n\r\nhttp://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/\r\n\r\nDatasette is wonderful!\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/88/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 275814941, "node_id": "MDU6SXNzdWUyNzU4MTQ5NDE=", "number": 141, "title": "datasette publish can fail if /tmp is on a different device", "user": {"value": 21148, "label": "jacobian"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2949431, "label": "Custom templates edition"}, "comments": 5, "created_at": "2017-11-21T18:28:05Z", "updated_at": "2020-04-29T03:27:54Z", "closed_at": "2017-12-08T16:06:36Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "`datasette publish` uses hard links to avoid copying the db into a tmp directory. This can fail if `/tmp` is on another device, because hardlinks can't cross devices. You'll see something like this:\r\n\r\n```\r\n$ datasette publish heroku whatever.db\r\n...\r\nOSError: [Errno 18] Invalid cross-device link: '/mnt/c/Users/jacob/c/datasette/whatever.db' -> '/tmp/tmpvxq2yof6/whatever.db'\r\n```\r\n[In my case this is failing because I'm on a Windows machine, using WSL, so my code's on a different virtual filesystem from the Linux subsystem, Because Reasons.]\r\n\r\nI'm not sure if it's possible to detect this (can you figure out which device `/tmp` is on?), or what the fallback should be (soft link? copy?).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/141/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 286938589, "node_id": "MDU6SXNzdWUyODY5Mzg1ODk=", "number": 177, "title": "Publishing to Heroku - metadata file not uploaded?", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-01-09T01:04:31Z", "updated_at": "2018-01-25T16:45:32Z", "closed_at": "2018-01-25T16:45:32Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Trying to run *datasette* (version 0.14) on Heroku with a `metadata.json` doesn't seem to be picking up the `metadata.json` file? \r\n\r\nOn a Mac with dodgy `tar` support:\r\n\r\n```\r\n \u25b8 Couldn't detect GNU tar. 
Builds could fail due to decompression errors\r\n \u25b8 See\r\n \u25b8 https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive\r\n \u25b8 Please install it, or specify the '--tar' option\r\n \u25b8 Falling back to node's built-in compressor\r\n```\r\n\r\nCould that be causing the issue?\r\n\r\nAlso, I'm not seeing custom query links anywhere obvious when I run the metadata file with a local *datasette* server?\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/177/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 291639118, "node_id": "MDU6SXNzdWUyOTE2MzkxMTg=", "number": 183, "title": "Custom Queries - escaping strings", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-01-25T16:49:13Z", "updated_at": "2019-06-24T06:45:07Z", "closed_at": "2019-06-24T06:45:07Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If a SQLite table column name contains spaces, they are usually referred to in double quotes:\r\n\r\n`SELECT * FROM mytable WHERE \"gappy column name\"=\"my value\";`\r\n\r\nIn the JSON metadata file, this is passed by escaping the double quotes:\r\n\r\n`\"queries\": {\"my query\": \"SELECT * FROM mytable WHERE \\\"gappy column name\\\"=\\\"my value\\\";\"}`\r\n\r\nWhen specifying a custom query in `metadata.json` using double quotes, these are then rendered in the *datasette* query box using single quotes:\r\n\r\n`SELECT * FROM mytable WHERE 'gappy column name'='my value';`\r\n\r\nwhich does not work.\r\n\r\nAlternatively, a valid custom query can be passed using backticks (\\`) to quote the column name and single (unescaped) quotes for the matched value:\r\n\r\n``\"queries\": {\"my query\": \"SELECT * FROM mytable WHERE `gappy column name`='my value';\"}``\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/183/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 313837303, "node_id": "MDU6SXNzdWUzMTM4MzczMDM=", "number": 203, "title": "Support for units", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-04-12T18:24:28Z", "updated_at": "2018-04-16T21:59:17Z", "closed_at": "2018-04-16T21:59:17Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be nice to be able to attach a unit to a column in the metadata, and have it rendered with that unit (and SI prefix) when it's displayed.\r\n\r\nIt would also be nice to support entering the prefixes in variables when querying.\r\n\r\nWith my radio licensing app I've put all frequencies in Hz. 
It's easy enough to special-case the row rendering to add the SI prefixes, but it's pretty unusable when querying by that field.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/203/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 324835838, "node_id": "MDU6SXNzdWUzMjQ4MzU4Mzg=", "number": 276, "title": "Handle spatialite geometry columns better", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 21, "created_at": "2018-05-21T08:46:55Z", "updated_at": "2022-03-21T22:22:20Z", "closed_at": "2022-03-21T22:22:20Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'd like to see spatialite geometry columns rendered more sensibly - at the moment they come through as well-known-binary unless you use custom SQL, and WKB isn't of much use to anyone on the web.\r\n\r\nIn HTML: they should be shown either as simple lat/long (if it's just a point, for example), or as a sensible placeholder if they're more complex geometries.\r\n\r\nIn JSON: they should be GeoJSON geometries, (which means they can be automatically fed into a leaflet map with no further messing around).\r\n\r\nIn CSV: they should be WKT.\r\n\r\nI briefly wondered if this should go into a plugin, but I suspect it needs hooking in at a deeper level than the plugin architecture will support any time soon.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/276/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 336924199, "node_id": "MDU6SXNzdWUzMzY5MjQxOTk=", "number": 330, "title": "Limit text display in cells containing large amounts of text", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-06-29T09:15:22Z", "updated_at": "2018-07-24T04:53:20Z", "closed_at": "2018-07-10T16:20:48Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "The default preview of a database shows all columns (is the row count limited?) which is fine in many cases but can take a long time to load / offer a large overhead if the table is a SpatiaLite table containing geometry columns that include large shapefiles.\r\n\r\nWould it make sense to have a setting that can limit the amount of text displayed in any given cell in the table preview, or (less useful?) suppress (with notification) the display of overlong columns unless enabled by the user?\r\n\r\nAn issue then arises if a user does want to see all the text in a cell:\r\n\r\n 1) for a particular cell;\r\n 2) for every cell in the table;\r\n 3) for all cells in a particular column or columns\r\n\r\n(I haven't checked but what if a column contains e.g. raw image data? Does this display as raw data? Or can this be rendered in a context aware way as an image preview? 
I guess a custom template would be one way to do that?)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/330/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 336936010, "node_id": "MDU6SXNzdWUzMzY5MzYwMTA=", "number": 331, "title": "Datasette throws error when loading spatialite db without extension loaded", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-29T09:51:14Z", "updated_at": "2022-01-20T21:29:40Z", "closed_at": "2018-07-10T15:13:36Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When starting datasette on a SpatiaLite database *without* loading the SpatiaLite extension (using eg `--load-extension=/usr/local/lib/mod_spatialite.dylib`) an error is thrown and the server fails to start:\r\n\r\n```\r\ndatasette -p 8003 adminboundaries.db \r\nServe! files=('adminboundaries.db',) on port 8003\r\nTraceback (most recent call last):\r\n File \"/Users/ajh59/anaconda3/bin/datasette\", line 11, in <module>\r\n sys.exit(cli())\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/cli.py\", line 552, in serve\r\n ds.inspect()\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/app.py\", line 273, in inspect\r\n \"tables\": inspect_tables(conn, self.metadata.get(\"databases\", {}).get(name, {}))\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/inspect.py\", line 79, in inspect_tables\r\n \"PRAGMA table_info({});\".format(escape_sqlite(table))\r\nsqlite3.OperationalError: no such module: VirtualSpatialIndex\r\n``` \r\n\r\nIt would be nice to trap this and return a message saying something like:\r\n\r\n```\r\nIt looks like you're trying to load a SpatiaLite database? 
Make sure you load in the SpatiaLite extension when starting datasette.\r\n\r\nRead more: https://datasette.readthedocs.io/en/latest/spatialite.html\r\n```\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/331/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 341229113, "node_id": "MDU6SXNzdWUzNDEyMjkxMTM=", "number": 344, "title": "datasette publish heroku fails without name provided", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-07-14T11:15:56Z", "updated_at": "2018-07-14T13:00:48Z", "closed_at": "2018-07-14T13:00:48Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It fails with the following JSON traceback if the `-n` option isn't provided, despite the fact that the command line help says that's not needed for heroku publishes.\r\n\r\n
\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasette\", line 11, in <module>\r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/datasette/cli.py\", line 265, in publish\r\n app_name = json.loads(create_output)[\"name\"]\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py\", line 354, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py\", line 339, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py\", line 357, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n```\r\n\r\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/344/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 369716228, "node_id": "MDU6SXNzdWUzNjk3MTYyMjg=", "number": 366, "title": "Default built image size over Zeit Now 100MiB limit", "user": {"value": 416374, "label": "gfrmin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-10-12T21:27:17Z", "updated_at": "2018-11-05T06:23:32Z", "closed_at": "2018-11-05T06:23:32Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Using `dataset publish now` with no other custom options on a small (43KB) sqlite database leads to the error \"The built image size (373.5M) exceeds the 100MiB limit\". I think this is because of a recent Zeit change: https://github.com/zeit/now-cli/issues/1523", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/366/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 377156339, "node_id": "MDU6SXNzdWUzNzcxNTYzMzk=", "number": 371, "title": "datasette publish digitalocean plugin", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-11-04T14:07:41Z", "updated_at": "2021-01-04T20:14:28Z", "closed_at": "2021-01-04T20:14:28Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Provide support for launching `datasette` on Digital Ocean.\r\n\r\nExample: [Deploy Docker containers into Digital Ocean](https://blog.machinebox.io/deploy-machine-box-in-digital-ocean-385265fbeafd).\r\n\r\nDigital Ocean also has a preconfigured VM running Docker that can be launched from the command line via the Digital Ocean API: [Docker One-Click Application](https://www.digitalocean.com/docs/one-clicks/docker/).\r\n\r\nRelated:\r\n- Launching containers in Digital Ocean servers running docker: [How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-provision-and-manage-remote-docker-hosts-with-docker-machine-on-ubuntu-16-04)\r\n- [How To Use Doctl, the Official DigitalOcean Command-Line Client](https://www.digitalocean.com/community/tutorials/how-to-use-doctl-the-official-digitalocean-command-line-client)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/371/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 377266351, "node_id": "MDU6SXNzdWUzNzcyNjYzNTE=", "number": 373, "title": "Views should be shown on root/index page along with tables", "user": {"value": 416374, "label": "gfrmin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4305096, "label": 
"0.28"}, "comments": 1, "created_at": "2018-11-05T06:28:41Z", "updated_at": "2019-05-16T00:29:22Z", "closed_at": "2019-05-16T00:29:22Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the moment the number of views is given on a datasette \"homepage\", but not links to any views themselves", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/373/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 398559195, "node_id": "MDU6SXNzdWUzOTg1NTkxOTU=", "number": 400, "title": "datasette publish cloudrun plugin", "user": {"value": 10352819, "label": "rprimet"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-01-12T14:35:11Z", "updated_at": "2019-05-03T16:57:35Z", "closed_at": "2019-05-03T16:57:35Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Google announced that they may launch a simple service for running Docker containers (previously serverless containers, now called \"cloud run\" -- link to alpha [here](https://services.google.com/fb/forms/serverlesscontainers/)). If/when this happens, it might be a good fit for publishing datasettes? (at least using the current version, manually publishing a datasette seems relatively painless).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/400/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 415575624, "node_id": "MDU6SXNzdWU0MTU1NzU2MjQ=", "number": 414, "title": "datasette requires specific version of Click", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-02-28T11:24:59Z", "updated_at": "2019-03-15T04:42:13Z", "closed_at": "2019-03-15T04:42:13Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Is `datasette` beholden to version `click==6.7`?\r\n\r\nCurrent release is at 7.0. 
Can the requirement be liberalised, eg to `>=6.7`?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/414/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 432870248, "node_id": "MDU6SXNzdWU0MzI4NzAyNDg=", "number": 431, "title": "Datasette doesn't reload when database file changes", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-04-13T16:50:43Z", "updated_at": "2019-05-02T05:13:55Z", "closed_at": "2019-05-02T05:13:54Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "My understanding of the `--reload` option was that if the database file changed `datasette` would automatically reload.\r\n\r\nI'm running on a Mac and from the `datasette` UI queries don't seem to be picking up data in a newly changed db (I checked the db timestamp - it certainly updated).\r\n\r\nI was also expecting to see some sort of log statement in the datasette logging to say that it had detected a file change and restarted, but don't see anything there?\r\n\r\nWill try to check on an Ubuntu box when I get a chance to see if this is a Mac thing.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/431/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 438200529, "node_id": "MDU6SXNzdWU0MzgyMDA1Mjk=", "number": 438, "title": "Plugins are loaded when running pytest", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-29T08:25:58Z", "updated_at": "2019-05-02T05:09:18Z", "closed_at": "2019-05-02T05:09:11Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If I have a datasette plugin installed on my system, its hooks are called when running the main datasette tests. This is probably undesirable, especially with the inspect hook in #437, as the plugin may rely on inspected state that the tests don't know about.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/438/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 438259941, "node_id": "MDU6SXNzdWU0MzgyNTk5NDE=", "number": 440, "title": "Plugin hook for additional data export formats", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-04-29T11:01:39Z", "updated_at": "2019-05-01T23:01:57Z", "closed_at": "2019-05-01T23:01:57Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be nice to have a simple way for plugins to provide additional data export formats. Might require a bit of work on the internals. 
I can work around this at a lower level with the `prepare_sanic` hook from #437 in the mean time.\r\n\r\nI guess plugins should be able to register a function which takes a row or list of rows and returns the rendered data. They'll also need to provide a file extension and probably a Content-Type.\r\n\r\nDatasette could then automatically include this format in the list of export formats on each page.\r\n\r\nLooks like this is related to #119.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/440/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 442327592, "node_id": "MDU6SXNzdWU0NDIzMjc1OTI=", "number": 456, "title": "Installing installs the tests package", "user": {"value": 7725188, "label": "hellerve"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-05-09T16:35:16Z", "updated_at": "2020-07-24T20:39:54Z", "closed_at": "2020-07-24T20:39:54Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Because `setup.py` uses `find_packages` and `tests` is on the top-level, `pip install datasette` will install a top-level package called `tests`, which is probably not desired behavior.\r\n\r\nThe offending line is here:\r\nhttps://github.com/simonw/datasette/blob/bfa2ae0d16d39bb82dbe4da4f3fdc3c7f6257418/setup.py#L40\r\n\r\nAnd only `pip uninstall datasette` with a conflicting package would warn you by default; apparently another package had the same problem, which is why I get this message when uninstalling:\r\n\r\n```\r\n$ pip uninstall datasette\r\nUninstalling datasette-0.27:\r\n Would remove:\r\n /usr/local/bin/datasette\r\n /usr/local/lib/python3.7/site-packages/datasette-0.27.dist-info/*\r\n /usr/local/lib/python3.7/site-packages/datasette/*\r\n /usr/local/lib/python3.7/site-packages/tests/*\r\n Would not remove (might be manually added):\r\n [ .. snip .. ]\r\nProceed (y/n)? \r\n```\r\n\r\nThis should be a relatively simple fix, and I could drop a PR if desired!\r\n\r\nCheers", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/456/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 459627549, "node_id": "MDU6SXNzdWU0NTk2Mjc1NDk=", "number": 523, "title": "Show total/unfiltered row count when filtering", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-06-23T22:56:48Z", "updated_at": "2019-06-24T01:38:14Z", "closed_at": "2019-06-24T01:38:14Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When I'm seeing a filtered view of a table, I'd like to be able to see something like '2 rows where status != \"closed\" (of 1000 total)' to have a context for the data I'm seeing \u2013 e.g. currently my database is being filled by an importer, so this information would be super helpful.\r\n\r\nSince this information would be a performance hit, maybe something like '12 rows where status != \"closed\" (of ??? 
total)' with lazy-loading on-click(?) could be applied (Or via a \"How many total?\" tooltip, or \u2026)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/523/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 465731062, "node_id": "MDU6SXNzdWU0NjU3MzEwNjI=", "number": 555, "title": "Static mounts with relative paths not working", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-07-09T11:38:35Z", "updated_at": "2019-07-11T16:13:22Z", "closed_at": "2019-07-11T16:13:22Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Datasette fails to serve files from static mounts that are created using relative paths `datasette --static mystatic:rel/path/to/static/dir`. \r\nI've explained the problem and the solution in the pull request: https://github.com/simonw/datasette/pull/554", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/555/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 471292050, "node_id": "MDU6SXNzdWU0NzEyOTIwNTA=", "number": 563, "title": "incorrect json url for row-level data?", "user": {"value": 10352819, "label": "rprimet"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-07-22T19:59:38Z", "updated_at": "2019-10-21T02:03:09Z", "closed_at": "2019-10-21T02:03:09Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "While visiting [this example page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/people/uk.org.publicwhip%2Fperson%2F10001) (linked from Datasette documentation), manually clicking on [the link](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/people/uk.org.publicwhip%2Fperson%2F10001?_format=json) (\"This data as .json\") to the json data results in an error 500 `data() got an unexpected keyword argument 'as_format'`\r\n\r\nThe [JSON page linked to from the documentation](https://register-of-members-interests.datasettes.com/regmem-d22c12c/people/uk.org.publicwhip%2Fperson%2F10001.json) however is correct (the page address ends in `.json` rather than using a query string `?format=json`)\r\n\r\nThis particular datasette demo page is now a few versions behind, but I was able to reproduce the issue using v0.29.2 and a downloaded copy of the demo database (and also with the current HEAD).\r\n\r\nHere is a stack trace:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 101, in __call__\r\n return await view(new_scope, receive, send)\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 173, in view\r\n request, **scope[\"url_route\"][\"kwargs\"]\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/views/base.py\", line 267, in get\r\n request, database, hash, 
correct_hash_provided, **kwargs\r\n File \"/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/views/base.py\", line 399, in view_get\r\n request, database, hash, **kwargs\r\nTypeError: data() got an unexpected keyword argument 'as_format'\r\n\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/563/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 492153532, "node_id": "MDU6SXNzdWU0OTIxNTM1MzI=", "number": 573, "title": "Exposing Datasette via Jupyter-server-proxy", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-09-11T10:32:36Z", "updated_at": "2020-03-26T09:41:30Z", "closed_at": "2020-03-26T09:41:30Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It is possible to expose a running `datasette` service in a Jupyter environment such as a MyBinder environment using the [`jupyter-server-proxy`](https://github.com/jupyterhub/jupyter-server-proxy).\r\n\r\nFor example, using [this demo Binder](https://mybinder.org/v2/gh/binder-examples/r/master?filepath=index.ipynb) which has the server proxy installed, we can then upload a simple test database from the notebook homepage, from a Jupyter terminal install datasette and set it running against the test db on eg port 8001 and then view it via the path `proxy/8001`.\r\n\r\nClicking links results in 404s though because the `datasette` links aren't relative to the current path?\r\n\r\n![image](https://user-images.githubusercontent.com/82988/64689964-44b69280-d487-11e9-8f9f-3681422bcc9f.png)\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/573/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 546961357, "node_id": "MDU6SXNzdWU1NDY5NjEzNTc=", "number": 656, "title": "Display of the column definitions", "user": {"value": 6371750, "label": "JBPressac"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-01-08T16:16:53Z", "updated_at": "2020-01-20T14:17:11Z", "closed_at": "2020-01-20T14:14:33Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hello,\r\nIs the nice display of headers and definitions at the top of https://fivethirtyeight.datasettes.com/fivethirtyeight-ac35616/antiquities-act%2Factions_under_antiquities_act configured in the metadata.json file?\r\nThank you,", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/656/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 600120439, "node_id": "MDU6SXNzdWU2MDAxMjA0Mzk=", "number": 726, "title": "Foreign key : case of a link to the associated row not 
displayed", "user": {"value": 6371750, "label": "JBPressac"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-04-15T08:31:27Z", "updated_at": "2020-04-27T22:05:47Z", "closed_at": "2020-04-27T22:05:46Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hello,\r\nI use Datasette to publish tsv files linked together by foreign keys declared thanks to sqlite-utils. In one table, [prelib_personne](http://crbc-dataset.huma-num.fr/prelib/prelib_personne), the foreign keys are properly indicated by a link to the associated row (for instance ville_naissance_id is properly linked to prelib_ville). But every link to the foreign key prelib_oeuvre.id fails. For instance, [prelib_ecritoeuvre](http://crbc-dataset.huma-num.fr/prelib/prelib_ecritoeuvre) has links to prelib_personne but none to prelib_oeuvre, despite the schema:\r\n\r\nCREATE TABLE \"prelib_ecritoeuvre\" (\r\n\"id\" INTEGER,\r\n \"fonction_id\" INTEGER,\r\n \"oeuvre_id\" INTEGER,\r\n \"personne_id\" INTEGER\r\n ,PRIMARY KEY ([id]),\r\n FOREIGN KEY(fonction_id) REFERENCES prelib_fonctionecritoeuvre(id),\r\n FOREIGN KEY(personne_id) REFERENCES prelib_personne(id),\r\n FOREIGN KEY(oeuvre_id) REFERENCES prelib_oeuvre(id)\r\n); \r\n\r\nWould you have any clue to investigate the reason for this problem?\r\nThanks,", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/726/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 620969465, "node_id": "MDU6SXNzdWU2MjA5Njk0NjU=", "number": 767, "title": "Allow to specify a URL fragment for canned queries", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5471110, "label": "Datasette 0.43"}, "comments": 2, "created_at": "2020-05-19T13:17:42Z", "updated_at": "2020-05-27T21:52:25Z", "closed_at": "2020-05-27T21:52:25Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Canned queries are very useful to direct users to prepared data and views. I like to use them with charts using datasette-vega a lot, because people get a direct impression at first glance.\r\n\r\ndatasette-vega doesn't show up by default though, and users have to click through to it. 
Also, datasette-vega does not always guess the best way to render columns correctly, so it would be nice if I could specify a URL fragment in my canned queries to make sure people see what I want them to see.\r\n\r\nMy current workaround is to include a fragment link in ``description_html`` and ask people to reload the page, like [here](https://data.rixx.de/songs/show_by_bpm#g.mark=bar&g.x_column=bpm_floor&g.x_type=ordinal&g.y_column=bpm_count&g.y_type=quantitative), which is a bit hacky.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/767/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 640330278, "node_id": "MDU6SXNzdWU2NDAzMzAyNzg=", "number": 851, "title": "Having trouble getting writable canned queries to work", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-06-17T10:30:28Z", "updated_at": "2020-06-17T10:33:25Z", "closed_at": "2020-06-17T10:32:33Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hey,\r\n\r\nI'm trying to get canned inserts to work. I have a DB with the following metadata:\r\n\r\n```text\r\nsqlite> .mode line\r\n\r\nsqlite> select name, sql from sqlite_master where name like '%search%';\r\n name = search\r\n sql = CREATE TABLE \"search\" (\"id\" INTEGER NOT NULL PRIMARY KEY, \"name\" VARCHAR(255) NOT NULL, \"url\" VARCHAR(255) NOT NULL)\r\n```\r\n\r\n```yaml\r\n# ...\r\nqueries:\r\n add_search:\r\n sql: insert into search(name, url) VALUES (:name, :url),\r\n write: true\r\n```\r\nwhich renders a form as expected, but when I submit the form I get an `incomplete input` error.\r\n\r\n![image](https://user-images.githubusercontent.com/3243482/84885285-7f46fe80-b09b-11ea-8a05-92c8986bbf7a.png)\r\n\r\nI've attached a debugger to see where the error comes from, because the `incomplete input` string doesn't appear in the datasette codebase.\r\n\r\nInside `datasette.database.Database.execute_write_fn` \r\n\r\nhttps://github.com/simonw/datasette/blob/4fa7cf68536628344356d3ef8c92c25c249067a0/datasette/database.py#L69\r\n\r\n```py\r\nresult = await reply_queue.async_q.get()\r\n```\r\n\r\nthis line raises an exception. \r\n\r\nThat led me to believe I had something wrong with my SQL. 
But running the command in `sqlite3` inserts the record just fine.\r\n\r\n```text\r\nsqlite> insert into search (name, url) values ('my name', 'my url');\r\nsqlite> SELECT last_insert_rowid();\r\nlast_insert_rowid() = 3\r\n```\r\n\r\nSo I'm a bit lost here.\r\n\r\n---\r\n- datasette, version 0.44\r\n- Python 3.8.1", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/851/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 649702801, "node_id": "MDU6SXNzdWU2NDk3MDI4MDE=", "number": 888, "title": "URLs in release notes point to 127.0.0.1", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-07-02T07:28:04Z", "updated_at": "2020-09-15T20:39:50Z", "closed_at": "2020-09-15T20:39:49Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Just a quick heads up:\r\n\r\nRelease notes for 0.45 include urls that point to localhost. \r\n\r\nhttps://github.com/simonw/datasette/releases/tag/0.45", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/888/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 649907676, "node_id": "MDU6SXNzdWU2NDk5MDc2NzY=", "number": 889, "title": "asgi_wrapper plugin hook is crashing at startup", "user": {"value": 49260, "label": "amjith"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-07-02T12:53:13Z", "updated_at": "2020-09-15T20:40:52Z", "closed_at": "2020-09-15T20:40:52Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Steps to reproduce: \r\n\r\n1. Install datasette-media plugin \r\n`pip install datasette-media`\r\n2. Launch datasette\r\n`datasette databasename.db`\r\n3. Error\r\n\r\n```\r\nINFO: Started server process [927704]\r\nINFO: Waiting for application startup.\r\nERROR: Exception in 'lifespan' protocol\r\nTraceback (most recent call last):\r\n File \"/home/amjith/.virtualenvs/itsysearch/lib/python3.7/site-packages/uvicorn/lifespan/on.py\", line 48, in main\r\n await app(scope, self.receive, self.send)\r\n File \"/home/amjith/.virtualenvs/itsysearch/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py\", line 45, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/home/amjith/.virtualenvs/itsysearch/lib/python3.7/site-packages/datasette_media/__init__.py\", line 9, in wrapped_app\r\n path = scope[\"path\"]\r\nKeyError: 'path'\r\nERROR: Application startup failed. 
Exiting.\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/889/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 708185405, "node_id": "MDU6SXNzdWU3MDgxODU0MDU=", "number": 975, "title": "Dependabot couldn't authenticate with https://pypi.python.org/simple/", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-24T13:44:40Z", "updated_at": "2020-09-25T13:34:34Z", "closed_at": "2020-09-25T13:34:34Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Dependabot couldn't authenticate with https://pypi.python.org/simple/.\n\nYou can provide authentication details in your [Dependabot dashboard](https://app.dependabot.com/accounts/simonw) by clicking into the account menu (in the top right) and selecting 'Config variables'.\n\n[View the update logs](https://app.dependabot.com/accounts/simonw/update-logs/48611311).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/975/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 721050815, "node_id": "MDU6SXNzdWU3MjEwNTA4MTU=", "number": 1019, "title": "\"Edit SQL\" button on canned queries", "user": {"value": 639012, "label": "jsfenfen"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6026070, "label": "0.51"}, "comments": 7, "created_at": "2020-10-14T00:51:39Z", "updated_at": "2020-10-23T19:44:06Z", "closed_at": "2020-10-14T03:44:23Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Feature request: Would it be possible to add an \"edit this query\" button on canned queries? Clicking it would open the canned query as an editable sql query. I think the intent is to have named parameters to allow this, but sometimes you just gotta rewrite it? 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1019/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 752966476, "node_id": "MDU6SXNzdWU3NTI5NjY0NzY=", "number": 1114, "title": "--load-extension=spatialite not working with datasetteproject/datasette docker image", "user": {"value": 2182, "label": "danp"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-11-29T17:35:20Z", "updated_at": "2022-01-20T21:29:42Z", "closed_at": "2020-11-29T17:37:45Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "https://github.com/simonw/datasette/commit/6aa5886379dd9017215904fb28567b80018902f9 added the `--load-extension=spatialite` shortcut looking for the extension in these places:\r\n\r\nhttps://github.com/simonw/datasette/blob/12877d7a48e2aa28bb5e780f929a218f7265d849/datasette/utils/__init__.py#L56-L60\r\n\r\nHowever, in the datasetteproject/datasette docker image the file is at `/usr/local/lib/mod_spatialite.so`.\r\n\r\nThis results in the example command [here](https://docs.datasette.io/en/stable/installation.html#loading-spatialite) failing:\r\n\r\n```\r\n% docker run --rm -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/data.db --load-extension=spatialite\r\nError: Could not find SpatiaLite extension\r\n```\r\n\r\nBut it does work when given an explicit path:\r\n\r\n```\r\n% docker run --rm -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/data.db --load-extension=/usr/local/lib/mod_spatialite.so\r\nINFO: Started server process [1]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nINFO: Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)\r\n...\r\n```\r\n\r\nPerhaps `SPATIALITE_PATHS` should include `/usr/local/lib/mod_spatialite.so`?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1114/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 754178780, "node_id": "MDU6SXNzdWU3NTQxNzg3ODA=", "number": 1121, "title": "Table actions cog is misaligned", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-12-01T08:41:25Z", "updated_at": "2020-12-03T01:03:19Z", "closed_at": "2020-12-03T00:33:36Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the moment it looks like this\r\nhttps://datasette-graphql-demo.datasette.io/github/repos\r\n\r\n![image](https://user-images.githubusercontent.com/3243482/100716533-e6e2d300-33c9-11eb-866e-1e83ba228bf5.png)\r\n\r\nAdding a few flex statements fixes the alignment and centers `h1` text and the cog icon vertically.\r\n![image](https://user-images.githubusercontent.com/3243482/100716605-f8c47600-33c9-11eb-8d69-0e37499cf641.png)\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, 
"performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1121/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 777677671, "node_id": "MDU6SXNzdWU3Nzc2Nzc2NzE=", "number": 1169, "title": "Prettier package not actually being cached", "user": {"value": 3637, "label": "benpickles"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-01-03T17:04:41Z", "updated_at": "2021-01-04T19:52:34Z", "closed_at": "2021-01-04T19:52:33Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "With the current configuration Prettier seems to be installed on every run - which can been [seen from the output](https://github.com/simonw/datasette/runs/1631686028?check_suite_focus=true#step:4:4):\r\n\r\n```\r\nnpx: installed 1 in 5.166s\r\n```\r\n\r\nPrettier isn't explicitly being installed (it's surprising that actually installing the dependencies isn't included in the [actions/cache docs](https://github.com/actions/cache/blob/main/examples.md#macos-and-ubuntu)) but it turns out that `npx` will automatically install the package for the specified command (it actually _guesses_ the package name from the name of the command). I'm not sure where Prettier ends up being installed but it doesn't appear to be in `~/.npm` according to the [post-cache output](https://github.com/simonw/datasette/runs/1631686028#step:7:2) (or `./node_modules` when I tested locally):\r\n\r\n```\r\nCache hit occurred on the primary key Linux-npm-565329898f77080e58b14d45cf816ab94877e6f2ece9d395c369c533548a7ee7, not saving cache.\r\n```\r\n\r\nI think there are a couple of approaches to tackling this, you could manually install/cache Prettier within the action, or add a `package.json` with Prettier. 
I would go with the latter because it's a more standard and maintainable approach and it will also ensure that, along with CI, anyone working on the project will run the same version of Prettier (you'll also get Dependabot JavaScript updates).\r\n\r\nI've tested the [`package.json` approach on a branch](https://github.com/simonw/datasette/compare/main...benpickles:cache-prettier) and am happy to turn it into a pull request if you fancy.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1169/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 794554881, "node_id": "MDU6SXNzdWU3OTQ1NTQ4ODE=", "number": 1208, "title": "A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper", "user": {"value": 4488943, "label": "kbaikov"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-01-26T20:56:28Z", "updated_at": "2021-03-11T16:15:49Z", "closed_at": "2021-03-11T16:15:49Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Your code is full of open files that are never closed, especially when you deal with reading/writing json/yaml files.\r\n\r\nIf you run python with warnings enabled this problem becomes evident.\r\nThis probably contributes to some memory leaks in long running datasettes if the GC will not 'collect' those resources properly.\r\n\r\nThis is easily fixed by using a context manager instead of just using open:\r\n```python\r\nwith open('some_file', 'w') as opened_file:\r\n opened_file.write('string')\r\n```\r\n\r\nIn some newer parts of the code you use Path objects 'read_text' and 'write_text' functions which close the file properly and are preferred in some cases.\r\n\r\n\r\nIf you want I can create a PR for all places I found this pattern in.\r\n\r\n\r\nBelow is a fraction of places where I found a ResourceWarning:\r\n```python\r\n\r\nupdate-docs-help.py:\r\n 20 actual = actual.replace(\"Usage: cli \", \"Usage: datasette \")\r\n 21: open(docs_path / filename, \"w\").write(actual)\r\n 22 \r\n\r\ndatasette\\app.py:\r\n 210 ):\r\n 211: inspect_data = json.load((config_dir / \"inspect-data.json\").open())\r\n 212 if immutables is None:\r\n\r\n 266 if config_dir and (config_dir / \"settings.json\").exists() and not config:\r\n 267: config = json.load((config_dir / \"settings.json\").open())\r\n 268 self._settings = dict(DEFAULT_SETTINGS, **(config or {}))\r\n\r\n 445 self._app_css_hash = hashlib.sha1(\r\n 446: open(os.path.join(str(app_root), \"datasette/static/app.css\"))\r\n 447 .read()\r\n\r\ndatasette\\cli.py:\r\n 130 else:\r\n 131: out = open(inspect_file, \"w\")\r\n 132 loop = asyncio.get_event_loop()\r\n\r\n 459 if inspect_file:\r\n 460: inspect_data = json.load(open(inspect_file))\r\n 461 \r\n\r\n```\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1208/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 797651831, "node_id": 
"MDU6SXNzdWU3OTc2NTE4MzE=", "number": 1212, "title": "Tests are very slow. ", "user": {"value": 4488943, "label": "kbaikov"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-01-31T08:06:16Z", "updated_at": "2021-02-19T22:54:13Z", "closed_at": "2021-02-19T22:54:13Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Working on my PR i noticed that tests are very slow.\r\n\r\nThe plain pytest run took about 37 minutes for me.\r\nHowever i could shave of about 10 minutes from that if i used pytest-xdist to parallelize execution.\r\n`pytest -n 8` is run only in 28 minutes on my machine.\r\n\r\nI can create a PR to mention that in your documentation.\r\nThis will be a simple change to add pytest-xdist to requirements and change a command to run pytest in documentation.\r\n\r\nDoes that make sense to you?\r\n\r\nAfter a bit more investigation it looks like python-xdist is not an answer. It creates a race condition for tests that try to clead temp dir before run.\r\n\r\nProfiling shows that most time is spent on conn.executescript(TABLES) in make_app_client function. Which makes sense.\r\n\r\nPerhaps the better approach would be look at the app_client fixture which is already session scoped, but not used by all test cases.\r\nAnd/or use conn = sqlite3.connect(\":memory:\") which is much faster.\r\nAnd/or truncate tables after each TC instead of deleting the file and re-creating them.\r\n\r\nI can take a look which is the best approach if you give the go-ahead. ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1212/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 807433181, "node_id": "MDU6SXNzdWU4MDc0MzMxODE=", "number": 1224, "title": "can't start immutable databases from configuration dir mode", "user": {"value": 295329, "label": "camallen"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-02-12T17:50:13Z", "updated_at": "2021-03-29T00:17:31Z", "closed_at": "2021-03-29T00:17:31Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Say I have a `/databases/` directory with multiple sqlite db files in that dir (`1.db` & `2.db`) and an `inspect-data.json` file.\r\n\r\nIf I start datasette via `datasette -h 0.0.0.0 /databases/` then the resulting databases are set to `is_mutable: true` as inspected via http://127.0.0.1:8001/-/databases.json\r\n\r\nI don't want to have to list out the databases by name, e.g. 
`datasette -i /databases/1.db -i /databases/2.db` as I want the system to autodetect the SQLite dbs I have in the configuration directory \r\n\r\nAccording to the docs outlined in https://docs.datasette.io/en/latest/settings.html?highlight=immutable#configuration-directory-mode this should be possible\r\n> `inspect-data.json` the result of running datasette inspect - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running\r\n \r\nI believe that if the `inspect-data.json` file is present, then in theory the databases will be automatically set to immutable via this code https://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/app.py#L211-L216\r\n\r\nHowever it appears the Click Multiple Options will return a tuple via https://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/cli.py#L311-L317\r\n\r\nThe resulting tuple is passed to the Datasette app via `kwargs` and overrides the behaviour to set the databases to immutable via this arg https://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/app.py#L182\r\n\r\nIf you think this is a bug and needs fixing, I am willing to make a PR to check for the empty `immutable` tuple before calling the Datasette class initializer as I think leaving that class interface alone is the best path here.\r\n\r\nThoughts?\r\n\r\nAlso - I'm loving Datasette, it truly is a wonderful tool, thank you :)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1224/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 839008371, "node_id": "MDU6SXNzdWU4MzkwMDgzNzE=", "number": 1274, "title": "Might there be some way to comment metadata.json?", "user": {"value": 192568, "label": "mroswell"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-03-23T18:33:00Z", "updated_at": "2021-03-23T20:14:54Z", "closed_at": "2021-03-23T20:14:54Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I don't know what license to use... Would be nice to be able to add a comment regarding that uncertainty in my metadata.json file\r\n\r\nI like laktak's little video comment in favor of Human JSON (Hjson)\r\nhttps://stackoverflow.com/questions/244777/can-comments-be-used-in-json\r\n\r\nHmmm... one of the commenters there said comments are allowed in YAML... so that's a good argument for YAML.\r\n\r\nAnyhow, just came to mind, and thought I'd mention it here. 
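\r\n\r\nFor what it's worth, a YAML metadata file would make inline comments possible - a minimal sketch, assuming the YAML file supports the same keys as metadata.json:\r\n\r\n```yaml\r\n# license still undecided - revisit before publishing\r\ntitle: My database\r\nlicense: CC BY 4.0\r\n```\r\n\r\n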
Looks like https://hjson.github.io/ has the details.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1274/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 860625833, "node_id": "MDU6SXNzdWU4NjA2MjU4MzM=", "number": 1300, "title": "Make row available to `render_cell` plugin hook", "user": {"value": 3243482, "label": "abdusco"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-04-18T10:14:37Z", "updated_at": "2022-07-07T16:34:05Z", "closed_at": "2022-07-07T16:31:22Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "*Original title: **Generating URL for a row inside `render_cell` hook***\r\n\r\nHey,\r\nI am using Datasette to view a database that contains video metadata. It has BLOB columns that contain video thumbnails in JPG format (around 100-500KB per row). \r\n\r\nI've registered an output formatter that extends `datasette.blob_renderer.render_blob` function and serves the column with `image/jpeg` content type.\r\n\r\n```python\r\nfrom datasette.blob_renderer import render_blob\r\n\r\nasync def render_jpg(datasette, database, rows, columns, request, table, view_name):\r\n response = await render_blob(datasette, database, rows, columns, request, table, view_name)\r\n response.content_type = \"image/jpeg\"\r\n response.headers[\"Content-Disposition\"] = f'inline; filename=\"image.jpg\"'\r\n return response\r\n\r\n\r\n@hookimpl\r\ndef register_output_renderer():\r\n return {\r\n \"extension\": \"jpg\",\r\n \"render\": render_jpg,\r\n \"can_render\": lambda: True,\r\n }\r\n```\r\n\r\nThis works well. I can visit `http://localhost:8001/mydb/videos/1.jpg?_blob_column=thumbnail` and view the image.\r\n\r\nI want to display the image directly with an `<img>` tag (lazy-loaded of course). So, I need a URL, because embedding base64 would increase the page size too much (each image > 100KB). \r\n\r\nDatasette generates a link with `.blob` extension for blob columns. It does this by calling `datasette.urls.row_blob`\r\n\r\nhttps://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/views/table.py#L169-L179\r\n\r\nBut I have no way of getting the row inside the `render_cell` hook. 
\r\n\r\n```python\r\n@hookimpl\r\ndef render_cell(value, column, table, database, datasette):\r\n if isinstance(value, bytes) and imghdr.what(None, value):\r\n # generate url\r\n return '$renderedLink'\r\n```\r\n\r\nAny pointers?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1300/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 864969683, "node_id": "MDU6SXNzdWU4NjQ5Njk2ODM=", "number": 1305, "title": "Index view crashes when any database table is not accessible to actor", "user": {"value": 416374, "label": "gfrmin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-22T13:44:22Z", "updated_at": "2021-06-02T04:26:29Z", "closed_at": "2021-06-02T04:26:29Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Because of https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L63, the ```tables``` dict built does not include invisible tables; however, if https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L80 is reached (because table_counts was not successfully initialized, e.g. due to a very large database) then as db.get_all_foreign_keys() returns ALL tables, a KeyError will be raised.\r\n\r\nThis error can be recreated with the fixtures.db if any table is hidden, e.g. by adding something like ```\"foreign_key_references\": {\r\n \"allow\": {}\r\n }``` to fixtures-metadata.json and deleting ```or not table_counts``` from https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L77.\r\n\r\nI'm not sure how to fix this error; perhaps by testing if the table is in the aforementioned ```tables``` dict.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1305/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 884952179, "node_id": "MDU6SXNzdWU4ODQ5NTIxNzk=", "number": 1320, "title": "Can't use apt-get in Dockerfile when using datasetteproj/datasette as base", "user": {"value": 2670795, "label": "brandonrobertz"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-05-10T19:37:27Z", "updated_at": "2021-05-24T18:15:56Z", "closed_at": "2021-05-24T18:07:08Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "The datasette base Docker image is super convenient, but there's one problem: if any of the plugins you install require additional system dependencies (e.g., xz, git, curl) then any attempt to use apt in said Dockerfile results in an explosion:\r\n\r\n```\r\n$ docker-compose build\r\nBuilding server\r\n[+] Building 9.9s (7/9)\r\n => [internal] load build definition from Dockerfile 0.0s\r\n => => transferring dockerfile: 666B 0.0s\r\n => [internal] load .dockerignore 0.0s\r\n => => transferring context: 34B 0.0s\r\n => [internal] load metadata for docker.io/datasetteproject/datasette:latest 0.6s\r\n => [base 1/4] FROM 
docker.io/datasetteproject/datasette@sha256:2250d0fbe57b1d615a8d6df0c9d43deb9533532e00bac68854773d8ff8dcf00a 0.0s\r\n => [internal] load build context 1.8s\r\n => => transferring context: 2.44MB 1.8s\r\n => CACHED [base 2/4] WORKDIR /datasette 0.0s\r\n => ERROR [base 3/4] RUN apt-get update && apt-get install --no-install-recommends -y git ssh curl xz-utils 9.2s\r\n------\r\n > [base 3/4] RUN apt-get update && apt-get install --no-install-recommends -y git ssh curl xz-utils:\r\n#6 0.446 Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]\r\n#6 0.449 Get:2 http://deb.debian.org/debian buster InRelease [121 kB]\r\n#6 0.459 Get:3 http://httpredir.debian.org/debian sid InRelease [157 kB]\r\n#6 0.784 Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]\r\n#6 0.790 Get:5 http://httpredir.debian.org/debian sid/main amd64 Packages [8626 kB]\r\n#6 1.003 Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]\r\n#6 1.180 Get:7 http://security.debian.org/debian-security buster/updates/main amd64 Packages [286 kB]\r\n#6 7.095 Get:8 http://deb.debian.org/debian buster-updates/main amd64 Packages [10.9 kB]\r\n#6 8.058 Fetched 17.2 MB in 8s (2243 kB/s)\r\n#6 8.058 Reading package lists...\r\n#6 9.166 E: flAbsPath on /var/lib/dpkg/status failed - realpath (2: No such file or directory)\r\n#6 9.166 E: Could not open file - open (2: No such file or directory)\r\n#6 9.166 E: Problem opening\r\n#6 9.166 E: The package lists or status file could not be parsed or opened.\r\n```\r\n\r\nThe problem seems to be from completely wiping out `/var/lib/dpkg` in the upstream Dockerfile:\r\n\r\nhttps://github.com/simonw/datasette/blob/1b697539f5b53cec3fe13c0f4ada13ba655c88c7/Dockerfile#L18\r\n\r\nI've tested without removing the directory and apt works as expected.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1320/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 893890496, "node_id": "MDU6SXNzdWU4OTM4OTA0OTY=", "number": 1332, "title": "?_facet_size=X to increase number of facets results on the page", "user": {"value": 192568, "label": "mroswell"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-05-18T02:40:16Z", "updated_at": "2021-05-27T16:13:07Z", "closed_at": "2021-05-23T00:34:37Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Is there a way to add a parameter to the URL to modify default_facet_size?\r\n\r\nLikewise, a way to produce a link on the three dots to expand to all items (or match previous number of items, or add x more)?\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1332/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 913017577, "node_id": "MDU6SXNzdWU5MTMwMTc1Nzc=", "number": 1365, "title": "pathlib.Path breaks internal schema", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": 
null, "comments": 1, "created_at": "2021-06-07T01:40:37Z", "updated_at": "2021-06-21T15:57:39Z", "closed_at": "2021-06-21T15:57:39Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Ran into an issue while trying to build a plugin to render GeoJSON. I'm using pytest's `tmp_path` fixture, which is a `pathlib.Path`, to get a temporary database path. I was getting a weird error involving writes, but I was doing reads. Turns out it's the internal database trying to insert a `Path` where it wants a string.\r\n\r\nMy test looked like this:\r\n\r\n```python\r\n@pytest.mark.asyncio\r\nasync def test_render_feature_collection(tmp_path):\r\n database = tmp_path / \"test.db\"\r\n datasette = Datasette([database])\r\n\r\n # this will break with a path\r\n await datasette.refresh_schemas()\r\n\r\n # build a url\r\n url = datasette.urls.table(database.stem, TABLE_NAME, format=\"geojson\")\r\n\r\n response = await datasette.client.get(url)\r\n fc = response.json()\r\n\r\n assert 200 == response.status_code\r\n```\r\n\r\nI only ran into this while running tests, because passing in database paths from the CLI uses strings, but it's a weird error and probably something other people have run into.\r\n\r\nThe fix is easy enough: Convert the path to a string and everything works. So this:\r\n\r\n```python\r\n@pytest.mark.asyncio\r\nasync def test_render_feature_collection(tmp_path):\r\n database = tmp_path / \"test.db\"\r\n datasette = Datasette([str(database)])\r\n\r\n # this is fine now\r\n await datasette.refresh_schemas()\r\n```\r\n\r\nThis could (probably, haven't tested) be fixed [here](https://github.com/simonw/datasette/blob/03ec71193b9545536898a4bc7493274fec48bdd7/datasette/app.py#L357) by calling `str(db.path)` or by doing that conversion earlier.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1365/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 939051549, "node_id": "MDU6SXNzdWU5MzkwNTE1NDk=", "number": 1388, "title": "Serve using UNIX domain socket", "user": {"value": 80737, "label": "aslakr"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 13, "created_at": "2021-07-07T16:13:37Z", "updated_at": "2021-07-11T01:18:38Z", "closed_at": "2021-07-10T23:38:32Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Would it be possible to make datasette serve using UNIX domain socket similar to Uvicorn's ``--uds``?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1388/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 988553806, "node_id": "MDU6SXNzdWU5ODg1NTM4MDY=", "number": 1457, "title": "suggestion: distinguish names in `--static` documentation", "user": {"value": 51016, "label": "ctb"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-05T17:04:27Z", "updated_at": "2021-10-14T18:39:55Z", "closed_at": "2021-10-14T18:39:55Z", "author_association": "CONTRIBUTOR", 
"pull_request": null, "body": "Over in https://docs.datasette.io/en/stable/custom_templates.html#serving-static-files, there is the slightly comical example command -\r\n\r\n```\r\ndatasette -m metadata.json --static static:static/ --memory\r\n```\r\n\r\n(now, with MORE STATIC!)\r\n\r\nIt took me a while to sort out all the URLs and paths involved because I wasn't being very clever. But in the interests of simplification and distinction, I might suggest something like\r\n\r\n```\r\ndatasette -m metadata.json --static loc:static-files/ --memory\r\n```\r\n\r\nI will submit a PR for your consideration.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1457/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 994390593, "node_id": "MDU6SXNzdWU5OTQzOTA1OTM=", "number": 1468, "title": "Faceting for custom SQL queries", "user": {"value": 72577720, "label": "MichaelTiemannOSC"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-09-13T02:52:16Z", "updated_at": "2021-09-13T04:54:22Z", "closed_at": "2021-09-13T04:54:17Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Facets are awesome. But not when I need to join to tidy tables together. Or even just running explicitly the default SQL query that simply lists all the rows and columns of a table (up to SIZE). That is to say, when I browse a table, I see facets:\r\n\r\nhttps://latest.datasette.io/fixtures/compound_three_primary_keys\r\n\r\nBut when I run a custom query, I don't:\r\n\r\nhttps://latest.datasette.io/fixtures?sql=select+pk1%2C+pk2%2C+pk3%2C+content+from+compound_three_primary_keys+order+by+pk1%2C+pk2%2C+pk3+limit+101\r\n\r\nIs there an idiom to cause custom SQL to come back with facet suggestions?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1468/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1023243105, "node_id": "I_kwDOBm6k_c48_XNh", "number": 1486, "title": "pipx installation instructions for plugins don't reference pipx inject", "user": {"value": 41546558, "label": "RhetTbull"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-10-12T00:43:42Z", "updated_at": "2021-10-13T21:09:11Z", "closed_at": "2021-10-13T21:09:11Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "The datasette [installation instructions](https://github.com/simonw/datasette/blob/main/docs/installation.rst) discuss how to install with pipx, how to upgrade with pipx, and how to upgrade plugins with pipx but do not mention how to install a plugin with pipx. You discussed this on your [blog](https://til.simonwillison.net/python/installing-upgrading-plugins-with-pipx) but looks like this didn't make it in when you updated the docs for pipx (#756). 
\r\n\r\nI'll submit a PR shortly to fix this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1486/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1059549523, "node_id": "I_kwDOBm6k_c4_J3FT", "number": 1526, "title": "Add to vercel.json, rather than overwriting it.", "user": {"value": 192568, "label": "mroswell"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-11-22T00:47:12Z", "updated_at": "2021-11-22T04:49:45Z", "closed_at": "2021-11-22T04:13:47Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'd like to be able to add to vercel.json. But Datasette overwrites whatever I put in that file. I originally reported this here:\r\nhttps://github.com/simonw/datasette-publish-vercel/issues/51\r\n\r\nIn that case, I wanted to do a rewrite... and now I need to do 301 redirects (because we had to rename our site).\r\n\r\nCan this be addressed?\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1526/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1076388044, "node_id": "I_kwDOBm6k_c5AKGDM", "number": 1547, "title": "Writable canned queries fail to load custom templates", "user": {"value": 127565, "label": "wragge"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 6, "created_at": "2021-12-10T03:31:48Z", "updated_at": "2022-01-13T22:27:59Z", "closed_at": "2021-12-19T21:12:00Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I've created a canned query with `\"write\": true` set. I've also created a custom template for it, but the template doesn't seem to be found. If I look in the HTML I see (`stock_exchange` is the db name):\r\n\r\n``\r\n\r\nMy non-writeable canned queries pick up custom templates as expected, and if I look at their HTML I see the canned query name added to the templates considered (the canned query here is `date_search`):\r\n\r\n``\r\n\r\nSo it seems like the writeable canned query is behaving differently for some reason. Is it an authentication thing? 
I'm using the built-in `--root` authentication.\r\n\r\nThanks!\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1547/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1078702875, "node_id": "I_kwDOBm6k_c5AS7Mb", "number": 1552, "title": "Allow to set `facets_array` in metadata (like current `facets`)", "user": {"value": 3556, "label": "davidbgk"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 9, "created_at": "2021-12-13T16:00:44Z", "updated_at": "2022-01-13T22:26:15Z", "closed_at": "2021-12-16T18:47:48Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "For now, you can set a `facets` value (array) in your metadata file but I couldn't find a way to set a `facets_array` in order to provide default facets for arrays (like tags). My use-case is to access [that kind of view](https://latest.datasette.io/fixtures/facetable?_facet_array=tags) by default, without URL parameters, as with other default facets.\r\n\r\n_I'm new to datasette, and I'm willing to help with a PR if that is not already implemented and I missed it!_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1552/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1082765654, "node_id": "I_kwDOBm6k_c5AibFW", "number": 1561, "title": "add hash id to \"_memory\" url if hashed url mode is turned on and crossdb is also turned on", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-17T00:45:12Z", "updated_at": "2022-03-19T04:45:40Z", "closed_at": "2022-03-19T04:45:40Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. 
\r\n\r\nOne way that it could work is to have the _memory hash be a hash of all the individual databases.\r\n\r\nOtherwise, crossdb queries can get quite out of date if using aggressive caching.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1561/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1089529555, "node_id": "I_kwDOBm6k_c5A8ObT", "number": 1581, "title": "when hashed urls are turned on, the _memory db has improperly long-lived cache expiry", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-28T00:05:48Z", "updated_at": "2022-03-24T04:08:18Z", "closed_at": "2022-03-24T04:08:18Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "if hashed_urls are on, then a -000 suffix is added to the `_memory` database, and the cache settings are set just as if it was a normal hashed database.\r\n\r\nin particular, this header is set:\r\n\r\n`cache-control: max-age=31536000`\r\n\r\nthis is not appropriate because the `_memory-000` database isn't really hashed based on the contents of the databases (see #1561).\r\n\r\nEither the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1581/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1105916061, "node_id": "I_kwDOBm6k_c5B6vCd", "number": 1601, "title": "Add KNN and data_licenses to hidden tables list", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-01-17T14:19:57Z", "updated_at": "2022-01-20T21:29:44Z", "closed_at": "2022-01-20T04:38:54Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "They're generated by Spatialite and not very interesting in most cases.\r\n\r\n\"Screen\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1601/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1114147905, "node_id": "I_kwDOBm6k_c5CaIxB", "number": 1612, "title": "Move canned queries closer to the SQL input area", "user": {"value": 639012, "label": "jsfenfen"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2022-01-25T17:06:39Z", "updated_at": "2022-03-19T04:04:49Z", "closed_at": "2022-01-25T18:34:21Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "*Original title: Consider placing example queries above the sql input?*\r\n\r\nHi! 
Have been enjoying deploying ad hoc datasettes for collaborators to pick over! \r\n\r\nI keep finding myself manually \"fixing\" the database.html template so that the \"example queries\" (canned queries) appear directly *over* the sql box? So they are sorta more a suggestion for collaborators who aren't inclined to write their own queries? \r\n\r\nMy sense is any time I go to the trouble of writing canned queries my users should see 'em? \r\n\r\n(( I have also considered a client-side reactive-ish option where selecting a query just places the raw SQL in the box and doesn't execute it, but this seems to end up being an inconvenience, rather than a teaching tool. )) \r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1612/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1181432624, "node_id": "I_kwDOBm6k_c5Gazsw", "number": 1688, "title": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-03-26T01:17:44Z", "updated_at": "2022-03-27T01:01:14Z", "closed_at": "2022-03-26T21:34:47Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm trying to make a small plugin that depends on static assets, by following the guide [here](https://docs.datasette.io/en/stable/writing_plugins.html#writing-one-off-plugins). I made a `plugins/` directory with `datasette_nteract_data_explorer.py`. \r\n\r\nI am trying to follow the example of `datasette_vega`, and serve static assets. I created a `statics/` directory within `plugins/` to serve my JS and CSS.\r\n\r\nhttps://github.com/simonw/datasette-vega/blob/00de059ab1ef77394ba9f9547abfacf966c479c4/datasette_vega/__init__.py#L13\r\n\r\nUnfortunately, datasette doesn't seem to be able to find my assets.\r\n\r\nInput:\r\n\r\n```bash\r\ndatasette ~/Library/Safari/History.db --plugins-dir=plugins/\r\n```\r\n![Image 2022-03-25 at 9 18 17 PM](https://user-images.githubusercontent.com/9020979/160218979-a3ff474b-5255-4a76-85d1-6f90ab2e3b44.jpg)\r\n\r\nOutput:\r\n\r\n![Image 2022-03-25 at 9 11 00 PM](https://user-images.githubusercontent.com/9020979/160218733-ca5144cf-f23f-43d8-a8d3-e3a871e57f3a.jpg)\r\n\r\n\r\n\r\n\r\nI suspect this issue might go away if I move away from \"one-off\" plugin mode, but it's been a while since I created a new python package so I'm not sure how much work there is to go between \"one off\" and \"packaged for PyPI\". I'd like to try to avoid needing to repackage a new `tar.gz` file and/or reinstall my library repeatedly when developing new python code.\r\n\r\n1. Is there a way to serve static assets when using the `plugins/` directory method instead of installing plugins as a new python package?\r\n2. If not, is there a way I can work on developing a plugin without creating and repackaging tar.gz files after every change, or is that the recommended path? 
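\r\n\r\n(On question 2: assuming the plugin does eventually get packaged, an editable install is one way to skip the repackaging loop - a sketch:\r\n\r\n```bash\r\n# run from the plugin's source checkout; code changes are picked up without rebuilding a tar.gz\r\npip install -e .\r\n```\r\n\r\nthough that doesn't help with the one-off `plugins/` directory mode itself.)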
\r\n\r\nThanks for your help!\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1688/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1218133366, "node_id": "I_kwDOBm6k_c5Imz12", "number": 1728, "title": "Writable canned queries fail with useless non-error against immutable databases", "user": {"value": 127565, "label": "wragge"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 8303187, "label": "Datasette 0.62"}, "comments": 13, "created_at": "2022-04-28T03:10:34Z", "updated_at": "2022-08-14T16:34:40Z", "closed_at": "2022-08-14T16:34:40Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I've been banging my head against a wall for a while and would appreciate any pointers...\r\n\r\n- I have a writeable canned query to update rows in the db. \r\n- I'm using the github-oauth plugin for authentication. \r\n- I have `allow` set on the query to accept my GitHub id and a GH organisation. \r\n- Authentication seems to work as expected both locally and on Cloudrun -- viewing `/-/actor` gives the same result in both environments\r\n- I can access the 'padlocked' canned query in both environments.\r\n\r\nEverything seems to be the same, but the canned query works perfectly when run locally, and fails when I try it on Cloudrun. I'm redirected back to the canned query page and the db is not changed. There's nothing in the Cloudrun logs to indicate an error.\r\n\r\nAny clues as to where I should be looking for the problem?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1728/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1292368833, "node_id": "I_kwDOBm6k_c5NB_vB", "number": 1764, "title": "Keep track of config_dir in directory mode (for plugins)", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-07-03T16:57:49Z", "updated_at": "2022-07-18T01:12:45Z", "closed_at": "2022-07-18T01:12:45Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I started working on using `config_dir` with my [datasette-query-files plugin](https://github.com/eyeseast/datasette-query-files) and realized Datasette doesn't actually hold onto the `config_dir` argument. It gets used in `__init__` but then forgotten. 
It would be nice to be able to use it in plugins, though.\r\n\r\nHere's the reference issue: https://github.com/eyeseast/datasette-query-files/issues/4\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1764/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1334628400, "node_id": "I_kwDOBm6k_c5PjNAw", "number": 1779, "title": "google cloudrun updated their limits on maxscale based on memory and cpu count", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 8303187, "label": "Datasette 0.62"}, "comments": 13, "created_at": "2022-08-10T13:27:21Z", "updated_at": "2022-08-14T19:42:59Z", "closed_at": "2022-08-14T17:07:34Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "if you don't set an explicit limit on container scaling, then [google defaults to 100](https://cloud.google.com/run/docs/configuring/max-instances#limits)\r\n\r\ngoogle recently updated the [limits on container scaling](https://cloud.google.com/run/docs/configuring/max-instances#limits), such that if you set up datasette to use more memory or cpu, then you need to set the maxScale argument much smaller than 100.\r\n\r\nwould be nice if `datasette publish` could do this math for you and set the right maxScale. \r\n\r\n[Log of a failing publish run](https://github.com/labordata/warehouse/runs/7764725972?check_suite_focus=true#step:8:332).\r\n\r\n```\r\nERROR: (gcloud.run.deploy) spec.template.spec.containers[0].resources.limits.cpu: Invalid value specified for cpu. For the specified value, maxScale may not exceed 15.\r\nConsider running your workload in a region with greater capacity, decreasing your requested cpu-per-instance, or requesting an increase in quota for this region if you are seeing sustained usage near this limit, see https://cloud.google.com/run/quotas. 
Your project may gain access to further scaling by adding billing information to your account.\r\nTraceback (most recent call last):\r\n File \"/home/runner/.local/bin/datasette\", line 8, in \r\n sys.exit(cli())\r\n File \"/home/runner/.local/lib/python3.8/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/runner/.local/lib/python3.8/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/runner/.local/lib/python3.8/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/runner/.local/lib/python3.8/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/runner/.local/lib/python3.8/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/runner/.local/lib/python3.8/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/runner/.local/lib/python3.8/site-packages/datasette/publish/cloudrun.py\", line 160, in cloudrun\r\n check_call(\r\n File \"/usr/lib/python3.8/subprocess.py\", line 364, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\nsubprocess.CalledProcessError: Command 'gcloud run deploy --allow-unauthenticated --platform=managed --image gcr.io/labordata/datasette warehouse --memory 8Gi --cpu 2' returned non-zero exit status 1.\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1779/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1339663518, "node_id": "I_kwDOBm6k_c5P2aSe", "number": 1784, "title": "Include \"entrypoint\" option on `--load-extension`?", "user": {"value": 15178711, "label": "asg017"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-08-16T00:22:57Z", "updated_at": "2022-08-23T18:34:31Z", "closed_at": "2022-08-23T18:34:31Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "## Problem \r\n\r\nSQLite extensions have the option to define multiple \"entrypoints\" in each loadable extension. For example, the upcoming version of `sqlite-lines` will have 2 entrypoints: the default `sqlite3_lines_init` (which SQLite will automatically guess for) and `sqlite3_lines_noread_init`. The `sqlite3_lines_noread_init` version omits functions that read from the filesystem, which is necessary for security purposes when running untrusted SQL (which Datasette does).\r\n\r\n(Similar multiple entrypoints will also be added for sqlite-http).\r\n\r\n\r\n\r\nThe `--load-extension` flag, however, doesn't give the option to specify a different entrypoint, so the default one is always used. 
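\r\n\r\n(For context: SQLite's SQL-level `load_extension()` function already accepts an optional second argument naming the entrypoint, so the equivalent raw-SQLite call - reusing the `lines0` file and entrypoint names from above - would be:\r\n\r\n```sql\r\nSELECT load_extension('./lines0', 'sqlite3_lines0_noread_init');\r\n```\r\n\r\nDatasette just doesn't expose that second argument yet.)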
\r\n\r\n## Proposal\r\n\r\nI want there to be a new command line option of the `--load-extension` flag to specify a custom entrypoint like so:\r\n```\r\ndatasette my.db \\\r\n --load-extension ./lines0 sqlite3_lines0_noread_init\r\n```\r\n\r\nThen, under the hood, this line of code:\r\n\r\nhttps://github.com/simonw/datasette/blob/7af67b54b7d9bca43e948510fc62f6db2b748fa8/datasette/app.py#L562\r\n\r\nWould look something like this:\r\n\r\n```python\r\n conn.execute(\"SELECT load_extension(?, ?)\", [extension, entrypoint]) \r\n```\r\n\r\nOne potential problem: For backward compatibility, I'm not sure if Click allows CLI flags to have a variable number of options (\"arity\"). So I guess it could also use a `:` delimiter like `--static`:\r\n\r\n```\r\ndatasette my.db \\\r\n --load-extension ./lines0:sqlite3_lines0_noread_init\r\n```\r\n\r\nOr maybe even a new flag name?\r\n\r\n```\r\ndatasette my.db \\\r\n --load-extension-entrypoint ./lines0 sqlite3_lines0_noread_init\r\n```\r\n\r\n\r\nPersonally I prefer the `:` option... and maybe even `--load-extension` -> `--load`? Definitely out of scope for this issue tho\r\n\r\n```\r\ndatasette my.db \\\r\n --load ./lines0:sqlite3_lines0_noread_init\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1784/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1377811868, "node_id": "I_kwDOBm6k_c5SH72c", "number": 1813, "title": "missing next and next_url in JSON responses from an instance deployed on Fly ", "user": {"value": 883348, "label": "adipasquale"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-09-19T11:32:34Z", "updated_at": "2022-09-19T11:34:45Z", "closed_at": "2022-09-19T11:34:45Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "\ud83d\udc4b thank you for an incredibly useful project! \r\n\r\nI have noticed that my deployed instance on Fly does not include the `next` and `next_url` keys even for a truncated response :\r\n\r\n\"Screenshot\r\n \r\n\r\nThis is publicly accessible here: `https://collectif-objets-datasette.fly.dev/collectif-objets.json?sql=select+*+from+mairies`\r\n\r\nHowever when I run the datasette server locally with the same data I get these next keys for the exact same query:\r\n\r\n\"Screenshot\r\n\r\nI am wondering if I've missed some config or something specific to deployments on Fly.io? \r\n\r\nI am running datasette v0.62, without any specific config : \r\n\r\n- locally `poetry run datasette data/collectif-objets.sqlite`\r\n- for the deploy : `poetry run datasette publish fly data/collectif-objets.sqlite`\r\n\r\nas visible in [the Makefile](https://github.com/adipasquale/collectif-objets-datasette/blob/main/Makefile). 
_The very limited codebase is public but the sqlite db is not versioned yet because it is too large._", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1813/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1385026210, "node_id": "I_kwDOBm6k_c5SjdKi", "number": 1819, "title": "Preserve query on timeout", "user": {"value": 2182, "label": "danp"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-09-25T13:32:31Z", "updated_at": "2022-09-26T23:16:15Z", "closed_at": "2022-09-26T23:06:06Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If a query hits the timeout it shows a message like:\r\n\r\n> SQL query took too long. The time limit is controlled by the [sql_time_limit_ms](https://docs.datasette.io/en/stable/settings.html#sql-time-limit-ms) configuration option.\r\n\r\nBut the query is lost. Hitting the browser back button shows the query _before_ the one that errored.\r\n\r\nIt would be nice if the query that errored was preserved for more tweaking. This would make it similar to how \"invalid syntax\" works since #1346 / #619.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1819/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1400083043, "node_id": "I_kwDOBm6k_c5Tc5Jj", "number": 1834, "title": "inspect data is not used for caching database hash", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-10-06T17:52:01Z", "updated_at": "2022-10-06T20:06:21Z", "closed_at": "2022-10-06T20:06:08Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When databases are loaded,\r\n\r\nhttps://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/app.py#L257-L260\r\n\r\nthere is nothing preventing the rehashing of the database for immutable databases.\r\n\r\nhttps://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/database.py#L50-L53\r\n\r\nWhat I might expect is that relevant values of `inspect_data` get passed to the `Database` class to prevent re-hashing?\r\n\r\nWith data that is many gigs large, this adds significant startup time.\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1834/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1428560020, "node_id": "I_kwDOBm6k_c5VJhiU", "number": 1872, "title": "SITE-BUSTING ERROR: \"render_template() called before await ds.invoke_startup()\"", "user": {"value": 192568, "label": "mroswell"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, 
"created_at": "2022-10-30T02:28:39Z", "updated_at": "2022-10-30T06:26:01Z", "closed_at": "2022-10-30T06:26:01Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "1. My https://list.saferdisinfectants.org/disinfectants/listN page (linked from https://SaferDisinfectants.org ) has been running beautifully for a year and a half, including a GitHub Actions workflow that's been routinely updating the database.\r\n\r\n2. I received a recent report that the list page is down. I don't know when it went down, but the content is replaced with: \"render_template() called before await ds.invoke_startup()\"\r\n\r\n3. The local datasette repo runs without incident.\r\n\r\n4. The site is hosted on vercel, linked to my github repo. Perhaps some vercel changes were made, but not by anyone on our side. Here is a screenshot of the current project settings:\r\n\r\n\"Screen\r\n\r\nHere a screenshot of the latest deployment status:\r\n\"Screen\r\n\r\nThis is my repository:\r\nhttps://github.com/mroswell/list-N\r\n(I notice: datasette==0.59 in my requirements.txt file)\r\n\r\nBecause it's been long while since I actively worked on this or any other datasette project, I forget a lot of what I knew at one point. Perhaps some configuration file could be missing? Or perhaps I just need to know the right incantation to add to that vercel settings page. \r\n\r\nHelp is welcome as the nonprofit org is soon hosting its annual conference, and we'd love to have the page working again.\r\n\r\n\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1872/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1448143294, "node_id": "I_kwDOBm6k_c5WUOm-", "number": 1890, "title": "Autocomplete text entry for filter values that correspond to facets", "user": {"value": 536941, "label": "fgregg"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 16, "created_at": "2022-11-14T14:11:31Z", "updated_at": "2022-11-17T00:47:36Z", "closed_at": "2022-11-16T03:23:01Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "datasette allows users to enter in the value for named parameters into a free-text form field.\r\n\r\nI think it would add a lot of usability, if the form field could be a drop down of options when query value is already a faceted column.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1890/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1450796965, "node_id": "I_kwDOBm6k_c5WeWel", "number": 1894, "title": "Initialize CodeMirror during DOMContentLoaded instead of onload", "user": {"value": 95570, "label": "bgrins"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-16T03:52:19Z", "updated_at": "2022-11-18T07:29:02Z", "closed_at": "2022-11-18T07:29:02Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "As per https://github.com/simonw/datasette/pull/1893/files#r1023248927 
this should prevent a flash while the textarea is being replaced by CodeMirror.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1894/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1452495049, "node_id": "I_kwDOBm6k_c5Wk1DJ", "number": 1899, "title": "Clicking within the CodeMirror area below the SQL (i.e. when there's only a single line) doesn't cause the editor to get focused ", "user": {"value": 95570, "label": "bgrins"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-11-17T00:29:52Z", "updated_at": "2022-11-18T07:28:28Z", "closed_at": "2022-11-18T07:20:53Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "After the upgrade to 6 (#1893) I noticed this. I think it's because we're doing overflow:hidden to accomplish the CSS resizer.\r\n\r\nWhen there's a single line of SQL there's a gap below that line where clicking doesn't do anything. It should focus at the end of the line.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1899/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1453813400, "node_id": "I_kwDOBm6k_c5Wp26Y", "number": 1901, "title": "Some plugins show \"home\" breadcrumbs twice in the top left", "user": {"value": 95570, "label": "bgrins"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-11-17T18:44:58Z", "updated_at": "2022-11-18T07:22:37Z", "closed_at": "2022-11-18T07:02:56Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "\"Screenshot\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1901/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1473659191, "node_id": "I_kwDOBm6k_c5X1kE3", "number": 1929, "title": "Incorrect link from the API explorer to the JSON API documentation", "user": {"value": 3556, "label": "davidbgk"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-12-03T02:08:58Z", "updated_at": "2022-12-06T19:36:23Z", "closed_at": "2022-12-06T19:34:20Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I installed `datasette==1.0a1`.\r\n\r\nWhen I go to http://127.0.0.1:8001/-/api I have a link: `Use this tool to try out the [Datasette API](https://docs.datasette.io/en/1.0a1/json_api.html).` but that documentation page does not exist.\r\n\r\nI'm not sure where it has to be fixed, should it link to the stable page https://docs.datasette.io/en/stable/json_api.html , the latest one https://docs.datasette.io/en/latest/json_api.html#the-json-write-api or would it be more appropriate to deploy documentation for the `1.0a1` version?", 
"repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1929/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1522778923, "node_id": "I_kwDOBm6k_c5aw8Mr", "number": 1978, "title": "Document datasette.urls.row and row_blob", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-01-06T15:45:51Z", "updated_at": "2023-01-09T14:30:00Z", "closed_at": "2023-01-09T14:30:00Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "These are in the codebase but not in documentation. I think everything else in this class is documented.\r\n\r\n```python\r\nclass Urls:\r\n ...\r\n def row(self, database, table, row_path, format=None):\r\n path = f\"{self.table(database, table)}/{row_path}\"\r\n if format is not None:\r\n path = path_with_format(path=path, format=format)\r\n return PrefixedUrlString(path)\r\n\r\n def row_blob(self, database, table, row_path, column):\r\n return self.table(database, table) + \"/{}.blob?_blob_column={}\".format(\r\n row_path, urllib.parse.quote_plus(column)\r\n )\r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1978/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "not_planned"} {"id": 273944952, "node_id": "MDU6SXNzdWUyNzM5NDQ5NTI=", "number": 93, "title": "Package as standalone binary", "user": {"value": 67420, "label": "atomotic"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 18, "created_at": "2017-11-14T21:14:07Z", "updated_at": "2021-11-21T07:00:23Z", "closed_at": "2021-11-21T07:00:23Z", "author_association": "NONE", "pull_request": null, "body": "hint: more than the docker image a standalone and multiplatform binary (containing the app and the database) could be simpler to distribute.\r\n\r\ni would like to investigate the possibility to package everything with [pyinstaller](http://www.pyinstaller.org/) adding the database as a [data file](https://pythonhosted.org/PyInstaller/spec-files.html#adding-data-files)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/93/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 274160723, "node_id": "MDU6SXNzdWUyNzQxNjA3MjM=", "number": 100, "title": "TemplateAssertionError: no filter named 'tojson'", "user": {"value": 13304454, "label": "coisnepe"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-15T13:43:41Z", "updated_at": "2017-11-16T09:25:10Z", "closed_at": "2017-11-16T00:14:13Z", "author_association": "NONE", "pull_request": null, "body": "A 500 error is raised upon clicking on the name of a table on the homepage, say 
_http://0.0.0.0:8001/_ to _http://0.0.0.0:8001/test_check-c1f4771/users_ The API part seems to function as intended, though...\r\n\r\n```\r\n2017-11-15 14:33:57 - (sanic)[ERROR]: Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/sanic/app.py\", line 503, in handle_request\r\n response = await response\r\n File \"/usr/local/lib/python3.5/dist-packages/datasette/app.py\", line 155, in get\r\n return await self.view_get(request, name, hash, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/datasette/app.py\", line 219, in view_get\r\n **context,\r\n File \"/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py\", line 84, in render\r\n return html(self.render_string(template, request, **context))\r\n File \"/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py\", line 81, in render_string\r\n return self.env.get_template(template).render(**context)\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 812, in get_template\r\n return self._load_template(name, self.make_globals(globals))\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 786, in _load_template\r\n template = self.loader.load(self, name, globals)\r\n File \"/usr/lib/python3/dist-packages/jinja2/loaders.py\", line 125, in load\r\n code = environment.compile(source, name, filename)\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 565, in compile\r\n self.handle_exception(exc_info, source_hint=source_hint)\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 754, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/lib/python3/dist-packages/jinja2/_compat.py\", line 37, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/usr/local/lib/python3.5/dist-packages/datasette/templates/table.html\", line 29, in template\r\n
params = {{ query.params|tojson(4) }}
\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 515, in _generate\r\n return generate(source, self, name, filename, defer_init=defer_init)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 62, in generate\r\n generator.visit(node)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 849, in visit_Template\r\n self.blockvisit(block.body, block_frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 492, in blockvisit\r\n self.visit(node, frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 1172, in visit_If\r\n self.blockvisit(node.body, if_frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 492, in blockvisit\r\n self.visit(node, frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 1353, in visit_Output\r\n self.visit(argument, frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 1565, in visit_Filter\r\n self.fail('no filter named %r' % node.name, node.lineno)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 427, in fail\r\n raise TemplateAssertionError(msg, lineno, self.name, self.filename)\r\njinja2.exceptions.TemplateAssertionError: no filter named 'tojson'\r\n\r\n2017-11-15 14:33:57 - (network)[INFO][127.0.0.1:41316]: GET http://0.0.0.0:8001/test_check-c1f4771/users 500 144\r\n2017-11-15 14:33:57 - (network)[INFO][127.0.0.1:41316]: GET http://0.0.0.0:8001/favicon.ico 200 0\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/100/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 274161964, "node_id": "MDU6SXNzdWUyNzQxNjE5NjQ=", "number": 101, "title": "TemplateAssertionError: no filter named 'tojson'", "user": {"value": 450244, "label": "eaubin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-15T13:47:32Z", "updated_at": "2017-11-15T13:48:55Z", "closed_at": "2017-11-15T13:48:55Z", "author_association": "NONE", "pull_request": null, "body": "I get an exception clicking on the table link: \r\n\r\n```\r\n2017-11-15 08:40:10 - (sanic)[ERROR]: Traceback (most recent call last):\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic/app.py\", line 503, in handle_request\r\n response = await response\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/app.py\", line 155, in get\r\n return await self.view_get(request, name, hash, **kwargs)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/app.py\", line 219, in view_get\r\n **context,\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic_jinja2/__init__.py\", line 84, in render\r\n return html(self.render_string(template, request, **context))\r\n File 
\"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic_jinja2/__init__.py\", line 81, in render_string\r\n return self.env.get_template(template).render(**context)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py\", line 812, in get_template\r\n return self._load_template(name, self.make_globals(globals))\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py\", line 786, in _load_template\r\n template = self.loader.load(self, name, globals)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/loaders.py\", line 125, in load\r\n code = environment.compile(source, name, filename)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py\", line 565, in compile\r\n self.handle_exception(exc_info, source_hint=source_hint)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py\", line 754, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/_compat.py\", line 37, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/templates/table.html\", line 29, in template\r\n
params = {{ query.params|tojson(4) }}
\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py\", line 515, in _generate\r\n return generate(source, self, name, filename, defer_init=defer_init)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 62, in generate\r\n generator.visit(node)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 849, in visit_Template\r\n self.blockvisit(block.body, block_frame)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 492, in blockvisit\r\n self.visit(node, frame)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 1172, in visit_If\r\n self.blockvisit(node.body, if_frame)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 492, in blockvisit\r\n self.visit(node, frame)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 1353, in visit_Output\r\n self.visit(argument, frame)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 1565, in visit_Filter\r\n self.fail('no filter named %r' % node.name, node.lineno)\r\n File \"/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py\", line 427, in fail\r\n raise TemplateAssertionError(msg, lineno, self.name, self.filename)\r\njinja2.exceptions.TemplateAssertionError: no filter named 'tojson'\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/101/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 276091279, "node_id": "MDU6SXNzdWUyNzYwOTEyNzk=", "number": 144, "title": "apsw as alternative sqlite3 binding (for full text search)", "user": {"value": 649467, "label": "mhalle"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2017-11-22T14:40:39Z", "updated_at": "2018-05-28T21:29:42Z", "closed_at": "2018-05-28T21:29:42Z", "author_association": "NONE", "pull_request": null, "body": "Hey there,\r\n\r\nHave you considered providing apsw support as an alternative to stock python sqlite3? I use apsw because it keeps up with sqlite3 and is straightforward to bring in extensions like FTS5. FTS really accelerates the kind of searching often done by web clients.\r\n\r\nI may be able to help (it shouldn't be much code), but there are a couple of stylistic questions that come up when supporting an optional package. Also, apsw is tricky in that it doesn't have a pypi package (author says limitations in providing options to setup.py). 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/144/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 276842536, "node_id": "MDU6SXNzdWUyNzY4NDI1MzY=", "number": 153, "title": "Ability to customize presentation of specific columns in HTML view", "user": {"value": 20264, "label": "ftrain"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2949431, "label": "Custom templates edition"}, "comments": 14, "created_at": "2017-11-26T17:46:11Z", "updated_at": "2017-12-10T02:08:45Z", "closed_at": "2017-12-07T06:17:33Z", "author_association": "NONE", "pull_request": null, "body": "This ties into https://github.com/simonw/datasette/issues/3 in some ways. It would be great to have some adaptability in the HTML views and to specific some columns as displaying in certain ways.\r\n\r\n- [x] 1. **Auto-parsing URIs into in-browser links.** Why? Lots of public data around cultural commons stuff links to a specific URL. This would be a great utility to turn on at the command line, just parse everything for URLs. Maybe they need to be underlined or represented in a different way than internal URLs.\r\n- [x] 2. **Ability to identify a column as plain/preformatted text.** Why? Was trying to import the Enron emails, the body collapses. Hard to read. These fields also tend to screw up the ability to scan a table view. If you knew it was text the system could set an `overflow` property on the relevant CSS, so you could still scan.\r\n- [x] 3. **Ability to identify a column as HTML.** Why? 
I want to spider some stuff and drop sections into SQLite, and just keep them as HTML.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/153/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 277589569, "node_id": "MDU6SXNzdWUyNzc1ODk1Njk=", "number": 155, "title": "A primary key column that has foreign key restriction associated won't render label column", "user": {"value": 388154, "label": "wsxiaoys"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2949431, "label": "Custom templates edition"}, "comments": 4, "created_at": "2017-11-29T00:40:02Z", "updated_at": "2017-12-07T05:39:53Z", "closed_at": "2017-12-07T05:39:53Z", "author_association": "NONE", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/155/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 278814220, "node_id": "MDU6SXNzdWUyNzg4MTQyMjA=", "number": 161, "title": "Support WITH query ", "user": {"value": 388154, "label": "wsxiaoys"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2017-12-03T20:00:40Z", "updated_at": "2017-12-08T06:18:12Z", "closed_at": "2017-12-04T04:52:41Z", "author_association": "NONE", "pull_request": null, "body": "Currently datasette fails with error message: Statement must begin with SELECT\r\n\r\nExample query\r\n```sql\r\nWITH RECURSIVE\r\n cnt(x) AS (\r\n SELECT 1\r\n UNION ALL\r\n SELECT x+1 FROM cnt\r\n LIMIT 1000000\r\n )\r\nSELECT x FROM cnt;\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/161/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 282971961, "node_id": "MDU6SXNzdWUyODI5NzE5NjE=", "number": 175, "title": "Add project topic \"automatic-api\"", "user": {"value": 3179832, "label": "dbohdan"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-12-18T18:09:17Z", "updated_at": "2017-12-21T18:33:55Z", "closed_at": "2017-12-21T18:33:55Z", "author_association": "NONE", "pull_request": null, "body": "Hi there! Could you add the ~~tag~~ topic `automatic-api` to your repository? I am [making a list](https://github.com/dbohdan/automatic-api) of all projects that automatically expose APIs to databases. (Your Show HN made me do it. :-) I knew about PostgREST and PostGraphQL, but it took adding Datasette to sell me on the concept.) They will be easier to discover if there is a standard GitHub tag, and `automatic-api` seems as good a candidate as any. 
Two projects [already use it](https://github.com/topics/automatic-api).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/175/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 292011379, "node_id": "MDU6SXNzdWUyOTIwMTEzNzk=", "number": 184, "title": "500 from missing table name", "user": {"value": 222245, "label": "carlmjohnson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-01-26T19:46:45Z", "updated_at": "2019-05-21T16:17:29Z", "closed_at": "2018-04-13T18:18:59Z", "author_association": "NONE", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/56623e48da5412b25fb39cc26b9c743b684dd968/datasette/app.py#L517-L519 throws an error if it gets an empty list back. Simplest solution is to write a helper func that just says \r\n\r\n```python\r\nresult = list(await self.execute(name, sql, params))\r\nif result:\r\n return result[0][0]\r\n```\r\n\r\nand use it anywhere `[0][0]` is now.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/184/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 306811513, "node_id": "MDU6SXNzdWUzMDY4MTE1MTM=", "number": 186, "title": "proposal new option to disable user agents cache", "user": {"value": 47107, "label": "stefanocudini"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-03-20T10:42:20Z", "updated_at": "2018-03-21T09:07:22Z", "closed_at": "2018-03-21T01:28:31Z", "author_association": "NONE", "pull_request": null, "body": "I think an option to add headers to HTTP replies would be very useful for debugging\r\n\r\n```\r\nCache-Control: no-cache \r\n```\r\n\r\nespecially in the HTML output", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/186/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 309033998, "node_id": "MDU6SXNzdWUzMDkwMzM5OTg=", "number": 187, "title": "Windows installation error", "user": {"value": 11855322, "label": "robmarkcole"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2018-03-27T16:04:37Z", "updated_at": "2019-06-15T21:44:23Z", "closed_at": "2019-06-15T21:44:23Z", "author_association": "NONE", "pull_request": null, "body": "On attempting install on a Win 7 PC with py 3.6.2 (Anaconda dist) I get the error:\r\n\r\n```\r\nCollecting uvloop>=0.5.3 (from Sanic==0.7.0->datasette)\r\n Downloading uvloop-0.9.1.tar.gz (1.8MB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8MB 12.8MB/s\r\n Complete 
output from command python setup.py egg_info:\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"C:\\Users\\RCole\\AppData\\Local\\Temp\\pip-build-juakfqt8\\uvloop\\setup.py\r\n\", line 10, in <module>\r\n raise RuntimeError('uvloop does not support Windows at the moment')\r\n RuntimeError: uvloop does not support Windows at the moment\r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/187/reactions\", \"total_count\": 4, \"+1\": 4, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 314665147, "node_id": "MDU6SXNzdWUzMTQ2NjUxNDc=", "number": 216, "title": "Bug: Sort by column with NULL in next_page URL", "user": {"value": 222245, "label": "carlmjohnson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 15, "created_at": "2018-04-16T14:03:18Z", "updated_at": "2018-04-17T01:45:24Z", "closed_at": "2018-04-17T01:45:24Z", "author_association": "NONE", "pull_request": null, "body": "Copy-pasting from https://github.com/simonw/datasette/issues/189#issuecomment-381429213, since that issue is closed:\r\n\r\nI think I found a bug. I tried to sort by middle initial in my salaries set, and many middle initials are null. The `next_url` gets set by Datasette to:\r\n\r\nhttp://localhost:8001/salaries-d3a5631/2017+Maryland+state+salaries?_next=None%2C391&_sort=middle_initial\r\n\r\nBut then None is interpreted literally and it tries to find a name with the middle initial \"None\" and ends up skipping ahead to O on page 2.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/216/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 322283067, "node_id": "MDU6SXNzdWUzMjIyODMwNjc=", "number": 254, "title": "Escaping named parameters in canned queries", "user": {"value": 247131, "label": "philroche"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-05-11T12:43:30Z", "updated_at": "2020-05-10T14:54:14Z", "closed_at": "2020-05-10T14:54:13Z", "author_association": "NONE", "pull_request": null, "body": "Thank you very much for this project.\r\n\r\nI have created some canned queries but some of the filters include a colon e.g. \"com.ubuntu.cloud:server:18.04:amd64\". When saved, these colons are parsed as named parameters. 
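\r\n\r\nFor illustration, the general pattern that seems to be at work (a naive sketch, not datasette's actual code): a regex that picks out `:name` tokens is blind to string literals, so the colon-separated segments of the value are mistaken for parameters.\r\n\r\n```python\r\nimport re\r\n\r\nsql = \"select * from products where serial = 'com.ubuntu.cloud:server:18.04:amd64'\"\r\n# naive named-parameter detection, unaware of quoting\r\nprint(re.findall(r':(\\w+)', sql))  # ['server', '18', 'amd64']\r\n```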
\r\n\r\nIs there a way to escape colons in a canned query?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/254/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 333238932, "node_id": "MDU6SXNzdWUzMzMyMzg5MzI=", "number": 316, "title": "datasette inspect takes a very long time on large dbs", "user": {"value": 132230, "label": "gavinband"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2018-06-18T11:56:27Z", "updated_at": "2019-05-11T18:26:25Z", "closed_at": "2019-05-11T18:26:25Z", "author_association": "NONE", "pull_request": null, "body": "Hi,\r\n\r\nI want to expose data in a very large sqlite database (~600Gb) to the web. I have used datasette with success on smaller test databases with the same schema - it works very well (thanks!). However, using the full db, both `datasette inspect` and `datasette serve` seem to hang or pause for a very long time (tens of minutes) on startup. Is this expected behaviour?\r\n\r\n(I noticed that the output of `datasette inspect` includes row counts for each table. Simply counting the rows in this db will take a long time (tens of millions of rows across each of ~10 tables), so I wondered if this is the source of the problem.)\r\n\r\nAny help on a workaround would be appreciated.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/316/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 334190959, "node_id": "MDU6SXNzdWUzMzQxOTA5NTk=", "number": 321, "title": "Wildcard support in query parameters", "user": {"value": 12617395, "label": "bsilverm"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3439337, "label": "0.23.1"}, "comments": 8, "created_at": "2018-06-20T18:03:56Z", "updated_at": "2018-06-21T17:00:10Z", "closed_at": "2018-06-21T04:55:26Z", "author_association": "NONE", "pull_request": null, "body": "I haven't found a way to get the wildcard (%) inserted automatically into a query parameter. This would be useful for cases where the query parameter is followed by a LIKE clause. Wrapping the parameter name using the wildcard character within the metadata file (i.e. ...where xyz like %:querystring%) does not seem to work. Can this be made possible? 
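\r\n\r\nFor reference, the closest workaround I have found is concatenating the wildcards on the SQL side, which seems to work (a sketch, assuming a `name` column and a `:q` parameter):\r\n\r\n```sql\r\nselect * from mytable\r\nwhere name like '%' || :q || '%'\r\n```\r\n\r\nBut it would be nicer if the wildcard could be attached to the parameter itself. 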
Or if not, can the template be extended to provide a tip to the user that they need to insert the wildcard characters themselves?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/321/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 339095976, "node_id": "MDU6SXNzdWUzMzkwOTU5NzY=", "number": 334, "title": "extra_options not passed to heroku publisher", "user": {"value": 719357, "label": "kamicut"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-07-06T23:26:12Z", "updated_at": "2018-07-24T04:53:21Z", "closed_at": "2018-07-10T01:46:04Z", "author_association": "NONE", "pull_request": null, "body": "I might be wrong but I was not able to publish to `heroku` with `--extra-options`; I think `extra_options` is not being used in this function [here](https://github.com/simonw/datasette/blob/master/datasette/utils.py#L369). \r\n\r\nAny help appreciated! ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/334/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 340396247, "node_id": "MDU6SXNzdWUzNDAzOTYyNDc=", "number": 339, "title": "Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way", "user": {"value": 12617395, "label": "bsilverm"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-07-11T20:38:06Z", "updated_at": "2022-03-21T22:22:40Z", "closed_at": "2022-03-21T22:22:34Z", "author_association": "NONE", "pull_request": null, "body": "Is it possible to configure the sql_time_limit_ms beyond 60 seconds? It seems queries are still timing out at 60 seconds when sql_time_limit_ms is set to 180000. We have a very large data set and often encounter timeouts when testing new queries from the datasette UI. We are optimizing our database as much as we can, but still may require more than 60 seconds for complex queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/339/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 341123355, "node_id": "MDU6SXNzdWUzNDExMjMzNTU=", "number": 342, "title": "Requesting support for query description", "user": {"value": 12617395, "label": "bsilverm"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-07-13T18:50:16Z", "updated_at": "2018-07-24T04:53:21Z", "closed_at": "2018-07-16T02:33:54Z", "author_association": "NONE", "pull_request": null, "body": "It would be great if the metadata file allowed you to enter a description for the query. We have a lot of pre-defined queries that can only be so descriptive by their name. 
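\r\n\r\nConcretely, something along these lines in `metadata.json` is what we are after (a sketch; the database and query names are invented, and the `description` key is the request here, not an existing option):\r\n\r\n```json\r\n{\r\n    \"databases\": {\r\n        \"mydb\": {\r\n            \"queries\": {\r\n                \"recent_signups\": {\r\n                    \"sql\": \"select * from users order by created desc limit 10\",\r\n                    \"description\": \"The ten most recently created user accounts\"\r\n                }\r\n            }\r\n        }\r\n    }\r\n}\r\n```\r\n\r\n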
It would be nice if an optional description could be included underneath the name within the UI, or on hover where it currently shows the SQL.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/342/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 343728754, "node_id": "MDU6SXNzdWUzNDM3Mjg3NTQ=", "number": 346, "title": "Logo design for DATASETTE", "user": {"value": 35750428, "label": "ggabogarcia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-07-23T17:40:17Z", "updated_at": "2018-08-02T02:31:59Z", "closed_at": "2018-08-02T02:31:59Z", "author_association": "NONE", "pull_request": null, "body": "Hello :) I'm a graphic designer, and I'm interested in collaborating with open source projects; besides, this helps me expand my portfolio. I would like to design a logo for your project. I will be happy to collaborate with you :). ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/346/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 392610803, "node_id": "MDU6SXNzdWUzOTI2MTA4MDM=", "number": 391, "title": "Google Trends example doesn\u2019t work", "user": {"value": 229881, "label": "styfle"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-12-19T13:51:38Z", "updated_at": "2019-01-02T19:45:13Z", "closed_at": "2019-01-02T19:45:12Z", "author_association": "NONE", "pull_request": null, "body": "https://google-trends.datasettes.com/\r\n\r\nI see a Cloudflare error. ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/391/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 395236066, "node_id": "MDU6SXNzdWUzOTUyMzYwNjY=", "number": 393, "title": "CSV export in \"Advanced export\" pane doesn't respect query", "user": {"value": 1727065, "label": "ltrgoddard"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2019-01-02T12:39:41Z", "updated_at": "2021-06-17T18:14:24Z", "closed_at": "2019-01-03T02:44:10Z", "author_association": "NONE", "pull_request": null, "body": "It looks like there's an inconsistency when exporting to CSV via the web interface. Say I'm looking at [songs released in 1989](https://fivethirtyeight.datasettes.com/fivethirtyeight-c300360/classic-rock%2Fclassic-rock-song-list?Release+Year__exact=1989) in the `classic-rock/classic-rock-song-list` table from the Five Thirty Eight data. The JSON and CSV export links at the top of the page both give me filtered data using `Release+Year__exact=1989` in the URL. 
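\r\n\r\nFor concreteness, both of those links carry the filter in the query string, along the lines of (path simplified from the real one above):\r\n\r\n```\r\n/fivethirtyeight/classic-rock-song-list.csv?Release+Year__exact=1989\r\n```\r\n\r\n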
In the `Advanced export` tab, though, the CSV option gives me the whole data set, while the JSON options preserve the query.\r\n\r\nIt may be that this is intended behaviour related to the streaming CSV stuff [discussed here](https://github.com/simonw/datasette/issues/266), but if that's the case then I think it should be a little clearer.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/393/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 397129564, "node_id": "MDU6SXNzdWUzOTcxMjk1NjQ=", "number": 397, "title": "Update official datasetteproject/datasette Docker container to SQLite 3.26.0", "user": {"value": 43564, "label": "claes"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-01-08T22:51:50Z", "updated_at": "2019-01-11T01:25:33Z", "closed_at": "2019-01-11T00:56:18Z", "author_association": "NONE", "pull_request": null, "body": "I try to start datasette on a database that contains the below view\r\n\r\nIt fails in a way that makes me think it does not support the window functions SQL syntax.\r\n\r\n\r\n```\r\ncreate view general_ledger as\r\nselect transactions.account_number, strftime(\"%Y-%m-%d\", verifications.verification_date) as verification_date, verifications.verification_number, verifications.verification_text, \r\ncase when transactions.centi_amount >= 0 and verifications.verification_number > 0 then printf(\"%.2f\", (transactions.centi_amount/100.0))\r\nend as debit,\r\ncase when transactions.centi_amount <= 0 and verifications.verification_number > 0 then printf(\"%.2f\", (transactions.centi_amount/100.0)) \r\nend as credit,\r\nprintf(\"%.2f\", sum(transactions.centi_amount) over (partition by transactions.account_number \r\n order by verifications.verification_number range between unbounded preceding and current row)/100.0)\r\nfrom verifications inner join transactions on transactions.verification_id = verifications.id \r\norder by transactions.account_number, verifications.verification_number;\r\n```\r\n\r\n```\r\ndocker run -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/ledger.db\r\nServe! 
files=('/mnt/ledger.db',) on port 8001\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasette\", line 11, in <module>\r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.6/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/datasette/cli.py\", line 375, in serve\r\n ds.inspect()\r\n File \"/usr/local/lib/python3.6/site-packages/datasette/app.py\", line 308, in inspect\r\n \"views\": inspect_views(conn),\r\n File \"/usr/local/lib/python3.6/site-packages/datasette/inspect.py\", line 30, in inspect_views\r\n return [v[0] for v in conn.execute('select name from sqlite_master where type = \"view\"')]\r\nsqlite3.DatabaseError: malformed database schema (general_ledger) - near \"over\": syntax error\r\n\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/397/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 400229984, "node_id": "MDU6SXNzdWU0MDAyMjk5ODQ=", "number": 401, "title": "How to pass configuration to plugins?", "user": {"value": 1055831, "label": "dazzag24"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-01-17T11:20:41Z", "updated_at": "2019-01-18T11:48:13Z", "closed_at": "2019-01-18T06:49:07Z", "author_association": "NONE", "pull_request": null, "body": "Hi,\r\nFirstly, thanks for your work on datasette, it is a hugely useful tool!\r\n\r\nI've been working on a fork [https://github.com/dazzag24/datasette-cluster-map] of datasette-cluster-map to allow the tileserver to be easily switched. Primarily because the tiles being served in the current version use localised text for labels and I'd like to have English used for these names instead.\r\n\r\nIt uses http://leaflet-extras.github.io/leaflet-providers/preview/ to allow you to simply set the tile provider using a call like so:\r\n``` \r\nlet tiles = L.tileLayer.provider('Esri.WorldTopoMap');\r\n```\r\ninstead of the current:\r\n```\r\nlet tiles = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {\r\n maxZoom: 19,\r\n detectRetina: true,\r\n attribution: '© OpenStreetMap contributors'\r\n }),\r\n ```\r\nHowever I've got stuck trying to work out how to pass the provider string to the plugin.\r\nIn the documentation: https://datasette.readthedocs.io/en/stable/plugins.html you discuss configuration of plugins and use an example of passing in which latitude and longitude columns should be used. However I cannot seem to see anywhere in the current datasette-cluster-map code where these config params are passed in or used.\r\n\r\nCan you please point me to an example of how to pass configuration from the metadata.json down into a plugin. 
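\r\n\r\nFrom poking around, it looks like settings belong under a `plugins` key in `metadata.json`, keyed by plugin name, and can be read back inside the plugin with `datasette.plugin_config()` — so a sketch of what I am hoping for would be (the `tile_provider` setting is my fork's proposed option, not an existing one):\r\n\r\n```json\r\n{\r\n    \"plugins\": {\r\n        \"datasette-cluster-map\": {\r\n            \"tile_provider\": \"Esri.WorldTopoMap\"\r\n        }\r\n    }\r\n}\r\n```\r\n\r\n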
Once I've overcome this issue, I was wondering if you would be interested in taking this change into your version?\r\n\r\nMany thanks\r\nDarren", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/401/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 400511206, "node_id": "MDU6SXNzdWU0MDA1MTEyMDY=", "number": 403, "title": "How does persistence work?", "user": {"value": 1794527, "label": "ccorcos"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-01-17T23:41:57Z", "updated_at": "2019-01-19T05:47:55Z", "closed_at": "2019-01-18T06:51:14Z", "author_association": "NONE", "pull_request": null, "body": "I was under the impression that now.sh is for stateless microservices. So where are these SQLite databases stored and when do they get created and destroyed?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/403/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 407174173, "node_id": "MDU6SXNzdWU0MDcxNzQxNzM=", "number": 408, "title": "Show metadata info (e.g. license, source) on custom SQL query pages", "user": {"value": 78356, "label": "stefanw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-02-06T10:43:34Z", "updated_at": "2019-10-14T03:53:22Z", "closed_at": "2019-10-14T03:53:22Z", "author_association": "NONE", "pull_request": null, "body": "Currently metadata info is not displayed on custom SQL pages.\r\n\r\nE.g. compare the footer of [this normal table page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/categories) with the footer of [this custom SQL page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7?sql=select+*+from+categories).\r\n\r\nThis is important in order to adhere to attribution license requirements.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/408/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 408376825, "node_id": "MDU6SXNzdWU0MDgzNzY4MjU=", "number": 409, "title": "Zeit API v1 does not work for new users - need to migrate to v2", "user": {"value": 209967, "label": "michaelmcandrew"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-02-09T00:50:33Z", "updated_at": "2020-04-06T15:44:46Z", "closed_at": "2020-04-06T15:44:46Z", "author_association": "NONE", "pull_request": null, "body": "Hello there,\r\n\r\nThis looks like a great tool. Thanks. \r\n\r\nUnfortunately, I hit the following error:\r\n\r\n```\r\nmichael@hazel ~/src/cc-datasette/data/out datasette publish now cc-datasette.db\r\n> WARN! You are using an old version of the Now Platform. 
More: https://zeit.co/docs/v1-upgrade\r\n> Deploying /tmp/tmpjtrxwsyf/datasette under michaelmcandrew\r\n> Using project datasette\r\n> Error! You tried to create a Now 1.0 deployment. Please use Now 2.0 instead: https://zeit.co/upgrade\r\n```\r\nI'm guessing you might not hit this because you are not a 'new user' of Zeit (https://github.com/zeit/now-cli/issues/1805#issuecomment-452470953).\r\n\r\nWould it be a lot of work to upgrade to the new Zeit API, do you think?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/409/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 408518024, "node_id": "MDU6SXNzdWU0MDg1MTgwMjQ=", "number": 410, "title": "How to setup a multi database environment?", "user": {"value": 30607, "label": "aborruso"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-02-10T09:39:24Z", "updated_at": "2019-04-12T04:42:28Z", "closed_at": "2019-04-12T04:42:27Z", "author_association": "NONE", "pull_request": null, "body": "Hi,\r\nfirst of all I need to write that Simon Willison and datasette are really great.\r\n\r\nI probably have a stupid question, but it seems to me that I do not have the reply in the documentation.\r\n\r\nI have installed datasette and run it with `datasette mydb.db`, and I can reach it on `http://127.0.0.1:8001`.\r\n\r\nBut how to work with more than one db? Imagine I have ten sqlite databases, and that I need to explore/query these via datasette, how to run datasette? Is it possible to create a sort of db index and then run `datasette serve myindex`?\r\n\r\nThank you", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/410/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 410384988, "node_id": "MDU6SXNzdWU0MTAzODQ5ODg=", "number": 411, "title": "How to pass named parameter into spatialite MakePoint() function", "user": {"value": 1055831, "label": "dazzag24"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-02-14T16:30:22Z", "updated_at": "2023-10-25T13:23:04Z", "closed_at": "2019-05-05T12:25:04Z", "author_association": "NONE", "pull_request": null, "body": "Hi,\r\ndatasette version: \"0.26.2\"\r\nextensions: \r\n spatialite: \"4.4.0-RC0\"\r\nsqlite version: \"3.22.0\"\r\n\r\nI have a table of airports with latitude and longitude columns. I've added spatialite (with KNN support). 
After creating the db using csvs-to-sqlite, I run these commands to set up the spatialite tables:\r\n\r\n```\r\nconn.execute('SELECT InitSpatialMetadata(1)')\r\n\r\nconn.execute(\"SELECT AddGeometryColumn('airports', 'point_geom', 4326, 'POINT', 2);\")\r\n\r\nconn.execute('''UPDATE airports SET point_geom = GeomFromText('POINT('||\"longitude\"||' '||\"latitude\"||')',4326);''')\r\n\r\nconn.execute(\"SELECT CreateSpatialIndex('airports', 'point_geom');\")\r\n```\r\n\r\nI'm attempting to create a canned query and have this in my metadata.json file:\r\n```\r\n\"find_airports_nearest_to_point\":{\r\n \"sql\":\"SELECT a.pos AS rank, b.id, b.name, b.country, b.latitude AS latitude, b.longitude AS longitude, a.distance / 1000.0 AS dist_km FROM KNN AS a JOIN airports AS b ON (b.rowid = a.fid) WHERE f_table_name = \\\"airports\\\" AND ref_geometry = MakePoint( :Long , :Lat ) AND max_items = 10;\"}\r\n```\r\nwhich doesn't seem to perform the templating of the named parameters correctly and I get no results. \r\n\r\nHave also tried:\r\n```\r\nMakePoint( || :Long || , || :Lat || )\r\n```\r\nwhich returns this error:\r\n```\r\nnear \"||\": syntax error\r\n```\r\n\r\nHowever I cannot seem to find the correct combination of named parameter syntax (:Lat) or sqlite concatenation operator to make it work. Any ideas if using named parameters inside functions is supported?\r\n\r\nThanks\r\nDarren", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/411/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 418329842, "node_id": "MDU6SXNzdWU0MTgzMjk4NDI=", "number": 415, "title": "Add query parameter to hide SQL textarea", "user": {"value": 36796532, "label": "ad-si"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-03-07T14:11:30Z", "updated_at": "2019-03-15T09:30:57Z", "closed_at": "2019-03-15T05:22:43Z", "author_association": "NONE", "pull_request": null, "body": "It would be cool if there was a query parameter to hide / remove the SQL textarea. Then I could simply save a bookmark for a certain query and open it to see the data without having to scroll below the (long) SQL query first.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/415/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 435819321, "node_id": "MDU6SXNzdWU0MzU4MTkzMjE=", "number": 436, "title": "400 Error when trying to register new user via https://publish.datasettes.com/", "user": {"value": 317694, "label": "nniiicc"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-04-22T17:55:00Z", "updated_at": "2021-01-04T20:15:42Z", "closed_at": "2021-01-04T20:15:41Z", "author_association": "NONE", "pull_request": null, "body": "Behavior: When registering a new user via Zeit - confirmation is sent and screen acknowledges registered user... When clicking grant access, the next screen is a white 400 error message. 
\r\n\r\nReplicated: Chrome and Firefox; 2 different email accounts", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/436/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 448189298, "node_id": "MDU6SXNzdWU0NDgxODkyOTg=", "number": 486, "title": "Ability to add extra routes and related templates", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-05-24T14:04:25Z", "updated_at": "2019-05-24T14:43:28Z", "closed_at": "2019-05-24T14:43:09Z", "author_association": "NONE", "pull_request": null, "body": "Hi Simon\r\n\r\nThanks for an excellent job! Datasette is such an obviously good idea (once you have that idea!) and so well done. The only thing that I miss is the ability to add extra routes (with associated jinja2-templates). For most of the datasets that I would like to publish, I would also like at least a page that describes the data (semantics, provenance, biases...) and a page explaining our cookie- and privacy-policies (which would allow us to use something like Google Analytics).\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/486/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 450862577, "node_id": "MDU6SXNzdWU0NTA4NjI1Nzc=", "number": 496, "title": "Additional options to gcloud build command in cloudrun - timeout", "user": {"value": 1740337, "label": "costrouc"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-05-31T15:43:55Z", "updated_at": "2019-05-31T23:05:05Z", "closed_at": "2019-05-31T23:05:05Z", "author_association": "NONE", "pull_request": null, "body": "I am trying to deploy a 3.1 GB dataset to cloudrun with datasette. Currently the docker build times out. Would be nice to have a timeout flag or additional gcloud commands that could be specified. \r\n\r\nHere is the line https://github.com/simonw/datasette/blob/f825e2012109247fa246e2b938f8174069e574f1/datasette/publish/cloudrun.py#L78\r\n\r\nI would be happy to submit a PR to allow for a timeout option. What are your ideas on allowing the user additional build/publish flag options?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/496/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}