{"html_url": "https://github.com/simonw/datasette/issues/14#issuecomment-346244871", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/14", "id": 346244871, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjI0NDg3MQ==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-22T05:06:30Z", "updated_at": "2017-11-22T05:06:30Z", "author_association": "CONTRIBUTOR", "body": "I'd also suggest taking a look at [stevedore](https://docs.openstack.org/stevedore/latest/), which has a ton of tools for doing plugin stuff. I've had good luck with it in the past.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 267707940, "label": "Datasette Plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/27#issuecomment-345652450", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/27", "id": 345652450, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NTY1MjQ1MA==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2017-11-20T10:19:39Z", "updated_at": "2017-11-20T10:19:39Z", "author_association": "CONTRIBUTOR", "body": "If Data Package metadata gets adopted (#105) the views spec work might also be worth a look:\r\n\r\nhttp://frictionlessdata.io/specs/views/\r\n\r\nhttp://datahub.io/docs/features/views\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 267886330, "label": "Ability to plot a simple graph"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/46#issuecomment-344810525", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/46", "id": 344810525, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDgxMDUyNQ==", "user": {"value": 54999, "label": "ingenieroariel"}, "created_at": "2017-11-16T04:11:25Z", "updated_at": "2017-11-16T04:11:25Z", "author_association": "CONTRIBUTOR", "body": "@simonw On the spatialite support, here is some info to make it work and a screenshot:\r\n\r\n\"screen\r\n\r\nI used the following Dockerfile:\r\n```\r\nFROM prolocutor/python3-sqlite-ext:3.5.1-spatialite as build\r\n\r\nRUN mkdir /code\r\nADD . 
/code/\r\n\r\nRUN pip install /code/\r\n\r\nEXPOSE 8001\r\nCMD [\"datasette\", \"serve\", \"/code/ne.sqlite\", \"--host\", \"0.0.0.0\"]\r\n```\r\n\r\nand added this to `prepare_connection`:\r\n```\r\n conn.enable_load_extension(True)\r\n conn.execute(\"SELECT load_extension('/usr/local/lib/mod_spatialite.so')\")\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 271301468, "label": "Dockerfile should build more recent SQLite with FTS5 and spatialite support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/46#issuecomment-345002908", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/46", "id": 345002908, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NTAwMjkwOA==", "user": {"value": 54999, "label": "ingenieroariel"}, "created_at": "2017-11-16T17:47:49Z", "updated_at": "2017-11-16T17:47:49Z", "author_association": "CONTRIBUTOR", "body": "I'll try to find alternatives to the Dockerfile option - I also think we should not use that old one without sources or license.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 271301468, "label": "Dockerfile should build more recent SQLite with FTS5 and spatialite support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/57#issuecomment-344145265", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/57", "id": 344145265, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDE0NTI2NQ==", "user": {"value": 247192, "label": "macropin"}, "created_at": "2017-11-14T04:45:38Z", "updated_at": "2017-11-14T04:45:38Z", "author_association": "CONTRIBUTOR", "body": "I'm happy to contribute this. Just let me know if you want a Dockerfile for development or production purposes, or both. \r\n\r\nIf it's prod then we can just pip install the source from pypi, otherwise for dev we'll need a `requirements.txt` to speed up rebuilds.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273127694, "label": "Ship a Docker image of the whole thing"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/57#issuecomment-344147583", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/57", "id": 344147583, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDE0NzU4Mw==", "user": {"value": 247192, "label": "macropin"}, "created_at": "2017-11-14T05:03:47Z", "updated_at": "2017-11-14T05:03:47Z", "author_association": "CONTRIBUTOR", "body": "Let me know if you'd like a PR. 
The image is usable as \r\n`docker run --rm -t -i -p 9000:8001 -v $(pwd)/db:/db datasette datasette serve /db/chinook.db`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273127694, "label": "Ship a Docker image of the whole thing"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/57#issuecomment-344151223", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/57", "id": 344151223, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDE1MTIyMw==", "user": {"value": 247192, "label": "macropin"}, "created_at": "2017-11-14T05:32:28Z", "updated_at": "2017-11-14T05:33:03Z", "author_association": "CONTRIBUTOR", "body": "The pattern is called \"multi-stage builds\". And the result is a svelte 226MB image (201MB for 3.6-slim) vs 700MB+ for the full image. It's possible to get it even smaller, but that takes a lot more work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273127694, "label": "Ship a Docker image of the whole thing"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/81#issuecomment-344125441", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/81", "id": 344125441, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDEyNTQ0MQ==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2017-11-14T02:24:54Z", "updated_at": "2017-11-14T02:24:54Z", "author_association": "CONTRIBUTOR", "body": "Oops, if I jumped the gun. I saw the project in my github activity feed and saw some low hanging fruit :) ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273595473, "label": ":fire: Removes DS_Store"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/88#issuecomment-344430689", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/88", "id": 344430689, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDQzMDY4OQ==", "user": {"value": 15543, "label": "tomdyson"}, "created_at": "2017-11-14T23:08:22Z", "updated_at": "2017-11-14T23:08:22Z", "author_association": "CONTRIBUTOR", "body": "> I'm getting an internal server error on http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/ at the moment\r\n\r\nSorry about that - here's a working version on Netlify:\r\n\r\nhttps://nhs-england-map.netlify.com", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273775212, "label": "Add NHS England Hospitals example to wiki"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/88#issuecomment-804471733", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/88", "id": 804471733, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDQ3MTczMw==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-22T23:46:36Z", "updated_at": "2021-03-22T23:46:36Z", "author_association": "CONTRIBUTOR", "body": "Google Map API limits seem to prevent https://nhs-england-map.netlify.com from being a working demo.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 
273775212, "label": "Add NHS England Hospitals example to wiki"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-344710204", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 344710204, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDcxMDIwNA==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-15T19:57:50Z", "updated_at": "2017-11-15T19:57:50Z", "author_association": "CONTRIBUTOR", "body": "A first basic stab at making this work, just to prove the approach. Right now this requires [a Heroku CLI plugin](https://github.com/heroku/heroku-builds), which seems pretty unreasonable. I think this can be replaced with direct API calls, which could clean up a lot of things. But I wanted to prove it worked first, and it does.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-345452669", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 345452669, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NTQ1MjY2OQ==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-18T16:18:45Z", "updated_at": "2017-11-18T16:18:45Z", "author_association": "CONTRIBUTOR", "body": "I'd like to do a bit of cleanup, and some error checking in case heroku/heroku-builds isn't installed.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-346116745", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 346116745, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjExNjc0NQ==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-21T18:23:25Z", "updated_at": "2017-11-21T18:23:25Z", "author_association": "CONTRIBUTOR", "body": "@simonw ready for a review and merge if you want.\r\n\r\nThere's still some nasty duplicated code in cli.py and utils.py, which is just going to get worse if/when we start adding any other deploy targets (and I want to do one for cloud.gov, at least). I think there's an opportunity for some refactoring here. 
I'm happy to do that now as part of this PR, or if you merge this first I'll do it in a different one.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-346124073", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 346124073, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjEyNDA3Mw==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-21T18:49:55Z", "updated_at": "2017-11-21T18:49:55Z", "author_association": "CONTRIBUTOR", "body": "Actually hang on, don't merge - there are some bugs that #141 masked when I tested this out elsewhere.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-346124764", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 346124764, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjEyNDc2NA==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-21T18:52:14Z", "updated_at": "2017-11-21T18:52:14Z", "author_association": "CONTRIBUTOR", "body": "OK, now this should work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/105#issuecomment-345503897", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/105", "id": 345503897, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NTUwMzg5Nw==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2017-11-19T09:38:08Z", "updated_at": "2017-11-19T09:38:08Z", "author_association": "CONTRIBUTOR", "body": "Thanks, I wrote this very simple reader because the default approach as described on the Datahub pages seemed to complicated. I had metadata from the `datapackage.json` attached to the returned DataFrames but removed this due to some attribute handling change in the latest Pandas version.\r\n\r\nThis could also be useful for getting from Data Package to SQL db: https://github.com/frictionlessdata/tableschema-sql-py\r\n\r\nI maintain a few climate science related dataset at https://github.com/openclimatedata/\r\n\r\nThe Data Retriever (mainly ecological data) by @ethanwhite et al. is also using the Data Package format for metadata and has some tooling for different dbs: \r\n\r\nhttps://frictionlessdata.io/articles/the-data-retriever/\r\nhttps://github.com/weecology/retriever\r\n\r\nThe Open Power System Data project also has a couple of datasets that show nicely how CSV is great for assembling and then already make SQLite files available. 
It's one of the first data sets I tried with Datasette, perfect for the use case of getting an API for putting power stations on a map ...\r\n\r\nhttps://data.open-power-system-data.org/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274314940, "label": "Consider data-package as a format for metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/107#issuecomment-344811268", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/107", "id": 344811268, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDgxMTI2OA==", "user": {"value": 3433657, "label": "raynae"}, "created_at": "2017-11-16T04:17:45Z", "updated_at": "2017-11-16T04:17:45Z", "author_association": "CONTRIBUTOR", "body": "Thanks for the guidance. I added a unit test and made a slight change to utils.py.\r\n\r\nI didn't realize this, but evidently string.format only complains if you supply less arguments than there are format placeholders, so the original commit worked, but was adding a superfluous named param.\r\n\r\nI added a conditional that prevents the named param from being created and ensures the correct number of args are passed to sting.format. It has the side effect of hiding the SQL query in /templates/table.html when there are no other where clauses--not sure if that's the desired outcome here.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274343647, "label": "add support for ?field__isnull=1"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/107#issuecomment-345117690", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/107", "id": 345117690, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NTExNzY5MA==", "user": {"value": 3433657, "label": "raynae"}, "created_at": "2017-11-17T01:29:41Z", "updated_at": "2017-11-17T01:29:41Z", "author_association": "CONTRIBUTOR", "body": "Thanks for bearing with me. I was getting a message about my branch diverging when I tried to push after rebasing, so I merged master into isnull, seems like that did the trick. 
Let me know if I should make any corrections.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274343647, "label": "add support for ?field__isnull=1"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/125#issuecomment-381361734", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/125", "id": 381361734, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTM2MTczNA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-14T21:26:30Z", "updated_at": "2018-04-14T21:26:30Z", "author_association": "CONTRIBUTOR", "body": "FWIW I am now doing this on my WTR app (instead of silently limiting maps to 1000).\r\n\r\n[Telefonica](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/18325) now has about 4000 markers and good old [BT](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/8412) has 22,000 or so.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275135393, "label": "Plot rows on a map with Leaflet and Leaflet.markercluster"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/160#issuecomment-459915995", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/160", "id": 459915995, "node_id": "MDEyOklzc3VlQ29tbWVudDQ1OTkxNTk5NQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-02-02T00:43:16Z", "updated_at": "2019-02-02T00:58:20Z", "author_association": "CONTRIBUTOR", "body": "Do you have any simple working examples of how to use `--static`? Inspection of default served files suggests locations such as `http://example.com/-/static/app.css?0e06ee`.\r\n\r\nIf `datasette` is being proxied to `http://example.com/foo/datasette`, what form should arguments to `--static` take so that static files are correctly referenced?\r\n\r\nUse case is here: https://github.com/psychemedia/jupyterserverproxy-datasette-demo Trying to do a really simple `datasette` demo in MyBinder using jupyter-server-proxy.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 278208011, "label": "Ability to bundle and serve additional static files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/163#issuecomment-804539729", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/163", "id": 804539729, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDUzOTcyOQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:41:14Z", "updated_at": "2021-03-23T02:41:14Z", "author_association": "CONTRIBUTOR", "body": "I'm visiting old issues for context while learning datasette. 
Let me know if okay to make the occasional comment like this one.\r\nquerystring argument now located at:\r\nhttps://docs.datasette.io/en/latest/settings.html#sql-time-limit-ms", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 279547886, "label": "Document the querystring argument for setting a different time limit"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/164#issuecomment-804541064", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/164", "id": 804541064, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDU0MTA2NA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:45:12Z", "updated_at": "2021-03-23T02:45:12Z", "author_association": "CONTRIBUTOR", "body": "\"datasette skeleton\" feature removed #476", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 280013907, "label": "datasette skeleton command for kick-starting database and table metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/179#issuecomment-360535979", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/179", "id": 360535979, "node_id": "MDEyOklzc3VlQ29tbWVudDM2MDUzNTk3OQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-01-25T17:18:24Z", "updated_at": "2018-01-25T17:18:24Z", "author_association": "CONTRIBUTOR", "body": "To summarise that thread:\r\n\r\n- expose full `metadata.json` object to the index page template, eg to allow tables to be referred to by name;\r\n- ability to import multiple `metadata.json` files, eg to allow metadata files created for a specific SQLite db to be reused in a datasette referring to several database files;\r\n\r\nIt could also be useful to allow users to import a python file containing custom functions that can that be loaded into scope and made available to custom templates.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 288438570, "label": "More metadata options for template authors "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/200#issuecomment-380608372", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/200", "id": 380608372, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MDYwODM3Mg==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-11T21:55:46Z", "updated_at": "2018-04-11T21:55:46Z", "author_association": "CONTRIBUTOR", "body": "> I think the most reliable way to detect spatialite is to run `SELECT AddGeometryColumn(1, 2, 3, 4, 5);` against a `:memory:` database and see if it throws an exception\r\n\r\nOr just see if there's a `geometry_columns` table? I think that's quite unlikely to be added by accident (and it's an OGC standard). 
It also tells you if Spatialite is installed in the database rather than just loaded.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313494458, "label": "Hide Spatialite system tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/202#issuecomment-381237440", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/202", "id": 381237440, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTIzNzQ0MA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-13T19:22:53Z", "updated_at": "2018-04-13T19:22:53Z", "author_association": "CONTRIBUTOR", "body": "I spotted you'd mentioned that in #184 but only after I'd written the patch!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313785206, "label": "Raise 404 on nonexistent table URLs"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/203#issuecomment-380966565", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/203", "id": 380966565, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MDk2NjU2NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-12T22:43:08Z", "updated_at": "2018-04-12T22:43:08Z", "author_association": "CONTRIBUTOR", "body": "Looks like [pint](https://pint.readthedocs.io/en/latest/tutorial.html) is pretty good at this.\r\n\r\n```python\r\nIn [1]: import pint\r\n\r\nIn [2]: ureg = pint.UnitRegistry()\r\n\r\nIn [3]: q = 3e6 * ureg('Hz')\r\n\r\nIn [4]: '{:~P}'.format(q.to_compact())\r\nOut[4]: '3.0 MHz'\r\n\r\nIn [5]: q = 0.3 * ureg('m')\r\n\r\nIn [5]: '{:~P}'.format(q.to_compact())\r\nOut[5]: '300.0 mm'\r\n\r\nIn [6]: q = 5 * ureg('')\r\n\r\nIn [7]: '{:~P}'.format(q.to_compact())\r\nOut[7]: '5'\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313837303, "label": "Support for units"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/203#issuecomment-381315675", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/203", "id": 381315675, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTMxNTY3NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-14T09:14:45Z", "updated_at": "2018-04-14T09:27:30Z", "author_association": "CONTRIBUTOR", "body": "> I'd like to figure out a sensible opt-in way to expose this in the JSON output as well. Maybe with a &_units=true parameter?\r\n\r\nFrom a machine-readable perspective I'm not sure why it would be useful to decorate the values with units. Edit: Should have had some coffee first. It's clearly useful for stuff like map rendering!\r\n\r\nI agree that the unit metadata should definitely be exposed in the JSON.\r\n\r\n> In #204 you said \"I'd like to add support for using units when querying but this is PR is pretty usable as-is.\" - I'm fascinated to hear more about how this could work.\r\n\r\nI'm thinking about a couple of approaches here. 
I think the simplest one is: if the column has a unit attached, optionally accept units in query fields:\r\n\r\n```python\r\ncolumn_units = ureg(\"Hz\") # Create a unit object for the column's unit\r\nquery_variable = ureg(\"4 GHz\") # Supplied query variable\r\n\r\n# Now we can convert the query units into column units before querying\r\nsupplied_value.to(column_units).magnitude\r\n> 4000000000.0\r\n\r\n# If the user doesn't supply units, pint just returns the plain\r\n# number and we can query as usual assuming it's the base unit\r\nquery_variable = ureg(\"50\")\r\nquery_variable\r\n> 50\r\n\r\nisinstance(query_variable, numbers.Number)\r\n> True\r\n```\r\n\r\nThis also lets us do some nice unit conversion on querying:\r\n\r\n```python\r\ncolumn_units = ureg(\"m\")\r\nquery_variable = ureg(\"50 ft\")\r\n\r\nsupplied_value.to(column_units)\r\n> \r\n```\r\n\r\nThe alternative would be to provide a dropdown of units next to the query field (so a \"Hz\" field would give you \"kHz\", \"MHz\", \"GHz\"). Although this would be clearer to the user, it isn't so easy - we'd need to know more about the context of the field to give you sensible SI prefixes (I'm not so interested in nanoHertz, for example).\r\n\r\nYou also lose the bonus of being able to convert - although pint will happily show you all the compatible units, it again suffers from a lack of context:\r\n\r\n```python\r\nureg(\"m\").compatible_units()\r\n> frozenset({,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n ,\r\n })\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313837303, "label": "Support for units"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/203#issuecomment-381763651", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/203", "id": 381763651, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTc2MzY1MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-16T21:59:17Z", "updated_at": "2018-04-16T21:59:17Z", "author_association": "CONTRIBUTOR", "body": "Ah, I had no idea you could bind python functions into sqlite!\r\n\r\nI think the primary purpose of this issue has been served now - I'm going to close this and create a new issue for the only bit of this that hasn't been touched yet, which is (optionally) exposing units in the JSON API.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313837303, "label": "Support for units"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/205#issuecomment-381332222", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/205", "id": 381332222, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTMzMjIyMg==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-14T14:16:35Z", "updated_at": "2018-04-14T14:16:35Z", "author_association": "CONTRIBUTOR", "body": "I've added some tests and that docs link.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 314319372, "label": "Support filtering with units and more"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/209#issuecomment-381441392", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/209", "id": 
381441392, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTQ0MTM5Mg==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-15T21:59:15Z", "updated_at": "2018-04-15T21:59:15Z", "author_association": "CONTRIBUTOR", "body": "I suspected this would cause some test failures, but I'll wait for opinions before attempting to fix them.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 314455877, "label": " Don't duplicate simple primary keys in the link column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/209#issuecomment-381738137", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/209", "id": 381738137, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTczODEzNw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-16T20:27:43Z", "updated_at": "2018-04-16T20:27:43Z", "author_association": "CONTRIBUTOR", "body": "Tests now fixed, honest. The failing test on Travis looks like an intermittent sqlite failure which should resolve itself on a retry...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 314455877, "label": " Don't duplicate simple primary keys in the link column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/209#issuecomment-381905593", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/209", "id": 381905593, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTkwNTU5Mw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-17T08:50:28Z", "updated_at": "2018-04-17T08:50:28Z", "author_association": "CONTRIBUTOR", "body": "I've added another commit which puts classes a class on each `` by default with its column name, and I've also made the PK column bold.\r\n\r\nUnfortunately the tests are still failing on 3.6, which is weird. I can't reproduce locally...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 314455877, "label": " Don't duplicate simple primary keys in the link column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-608716819", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 608716819, "node_id": "MDEyOklzc3VlQ29tbWVudDYwODcxNjgxOQ==", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2020-04-03T22:19:00Z", "updated_at": "2020-04-03T22:19:00Z", "author_association": "CONTRIBUTOR", "body": "Hi Simon,\r\n\r\nI'm thinking of attempting this. Can you clarify some questions I have?\r\n\r\n1) I assume the goal is to have a CORS-friendly HTTPS endpoint that hosts the datasette service + user's db.\r\n\r\n2) If that's the goal, I think Lambda alone is insufficient. Lambda provides the compute fabric, but not the HTTP routing. You'd also need to add Application Load Balancer or API Gateway to provide an HTTP endpoint that routes to the lambda function.\r\n\r\nDo you have a preference between ALB or API GW? ALB has better economics at scale, but has a minimum monthly cost. API GW has worse per-request economics, but scales to zero when no requests are happening.\r\n\r\n3) Does Datasette have any native components, or is it all pure python? 
If it has native bits, they'll likely need to be recompiled to work on Amazon Linux 2.\r\n\r\n4) There are a few disparate services that need to be wired together to expose a Python service securely to the web. If I was doing this outside of the datasette publish system, I'd use an AWS CloudFormation template. Even within datasette, I think it still makes sense to use a CloudFormation template and just have the publish plugin invoke it (via the standard `aws` cli) with user-specified parameters. Does that sound reasonable to you?\r\n\r\nThanks for your help!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-612216820", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 612216820, "node_id": "MDEyOklzc3VlQ29tbWVudDYxMjIxNjgyMA==", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2020-04-10T21:03:38Z", "updated_at": "2020-04-10T21:03:38Z", "author_association": "CONTRIBUTOR", "body": "I made a repo at https://github.com/code402/datasette-lambda to demonstrate the idea, and scratch my personal itch for this.\r\n\r\nThe demo relies on some central authority having already published a public, reusable Lambda layer with Datasette & its dependencies. I think that differs from the other publish plugins which seem to mainly publish Dockerfiles that the host will interpret to install deps from a requirements.txt file.\r\n\r\nI chose that approach because `uvloop` appears to be a dependency with native code that needs to be compiled for the target runtime environment. In this case, that's Amazon Linux 2. I'm not 100% clear on whether that's still required, because:\r\n\r\n- maybe `uvloop` is only needed for `uvicorn`, which the demo doesn't actually use since HTTP routing is handled by API Gateway\r\n- it seems like `uvloop` may be an optional, drop-in optimization for `asyncio` in any case (but I may be misreading this; I'm very much a Python noob)\r\n\r\nIf it's the case that `uvloop` is truly optional, then I think the publish plugin could do the packaging on the user's machine, regardless of what flavour of operating system they're on. That'd be a bit slower for the user, but would provide the most long-term flexibility in terms of supporting plugins.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-799002993", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 799002993, "node_id": "MDEyOklzc3VlQ29tbWVudDc5OTAwMjk5Mw==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-03-14T23:41:51Z", "updated_at": "2021-03-14T23:41:51Z", "author_association": "CONTRIBUTOR", "body": "Now that [Lambda supports Docker](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/), this probably is a bit easier and may be able to build on top of the existing package command.\r\n\r\nThere are weirdnesses in how the command actually gets invoked; the [aws-lambda-python image](https://hub.docker.com/r/amazon/aws-lambda-python) shows a bit of that. 
So Datasette would probably need some sort of Lambda-specific entry point to make this work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-799003172", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 799003172, "node_id": "MDEyOklzc3VlQ29tbWVudDc5OTAwMzE3Mg==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-03-14T23:42:57Z", "updated_at": "2021-03-14T23:42:57Z", "author_association": "CONTRIBUTOR", "body": "Oh, and the container image can be up to 10GB, so the EFS step might not be needed except for pretty big stuff.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/273#issuecomment-390250253", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/273", "id": 390250253, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MDI1MDI1Mw==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2018-05-18T15:49:52Z", "updated_at": "2018-05-18T15:49:52Z", "author_association": "CONTRIBUTOR", "body": "Shouldn't [versioneer](https://github.com/warner/python-versioneer) do that?\r\n\r\nE.g. 0.21+2.g1076c97\r\n\r\nYou'd need to install via `pip install git+https://github.com/simow/datasette.git` though, this does a temp git clone.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324451322, "label": "Figure out a way to have /-/version return current git commit hash"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-390795067", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 390795067, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MDc5NTA2Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-05-21T21:55:57Z", "updated_at": "2018-05-21T21:55:57Z", "author_association": "CONTRIBUTOR", "body": "Well, we do have the capability to detect spatialite so my intention certainly wasn't to require it. \r\n\r\nI can see the advantage of having it as a plugin but it does touch a number of points in the code. I think I'm going to attack this by refactoring the necessary bits and seeing where that leads (which was my plan anyway).\r\n\r\nI think my main concern is - if I add certain plugin hooks for this, is anything else ever going to use them? 
I'm not sure I have an answer to that question yet, either way.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-391050113", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 391050113, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA1MDExMw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-05-22T16:13:00Z", "updated_at": "2018-05-22T16:13:00Z", "author_association": "CONTRIBUTOR", "body": "Yup, I'll have a think about it. My current thoughts are for spatialite we'll need to hook into the following places:\r\n\r\n* Inspection, so we can detect which columns are geometry columns. (We also currently ignore spatialite tables during inspection, it may be worth moving that to the plugin as well.)\r\n* After data load, so we can convert WKB into the correct intermediate format for display. The alternative here is to alter the select SQL itself and get spatialite to do this conversion, but that strikes me as a bit more complex and possibly not as useful.\r\n* HTML rendering.\r\n* Querying?\r\n\r\nThe rendering and querying hooks could also potentially be used to move the units support into a plugin.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-391505930", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 391505930, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTUwNTkzMA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-05-23T21:41:37Z", "updated_at": "2018-05-23T21:41:37Z", "author_association": "CONTRIBUTOR", "body": "> I'm not keen on anything that modifies the SQLite file itself on startup\r\n\r\nAh I didn't mean that - I meant altering the SELECT query to fetch the data so that it ran a spatialite function to transform that specific column.\r\n\r\nI think that's less useful as a general-purpose plugin hook though, and it's not that hard to parse the WKB in Python (my default approach would be to use [shapely](https://github.com/Toblerity/Shapely), which is great, but geomet looks like an interesting pure-python alternative).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-392825746", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 392825746, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MjgyNTc0Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-05-29T15:42:53Z", "updated_at": "2018-05-29T15:42:53Z", "author_association": "CONTRIBUTOR", "body": "I haven't had time to look further into this, but if doing this as a plugin results in useful hooks then I think we should do it that way. We could always require the plugin as a standard dependency. 
\r\n\r\nI think this is going to result in quite a bit of refactoring anyway so it's a good time to add hooks regardless. \r\n\r\nOn the other hand, if we have to add lots of specialist hooks for it then maybe it's worth integrating into the core.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-393106520", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 393106520, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MzEwNjUyMA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-05-30T10:09:25Z", "updated_at": "2018-05-30T10:09:25Z", "author_association": "CONTRIBUTOR", "body": "I don't think it's unreasonable to only support spatialite geometries in a coordinate reference system which is at least transformable to WGS84. It would be nice to support different CRSes in the database so conversion to spatialite from the source data is lossless.\r\n\r\nI think the working CRS for datasette should be WGS84 though (leaflet requires it, for example) - it's just a case of calling `ST_Transform(geom, 4326)` on the column while we're loading the data.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-401310732", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 401310732, "node_id": "MDEyOklzc3VlQ29tbWVudDQwMTMxMDczMg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-06-29T10:05:04Z", "updated_at": "2018-06-29T10:07:25Z", "author_association": "CONTRIBUTOR", "body": "@russs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg [kartena/Proj4Leaflet](https://kartena.github.io/Proj4Leaflet/)) although the leaflet side would need to detect or be informed of the original projection?\r\n\r\nAnother possibility would be to provide an easy way/guidance for users to create an FK'd table containing the WGS84 projection of a non-WGS84 geometry in the original/principle table? 
This could then as a proxy for serving GeoJSON to the leaflet map?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-401312981", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 401312981, "node_id": "MDEyOklzc3VlQ29tbWVudDQwMTMxMjk4MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-06-29T10:14:54Z", "updated_at": "2018-06-29T10:14:54Z", "author_association": "CONTRIBUTOR", "body": "> @RusSs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg kartena/Proj4Leaflet) although the leaflet side would need to detect or be informed of the original projection?\r\n\r\nWell, as @simonw mentioned, GeoJSON only supports WGS84, and GeoJSON (and/or TopoJSON) is the standard we probably want to aim for. On-the-fly reprojection in spatialite is not an issue anyway, and in general I think you want to be serving stuff to web maps in WGS84 or Web Mercator.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/279#issuecomment-391073009", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/279", "id": 391073009, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA3MzAwOQ==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2018-05-22T17:23:26Z", "updated_at": "2018-05-22T17:23:26Z", "author_association": "CONTRIBUTOR", "body": "> I think I prefer the aesthetics of just \"0.22\" for the version string if it's a tagged release with no additional changes - does that work?\r\n\r\nYes! 
That's the default versioneer behaviour.\r\n\r\n> I'd like to continue to provide a tuple that can be imported from the version.py module as well, as seen here:\r\n\r\nShould work now, it can be a two (for a tagged version), three or four items tuple.\r\n\r\n```\r\nIn [2]: datasette.__version__\r\nOut[2]: '0.12+292.ga70c2a8.dirty'\r\n\r\nIn [3]: datasette.__version_info__\r\nOut[3]: ('0', '12+292', 'ga70c2a8', 'dirty')\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325352370, "label": "Add version number support with Versioneer"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/279#issuecomment-391073267", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/279", "id": 391073267, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA3MzI2Nw==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2018-05-22T17:24:16Z", "updated_at": "2018-05-22T17:24:16Z", "author_association": "CONTRIBUTOR", "body": "Sorry, just realised you rely on `version` being a module ...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325352370, "label": "Add version number support with Versioneer"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/279#issuecomment-391077700", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/279", "id": 391077700, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA3NzcwMA==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2018-05-22T17:38:17Z", "updated_at": "2018-05-22T17:38:17Z", "author_association": "CONTRIBUTOR", "body": "Alright, that should work now -- let me know if you would prefer any different behaviour.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325352370, "label": "Add version number support with Versioneer"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/280#issuecomment-391059008", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/280", "id": 391059008, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA1OTAwOA==", "user": {"value": 565628, "label": "r4vi"}, "created_at": "2018-05-22T16:40:27Z", "updated_at": "2018-05-22T16:40:27Z", "author_association": "CONTRIBUTOR", "body": "```python\r\n>>> import sqlite3\r\n>>> sqlite3.sqlite_version\r\n'3.23.1'\r\n>>> \r\n```\r\nrunning the above in the container seems to show 3.23.1 too so maybe we don't need pysqlite3 at all?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325373747, "label": "Build Dockerfile with recent Sqlite + Spatialite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/280#issuecomment-391141391", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/280", "id": 391141391, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTE0MTM5MQ==", "user": {"value": 565628, "label": "r4vi"}, "created_at": "2018-05-22T21:08:39Z", "updated_at": "2018-05-22T21:08:39Z", "author_association": "CONTRIBUTOR", "body": "I'm going to clean this up for consistency tomorrow morning so hold off\nmerging until then please\n\nOn Tue, May 22, 2018 at 
6:34 PM, Simon Willison \nwrote:\n\n> Yeah let's try this without pysqlite3 and see if we still get the correct\n> version.\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> , or mute\n> the thread\n> \n> .\n>\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325373747, "label": "Build Dockerfile with recent Sqlite + Spatialite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/280#issuecomment-391290271", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/280", "id": 391290271, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTI5MDI3MQ==", "user": {"value": 565628, "label": "r4vi"}, "created_at": "2018-05-23T09:53:38Z", "updated_at": "2018-05-23T09:53:38Z", "author_association": "CONTRIBUTOR", "body": "Running:\r\n```bash\r\ndocker run -p 8001:8001 -v `pwd`:/mnt datasette \\\r\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \\\r\n --load-extension=/usr/local/lib/mod_spatialite.so\r\n```\r\n\r\nis now returning FTS5 enabled in the versions output:\r\n\r\n```json\r\n{\r\n \"datasette\": {\r\n \"version\": \"0.22\"\r\n },\r\n \"python\": {\r\n \"full\": \"3.6.5 (default, May 5 2018, 03:07:21) \\n[GCC 6.3.0 20170516]\",\r\n \"version\": \"3.6.5\"\r\n },\r\n \"sqlite\": {\r\n \"extensions\": {\r\n \"json1\": null,\r\n \"spatialite\": \"4.4.0-RC0\"\r\n },\r\n \"fts_versions\": [\r\n \"FTS5\",\r\n \"FTS4\",\r\n \"FTS3\"\r\n ],\r\n \"version\": \"3.23.1\"\r\n }\r\n}\r\n```\r\nThe old query didn't work because specifying `(t TEXT)` caused an error", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325373747, "label": "Build Dockerfile with recent Sqlite + Spatialite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/280#issuecomment-391355030", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/280", "id": 391355030, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTM1NTAzMA==", "user": {"value": 565628, "label": "r4vi"}, "created_at": "2018-05-23T13:53:27Z", "updated_at": "2018-05-23T15:22:45Z", "author_association": "CONTRIBUTOR", "body": "No objections;\r\nIt's good to go @simonw\r\n\r\nOn Wed, 23 May 2018, 14:51 Simon Willison, wrote:\r\n\r\n> @r4vi any objections to me merging this?\r\n>\r\n> \u2014\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> , or mute\r\n> the thread\r\n> \r\n> .\r\n>\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325373747, "label": "Build Dockerfile with recent Sqlite + Spatialite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/294#issuecomment-405026800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/294", "id": 405026800, "node_id": "MDEyOklzc3VlQ29tbWVudDQwNTAyNjgwMA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-07-14T14:24:31Z", "updated_at": "2018-07-14T14:24:31Z", "author_association": "CONTRIBUTOR", "body": "I had a quick look at this in relation to #343 and I feel like it might be worth modelling the inspected table metadata internally as an object rather than a dict. 
(We'd still have to serialise it back to JSON.)\r\n\r\nThere are a few places where we rely on the structure of this metadata dict for various reasons, including in templates (and potentially also in user templates). It would be nice to have a reasonably well defined API for accessing metadata internally so that it's clearer what we're breaking.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 327365110, "label": "inspect should record column types"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/329#issuecomment-422821483", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/329", "id": 422821483, "node_id": "MDEyOklzc3VlQ29tbWVudDQyMjgyMTQ4Mw==", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2018-09-19T14:17:42Z", "updated_at": "2018-09-19T14:17:42Z", "author_association": "CONTRIBUTOR", "body": "I'm using the docker image (0.23.2) and notice some differences/bugs between the docs and the published version with canned queries. (submitted a tiny doc fix also)\r\n\r\nI was able to build the docker container locally using `master` and I'm using that for now.\r\nWould it be possible to manually push 0.24 to DockerHub until the TravisCI stuff is fixed?\r\n\r\nI would like to run this in our Kubernetes cluster but don't want to publish a version in our internal registry if I don't have to.\r\nThanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 336465018, "label": "Travis should push tagged images to Docker Hub for each release"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/329#issuecomment-422915450", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/329", "id": 422915450, "node_id": "MDEyOklzc3VlQ29tbWVudDQyMjkxNTQ1MA==", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2018-09-19T18:45:02Z", "updated_at": "2018-09-20T10:50:50Z", "author_association": "CONTRIBUTOR", "body": "That works for me. Was able to pull the public image and no errors on my canned query. (~although a small rendering bug. 
I'll create an issue and if I have time today, a PR to fix~ this turned out to be my error.)\r\nThanks for the quick response!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 336465018, "label": "Travis should push tagged images to Docker Hub for each release"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/343#issuecomment-405026441", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/343", "id": 405026441, "node_id": "MDEyOklzc3VlQ29tbWVudDQwNTAyNjQ0MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-07-14T14:17:14Z", "updated_at": "2018-07-14T14:17:14Z", "author_association": "CONTRIBUTOR", "body": "This probably depends on #294.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 341228846, "label": "Render boolean fields better by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/344#issuecomment-405022335", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/344", "id": 405022335, "node_id": "MDEyOklzc3VlQ29tbWVudDQwNTAyMjMzNQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-07-14T13:00:48Z", "updated_at": "2018-07-14T13:00:48Z", "author_association": "CONTRIBUTOR", "body": "Looks like this was a red herring actually, and heroku had a blip when I was testing it...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 341229113, "label": "datasette publish heroku fails without name provided"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/366#issuecomment-429737929", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/366", "id": 429737929, "node_id": "MDEyOklzc3VlQ29tbWVudDQyOTczNzkyOQ==", "user": {"value": 416374, "label": "gfrmin"}, "created_at": "2018-10-15T07:32:57Z", "updated_at": "2018-10-15T07:32:57Z", "author_association": "CONTRIBUTOR", "body": "Very hacky solution is to write now.json file forcing the usage of v1 of Zeit cloud, see https://github.com/slygent/datasette/commit/3ab824793ec6534b6dd87078aa46b11c4fa78ea3\r\n\r\nThis does work, at least.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 369716228, "label": "Default built image size over Zeit Now 100MiB limit"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/369#issuecomment-435768450", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/369", "id": 435768450, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNTc2ODQ1MA==", "user": {"value": 416374, "label": "gfrmin"}, "created_at": "2018-11-05T06:31:59Z", "updated_at": "2018-11-05T06:31:59Z", "author_association": "CONTRIBUTOR", "body": "That would be ideal, but you know better than me whether the CSV streaming trick works for custom SQL queries.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 374953006, "label": "Interface should show same JSON shape options for custom SQL queries"}, "performed_via_github_app": null} 
{"html_url": "https://github.com/simonw/datasette/issues/370#issuecomment-436037692", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/370", "id": 436037692, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNjAzNzY5Mg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-11-05T21:15:47Z", "updated_at": "2018-11-05T21:18:37Z", "author_association": "CONTRIBUTOR", "body": "In terms of integration with `pandas`, I was pondering two different ways `datasette`/`csvs_to_sqlite` integration may work:\r\n\r\n- like [`pandasql`](https://github.com/yhat/pandasql), to provide a SQL query layer either by a direct connection to the sqlite db or via `datasette` API;\r\n- as an improvement of `pandas.to_sql()`, which is a bit ropey (e.g. `pandas.to_sql_from_csvs()`, routing the dataframe to sqlite via `csvs_tosqlite` rather than the dodgy mapping that `pandas` supports).\r\n\r\nThe `pandas.publish_*` idea could be quite interesting though... Would it be useful/fruitful to think about `publish_` as a complement to [`pandas.to_`](https://pandas.pydata.org/pandas-docs/stable/api.html#id12)?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377155320, "label": "Integration with JupyterLab"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/370#issuecomment-436042445", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/370", "id": 436042445, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNjA0MjQ0NQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-11-05T21:30:42Z", "updated_at": "2018-11-05T21:31:48Z", "author_association": "CONTRIBUTOR", "body": "Another route would be something like creating a `datasette` IPython magic for notebooks to take a dataframe and easily render it as a `datasette`. You'd need to run the app in the background rather than block execution in the notebook. 
Related to that, or to publishing a dataframe in a notebook cell for use in other cells in a non-blocking way, there may be cribs in something like https://github.com/micahscopes/nbmultitask .", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377155320, "label": "Integration with JupyterLab"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/370#issuecomment-1261930179", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/370", "id": 1261930179, "node_id": "IC_kwDOBm6k_c5LN4bD", "user": {"value": 72577720, "label": "MichaelTiemannOSC"}, "created_at": "2022-09-29T08:17:46Z", "updated_at": "2022-09-29T08:17:46Z", "author_association": "CONTRIBUTOR", "body": "Just watched this video which demonstrates the integration of *any* webapp into JupyterLab: https://youtu.be/FH1dKKmvFtc\r\n\r\nMaybe this is the answer?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377155320, "label": "Integration with JupyterLab"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/371#issuecomment-435862009", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/371", "id": 435862009, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNTg2MjAwOQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-11-05T12:48:35Z", "updated_at": "2018-11-05T12:48:35Z", "author_association": "CONTRIBUTOR", "body": "I think you need to register a domain name you own separately in order to get a non-IP address? https://www.digitalocean.com/docs/networking/dns/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377156339, "label": "datasette publish digitalocean plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-499320973", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 499320973, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5OTMyMDk3Mw==", "user": {"value": 13896256, "label": "kevindkeogh"}, "created_at": "2019-06-06T02:07:59Z", "updated_at": "2019-06-06T02:07:59Z", "author_association": "CONTRIBUTOR", "body": "Hey was this ever merged? 
Trying to run this behind nginx, and encountering this issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-499923145", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 499923145, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5OTkyMzE0NQ==", "user": {"value": 13896256, "label": "kevindkeogh"}, "created_at": "2019-06-07T15:10:57Z", "updated_at": "2019-06-07T15:11:07Z", "author_association": "CONTRIBUTOR", "body": "Putting this here in case anyone else encounters the same issue with nginx, I was able to resolve it by passing the header in the nginx proxy config (i.e., `proxy_set_header Host $host`).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-556749086", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 556749086, "node_id": "MDEyOklzc3VlQ29tbWVudDU1Njc0OTA4Ng==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2019-11-21T01:15:34Z", "updated_at": "2019-11-21T01:21:45Z", "author_association": "CONTRIBUTOR", "body": "Hey @simonw, is the url_prefix config option available in another branch? It looks like you've written some tests for it above. In 0.32 I get \"url_prefix is not a valid option\". I think this would be *really helpful*!\r\n\r\nThis would be really handy for proxying datasette in another domain's *subdirectory*. I believe this will allow folks to run upstream authentication, but the links break if the url_prefix doesn't match. \r\n\r\nI'd prefer not to host a proxied version of datasette on a subdomain (e.g. datasette.myurl.com b/c then I gotta worry about sharing authorization cookies with the subdomain, which I'd just as soon not do, but...)\r\n\r\nEdit: I see the wip-url-prefix branch, I may try with that https://github.com/simonw/datasette/commit/8da2db4b71096b19e7a9ef1929369b8483d448bf", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567133734", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567133734, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEzMzczNA==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2019-12-18T17:33:23Z", "updated_at": "2019-12-18T17:33:23Z", "author_association": "CONTRIBUTOR", "body": "FWIW I did a dumb merge of the branch here: https://github.com/jsfenfen/datasette and it seemed to work in that I could run stuff at a subdirectory, but ended up abandoning it in favor of just posting at a subdomain because getting the nginx configs right was making me crazy. I still would prefer posting at a subdirectory but the subdomain seems simpler at the moment. 
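Putting the two nginx observations above together, a minimal sketch of a subdirectory proxy block. The `proxy_set_header Host $host` line is the fix quoted earlier; the port and the `/datasette/` prefix are invented for illustration:

```nginx
location /datasette/ {
    # Forward to the local Datasette instance
    proxy_pass http://127.0.0.1:8001/;
    # Pass the original Host header through (the fix described above)
    proxy_set_header Host $host;
}
```

For the links to survive this, Datasette itself also needs to know about the `/datasette/` prefix, which is exactly what the `base_url`/`url_prefix` work in this thread is about.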
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602907207", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602907207, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-23T23:12:18Z", "updated_at": "2020-03-23T23:12:18Z", "author_association": "CONTRIBUTOR", "body": "This would also be useful for running Datasette in Jupyter notebooks on [Binder](https://mybinder.org/). While you can use [Jupyter-server-proxy](https://github.com/jupyterhub/jupyter-server-proxy) to access Datasette on Binder, the links are broken.\r\n\r\nWhy run Datasette on Binder? I'm developing a [range of Jupyter notebooks](https://glam-workbench.github.io/) that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic.\r\n\r\nFor example, there are a [series of notebooks](https://glam-workbench.github.io/trove-harvester/) that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data.\r\n\r\nBut it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-604166918", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 604166918, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDE2NjkxOA==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-26T00:56:30Z", "updated_at": "2020-03-26T00:56:30Z", "author_association": "CONTRIBUTOR", "body": "Thanks! I'm trying to launch Datasette from *within* a notebook using the jupyter-server-proxy and the new `base_url` parameter. While the assets load ok, and the breadcrumb navigation works, the facet links don't seem to use the `base_url`. 
Or have I missed something?\r\n\r\nMy test repository is here: https://github.com/wragge/datasette-test", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-641908346", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 641908346, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-06-10T10:22:54Z", "updated_at": "2020-06-10T10:22:54Z", "author_association": "CONTRIBUTOR", "body": "There's a working demo here: https://github.com/wragge/datasette-test\r\n\r\nAnd if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/412#issuecomment-474282321", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/412", "id": 474282321, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3NDI4MjMyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-03-19T10:09:46Z", "updated_at": "2019-03-19T10:09:46Z", "author_association": "CONTRIBUTOR", "body": "Does this also relate to https://github.com/simonw/datasette/issues/283 and the ability to `ATTACH DATABASE`?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 411257981, "label": "Linked Data(sette)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-474280581", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 474280581, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3NDI4MDU4MQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-03-19T10:06:42Z", "updated_at": "2019-03-19T10:06:42Z", "author_association": "CONTRIBUTOR", "body": "This would be really interesting but several possibilities in use arise, I think?\r\n\r\nFor example:\r\n\r\n- I put a new CSV file into the import dir and a new table is created therefrom\r\n- I put a CSV file into the import dir that replaces a previous file / table of the same name as a pre-existing table (eg files that contain monthly data in year to date). The data may also patch previous months, so a full replace / DROP on the original table may well be in order.\r\n- I put a CSV file into the import dir that updates a table of the same name as a pre-existing table (eg files that contain last month's data)\r\n\r\nCSV files may also have messy names compared to the table you want. 
Or for an update CSV, may have the form `MYTABLENAME-February2019.csv` etc", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-586599424", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 586599424, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-02-15T15:12:19Z", "updated_at": "2020-02-15T15:12:33Z", "author_association": "CONTRIBUTOR", "body": "So could the polling support also allow you to call sqlite_utils to update a database with csv files? (Though I'm guessing you would only want to handle changed files? Do your scrapers check and cache csv datestamps/hashes?)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-752098906", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 752098906, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-12-29T14:34:30Z", "updated_at": "2020-12-29T14:34:50Z", "author_association": "CONTRIBUTOR", "body": "FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py\r\n\r\nNot a production thing, just an experiment trying to explore what might be possible...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/419#issuecomment-489060765", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/419", "id": 489060765, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTA2MDc2NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-03T11:07:42Z", "updated_at": "2019-05-03T11:07:42Z", "author_association": "CONTRIBUTOR", "body": "Are you planning on removing inspect entirely? \r\n\r\nI didn't spot this work before I started on datasette-geo, but ironically I think it has a use case which really needs the inspect functionality (or some replacement). \r\n\r\nDatasette-geo uses it to store the bounding box of all the geographic features in the table. This is needed when rendering the map because it avoids having to send loads of tile requests for areas which are empty. 
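A minimal sketch of the "call sqlite_utils to update a database with csv files" idea from the comments above, using the full drop-and-reload strategy described for replacement files. All names here are invented, and a real watcher would add change detection on top:

```python
import csv
import sqlite_utils

def replace_table_from_csv(db_path, table, csv_path):
    # Full replace: drop any existing table, then load every row from the CSV.
    db = sqlite_utils.Database(db_path)
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    db[table].drop(ignore=True)
    db[table].insert_all(rows)

replace_table_from_csv("data.db", "mytable", "MYTABLENAME-February2019.csv")
```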
\r\n\r\nEven with relatively small datasets, calculating the bounding box seems to take around 5 seconds, so I don't think it's really feasible to do this on page load.\r\n\r\nOne possible fix would be to do this on startup, and then in a thread which watches the database for changes.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421551434, "label": "Default to opening files in mutable mode, special option for immutable files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/424#issuecomment-487689477", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/424", "id": 487689477, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzY4OTQ3Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T18:22:40Z", "updated_at": "2019-04-29T18:22:40Z", "author_association": "CONTRIBUTOR", "body": "This is pretty conflicty because I forgot how to use git fetch. If you're interested in merging this I'll rewrite it against an actual modern checkout...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 427429265, "label": "Column types in inspected metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/424#issuecomment-487692377", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/424", "id": 487692377, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzY5MjM3Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T18:30:46Z", "updated_at": "2019-04-29T18:30:46Z", "author_association": "CONTRIBUTOR", "body": "Actually no, I ended up not using the inspected column types in my plugin, and the binary column issue can be solved a lot more simply, so I'll close this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 427429265, "label": "Column types in inspected metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/429#issuecomment-483202658", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/429", "id": 483202658, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MzIwMjY1OA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-15T10:48:01Z", "updated_at": "2019-04-15T10:48:01Z", "author_association": "CONTRIBUTOR", "body": "Minor UI observation:\r\n\r\n![image](https://user-images.githubusercontent.com/82988/56127017-2bf78e80-5f74-11e9-9120-9393eb5d4988.png)\r\n\r\n`_where=` renders a `[remove]` link whereas `_facet=` gets a cross to remove it. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432636432, "label": "?_where=sql-fragment parameter for table views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/431#issuecomment-483017176", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/431", "id": 483017176, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MzAxNzE3Ng==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-14T16:58:37Z", "updated_at": "2019-04-14T16:58:37Z", "author_association": "CONTRIBUTOR", "body": "Hmm... nope... 
I see an updated timestamp from `ls -al` on the db but no reload?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432870248, "label": "Datasette doesn't reload when database file changes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/432#issuecomment-488595724", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/432", "id": 488595724, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4ODU5NTcyNA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-02T08:50:53Z", "updated_at": "2019-05-02T08:50:53Z", "author_association": "CONTRIBUTOR", "body": "> Can I pull those needs out of the Facet class somehow?\r\n\r\nI was thinking that it might be handy for datasette to have a request object which wraps the Sanic Request. This could include the datasette-specific querystring decoding and the `special_args` parsing from TableView.data.\r\n\r\nThis would mean that we could expose the request object to plugin hooks without coupling them to Sanic.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432893491, "label": "Refactor facets to a class and new plugin, refs #427"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/434#issuecomment-489105665", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/434", "id": 489105665, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTEwNTY2NQ==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2019-05-03T14:01:30Z", "updated_at": "2019-05-03T14:01:30Z", "author_association": "CONTRIBUTOR", "body": "This is exactly what I needed. Thank you.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 434321685, "label": "\"datasette publish cloudrun\" command to publish to Google Cloud Run"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/434#issuecomment-489163939", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/434", "id": 489163939, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTE2MzkzOQ==", "user": {"value": 10352819, "label": "rprimet"}, "created_at": "2019-05-03T16:49:45Z", "updated_at": "2019-05-03T16:50:03Z", "author_association": "CONTRIBUTOR", "body": "> The second time I ran the command I got an error:\r\n\r\n> \r\n> ERROR: (gcloud.beta.run.deploy) Deployment endpoint was not found. Perhaps the\r\n> provided region was invalid. Set the `run/region` property to a valid region and\r\n> retry. Ex: `gcloud config set run/region us-central1`\r\n> \r\n\r\nYes, I was able to reproduce this; I used to get prompted for a run region interactively by the `gcloud` tool before, but maybe this is changing? 
(the [documentation](https://cloud.google.com/run/docs/deploying) now assumes `run/region` is set).\r\n\r\nNot sure which course of action is best: making `datasette` ensure that `run/region` is set beforehand or waiting a bit until the gcloud CLI stabilizes?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 434321685, "label": "\"datasette publish cloudrun\" command to publish to Google Cloud Run"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/437#issuecomment-487537452", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/437", "id": 487537452, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzUzNzQ1Mg==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T10:58:49Z", "updated_at": "2019-04-29T10:58:49Z", "author_association": "CONTRIBUTOR", "body": "I've just spotted that this implements #215.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438048318, "label": "Add inspect and prepare_sanic hooks"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/439#issuecomment-487542486", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/439", "id": 487542486, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzU0MjQ4Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T11:20:30Z", "updated_at": "2019-04-29T11:20:30Z", "author_association": "CONTRIBUTOR", "body": "Actually I think this is not the whole story because of the rowid issue. I'm going to think about this one a bit more.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438240541, "label": "[WIP] Add primary key to the extra_body_script hook arguments"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/439#issuecomment-487859345", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/439", "id": 487859345, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4Nzg1OTM0NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-30T08:21:19Z", "updated_at": "2019-04-30T08:21:19Z", "author_association": "CONTRIBUTOR", "body": "I think the best approach to this is to pass through the `view_name` parameter I added in #441. It's then simple enough for me to add `.geojson` to the URL in JS - I don't need the pkey.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438240541, "label": "[WIP] Add primary key to the extra_body_script hook arguments"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487686655", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487686655, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzY4NjY1NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T18:14:25Z", "updated_at": "2019-04-29T18:14:25Z", "author_association": "CONTRIBUTOR", "body": "Subsidiary note which I forgot in the commit message:\r\n\r\nI've decided to give each view a short string name to aid in differentiating which view a hook is being called from. 
Since hooks are functions and not subclasses, and can get called from different places in the URL hierarchy, it's sometimes difficult to distinguish what data you're actually operating on. I think this will come in handy for other hooks as well.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487723476", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487723476, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzcyMzQ3Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T20:05:23Z", "updated_at": "2019-04-29T20:05:23Z", "author_association": "CONTRIBUTOR", "body": "This is the minimal example (I also included it in the docs):\r\n\r\n```python\r\nfrom datasette import hookimpl\r\n\r\ndef render_test(args, data, view_name):\r\n    return {\r\n        'body': 'Hello World',\r\n        'content_type': 'text/plain'\r\n    }\r\n\r\n@hookimpl\r\ndef register_output_renderer():\r\n    return {\r\n        'extension': 'test',\r\n        'callback': render_test\r\n    }\r\n```\r\n\r\nI'm working on the GeoJSON one now and it should be ready soon. (I forgot I was going to run into the same problem as before - that Spatialite's stupid binary format isn't WKB and I have no way of altering the query to change that - but I've just managed to write some code to rearrange the bytes from Spatialite blob-geometry into WKB...)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487724539", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487724539, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzcyNDUzOQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T20:08:32Z", "updated_at": "2019-04-29T20:08:32Z", "author_association": "CONTRIBUTOR", "body": "I also just realised that I should be passing the datasette object into the hook function...as I just found I need it. 
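For context on the minimal example above: the renderer is keyed off the URL extension, so once the plugin is loaded it could be exercised roughly like this (a sketch; the `--plugins-dir` layout and the database/table names are invented):

```shell
datasette serve data.db --plugins-dir=plugins/
curl 'http://127.0.0.1:8001/data/mytable.test'
# Hello World
```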
So hold off merging until I've fixed that.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487735247", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487735247, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzczNTI0Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T20:39:43Z", "updated_at": "2019-04-29T20:39:43Z", "author_association": "CONTRIBUTOR", "body": "I updated the hook to pass the datasette object through now.\r\n\r\nYou can see the working [GeoJSON render function here](https://github.com/russss/datasette-geo/blob/master/datasette_plugin_geo/geojson.py) - the [hook function is here](https://github.com/russss/datasette-geo/blob/master/datasette_plugin_geo/__init__.py#L65-L70).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487748271", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487748271, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4Nzc0ODI3MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T21:20:17Z", "updated_at": "2019-04-29T21:20:17Z", "author_association": "CONTRIBUTOR", "body": "Also I just pushed a change to add registered output renderers to the templates:\r\n![image](https://user-images.githubusercontent.com/45057/56927799-f18e0580-6acc-11e9-8ea9-a0ee961323ec.png)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-488247617", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 488247617, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4ODI0NzYxNw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-01T09:57:50Z", "updated_at": "2019-05-01T09:57:50Z", "author_association": "CONTRIBUTOR", "body": "Just for the record, this PR is now finished and ready to merge from my perspective.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/446#issuecomment-489221481", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/446", "id": 489221481, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTIyMTQ4MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-03T19:58:31Z", "updated_at": "2019-05-03T19:58:31Z", "author_association": "CONTRIBUTOR", "body": "In this particular case I don't think there's an issue making all those required. 
However, I suspect we might have to allow optional values at some point - my preferred solution to russss/datasette-geo#2 would need one.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 440134714, "label": "Define mechanism for plugins to return structured data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/446#issuecomment-489222223", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/446", "id": 489222223, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTIyMjIyMw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-03T20:01:19Z", "updated_at": "2019-05-03T20:01:29Z", "author_association": "CONTRIBUTOR", "body": "Also I have a slight preference against (ab)using `__slots__` to enforce fields, although I have done it myself in the past. It would be possible to do this with `__setattr__` instead, although that's an implementation detail and I'm not too fussed about it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 440134714, "label": "Define mechanism for plugins to return structured data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/450#issuecomment-489342728", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/450", "id": 489342728, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTM0MjcyOA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-04T16:37:35Z", "updated_at": "2019-05-04T16:37:35Z", "author_association": "CONTRIBUTOR", "body": "For a bit more context: this fixes a crash with `unsupported operand type(s) for +: 'int' and 'NoneType'` on the index page for me.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 440304714, "label": "Coalesce hidden table count to 0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/456#issuecomment-661524006", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/456", "id": 661524006, "node_id": "MDEyOklzc3VlQ29tbWVudDY2MTUyNDAwNg==", "user": {"value": 32467826, "label": "abeyerpath"}, "created_at": "2020-07-21T01:15:07Z", "updated_at": "2020-07-21T01:15:07Z", "author_association": "CONTRIBUTOR", "body": "Bumping this, as the previous fix is passing the wrong type, and not actually addressing the issue...\r\n\r\nThe `exclude` argument needs an iterable of packages instead of a single string (but since `str` is iterable, it's currently excluding packages `t`, `e`, and `s`.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 442327592, "label": "Installing installs the tests package"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/493#issuecomment-735281577", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/493", "id": 735281577, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTI4MTU3Nw==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-11-28T19:39:53Z", "updated_at": "2020-11-28T19:39:53Z", "author_association": "CONTRIBUTOR", "body": "I was confused by `--config` and I tried passing the json from datasette-ripgrep 
into `config.json` just as a wild guess. \r\n\r\nA short-term solution might be pointing out in each plugin's docs that its snippet of JSON can go in `metadata.json`; that at least makes it easier to search for config options, or to know where to start if someone is new. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 449886319, "label": "Rename metadata.json to config.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/493#issuecomment-748305976", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/493", "id": 748305976, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODMwNTk3Ng==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-12-18T20:34:39Z", "updated_at": "2020-12-18T20:34:39Z", "author_association": "CONTRIBUTOR", "body": "I can't keep up with the renaming contexts, but I like having the ability to run datasette + datasette-ripgrep against different configs: \r\n\r\n```shell\r\ndatasette serve --metadata=./metadata.json\r\n```\r\n\r\nI have one for all of my code and one per client who has lots of code. So as long as I can point datasette at something, it's easy to work with. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 449886319, "label": "Rename metadata.json to config.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/502#issuecomment-812813732", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/502", "id": 812813732, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMjgxMzczMg==", "user": {"value": 5413548, "label": "louispotok"}, "created_at": "2021-04-03T05:16:54Z", "updated_at": "2021-04-03T05:16:54Z", "author_association": "CONTRIBUTOR", "body": "For what it's worth, if anyone finds this in the future, I was having the same issue. 
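To make the `metadata.json` point concrete: plugin settings live under a top-level `plugins` key, so a datasette-ripgrep style setup might look something like this (the `path` value is invented):

```json
{
    "plugins": {
        "datasette-ripgrep": {
            "path": "/path/to/code"
        }
    }
}
```

That file is then what `datasette serve --metadata=./metadata.json` picks up.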
\r\n\r\nAfter digging through the code, it turned out that the database download is only available if the db is served in immutable mode, so `datasette serve -i xyz.db` rather than the docs' quickstart recommendation of `datasette serve xyz.db`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 453131917, "label": "Exporting sqlite database(s)?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/26#issuecomment-964205475", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/26", "id": 964205475, "node_id": "IC_kwDOCGYnMM45eJuj", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-11-09T14:31:29Z", "updated_at": "2021-11-09T14:31:29Z", "author_association": "CONTRIBUTOR", "body": "I was just reaching for a tool to do this this morning", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455486286, "label": "Mechanism for turning nested JSON into foreign keys / many-to-many"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/26#issuecomment-1032120014", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/26", "id": 1032120014, "node_id": "IC_kwDOCGYnMM49hObO", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-02-08T01:32:34Z", "updated_at": "2022-02-08T01:32:34Z", "author_association": "CONTRIBUTOR", "body": "If you are curious about prior art, https://github.com/jsnell/json-to-multicsv is really good!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455486286, "label": "Mechanism for turning nested JSON into foreign keys / many-to-many"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/507#issuecomment-509013413", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/507", "id": 509013413, "node_id": "MDEyOklzc3VlQ29tbWVudDUwOTAxMzQxMw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-07-07T16:31:57Z", "updated_at": "2019-07-07T16:31:57Z", "author_association": "CONTRIBUTOR", "body": "Chrome and Firefox [both support headless screengrabs](https://www.bleepingcomputer.com/news/software/chrome-and-firefox-can-take-screenshots-of-sites-from-the-command-line/) from the command line, but I don't know how parameterised they can be?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455852801, "label": "Every datasette plugin on the ecosystem page should have a screenshot"}, "performed_via_github_app": null}
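On that last question: both browsers take at least some parameters. A sketch of the headless screengrab commands, with a placeholder URL and window size:

```shell
# Chrome/Chromium
google-chrome --headless --screenshot=shot.png --window-size=1280,800 'https://example.com/'

# Firefox
firefox --screenshot shot.png --window-size=1280,800 'https://example.com/'
```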