{"html_url": "https://github.com/simonw/datasette/issues/889#issuecomment-653002499", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/889", "id": 653002499, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MzAwMjQ5OQ==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-07-02T13:22:13Z", "updated_at": "2020-07-02T13:22:13Z", "author_association": "CONTRIBUTOR", "body": "I was able to narrow this down to the fact that lifespan protocol is turned on. \r\n\r\nI see the workaround you've used here: https://github.com/simonw/datasette-debug-asgi/commit/72d568d32a3159c763ce908c0b269736935c6987\r\n\r\nIf so, maybe it's time to update some of the asg_wrapper [plugins](https://datasette.readthedocs.io/en/stable/plugin_hooks.html#asgi-wrapper-datasette). ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 649907676, "label": "asgi_wrapper plugin hook is crashing at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/889#issuecomment-652990131", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/889", "id": 652990131, "node_id": "MDEyOklzc3VlQ29tbWVudDY1Mjk5MDEzMQ==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-07-02T12:58:11Z", "updated_at": "2020-07-02T13:00:18Z", "author_association": "CONTRIBUTOR", "body": "FWIW, this error does NOT happen in datasette 0.45a4.\r\n\r\nIt only started on 0.45a5", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 649907676, "label": "asgi_wrapper plugin hook is crashing at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/883#issuecomment-652394742", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/883", "id": 652394742, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjM5NDc0Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T12:41:13Z", "updated_at": "2020-07-01T12:41:13Z", "author_association": "CONTRIBUTOR", "body": "Well tests need to be updated.\r\n \r\nI need to get tests working on Windows.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648749062, "label": "Skip counting hidden tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/883#issuecomment-652297139", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/883", "id": 652297139, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI5NzEzOQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T09:11:29Z", "updated_at": "2020-07-01T09:11:29Z", "author_association": "CONTRIBUTOR", "body": "Turns out we should include hidden tables in the result dict, or we're breaking tests. 
I've committed a refactor https://github.com/simonw/datasette/pull/883/commits/4f06e1bf6fbe4b73be770b87f610bf7c0e6e3ea7", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648749062, "label": "Skip counting hidden tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652255960", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652255960, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI1NTk2MA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T07:52:25Z", "updated_at": "2020-07-01T08:10:00Z", "author_association": "CONTRIBUTOR", "body": "I am calling the API from another origin, so injecting a CSRF token into templates wouldn't work.\r\n\r\nEDIT:\r\n\r\nI'll try the new version; it sounds promising.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652261382", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652261382, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI2MTM4Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T08:03:17Z", "updated_at": "2020-07-01T08:03:23Z", "author_association": "CONTRIBUTOR", "body": "Bearer tokens sound interesting. Where do tokens come from? An auth provider of my choosing? How do they get verified?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652166115", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652166115, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjE2NjExNQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T03:28:07Z", "updated_at": "2020-07-01T03:28:07Z", "author_association": "CONTRIBUTOR", "body": "Does this mean custom routes get to expose endpoints accepting POST requests? I tried earlier to add some POST endpoints, but the requests were being rejected by Datasette due to CSRF protection.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-652160909", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 652160909, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjE2MDkwOQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T03:09:32Z", "updated_at": "2020-07-01T03:10:21Z", "author_association": "CONTRIBUTOR", "body": "I've just realized Datasette tries to count hidden tables too. There are 5 visible tables and 25 hidden tables, whose effect I hadn't considered earlier. 
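For reference, something like this lists the FTS shadow tables (sketch; DB path assumed):\r\n\r\n```py\r\nimport sqlite3\r\n\r\nconn = sqlite3.connect('scrapyard.db')  # hypothetical path\r\nrows = conn.execute(\r\n    \"select name from sqlite_master where type = 'table' and name like '%_fts%'\"\r\n).fetchall()\r\nprint(len(rows), [r[0] for r in rows])\r\n```\r\n\r\n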
I've turned off counting for hidden tables to see if it has any effect.\r\n\r\nWhat's the point of counting FTS tables?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-648669523", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 648669523, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODY2OTUyMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-24T08:13:23Z", "updated_at": "2020-06-24T10:30:36Z", "author_association": "CONTRIBUTOR", "body": "I tried setting `cache_size_kb=0` then `cache_size_kb=100000`, still getting this behavior. I even changed `Database::table_counts` and lowered the time limit to 1:\r\n\r\n```py\r\ntable_count = (\r\n    await self.execute(\r\n        \"select count(*) from [{}]\".format(table),\r\n        custom_time_limit=1,\r\n    )\r\n).rows[0][0]\r\ncounts[table] = table_count\r\n```\r\n\r\nI feel like 10 seconds is a magic number, like a processing timeout after which datasette gives up and returns the page. \r\nThe index page loads instantly, and so do the table and query pages. But when I return to the database page after some time, it loads in 10s.\r\n\r\nEDIT:\r\n\r\nIt's always like 10 + 0.3s: a 10s wait-and-timeout, then 300ms to render the page.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-648232645", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 648232645, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODIzMjY0NQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T15:19:53Z", "updated_at": "2020-06-23T15:19:53Z", "author_association": "CONTRIBUTOR", "body": "The issue seems to appear sporadically, like when I return to the database page after a while, during which some records have been added to the database.\r\n\r\nI've just visited the database page: the first visit took ~10s, consecutive visits took 0.3s.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647925594", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647925594, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyNTU5NA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:55:21Z", "updated_at": "2020-06-23T06:28:29Z", "author_association": "CONTRIBUTOR", "body": "Hmm, not seeing the problem now. \r\nI've removed the commented-out sections in `database.py` and restarted the process. The database page now loads in <250ms.\r\n\r\nI have a couple of workers that check some pages regularly and scrape new content and save to the DB. Could it be that datasette tries to recount tables every time the database size changes? 
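For example, a cache keyed on the file's mtime/size would be invalidated by every write (hypothetical sketch, not Datasette's actual code):\r\n\r\n```py\r\nimport os\r\n\r\n_cache = {}\r\n\r\ndef cached_counts(path, compute_counts):\r\n    # key the cache on (mtime, size) so any write invalidates it\r\n    key = (os.path.getmtime(path), os.path.getsize(path))\r\n    if _cache.get('key') != key:\r\n        _cache['key'] = key\r\n        _cache['counts'] = compute_counts()\r\n    return _cache['counts']\r\n```\r\n\r\n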
Normally it keeps a count cache, but since the DB gets updated so often (new content every 5 min or so), it's practically recounting every time I go to the database page?\r\n\r\nEDIT: \r\nIt turns out it doesn't keep the cache for mutable databases.\r\n\r\nI'll update the issue with more findings and a better way to reproduce the problem if I encounter it again.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647936117", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647936117, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:25:17Z", "updated_at": "2020-06-23T06:25:17Z", "author_association": "CONTRIBUTOR", "body": "> \r\n> \r\n> ```\r\n> sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50\r\n> ```\r\n> \r\n> Looks like that will take 35 minutes to run (it's not a particularly fast tool).\r\n\r\nTry chunking write operations into batches every 1000 records or so.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647935300", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647935300, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNTMwMA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:23:01Z", "updated_at": "2020-06-23T06:23:01Z", "author_association": "CONTRIBUTOR", "body": "> You said \"200k+, 50+ rows in a couple of tables\" - does that mean 50+ columns? 
I'll try with larger numbers of columns and see what difference that makes.\r\n\r\nAh, that was a typo: I meant 50k.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647923666", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647923666, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyMzY2Ng==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:49:31Z", "updated_at": "2020-06-23T05:49:31Z", "author_association": "CONTRIBUTOR", "body": "I think I should mention that having FTS on all tables means I have 5 visible and 25 hidden (FTS) tables displayed on the database page.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647194131", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647194131, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzE5NDEzMQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-21T23:15:54Z", "updated_at": "2020-06-21T23:26:09Z", "author_association": "CONTRIBUTOR", "body": "I'm not sure if table counts are to blame. There shouldn't be a ~3 orders of magnitude difference.\r\n\r\n```fish\r\nuser@klein /a/w/scrapyard (master)> set sql \"select count(*) from table_1; select count(*) from table_2; select count(*) from table_3;\"\r\nuser@klein /a/w/scrapyard (master)> time sqlite3 scrapyard.db \"$sql\"\r\n187489\r\n46492\r\n2229\r\n\r\n________________________________________________________\r\nExecuted in 25.57 millis fish external\r\n usr time 3.55 millis 0.00 micros 3.55 millis\r\n sys time 22.42 millis 1123.00 micros 21.30 millis\r\n```\r\n\r\nbut not letting datasette count the tables definitely helps.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647135713", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647135713, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzEzNTcxMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-21T14:30:02Z", "updated_at": "2020-06-21T14:30:02Z", "author_association": "CONTRIBUTOR", "body": "Oops, the same method is called from both the index and database pages. 
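A crude way to compare the two page timings (port and DB name assumed):\r\n\r\n```py\r\nimport time\r\nimport urllib.request\r\n\r\nfor path in ('/', '/scrapyard'):\r\n    start = time.time()\r\n    urllib.request.urlopen('http://localhost:8001' + path).read()\r\n    print(path, round(time.time() - start, 2), 's')\r\n```\r\n\r\n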
But removing the select count queries speeds up the page load quite a bit.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/851#issuecomment-645293374", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/851", "id": 645293374, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NTI5MzM3NA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-17T10:32:02Z", "updated_at": "2020-06-17T10:32:28Z", "author_association": "CONTRIBUTOR", "body": "Welp, I'm an idiot.\r\n\r\nTurns out I had a sneaky comma `,` after the `sql` key:\r\n```\r\n... (:name, :url),\r\n```\r\nwhich tells sqlite to expect another `values(...)` list.\r\n\r\nCorrecting the SQL solved the issue. \r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 640330278, "label": "Having trouble getting writable canned queries to work"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/691#issuecomment-643709037", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/691", "id": 643709037, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MzcwOTAzNw==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-06-14T02:35:16Z", "updated_at": "2020-06-14T02:35:16Z", "author_association": "CONTRIBUTOR", "body": "The server should reload in `config_dir` mode. \r\n\r\nRef: #848", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 574021194, "label": "--reload sould reload server if code in --plugins-dir changes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-641908346", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 641908346, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-06-10T10:22:54Z", "updated_at": "2020-06-10T10:22:54Z", "author_association": "CONTRIBUTOR", "body": "There's a working demo here: https://github.com/wragge/datasette-test\r\n\r\nAnd if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/767#issuecomment-632555800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/767", "id": 632555800, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjU1NTgwMA==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2020-05-22T08:00:23Z", "updated_at": "2020-05-22T08:00:23Z", "author_association": "CONTRIBUTOR", "body": "That would be perfect!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 
0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 620969465, "label": "Allow to specify a URL fragment for canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-628405453", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 628405453, "node_id": "MDEyOklzc3VlQ29tbWVudDYyODQwNTQ1Mw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-14T05:59:53Z", "updated_at": "2020-05-14T05:59:53Z", "author_association": "CONTRIBUTOR", "body": "I've added support for the above exif data to [v0.28.17](https://github.com/RhetTbull/osxphotos/releases/tag/v0.28.17) of osxphotos. `PhotoInfo.exif_info` will return an `ExifInfo` [dataclass](https://docs.python.org/3/library/dataclasses.html) object with the following properties:\r\n\r\n```python\r\n flash_fired: bool\r\n iso: int\r\n metering_mode: int\r\n sample_rate: int\r\n track_format: int\r\n white_balance: int\r\n aperture: float\r\n bit_rate: float\r\n duration: float\r\n exposure_bias: float\r\n focal_length: float\r\n fps: float\r\n latitude: float\r\n longitude: float\r\n shutter_speed: float\r\n camera_make: str\r\n camera_model: str\r\n codec: str\r\n lens_model: str\r\n```\r\n\r\nIt's not all the EXIF data available in most files but is the data Photos deems important to save. Of course, you can get all the exif_data\r\n\r\nNote: this only works in Photos 5. As best as I can tell, EXIF data is not stored in the database for earlier versions. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-627007458", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 627007458, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNzAwNzQ1OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T22:51:52Z", "updated_at": "2020-05-11T22:52:26Z", "author_association": "CONTRIBUTOR", "body": "I'm not familiar with `ExifReader`. I wrote my own wrapper around `exiftool` because I wanted a simple way to write EXIF data when exporting photos (e.g. writing out to PersonInImage and keywords to IPTC:Keywords) and the existing python packages like [pyexiftool](https://github.com/smarnach/pyexiftool) didn't do quite what I wanted. If all you're after is the camera and shot info, that's available in `ZEXTENDEDATTRIBUTES` table. I've got an open issue [#11](https://github.com/RhetTbull/osxphotos/issues/11) to add this to osxphotos but it hasn't bubbled to the top of my backlog yet. \r\n\r\nosxphotos will give you the location info: `PhotoInfo.location` returns a tuple of (lat, lon) though this info is in ZEXTENDEDATTRIBUTES too (though it might not be correct as I believe Photos creates this table at import and the user might have changed the location of a photo, e.g. 
if the camera didn't have GPS).\r\n\r\n```sql\r\nCREATE TABLE ZEXTENDEDATTRIBUTES (\r\n  Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, \r\n  Z_OPT INTEGER, ZFLASHFIRED INTEGER, \r\n  ZISO INTEGER, ZMETERINGMODE INTEGER, \r\n  ZSAMPLERATE INTEGER, ZTRACKFORMAT INTEGER, \r\n  ZWHITEBALANCE INTEGER, ZASSET INTEGER, \r\n  ZAPERTURE FLOAT, ZBITRATE FLOAT, ZDURATION FLOAT, \r\n  ZEXPOSUREBIAS FLOAT, ZFOCALLENGTH FLOAT, \r\n  ZFPS FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT, \r\n  ZSHUTTERSPEED FLOAT, ZCAMERAMAKE VARCHAR, \r\n  ZCAMERAMODEL VARCHAR, ZCODEC VARCHAR, \r\n  ZLENSMODEL VARCHAR\r\n);\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 626667235, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T12:20:34Z", "updated_at": "2020-05-11T12:20:34Z", "author_association": "CONTRIBUTOR", "body": "@simonw FYI, osxphotos includes a built-in ExifTool class that uses [exiftool](https://exiftool.org/) to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osxphotos command line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example:\r\n\r\n```python\r\nimport osxphotos\r\nphotos = osxphotos.PhotosDB().photos()\r\nexiftool = osxphotos.exiftool.ExifTool(photos[0].path)\r\nexifdata = exiftool.as_dict()\r\ntags = exifdata[\"IPTC:Keywords\"]\r\n```\r\n\r\nNot as elegant, perhaps, as a Python-only implementation because ExifTool has to make subprocess calls to an external tool, but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC.\r\n\r\nAs for implementation, ExifTool uses a singleton pattern so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files). ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626396379, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T22:01:48Z", "updated_at": "2020-05-10T22:01:48Z", "author_association": "CONTRIBUTOR", "body": "Frustrates me when package authors create a \"drop-in\" replacement with the same import name...this kind of thing has bitten me more than once! 
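A quick way to check which installed distribution a module is actually coming from (assuming one of them is installed):\r\n\r\n```py\r\nimport bpylist\r\nprint(bpylist.__file__)  # the site-packages path shows which package provides it\r\n```\r\n\r\n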
It would've been nicer, I think, for bpylist2 to require \"import bpylist2 as bpylist\".", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395641, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:55:54Z", "updated_at": "2020-05-10T21:55:54Z", "author_association": "CONTRIBUTOR", "body": "Did removing the old bpylist solve the original problem or do you still have a photo that throws a circular reference?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395507", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395507, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTUwNw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:54:45Z", "updated_at": "2020-05-10T21:54:45Z", "author_association": "CONTRIBUTOR", "body": "@simonw does Photos show valid reverse geolocation info? Are you sure you're using [bpylist2](https://github.com/xa4a/bpylist2) and not bpylist? They're both unfortunately imported as \"bpylist\" so if you somehow got the wrong (original bpylist) version installed, it could be the issue. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626390317", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626390317, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5MDMxNw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:11:24Z", "updated_at": "2020-05-10T21:50:58Z", "author_association": "CONTRIBUTOR", "body": "Ugh....Yeah, I think the easiest fix is to catch the exception and return no place as you suggest. This particular bit of code involves un-archiving a serialized NSKeyedArchiver, which uses an object table, and it is certainly possible to create a circular reference that way. Because this is happening in the decode, the circular reference must be in the original data. Does Photos show valid reverse geolocation info for the photo in question? If so, Photos may be doing something beyond a simple decode of the binary plist. 
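Something like this, roughly (helper and argument names hypothetical):\r\n\r\n```py\r\nfrom bpylist import archiver  # bpylist2 also installs under the name \"bpylist\"\r\n\r\ndef unarchive_place(blob):\r\n    try:\r\n        return archiver.unarchive(blob)\r\n    except archiver.CircularReference:\r\n        # bad archive, so report no place info\r\n        return None\r\n```\r\n\r\n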
For now, I'll push a patch to catch the exception.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/17#issuecomment-624284539", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/17", "id": 624284539, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNDI4NDUzOQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-05T20:20:05Z", "updated_at": "2020-05-05T20:20:05Z", "author_association": "CONTRIBUTOR", "body": "FYI, I've got an [issue](https://github.com/RhetTbull/osxphotos/issues/25) to make osxphotos cross-platform but it's low on my priority list. About 90% of the functionality could be done cross-platform but right now the macOS-specific stuff is embedded throughout and would take some work. Though I try to minimize it, there are sprinklings of ObjC & AppleScript throughout osxphotos.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612860531, "label": "Only install osxphotos if running on macOS"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/16#issuecomment-623845014", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16", "id": 623845014, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzg0NTAxNA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-05T03:55:14Z", "updated_at": "2020-05-05T03:56:24Z", "author_association": "CONTRIBUTOR", "body": "I'm traveling w/o access to my Mac so can't help with any code right now. I suspected ZSCENEIDENTIFIER was a foreign key into one of these psi.sqlite tables. But looks like you're on to something connecting groups to assets. As for the UUID, I think there are two ints because each is 64 bits but UUIDs are 128 bits. Thus they need to be combined to get the 128-bit UUID. You might be able to use Apple's [NSUUID](https://developer.apple.com/documentation/foundation/nsuuid?language=objc), for example, by wrapping with pyObjC. Here's one [example](https://github.com/ronaldoussoren/pyobjc/blob/881c82a7ba90f193934b52b44143360c80dce5e5/pyobjc-framework-Cocoa/PyObjCTest/test_nsuuid.py) of using this in PyObjC's test suite. Interesting that it's stored this way instead of a UUIDString as in Photos.sqlite. 
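Rough sketch of the combination (which int is the high half, and the byte order, are guesses):\r\n\r\n```py\r\nimport uuid\r\n\r\ndef uuid_from_int64_pair(hi, lo):\r\n    # mask to treat the signed 64-bit values as unsigned halves\r\n    mask = 0xFFFFFFFFFFFFFFFF\r\n    return uuid.UUID(int=((hi & mask) << 64) | (lo & mask))\r\n\r\nprint(uuid_from_int64_pair(-1, 1))\r\n```\r\n\r\n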
Perhaps it's for faster indexing.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612287234, "label": "Import machine-learning detected labels (dog, llama etc) from Apple Photos"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/730#issuecomment-623463200", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/730", "id": 623463200, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzQ2MzIwMA==", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "created_at": "2020-05-04T13:27:22Z", "updated_at": "2020-05-04T13:27:22Z", "author_association": "CONTRIBUTOR", "body": "Superseded by #753.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 604001627, "label": "Update pytest-asyncio requirement from ~=0.10.0 to >=0.10,<0.12"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/103#issuecomment-622599528", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/103", "id": 622599528, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMjU5OTUyOA==", "user": {"value": 32605365, "label": "b0b5h4rp13"}, "created_at": "2020-05-01T22:49:12Z", "updated_at": "2020-05-02T11:15:44Z", "author_association": "CONTRIBUTOR", "body": "With SQLITE_MAX_VARS = 999, or even 899, this hits the problem with the batch rows causing an overflow (works fine if SQLITE_MAX_VARS = 799).\r\n\r\nP.S. I have tried a few list-of-dicts-to-SQLite modules and this was the easiest to use/understand\r\n\r\n------------- file begins ------------------\r\nimport sqlite_utils as su\r\n\r\n\r\ndata = [\r\n{'tickerId': 913324382, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CONSTELLATION B', 'symbol': 'STZ B', 'disSymbol': 'STZ-B', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '163.13', 'change': '6.46', 'changeRatio': '0.0412', 'marketValue': '31180699895.63', 'volume': '417', 'turnoverRate': '0.0000'},\r\n{'tickerId': 913323791, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Molina Health', 'symbol': 'MOH', 'disSymbol': 'MOH', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '173.25', 'change': '9.28', 'changeRatio': '0.0566', 'pPrice': '173.25', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '10520341695.50', 'volume': '1281557', 'turnoverRate': '0.0202'},\r\n{'tickerId': 913257501, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Seattle Genetics', 'symbol': 'SGEN', 'disSymbol': 'SGEN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '145.64', 'change': '8.41', 'changeRatio': '0.0613', 'pPrice': '146.45', 'pChange': '0.8100', 'pChRatio': '0.0056', 'marketValue': '25117961347.60', 'volume': '2791411', 'turnoverRate': '0.0162'},\r\n{'tickerId': 925381971, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bandwidth', 'symbol': 'BAND', 'disSymbol': 'BAND', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 
1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '89.22', 'change': '7.66', 'changeRatio': '0.0939', 'pPrice': '89.00', 'pChange': '-0.2200', 'pChRatio': '-0.0025', 'marketValue': '2100025474.98', 'volume': '1508629', 'turnoverRate': '0.0641'},\r\n{'tickerId': 913323935, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Magellan Health', 'symbol': 'MGLN', 'disSymbol': 'MGLN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '68.00', 'change': '7.27', 'changeRatio': '0.1197', 'pPrice': '68.00', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '1697894040.00', 'volume': '448919', 'turnoverRate': '0.0180'},\r\n{'tickerId': 913254854, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'On Assignment', 'symbol': 'ASGN', 'disSymbol': 'ASGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '53.04', 'change': '6.59', 'changeRatio': '0.1419', 'pPrice': '53.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '2811120000.00', 'volume': '1339771', 'turnoverRate': '0.0253'},\r\n{'tickerId': 913255732, 'exchangeId': 95, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Arcturus', 'symbol': 'ARCT', 'disSymbol': 'ARCT', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '40.86', 'change': '6.36', 'changeRatio': '0.1843', 'pPrice': '42.60', 'pChange': '1.740', 'pChRatio': '0.0426', 'marketValue': '812021444.46', 'volume': '1577508', 'turnoverRate': '0.0794'},\r\n{'tickerId': 913256616, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'DexCom', 'symbol': 'DXCM', 'disSymbol': 'DXCM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '341.52', 'change': '6.32', 'changeRatio': '0.0189', 'pPrice': '340.00', 'pChange': '-1.5200', 'pChRatio': '-0.0045', 'marketValue': '31522296000.00', 'volume': '1008849', 'turnoverRate': '0.0109'},\r\n{'tickerId': 913255108, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Clorox', 'symbol': 'CLX', 'disSymbol': 'CLX', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '192.71', 'change': '6.27', 'changeRatio': '0.0336', 'pPrice': '192.95', 'pChange': '0.2400', 'pChRatio': '0.0012', 'marketValue': '24185773318.28', 'volume': '4996414', 'turnoverRate': '0.0398'},\r\n{'tickerId': 925314627, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'FRANCO NEVADA', 'symbol': 'FNV', 'disSymbol': 'FNV', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '137.85', 'change': '5.64', 'changeRatio': '0.0427', 'pPrice': '138.50', 'pChange': '0.6500', 'pChRatio': '0.0047', 'marketValue': '26110405326.30', 'volume': '1047688', 'turnoverRate': '0.0055'},\r\n{'tickerId': 913254955, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Aon Plc', 'symbol': 'AON', 'disSymbol': 
'AON', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '178.21', 'change': '5.54', 'changeRatio': '0.0321', 'pPrice': '178.21', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '41181209117.22', 'volume': '2026234', 'turnoverRate': '0.0088'},\r\n{'tickerId': 913324105, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Willis Towers', 'symbol': 'WLTW', 'disSymbol': 'WLTW', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '183.34', 'change': '5.05', 'changeRatio': '0.0283', 'pPrice': '183.34', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23597461124.96', 'volume': '968943', 'turnoverRate': '0.0075'},\r\n{'tickerId': 913254759, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'TELADOC HEALTH', 'symbol': 'TDOC', 'disSymbol': 'TDOC', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '169.43', 'change': '4.84', 'changeRatio': '0.0294', 'pPrice': '168.88', 'pChange': '-0.5500', 'pChRatio': '-0.0032', 'marketValue': '12614616858.38', 'volume': '2628946', 'turnoverRate': '0.0353'},\r\n{'tickerId': 913255222, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Emergent Bio', 'symbol': 'EBS', 'disSymbol': 'EBS', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '78.70', 'change': '4.75', 'changeRatio': '0.0642', 'pPrice': '78.40', 'pChange': '-0.3000', 'pChRatio': '-0.0038', 'marketValue': '4113368277.10', 'volume': '783804', 'turnoverRate': '0.0150'},\r\n{'tickerId': 913323443, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pool', 'symbol': 'POOL', 'disSymbol': 'POOL', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '216.02', 'change': '4.36', 'changeRatio': '0.0206', 'pPrice': '216.02', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8696077573.82', 'volume': '310837', 'turnoverRate': '0.0077'},\r\n{'tickerId': 913257075, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Masimo', 'symbol': 'MASI', 'disSymbol': 'MASI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '218.00', 'change': '4.09', 'changeRatio': '0.0191', 'pPrice': '217.00', 'pChange': '-1.0000', 'pChRatio': '-0.0046', 'marketValue': '11797070000.00', 'volume': '542131', 'turnoverRate': '0.0100'},\r\n{'tickerId': 913253761, 'exchangeId': 10, 'type': 2, 'secType': [62], 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pope Resources', 'symbol': 'POPE', 'disSymbol': 'POPE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NAS', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '101.05', 'change': '3.95', 'changeRatio': '0.0407', 'pPrice': '99.90', 'pChange': '2.800', 'pChRatio': '0.0288', 'marketValue': '447370075.75', 'volume': '33138', 'turnoverRate': '0.0075'},\r\n{'tickerId': 913323560, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 
'currencyId': 247, 'name': 'Seneca Foods', 'symbol': 'SENEB', 'disSymbol': 'SENEB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '40.04', 'change': '3.84', 'changeRatio': '0.1061', 'marketValue': '347950039.71', 'volume': '501'},\r\n{'tickerId': 913324274, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Resmed', 'symbol': 'RMD', 'disSymbol': 'RMD', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '159.07', 'change': '3.75', 'changeRatio': '0.0241', 'pPrice': '159.07', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23004217759.29', 'volume': '1267075', 'turnoverRate': '0.0088'},\r\n{'tickerId': 913323736, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Vertex Pharms', 'symbol': 'VRTX', 'disSymbol': 'VRTX', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '254.90', 'change': '3.70', 'changeRatio': '0.0147', 'pPrice': '255.00', 'pChange': '0.1000', 'pChRatio': '0.0004', 'marketValue': '66062980780.10', 'volume': '1939843', 'turnoverRate': '0.0075'},\r\n{'tickerId': 913323767, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MCCORMICK VTG', 'symbol': 'MKC V', 'disSymbol': 'MKC-V', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '159.99', 'change': '3.42', 'changeRatio': '0.0218', 'marketValue': '21262671000.00', 'volume': '432', 'turnoverRate': '0.0000'},\r\n{'tickerId': 950118595, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'ZOOM VIDEO', 'symbol': 'ZM', 'disSymbol': 'ZM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '138.56', 'change': '3.39', 'changeRatio': '0.0251', 'pPrice': '138.99', 'pChange': '0.4300', 'pChRatio': '0.0031', 'marketValue': '38620532420.16', 'volume': '13786017', 'turnoverRate': '0.0495'},\r\n{'tickerId': 916040738, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'WHEATON PRECIOUS', 'symbol': 'WPM', 'disSymbol': 'WPM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '41.10', 'change': '3.34', 'changeRatio': '0.0885', 'pPrice': '41.09', 'pChange': '-0.0100', 'pChRatio': '-0.0002', 'marketValue': '18404536146.30', 'volume': '5019137', 'turnoverRate': '0.0112'},\r\n{'tickerId': 913257174, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Royal Gold', 'symbol': 'RGLD', 'disSymbol': 'RGLD', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '125.86', 'change': '3.33', 'changeRatio': '0.0272', 'pPrice': '125.86', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8253015011.08', 'volume': '853473', 'turnoverRate': '0.0130'},\r\n{'tickerId': 913254394, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Fortune Brand', 'symbol': 'FBHS', 'disSymbol': 'FBHS', 'disExchangeCode': 'NYSE', 
'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '51.50', 'change': '3.30', 'changeRatio': '0.0685', 'pPrice': '51.50', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '7194870278.50', 'volume': '3004021', 'turnoverRate': '0.0214'},\r\n{'tickerId': 913323312, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYK', 'disSymbol': 'LBTYK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '21.49', 'change': '3.18', 'changeRatio': '0.1737', 'pPrice': '21.48', 'pChange': '-0.0100', 'pChRatio': '-0.0005', 'marketValue': '13594662302.41', 'volume': '19980228', 'turnoverRate': '0.0315'},\r\n{'tickerId': 913323882, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Preformed Line', 'symbol': 'PLPC', 'disSymbol': 'PLPC', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '52.82', 'change': '3.14', 'changeRatio': '0.0632', 'pPrice': '52.10', 'pChange': '-0.7200', 'pChRatio': '-0.0136', 'marketValue': '264979981.20', 'volume': '9305', 'turnoverRate': '0.0018'},\r\n{'tickerId': 913323248, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Discovery', 'symbol': 'DISCB', 'disSymbol': 'DISCB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'A', 'close': '57.95', 'change': '23.63', 'changeRatio': '0.6884', 'pPrice': '54.26', 'pChange': '-3.6900', 'pChRatio': '-0.0637', 'marketValue': '29362894177.95', 'volume': '218305', 'turnoverRate': '0.0004'},\r\n{'tickerId': 913323930, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MercadoLibre', 'symbol': 'MELI', 'disSymbol': 'MELI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '605.52', 'change': '22.01', 'changeRatio': '0.0377', 'pPrice': '603.69', 'pChange': '-1.8300', 'pChRatio': '-0.0030', 'marketValue': '30226598045.28', 'volume': '699008', 'turnoverRate': '0.0140'},\r\n{'tickerId': 913257170, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYA', 'disSymbol': 'LBTYA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '22.28', 'change': '2.86', 'changeRatio': '0.1473', 'pPrice': '22.29', 'pChange': '0.0100', 'pChRatio': '0.0004', 'marketValue': '14094419548.52', 'volume': '10534672', 'turnoverRate': '0.0167'},\r\n{'tickerId': 913303991, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Brodband', 'symbol': 'LBRDK', 'disSymbol': 'LBRDK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '125.44', 'change': '2.76', 'changeRatio': '0.0225', 'pPrice': '125.44', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '22817900904.96', 'volume': '926177', 'turnoverRate': '0.0042'},\r\n{'tickerId': 913257082, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Helen of Troy', 
'symbol': 'HELE', 'disSymbol': 'HELE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '167.04', 'change': '2.76', 'changeRatio': '0.0168', 'pPrice': '167.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '4216707982.08', 'volume': '341465', 'turnoverRate': '0.0135'},\r\n{'tickerId': 913256458, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Forrester', 'symbol': 'FORR', 'disSymbol': 'FORR', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '33.88', 'change': '2.58', 'changeRatio': '0.0824', 'marketValue': '635419400.00', 'volume': '85115', 'turnoverRate': '0.0045'},\r\n{'tickerId': 950158952, 'exchangeId': 95, 'type': 2, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'LYRA THERAPEUTICS, INC.', 'symbol': 'LYRA', 'disSymbol': 'LYRA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'ipo', 'status': 'A', 'close': '18.56', 'change': '2.56', 'changeRatio': '0.1600', 'pPrice': '18.96', 'pChange': '0.4000', 'pChRatio': '0.0216', 'marketValue': '229705575.68', 'volume': '1738472', 'turnoverRate': '0.1405'},\r\n{'tickerId': 913257570, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bio-Techne', 'symbol': 'TECH', 'disSymbol': 'TECH', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '227.54', 'change': '2.54', 'changeRatio': '0.0113', 'pPrice': '227.54', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8726538309.18', 'volume': '497006', 'turnoverRate': '0.0130'},\r\n{'tickerId': 913323246, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bel Fuse', 'symbol': 'BELFB', 'disSymbol': 'BELFB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '9.99', 'change': '2.53', 'changeRatio': '0.3391', 'pPrice': '9.75', 'pChange': '-0.2400', 'pChRatio': '-0.0240', 'marketValue': '122562454.86', 'volume': '177634', 'turnoverRate': '0.0145'},\r\n{'tickerId': 916040647, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Agnico Eagle', 'symbol': 'AEM', 'disSymbol': 'AEM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '61.20', 'change': '2.52', 'changeRatio': '0.0429', 'pPrice': '61.10', 'pChange': '-0.1000', 'pChRatio': '-0.0016', 'marketValue': '14739911553.60', 'volume': '2820765', 'turnoverRate': '0.0117'},\r\n{'tickerId': 913303768, 'exchangeId': 12, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CHASE CORP', 'symbol': 'CCF', 'disSymbol': 'CCF', 'disExchangeCode': 'AMEX', 'exchangeCode': 'ASE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '96.71', 'change': '2.45', 'changeRatio': '0.0260', 'marketValue': '916799598.60', 'volume': '29229', 'turnoverRate': '0.0031'},\r\n{'tickerId': 913324557, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Allergan', 'symbol': 'AGN', 'disSymbol': 'AGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 
'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '189.74', 'change': '2.40', 'changeRatio': '0.0128', 'pPrice': '189.76', 'pChange': '0.0200', 'pChRatio': '0.0001', 'marketValue': '62424842326.10', 'volume': '5787032', 'turnoverRate': '0.0176'},\r\n{'tickerId': 913324566, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'West Pharm Svc', 'symbol': 'WST', 'disSymbol': 'WST', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '191.64', 'change': '2.38', 'changeRatio': '0.0126', 'pPrice': '191.64', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '14078267117.08', 'volume': '352460', 'turnoverRate': '0.0042'}\r\n]\r\n\r\ndb = su.Database(\"overnight hold.db\")\r\ndb['active'].insert_all(data)\r\n\r\n--------------- file ends ----------------------", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 610517472, "label": "sqlite3.OperationalError: too many SQL variables in insert_all when using rows with varying numbers of columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/731#issuecomment-618758326", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/731", "id": 618758326, "node_id": "MDEyOklzc3VlQ29tbWVudDYxODc1ODMyNg==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2020-04-24T01:55:00Z", "updated_at": "2020-04-24T01:55:00Z", "author_association": "CONTRIBUTOR", "body": "Mounting `./static` at `/static` seems the simplest way. Saves you the trouble of deciding what else (`img` for example) gets special treatment.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 605110015, "label": "Option to automatically configure based on directory layout"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/731#issuecomment-618126449", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/731", "id": 618126449, "node_id": "MDEyOklzc3VlQ29tbWVudDYxODEyNjQ0OQ==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2020-04-23T01:38:55Z", "updated_at": "2020-04-23T01:38:55Z", "author_association": "CONTRIBUTOR", "body": "I've almost suggested this same thing a couple of times. I tend to have a Makefile (because I'm doing other `make` stuff anyway to get data prepped), and I end up putting all those CLI options in something like `make run`. 
But it would be way easier to just have all those typical options -- plugins, templates, metadata -- be defaults.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 605110015, "label": "Option to automatically configure based on directory layout"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-612216820", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 612216820, "node_id": "MDEyOklzc3VlQ29tbWVudDYxMjIxNjgyMA==", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2020-04-10T21:03:38Z", "updated_at": "2020-04-10T21:03:38Z", "author_association": "CONTRIBUTOR", "body": "I made a repo at https://github.com/code402/datasette-lambda to demonstrate the idea, and scratch my personal itch for this.\r\n\r\nThe demo relies on some central authority having already published a public, reusable Lambda layer with Datasette & its dependencies. I think that differs from the other publish plugins which seem to mainly publish Dockerfiles that the host will interpret to install deps from a requirements.txt file.\r\n\r\nI chose that approach because `uvloop` appears to be a dependency with native code that needs to be compiled for the target runtime environment. In this case, that's Amazon Linux 2. I'm not 100% clear on whether that's still required, because:\r\n\r\n- maybe `uvloop` is only needed for `uvicorn`, which the demo doesn't actually use since HTTP routing is handled by API Gateway\r\n- it seems like `uvloop` may be an optional, drop-in optimization for `asyncio` in any case (but I may be misreading this; I'm very much a Python noob)\r\n\r\nIf it's the case that `uvloop` is truly optional, then I think the publish plugin could do the packaging on the user's machine, regardless of what flavour of operating system they're on. That'd be a bit slower for the user, but would provide the most long-term flexibility in terms of supporting plugins.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-608716819", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 608716819, "node_id": "MDEyOklzc3VlQ29tbWVudDYwODcxNjgxOQ==", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2020-04-03T22:19:00Z", "updated_at": "2020-04-03T22:19:00Z", "author_association": "CONTRIBUTOR", "body": "Hi Simon,\r\n\r\nI'm thinking of attempting this. Can you clarify some questions I have?\r\n\r\n1) I assume the goal is to have a CORS-friendly HTTPS endpoint that hosts the datasette service + user's db.\r\n\r\n2) If that's the goal, I think Lambda alone is insufficient. Lambda provides the compute fabric, but not the HTTP routing. You'd also need to add Application Load Balancer or API Gateway to provide an HTTP endpoint that routes to the lambda function.\r\n\r\nDo you have a preference between ALB and API GW? ALB has better economics at scale, but has a minimum monthly cost. API GW has worse per-request economics, but scales to zero when no requests are happening.\r\n\r\n3) Does Datasette have any native components, or is it all pure Python? 
If it has native bits, they'll likely need to be recompiled to work on Amazon Linux 2.\r\n\r\n4) There are a few disparate services that need to be wired together to expose a Python service securely to the web. If I was doing this outside of the datasette publish system, I'd use an AWS CloudFormation template. Even within datasette, I think it still makes sense to use a CloudFormation template and just have the publish plugin invoke it (via the standard `aws` cli) with user-specified parameters. Does that sound reasonable to you?\r\n\r\nThanks for your help!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-604328163", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 604328163, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDMyODE2Mw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-03-26T09:41:30Z", "updated_at": "2020-03-26T09:41:30Z", "author_association": "CONTRIBUTOR", "body": "Fixed by @simonw; example here: https://github.com/simonw/jupyterserverproxy-datasette-demo", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/712#issuecomment-604249402", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/712", "id": 604249402, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDI0OTQwMg==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-26T06:11:44Z", "updated_at": "2020-03-26T06:11:44Z", "author_association": "CONTRIBUTOR", "body": "Following on from @betatim's suggestion on Twitter, I've changed the proxy url to include 'absolute'.\r\n\r\n``` python\r\nproxy_url = f'{base_url}proxy/absolute/8001/'\r\n```\r\nThis works both on Binder and locally, without using the `path_from_header` option. I've updated the demo repository. Sorry @simonw if I've led you down the wrong path!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 588108428, "label": "base_url doesn't entirely work for running Datasette inside Binder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/712#issuecomment-604225034", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/712", "id": 604225034, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDIyNTAzNA==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-26T04:40:08Z", "updated_at": "2020-03-26T04:40:08Z", "author_association": "CONTRIBUTOR", "body": "Great! Yes, can confirm that this works on Binder. 
However, when I try to run the same code locally, I get an Internal Server Error when I try to access Datasette.\r\n\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 385, in run_asgi\r\n result = await app(self.scope, self.receive, self.send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py\", line 45, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette_debug_asgi.py\", line 24, in wrapped_app\r\n await app(scope, recieve, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 174, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/tracer.py\", line 75, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/app.py\", line 746, in __call__\r\n raw_path = dict(scope[\"headers\"])[path_from_header.encode(\"utf8\")].split(b\"?\")[0]\r\nKeyError: b'x-original-uri'\r\nINFO: 127.0.0.1:49320 - \"GET / HTTP/1.1\" 500 Internal Server Error\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 588108428, "label": "base_url doesn't entirely work for running Datasette inside Binder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-604166918", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 604166918, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDE2NjkxOA==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-26T00:56:30Z", "updated_at": "2020-03-26T00:56:30Z", "author_association": "CONTRIBUTOR", "body": "Thanks! I'm trying to launch Datasette from *within* a notebook using the jupyter-server-proxy and the new `base_url` parameter. While the assets load ok, and the breadcrumb navigation works, the facet links don't seem to use the `base_url`. Or have I missed something?\r\n\r\nMy test repository is here: https://github.com/wragge/datasette-test", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602907207", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602907207, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-23T23:12:18Z", "updated_at": "2020-03-23T23:12:18Z", "author_association": "CONTRIBUTOR", "body": "This would also be useful for running Datasette in Jupyter notebooks on [Binder](https://mybinder.org/). While you can use [Jupyter-server-proxy](https://github.com/jupyterhub/jupyter-server-proxy) to access Datasette on Binder, the links are broken.\r\n\r\nWhy run Datasette on Binder? 
I'm developing a [range of Jupyter notebooks](https://glam-workbench.github.io/) that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic.\r\n\r\nFor example, there are a [series of notebooks](https://glam-workbench.github.io/trove-harvester/) that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data.\r\n\r\nBut it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-593026413", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 593026413, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MzAyNjQxMw==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-01T01:24:45Z", "updated_at": "2020-03-01T01:24:45Z", "author_association": "CONTRIBUTOR", "body": "Did you manage to find an answer to this? I've got a notebook to help people generate datasets on the fly from an API, so it would be cool if they could flick it over to Datasette for initial exploration.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/666#issuecomment-590022164", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/666", "id": 590022164, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDAyMjE2NA==", "user": {"value": 13896256, "label": "kevindkeogh"}, "created_at": "2020-02-23T03:26:00Z", "updated_at": "2020-02-23T03:26:00Z", "author_association": "CONTRIBUTOR", "body": "It was very helpful for me, using it for a 15M row table. Added a test, happy to amend though!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 562085508, "label": "Use inspect-file, if possible, for total row count"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-586599424", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 586599424, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-02-15T15:12:19Z", "updated_at": "2020-02-15T15:12:33Z", "author_association": "CONTRIBUTOR", "body": "So could the polling support also allow you to call sqlite_utils to update a database with csv files? (Though I'm guessing you would only want to handle changed files? 
Do your scrapers check and cache csv datestamps/hashes?)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/653#issuecomment-582106085", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/653", "id": 582106085, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MjEwNjA4NQ==", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2020-02-04T20:43:43Z", "updated_at": "2020-02-04T20:43:43Z", "author_association": "CONTRIBUTOR", "body": "but this also doesn't have to land at all if it doesn't match your use case. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 541331755, "label": "allow leading comments in SQL input field"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/653#issuecomment-582105810", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/653", "id": 582105810, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MjEwNTgxMA==", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2020-02-04T20:43:01Z", "updated_at": "2020-02-04T20:43:01Z", "author_association": "CONTRIBUTOR", "body": "I *think* the existing code will be OK even if I strip the lines in the middle of a newline-delimited string.\r\n\r\nIt's only used for the validation, SQLite handles the `--` just fine and the whole SQL textarea still gets sent once it passes validation.\r\n\r\nI can add your test case to my branch later this evening though.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 541331755, "label": "allow leading comments in SQL input field"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/656#issuecomment-576293773", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/656", "id": 576293773, "node_id": "MDEyOklzc3VlQ29tbWVudDU3NjI5Mzc3Mw==", "user": {"value": 6371750, "label": "JBPressac"}, "created_at": "2020-01-20T14:17:11Z", "updated_at": "2020-01-20T14:17:11Z", "author_association": "CONTRIBUTOR", "body": "It seems that headers and definitions simply have to be entered as an HTML table in the description field of metadata.json.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546961357, "label": "Display of the column definitions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/74#issuecomment-573389669", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/74", "id": 573389669, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MzM4OTY2OQ==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-01-12T07:21:17Z", "updated_at": "2020-01-12T07:21:17Z", "author_association": "CONTRIBUTOR", "body": "I guess there is some extra flag for `CliRunner.invoke` to check the exit code and raise the exception, or an extra assert should be added.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546073980, "label": "Test 
failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/74#issuecomment-573388052", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/74", "id": 573388052, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MzM4ODA1Mg==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-01-12T06:51:30Z", "updated_at": "2020-01-12T06:51:30Z", "author_association": "CONTRIBUTOR", "body": "Thanks. That showed me that there was a click cli runner error, and setting `export LANG=en_US.UTF-8` fixed it.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546073980, "label": "Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567133734", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567133734, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEzMzczNA==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2019-12-18T17:33:23Z", "updated_at": "2019-12-18T17:33:23Z", "author_association": "CONTRIBUTOR", "body": "FWIW I did a dumb merge of the branch here: https://github.com/jsfenfen/datasette and it seemed to work in that I could run stuff at a subdirectory, but ended up abandoning it in favor of just posting a subdomain because getting the nginx configs right was making me crazy. I still would prefer posting at a subdirectory but the subdomain seems simpler at the moment. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/644#issuecomment-565755208", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/644", "id": 565755208, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NTc1NTIwOA==", "user": {"value": 6025893, "label": "chris48s"}, "created_at": "2019-12-14T21:33:31Z", "updated_at": "2019-12-14T21:33:31Z", "author_association": "CONTRIBUTOR", "body": "Hi @simonw\r\n\r\nHave you had a chance to look at this at all?\r\n\r\nI'm going to have a chunk of time free next week so if there is additional work needed on this, that would be a particularly convenient time for me to revisit this.\r\n\r\nCheers", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 530513784, "label": "Validate metadata json on startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-559632608", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 559632608, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTYzMjYwOA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-11-29T01:43:38Z", "updated_at": "2019-11-29T01:43:38Z", "author_association": "CONTRIBUTOR", "body": "In passing, it looks like a start was made on a datasette Jupyter server extension in https://github.com/lucasdurand/jupyter-datasette although the build fails in MyBinder.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, 
\"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/642#issuecomment-559207224", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/642", "id": 559207224, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTIwNzIyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-11-27T18:40:57Z", "updated_at": "2019-11-27T18:41:07Z", "author_association": "CONTRIBUTOR", "body": "Would cookie cutter approaches also work for creating various flavours of customised templates?\r\n\r\nI need to try to create a couple of sites for myself to get a feel for what sorts of thing are easily doable, and what cribbable cookie cutter items might be. I'm guessing https://simonwillison.net/2019/Nov/25/niche-museums/ is a good place to start from?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 529429214, "label": "Provide a cookiecutter template for creating new plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-558687342", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 558687342, "node_id": "MDEyOklzc3VlQ29tbWVudDU1ODY4NzM0Mg==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2019-11-26T15:40:00Z", "updated_at": "2019-11-26T15:40:00Z", "author_association": "CONTRIBUTOR", "body": "A bit of background: the reason `heroku git:clone` brings down an empty directory is because `datasette publish heroku` uses the [builds API](https://devcenter.heroku.com/articles/build-and-release-using-the-api), rather than a `git push`, to release the app. I originally did this because it seemed like a lower bar than having a working `git`, but the downside is, as you found out, that tweaking the created app is hard. \r\n\r\nSo there's one option -- change `datasette publish heroku` to use `git push` instead of `heroku builds:create`.\r\n\r\n@pkoppstein - what you suggested seems like it ought to work (you don't need maintenance mode, though). I'm not sure why it doesn't.\r\n\r\nYou could also look into using the [slugs API](https://devcenter.heroku.com/articles/platform-api-deploying-slugs) to download the slug, change `metadata.json`, re-pack and re-upload the slug.\r\n\r\nUltimately though I think I think @simonw's idea of reading `metadata.json` from an external source might be better (#357). Reading from an alternate URL would be fine, or you could also just stuff the whole `metadata.json` into a Heroku config var, and write a plugin to read it from there. 
\r\n\r\nHope this helps a bit!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-556749086", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 556749086, "node_id": "MDEyOklzc3VlQ29tbWVudDU1Njc0OTA4Ng==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2019-11-21T01:15:34Z", "updated_at": "2019-11-21T01:21:45Z", "author_association": "CONTRIBUTOR", "body": "Hey @simonw, is the url_prefix config option available in another branch? It looks like you've written some tests for it above. In 0.32 I get \"url_prefix is not a valid option\". I think this would be *really helpful*!\r\n\r\nThis would be really handy for proxying datasette in another domain's *subdirectory*. I believe this will allow folks to run upstream authentication, but the links break if the url_prefix doesn't match. \r\n\r\nI'd prefer not to host a proxied version of datasette on a subdomain (e.g. datasette.myurl.com b/c then I gotta worry about sharing authorization cookies with the subdomain, which I'd just as soon not do, but...)\r\n\r\nEdit: I see the wip-url-prefix branch, I may try with that https://github.com/simonw/datasette/commit/8da2db4b71096b19e7a9ef1929369b8483d448bf", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/29#issuecomment-552134876", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/29", "id": 552134876, "node_id": "MDEyOklzc3VlQ29tbWVudDU1MjEzNDg3Ng==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2019-11-09T20:33:38Z", "updated_at": "2019-11-09T20:33:38Z", "author_association": "CONTRIBUTOR", "body": "\u2764\ufe0f thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 518725064, "label": "`import` command fails on empty files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/602#issuecomment-549246007", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/602", "id": 549246007, "node_id": "MDEyOklzc3VlQ29tbWVudDU0OTI0NjAwNw==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-11-04T07:29:33Z", "updated_at": "2019-11-04T07:29:33Z", "author_association": "CONTRIBUTOR", "body": "Not sure \u2013 I'm always a bit weirded out when elements that I clicked disappear on me.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 509535510, "label": "Offer to format readonly SQL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/601#issuecomment-544214418", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/601", "id": 544214418, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NDIxNDQxOA==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-20T02:29:49Z", "updated_at": "2019-10-20T02:29:49Z", 
"author_association": "CONTRIBUTOR", "body": "Submitted in #602!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 509340359, "label": "Don't auto-format SQL on page load"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/601#issuecomment-544008944", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/601", "id": 544008944, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NDAwODk0NA==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-18T23:40:48Z", "updated_at": "2019-10-18T23:40:48Z", "author_association": "CONTRIBUTOR", "body": "The only negative impact that comes to mind is that now you have no way to get the read-only query to be formatted nicely, I think, so maybe a second PR adding the formatting functionality even to the read-only page would be good?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 509340359, "label": "Don't auto-format SQL on page load"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/601#issuecomment-544008463", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/601", "id": 544008463, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NDAwODQ2Mw==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-18T23:39:21Z", "updated_at": "2019-10-18T23:39:21Z", "author_association": "CONTRIBUTOR", "body": "That looks right, and I completely agree with the intent.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 509340359, "label": "Don't auto-format SQL on page load"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/590#issuecomment-541587823", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/590", "id": 541587823, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MTU4NzgyMw==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-14T09:58:23Z", "updated_at": "2019-10-14T09:58:23Z", "author_association": "CONTRIBUTOR", "body": "Added tests.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 505818256, "label": "Handle spaces in DB names"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/590#issuecomment-541562581", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/590", "id": 541562581, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MTU2MjU4MQ==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-14T08:57:46Z", "updated_at": "2019-10-14T08:57:46Z", "author_association": "CONTRIBUTOR", "body": "Ah, thank you \u2013 I saw the need for unit tests but wasn't sure what the best way to add one would be.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 505818256, "label": "Handle spaces in DB names"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/512#issuecomment-541119038", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/512", "id": 541119038, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MTExOTAzOA==", 
"user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-11T15:49:13Z", "updated_at": "2019-10-11T15:49:13Z", "author_association": "CONTRIBUTOR", "body": "How open are you to changing the config variable names (with appropriate deprecation, of course)? `\"about_url_text\", \"license_url_text\"` etc might be better suited to convey that these are just meant as basically URL titles.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 457147936, "label": "\"about\" parameter in metadata does not appear when alone"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/507#issuecomment-541118904", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/507", "id": 541118904, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MTExODkwNA==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-11T15:48:49Z", "updated_at": "2019-10-11T15:48:49Z", "author_association": "CONTRIBUTOR", "body": "Headless Chrome and Firefox via Selenium are a solid choice in my experience. You may be interested in how pretix and pretalx solve this problem: They use pytest to create those screenshots on release to make sure they are up to date. See [this writeup](https://behind.pretix.eu/2018/11/15/automated-screenshots/) and [this repo](https://github.com/pretix/pretix-screenshots).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455852801, "label": "Every datasette plugin on the ecosystem page should have a screenshot"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/585#issuecomment-541052329", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/585", "id": 541052329, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MTA1MjMyOQ==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-11T12:53:51Z", "updated_at": "2019-10-11T12:53:51Z", "author_association": "CONTRIBUTOR", "body": "I think this would be good, yeah \u2013 currently, databases are explicitly sorted by name in the IndexView, we could just remove that part (and use an `OrderedDict` for consistency, I suppose)?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 503217375, "label": "Databases on index page should display in order they were passed to \"datasette serve\"?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/61#issuecomment-533818697", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/61", "id": 533818697, "node_id": "MDEyOklzc3VlQ29tbWVudDUzMzgxODY5Nw==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2019-09-21T18:09:01Z", "updated_at": "2019-09-21T18:09:28Z", "author_association": "CONTRIBUTOR", "body": "@witeshadow The library version doesn't have helpers around CSV (at least not from what I can see in the code). \r\n\r\nBut here's a snippet that makes it easy to insert from CSV using the library. 
\r\n\r\n```\r\nimport csv\r\nfrom sqlite_utils import Database\r\n\r\n# CSV Reader\r\n\r\ncsv_file = open(\"filename.csv\") # open the csv file.\r\nreader = csv.reader(csv_file) # Create a CSV reader\r\nheaders = next(reader) # First line is the header\r\ndocs = (dict(zip(headers, row)) for row in reader)\r\n\r\n# Now you can use the `sqlite_utils` library. \r\n\r\ndb = Database(\"my_database.db\")\r\ndb[\"table_name\"].insert_all(docs)\r\n```\r\n\r\nThis snippet is adapted from reading the CLI source code on how it implements the csv option.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 491219910, "label": "importing CSV to SQLite as library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/57#issuecomment-527211047", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/57", "id": 527211047, "node_id": "MDEyOklzc3VlQ29tbWVudDUyNzIxMTA0Nw==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2019-09-02T17:30:43Z", "updated_at": "2019-09-02T17:30:43Z", "author_association": "CONTRIBUTOR", "body": "I have merged the other PR (#56) into this one. \r\n\r\nI have incorporated your suggestions. Cheers!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 487987958, "label": "Add triggers while enabling FTS"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/56#issuecomment-527209840", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/56", "id": 527209840, "node_id": "MDEyOklzc3VlQ29tbWVudDUyNzIwOTg0MA==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2019-09-02T17:23:21Z", "updated_at": "2019-09-02T17:23:21Z", "author_association": "CONTRIBUTOR", "body": "I have updated the other PR with the changes from this one and added tests. I have also changed the escaping from double quotes to brackets. 
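\r\n\r\nFor reference, a quick sketch of the two escaping styles (my illustration, not code from the PR; SQLite accepts both for identifiers):\r\n\r\n```python\r\nimport sqlite3\r\n\r\ndb = sqlite3.connect(':memory:')\r\ndb.execute('CREATE TABLE [my table] (id INTEGER)')\r\n# Bracket-quoting and double-quoting an identifier are equivalent here\r\ndb.execute('SELECT * FROM [my table]')\r\ndb.execute('SELECT * FROM \"my table\"')\r\n```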
\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 487847945, "label": "Escape the table name in populate_fts and search."}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/511#issuecomment-510730200", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/511", "id": 510730200, "node_id": "MDEyOklzc3VlQ29tbWVudDUxMDczMDIwMA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2019-07-12T03:23:22Z", "updated_at": "2019-07-12T03:23:22Z", "author_association": "CONTRIBUTOR", "body": "@simonw yes it works fine on Windows, but test suite doesn't run properly, for that I had to use WSL", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 456578474, "label": "Get Datasette tests passing on Windows in GitHub Actions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/554#issuecomment-509629331", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/554", "id": 509629331, "node_id": "MDEyOklzc3VlQ29tbWVudDUwOTYyOTMzMQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2019-07-09T12:51:35Z", "updated_at": "2019-07-09T12:51:35Z", "author_association": "CONTRIBUTOR", "body": "I wanted to add a test for it too, but I've realized it's impossible to test a server process as we cannot get its exit code.\r\n\r\n```python\r\n# tests/test_cli.py\r\ndef test_static_mounts_on_windows():\r\n if sys.platform != \"win32\":\r\n return\r\n runner = CliRunner()\r\n result = runner.invoke(\r\n cli, [\"serve\", \"--static\", r\"s:C:\\\\\"]\r\n )\r\n assert result.exit_code == 0\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 465728430, "label": "Fix static mounts using relative paths and prevent traversal exploits"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/554#issuecomment-509618339", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/554", "id": 509618339, "node_id": "MDEyOklzc3VlQ29tbWVudDUwOTYxODMzOQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2019-07-09T12:16:32Z", "updated_at": "2019-07-09T12:16:32Z", "author_association": "CONTRIBUTOR", "body": "I've also added another fix for using static mounts with absolute paths on Windows. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 465728430, "label": "Fix static mounts using relative paths and prevent traversal exploits"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/507#issuecomment-509013413", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/507", "id": 509013413, "node_id": "MDEyOklzc3VlQ29tbWVudDUwOTAxMzQxMw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-07-07T16:31:57Z", "updated_at": "2019-07-07T16:31:57Z", "author_association": "CONTRIBUTOR", "body": "Chrome and Firefox [both support headless screengrabs]( https://www.bleepingcomputer.com/news/software/chrome-and-firefox-can-take-screenshots-of-sites-from-the-command-line/) from command line, but I don't know how parameterised they can be?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455852801, "label": "Every datasette plugin on the ecosystem page should have a screenshot"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/523#issuecomment-504809397", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/523", "id": 504809397, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDgwOTM5Nw==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-06-24T01:38:14Z", "updated_at": "2019-06-24T01:38:14Z", "author_association": "CONTRIBUTOR", "body": "Ah, apologies \u2013 I had found and read those issues, but I was under the impression that they refered only to the filtered row count, not the unfiltered total row count.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459627549, "label": "Show total/unfiltered row count when filtering"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504690927", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504690927, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY5MDkyNw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-06-22T19:06:07Z", "updated_at": "2019-06-22T19:06:07Z", "author_association": "CONTRIBUTOR", "body": "I'd rather not turn this into a systemd support thread, but you're trying to execute the package directory there. Your datasette executable is probably at `/home/chris/Env/datasette/bin/datasette`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504684831", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504684831, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY4NDgzMQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-06-22T17:38:23Z", "updated_at": "2019-06-22T17:38:23Z", "author_association": "CONTRIBUTOR", "body": "> > WorkingDirectory=/path/to/data\r\n> \r\n> @russss, Which directory does this represent?\r\n\r\nIt's the working directory (cwd) of the spawned process. 
In this case if you set it to the directory your data is in, you can use relative paths to the db (and metadata/templates/etc) in the `ExecStart` command.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504663766", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504663766, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY2Mzc2Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-06-22T12:57:59Z", "updated_at": "2019-06-22T12:57:59Z", "author_association": "CONTRIBUTOR", "body": "> This example is useful to - I like how it has a Makefile that knows how to set up systemd: https://github.com/pikesley/Queube\r\n\r\nI wasn't even aware it was possible to add a systemd service at an arbitrary path, but it seems a little messy to me.\r\n\r\nMaybe worth noting that systemd does support [per-user services](https://wiki.archlinux.org/index.php/Systemd/User) which don't require root access. Cool but probably overkill for most people (especially when you're going to need root to listen on port 80 anyway, directly or via a reverse proxy).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504662904", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504662904, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY2MjkwNA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-06-22T12:45:21Z", "updated_at": "2019-06-22T12:45:39Z", "author_association": "CONTRIBUTOR", "body": "On most modern Linux distros, systemd is the easiest answer.\r\n\r\nExample systemd unit file (save to `/etc/systemd/system/datasette.service`):\r\n```\r\n[Unit]\r\nDescription=Datasette\r\nAfter=network.target\r\n\r\n[Service]\r\nType=simple\r\nUser=\r\nWorkingDirectory=/path/to/data\r\nExecStart=/path/to/datasette serve -h 0.0.0.0 ./my.db\r\nRestart=on-failure\r\n\r\n[Install]\r\nWantedBy=multi-user.target\r\n```\r\n\r\nActivate it with:\r\n```bash\r\n$ sudo systemctl daemon-reload\r\n$ sudo systemctl enable datasette\r\n$ sudo systemctl start datasette\r\n```\r\n\r\nLogs are best viewed using `journalctl -u datasette -f`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-499923145", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 499923145, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5OTkyMzE0NQ==", "user": {"value": 13896256, "label": "kevindkeogh"}, "created_at": "2019-06-07T15:10:57Z", "updated_at": "2019-06-07T15:11:07Z", "author_association": "CONTRIBUTOR", "body": "Putting this here in case anyone else encounters the same issue 
with nginx, I was able to resolve it by passing the header in the nginx proxy config (i.e., `proxy_set_header Host $host`).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-499320973", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 499320973, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5OTMyMDk3Mw==", "user": {"value": 13896256, "label": "kevindkeogh"}, "created_at": "2019-06-06T02:07:59Z", "updated_at": "2019-06-06T02:07:59Z", "author_association": "CONTRIBUTOR", "body": "Hey was this ever merged? Trying to run this behind nginx, and encountering this issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/450#issuecomment-489342728", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/450", "id": 489342728, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTM0MjcyOA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-04T16:37:35Z", "updated_at": "2019-05-04T16:37:35Z", "author_association": "CONTRIBUTOR", "body": "For a bit more context: this fixes a crash with `unsupported operand type(s) for +: 'int' and 'NoneType'` on the index page for me.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 440304714, "label": "Coalesce hidden table count to 0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/446#issuecomment-489222223", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/446", "id": 489222223, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTIyMjIyMw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-03T20:01:19Z", "updated_at": "2019-05-03T20:01:29Z", "author_association": "CONTRIBUTOR", "body": "Also I have a slight preference against (ab)using `__slots__` to enforce fields, although I have done it myself in the past. It would be possible to do this with `__setattr__` instead, although that's an implementation detail and I'm not too fussed about it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 440134714, "label": "Define mechanism for plugins to return structured data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/446#issuecomment-489221481", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/446", "id": 489221481, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTIyMTQ4MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-03T19:58:31Z", "updated_at": "2019-05-03T19:58:31Z", "author_association": "CONTRIBUTOR", "body": "In this particular case I don't think there's an issue making all those required. 
However, I suspect we might have to allow optional values at some point - my preferred solution to russss/datasette-geo#2 would need one.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 440134714, "label": "Define mechanism for plugins to return structured data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/434#issuecomment-489163939", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/434", "id": 489163939, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTE2MzkzOQ==", "user": {"value": 10352819, "label": "rprimet"}, "created_at": "2019-05-03T16:49:45Z", "updated_at": "2019-05-03T16:50:03Z", "author_association": "CONTRIBUTOR", "body": "> The second time I ran the command I got an error:\r\n\r\n> \r\n> ERROR: (gcloud.beta.run.deploy) Deployment endpoint was not found. Perhaps the\r\n> provided region was invalid. Set the `run/region` property to a valid region and\r\n> retry. Ex: `gcloud config set run/region us-central1`\r\n> \r\n\r\nYes, I was able to reproduce this; I used to get prompted for a run region interactively by the `gcloud` tool before, but maybe this is changing? (the [documentation](https://cloud.google.com/run/docs/deploying) now assumes `run/region` is set).\r\n\r\nNot sure which course of action is best: making `datasette` ensure that `run/region` is set beforehand or wait a bit until the gcloud CLI stabilizes?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 434321685, "label": "\"datasette publish cloudrun\" command to publish to Google Cloud Run"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/434#issuecomment-489105665", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/434", "id": 489105665, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTEwNTY2NQ==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2019-05-03T14:01:30Z", "updated_at": "2019-05-03T14:01:30Z", "author_association": "CONTRIBUTOR", "body": "This is exactly what I needed. Thank you.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 434321685, "label": "\"datasette publish cloudrun\" command to publish to Google Cloud Run"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/419#issuecomment-489060765", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/419", "id": 489060765, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTA2MDc2NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-03T11:07:42Z", "updated_at": "2019-05-03T11:07:42Z", "author_association": "CONTRIBUTOR", "body": "Are you planning on removing inspect entirely? \r\n\r\nI didn't spot this work before I started on datasette-geo, but ironically I think it has a use case which really needs the inspect functionality (or some replacement). \r\n\r\nDatasette-geo uses it to store the bounding box of all the geographic features in the table. This is needed when rendering the map because it avoids having to send loads of tile requests for areas which are empty. 
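\r\n\r\nRoughly the sort of query I mean, sketched with SpatiaLite's MBR functions (the table and geometry column names here are placeholders):\r\n\r\n```python\r\nimport sqlite3\r\n\r\nconn = sqlite3.connect('data.db')\r\nconn.enable_load_extension(True)\r\nconn.load_extension('mod_spatialite')  # assumes SpatiaLite is available\r\n# One full-table scan to get the bounding box of every feature\r\nbbox = conn.execute(\r\n    'SELECT MIN(MbrMinX(geometry)), MIN(MbrMinY(geometry)), '\r\n    'MAX(MbrMaxX(geometry)), MAX(MbrMaxY(geometry)) FROM features'\r\n).fetchone()\r\n```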
\r\n\r\nEven with relatively small datasets, calculating the bounding box seems to take around 5 seconds, so I don't think it's really feasible to do this on page load.\r\n\r\nOne possible fix would be to do this on startup, and then in a thread which watches the database for changes.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421551434, "label": "Default to opening files in mutable mode, special option for immutable files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/432#issuecomment-488595724", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/432", "id": 488595724, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4ODU5NTcyNA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-02T08:50:53Z", "updated_at": "2019-05-02T08:50:53Z", "author_association": "CONTRIBUTOR", "body": "> Can I pull those needs out of the Facet class somehow?\r\n\r\nI was thinking that it might be handy for datasette to have a request object which wraps the Sanic Request. This could include the datasette-specific querystring decoding and the `special_args` parsing from TableView.data.\r\n\r\nThis would mean that we could expose the request object to plugin hooks without coupling them to Sanic.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432893491, "label": "Refactor facets to a class and new plugin, refs #427"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-488247617", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 488247617, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4ODI0NzYxNw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-01T09:57:50Z", "updated_at": "2019-05-01T09:57:50Z", "author_association": "CONTRIBUTOR", "body": "Just for the record, this PR is now finished and ready to merge from my perspective.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/439#issuecomment-487859345", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/439", "id": 487859345, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4Nzg1OTM0NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-30T08:21:19Z", "updated_at": "2019-04-30T08:21:19Z", "author_association": "CONTRIBUTOR", "body": "I think the best approach to this is to pass through the `view_name` parameter I added in #441. 
It's then simple enough for me to add `.geojson` to the URL in JS - I don't need the pkey.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438240541, "label": "[WIP] Add primary key to the extra_body_script hook arguments"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487748271", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487748271, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4Nzc0ODI3MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T21:20:17Z", "updated_at": "2019-04-29T21:20:17Z", "author_association": "CONTRIBUTOR", "body": "Also I just pushed a change to add registered output renderers to the templates:\r\n![image](https://user-images.githubusercontent.com/45057/56927799-f18e0580-6acc-11e9-8ea9-a0ee961323ec.png)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487735247", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487735247, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzczNTI0Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T20:39:43Z", "updated_at": "2019-04-29T20:39:43Z", "author_association": "CONTRIBUTOR", "body": "I updated the hook to pass the datasette object through now.\r\n\r\nYou can see the working [GeoJSON render function here](https://github.com/russss/datasette-geo/blob/master/datasette_plugin_geo/geojson.py) - the [hook function is here](https://github.com/russss/datasette-geo/blob/master/datasette_plugin_geo/__init__.py#L65-L70).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487724539", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487724539, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzcyNDUzOQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T20:08:32Z", "updated_at": "2019-04-29T20:08:32Z", "author_association": "CONTRIBUTOR", "body": "I also just realised that I should be passing the datasette object into the hook function...as I just found I need it. 
So hold off merging until I've fixed that.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487723476", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487723476, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzcyMzQ3Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T20:05:23Z", "updated_at": "2019-04-29T20:05:23Z", "author_association": "CONTRIBUTOR", "body": "This is the minimal example (I also included it in the docs):\r\n\r\n```python\r\nfrom datasette import hookimpl\r\n\r\ndef render_test(args, data, view_name):\r\n    return {\r\n        'body': 'Hello World',\r\n        'content_type': 'text/plain'\r\n    }\r\n\r\n@hookimpl\r\ndef register_output_renderer():\r\n    return {\r\n        'extension': 'test',\r\n        'callback': render_test\r\n    }\r\n```\r\n\r\nI'm working on the GeoJSON one now and it should be ready soon. (I forgot I was going to run into the same problem as before - that Spatialite's stupid binary format isn't WKB and I have no way of altering the query to change that - but I've just managed to write some code to rearrange the bytes from Spatialite blob-geometry into WKB...)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/424#issuecomment-487692377", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/424", "id": 487692377, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzY5MjM3Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T18:30:46Z", "updated_at": "2019-04-29T18:30:46Z", "author_association": "CONTRIBUTOR", "body": "Actually no, I ended up not using the inspected column types in my plugin, and the binary column issue can be solved a lot more simply, so I'll close this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 427429265, "label": "Column types in inspected metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/424#issuecomment-487689477", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/424", "id": 487689477, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzY4OTQ3Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T18:22:40Z", "updated_at": "2019-04-29T18:22:40Z", "author_association": "CONTRIBUTOR", "body": "This is pretty conflicty because I forgot how to use git fetch. 
If you're interested in merging this I'll rewrite it against an actual modern checkout...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 427429265, "label": "Column types in inspected metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487686655", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487686655, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzY4NjY1NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T18:14:25Z", "updated_at": "2019-04-29T18:14:25Z", "author_association": "CONTRIBUTOR", "body": "Subsidiary note which I forgot in the commit message:\r\n\r\nI've decided to give each view a short string name to aid in differentiating which view a hook is being called from. Since hooks are functions and not subclasses, and can get called from different places in the URL hierarchy, it's sometimes difficult to distinguish what data you're actually operating on. I think this will come in handy for other hooks as well.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/439#issuecomment-487542486", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/439", "id": 487542486, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzU0MjQ4Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T11:20:30Z", "updated_at": "2019-04-29T11:20:30Z", "author_association": "CONTRIBUTOR", "body": "Actually I think this is not the whole story because of the rowid issue. I'm going to think about this one a bit more.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438240541, "label": "[WIP] Add primary key to the extra_body_script hook arguments"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/437#issuecomment-487537452", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/437", "id": 487537452, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzUzNzQ1Mg==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T10:58:49Z", "updated_at": "2019-04-29T10:58:49Z", "author_association": "CONTRIBUTOR", "body": "I've just spotted that this implements #215.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438048318, "label": "Add inspect and prepare_sanic hooks"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/429#issuecomment-483202658", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/429", "id": 483202658, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MzIwMjY1OA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-15T10:48:01Z", "updated_at": "2019-04-15T10:48:01Z", "author_association": "CONTRIBUTOR", "body": "Minor UI observation:\r\n\r\n![image](https://user-images.githubusercontent.com/82988/56127017-2bf78e80-5f74-11e9-9120-9393eb5d4988.png)\r\n\r\n`_where=` renders a `[remove]` link whereas `_facet=` gets a cross to remove it. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432636432, "label": "?_where=sql-fragment parameter for table views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/431#issuecomment-483017176", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/431", "id": 483017176, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MzAxNzE3Ng==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-14T16:58:37Z", "updated_at": "2019-04-14T16:58:37Z", "author_association": "CONTRIBUTOR", "body": "Hmm... nope... I see an updated timestamp from `ls -al` on the db but no reload?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432870248, "label": "Datasette doesn't reload when database file changes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/412#issuecomment-474282321", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/412", "id": 474282321, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3NDI4MjMyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-03-19T10:09:46Z", "updated_at": "2019-03-19T10:09:46Z", "author_association": "CONTRIBUTOR", "body": "Does this also relate to https://github.com/simonw/datasette/issues/283 and the ability to `ATTACH DATABASE`?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 411257981, "label": "Linked Data(sette)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-474280581", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 474280581, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3NDI4MDU4MQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-03-19T10:06:42Z", "updated_at": "2019-03-19T10:06:42Z", "author_association": "CONTRIBUTOR", "body": "This would be really interesting but several possibilities in use arise, I think?\r\n\r\nFor example:\r\n\r\n- I put a new CSV file into the import dir and a new table is created therefrom\r\n- I put a CSV file into the import dir that replaces a previous file / table of the same name as a pre-existing table (eg files that contain monthly data in year to date). The data may also patch previous months, so a full replace / DROP on the original table may well be in order.\r\n- I put a CSV file into the import dir that updates a table of the same name as a pre-existing table (eg files that contain last month's data)\r\n\r\nCSV files may also have messy names compared to the table you want. Or for an update CSV, may have the form `MYTABLENAME-February2019.csv` etc", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null}