{"id": 268110769, "node_id": "MDU6SXNzdWUyNjgxMTA3Njk=", "number": 33, "title": "Use locust for benchmarking and load tests", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-10-24T17:00:09Z", "updated_at": "2017-12-10T03:12:16Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/locustio/locust\r\n\r\nNeeded for #32 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/33/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 275159710, "node_id": "MDU6SXNzdWUyNzUxNTk3MTA=", "number": 128, "title": "Every visualization should have an \"embed\" button", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-11-19T13:38:13Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "At least for the first round of visualizations, any time you construct one using the UI the result should include an \"embed this\" button that returns source code to copy and paste\r\n\r\nThese examples should use unpkg.com (or similarl) urls with SRI hashes, eg https://www.srihash.org - and should load data from the datasette JSON API.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/128/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 312395790, "node_id": "MDU6SXNzdWUzMTIzOTU3OTA=", "number": 197, "title": "Ability to sort by more than one column", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-09T05:13:30Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189.\r\n\r\nI'd like to support \"sort by X descending, then by Y ascending if there are dupes for X\" as well. Suggested syntax for that:\r\n\r\n ?_sort_desc=X&_sort=Y\r\n\r\nwe currently only allow one argument to be sent. 
We should allow as many arguments as there are columns, for example:\r\n\r\n ?_sort=department&_sort_desc=precinct&_sort=age&_sort_desc=size", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/197/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 312396095, "node_id": "MDU6SXNzdWUzMTIzOTYwOTU=", "number": 198, "title": "Ability to sort with nulls last", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-09T05:15:40Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189\r\n\r\nHere's how to do that in SQL: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+rowid%2C+*+from+%5Bnfl-wide-receivers%2Fadvanced-historical%5D%0D%0Aorder+by+case+when+career_ranypa+is+null+then+1+else+0+end%2C+career_ranypa%2C+rowid\r\n\r\n order by case when career_ranypa is null then 1 else 0 end, career_ranypa", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/198/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 314771615, "node_id": "MDU6SXNzdWUzMTQ3NzE2MTU=", "number": 218, "title": "Support custom unit display in order to handle \"$10,000\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-16T18:39:31Z", "updated_at": "2018-07-10T17:45:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I tried to get Datasette to display `$10,000` using the new units support but we currently only display units as a suffix:\r\n\r\nhttps://github.com/simonw/datasette/blob/10a34f995c70daa37a8a2aa02c3135a4b023a24c/datasette/app.py#L563-L572\r\n\r\nIt would be neat if there was a mechanism for specifying a custom unit display - maybe something like this:\r\n\r\n```\r\n{\r\n \"custom_units\": {\r\n \"us_dollar\": {\r\n \"unit\": \"us_dollar = [] = $\",\r\n \"format\": \"${:,}\"\r\n }\r\n }\r\n}\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/218/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 314834783, "node_id": "MDU6SXNzdWUzMTQ4MzQ3ODM=", "number": 219, "title": "Expose units in the JSON API?", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-16T22:04:25Z", "updated_at": "2018-04-16T22:04:25Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for 
example, you're consuming the JSON to render the rows on a map.\r\n\r\nI'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). Perhaps the SQL function is a better approach in general.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/219/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 318490133, "node_id": "MDU6SXNzdWUzMTg0OTAxMzM=", "number": 241, "title": "Default datasette logging format should be JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-27T17:32:48Z", "updated_at": "2018-07-10T17:45:40Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Structured logs are better. Datasette should default to outputting its HTTP access log lines as newline delimited JSON instead of the Sanic default format it uses at the moment.\r\n\r\nFor improved greppability these logs should have keys ordered in a consistent way. Python's JSON module can do this with ordered dictionaries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/241/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 320132682, "node_id": "MDU6SXNzdWUzMjAxMzI2ODI=", "number": 250, "title": "Setup some issue templates", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-04T01:49:07Z", "updated_at": "2018-05-04T01:49:07Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://twitter.com/left_pad/status/99216385740464537\r\n\r\nI like the idea of using these to help people understand some of the ways I want to use issues.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/250/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 326778161, "node_id": "MDU6SXNzdWUzMjY3NzgxNjE=", "number": 290, "title": "Consider increasing the default for num_sql_threads (currently 3)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-27T00:52:41Z", "updated_at": "2018-05-27T00:52:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I ran a very rough micro-benchmark on the new `num_sql_threads` config option (added in #285)\r\n\r\n datasette --config num_sql_threads:1 fivethirtyeight.db\r\n\r\nThen\r\n\r\n ab -n 100 -c 10 'http://127.0.0.1:8011/fivethirtyeight-2628db9/twitter-ratio%2Fsenators'\r\n\r\n| 
Number of threads | Requests/second |\r\n|---|---|\r\n| 1 | 4.57 |\r\n| 3 | 9.77 |\r\n| 10 | 13.53 |\r\n| 20 | 15.24 \r\n| 50 | 8.21 | \r\n\r\nThis was on my early 2018 OS X laptop. Need to benchmark in other common environments before making a decision on changing the default. That said, the default of 3 was a number I plucked out of thin air.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/290/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 344654623, "node_id": "MDU6SXNzdWUzNDQ2NTQ2MjM=", "number": 347, "title": "Rename \"datasette package\" to \"datasette publish docker\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-07-26T00:42:46Z", "updated_at": "2018-07-26T00:42:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/347/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 346026869, "node_id": "MDU6SXNzdWUzNDYwMjY4Njk=", "number": 354, "title": "Handle many-to-many relationships", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-07-31T04:03:13Z", "updated_at": "2020-11-24T19:51:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is a master tracking ticket for various many-2-many features.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/354/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 359075028, "node_id": "MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx", "number": 364, "title": "Support for other types of databases using external connectors", "user": {"value": 11912854, "label": "jsancho-gpl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-09-11T14:31:47Z", "updated_at": "2018-09-11T14:31:47Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/364", "body": "This PR is related to #293, but now all commits have been merged.\r\n\r\nThe purpose is to support other file formats that aren't SQLite, like files with PyTables format. 
I've tried to accomplish that using external connectors published with entry points.\r\n\r\nThe modifications in the original datasette code are minimal and many are in a separate file.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/364/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 377166793, "node_id": "MDU6SXNzdWUzNzcxNjY3OTM=", "number": 372, "title": "Docker build tools", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-11-04T16:02:35Z", "updated_at": "2018-11-04T16:02:35Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components such as `requirements.txt` files.\r\n\r\nIf plugin/extensions builders want to include additional packages, then things like incremental builds or composable builds that add additional items into a base `datasette` container may be required.\r\n\r\nExamples of Dockerfile generators / container builders:\r\n\r\n- [openshift/source-to-image (s2i)](https://github.com/openshift/source-to-image)\r\n- [jupyter/repo2docker](https://github.com/jupyter/repo2docker)\r\n- [stencila/dockter](https://github.com/stencila/dockter)\r\n\r\nDiscussions / threads (via Binderhub gitter) on:\r\n- [why `repo2docker` not `s2i`](http://words.yuvi.in/post/why-not-s2i/)\r\n- [why `dockter` not `repo2docker`](https://twitter.com/choldgraf/status/1058499607309647872)\r\n- [composability in `s2i`](https://trello.com/c/AexIVZNf/1008-8-composable-builds-builds-evg)\r\n\r\nRelates to things like:\r\n\r\n- https://github.com/simonw/datasette/pull/280", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/372/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 2, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 426722204, "node_id": "MDU6SXNzdWU0MjY3MjIyMDQ=", "number": 423, "title": "?_search_col=X not reflected correctly in the UI", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-03-28T21:48:19Z", "updated_at": "2020-11-03T19:01:59Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "e.g. 
https://latest.datasette.io/fixtures/searchable?_search_text1=barry\r\n\r\n![2019-03-28 at 2 47 PM](https://user-images.githubusercontent.com/9599/55195035-84ebb800-5168-11e9-910b-fc9868bcd93e.png)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/423/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 440325850, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2", "number": 452, "title": "SQL builder utility classes", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-04T13:57:47Z", "updated_at": "2019-05-04T14:03:04Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/452", "body": "This adds a straightforward set of classes to aid in the construction of\r\nSQL queries.\r\n\r\nMy plan for this was to allow plugins to manipulate the\r\nDatasette-generated SQL in a more structured way. I'm not sure that's\r\ngoing to work, but I feel like this is still a step forward - it\r\nreduces the number of intermediate variables in `TableView.data` which\r\naids readability, and also factors out a lot of the boring string\r\nconcatenation.\r\n\r\nThere are a fair number of minor structure changes in here too as I've\r\ntried to make the ordering of `TableView.data` a bit more logical. As\r\nfar as I can tell, I haven't broken anything...", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/452/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 449445715, "node_id": "MDU6SXNzdWU0NDk0NDU3MTU=", "number": 491, "title": "Figure out how to use Firebase with cloudrun to enable vanity URLs and CDN caching", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-28T19:48:06Z", "updated_at": "2019-05-28T19:48:35Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It looks like Firebase can solve a couple of problems with the existing `datasette publish cloudrun` hosting mechanism:\r\n\r\n* The URLs it produces aren't pretty enough. 
Firebase offers more control over vanity URLs.\r\n* CDN caching (as seen in `datasette publish now`) is great for improving performance and saving money on Cloud Run execution time.\r\n\r\nhttps://firebase.google.com/docs/hosting/cloud-run looks like it can help with both of these.\r\n\r\nLots of interesting questions:\r\n\r\n* Should this be a new `datasette publish firebase` command or should it instead be implemented as additional custom options to `datasette publish cloudrun`?\r\n* How much harder does it become to do account setup?\r\n* How much will this option cost users?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/491/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 459469278, "node_id": "MDU6SXNzdWU0NTk0NjkyNzg=", "number": 515, "title": "Try shrinking official image with docker-slim", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-06-22T12:25:37Z", "updated_at": "2019-06-22T12:25:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This looks really promising: https://github.com/docker-slim/docker-slim\r\n\r\nIf it can shave substantial size from our official container reliably we could add it to the automated build process.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/515/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 460095928, "node_id": "MDU6SXNzdWU0NjAwOTU5Mjg=", "number": 528, "title": "Establish a pattern for Datasette plugins built on top of Pandas", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-06-24T21:05:52Z", "updated_at": "2019-06-24T21:05:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The Pandas ecosystem is huge, varied and full of tools that are really good at doing interesting analysis on top of tabular data.\r\n\r\nPandas should not be a dependency of Datasette core, but I think there is a lot of potential in having plugins which use Pandas to apply interesting analysis to data sucked out of Datasette's SQLite tables.\r\n\r\nOne example ([thanks, Tony](https://twitter.com/psychemedia/status/1143259809715752962)): https://github.com/ResidentMario/missingno could form the basis of a fantastic plugin for getting a high-level overview of how complete each column in a table is.\r\n\r\nSome thought is needed here about what shape these kind of plugins might take, and what plugin hooks they would use.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/528/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} 
{"id": 503053243, "node_id": "MDU6SXNzdWU1MDMwNTMyNDM=", "number": 582, "title": "Datasette should not completely crash if one SQLite database is malformed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-06T05:11:43Z", "updated_at": "2019-10-06T05:11:43Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "If you run Datasette against a number of database files and one of them is malformed, you get this 500 error on the index page:\r\n\r\n\"Error_500\"\r\n\r\nIt would be better if Datasette still worked and listed the databases that were NOT malformed, then showed an inline error message just for the one that could not be accessed.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/582/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 504720731, "node_id": "MDU6SXNzdWU1MDQ3MjA3MzE=", "number": 1, "title": "Add more details on how to request data from google takeout correctly.", "user": {"value": 1055831, "label": "dazzag24"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-09T15:17:34Z", "updated_at": "2019-10-09T15:17:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:\r\n\r\n- My Activity\r\n- Location History\r\n\r\nIn addition unless you specify that \"My Activity\" is downloaded in JSON format the default is HTML. 
This then causes the \r\n\r\n`google-takeout-to-sqlite my-activity takeout.db takeout.zip`\r\n\r\ncommand to fail as it only contains HTML files, not JSON files.\r\n\r\nThanks", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 505673645, "node_id": "MDU6SXNzdWU1MDU2NzM2NDU=", "number": 16, "title": "Do a better job with archived direct message threads", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-11T06:55:21Z", "updated_at": "2019-10-11T06:55:27Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://github.com/dogsheep/twitter-to-sqlite/blob/fb2698086d766e0333a55bb73435e7283feeb438/twitter_to_sqlite/archive.py#L98-L99", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 530468212, "node_id": "MDU6SXNzdWU1MzA0NjgyMTI=", "number": 643, "title": "Set up some basic benchmarks as part of the unit tests", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-11-29T19:24:19Z", "updated_at": "2019-11-29T19:24:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://pypi.org/project/pytest-benchmark/ looks great for this.\r\n\r\nHere's how to run it as a github action: https://github.com/rhysd/github-action-benchmark/blob/master/examples/pytest/README.md", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/643/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 541274681, "node_id": "MDU6SXNzdWU1NDEyNzQ2ODE=", "number": 2, "title": "Add linkedin-to-sqlite", "user": {"value": 881925, "label": "mnp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-12-21T03:13:40Z", "updated_at": "2019-12-21T03:13:40Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There is an API available. 
https://developer.linkedin.com/docs/rest-api#\r\n\r\nAt the minimum, I would think contact list and messages would be of interest.", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 581795570, "node_id": "MDU6SXNzdWU1ODE3OTU1NzA=", "number": 93, "title": "Support more string values for types in .add_column()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-03-15T19:32:49Z", "updated_at": "2020-09-24T20:36:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://sqlite-utils.readthedocs.io/en/2.4.2/python-api.html#adding-columns says:\r\n> SQLite types you can specify are \"TEXT\", \"INTEGER\", \"FLOAT\" or \"BLOB\".\r\n\r\nAs discovered in #92 this isn't the right list of values. I should expand this to match https://www.sqlite.org/datatype3.html", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/93/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 593006814, "node_id": "MDU6SXNzdWU1OTMwMDY4MTQ=", "number": 715, "title": "Refactor duplicate cell display logic", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-03T00:58:11Z", "updated_at": "2020-04-03T00:58:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The logic for rendering cells in table view and in database (or canned query) view is currently very similar:\r\n\r\nhttps://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/base.py#L514-L539\r\n\r\nCompared with:\r\n\r\nhttps://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/table.py#L104-L195\r\n\r\nI'll be changing this a bit in #698 but I should still try to clean this up further in the future.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/715/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 594237015, "node_id": "MDU6SXNzdWU1OTQyMzcwMTU=", "number": 718, "title": "Plugin idea: datasette-redirects", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-05T03:41:38Z", "updated_at": "2023-08-30T22:17:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just had to write a one-off custom plugin to redirect niche-museums.com to www.niche-museums.com (https://github.com/simonw/museums/issues/21) - it would be great if this kind of thing could 
be handled by a configurable plugin.\r\n\r\nhttps://github.com/simonw/museums/blob/6b1faf00c463b2228860d4d62d104b11935e01b1/plugins/redirect_www.py", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/718/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "reopened"} {"id": 599776345, "node_id": "MDU6SXNzdWU1OTk3NzYzNDU=", "number": 24, "title": "Feature idea: github-to-sqlite everything ...", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-14T18:34:00Z", "updated_at": "2020-04-14T18:34:00Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "At the moment if you want to pull all your repos, issues, issue comments etc you have to do it with a sequence of separate commands.\r\n\r\nConsider adding an `everything` or `all` command which fetches everything that the tool knows how to fetch, and is designed to be run on a cron in a way that fetches just new stuff each time.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/24/reactions\", \"total_count\": 7, \"+1\": 7, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 613491342, "node_id": "MDU6SXNzdWU2MTM0OTEzNDI=", "number": 762, "title": "Experiment with PRAGMA hard_heap_limit ", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-06T17:33:23Z", "updated_at": "2020-05-07T03:08:44Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This was added in SQLite 2020-01-22 (3.31.0): https://www.sqlite.org/changes.html#version_3_31_0\r\n\r\n> Add the [sqlite3_hard_heap_limit64()](https://www.sqlite.org/c3ref/hard_heap_limit64.html) interface and the corresponding [PRAGMA hard_heap_limit](https://www.sqlite.org/pragma.html#pragma_hard_heap_limit) command. 
\r\n\r\nThis sounds like it could be a nice extra safety measure.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/762/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 621486115, "node_id": "MDU6SXNzdWU2MjE0ODYxMTU=", "number": 27, "title": "photos_with_apple_metadata view should include labels", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-20T06:06:17Z", "updated_at": "2020-05-20T06:06:17Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://dogsheep-photos.dogsheep.net/public/photos_with_apple_metadata?place_city=New+Orleans&_facet=place_city&_facet_array=albums&_facet_array=persons\r\n\r\nHere's one way to add that:\r\n```sql\r\n select\r\n rowid,\r\n photo,\r\n (\r\n select\r\n json_group_array(\r\n json_object(\r\n 'label',\r\n normalized_string,\r\n 'href',\r\n '/photos/labelled?_hide_sql=1&label=' || normalized_string\r\n )\r\n )\r\n from\r\n labels\r\n where\r\n labels.uuid = photos_with_apple_metadata.uuid\r\n ) as labels,\r\n date,\r\n```", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/27/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 626582657, "node_id": "MDU6SXNzdWU2MjY1ODI2NTc=", "number": 779, "title": "Make human_description_en explicitly available to output renderers", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-28T14:59:54Z", "updated_at": "2020-05-28T14:59:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "`datasette-atom` uses this:\r\n\r\nhttps://github.com/simonw/datasette-atom/blob/df98a6c43a443224b6cd232f84703ec297ef046b/datasette_atom/__init__.py#L36-L37\r\n```python\r\n if data.get(\"human_description_en\"):\r\n title += \": \" + data[\"human_description_en\"]\r\n```\r\nIt's a nice way to generate a useful title for a filtered table.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/779/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 638238548, "node_id": "MDU6SXNzdWU2MzgyMzg1NDg=", "number": 845, "title": "Code coverage should ignore files in .coveragerc", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-06-13T21:45:42Z", "updated_at": "2020-06-13T21:46:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I'm not sure why this is, but the code coverage I have running in a GitHub Action doesn't take my `.coveragerc` file into account. 
It should:\r\n\r\nhttps://github.com/simonw/datasette/blob/cf7a2bdb404734910ec07abc7571351a2d934828/.github/workflows/test-coverage.yml#L31-L35\r\n\r\nHere's the bit that's ignored:\r\n\r\nhttps://github.com/simonw/datasette/blob/cf7a2bdb404734910ec07abc7571351a2d934828/.coveragerc#L1-L2\r\n\r\nAs a result my coverage score is 84%, when it should be 92%:\r\n```\r\n2020-06-13T21:41:18.4404252Z ----------- coverage: platform linux, python 3.8.3-final-0 -----------\r\n2020-06-13T21:41:18.4404570Z Name Stmts Miss Cover\r\n2020-06-13T21:41:18.4404971Z --------------------------------------------------------\r\n2020-06-13T21:41:18.4405227Z datasette/__init__.py 3 0 100%\r\n2020-06-13T21:41:18.4405441Z datasette/__main__.py 3 3 0%\r\n2020-06-13T21:41:18.4405668Z datasette/_version.py 279 279 0%\r\n2020-06-13T21:41:18.4405921Z datasette/actor_auth_cookie.py 20 0 100%\r\n2020-06-13T21:41:18.4406135Z datasette/app.py 499 27 95%\r\n2020-06-13T21:41:18.4406343Z datasette/cli.py 162 45 72%\r\n2020-06-13T21:41:18.4406553Z datasette/database.py 236 17 93%\r\n2020-06-13T21:41:18.4406761Z datasette/default_permissions.py 40 0 100%\r\n2020-06-13T21:41:18.4406975Z datasette/facets.py 210 24 89%\r\n2020-06-13T21:41:18.4407186Z datasette/filters.py 122 7 94%\r\n2020-06-13T21:41:18.4407394Z datasette/hookspecs.py 34 0 100%\r\n2020-06-13T21:41:18.4407600Z datasette/inspect.py 36 23 36%\r\n2020-06-13T21:41:18.4407807Z datasette/plugins.py 34 6 82%\r\n2020-06-13T21:41:18.4408014Z datasette/publish/__init__.py 0 0 100%\r\n2020-06-13T21:41:18.4408240Z datasette/publish/cloudrun.py 57 2 96%\r\n2020-06-13T21:41:18.4408786Z datasette/publish/common.py 19 1 95%\r\n2020-06-13T21:41:18.4409029Z datasette/publish/heroku.py 97 13 87%\r\n2020-06-13T21:41:18.4409243Z datasette/renderer.py 63 4 94%\r\n2020-06-13T21:41:18.4409450Z datasette/sql_functions.py 5 0 100%\r\n2020-06-13T21:41:18.4410480Z datasette/tracer.py 87 16 82%\r\n2020-06-13T21:41:18.4410972Z datasette/utils/__init__.py 504 31 94%\r\n2020-06-13T21:41:18.4411755Z datasette/utils/asgi.py 264 24 91%\r\n2020-06-13T21:41:18.4412173Z datasette/utils/shutil_backport.py 44 44 0%\r\n2020-06-13T21:41:18.4412822Z datasette/version.py 4 0 100%\r\n2020-06-13T21:41:18.4413562Z datasette/views/__init__.py 0 0 100%\r\n2020-06-13T21:41:18.4414276Z datasette/views/base.py 288 19 93%\r\n2020-06-13T21:41:18.4414579Z datasette/views/database.py 120 2 98%\r\n2020-06-13T21:41:18.4414860Z datasette/views/index.py 57 2 96%\r\n2020-06-13T21:41:18.4415379Z datasette/views/special.py 72 16 78%\r\n2020-06-13T21:41:18.4418994Z datasette/views/table.py 418 18 96%\r\n2020-06-13T21:41:18.4428811Z --------------------------------------------------------\r\n2020-06-13T21:41:18.4430394Z TOTAL 3777 623 84%\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/845/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 648659536, "node_id": "MDU6SXNzdWU2NDg2NTk1MzY=", "number": 881, "title": "Figure out why restore_working_directory is needed in some places", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-01T04:19:25Z", "updated_at": "2020-07-01T04:19:25Z", "closed_at": null, "author_association": "OWNER", 
"pull_request": null, "body": "This is a frustrating workaround. I have a `restore_working_directory` fixture that I wrote to solve errors that look like this:\r\n```\r\n/Users/simon/Dropbox/Development/datasette/tests/test_publish_cloudrun.py:148: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py:112: in __enter__\r\n return next(self.gen)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = \r\n\r\n @contextlib.contextmanager\r\n def isolated_filesystem(self):\r\n \"\"\"A context manager that creates a temporary folder and changes\r\n the current working directory to it for isolated filesystem tests.\r\n \"\"\"\r\n> cwd = os.getcwd()\r\nE FileNotFoundError: [Errno 2] No such file or directory\r\n```\r\nHere's an example of it in use: removing the `restore_working_directory` argument from this function causes the failure. https://github.com/simonw/datasette/blob/549b1c2063db48c4622ee5c7b478a1e3cbc1ac07/tests/test_plugins.py#L689-L690\r\n\r\nI'd like to not have to do this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/881/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 655974395, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQ4MzU1Njgw", "number": 30, "title": "Handle empty bucket on first upload. Allow specifying the endpoint_url for services other than S3 (like b2 and digitalocean spaces)", "user": {"value": 110038, "label": "scanner"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-13T16:15:26Z", "updated_at": "2020-07-13T16:15:26Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep-photos/pulls/30", "body": "Finally got around to trying dogsheep-photos but I want to use backblaze's b2 service instead of AWS S3.\r\nHad to add a way to optionally specify the endpoint_url to connect to. Then with the bucket being empty the initial key retrieval would fail. 
There is probably a better way to check that the bucket is empty than running a test inside the paginator loop.\r\n\r\nThere is also probably a better way to specify the endpoint_url, as we get and test for it twice using the same code in two different places, but I did not want to spend too much time worrying about it.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/30/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 664793260, "node_id": "MDU6SXNzdWU2NjQ3OTMyNjA=", "number": 2, "title": "Yak shave", "user": {"value": 145425, "label": "ekg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-23T22:04:18Z", "updated_at": "2020-07-23T22:04:18Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Just a quick note... The 23andme data is not exactly your genome, but a SNP chip of your genome. It's \"some of your genotypes.\" Or about 0.1% of your genome. Nice work in any case! It deserves to be liberated!!!!!", "repo": {"value": 209590345, "label": "genome-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/2/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 668064026, "node_id": "MDU6SXNzdWU2NjgwNjQwMjY=", "number": 911, "title": "Rethink the --name option to \"datasette publish\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2020-07-29T18:49:49Z", "updated_at": "2020-07-29T18:49:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "`--name` works inconsistently across the different publish providers - on Cloud Run you should use `--service` instead for example. 
Need to review it across all of them and either remove it or clarify what it does.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/911/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 673602857, "node_id": "MDU6SXNzdWU2NzM2MDI4NTc=", "number": 9, "title": "Define a view that displays photos correctly", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-08-05T14:53:39Z", "updated_at": "2020-08-05T14:53:39Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The `photos` table stores data like this:\r\n\r\nid | createdAt | source | prefix | suffix | width | height | visibility | created\u00a0\u25b2 | user\r\n-- | -- | -- | -- | -- | -- | -- | -- | -- | --\r\n5e12c9708506bc000840262a | January 06, 2020 - 05:45:20 UTC | Swarm for iOS\u00a01 | https://fastly.4sqi.net/img/general/ | /15889193_AXxGk4I1nbzUZuyYqObgbXdJNyEHiwj6AUDq0tPZWtw.jpg | 1920 | 1440 | public | 2020-01-06T05:45:20 | 15889193\r\n\r\nThe photo URL can be derived from those pieces - define a SQL view which does that (using `datasette-json-html` to display the pictures)", "repo": {"value": 205429375, "label": "swarm-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/9/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 675594325, "node_id": "MDU6SXNzdWU2NzU1OTQzMjU=", "number": 917, "title": "Idea: \"datasette publish\" option for \"only if the data has changed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-08-08T21:58:27Z", "updated_at": "2020-08-08T21:58:27Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is a pattern I often find myself needing. I usually implement this in GitHub Actions like this:\r\n\r\nhttps://github.com/simonw/covid-19-datasette/blob/efa01c39abc832b8641fc2a92840cc3acae2fb08/.github/workflows/scheduled.yml#L52-L63\r\n\r\n```yaml\r\n - name: Set variables to decide if we should deploy\r\n id: decide_variables\r\n run: |-\r\n echo \"##[set-output name=latest;]$(datasette inspect covid.db | jq '.covid.hash' -r)\"\r\n echo \"##[set-output name=deployed;]$(curl -s https://covid-19.datasettes.com/-/databases.json | jq '.[0].hash' -r)\"\r\n - name: Set up Cloud Run\r\n if: github.event_name == 'workflow_dispatch' || steps.decide_variables.outputs.latest != steps.decide_variables.outputs.deployed\r\n uses: GoogleCloudPlatform/github-actions/setup-gcloud@master\r\n```\r\nThis is pretty fiddly. 
It might be good for `datasette publish` to grow a helper option that does effectively this - hashes the databases (and the `metadata.json`) and compares them to the deployed version.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/917/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 688352145, "node_id": "MDU6SXNzdWU2ODgzNTIxNDU=", "number": 141, "title": "insert-files support for compressed values", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-08-28T20:59:46Z", "updated_at": "2020-09-24T20:36:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The `sqlar` format supports this, it would be useful if `insert-files` could support this too.\r\n\r\nhttps://www.sqlite.org/sqlar.html", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/141/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 689848827, "node_id": "MDU6SXNzdWU2ODk4NDg4Mjc=", "number": 6, "title": "ISO timestamps", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-01T06:16:42Z", "updated_at": "2020-09-01T06:16:42Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The `time_added`, `time_updated` and `time_read` columns currently store data like this:\r\n\r\n September 19, 2019 - 00:30:30 UTC\r\n\r\nShould use ISO instead, e.g. 
`2020-07-26T01:05:24+00:00`", "repo": {"value": 213286752, "label": "pocket-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 689850810, "node_id": "MDU6SXNzdWU2ODk4NTA4MTA=", "number": 6, "title": "Set up a demo instance", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-01T06:20:24Z", "updated_at": "2020-09-01T06:20:24Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Once I've got the Datasette plugin to a state where it's worth building a demo: #3\r\n\r\nI can use data from my public https://github-to-sqlite.dogsheep.net/ demo plus the Pocket data subset I use for the demo in https://github.com/dogsheep/pocket-to-sqlite/issues/5 - I could pull in the https://dogsheep-photos.dogsheep.net/ photos data too.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 691537426, "node_id": "MDU6SXNzdWU2OTE1Mzc0MjY=", "number": 959, "title": "Internals API idea: results.dicts in addition to results.rows", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-03T00:50:17Z", "updated_at": "2020-09-03T00:50:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just wrote this code:\r\n```python\r\n results = await database.execute(SEARCH_SQL, {\"query\": query})\r\n return [dict(r) for r in results.rows]\r\n```\r\nHow about having `results.dicts` as a utility property that does that?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/959/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 692202408, "node_id": "MDU6SXNzdWU2OTIyMDI0MDg=", "number": 12, "title": "Idea: maps and GeoJSON support", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-03T18:47:10Z", "updated_at": "2020-09-04T01:45:03Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "It would be cool if the `display_sql` could return a column populated with GeoJSON which would then automatically be displayed on a map in the results (or maybe default JS would look for a `class=\"geojson\"` element output by the `display` template) - ala https://github.com/simonw/datasette-leaflet-geojson\r\n\r\nThen I could render workout routes on a map, or Swarm checkin points.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": 
null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 697162939, "node_id": "MDU6SXNzdWU2OTcxNjI5Mzk=", "number": 20, "title": "Add more tags so people can find your project.", "user": {"value": 7902810, "label": "ran88dom99"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-09T21:14:09Z", "updated_at": "2020-09-09T21:14:09Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "quantified-self habit-tracking google-fit time-tracking wearables quantifiedself \r\nfor example", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/20/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 1, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 703216044, "node_id": "MDU6SXNzdWU3MDMyMTYwNDQ=", "number": 49, "title": "Feature: gists and starred gists", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-17T02:30:52Z", "updated_at": "2020-09-17T02:30:52Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://developer.github.com/v3/gists/#list-starred-gists", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/49/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 703218448, "node_id": "MDU6SXNzdWU3MDMyMTg0NDg=", "number": 51, "title": "Documentation for twitter-to-sqlite fetch", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-17T02:38:10Z", "updated_at": "2020-09-17T02:38:10Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "It's mentioned in passing in the README but it deserves its own section:\r\n```\r\n$ twitter-to-sqlite fetch \\\r\n \"https://api.twitter.com/1.1/account/verify_credentials.json\" \\\r\n | grep '\"id\"' | head -n 1\r\n```", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/51/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 709789634, "node_id": "MDU6SXNzdWU3MDk3ODk2MzQ=", "number": 27, "title": "Sort order is not persisted by facet filter links", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-27T18:22:07Z", "updated_at": "2020-09-27T18:22:07Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "A link to 
`/-/beta?category=1&timestamp__date=2018-08-01&q=swedish` should be to `/-/beta?category=1&timestamp__date=2018-08-01&q=swedish&sort=newest`", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/27/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 723499985, "node_id": "MDExOlB1bGxSZXF1ZXN0NTA1MDc2NDE4", "number": 5, "title": "Add fitbit-to-sqlite", "user": {"value": 4632208, "label": "mrphil007"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-10-16T20:04:05Z", "updated_at": "2020-10-16T20:04:05Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/5", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 730210880, "node_id": "MDU6SXNzdWU3MzAyMTA4ODA=", "number": 1055, "title": "query.html and table.html should share the same table implementation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2020-10-27T07:58:21Z", "updated_at": "2020-10-27T07:58:29Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In #998 I made a change that affected the table page but didn't affect the query page because I incorrectly assumed they shared rendering logic.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1055/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 741231849, "node_id": "MDU6SXNzdWU3NDEyMzE4NDk=", "number": 1087, "title": "Idea: ?_extra=urls for getting back URLs to useful things", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-11-12T02:55:41Z", "updated_at": "2021-12-15T18:06:16Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Working on https://github.com/simonw/datasette-search-all/issues/10 made me realize that sometimes it can be difficult to calculate the URL for a database, table or row within Datasette.\r\n\r\nIt would be useful to have an optional extra JSON extension (using `?_extra=` from #262) that can help with this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1087/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} 
{"id": 750089847, "node_id": "MDU6SXNzdWU3NTAwODk4NDc=", "number": 1109, "title": "Deprecate --config in Datasette 1.0 (in favour of --setting)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2020-11-24T21:43:57Z", "updated_at": "2020-12-17T22:07:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I added a deprecation warning to this in #992.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1109/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 756875827, "node_id": "MDU6SXNzdWU3NTY4NzU4Mjc=", "number": 1129, "title": "Fix footer to the bottom of the page", "user": {"value": 3243482, "label": "abdusco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-12-04T07:28:07Z", "updated_at": "2020-12-04T16:04:29Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Footer doesn't stick to the bottom if the body content isn't long enough to reach the end of viewport.\r\n\r\n![before & after](https://user-images.githubusercontent.com/3243482/101134785-f6595a80-361b-11eb-81ce-b8b5cb9c5bc2.png)\r\n\r\n\r\nThis can be fixed using flexbox.\r\n\r\n```css\r\nbody {\r\n min-height: 100vh;\r\n display: flex;\r\n flex-direction: column;\r\n}\r\n\r\n.content {\r\n flex-grow: 1;\r\n}\r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1129/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 769397742, "node_id": "MDU6SXNzdWU3NjkzOTc3NDI=", "number": 3, "title": "sqlite-utils error on takeout import", "user": {"value": 231498, "label": "khimaros"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-12-17T01:18:48Z", "updated_at": "2020-12-17T01:19:04Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "```\r\n$ google-takeout-to-sqlite my-activity takeout.db /path/to/zip\r\n...\r\nsqlite3.OperationalError: no such table: main.my_activity\r\n```\r\n\r\nthere is no table create in `utils.py`, unlike other importers such as github-to-sqlite\r\n\r\nadditionally, this package and hackernews-to-sqlite have conflicting `sqlite-utils` dep with datasette and dogsheep-beta", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/3/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 776128269, "node_id": "MDU6SXNzdWU3NzYxMjgyNjk=", "number": 1162, "title": "First working version of \"datasette insert data.db file.csv\"", "user": {"value": 9599, "label": "simonw"}, 
"state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-12-29T23:20:11Z", "updated_at": "2021-06-17T18:12:32Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Refs #1160", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1162/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 778530523, "node_id": "MDU6SXNzdWU3Nzg1MzA1MjM=", "number": 1172, "title": "/-/static should be excluded from auth and permission checks", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-05T02:53:41Z", "updated_at": "2021-01-05T02:53:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I want to set far future / immutable cache headers on everything served from `/-/static` and `/-/static-plugins`\r\n\r\nThis has security implications since it will be possible to see what plugins are installed by checking for known static URLs. I'm fine with that - performance is more important here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1172/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 787104850, "node_id": "MDU6SXNzdWU3ODcxMDQ4NTA=", "number": 1192, "title": "Form Plugin for in-depth Datasette Querying", "user": {"value": 1024355, "label": "tomershvueli"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-15T18:24:50Z", "updated_at": "2021-01-15T18:24:50Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I envision a sort of easy-to-build form plugin that would be able to map a user's inputs to different fields/columns in a Datasette database. ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1192/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 793907673, "node_id": "MDExOlB1bGxSZXF1ZXN0NTYxNTEyNTAz", "number": 15, "title": "added try / except to write_records ", "user": {"value": 9857779, "label": "ryancheley"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-26T03:56:21Z", "updated_at": "2021-01-26T03:56:21Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/healthkit-to-sqlite/pulls/15", "body": "to keep the data write from failing if it came across an error during processing. 
In particular when trying to convert my HealthKit zip file (and that of my wife's) it would consistently error out with the following:\r\n\r\n```\r\ndb.py 1709 insert_chunk\r\nresult = self.db.execute(query, params)\r\n\r\ndb.py 226 execute\r\nreturn self.conn.execute(sql, parameters)\r\n\r\nsqlite3.OperationalError:\r\ntoo many SQL variables\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\ndb.py 1709 insert_chunk\r\nresult = self.db.execute(query, params)\r\n\r\ndb.py 226 execute\r\nreturn self.conn.execute(sql, parameters)\r\n\r\nsqlite3.OperationalError:\r\ntoo many SQL variables\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\ndb.py 1709 insert_chunk\r\nresult = self.db.execute(query, params)\r\n\r\ndb.py 226 execute\r\nreturn self.conn.execute(sql, parameters)\r\n\r\nsqlite3.OperationalError:\r\ntable rBodyMass has no column named metadata_HKWasUserEntered\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nhealthkit-to-sqlite 8 \r\nsys.exit(cli())\r\n\r\ncore.py 829 __call__\r\nreturn self.main(*args, **kwargs)\r\n\r\ncore.py 782 main\r\nrv = self.invoke(ctx)\r\n\r\ncore.py 1066 invoke\r\nreturn ctx.invoke(self.callback, **ctx.params)\r\n\r\ncore.py 610 invoke\r\nreturn callback(*args, **kwargs)\r\n\r\ncli.py 57 cli\r\nconvert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf)\r\n\r\nutils.py 42 convert_xml_to_sqlite\r\nwrite_records(records, db)\r\n\r\nutils.py 143 write_records\r\ndb[table].insert_all(\r\n\r\ndb.py 1899 insert_all\r\nself.insert_chunk(\r\n\r\ndb.py 1720 insert_chunk\r\nself.insert_chunk(\r\n\r\ndb.py 1720 insert_chunk\r\nself.insert_chunk(\r\n\r\ndb.py 1714 insert_chunk\r\nresult = self.db.execute(query, params)\r\n\r\ndb.py 226 execute\r\nreturn self.conn.execute(sql, parameters)\r\n\r\nsqlite3.OperationalError:\r\ntable rBodyMass has no column named metadata_HKWasUserEntered\r\n```\r\n\r\nAdding the try / except in the `write_records` seems to fix that issue. ", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/15/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 797728929, "node_id": "MDU6SXNzdWU3OTc3Mjg5Mjk=", "number": 8, "title": "QUESTION: extract full text", "user": {"value": 417363, "label": "darribas"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-31T14:50:10Z", "updated_at": "2021-01-31T14:50:10Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "This may be solved or a feature already, but I couldn't figure it out, is it possible to extract and store also full text from the saved pages? The same way that Pocket parses the text, it'd be amazing to be able to store (and thus make searchable later) the text.\r\n\r\nThank you very much for the project, it's such an amazing idea! 
", "repo": {"value": 213286752, "label": "pocket-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/8/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 797784080, "node_id": "MDU6SXNzdWU3OTc3ODQwODA=", "number": 62, "title": "Stargazers and workflows commands always require an auth file when using GITHUB_TOKEN ", "user": {"value": 631242, "label": "frosencrantz"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-31T18:56:05Z", "updated_at": "2021-01-31T18:56:05Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Requested fix in https://github.com/dogsheep/github-to-sqlite/pull/59\r\n\r\nThe stargazers and workflows commands always require an auth file, even when using a `GITHUB_TOKEN`. Other commands don't require the auth file.\r\n\r\n", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/62/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 808771690, "node_id": "MDU6SXNzdWU4MDg3NzE2OTA=", "number": 1225, "title": "More flexible formatting of records with CSS grid", "user": {"value": 649467, "label": "mhalle"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-02-15T19:28:17Z", "updated_at": "2021-02-15T19:28:35Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "In several applications I've been experimenting with alternate formatting of datasette query results. Lately I've found that CSS grids work very well and seem quite general for formatting rows. In CSS I use grid templates to define the layout of each record and the regions for each field, hiding the fields I don't want. It's pretty flexible and looks good. It's also a great basis for highly responsive layout.\r\n\r\nI initially thought I'd only use this feature for record detail views, but now I use it for index views as well. \r\n\r\nHowever, there are some limitations:\r\n* With the existing table templates, it seems that you can change the `display` property on the enclosing `table`, `tbody`, and `tr` to make them be grid-like, but that seems hacky (convert `table` and `tbody` to be `display: block` and `tr` to be `display: grid`).\r\n* More significantly, it's very nice to have the column name available when rendering each record to display headers/field labels. The existing templates don't do that, so a custom `_table` template is necessary.\r\n* I don't know if any plugins are sensitive to whether data is rendered as a table or not since I'm not completely clear how plugins get their data. \r\n* Regardless, you need custom CSS to take full advantage of grids. I don't have a proposal on how to integrate them more deeply.\r\n\r\nIt would be helpful to at least have an official example or test that used a grid layout for records to make sure nothing in datasette breaks with it. 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1225/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 816601354, "node_id": "MDExOlB1bGxSZXF1ZXN0NTgwMjM1NDI3", "number": 241, "title": "Extract expand - work in progress", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-02-25T16:36:38Z", "updated_at": "2021-02-25T16:36:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/sqlite-utils/pulls/241", "body": "Refs #239. Still needs documentation and CLI implementation.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/241/reactions\", \"total_count\": 3, \"+1\": 3, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 1, "state_reason": null} {"id": 818684978, "node_id": "MDU6SXNzdWU4MTg2ODQ5Nzg=", "number": 243, "title": "How can i use this utils to deal with fts on column meta of tables ?", "user": {"value": 27874014, "label": "svjack"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-01T09:45:05Z", "updated_at": "2021-03-01T09:45:05Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Thank you to release this bravo project.\r\nWhen i use this project on multi table db, I want to implement \r\nconvenient search on column name from different tables.\r\nI want to develop a meta table to save the meta data of different columns\r\nof different tables and search on this meta table to get rows from the\r\ndata table (which the meta table describes)\r\ndoes this project provide some simple function on it ?\r\n\r\nYou can think a have a knowledge graph about the table in the db, \r\nand i save this knowledge graph into the db with fts enabled.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/243/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 824750134, "node_id": "MDU6SXNzdWU4MjQ3NTAxMzQ=", "number": 1251, "title": "facet option not appearing when table is big", "user": {"value": 15836677, "label": "verajosemanuel"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-08T16:54:04Z", "updated_at": "2021-03-08T16:54:16Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a big table with more than 500.000 rows.\r\nTrying to facet by one of my columns, the options are not available as for the other smaller tables.\r\n\r\nI have tried to set it in URL as:\r\n\r\n`&_facet=city_id`\r\n\r\nto no avail.\r\n\r\nis there any limit? 
how can I force the option \"facet\" to appear for big tables?\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1251/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 830283447, "node_id": "MDU6SXNzdWU4MzAyODM0NDc=", "number": 34, "title": "bucket name", "user": {"value": 6213, "label": "dsisnero"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-12T16:40:57Z", "updated_at": "2021-03-12T16:40:57Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I followed the instructions to set up credentials but I am getting an invalid bucket name. Can you put a sample auth.json file in the base that shows the correct format for this? Thanks", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/34/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 830901133, "node_id": "MDExOlB1bGxSZXF1ZXN0NTkyMzY0MjU1", "number": 16, "title": "Add a fallback ID, print if no ID found", "user": {"value": 1234956, "label": "n8henrie"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-13T13:38:29Z", "updated_at": "2021-03-13T14:44:04Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/healthkit-to-sqlite/pulls/16", "body": "Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/14\n", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 836063389, "node_id": "MDU6SXNzdWU4MzYwNjMzODk=", "number": 17, "title": "Datetime columns are not properly formatted to be recognized as datetime", "user": {"value": 1234956, "label": "n8henrie"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-19T14:33:04Z", "updated_at": "2021-03-19T14:33:04Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "\r\n\r\nCurrently, the datetimes are formatted in a way that is not recognized by datasette-vega for plotting with a `Date/time` type for the axis. 
\r\n\r\nFor example, if you have datasette running locally with `datasette-vega` installed and have a database that includes resting heart rate:\r\n\r\n```\r\nhttp://localhost:8001/healthkit/rRestingHeartRate#g.mark=line&g.x_column=startDate&g.x_type=temporal&g.y_column=value&g.y_type=quantitative\r\n```\r\n\r\nThe plot is blank unless you choose `Label` as the type for the date data.\r\n\r\nThe `startDate` (and `creationDate` and `endDate`) columns appear like: `2019-11-14 18:22:18 -0700`\r\n\r\nIf instead the format for this column is changed slightly: `2019-11-14T18:22:18-07:00` they are recognized as proper dates and the charting works as expected.\r\n\r\nI have a PR that addresses this issue, will submit shortly.", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/17/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 836064851, "node_id": "MDExOlB1bGxSZXF1ZXN0NTk2NjI3Nzgw", "number": 18, "title": "Add datetime parsing", "user": {"value": 1234956, "label": "n8henrie"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-19T14:34:22Z", "updated_at": "2021-03-19T14:34:22Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/healthkit-to-sqlite/pulls/18", "body": "Parses the datetime columns so they are subsequently properly recognized as\ndatetime.\n\nFixes https://github.com/dogsheep/healthkit-to-sqlite/issues/17\n", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/18/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 836923194, "node_id": "MDU6SXNzdWU4MzY5MjMxOTQ=", "number": 32, "title": "JSON API for search results", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-20T22:21:36Z", "updated_at": "2021-03-20T22:21:36Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Refs https://github.com/simonw/datasette/issues/878", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/32/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 839367451, "node_id": "MDU6SXNzdWU4MzkzNjc0NTE=", "number": 1275, "title": "Idea: long-running query mode", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-24T05:23:20Z", "updated_at": "2021-03-24T05:23:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It would be cool if you could run Datasette in a long-running query mode, for use with trusted users - something like this:\r\n\r\n 
datasette --unlimited my.db\r\n\r\nThis would disable the query limit, but would also enable a feature where if a query takes longer than e.g. 1s to return Datasette returns an HTML page to the browser with a progress indicator and polls the server until the query is complete.... but also provides the user with a \"cancel\" button. This relates to the `.interrupt()` research in #1270.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1275/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 843884745, "node_id": "MDU6SXNzdWU4NDM4ODQ3NDU=", "number": 1283, "title": "advanced #export causes unexpected scrolling", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-29T22:46:57Z", "updated_at": "2021-03-29T22:46:57Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "1. Visit a datasette table page\r\n2. Click on the \"(advanced)\" link. This adds a fragment identifier \"#export\" to the URL, and scrolls down to the \"Advanced export\" div with the \"export\" id.\r\n3. Manually scroll back up, and click on a suggested facet. The fragment identifier is still present, and the app scrolls back down to the \"Advanced export\" div. I think this is unwanted behavior.\r\n\r\nThe user remedy seems to be to manually remove the \"#export\" from the URL.\r\n\r\nThis behavior happens in my project, and in:\r\nhttps://covid-19.datasettes.com/covid/economist_excess_deaths (for instance) \r\nbut not in this table: \r\nhttps://global-power-plants.datasettes.com/global-power-plants/global-power-plants", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1283/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 847700726, "node_id": "MDU6SXNzdWU4NDc3MDA3MjY=", "number": 1285, "title": "Feature Request or Plugin Request: Numeric Range Facets", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-01T01:50:20Z", "updated_at": "2021-04-01T02:28:19Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be great to offer facets for numeric data ranges. \r\n\r\nThe ranges could pull from typical GIS methods of creating choropleth maps. 
\r\nhttps://gisgeography.com/choropleth-maps-data-classification/\r\nOf the following, for mapping, I've always preferred a Jenks Natural Breaks, or a cross between Jenks and Pretty breaks.\r\n\r\n- Equal Intervals \r\n- Quantile (equal count) \r\n- Standard Deviation\r\n- Natural Breaks (Jenks) Classification \r\n- Pretty Breaks\r\n- Some sort of Aggregate Jenks Classification (this isn't standard, but it would be nice to be able to set classification ranges that work across tables.)\r\n\r\nHere are some links for Natural Breaks, in case this method is unfamiliar.\r\n\r\n- https://en.wikipedia.org/wiki/Jenks_natural_breaks_optimization\r\n- http://wiki.gis.com/wiki/index.php/Jenks_Natural_Breaks_Classification\r\n- https://medium.com/analytics-vidhya/jenks-natural-breaks-best-range-finder-algorithm-8d1907192051\r\n\r\nPer that last link, there is a Jenks Python module... They also describe it as data-intensive for larger datasets. Maybe this is a good plugin idea.\r\n\r\nAn example of equal Intervals would be \r\n0 \u2013 < 10\r\n10 \u2013 < 20\r\n20 \u2013 < 30\r\n30 \u2013 < 40\r\n\r\nIt's kind of confusing to have that less-than sign in there. it could also be displayed as:\r\n0 \u2013 10\r\n10 \u2013 20\r\n20 \u2013 30\r\n30 \u2013 40\r\n\r\nBut then it's not completely clear which category 10 is in, for instance.\r\n\r\n(Best to right-justify.. and use an \"en dash\" between numbers.)\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1285/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 849512840, "node_id": "MDU6SXNzdWU4NDk1MTI4NDA=", "number": 1288, "title": "Facets: show counts for null", "user": {"value": 1111743, "label": "jungle-boogie"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-02T22:33:44Z", "updated_at": "2021-04-02T22:33:44Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi,\r\n\r\nThank you for Datasette and being a fan of SQLite!\r\n\r\nNot all rows in a record will always contain data.\r\nSo when using a facet on a column where some records have data and others don't, you don't get an accurate count of the results.\r\n\r\nPlease consider also counting and showing null records with facets.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1288/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 853672224, "node_id": "MDU6SXNzdWU4NTM2NzIyMjQ=", "number": 1294, "title": "\"You can check out any time you like. 
But you can never leave!\"", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-08T17:02:15Z", "updated_at": "2021-04-08T18:35:50Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "(Feel free to rename this one.)\r\n\r\n- The column gear lets you \"Show not-blank rows.\" Then it places a parameter in the URL, which a web developer would notice, but a lot of users won't notice, or know to delete it. Would be good to toggle \"Show not-blank rows\" with \"Show all rows.\" (Also would be quite helpful to have a \"Show blank rows | Show all rows\" option)\r\n- The column gear lets you \"Sort ascending\" and \"Sort descending\" but then you're stuck with some sort of sorted version thereafter, unless you know to sort the ID column, or to remove the full _sort parameter and its value in the URL. Would be good to offer a \"Remove sort\" option in the gear.\r\n- These requests are in the same camp as: https://github.com/simonw/datasette-vega/issues/36\r\n- I suspect there are other url parameter instances where similar analysis would be helpful, but the three above are the use cases I've run across. \r\n\r\nUPDATE:\r\n- It would be helpful to have a \"Previous page\" available for all but the first table page.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1294/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 855451460, "node_id": "MDU6SXNzdWU4NTU0NTE0NjA=", "number": 1297, "title": "Documentation: json1, and introspection endpoints", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-12T00:38:00Z", "updated_at": "2021-04-12T01:29:33Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "https://docs.datasette.io/en/stable/facets.html notes that:\r\n> If your SQLite installation provides the json1 extension (you can check using /-/versions) Datasette will automatically detect columns that contain JSON arrays...\r\n\r\nWhen I check -/versions I see two sections relevant to json1:\r\n```\r\n \"extensions\": {\r\n \"json1\": null\r\n },\r\n \"compile_options\": [\r\n ...\r\n \"ENABLE_JSON1\",\r\n```\r\n \r\n The ENABLE_JSON1 makes me think json1 is likely available. But the `\"json1\": null` made me think it wasn't available (because of the `null`). It would help if the documentation provided clarity about how to know if json1 is installed. It would also be helpful if the `/-/versions` information signalled somehow that that is to be appended to the hostname or domain name (or whatever you want to call it, or simply show it, using `example.com/-/versions` instead of `/-/versions`. Likewise on that last point, for https://docs.datasette.io/en/stable/introspection.html#introspection , at least at some point on that page detailing where those introspection endpoints go. 
(Sometimes documentation can be so abbreviated that it's hard for new users to figure out what's going on.)\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1297/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 856895291, "node_id": "MDU6SXNzdWU4NTY4OTUyOTE=", "number": 1299, "title": "Design better empty states", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-13T12:06:12Z", "updated_at": "2021-04-13T12:06:12Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspiration here: https://emptystat.es/", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1299/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 860734722, "node_id": "MDU6SXNzdWU4NjA3MzQ3MjI=", "number": 1302, "title": "Fix disappearing facets", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-18T18:42:33Z", "updated_at": "2021-04-20T07:40:15Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "1. Clone https://github.com/mroswell/list-N\r\n2. Run `datasette disinfectants.db -o`\r\n3. Select the `Safer_or_Toxic` facet.\r\n4. Select `Toxic`.\r\n5. Close out the `Safer_or_Toxic` facet.\r\n6. Examine `Suggested facets` list. `Safer_or_Toxic` is GONE.\r\n7. Try some other facets. When you select an element, and then close the list, in some cases, the facet properly returns to the `Suggested facet` list... Arrays and dates properly return to the list, but fields with strings don't return to the list. \r\n\r\nSince my site is devoted to whether disinfectants are Safer or Toxic, having the suggested facet disappear from the suggested facet list is very confusing* to end-users. This, along with a few other issues, unfortunately proved beyond my own programming ability to address. So I hired a Senior-level developer to address a number of issues, including this disappearing act.\r\n\r\n8. Open a new terminal. Run `datasette disinfectants.db -m metadata.json --static static:static/ --template-dir templates/ --plugins-dir plugins/ -p 8001 -o`\r\n9. Repeat steps 3-6, but this time, the Safer_or_Toxic facet returns to the list (and the related URL parameters are removed).\r\n\r\nI'm not sure how to do a pull request for this, because the plugin contains other functionality that goes beyond this bug. I wanted the facets sorted in a certain order (both in the suggested facet list, and the detail lists) (... the detail lists were hopping around all over the place before...) I wanted the duplicate facets removed (leaving only the one where you can facet by individual item in an array.) I wanted the arrays to be presented in a prettier fashion (I did that in the template... 
That could be moved over to the plugin at some point)\r\n\r\nI'm thinking it'll be very helpful if applicable parts of my project's plugin (sort_suggested_facets_plugin.py) will be able to be incorporated back into datasette, but I leave that to you to consider.\r\n\r\n(* The disappearing facet bug was especially confusing because I'm removing the filters and sql from the table page, at the request of the organization. The filters and sql detail created a lot of confusion for end users who try to find disinfectants used by Hospitals, for instance, as an '=' won't find them, since they are part of the Use_site array.) My disappearing-facet confusion was documented in my own issue: https://github.com/mroswell/list-N/issues/57 (addressed by the plugin). Other facet-related issues here: https://github.com/mroswell/list-N/issues/54 (addressed by the plugin); https://github.com/mroswell/list-N/issues/15 (addressed by template); https://github.com/mroswell/list-N/issues/53 (not yet addressed). \r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1302/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 870946764, "node_id": "MDU6SXNzdWU4NzA5NDY3NjQ=", "number": 1312, "title": "how to query many-to-many relationship via json API?", "user": {"value": 5268174, "label": "bram2000"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-29T12:09:49Z", "updated_at": "2021-04-29T12:09:49Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi,\r\n\r\nFirstly thanks for Datasette, it's great!\r\n\r\nI'm trying to use the JSON API to query data from a Datasette instance. 
I have a simple 3 table many-to-many relationship, like so:\r\n\r\n`category` - list of categories\r\n`document` - list of documents\r\n`document_category` - join table (a category contains many documents, and a document can be a member of multiple categories)\r\n\r\nthe `document_category` table foreign keys to the other two using their respective row_ids.\r\n\r\nNow I want to return \"all documents within category X\" but I cannot see a way to do this without executing two queries; the first to lookup the row_id of category X, and the second to join `document` with `document_category` where category ID is .\r\n\r\nI could easily write this in SQL, but this makes programmatic handling of pagination much more difficult (we'd have to dynamically modify the SQL to select the row_id and include the correct where and limit clauses).\r\n\r\nIs there a way to achieve this using the JSON API?\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1312/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 871304967, "node_id": "MDU6SXNzdWU4NzEzMDQ5Njc=", "number": 1315, "title": "settings.json should be picked up by \"datasette publish cloudrun\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-29T18:16:41Z", "updated_at": "2021-04-29T18:16:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1315/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 892383270, "node_id": "MDExOlB1bGxSZXF1ZXN0NjQ1MTAwODQ4", "number": 12, "title": "Recovering of malformed ENEX file", "user": {"value": 8431437, "label": "engdan77"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-05-15T07:49:31Z", "updated_at": "2021-05-15T19:57:50Z", "closed_at": null, "author_association": "FIRST_TIMER", "pull_request": "dogsheep/evernote-to-sqlite/pulls/12", "body": "Hey .. Awesome work developing this project, that I found very useful to me and saved me some work.. Thanks.. :)\r\n\r\nSome background to this PR... \r\nI've been searching around for a tool allowing me to transforming my personal collection of Evernote notes to a format easier to search and potentially easier import to future services. \r\n\r\nNow I discovered problem processing my large data ~5GB using the existing source using Pythons builtin xml-parser that unfortunately was unable to succeed without exception breaking the process. \r\n\r\nMy first attempt I tried to adapt to more robust lxml package allowing huge data and with \"recover\", but even if it worked better it also failed processing the whole data. 
Even using the memory-efficient etree.iterparse() it unfortunately also ran into trouble.\r\n\r\nWith no luck finding any other library that could successfully parse this enormous file, I instead chose to build a \"hugexmlparser\" module that parses the huge file using yield (on a byte-by-byte level) and lets you set a maximum size to cater for potentially malformed or undesirably large attachments in the export, while handling potential exceptions. In some cases the parser discovers malformed XML within a note; in those cases it tries to save as much as possible by escaping the content (to be dealt with at a later stage, better than nothing), and if an end-tag is missing before a new (malformed?) element, it adds one after encountering the new start-tag.\r\n\r\nThe code for the recovery process is a bit rough and there is certainly room for refactoring, but at the moment it seems to achieve what I wanted.\r\n\r\nWith the above in place, we pass the result to a slightly changed version of save_note_recovery() to make sure the existing behaviour still works.\r\nI also added this as a new recover-enex command in click and kept the original options. \r\nA couple of new tests were added as well to exercise this command.\r\n\r\nThis currently works for me, but I thought I might share a PR in case you find use for it yourself, or it proves useful to others who find this repository.\r\n\r\nAs a second step, when time allows, it would be nice to also be able to easily export from SQLite to formatted HTML/MD with attachments saved... but that might be better as a separate project... or if you or someone else have something to share that would save some trouble, I would be interested ;-) ", "repo": {"value": 303218369, "label": "evernote-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 897212458, "node_id": "MDU6SXNzdWU4OTcyMTI0NTg=", "number": 63, "title": "Ability to fetch commits from branches other than the default", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-05-20T17:58:08Z", "updated_at": "2021-05-20T17:58:08Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "This tool is currently almost entirely ignorant of the concept of branches. One example: you can't retrieve commits from any branch other than the default (usually main).", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/63/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 915488244, "node_id": "MDU6SXNzdWU5MTU0ODgyNDQ=", "number": 1372, "title": "Add section to \"writing plugins\" about security, e.g.
avoiding XSS", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-08T20:49:33Z", "updated_at": "2021-06-08T20:49:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://docs.datasette.io/en/stable/writing_plugins.html should have tips on writing secure plugins.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1372/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 924203783, "node_id": "MDU6SXNzdWU5MjQyMDM3ODM=", "number": 1379, "title": "Idea: ?_end=1 option for streaming CSV responses", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-17T18:11:21Z", "updated_at": "2021-06-17T18:11:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "As discussed in this thread: https://twitter.com/simonw/status/1405554676993433605 - one of the disadvantages of Datasette's streaming CSV feature is that it's hard to tell if you got the whole file or if the connection ended early - or if an error occurred.\r\n\r\nIdea: offer an optional `?_end=1` parameter which, if enabled, adds a single row to the end of the CSV file that looks like this:\r\n\r\n`END,,,,,,,,,`\r\n\r\nFor however many columns the CSV file usually has.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1379/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 925384329, "node_id": "MDExOlB1bGxSZXF1ZXN0NjczODcyOTc0", "number": 7, "title": "Add instagram-to-sqlite", "user": {"value": 36654812, "label": "gavindsouza"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-19T12:26:16Z", "updated_at": "2021-07-28T07:58:59Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/7", "body": "The tool covers only chat imports at the time of opening this PR but I'm planning to import everything else that I feel inquisitive about\r\n\r\nref: https://github.com/gavindsouza/instagram-to-sqlite", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/7/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 925491857, "node_id": "MDU6SXNzdWU5MjU0OTE4NTc=", "number": 1383, "title": "Improve test coverage for `inspect.py`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-20T00:22:43Z", "updated_at": "2021-06-20T00:22:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": 
null, "body": "https://codecov.io/gh/simonw/datasette/src/main/datasette/inspect.py shows only 36% coverage for that module at the moment.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1383/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 927385540, "node_id": "MDU6SXNzdWU5MjczODU1NDA=", "number": 8, "title": "any guidance / experience on imessage-to-sqlite ?", "user": {"value": 2675621, "label": "Casyfill"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-22T15:46:16Z", "updated_at": "2021-06-22T15:46:16Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/8/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 930946817, "node_id": "MDU6SXNzdWU5MzA5NDY4MTc=", "number": 7, "title": "KeyError: 'accuracy' when processing Location History", "user": {"value": 403152, "label": "davidwilemski"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-27T14:39:43Z", "updated_at": "2021-06-27T14:39:43Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm new to both the dogsheep tools and datasette but have been experimenting a bit the last few days and these are really cool tools!\r\n\r\nI encountered a problem running my Google location history through this tool running the latest release in a docker container:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/google-takeout-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/google_takeout_to_sqlite/cli.py\", line 49, in my_activity\r\n utils.save_location_history(db, zf)\r\n File \"/usr/local/lib/python3.9/site-packages/google_takeout_to_sqlite/utils.py\", line 27, in save_location_history\r\n db[\"location_history\"].upsert_all(\r\n File \"/usr/local/lib/python3.9/site-packages/sqlite_utils/db.py\", line 1105, in upsert_all\r\n return self.insert_all(\r\n File \"/usr/local/lib/python3.9/site-packages/sqlite_utils/db.py\", line 990, in insert_all\r\n chunk = list(chunk)\r\n File \"/usr/local/lib/python3.9/site-packages/google_takeout_to_sqlite/utils.py\", line 33, in \r\n \"accuracy\": 
row[\"accuracy\"],\r\nKeyError: 'accuracy'\r\n```\r\n\r\nIt looks like the tool assumes the `accuracy` key will be in every location history entry. \r\n\r\nMy first attempt at a local patch to get myself going was to convert accessing the `accuracy` key to a `.get` instead to hopefully make the row nullable but I wasn't quite sure what `sqlite_utils` would do there. That did work in that the import happened and so I was going to propose a patch that made that change but in updating the existing test to include an entry with a missing accuracy entry, I noticed the expected type of the field appeared to be changing to a string in the test (and from a quick scan through the sqlite_utils code, probably TEXT in the database). Given this change in column type, it seemed that opening an issue first before proposing a fix seemed warranted. It seems the schema would need to be explicitly specified if you wanted a nullable integer column.\r\n\r\nNow that I've done a successful import run using my initial fix of calling `.get` on the row dict, I can see with datasette that I only have 7 data points (out of ~250k) that have a null accuracy column. They are all from 2011-2012 in an import that includes points spanning ~2010-2016 so perhaps another approach might be to filter those entries out during import if it really is that infrequent?\r\n\r\nI'm happy to provide a PR for a fix but figured I'd ask about which direction is preferred first.", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/7/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 947596222, "node_id": "MDExOlB1bGxSZXF1ZXN0NjkyNTU3Mzgx", "number": 1399, "title": "Multiple sort", "user": {"value": 87192257, "label": "jgryko5"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-07-19T12:20:14Z", "updated_at": "2021-07-19T12:20:14Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1399", "body": "Closes #197.\r\nI have added support for sorting by multiple parameters as mentioned in the issue above, and together with that, a suggestion on how to implement such sorting in the user interface.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1399/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 953218043, "node_id": "MDU6SXNzdWU5NTMyMTgwNDM=", "number": 1403, "title": "Labels explaining what hidden tables are for", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-07-26T19:29:22Z", "updated_at": "2022-03-21T22:20:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "A reasonable question: \"What are those hidden tables for?\"\r\n\r\nThis could be answered by adding a small piece of explanatory text to each table - based on if it's related to FTS or to SpatiaLite or configured to be hidden for 
some other reason.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1403/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 957315684, "node_id": "MDU6SXNzdWU5NTczMTU2ODQ=", "number": 1410, "title": "Rename settings to `default_allow_facet` and `default_allow_download` and `default_allow_csv_stream`", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2021-07-31T20:27:12Z", "updated_at": "2021-07-31T20:27:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> If I was prone to over-thinking (which I am) I'd note that `allow_facet` and `allow_download` and `allow_csv_stream` are all settings that do NOT have an equivalent in the newer permissions system, which is itself a little weird and inconsistent.\r\n>\r\n> So maybe there's a future task where I introduce those as both permissions and metadata `\"allow_x\"` blocks, then rename the settings themselves to be called `default_allow_facet` and `default_allow_download` and `default_allow_csv_stream`.\r\n>\r\n> If I was going to do that I should get it in before Datasette 1.0.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1409#issuecomment-890400425_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1410/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 957347860, "node_id": "MDU6SXNzdWU5NTczNDc4NjA=", "number": 1412, "title": "Mention WAL mode in the documentation (plus backup tips)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-08-01T00:27:11Z", "updated_at": "2021-08-01T00:27:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is useful for people who are deploying Datasette with any write functionality, especially if they might be using something like EFS.\r\n\r\nI can add a section about this to the bottom of https://docs.datasette.io/en/stable/deploying.html and then link to that section from both https://docs.datasette.io/en/stable/sql_queries.html#writable-canned-queries and https://docs.datasette.io/en/stable/internals.html#await-db-execute-write-sql-params-none-block-false\r\n\r\nAlso useful: mention that just copying a SQLite database file while it is being written to may not get a consistent file, so tell people to use one of the SQLite backup mechanisms instead.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1412/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 970626625, "node_id": 
"MDU6SXNzdWU5NzA2MjY2MjU=", "number": 1435, "title": "Turn off suggest facets on tables with large numbers of columns", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-08-13T18:30:48Z", "updated_at": "2021-08-13T18:30:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "If a table has 200 columns it will take multiple seconds to try and suggest facets. I should either quit after the first 20 or not suggest facets at all.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1435/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 981690086, "node_id": "MDExOlB1bGxSZXF1ZXN0NzIxNjg2NzIx", "number": 67, "title": "Replacing step ID key with step_id", "user": {"value": 16374374, "label": "jshcmpbll"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-08-28T01:26:41Z", "updated_at": "2021-08-28T01:27:00Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/github-to-sqlite/pulls/67", "body": "Workflows that have an `id` in any step result in the following error when running `workflows`:\r\n\r\ne.g.`github-to-sqlite workflows github.db nixos/nixpkgs`\r\n\r\n```Traceback (most recent call last):\r\n File \"/usr/local/bin/github-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 1137, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 1062, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 1668, in invoke```Traceback (most recent call last):\r\n File \"/usr/local/bin/github-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 1137, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 1062, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 1668, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.8/dist-packages/click/core.py\", line 763, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/github_to_sqlite/cli.py\", line 601, in workflows\r\n utils.save_workflow(db, repo_id, filename, content)\r\n File \"/usr/local/lib/python3.8/dist-packages/github_to_sqlite/utils.py\", line 865, in save_workflow\r\n db[\"steps\"].insert_all(\r\n File \"/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py\", line 2596, in insert_all\r\n self.insert_chunk(\r\n File \"/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py\", line 2378, in insert_chunk\r\n result = self.db.execute(query, params)\r\n File \"/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py\", line 419, in execute\r\n return self.conn.execute(sql, parameters)\r\nsqlite3.IntegrityError: datatype mismatch\r\n```\r\n\r\n 
- [Information about the ID key in a step for GHA](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepsid)\r\n - [An example workflow from a public repo](https://github.com/NixOS/nixpkgs/blob/b4cc66827745e525ce7bb54659845ac89788a597/.github/workflows/direct-push.yml#L16)\r\n\r\n# Changes\r\nI'm proposing that the key for `id` in a step is replaced with `step_id` so that it no longer interferes with the table's `id` for tracking the record.\r\n\r\nSpecial thanks to @sarcasticadmin @egiffen and @ruebenramirez for helping a bit on this \ud83d\ude04 ", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/67/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 982803408, "node_id": "MDU6SXNzdWU5ODI4MDM0MDg=", "number": 1454, "title": "Feature Request: Publish to IPFS", "user": {"value": 1560788, "label": "blitmap"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-08-30T13:36:18Z", "updated_at": "2021-08-30T13:36:18Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hello,\r\n\r\nI am a huge fan of this being used for exploring data. I think it has a lot of flexibility not found in other tools.\r\n\r\nI'm not sure if what I'm asking for is possible: Can this be extended to publish to IPFS?\r\n\r\nIPFS is an attractive hosting option for decentralized journalism.\r\n\r\nFood for thought ~", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1454/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 983221851, "node_id": "MDU6SXNzdWU5ODMyMjE4NTE=", "number": 34, "title": "Data folder as index command parameter", "user": {"value": 1223625, "label": "humrochagf"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-08-30T21:29:33Z", "updated_at": "2021-08-30T21:29:33Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi,\r\n\r\nFirst of all, thank you for this wonderful project :smile:\r\n\r\nI started using dogsheep to make my personal data searchable, and in doing so I noticed an issue with the index command.\r\n\r\nIt always expects to be run from the folder where the data is located, so I got some errors while trying to make it work with my setup.\r\n\r\nI keep all my databases inside a separate `data` folder (I published my setup to make it easier to follow: https://github.com/humrochagf/my-dogsheep)\r\n\r\nFirst, I configured `dogsheep.yml` to add the data folder to its paths, like this:\r\n\r\n```yml\r\ndata/twitter.db:\r\n tweets:\r\n sql: |-\r\n...\r\n```\r\n\r\nThen I ran the index command like this:\r\n\r\n```\r\ndogsheep-beta index data/dogsheep.db dogsheep.yml\r\n```\r\n\r\nThis worked fine for the normal search feature, but when I started adding `display_sql` rules the app started to crash, because Datasette's `get_database` was looking for `data/twitter` and 
it only had a db called `twitter` there.\r\n\r\nSo my workaround to that was to cd into the data folder and run the indexer. You can check the way I'm doing it at this line of the makefile: https://github.com/humrochagf/my-dogsheep/blob/main/makefile#L3\r\n\r\nIt works but it would be nice to have an option to pass the path where the data is located to the index function.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/34/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 987985935, "node_id": "MDExOlB1bGxSZXF1ZXN0NzI2OTkwNjgw", "number": 35, "title": "Support for Datasette's --base-url setting", "user": {"value": 2670795, "label": "brandonrobertz"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-03T17:47:45Z", "updated_at": "2021-09-03T17:47:45Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep-beta/pulls/35", "body": "This makes it so you can use Dogsheep if you're using Datasette with the `--base-url /some-path/` setting.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/35/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 988552851, "node_id": "MDU6SXNzdWU5ODg1NTI4NTE=", "number": 1456, "title": "conda install results in non-functioning `datasette serve` due to out-of-date asgiref", "user": {"value": 51016, "label": "ctb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-05T16:59:55Z", "updated_at": "2021-09-05T16:59:55Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Over in https://github.com/ctb/2021-sourmash-datasette, I discovered that the following commands fail:\r\n```\r\nconda create -n datasette4 -y datasette=0.58.1\r\nconda activate datasette4\r\ndatasette gathertax.db\r\n```\r\nwith `ImportError: cannot import name 'WebSocketScope' from 'asgiref.typing'`.\r\n\r\nThis appears to be because asgiref 3.3.4 doesn't have WebSocketScope, but later versions do - a simple\r\n\r\n```\r\npip install asgiref==3.4.1\r\n```\r\n\r\nfixes the problem for me, at least to the point where I can run datasette and poke around as usual.\r\n\r\nI note that over in the conda-forge recipe, https://github.com/conda-forge/datasette-feedstock/blob/master/recipe/meta.yaml pins asgiref to < 3.4.0, but I'm not sure why - so I'm not sure how to best resolve this issue :).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1456/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 989109888, "node_id": "MDU6SXNzdWU5ODkxMDk4ODg=", "number": 1460, "title": "Override column metadata with metadata from 
another column", "user": {"value": 72577720, "label": "MichaelTiemannOSC"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-06T12:13:33Z", "updated_at": "2021-09-06T12:13:33Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I have a table from the PUDL project (https://github.com/catalyst-cooperative/pudl) that looks like this:\r\n\r\n```\r\nCREATE TABLE fuel_ferc1 (\r\n\tid INTEGER NOT NULL, \r\n\trecord_id TEXT, \r\n\tutility_id_ferc1 INTEGER, \r\n\treport_year INTEGER, \r\n\tplant_name_ferc1 TEXT, \r\n\tfuel_type_code_pudl VARCHAR(7), \r\n\tfuel_unit VARCHAR(7),\r\n\tfuel_qty_burned FLOAT,\r\n\tfuel_mmbtu_per_unit FLOAT, \r\n\tfuel_cost_per_unit_burned FLOAT, \r\n\tfuel_cost_per_unit_delivered FLOAT, \r\n\tfuel_cost_per_mmbtu FLOAT, \r\n\tPRIMARY KEY (id), \r\n\tFOREIGN KEY(plant_name_ferc1, utility_id_ferc1) REFERENCES plants_ferc1 (plant_name_ferc1, utility_id_ferc1), \r\n\tCONSTRAINT fuel_ferc1_fuel_type_code_pudl_enum CHECK (fuel_type_code_pudl IN ('coal', 'oil', 'gas', 'solar', 'wind', 'hydro', 'nuclear', 'waste', 'unknown')), \r\n\tCONSTRAINT fuel_ferc1_fuel_unit_enum CHECK (fuel_unit IN ('ton', 'mcf', 'bbl', 'gal', 'kgal', 'gramsU', 'kgU', 'klbs', 'btu', 'mmbtu', 'mwdth', 'mwhth', 'unknown'))\r\n);\r\n```\r\n\r\nNote that `fuel_unit` is a unit that **pint** can understand, and that `fuel_qty_burned` is a column of data that could be expressed in terms of actual units, not merely as a dimensionless number. Ditto the `fuel_cost_per_unit_...` columns. Is there a way to give a column a default metadata unit (such as *tons* or *USD/ton*) and then let that be overridden when the metadata in another column says *barrels* or *USD/gramsU*?\r\n\r\n@catalyst-cooperative", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1460/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 991206402, "node_id": "MDExOlB1bGxSZXF1ZXN0NzI5NzA0NTM3", "number": 1465, "title": "add support for -o --get /path", "user": {"value": 51016, "label": "ctb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-08T14:30:42Z", "updated_at": "2021-09-08T14:31:45Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1465", "body": "Fixes https://github.com/simonw/datasette/issues/1459\r\n\r\nAdds support for `--open --get /path` to be used in combination.\r\n\r\nIf `--open` is provided alone, datasette will open a web page to a default URL.\r\nIf `--get ` is provided alone, datasette will output the result of doing a GET to that URL and then exit.\r\nIf `--open --get ` are provided together, datasette will open a web page to that URL.\r\n\r\nTODO items:\r\n- [ ] update documentation\r\n- [ ] print out error message when `--root --open --get ` is used\r\n- [ ] adjust code to require that `` start with a `/` when `-o --get ` is used\r\n- [ ] add test(s)\r\n\r\nnote, '@CTB' is used in this PR to flag code that needs revisiting.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/1465/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 1, "state_reason": null} {"id": 1001104942, "node_id": "PR_kwDOBm6k_c4r-EVH", "number": 1475, "title": "feat: allow joins using _through in both directions", "user": {"value": 5268174, "label": "bram2000"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-20T15:28:20Z", "updated_at": "2021-09-20T15:28:20Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1475", "body": "Currently the `_through` clause can only work if the FK relationship is defined in a specific direction. I don't think there is any reason for this limitation, as an FK allows joining in both directions.\r\n\r\nThis is an admittedly hacky change to implement bidirectional joins using `_through`. It does work for our use-case, but I don't know if there are other implications that I haven't thought of. Also if this change is desirable we probably want to make the code a little nicer.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1475/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1006016302, "node_id": "I_kwDOBm6k_c479pcu", "number": 1477, "title": "Consider adding request to the documented default template context", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-24T02:34:09Z", "updated_at": "2021-09-24T02:34:09Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I made a plugin for this today but I think perhaps it should be a default thing instead: https://datasette.io/plugins/datasette-template-request", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1477/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1006781949, "node_id": "I_kwDOBm6k_c48AkX9", "number": 1478, "title": "Documentation Request: Feature alternative ID instead of default ID", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-24T19:56:13Z", "updated_at": "2021-09-25T16:18:54Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "My data already has an ID that comes from a federal agency.\r\nWould love to have documentation on how to modify the template to:\r\n- Remove the generated ID from the table\r\n- Link the federal ID to the detail page\r\n- and to ensure that the JSON file uses that as the ID. I'd be happy to include the database ID in the export, but not as a key.\r\n\r\nI don't want to remove the ID from the database, though, because my experience with the federal agency is that data often has anomalies. 
I don't want all hell to break loose if they end up applying the same ID to multiple rows (which they haven't done yet). I just don't want it to display in the table or the data exports.\r\n\r\nPerhaps this isn't a template issue, maybe more of a db manipulation...\r\n\r\nMargie", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1478/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
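A minimal sketch of the "db manipulation" route floated in #1478 above, not part of the original issue: a SQL view that hides the autogenerated `id` and leads with the agency identifier, which Datasette serves alongside ordinary tables. It uses sqlite-utils, and every file, table, and column name here (`data.db`, `records`, `federal_id`) is hypothetical.

```python
import sqlite_utils

# Hypothetical database and schema, purely for illustration.
db = sqlite_utils.Database("data.db")
db["records"].insert_all(
    [{"id": 1, "federal_id": "EPA-0001", "name": "Example site"}],
    pk="id",
    replace=True,
)

# The view omits the internal id and leads with the federal identifier;
# Datasette exposes views next to tables, so browsing and exports can
# point at the view while the table keeps its id for safety.
db.create_view(
    "records_by_federal_id",
    "select federal_id, name from records",
    replace=True,
)
```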