{"id": 1239034903, "node_id": "I_kwDOCGYnMM5J2iwX", "number": 433, "title": "CLI eats my cursor", "user": {"value": 7908073, "label": "chapmanjacobd"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2022-05-17T18:52:52Z", "updated_at": "2023-11-04T00:46:30Z", "closed_at": "2023-11-04T00:46:30Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm not sure why this happens but `sqlite-utils` makes my terminal cursor disappear after running commands like `sqlite-utils insert`. I've only noticed this behavior in `sqlite-utils`, not in any other CLI tools\r\n\r\nI can still type commands after it runs but the text cursor is invisible", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/433/reactions\", \"total_count\": 5, \"+1\": 5, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1773458985, "node_id": "PR_kwDOCGYnMM5T2mMb", "number": 560, "title": "Use sqlean if available in environment", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2023-06-25T19:48:48Z", "updated_at": "2023-06-26T08:21:00Z", "closed_at": "2023-06-25T23:25:51Z", "author_association": "OWNER", "pull_request": "simonw/sqlite-utils/pulls/560", "body": "Refs:\r\n- #559 \r\n\r\n\r\n----\n:books: Documentation preview :books:: https://sqlite-utils--560.org.readthedocs.build/en/560/\n\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/560/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1686042269, "node_id": "I_kwDOBm6k_c5kfvad", "number": 2066, "title": "Failing test: httpx.InvalidURL: URL too long", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2023-04-27T03:48:47Z", "updated_at": "2023-04-27T04:27:50Z", "closed_at": "2023-04-27T04:27:50Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/actions/runs/4815723640/jobs/8574667731\r\n```\r\n def urlparse(url: str = \"\", **kwargs: typing.Optional[str]) -> ParseResult:\r\n # Initial basic checks on allowable URLs.\r\n # ---------------------------------------\r\n \r\n # Hard limit the maximum allowable URL length.\r\n if len(url) > MAX_URL_LENGTH:\r\n> raise InvalidURL(\"URL too long\")\r\nE httpx.InvalidURL: URL too long\r\n\r\n/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/httpx/_urlparse.py:155: InvalidURL\r\n=========================== short test summary info ============================\r\nFAILED tests/test_csv.py::test_max_csv_mb - httpx.InvalidURL: URL too long\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2066/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, 
\"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 957310278, "node_id": "MDU6SXNzdWU5NTczMTAyNzg=", "number": 1409, "title": "`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting)", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 10, "created_at": "2021-07-31T19:48:56Z", "updated_at": "2023-01-07T18:06:01Z", "closed_at": "2023-01-05T00:51:31Z", "author_association": "OWNER", "pull_request": null, "body": "In 49d6d2f7b0f6cb02e25022e1c9403811f1fa0a7c as part of #813 I removed the `allow_sql` setting - on the basis that users could disable the ability to execute custom SQL queries using the new permission system instead.\r\n\r\nI don't think this was the right decision. Disabling custom SQL is an important security capability, and explaining how to do it using permissions is significantly more complex than letting people know they can add `--setting allow_sql off`.\r\n\r\nSo I want to bring that setting back - maybe with a different, better name - and have it modify the default for that option if the permissions system doesn't have an opinion.\r\n\r\nThat way people can still use the setting but then use permissions to allow specific signed-in users access to execute SQL.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1409/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1493471221, "node_id": "I_kwDOBm6k_c5ZBI_1", "number": 1949, "title": "`.json` errors should be returned as JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 10, "created_at": "2022-12-13T06:14:12Z", "updated_at": "2022-12-15T00:46:27Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Eg the error in this issue:\r\n- #1945 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1949/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1108671952, "node_id": "I_kwDOBm6k_c5CFP3Q", "number": 1605, "title": "Scripted exports", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2022-01-19T23:45:55Z", "updated_at": "2022-11-30T15:06:38Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Posting this while I'm thinking about it: I mentioned at the end of [this thread](https://twitter.com/eyeseast/status/1483893011658551299) that I'm usually doing `datasette --get` to export canned queries.\r\n\r\nI used to use a tool called [datafreeze](https://github.com/pudo/datafreeze) to do scripted exports, but that project looks dead now. 
The ergonomics of it are pretty nice, though, and the `Freezefile.yml` structure is actually not too far from Datasette's canned queries.\r\n\r\nThis is related to the idea for `datasette query` (#1356) but I think it's a distinct feature. It's most likely a plugin, but I want to raise it here because it's probably something other people have thought about.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1605/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 639072811, "node_id": "MDU6SXNzdWU2MzkwNzI4MTE=", "number": 849, "title": "Rename master branch to main", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 10, "created_at": "2020-06-15T19:05:54Z", "updated_at": "2022-10-27T13:57:08Z", "closed_at": "2020-09-15T20:37:14Z", "author_association": "OWNER", "pull_request": null, "body": "I was waiting for consensus to form around this (and kind-of hoping for `trunk` since I like the tree metaphor) and it looks like `main` is it.\r\n\r\nI've seen convincing arguments against `trunk` too - it indicates that the branch has some special significance like in Subversion (where all branches come from trunk) when it doesn't. So `main` is better anyway.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/849/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1423000702, "node_id": "I_kwDOCGYnMM5U0UR-", "number": 503, "title": "test_recreate failing on Windows Python 3.11", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2022-10-25T20:01:41Z", "updated_at": "2022-10-25T20:47:34Z", "closed_at": "2022-10-25T20:45:43Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/sqlite-utils/actions/runs/3323672128/jobs/5494726927\r\n\r\nRelated:\r\n- #502\r\n\r\n```\r\nFAILED tests/test_recreate.py::test_recreate[True-True] - \r\n PermissionError: [WinError 32] The process cannot access the file because it is being used by another process:\r\n 'C:\\\\Users\\\\runneradmin\\\\AppData\\\\Local\\\\Temp\\\\pytest-of-runneradmin\\\\pytest-0\\\\test_recreate_True_True_0\\\\data.db'\r\nFAILED tests/test_recreate.py::test_recreate[False-True] - \r\n PermissionError: [WinError 32] The process cannot access the file because it is being used by another process:\r\n 'C:\\\\Users\\\\runneradmin\\\\AppData\\\\Local\\\\Temp\\\\pytest-of-runneradmin\\\\pytest-0\\\\test_recreate_False_True_0\\\\data.db'\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/503/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, 
"state_reason": "completed"} {"id": 1397084281, "node_id": "I_kwDOBm6k_c5TRdB5", "number": 1831, "title": "If user can see table but NOT database/instance nav links should not display", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2022-10-05T02:16:31Z", "updated_at": "2022-10-13T21:52:04Z", "closed_at": "2022-10-13T21:52:04Z", "author_association": "OWNER", "pull_request": null, "body": "Spotted this bug while building this plugin:\r\n- https://github.com/simonw/datasette-public\r\n\r\nThis is a public table, but the two links in the nav go to forbidden pages:\r\n\r\n\"image\"\r\n\r\nThose nav links shouldn't be shown at all.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1831/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 447469253, "node_id": "MDU6SXNzdWU0NDc0NjkyNTM=", "number": 485, "title": "Improvements to table label detection ", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": {"value": 9599, "label": "simonw"}, "milestone": null, "comments": 10, "created_at": "2019-05-23T06:19:49Z", "updated_at": "2022-10-03T00:04:42Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Label detection doesn't work if the primary key is called pk rather than id, so this page doesn't work: https://latest.datasette.io/fixtures/roadside_attraction_characteristics\r\n\r\nCode is here: \r\n\r\nhttps://github.com/simonw/datasette/blob/cccea85be6aaaeadb31f3b588ec7f732628815f5/datasette/app.py#L644-L653", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/485/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 903986178, "node_id": "MDU6SXNzdWU5MDM5ODYxNzg=", "number": 1344, "title": "Test Datasette Docker images built for different architectures", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-05-27T16:52:29Z", "updated_at": "2022-09-06T00:07:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Continuing on from #1319 - now that we have the ability to build Datasette's Docker image against multiple architectures we should test that it works.\r\n\r\nWe can do this with QEMU emulation, see https://twitter.com/nevali/status/1397958044571602945", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1344/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1170355774, "node_id": "I_kwDOBm6k_c5FwjY-", "number": 1661, "title": "Remove Hashed URL mode", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, 
"assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 10, "created_at": "2022-03-15T23:13:56Z", "updated_at": "2022-03-19T00:37:37Z", "closed_at": "2022-03-19T00:37:36Z", "author_association": "OWNER", "pull_request": null, "body": "It's now handled by a plugin instead:\r\n- #647\r\n- https://github.com/simonw/datasette-hashed-urls/issues/3\r\n\r\nhttps://github.com/simonw/datasette-hashed-urls\r\n\r\nSub-tasks:\r\n\r\n- [x] Remove hashed URL mode implementation\r\n- [x] Update documentation\r\n- [x] Ensure `--setting hash_urls 1` shows a useful message", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1661/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 646737558, "node_id": "MDU6SXNzdWU2NDY3Mzc1NTg=", "number": 870, "title": "Refactor default views to use register_routes", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-06-27T18:53:12Z", "updated_at": "2022-03-15T20:07:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It would be much cleaner if Datasette's default views were all registered using the new `register_routes()` plugin hook. Could dramatically reduce the code in `datasette/app.py`.\r\n\r\n> The ideal fix here would be to rework my `BaseView` subclass mechanism to work with `register_routes()` so that those views don't have any special privileges above plugin-provided views.\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/864#issuecomment-648580556_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/870/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 910092577, "node_id": "MDU6SXNzdWU5MTAwOTI1Nzc=", "number": 1356, "title": "Research: syntactic sugar for using --get with SQL queries, maybe \"datasette query\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-06-03T04:49:42Z", "updated_at": "2022-01-20T01:06:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by https://github.com/simonw/sqlite-utils/issues/264 - in particular this example:\r\n```\r\ndatasette covid.db --get='/covid.yaml?sql=select * from ny_times_us_counties limit 1' \r\n- date: '2020-01-21'\r\n county: Snohomish\r\n state: Washington\r\n fips: 53061\r\n cases: 1\r\n deaths: 0\r\n```\r\nHaving to construct that URL - including potentially URL escaping the SQL query - isn't a great developer experience.\r\n\r\nImagine if you could do this instead:\r\n\r\n datasette covid.db --query \"select * from ny_times_us_counties limit 1\" --format yaml\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1356/reactions\", 
\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 777140799, "node_id": "MDU6SXNzdWU3NzcxNDA3OTk=", "number": 1166, "title": "Adopt Prettier for JavaScript code formatting", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-12-31T21:25:27Z", "updated_at": "2022-01-13T22:22:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://prettier.io/ - I'm going to go with 2 spaces.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1166/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 642651572, "node_id": "MDU6SXNzdWU2NDI2NTE1NzI=", "number": 860, "title": "Plugin hook for instance/database/table metadata", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-06-21T22:20:25Z", "updated_at": "2022-01-13T22:21:42Z", "closed_at": "2021-06-26T22:56:28Z", "author_association": "OWNER", "pull_request": null, "body": "I'm not happy with how `metadata.(json|yaml)` keeps growing new features. Rather than having a single plugin hook for all of `metadata.json` I'm going to split out the feature that shows actual real metadata for tables and databases - `source`, `license` etc - into its own plugin-powered mechanism.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/357#issuecomment-647189045_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/860/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1096563265, "node_id": "I_kwDOCGYnMM5BXDpB", "number": 366, "title": "Python library methods for calling ANALYZE", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7558727, "label": "3.21"}, "comments": 10, "created_at": "2022-01-07T18:28:01Z", "updated_at": "2022-01-11T01:09:33Z", "closed_at": "2022-01-11T01:09:33Z", "author_association": "OWNER", "pull_request": null, "body": "> Relevant documentation: https://www.sqlite.org/lang_analyze.html\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1007633376_", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/366/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 807437089, "node_id": "MDU6SXNzdWU4MDc0MzcwODk=", "number": 228, "title": "--no-headers option for CSV and TSV", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, 
"milestone": null, "comments": 10, "created_at": "2021-02-12T17:56:51Z", "updated_at": "2021-12-26T07:01:31Z", "closed_at": "2021-02-14T22:25:17Z", "author_association": "OWNER", "pull_request": null, "body": "https://bl.iro.bl.uk/work/ns/3037474a-761c-456d-a00c-9ef3c6773f4c has a fascinating CSV file that doesn't have a header row - it starts like this:\r\n\r\n```csv\r\nComputation and measurement of turbulent flow through idealized turbine blade passages,,\"Loizou, Panos A.\",https://isni.org/isni/0000000136122593,,University of Manchester,https://isni.org/isni/0000000121662407,1989,Thesis (Ph.D.),,Physical Sciences,,,https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.232781,\r\n\"Prolactin and growth hormone secretion in normal, hyperprolactinaemic and acromegalic man\",,\"Prescott, R. W. G.\",https://isni.org/isni/0000000134992122,,University of Newcastle upon Tyne,https://isni.org/isni/0000000104627212,1983,Thesis (Ph.D.),,Biological Sciences,,,https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.232784,\r\n```\r\n\r\nIt would be useful if `sqlite-utils insert ... --csv` had a mechanism for importing files like this one.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/228/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1058815557, "node_id": "I_kwDOBm6k_c4_HD5F", "number": 1521, "title": "Docker configuration for exercising Datasette behind Apache mod_proxy", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-11-19T18:46:18Z", "updated_at": "2021-11-19T20:32:29Z", "closed_at": "2021-11-19T20:32:29Z", "author_association": "OWNER", "pull_request": null, "body": "> Having a live demo running on Cloud Run that proxies through Apache and uses `base_url` would be incredibly useful for replicating and debugging this kind of thing. I wonder how hard it is to run Apache and `mod_proxy` in the same Docker container as Datasette?\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1519#issuecomment-974310208_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1521/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 976399638, "node_id": "MDU6SXNzdWU5NzYzOTk2Mzg=", "number": 319, "title": "[Enhancement] Please allow 'insert-files' to insert content as text.", "user": {"value": 66709385, "label": "pjamargh"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-08-22T15:10:46Z", "updated_at": "2021-08-24T23:33:45Z", "closed_at": "2021-08-24T23:33:44Z", "author_association": "NONE", "pull_request": null, "body": "'insert-files' creates BLOB columns for file contents. Transforming the column to TEXT still keep the content as binary. 
Even though I'm sure there is a transform that can be applied decoding the text it would be great to have a argument to make 'insert-files' to do it as text (with optional text encoding).\r\n\r\nThe use case is a bunch of htmls (single file) on a directory structure that inserted with this command could be served in Datasette allowing full text search.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/319/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 965143346, "node_id": "MDExOlB1bGxSZXF1ZXN0NzA3NDkwNzg5", "number": 312, "title": "Add reference page to documentation using Sphinx autodoc", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-08-10T16:59:17Z", "updated_at": "2021-08-10T23:09:32Z", "closed_at": "2021-08-10T23:09:28Z", "author_association": "OWNER", "pull_request": "simonw/sqlite-utils/pulls/312", "body": "Refs #311.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/312/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 893537744, "node_id": "MDU6SXNzdWU4OTM1Mzc3NDQ=", "number": 1331, "title": "Add support for Jinja2 version 3.0", "user": {"value": 475613, "label": "MarkusH"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-05-17T17:14:36Z", "updated_at": "2021-05-23T00:57:39Z", "closed_at": "2021-05-23T00:57:39Z", "author_association": "NONE", "pull_request": null, "body": "A week ago, [The Pallets Project](https://github.com/pallets) released [new major versions of several of its projects](https://palletsprojects.com/blog/flask-2-0-released/). 
Among those updates is one for Jinja2, which bumps it to version 3.0.0.\r\n\r\nI'd like for datasette to support Jinja2 version 3.0.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1331/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 497170355, "node_id": "MDU6SXNzdWU0OTcxNzAzNTU=", "number": 576, "title": "Documented internals API for use in plugins", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 10, "created_at": "2019-09-23T15:28:50Z", "updated_at": "2021-01-05T23:12:51Z", "closed_at": "2021-01-05T23:12:37Z", "author_association": "OWNER", "pull_request": null, "body": "Quite a few of the plugin hooks make a `datasette` instance of the Datasette class available to the plugins, so that they can look up configuration settings and execute database queries.\r\n\r\nThis means it should provide a documented, stable API so that plugin authors can rely on it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/576/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 732798913, "node_id": "MDU6SXNzdWU3MzI3OTg5MTM=", "number": 1064, "title": "Navigation menu plus plugin hook", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6026070, "label": "0.51"}, "comments": 10, "created_at": "2020-10-30T00:49:36Z", "updated_at": "2020-10-30T03:45:16Z", "closed_at": "2020-10-30T03:45:16Z", "author_association": "OWNER", "pull_request": null, "body": "Needed for #690.
Prototype in https://github.com/simonw/datasette/commit/0d7ac764861d84be24d661cf4104ce61ea11a82a", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1064/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 729057388, "node_id": "MDU6SXNzdWU3MjkwNTczODg=", "number": 1050, "title": "Switch to .blob render extension for BLOB downloads", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6026070, "label": "0.51"}, "comments": 10, "created_at": "2020-10-25T16:26:21Z", "updated_at": "2020-10-29T22:01:39Z", "closed_at": "2020-10-29T22:01:39Z", "author_association": "OWNER", "pull_request": null, "body": "This may require a complete rethink of the `/db/table/-/blob/row/column.blob` mechanism I just built for #1036.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1050/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 472115381, "node_id": "MDU6SXNzdWU0NzIxMTUzODE=", "number": 49, "title": "extracts= should support multiple-column extracts", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2019-07-24T07:06:41Z", "updated_at": "2020-10-16T19:18:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Lookup tables can be constructed on compound columns, but the `extracts=` option doesn't currently support that.\r\n\r\nRight now extracts can be defined in two ways:\r\n```python\r\n# Extract these columns into tables with the same name:\r\ndogs = db.table(\"dogs\", extracts=[\"breed\", \"most_recent_trophy\"])\r\n\r\n# Same as above but with custom table names:\r\ndogs = db.table(\"dogs\", extracts={\"breed\": \"Breeds\", \"most_recent_trophy\": \"Trophies\"})\r\n```\r\nNeed some kind of syntax for much more complicated extractions, like when two columns (say \"source\" and \"source_version\") are extracted into a single table.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/49/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 695319258, "node_id": "MDU6SXNzdWU2OTUzMTkyNTg=", "number": 149, "title": "FTS table with 7 rows has _fts_docsize table with 9,141 rows", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-09-07T18:06:16Z", "updated_at": "2020-09-07T21:16:34Z", "closed_at": "2020-09-07T21:16:34Z", "author_association": "OWNER", "pull_request": null, "body": "I'm seeing a weird issue with some of the SQLite databases that I am using with the FTS5 module.\r\n\r\nI have a database with a `licenses` table that 
contains 7 rows: \r\n\r\nThe FTS table also has 7 rows: \r\n\r\nSomehow the accompanying `licenses_fts_docsize` shadow table now has 9,141 rows in it! \r\n\r\nAnd `licenses_fts_data` has 41 rows - should I expect that to have 7 rows? \r\n\r\nI have a hunch that it might be a problem with the triggers. These are the triggers that are updating that FTS table: \r\n\r\n| type | name | tbl_name | rootpage | sql |\r\n| --- | --- | --- | --- | --- |\r\n| trigger | licenses_ai | licenses | 0 | `CREATE TRIGGER [licenses_ai] AFTER INSERT ON [licenses] BEGIN INSERT INTO [licenses_fts] (rowid, [name]) VALUES (new.rowid, new.[name]); END` |\r\n| trigger | licenses_ad | licenses | 0 | `CREATE TRIGGER [licenses_ad] AFTER DELETE ON [licenses] BEGIN INSERT INTO [licenses_fts] ([licenses_fts], rowid, [name]) VALUES('delete', old.rowid, old.[name]); END` |\r\n| trigger | licenses_au | licenses | 0 | `CREATE TRIGGER [licenses_au] AFTER UPDATE ON [licenses] BEGIN INSERT INTO [licenses_fts] ([licenses_fts], rowid, [name]) VALUES('delete', old.rowid, old.[name]); INSERT INTO [licenses_fts] (rowid, [name]) VALUES (new.rowid, new.[name]); END` |", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/149/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 665700495, "node_id": "MDU6SXNzdWU2NjU3MDA0OTU=", "number": 122, "title": "CLI utility for inserting binary files into SQLite", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-07-26T03:27:39Z", "updated_at": "2020-07-27T07:10:41Z", "closed_at": "2020-07-27T07:09:03Z", "author_association": "OWNER", "pull_request": null, "body": "SQLite BLOB columns can store entire binary files. The challenge is inserting them, since they don't neatly fit into JSON objects.\r\n\r\nIt would be great if the `sqlite-utils` CLI had a trick for helping with this.\r\n\r\nInspired by https://github.com/simonw/datasette-media/issues/14", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/122/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 632753851, "node_id": "MDU6SXNzdWU2MzI3NTM4NTE=", "number": 806, "title": "Release Datasette 0.44", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5512395, "label": "Datasette 0.44"}, "comments": 10, "created_at": "2020-06-06T21:49:52Z", "updated_at": "2020-06-12T01:20:03Z", "closed_at": "2020-06-12T01:20:03Z", "author_association": "OWNER", "pull_request": null, "body": "See also [milestone](https://github.com/simonw/datasette/milestone/14). 
This is a pretty big release: flash messaging, writable canned queries, authentication and permissions!\r\n\r\nI'll want to ship some plugin releases in conjunction with this - `datasette-auth-github` for example.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/806/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 613006393, "node_id": "MDU6SXNzdWU2MTMwMDYzOTM=", "number": 20, "title": "Ability to serve thumbnailed Apple Photo from its place on disk", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-05-06T02:17:50Z", "updated_at": "2020-05-25T20:14:22Z", "closed_at": "2020-05-25T20:09:41Z", "author_association": "MEMBER", "pull_request": null, "body": "A custom Datasette plugin that can be run locally on a Mac laptop which knows how to serve photos such that they can be seen in the browser.\r\n\r\n_Originally posted by @simonw in https://github.com/dogsheep/photos-to-sqlite/issues/19#issuecomment-624406285_", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/20/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 594189527, "node_id": "MDU6SXNzdWU1OTQxODk1Mjc=", "number": 717, "title": "See if I can get Datasette working on Zeit Now v2", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-04-05T00:56:48Z", "updated_at": "2020-04-06T22:47:22Z", "closed_at": "2020-04-06T22:47:21Z", "author_association": "OWNER", "pull_request": null, "body": "I thought this was impossible because AWS Lambda doesn't ship the `sqlite3` standard library module... but apparently that's not the case on Now v2 any more!\r\n\r\nhttps://now-2-python-versions-ks69olzpi.now.sh/api\r\n\r\n```\r\n _________________________________________________________________________________________________________________________________________________________________ \r\n/ Hello from Python from a ZEIT Now Serverless Function! Version is 3.6.10 (default, Mar 10 2020, 22:54:43) \\\r\n\\ [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)], sqlite3 module = , sqlite3 version = [('3.7.17',)] /\r\n ----------------------------------------------------------------------------------------------------------------------------------------------------------------- \r\n \\ ^__^\r\n \\ (oo)\\_______\r\n (__)\\ )\\/\\\r\n ||----w |\r\n || ||\r\n```\r\nThat's from shipping this code as `api/index.py`:\r\n```python\r\nfrom http.server import BaseHTTPRequestHandler\r\nfrom cowpy import cow\r\nimport sys\r\n\r\n\r\ntry:\r\n import sqlite3\r\nexcept ImportError:\r\n sqlite3 = None\r\n\r\n\r\nclass handler(BaseHTTPRequestHandler):\r\n def do_GET(self):\r\n self.send_response(200)\r\n self.send_header(\"Content-type\", \"text/plain\")\r\n self.end_headers()\r\n message = cow.Cowacter().milk(\r\n \"Hello from Python from a ZEIT Now Serverless Function!
Version is {}, sqlite3 module = {}, sqlite3 version = {}\".format(\r\n sys.version, sqlite3, sqlite3.connect(\":memory:\").execute(\"select sqlite_version()\").fetchall()\r\n )\r\n )\r\n self.wfile.write(message.encode())\r\n return\r\n```\r\nNow v2 supports ASGI so this might be possible without too much work: https://zeit.co/docs/runtimes#advanced-usage/advanced-python-usage/asynchronous-server-gateway-interface", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/717/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 569613563, "node_id": "MDU6SXNzdWU1Njk2MTM1NjM=", "number": 682, "title": "Mechanism for writing to database via a queue", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2020-02-24T03:10:07Z", "updated_at": "2020-02-25T04:45:10Z", "closed_at": "2020-02-25T04:45:10Z", "author_association": "OWNER", "pull_request": null, "body": "I've been mulling this over for a long time, and I have a new approach that I think is worth exploring.\r\n\r\nThe catch with writing to SQLite is that it should only accept one write at a time. I'm now thinking that an easy way to manage that would be with a write queue for each database which is then read by a single dedicated write thread which manages its own writable connection.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/682/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 449565204, "node_id": "MDU6SXNzdWU0NDk1NjUyMDQ=", "number": 23, "title": "Syntactic sugar for creating m2m records", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2019-05-29T02:17:48Z", "updated_at": "2019-08-04T03:54:58Z", "closed_at": "2019-08-04T03:37:34Z", "author_association": "OWNER", "pull_request": null, "body": "Python library only. 
What would be a syntactically pleasant way of creating a m2m record?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/23/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 453846217, "node_id": "MDU6SXNzdWU0NTM4NDYyMTc=", "number": 506, "title": "Option to display binary data", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2019-06-08T23:44:12Z", "updated_at": "2019-06-11T15:48:27Z", "closed_at": "2019-06-09T16:07:39Z", "author_association": "OWNER", "pull_request": null, "body": "In #442 we suppressed rendering of binary data:\r\n\r\n\"many-photos-tables__RKAlbumVersion_albumId_RidIndex__36_rows\"\r\n\r\nIt turns out there is one use-case where displaying binary data is useful: when you're poking around looking at random SQLite databases you find in `~/Library` trying to figure out what they are for.\r\n\r\nSo, a mechanism for opting in to ugly display of binary data again would be useful.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/506/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 431800286, "node_id": "MDU6SXNzdWU0MzE4MDAyODY=", "number": 427, "title": "New design for facet abstraction, including querystring and metadata.json", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2019-04-11T02:24:15Z", "updated_at": "2019-05-29T21:39:12Z", "closed_at": "2019-05-03T00:11:29Z", "author_association": "OWNER", "pull_request": null, "body": "I need a better design for query strings for facets (and for how facets are enabled in `metadata.json`).\r\n\r\nThink of all of the potential kinds of facets:\r\n\r\n* `?_facet_array=tags` where tags is a JSON array of values\r\n* `_facet_date=datetimecol` - faceted by date part of a datetime\r\n* `_facet_bins=numeric_column` - can I do some kind of fancy binning here? Might need to take an argument\r\n* `?_facet_bins=numeric_column:5` - could be a way to take an argument. We\u2019ll ignore columns with a : in their name.\r\n* `?_facet_json=jsoncol:jsonpath` - could use a JSON path to extract out something to facet on?\r\n* `?_facet_percentile=numericcolumn` - could this work?\r\n* `?_facet_function=column:sqlfunctionname` - maybe this could be interesting? Would allow for e.g. 
facet by soundex\r\n* `?_facet_prefix=column:prefix` - facet by terms but only if they start with a specific prefix\r\n* `?_facet_substring=column:3,6` - facet by a substr(column, 3, 6)\r\n\r\nMaybe bundling JSON in querystrings is a way to do options?\r\n\r\n`?_facet_distance={\"latitude_column\":\"x\",...}`\r\n\r\nCould detect values starting with `{` - and if for some weird reason you have a column starting with that character you can pass this instead: `?_facet_percentile={\"column\": \"{value}\"}`\r\n\r\nThis could even be the mechanism that allows us to extend regular facets to support additional options like adding a sum or max to each one.\r\n\r\nProblem: it\u2019s not obvious what the name associated with these facets should be. What if one column is faceted multiple times using multiple facet variants?\r\n\r\nMaybe just number them? name1=\u2026 name2=\u2026 etc?\r\n\r\nOther option is to use Solr style querystring syntax for notation. Solr does this: `?f.price.facet.range.gap=100&f.age.facet.range.gap=10`\r\n\r\nSo how about this:\r\n\r\n`?_facet_range=age&_facet_range.span=5`\r\n\r\nRelated: #359", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/427/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 346027040, "node_id": "MDU6SXNzdWUzNDYwMjcwNDA=", "number": 355, "title": "Table view should support filtering via many-to-many relationships", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-07-31T04:04:16Z", "updated_at": "2019-05-23T06:04:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Parent: #354 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/355/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 421551434, "node_id": "MDU6SXNzdWU0MjE1NTE0MzQ=", "number": 419, "title": "Default to opening files in mutable mode, special option for immutable files", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4305096, "label": "0.28"}, "comments": 10, "created_at": "2019-03-15T14:39:27Z", "updated_at": "2019-05-16T15:14:32Z", "closed_at": "2019-05-16T15:14:31Z", "author_association": "OWNER", "pull_request": null, "body": "One of the original ideas behind Datasette was that serving immutable data makes everything way easier. Two examples: You don't have to worry about SQLite concurrency and you can bundle the database inside a Docker container and deploy it to immutable hosting. See [The interesting ideas in Datasette](https://simonwillison.net/2018/Oct/4/datasette-ideas/) for more on this.\r\n\r\nI'm beginning to see a much stronger case for being able to serve mutable data as well.\r\n\r\nSQLite is actually perfectly capable of handling reads against a database that is also being written to, even if the writes are coming from another process. 
https://www.sqlite.org/wal.htm\r\n\r\nThere are all kinds of interesting use-cases which Datasette is currently unsuitable for due to its insistence on immutable databases. Some examples:\r\n\r\n* Continually run Datasette against a SQLite database updated by another process, e.g. Firefox bookmarks\r\n* Projects where a cron runs every X minutes and writes new entries gathered from other sources to SQLite\r\n* Tail a log file, write those log updates to a SQLite file, view recent log entries in Datasette\r\n\r\nThis is also relevant to #417, Datasette Library.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/419/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 325373747, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg5NzIzNzE2", "number": 280, "title": "Build Dockerfile with recent Sqlite + Spatialite", "user": {"value": 565628, "label": "r4vi"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-05-22T16:33:50Z", "updated_at": "2018-06-28T11:26:23Z", "closed_at": "2018-05-23T17:43:35Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/280", "body": "This solves #278 without bloating the Dockerfile too much, the image size is now\r\n495MB (original was ~240MB) but it could be reduced significantly if we only\r\ncopied the output of the compilation of spatialite and friends to\r\n/usr/local/lib, instead of the entirety of it however that will take more time.\r\n\r\nIn the python code change references to `import sqlite3` to `import pysqlite3`\r\nand it should use the compiled version of sqlite3.23.1. You don't need to\r\ntry/except because pysqlite3 falls back to builtin sqlite3 if there is no\r\ncompiled version.\r\n\r\n```bash\r\n $ docker run --rm -it datasette spatialite\r\n SpatiaLite version ..: 4.4.0-RC0\tSupported Extensions:\r\n - 'VirtualShape'\t[direct Shapefile access]\r\n - 'VirtualDbf'\t\t[direct DBF access]\r\n - 'VirtualXL'\t\t[direct XLS access]\r\n - 'VirtualText'\t\t[direct CSV/TXT access]\r\n - 'VirtualNetwork'\t[Dijkstra shortest path]\r\n - 'RTree'\t\t[Spatial Index - R*Tree]\r\n - 'MbrCache'\t\t[Spatial Index - MBR cache]\r\n - 'VirtualSpatialIndex'\t[R*Tree metahandler]\r\n - 'VirtualElementary'\t[ElemGeoms metahandler]\r\n - 'VirtualKNN'\t[K-Nearest Neighbors metahandler]\r\n - 'VirtualXPath'\t[XML Path Language - XPath]\r\n - 'VirtualFDO'\t\t[FDO-OGR interoperability]\r\n - 'VirtualGPKG'\t[OGC GeoPackage interoperability]\r\n - 'VirtualBBox'\t\t[BoundingBox tables]\r\n - 'SpatiaLite'\t\t[Spatial SQL - OGC]\r\n PROJ.4 version ......: Rel. 
4.9.3, 15 August 2016\r\n GEOS version ........: 3.5.1-CAPI-1.9.1 r4246\r\n TARGET CPU ..........: x86_64-linux-gnu\r\n the SPATIAL_REF_SYS table already contains some row(s)\r\n SQLite version ......: 3.23.1\r\n Enter \".help\" for instructions\r\n SQLite version 3.23.1 2018-04-10 17:39:29\r\n Enter \".help\" for instructions\r\n Enter SQL statements terminated with a \";\"\r\n spatialite>\r\n```\r\n\r\n```bash\r\n$ docker run --rm -it datasette python -c \"import pysqlite3; print(pysqlite3.sqlite_version)\"\r\n3.23.1\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/280/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 313837303, "node_id": "MDU6SXNzdWUzMTM4MzczMDM=", "number": 203, "title": "Support for units", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-04-12T18:24:28Z", "updated_at": "2018-04-16T21:59:17Z", "closed_at": "2018-04-16T21:59:17Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be nice to be able to attach a unit to a column in the metadata, and have it rendered with that unit (and SI prefix) when it's displayed.\r\n\r\nIt would also be nice to support entering the prefixes in variables when querying.\r\n\r\nWith my radio licensing app I've put all frequencies in Hz. It's easy enough to special-case the row rendering to add the SI prefixes, but it's pretty unusable when querying by that field.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/203/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273703829, "node_id": "MDU6SXNzdWUyNzM3MDM4Mjk=", "number": 86, "title": "Filter UI on table page", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2919870, "label": "Foreign key edition"}, "comments": 10, "created_at": "2017-11-14T08:22:43Z", "updated_at": "2017-11-23T20:34:32Z", "closed_at": "2017-11-23T20:34:32Z", "author_association": "OWNER", "pull_request": null, "body": "A UI for building up simple table queries by adding additional filter rules that get executed as query parameters in the URL.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/86/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273278840, "node_id": "MDU6SXNzdWUyNzMyNzg4NDA=", "number": 71, "title": "Set up some example datasets on a Cloudflare-backed domain", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 10, "created_at": "2017-11-13T00:06:30Z", "updated_at": 
"2017-11-13T02:09:34Z", "closed_at": "2017-11-13T02:09:34Z", "author_association": "OWNER", "pull_request": null, "body": "To better demonstrate the caching and HTTP/2 features, I'd like to go live with some demos that are hosted behind Cloudflare.\r\n\r\n- [x] Redirect https://datasettes.com/ and https://www.datasettes.com/ to https://github.com/simonw/datasette\r\n- [x] Have `now domain add -e datasettes.com` run without errors (hopefully just a matter of waiting for the DNS to update)\r\n- [x] Alias an example dataset hosted on Now on a datasettes.com subdomain\r\n- [x] Confirm that HTTP caching and HTTP/2 redirect pushing works as expected - this may require another page rule", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/71/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}