{"id": 836273891, "node_id": "MDU6SXNzdWU4MzYyNzM4OTE=", "number": 1266, "title": "Documentation for Response.asgi_send(send) method", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-03-19T18:52:49Z", "updated_at": "2021-03-20T21:35:00Z", "closed_at": "2021-03-20T21:32:28Z", "author_association": "OWNER", "pull_request": null, "body": "I found myself wanting to use this method for https://github.com/simonw/datasette-auth-passwords/issues/15 - but it's not documented. It should be documented.\r\n\r\nhttps://github.com/simonw/datasette/blob/8e18c7943181f228ce5ebcea48deb59ce50bee1f/datasette/utils/asgi.py#L320-L340", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1266/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 836123030, "node_id": "MDU6SXNzdWU4MzYxMjMwMzA=", "number": 1265, "title": "Support for HTTP Basic Authentication", "user": {"value": 468612, "label": "yunzheng"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-03-19T15:31:09Z", "updated_at": "2021-03-19T22:05:12Z", "closed_at": "2021-03-19T21:03:09Z", "author_association": "NONE", "pull_request": null, "body": "It would be nice if datasette could support [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).\r\n\r\nFor now I could ofcourse leverage Nginx for basic authentication, but it would be nice to have support for this in datasette by default or via a plugin like datasette-auth-github.\r\n\r\nMy main usecase is to put the whole datasette instance behind a username/password prompt via Basic Auth and not specific urls.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1265/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 834602299, "node_id": "MDU6SXNzdWU4MzQ2MDIyOTk=", "number": 1262, "title": "Plugin hook that could support 'order by random()' for table view", "user": {"value": 19328961, "label": "henry501"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-03-18T10:02:01Z", "updated_at": "2021-03-18T17:55:01Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I am frequently using Datasette to quickly get a visual impression for a table without reviewing it in its entirety. Because I have some groups of similar records, the default sorting options mean that each page is very similar and not representative of the full dataset. 
The current interface allows sorting by columns, but random sorting is only available via custom SQL.\r\n\r\nMaybe this could be a button or link.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1262/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 787173276, "node_id": "MDU6SXNzdWU3ODcxNzMyNzY=", "number": 1193, "title": "Research plugin hook for alternative database backends", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-01-15T20:27:50Z", "updated_at": "2021-03-12T01:01:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I started exploring what Datasette would look like running against PostgreSQL in #670 and @dazzag24 did some work on Parquet described in #657.\r\n\r\nI had initially thought this was WAY too much additional complexity, but I'm beginning to think that the `Database` class may be small enough that having it abstract away the details of running queries against alternative database backends could be feasible.\r\n\r\nA bigger issue is SQL generation, but I realized that most of Datasette's SQL generation code exists just in the `TableView` class that runs the table page. If this was abstracted into some kind of SQL builder that could then be customized per-database it might be reasonable to get it working.\r\n\r\nVery unlikely for this to make it into Datasette 1.0, but maybe this would be the defining feature of Datasette 2.0?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1193/reactions\", \"total_count\": 3, \"+1\": 3, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 824067604, "node_id": "MDU6SXNzdWU4MjQwNjc2MDQ=", "number": 1250, "title": "Research: Plugin hook for alternative database connections", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-03-08T00:28:15Z", "updated_at": "2021-03-12T01:01:25Z", "closed_at": "2021-03-12T01:01:17Z", "author_association": "OWNER", "pull_request": null, "body": "The `Database` class is a natural looking fit for a plugin hook to load custom database connections... potentially even databases other than SQLite. DuckDB (refs #968) could make for a great starting point, since it looks very compatible with the existing SQLite code (illustrated below).\r\n\r\nThe real win would be if this could lead to running Datasette against PostgreSQL. 
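As a quick illustration of why DuckDB looks so compatible (a sketch, not Datasette code - assumes the `duckdb` Python package):\r\n\r\n```python\r\nimport duckdb\r\n\r\n# DuckDB's Python API deliberately mirrors sqlite3:\r\nconn = duckdb.connect(\":memory:\")\r\nconn.execute(\"create table t (id integer, name text)\")\r\nconn.execute(\"insert into t values (?, ?)\", [1, \"one\"])\r\nprint(conn.execute(\"select * from t\").fetchall())\r\n```\r\n\r\n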
I made some initial explorations in the PostgreSQL direction a while ago in #670.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1250/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 797649915, "node_id": "MDExOlB1bGxSZXF1ZXN0NTY0NjA4MjY0", "number": 1211, "title": "Use context manager instead of plain open", "user": {"value": 4488943, "label": "kbaikov"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-01-31T07:58:10Z", "updated_at": "2021-03-11T16:15:50Z", "closed_at": "2021-03-11T16:15:50Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1211", "body": "A context manager with open closes the file after usage. Fixes: https://github.com/simonw/datasette/issues/1208\r\n\r\nWhere the object is already a pathlib.Path I used the read_text /\r\nwrite_text functions.\r\n\r\nIn some cases pathlib.Path.open was used in a context manager;\r\nit is basically the same as the builtin open.\r\n\r\nTests are passing: 850 passed, 5 xfailed, 10 xpassed", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1211/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 794554881, "node_id": "MDU6SXNzdWU3OTQ1NTQ4ODE=", "number": 1208, "title": "A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper", "user": {"value": 4488943, "label": "kbaikov"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-01-26T20:56:28Z", "updated_at": "2021-03-11T16:15:49Z", "closed_at": "2021-03-11T16:15:49Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Your code is full of open files that are never closed, especially when you deal with reading/writing json/yaml files.\r\n\r\nIf you run python with warnings enabled this problem becomes evident.\r\nThis probably contributes to some memory leaks in long running datasettes if the GC will not 'collect' those resources properly.\r\n\r\nThis is easily fixed by using a context manager instead of just using open:\r\n```python\r\nwith open('some_file', 'w') as opened_file:\r\n opened_file.write('string')\r\n```\r\n\r\nIn some newer parts of the code you use Path objects' 'read_text' and 'write_text' functions which close the file properly and are preferred in some cases.\r\n\r\n\r\nIf you want I can create a PR for all places I found this pattern in.\r\n\r\n\r\nBelow is a fraction of the places where I found a ResourceWarning:\r\n```python\r\n\r\nupdate-docs-help.py:\r\n 20 actual = actual.replace(\"Usage: cli \", \"Usage: datasette \")\r\n 21: open(docs_path / filename, \"w\").write(actual)\r\n 22 \r\n\r\ndatasette\\app.py:\r\n 210 ):\r\n 211: inspect_data = json.load((config_dir / \"inspect-data.json\").open())\r\n 212 if immutables is None:\r\n\r\n 266 if config_dir and (config_dir / \"settings.json\").exists() and not config:\r\n 267: config = json.load((config_dir / 
\"settings.json\").open())\r\n 268 self._settings = dict(DEFAULT_SETTINGS, **(config or {}))\r\n\r\n 445 self._app_css_hash = hashlib.sha1(\r\n 446: open(os.path.join(str(app_root), \"datasette/static/app.css\"))\r\n 447 .read()\r\n\r\ndatasette\\cli.py:\r\n 130 else:\r\n 131: out = open(inspect_file, \"w\")\r\n 132 loop = asyncio.get_event_loop()\r\n\r\n 459 if inspect_file:\r\n 460: inspect_data = json.load(open(inspect_file))\r\n 461 \r\n\r\n```\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1208/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 826613352, "node_id": "MDExOlB1bGxSZXF1ZXN0NTg4NjAxNjI3", "number": 1254, "title": "Update Docker Spatialite version to 5.0.1 + add support for Spatialite topology functions", "user": {"value": 3200608, "label": "durkie"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2021-03-09T20:49:08Z", "updated_at": "2021-03-10T18:27:45Z", "closed_at": "2021-03-09T22:04:23Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/1254", "body": "This requires adding the RT Topology library (Spatialite changed to RT Topology from LWGEOM between 4.4 and 5.0), as well as upgrading the GEOS version (which is the reason for switching to `python:3.7.10-slim-buster` as the base image.)\r\n\r\n`autoconf` and `libtool` are added to build RT Topology, and Spatialite is now built with `--disable-minizip` (minizip wasn't an option in 4.4 and I didn't want to add another dependency) and `--disable-dependency-tracking` which, according to Spatialite, \"speeds up one-time builds\"", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1254/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 827341657, "node_id": "MDExOlB1bGxSZXF1ZXN0NTg5MjYzMjk3", "number": 1256, "title": "Minor type in IP adress", "user": {"value": 6371750, "label": "JBPressac"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-03-10T08:28:22Z", "updated_at": "2021-03-10T18:26:46Z", "closed_at": "2021-03-10T18:26:40Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1256", "body": "127.0.01 replaced by 127.0.0.1", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1256/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 824750134, "node_id": "MDU6SXNzdWU4MjQ3NTAxMzQ=", "number": 1251, "title": "facet option not appearing when table is big", "user": {"value": 15836677, "label": "verajosemanuel"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-08T16:54:04Z", "updated_at": "2021-03-08T16:54:16Z", "closed_at": null, 
"author_association": "NONE", "pull_request": null, "body": "I have a big table with more than 500.000 rows.\r\nTrying to facet by one of my columns, the options are not available as for the other smaller tables.\r\n\r\nI have tried to set it in URL as:\r\n\r\n`&_facet=city_id`\r\n\r\nto no avail.\r\n\r\nis there any limit? how can I force the option \"facet\" to appear for big tables?\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1251/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 823035080, "node_id": "MDU6SXNzdWU4MjMwMzUwODA=", "number": 1248, "title": "duckdb database (very low performance in SQLite)", "user": {"value": 15836677, "label": "verajosemanuel"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-03-05T12:20:29Z", "updated_at": "2021-03-08T00:25:27Z", "closed_at": "2021-03-08T00:25:27Z", "author_association": "NONE", "pull_request": null, "body": "My sqlite is getting too big to be processed by datasette (more than 10 minutes waiting to load) so I am working with duckdb and is waaaaay faster. I think the fastest embeddable database actually.\r\n\r\nhttps://duckdb.org/\r\n\r\nTaking into account DuckDb is SQLite based it would be GREAT to use it with datasette.\r\n\r\nis that possible?\r\n\r\nRegards and thanks for a superb job", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1248/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 806918878, "node_id": "MDExOlB1bGxSZXF1ZXN0NTcyMjU0MTAz", "number": 1223, "title": "Add compile option to Dockerfile to fix failing test (fixes #696)", "user": {"value": 7476523, "label": "bobwhitelock"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-02-12T03:38:05Z", "updated_at": "2021-03-07T12:01:12Z", "closed_at": "2021-03-07T07:41:17Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1223", "body": "This test was failing when run inside the Docker container: `test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3]`,\r\n\r\nwith this error:\r\n\r\n```\r\n def test_searchable(app_client, path, expected_rows):\r\n response = app_client.get(path)\r\n> assert expected_rows == response.json[\"rows\"]\r\nE AssertionError: assert [[1, 'barry c...sel', 'puma']] == []\r\nE Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']\r\nE Full diff:\r\nE + []\r\nE - [[1, 'barry cat', 'terry dog', 'panther'],\r\nE - [2, 'terry dog', 'sara weasel', 'puma']]\r\n```\r\n\r\nThe issue was that the version of sqlite3 built inside the Docker container was built with FTS3 and FTS4 enabled, but without the\r\n`SQLITE_ENABLE_FTS3_PARENTHESIS` compile option passed, which adds support for using `AND` and `NOT` within `match` expressions (see https://sqlite.org/fts3.html#compiling_and_enabling_fts3_and_fts4 and 
https://www.sqlite.org/compile.html).\r\n\r\nWithout this, the `AND` used in the search in this test was being interpreted as a literal string, and so no matches were found. Adding this compile option fixes this.\r\n\r\n---\r\n\r\nI actually ran into this issue because the same test was failing when I ran the test suite on my own machine, outside of Docker, and so I eventually tracked this down to my system sqlite3 also being compiled without this option.\r\n\r\nI wonder if this is a sign of a slightly deeper issue, that Datasette can silently behave differently based on the version and compilation of sqlite3 it is being used with. On my own system I fixed the test suite by running `pip install pysqlite3-binary`, so that this would be picked up instead of the `sqlite` package, as this seems to be compiled using this option. Maybe `pysqlite3-binary` could be installed/recommended by default so a more deterministic version of sqlite is used? Or there could be some feature detection done on the available sqlite version, to know what features are available and can be used/tested?", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1223/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 617323873, "node_id": "MDU6SXNzdWU2MTczMjM4NzM=", "number": 766, "title": "Enable wildcard-searches by default", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-05-13T10:14:48Z", "updated_at": "2021-03-05T16:35:21Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi Simon.\r\n\r\nIt seems that datasette currently has wildcard-searches disabled by default (along with the boolean search-options, NEAR-queries and more, and despite the docs). If I try out the search-url provided in the [docs](https://datasette.readthedocs.io/en/stable/full_text_search.html#the-table-page-and-table-view-api) (https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort), it does not handle wildcard-searches, and I'm unable to make it work on my datasette-instance.\r\n\r\nI would argue that wildcard-searches are such a standard query that they should be enabled by default. Requiring \"_searchmode=raw\" when using prefix-searches seems unnecessary. Plus: What happens to non-ascii searches when using \"_searchmode=raw\"? 
Is the \"escape_fts\"-function from datasette.utils ignored?\r\n\r\n\r\nThanks!\r\n\r\n/Claus", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/766/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 815955014, "node_id": "MDExOlB1bGxSZXF1ZXN0NTc5Njk3ODMz", "number": 1243, "title": "fix small typo", "user": {"value": 306240, "label": "UtahDave"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-02-25T00:22:34Z", "updated_at": "2021-03-04T05:46:10Z", "closed_at": "2021-03-04T05:46:10Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1243", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1243/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 818430405, "node_id": "MDU6SXNzdWU4MTg0MzA0MDU=", "number": 1247, "title": "datasette.add_memory_database() method", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-03-01T03:48:38Z", "updated_at": "2021-03-01T04:02:26Z", "closed_at": "2021-03-01T04:02:26Z", "author_association": "OWNER", "pull_request": null, "body": "I just wrote this code:\r\n\r\nhttps://github.com/simonw/datasette/blob/47eb885cc2c3aafa03645c330c6f597bee9b3b25/tests/test_facets.py#L334-L335\r\n\r\nIt would be nice if you didn't have to separately instantiate a database object here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1247/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 817597268, "node_id": "MDU6SXNzdWU4MTc1OTcyNjg=", "number": 1246, "title": "Suggest for ArrayFacet possibly confused by blank values", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-02-26T19:11:52Z", "updated_at": "2021-03-01T03:46:11Z", "closed_at": "2021-03-01T03:46:11Z", "author_association": "OWNER", "pull_request": null, "body": "I sometimes don't get the suggestion for facet-by-array for columns that contain arrays. 
I think it may be because they have empty spaces in them - or perhaps it's because the null detection doesn't actually work.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1246/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 718259202, "node_id": "MDU6SXNzdWU3MTgyNTkyMDI=", "number": 1005, "title": "Remove xfail tests when new httpx is released", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2020-10-09T16:00:19Z", "updated_at": "2021-02-28T22:41:08Z", "closed_at": "2021-02-28T22:41:08Z", "author_association": "OWNER", "pull_request": null, "body": "> My `httpx` pull request adding `raw_path` support was just merged: https://github.com/encode/httpx/pull/1357 - but it's not in a release yet.\r\n>\r\n> I'm going to mark these tests as `xfail` so I can land this change - I'll remove that once an `httpx` release comes out that I can use to get the tests passing.\r\n>\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/pull/1000#issuecomment-706263157_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1005/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 814591962, "node_id": "MDU6SXNzdWU4MTQ1OTE5NjI=", "number": 1240, "title": "Allow faceting on custom queries", "user": {"value": 7107523, "label": "Kabouik"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-02-23T15:52:19Z", "updated_at": "2021-02-26T18:19:46Z", "closed_at": "2021-02-26T18:18:18Z", "author_association": "NONE", "pull_request": null, "body": "Facets are a tremendously useful feature, especially for people peeking at the database for the first time and still having little knowledge about the details of the data. It is of great assistance to discover interesting features to explore further in advanced queries.\r\n\r\nYet, it seems it's impossible to use facets when running a custom SQL query, be it from the little gear icons in column names, the facet suggestions at the top (hidden when performing a custom query), or by appending a facet code to the URL. 
\r\n\r\nIs there a technical limitation, or is this something that could be unlocked easily?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1240/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 817528452, "node_id": "MDU6SXNzdWU4MTc1Mjg0NTI=", "number": 1244, "title": "Plugin tip: look at the examples linked from the hooks page", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-02-26T17:18:27Z", "updated_at": "2021-02-26T17:30:38Z", "closed_at": "2021-02-26T17:27:15Z", "author_association": "OWNER", "pull_request": null, "body": "Someone asked \"what are good example plugins I can look at?\" and I realized that the answer is to look through the example links on https://docs.datasette.io/en/stable/plugin_hooks.html - but that tip should be written down somewhere on the https://docs.datasette.io/en/stable/writing_plugins.html page.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1244/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 803356942, "node_id": "MDU6SXNzdWU4MDMzNTY5NDI=", "number": 1218, "title": " /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory", "user": {"value": 11855322, "label": "robmarkcole"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-02-08T09:07:00Z", "updated_at": "2021-02-23T12:12:17Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Error as above, however I do have python3.8 and the readme indicates this is supported.\r\n\r\n```\r\n(venv) (base) Robins-MacBook:datasette robin$ ls /usr/local/opt/python3/bin/\r\n\r\n.. 
pip3 python3 python3.8\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1218/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 813978858, "node_id": "MDU6SXNzdWU4MTM5Nzg4NTg=", "number": 1239, "title": "JSON filter fails if column contains spaces", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-02-23T00:18:07Z", "updated_at": "2021-02-23T00:22:53Z", "closed_at": "2021-02-23T00:22:53Z", "author_association": "OWNER", "pull_request": null, "body": "Got this exception:\r\n\r\n`ERROR: conn=, sql = 'select Address, Affiliation, County, [Has Report], [Latest report notes], [Latest report yes], Latitude, [Location Type], Longitude, Name, id, [Appointment scheduling instructions], [Availability Info], [Latest report] from locations where rowid in (\\n select locations.rowid from locations, json_each(locations.Availability Info) j\\n where j.value = :p0\\n ) and \"Latest report yes\" = :p1 order by id limit 101', params = {'p0': 'Yes: appointment required', 'p1': '1'}: near \"Info\": syntax error`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1239/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 811505638, "node_id": "MDU6SXNzdWU4MTE1MDU2Mzg=", "number": 1234, "title": "Runtime support for ATTACHing multiple databases", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-02-18T22:06:47Z", "updated_at": "2021-02-22T21:06:28Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> The implementation in #1232 is ready to land. It's the simplest-thing-that-could-possibly-work: you can run `datasette one.db two.db three.db --crossdb` and then use the `/_memory` page to run joins across tables from multiple databases.\r\n>\r\n> It only works on the first 10 databases that were passed to the command-line. This means that if you have a Datasette instance with hundreds of attached databases (see [Datasette Library](https://github.com/simonw/datasette/issues/417)) this won't be particularly useful for you.\r\n>\r\n> So... 
a better, future version of this feature would be one that lets you join across databases on command - maybe by hitting `/_memory?attach=db1&attach=db2` to get a special connection.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/283#issuecomment-781665560_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1234/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 797651831, "node_id": "MDU6SXNzdWU3OTc2NTE4MzE=", "number": 1212, "title": "Tests are very slow.", "user": {"value": 4488943, "label": "kbaikov"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-01-31T08:06:16Z", "updated_at": "2021-02-19T22:54:13Z", "closed_at": "2021-02-19T22:54:13Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Working on my PR I noticed that tests are very slow.\r\n\r\nThe plain pytest run took about 37 minutes for me.\r\nHowever I could shave off about 10 minutes from that if I used pytest-xdist to parallelize execution.\r\n`pytest -n 8` runs in only 28 minutes on my machine.\r\n\r\nI can create a PR to mention that in your documentation.\r\nThis will be a simple change to add pytest-xdist to requirements and change the command used to run pytest in the documentation.\r\n\r\nDoes that make sense to you?\r\n\r\nAfter a bit more investigation it looks like pytest-xdist is not an answer. It creates a race condition for tests that try to clean the temp dir before running.\r\n\r\nProfiling shows that most time is spent on conn.executescript(TABLES) in the make_app_client function. Which makes sense.\r\n\r\nPerhaps the better approach would be to look at the app_client fixture which is already session scoped, but not used by all test cases.\r\nAnd/or use conn = sqlite3.connect(\":memory:\") which is much faster.\r\nAnd/or truncate tables after each test case instead of deleting the file and re-creating them.\r\n\r\nI can take a look at which is the best approach if you give the go-ahead. 
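For illustration, the shared in-memory idea could look something like this (a rough sketch - `TABLES` here is a stand-in for the real schema script used by make_app_client):\r\n\r\n```python\r\nimport sqlite3\r\n\r\nimport pytest\r\n\r\nTABLES = \"create table t (id integer primary key);\"  # stand-in for the real script\r\n\r\n\r\n@pytest.fixture(scope=\"session\")\r\ndef shared_conn():\r\n    # One in-memory database per test session instead of a fresh\r\n    # file-backed database for every test case.\r\n    conn = sqlite3.connect(\":memory:\")\r\n    conn.executescript(TABLES)\r\n    yield conn\r\n    conn.close()\r\n```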
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1212/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 811589344, "node_id": "MDU6SXNzdWU4MTE1ODkzNDQ=", "number": 1235, "title": "Upgrade Python version used by official Datasette Docker image", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-02-19T00:47:40Z", "updated_at": "2021-02-19T01:48:31Z", "closed_at": "2021-02-19T01:48:30Z", "author_association": "OWNER", "pull_request": null, "body": "Currently uses 3.7.2:\r\n\r\nhttps://github.com/simonw/datasette/blob/73bed175631a79e13a521eee82f8451dd0477eb3/Dockerfile#L1\r\n\r\nThere's a security fix for Python which it would be good to ship in this image (even though I'm reasonably confident it doesn't affect Datasette): https://bugs.python.org/issue42938", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1235/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 811407131, "node_id": "MDExOlB1bGxSZXF1ZXN0NTc1OTQwMTkz", "number": 1232, "title": "--crossdb option for joining across databases", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2021-02-18T19:48:50Z", "updated_at": "2021-02-18T22:09:13Z", "closed_at": "2021-02-18T22:09:12Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1232", "body": "Refs #283. 
Still needs:\r\n\r\n- [x] Unit test for --crossdb queries\r\n- [x] Show warning on console if it truncates at ten databases (or on web interface)\r\n- [x] Show connected databases on the `/_memory` database page\r\n- [x] Documentation\r\n- [x] https://latest.datasette.io/ demo should demonstrate this feature", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1232/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 811458446, "node_id": "MDU6SXNzdWU4MTE0NTg0NDY=", "number": 1233, "title": "\"datasette publish cloudrun\" cannot publish files with spaces in their name", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-02-18T21:08:31Z", "updated_at": "2021-02-18T21:10:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Got this error:\r\n```\r\nStep 6/9 : RUN datasette inspect fixtures.db extra database.db --inspect-file inspect-data.json\r\n ---> Running in db9da0068592\r\nUsage: datasette inspect [OPTIONS] [FILES]...\r\nTry 'datasette inspect --help' for help.\r\n\r\nError: Invalid value for '[FILES]...': Path 'extra' does not exist.\r\nThe command '/bin/sh -c datasette inspect fixtures.db extra database.db --inspect-file inspect-data.json' returned a non-zero code: 2\r\nERROR\r\nERROR: build step 0 \"gcr.io/cloud-builders/docker\" failed: step exited with non-zero status: 2\r\n```\r\nWhile working on the demo for #1232, using this deploy command:\r\n```\r\nGITHUB_SHA=crossdb datasette publish cloudrun fixtures.db 'extra database.db' \\\r\n -m fixtures.json \\\r\n --plugins-dir=plugins \\\r\n --branch=$GITHUB_SHA \\\r\n --version-note=$GITHUB_SHA \\\r\n --extra-options=\"--setting template_debug 1 --crossdb\" \\\r\n --install=pysqlite3-binary \\\r\n --service=datasette-latest-crossdb\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1233/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 808843401, "node_id": "MDU6SXNzdWU4MDg4NDM0MDE=", "number": 1226, "title": "--port option should validate port is between 0 and 65535", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-02-15T22:01:33Z", "updated_at": "2021-02-18T18:41:27Z", "closed_at": "2021-02-18T18:41:27Z", "author_association": "OWNER", "pull_request": null, "body": "Currently throws an ugly error message:\r\n```\r\n(datasette-graphql) datasette-graphql % datasette fivethirtyeight.db -p 80094\r\nINFO: Started server process [45497]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nTraceback (most recent call last):\r\n File \"/Users/simon/.local/share/virtualenvs/datasette-graphql-n1OSJCS8/bin/datasette\", line 8, in \r\n sys.exit(cli())\r\n...\r\n server = await loop.create_server(\r\n File \"/Users/simon/.pyenv/versions/3.8.2/lib/python3.8/asyncio/base_events.py\", line 1461, in 
create_server\r\n sock.bind(sa)\r\nOverflowError: bind(): port must be 0-65535.\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1226/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 811054000, "node_id": "MDU6SXNzdWU4MTEwNTQwMDA=", "number": 1230, "title": "Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages", "user": {"value": 7107523, "label": "Kabouik"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-02-18T12:27:02Z", "updated_at": "2021-02-18T15:22:15Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I filtered a data set on some criteria and obtained 265 results, split over three pages (100, 100, 65), and realized that Vega plots are only applied to the results displayed on the current page, instead of the whole filtered data, _e.g._, 100 on page 1, 100 on page 2, 65 on page 3. Is there a way to force the graphs to consider all results instead of just the page, considering that pages rarely represent sensible information?\r\n\r\nLikewise, while the cluster map does show all results on the first page, if you go to the next pages, it will show all remaining results except the previous page(s), _e.g._, 265 on page 1, 165 on page 2, 65 on page 3.\r\n\r\nIn both cases, I don't see many situations where one would like to represent the data this way, and it might even lead to interpretation errors when viewing the data. Am I missing some cases where this would be best? Perhaps a clickable option to subset visual representations according to visible pages _vs._ display all search results would do?\r\n\r\n[Edit] Oh, I just saw the \"Load all\" button under the cluster map as well as the [setting to alter the max number of results](https://docs.datasette.io/en/stable/settings.html#max-returned-rows). So I guess this issue is only about the Vega charts.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": null, "draft": null, "state_reason": null} {"id": 808771690, "node_id": "MDU6SXNzdWU4MDg3NzE2OTA=", "number": 1225, "title": "More flexible formatting of records with CSS grid", "user": {"value": 649467, "label": "mhalle"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-02-15T19:28:17Z", "updated_at": "2021-02-15T19:28:35Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "In several applications I've been experimenting with alternate formatting of datasette query results. Lately I've found that CSS grids work very well and seem quite general for formatting rows. In CSS I use grid templates to define the layout of each record and the regions for each field, hiding the fields I don't want. It's pretty flexible and looks good. It's also a great basis for highly responsive layout.\r\n\r\nI initially thought I'd only use this feature for record detail views, but now I use it for index views as well. 
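Concretely, that means giving each record's container `display: grid` and mapping fields to named areas with `grid-template-areas` (standard CSS properties; the exact template is app-specific and assumed here).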
\r\n\r\nHowever, there are some limitations:\r\n* With the existing table templates, it seems that you can change the `display` property on the enclosing `table`, `tbody`, and `tr` to make them grid-like, but that seems hacky (convert `table` and `tbody` to be `display: block` and `tr` to be `display: grid`).\r\n* More significantly, it's very nice to have the column name available when rendering each record to display headers/field labels. The existing templates don't do that, so a custom `_table` template is necessary.\r\n* I don't know if any plugins are sensitive to whether data is rendered as a table or not since I'm not completely clear how plugins get their data. \r\n* Regardless, you need custom CSS to take full advantage of grids. I don't have a proposal on how to integrate them more deeply.\r\n\r\nIt would be helpful to at least have an official example or test that used a grid layout for records to make sure nothing in datasette breaks with it. ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1225/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 806743116, "node_id": "MDU6SXNzdWU4MDY3NDMxMTY=", "number": 1220, "title": "Installing datasette via docker: Path 'fixtures.db' does not exist", "user": {"value": 30607, "label": "aborruso"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-02-11T21:09:14Z", "updated_at": "2021-02-12T21:35:17Z", "closed_at": "2021-02-12T21:35:17Z", "author_association": "NONE", "pull_request": null, "body": "Hi,\r\nIf I run\r\n\r\n```\r\ndocker run -p 8001:8001 -v `pwd`:/mnt \\\r\n datasetteproject/datasette \\\r\n datasette -p 8001 -h 0.0.0.0 fixtures.db\r\n```\r\n\r\nI have \r\n\r\n```\r\nError: Invalid value for '[FILES]...': Path 'fixtures.db' does not exist.\r\n```\r\n\r\nIf I run `test -f fixtures.db && echo \"it exists.\"` I have `it exists.`.\r\n\r\nWhat's my error?\r\n\r\nThank you", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1220/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 806861312, "node_id": "MDExOlB1bGxSZXF1ZXN0NTcyMjA5MjQz", "number": 1222, "title": "--ssl-keyfile and --ssl-certfile, refs #1221", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-02-12T00:45:58Z", "updated_at": "2021-02-12T00:52:18Z", "closed_at": "2021-02-12T00:52:17Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1222", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1222/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 792890765, "node_id": "MDU6SXNzdWU3OTI4OTA3NjU=", 
"number": 1200, "title": "?_size=10 option for the arbitrary query page would be useful", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-01-24T20:55:35Z", "updated_at": "2021-02-11T03:13:59Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://latest.datasette.io/fixtures?sql=select+*+from+compound_three_primary_keys&_size=10 - `_size=10` does not do anything at the moment. It would be useful if it did.\r\n\r\nWould also be good if it persisted in a hidden form field.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1200/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 803929694, "node_id": "MDU6SXNzdWU4MDM5Mjk2OTQ=", "number": 1219, "title": "Try profiling Datasette using scalene", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-02-08T20:37:06Z", "updated_at": "2021-02-08T22:13:00Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/emeryberger/scalene looks like an interesting profiling tool.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1219/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 796234313, "node_id": "MDU6SXNzdWU3OTYyMzQzMTM=", "number": 1210, "title": "Immutable Database w/ Canned Queries", "user": {"value": 525780, "label": "heyarne"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-01-28T18:08:29Z", "updated_at": "2021-02-05T11:30:34Z", "closed_at": "2021-02-05T11:30:34Z", "author_association": "NONE", "pull_request": null, "body": "I have a database that I only want to read from; when instructing datasette to treat the database as immutable my defined canned queries disappear. Are these two features incompatible or have I hit an unintended bug? 
Thanks for datasette in any way, it's a joy to use!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1210/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 799693777, "node_id": "MDU6SXNzdWU3OTk2OTM3Nzc=", "number": 1214, "title": "Re-submitting filter form duplicates _x querystring arguments", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-02-02T21:13:35Z", "updated_at": "2021-02-02T21:28:53Z", "closed_at": "2021-02-02T21:21:13Z", "author_association": "OWNER", "pull_request": null, "body": "Really nasty bug, caused by #1194 fix in 07e163561592c743e4117f72102fcd350a600909\r\n\r\nNavigate to this page: https://github-to-sqlite.dogsheep.net/github/labels?_search=help&_sort=id\r\n\r\nClick \"Apply\" to submit the form and the resulting URL is https://github-to-sqlite.dogsheep.net/github/labels?_search=help&_sort=id&_search=help&_sort=id\r\n\r\nThat's because the (truncated) HTML for the form looks like this:\r\n\r\n```html\r\n ... \r\n ...\r\n
<input type=\"search\" id=\"_search\" name=\"_search\" value=\"help\">\r\n <input type=\"hidden\" name=\"_search\" value=\"help\">\r\n <input type=\"hidden\" name=\"_sort\" value=\"id\">\r\n <input type=\"submit\" value=\"Apply\">
\r\n ...\r\n \r\n \r\n \r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1214/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 799663959, "node_id": "MDU6SXNzdWU3OTk2NjM5NTk=", "number": 1213, "title": "gzip support for HTML (and JSON) responses", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-02-02T20:36:28Z", "updated_at": "2021-02-02T20:41:55Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This page https://datasette-tiles-demo.datasette.io/San_Francisco/tiles is 2MB because of all of the base64 images. Gzipped it's 1.5MB.\r\n\r\nSince Datasette is usually deployed without a frontend gzipping proxy, Datasette itself needs to solve for this.\r\n\r\nGzipping everything won't work because some endpoints - the all-rows CSV endpoint and the download-database endpoint - are streaming and hence can't be buffered-and-gzipped.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1213/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 793881756, "node_id": "MDU6SXNzdWU3OTM4ODE3NTY=", "number": 1207, "title": "Document the Datasette(..., pdb=True) testing pattern", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-01-26T02:48:10Z", "updated_at": "2021-01-29T02:37:19Z", "closed_at": "2021-01-29T02:12:34Z", "author_association": "OWNER", "pull_request": null, "body": "If you're writing tests for a Datasette plugin and you get a 500 error from inside Datasette, you can cause Datasette to open a PDB session within the application server code by doing this:\r\n\r\n```python\r\nds = Datasette([db_path], pdb=True)\r\nresponse = await ds.client.get(\"/\")\r\n```\r\n\r\nYou'll need to run `pytest -s` to interact with the debugger, otherwise you'll get an error.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1207/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 795367402, "node_id": "MDU6SXNzdWU3OTUzNjc0MDI=", "number": 1209, "title": "v0.54 500 error from sql query in custom template; code worked in v0.53; found a workaround", "user": {"value": 11788561, "label": "jrdmb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-01-27T19:08:13Z", "updated_at": "2021-01-28T23:00:27Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "v0.54 500 error in sql query template; code worked in v0.53; found a workaround\r\n\r\n**schema:** \r\nCREATE TABLE \"talks\" (\"talk\" TEXT,\"series\" INTEGER, 
\"talkdate\" TEXT) \r\nCREATE TABLE \"series\" (\"id\" INTEGER PRIMARY KEY, \"series\" TEXT, talks_list TEXT default '', website TEXT default '');\r\n\r\n**Live example of correctly rendered template in v.053:** https://cosmotalks-cy6xkkbezq-uw.a.run.app/cosmotalks/talks/1\r\n\r\n**Description of problem:** I needed 'sql select' code in a custom row-mydatabase-mytable.html template to lookup the series name for a foreign key integer value in the talks table. So `metadata.json` specifies the `datasette-template-sql` plugin.\r\n\r\nThe code below worked perfectly in v0.53 (just the relevant sql statement part is shown; full code is [here](https://github.com/jrdmb/cosmotalks-datasette/blob/main/templates/row-cosmotalks-talks.html)):\r\n\r\n```\r\n{# custom addition #} \r\n{% for row in display_rows %} \r\n ... \r\n {% set sname = sql(\"select series from series where id = ?\", [row.series]) %} \r\n Series name: {{ sname[0].series }} \r\n ... \r\n{% endfor %} \r\n{# End of custom addition #} \r\n```\r\n\r\n**In v0.54, that code resulted in a 500 error with a 'no such table series' message.** A second query in that template also did not work but the above is fully illustrative of the problem.\r\n\r\nAll templates were up-to-date along with datasette v0.54.\r\n\r\n**Workaround:** After fiddling around with trying different things, what worked was the syntax from [Querying a different database from the datasette-template-sql github repo](https://github.com/simonw/datasette-template-sql#querying-a-different-database) to add the database name to the sql statement:\r\n\r\n`{% set sname = sql(\"select series from series where id = ?\", [row.series], database=\"cosmotalks\") %}`\r\n\r\nThough this was found to work, it should not be necessary to add `database=\"cosmotalks\"` since per the `datasette-template-sql` README, it's only needed when querying a different database, but here it's a table within the same database.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": null, "draft": null, "state_reason": null} {"id": 793027837, "node_id": "MDU6SXNzdWU3OTMwMjc4Mzc=", "number": 1205, "title": "Rename /:memory: to /_memory", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 3, "created_at": "2021-01-25T05:04:56Z", "updated_at": "2021-01-28T22:55:02Z", "closed_at": "2021-01-28T22:51:42Z", "author_association": "OWNER", "pull_request": null, "body": "For consistency with `/_internal` - and because then we don't need to escape the `:` characters.\r\n\r\nThis change would need to be in before Datasette 1.0. 
I could land it earlier and set up redirects from the old URLs though.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1205/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 770448622, "node_id": "MDU6SXNzdWU3NzA0NDg2MjI=", "number": 1151, "title": "Database class mechanism for cross-connection in-memory databases", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 11, "created_at": "2020-12-17T23:25:43Z", "updated_at": "2021-01-26T19:07:44Z", "closed_at": "2020-12-18T01:01:26Z", "author_association": "OWNER", "pull_request": null, "body": "> Next challenge: figure out how to use the `Database` class from https://github.com/simonw/datasette/blob/0.53/datasette/database.py for an in-memory database which persists data for the duration of the lifetime of the server, and allows access to that in-memory database from multiple threads in a way that lets them see each other's changes.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1150#issuecomment-747768112_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1151/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 714377268, "node_id": "MDU6SXNzdWU3MTQzNzcyNjg=", "number": 991, "title": "Redesign application homepage", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-10-04T18:48:45Z", "updated_at": "2021-01-26T19:06:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Most Datasette instances only host a single database, but the current homepage design assumes that it should leave plenty of space for multiple databases:\r\n\r\n\"Datasette_Fixtures__fixtures\"\r\n\r\nReconsider this design - should the default show more information?\r\n\r\nThe Covid-19 Datasette homepage looks particularly sparse I think: https://covid-19.datasettes.com/\r\n\r\n\"COVID-19_cases__using_data_from_Johns_Hopkins_CSSE__the_New_York_Times_and_the_LA_Times__covid\"", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/991/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 792904595, "node_id": "MDU6SXNzdWU3OTI5MDQ1OTU=", "number": 1201, "title": "Release notes for Datasette 0.54", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 5, "created_at": "2021-01-24T21:22:28Z", "updated_at": "2021-01-25T17:42:21Z", "closed_at": "2021-01-25T17:42:21Z", "author_association": "OWNER", 
"pull_request": null, "body": "These will incorporate the release notes from the alpha, much expanded: https://github.com/simonw/datasette/releases/tag/0.54a0", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1201/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 793086333, "node_id": "MDExOlB1bGxSZXF1ZXN0NTYwODMxNjM4", "number": 1206, "title": "Release 0.54", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-01-25T06:45:47Z", "updated_at": "2021-01-25T17:33:30Z", "closed_at": "2021-01-25T17:33:29Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1206", "body": "Refs #1201", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1206/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 780278550, "node_id": "MDU6SXNzdWU3ODAyNzg1NTA=", "number": 1179, "title": "Make original path available to render hooks", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2021-01-06T08:31:45Z", "updated_at": "2021-01-25T04:44:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette-export-notebook/blob/0.1/datasette_export_notebook/__init__.py\r\n\r\n```python\r\nasync def render_notebook(datasette, request):\r\n return Response.html(\r\n await datasette.render_template(\r\n \"export_notebook.html\",\r\n {\r\n \"csv_stream_url\": datasette.absolute_url(\r\n request,\r\n path_with_format(\r\n request=request, format=\"csv\", extra_qs={\"_stream\": \"on\"}\r\n ),\r\n ),\r\n \"json_url\": datasette.absolute_url(\r\n request,\r\n path_with_format(\r\n request=request, format=\"json\", extra_qs={\"_shape\": \"array\"}\r\n ),\r\n ),\r\n \"json\": json,\r\n },\r\n )\r\n )\r\n```\r\nThis results in https://latest-with-plugins.datasette.io/github/issue_comments.Notebook showing `http://latest-with-plugins.datasette.io/github/issue_comments.Notebook?_format=json&_shape=array`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1179/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 712260429, "node_id": "MDU6SXNzdWU3MTIyNjA0Mjk=", "number": 983, "title": "JavaScript plugin hooks mechanism similar to pluggy", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 47, "created_at": "2020-09-30T20:32:43Z", "updated_at": "2021-01-25T04:43:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> It would be neat to provide a JavaScript plugin hook that plugins can use to 
add their own options to this menu. No idea what that would look like though.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/981#issuecomment-701616922_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/983/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 789336592, "node_id": "MDU6SXNzdWU3ODkzMzY1OTI=", "number": 1195, "title": "view_name = \"query\" for the query page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-01-19T20:21:36Z", "updated_at": "2021-01-25T04:40:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It uses `view_name` of `database` at the moment, which isn't as useful.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1195/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 712984738, "node_id": "MDU6SXNzdWU3MTI5ODQ3Mzg=", "number": 987, "title": "Documented HTML hooks for JavaScript plugin authors", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-10-01T16:10:14Z", "updated_at": "2021-01-25T04:00:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In #981 I added `data-column=` attributes to the `<th>` elements on the table page.
These should become part of Datasette's documented API so JavaScript plugin authors can use them to derive things about the tables shown on a page (`datasette-cluster-map` uses them as of https://github.com/simonw/datasette-cluster-map/issues/18).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/987/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 788447787, "node_id": "MDU6SXNzdWU3ODg0NDc3ODc=", "number": 1194, "title": "?_size= argument is not persisted by hidden form fields in the table filters", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 3, "created_at": "2021-01-18T17:41:52Z", "updated_at": "2021-01-25T03:10:23Z", "closed_at": "2021-01-25T03:10:23Z", "author_association": "OWNER", "pull_request": null, "body": "Click \"Apply\" on https://covid-19.datasettes.com/covid/ny_times_us_counties?_size=1000&county__exact=San+Francisco&state__exact=California&_sort_desc=date#g.mark=line&g.x_column=date&g.x_type=temporal&g.y_column=cases&g.y_type=quantitative and the `?_size=1000` parameter from the URL will no longer apply on the reloaded page.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1194/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 777145954, "node_id": "MDU6SXNzdWU3NzcxNDU5NTQ=", "number": 1167, "title": "Add Prettier to contributing documentation", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 3, "created_at": "2020-12-31T22:00:55Z", "updated_at": "2021-01-25T02:01:19Z", "closed_at": "2021-01-25T01:58:28Z", "author_association": "OWNER", "pull_request": null, "body": "Following #1166 - the docs at https://docs.datasette.io/en/stable/contributing.html should include a section about JavaScript, and it should document how to run Prettier.\r\n\r\nI run it in VS Code, but it can be run on the command line too:\r\n\r\n    npx prettier 'datasette/static/*[!.min].js' --write\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1167/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 792958773, "node_id": "MDExOlB1bGxSZXF1ZXN0NTYwNzI1NzE0", "number": 1203, "title": "Easier way to run Prettier locally", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-25T01:39:06Z", "updated_at": "2021-01-25T01:41:46Z", "closed_at": "2021-01-25T01:41:46Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1203", "body": "Refs #1167", "repo":
{"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1203/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 771208009, "node_id": "MDU6SXNzdWU3NzEyMDgwMDk=", "number": 1154, "title": "Documentation for new _internal database and tables", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 2, "created_at": "2020-12-18T22:34:52Z", "updated_at": "2021-01-25T00:09:22Z", "closed_at": "2021-01-25T00:08:41Z", "author_association": "OWNER", "pull_request": null, "body": "> Needs documentation, but I can wait to write that until I've tested out the feature a bit more.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1150#issuecomment-748352106_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1154/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 792931244, "node_id": "MDU6SXNzdWU3OTI5MzEyNDQ=", "number": 1202, "title": "Documentation convention for marking unstable APIs.", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 2, "created_at": "2021-01-24T23:47:18Z", "updated_at": "2021-01-25T00:01:02Z", "closed_at": "2021-01-25T00:01:02Z", "author_association": "OWNER", "pull_request": null, "body": "> I'm going to document this but mark it as unstable, using a new documentation convention for marking unstable APIs.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1154#issuecomment-766462197_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1202/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 785588942, "node_id": "MDU6SXNzdWU3ODU1ODg5NDI=", "number": 1187, "title": "extra_body_script() support for script type=\"module\"", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 1, "created_at": "2021-01-14T02:01:47Z", "updated_at": "2021-01-24T21:21:44Z", "closed_at": "2021-01-14T02:14:39Z", "author_association": "OWNER", "pull_request": null, "body": "Follows #1186. The `extra_body_script()` plugin hook should provide a mechanism for specifying that the script should use `