{"id": 314834783, "node_id": "MDU6SXNzdWUzMTQ4MzQ3ODM=", "number": 219, "title": "Expose units in the JSON API?", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-16T22:04:25Z", "updated_at": "2018-04-16T22:04:25Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map.\r\n\r\nI'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). Perhaps the SQL function is a better approach in general.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/219/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 341228846, "node_id": "MDU6SXNzdWUzNDEyMjg4NDY=", "number": 343, "title": "Render boolean fields better by default", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-07-14T11:10:29Z", "updated_at": "2018-07-14T14:17:14Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "These show up as 0 or 1 because sqlite. I think Yes/No would be fine in most cases?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/343/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 374953006, "node_id": "MDU6SXNzdWUzNzQ5NTMwMDY=", "number": 369, "title": "Interface should show same JSON shape options for custom SQL queries", "user": {"value": 416374, "label": "gfrmin"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2018-10-29T10:39:15Z", "updated_at": "2020-05-30T17:24:06Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the moment the page returning a custom SQL query shows the JSON and CSV APIs, but not the multiple JSON shapes. 
However, adding the `_shape` parameter to the JSON API URL manually still works, so perhaps there should be consistency in the interface by having the same \"Advanced Export\" box for custom SQL queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/369/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 377155320, "node_id": "MDU6SXNzdWUzNzcxNTUzMjA=", "number": 370, "title": "Integration with JupyterLab", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-11-04T13:57:13Z", "updated_at": "2022-09-29T08:17:47Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I just watched a demo video for the [JupyterLab Chart Editor](https://www.crowdcast.io/e/introducing-JupyterLab-Chart-Editor/) which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab, and I think registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue).\r\n\r\nJupyterLab extensions can also open files from a dialogue as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe.\r\n\r\nThis made me wonder about what `datasette` integration with JupyterLab might do.\r\n\r\nFor example, by right-clicking on a CSV file (for which there is already a CSV table view) in the file browser, offer a *View / Run as datasette* file viewer option that will:\r\n\r\n- run the CSV file through `csvs-to-sqlite`;\r\n- launch the `datasette` server and display the `datasette` view in a JupyterLab panel.\r\n\r\n(? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all `datasette` viewed CSVs and runs on a single port?) 
\r\n\r\nAs a freebie, the `datasette` API would allow you to run efficient SQL queries against the file eg using using `pandas.read_sql()` queries in a notebook in the same space.\r\n\r\nRelated:\r\n\r\n- [JupyterLab extensions docs](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html)\r\n- a [cookiecutter for wrting JupyterLab extensions using Javascript](https://github.com/jupyterlab/extension-cookiecutter-js)\r\n- a [cookiecutter for writing JupyterLab extensions using Typescript](https://github.com/jupyterlab/extension-cookiecutter-ts)\r\n- tutorial: [Let\u2019s Make an xkcd JupyterLab Extension](https://jupyterlab.readthedocs.io/en/stable/developer/xkcd_extension_tutorial.html)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/370/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 377166793, "node_id": "MDU6SXNzdWUzNzcxNjY3OTM=", "number": 372, "title": "Docker build tools", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-11-04T16:02:35Z", "updated_at": "2018-11-04T16:02:35Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "In terms of small pieces lightly joined, I note that there are several tools starting to appear for building generating Dockerfiles and building Docker containers from simpler components such as `requirements.txt` files.\r\n\r\nIf plugin/extensions builders want to include additional packages, then things like incremental builds of composable builds that add additional items into a base `datasette` container may be required.\r\n\r\nExamples of Dockerfile generators / container builders:\r\n\r\n- [openshift/source-to-image (s2i)](https://github.com/openshift/source-to-image)\r\n- [jupyter/repo2docker](https://github.com/jupyter/repo2docker)\r\n- [stencila/dockter](https://github.com/stencila/dockter)\r\n\r\nDiscussions / threads (via Binderhub gitter) on:\r\n- [why `repo2docker` not `s2i`](http://words.yuvi.in/post/why-not-s2i/)\r\n- [why `dockter` not `repo2docker`](https://twitter.com/choldgraf/status/1058499607309647872)\r\n- [composability in `s2i`](https://trello.com/c/AexIVZNf/1008-8-composable-builds-builds-evg)\r\n\r\nRelates to things like:\r\n\r\n- https://github.com/simonw/datasette/pull/280", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/372/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 2, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 440325850, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2", "number": 452, "title": "SQL builder utility classes", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-04T13:57:47Z", "updated_at": "2019-05-04T14:03:04Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/452", "body": "This adds a straightforward set of classes to aid in the construction of\r\nSQL queries.\r\n\r\nMy 
plan for this was to allow plugins to manipulate the\r\nDatasette-generated SQL in a more structured way. I'm not sure that's\r\ngoing to work, but I feel like this is still a step forward - it\r\nreduces the number of intermediate variables in `TableView.data` which\r\naids readability, and also factors out a lot of the boring string\r\nconcatenation.\r\n\r\nThere are a fair number of minor structure changes in here too as I've\r\ntried to make the ordering of `TableView.data` a bit more logical. As\r\nfar as I can tell, I haven't broken anything...", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/452/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 527710055, "node_id": "MDU6SXNzdWU1Mjc3MTAwNTU=", "number": 640, "title": "Nicer error message for heroku publish name clash", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-11-24T14:57:07Z", "updated_at": "2019-12-06T07:19:34Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If you try to publish to Heroku using no set name (i.e. the default `datasette` name) and a project already exists under that name, you get a meaningful error report on the first line followed by Py error messages that drown it out:\r\n\r\n```\r\nCreating datasette... !\r\n \u25b8 Name datasette is already taken\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasette\", line 10, in \r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 764, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 717, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 956, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 555, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/NNNNN/Library/Python/3.7/lib/python/site-packages/datasette/publish/heroku.py\", line 124, in heroku\r\n create_output = check_output(cmd).decode(\"utf8\")\r\n File \"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py\", line 411, in check_output\r\n **kwargs).stdout\r\n File \"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py\", line 512, in run\r\n output=stdout, stderr=stderr)\r\nsubprocess.CalledProcessError: Command '['heroku', 'apps:create', 'datasette', '--json']' returned non-zero exit status 1.\r\n```\r\n\r\nIt would be neater if:\r\n\r\n- the Py error message was caught;\r\n- the report suggested setting a project name using `-n` etc.\r\n\r\nIt may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call?", "repo": {"value": 107914493, "label": 
"datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/640/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 546073980, "node_id": "MDU6SXNzdWU1NDYwNzM5ODA=", "number": 74, "title": "Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column", "user": {"value": 15092, "label": "jayvdb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-01-07T04:35:50Z", "updated_at": "2020-01-12T07:21:17Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "openSUSE 15.1 is using python 3.6.5 and click-7.0 , however it has test failures while openSUSE Tumbleweed on py37 passes.\r\n\r\nMost fail on the cli exit code like\r\n```py\r\n[ 74s] =================================== FAILURES ===================================\r\n[ 74s] _________________________________ test_tables __________________________________\r\n[ 74s] \r\n[ 74s] db_path = '/tmp/pytest-of-abuild/pytest-0/test_tables0/test.db'\r\n[ 74s] \r\n[ 74s] def test_tables(db_path):\r\n[ 74s] result = CliRunner().invoke(cli.cli, [\"tables\", db_path])\r\n[ 74s] > assert '[{\"table\": \"Gosh\"},\\n {\"table\": \"Gosh2\"}]' == result.output.strip()\r\n[ 74s] E assert '[{\"table\": \"...e\": \"Gosh2\"}]' == ''\r\n[ 74s] E - [{\"table\": \"Gosh\"},\r\n[ 74s] E - {\"table\": \"Gosh2\"}]\r\n[ 74s] \r\n[ 74s] tests/test_cli.py:28: AssertionError\r\n```\r\n\r\npackaging project at https://build.opensuse.org/package/show/home:jayvdb:py-new/python-sqlite-utils\r\n\r\nI'll keep digging into this after I have github-to-sqlite working on Tumbleweed, as I'll need openSUSE Leap 15.1 working before I can submit this into the main python repo.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/74/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 642572841, "node_id": "MDU6SXNzdWU2NDI1NzI4NDE=", "number": 859, "title": "Database page loads too slowly with many large tables (due to table counts)", "user": {"value": 3243482, "label": "abdusco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 21, "created_at": "2020-06-21T14:23:17Z", "updated_at": "2021-08-25T21:59:55Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hey,\r\nI have a database that I save in HTML from couple of web scrapers. There are around 200k+, 50+ rows in a couple of tables, with sqlite file weighing around 600MB.\r\n\r\nThe app runs on a VPS with 2 core CPU, 4GB RAM and refreshing database page regularly takes more than 10 seconds. I was suspecting that counting tables was the culprit, but manually running `select count(*) from table_name` for the largest table finishes under a second.\r\n\r\nI've looked at the source code. There's a check for index page for mutable databases larger than 100MB\r\nhttps://github.com/simonw/datasette/blob/799c5d53570d773203527f19530cf772dc2eeb24/datasette/views/index.py#L15\r\n\r\nbut this check is not performed for database page. 
\r\nI've manually crippled `Database::table_counts` method\r\n```py\r\nasync def table_counts(self, limit=10):\r\n if not self.is_mutable and self.cached_table_counts is not None:\r\n return self.cached_table_counts\r\n # Try to get counts for each table, $limit timeout for each count\r\n counts = {}\r\n for table in await self.table_names():\r\n try:\r\n # table_count = (\r\n # await self.execute(\r\n # \"select count(*) from [{}]\".format(table),\r\n # custom_time_limit=limit,\r\n # )\r\n # ).rows[0][0]\r\n counts[table] = 10 # table_count\r\n # In some cases I saw \"SQL Logic Error\" here in addition to\r\n # QueryInterrupted - so we catch that too:\r\n except (QueryInterrupted, sqlite3.OperationalError, sqlite3.DatabaseError):\r\n counts[table] = None\r\n if not self.is_mutable:\r\n self.cached_table_counts = counts\r\n return counts\r\n```\r\n\r\nnow the page loads in <100ms.\r\n\r\nIs it possible to apply size check on database page too?\r\n\r\n
\r\n\r\n/-/versions output\r\n\r\n
\r\n{\r\n    \"python\": {\r\n        \"version\": \"3.8.0\",\r\n        \"full\": \"3.8.0 (default, Oct 28 2019, 16:14:01) \\n[GCC 8.3.0]\"\r\n    },\r\n    \"datasette\": {\r\n        \"version\": \"0.44\"\r\n    },\r\n    \"asgi\": \"3.0\",\r\n    \"uvicorn\": \"0.11.5\",\r\n    \"sqlite\": {\r\n        \"version\": \"3.22.0\",\r\n        \"fts_versions\": [\r\n            \"FTS5\",\r\n            \"FTS4\",\r\n            \"FTS3\"\r\n        ],\r\n        \"extensions\": {\r\n            \"json1\": null\r\n        },\r\n        \"compile_options\": [\r\n            \"COMPILER=gcc-7.4.0\",\r\n            \"ENABLE_COLUMN_METADATA\",\r\n            \"ENABLE_DBSTAT_VTAB\",\r\n            \"ENABLE_FTS3\",\r\n            \"ENABLE_FTS3_PARENTHESIS\",\r\n            \"ENABLE_FTS3_TOKENIZER\",\r\n            \"ENABLE_FTS4\",\r\n            \"ENABLE_FTS5\",\r\n            \"ENABLE_JSON1\",\r\n            \"ENABLE_LOAD_EXTENSION\",\r\n            \"ENABLE_PREUPDATE_HOOK\",\r\n            \"ENABLE_RTREE\",\r\n            \"ENABLE_SESSION\",\r\n            \"ENABLE_STMTVTAB\",\r\n            \"ENABLE_UNLOCK_NOTIFY\",\r\n            \"ENABLE_UPDATE_DELETE_LIMIT\",\r\n            \"HAVE_ISNAN\",\r\n            \"LIKE_DOESNT_MATCH_BLOBS\",\r\n            \"MAX_SCHEMA_RETRY=25\",\r\n            \"MAX_VARIABLE_NUMBER=250000\",\r\n            \"OMIT_LOOKASIDE\",\r\n            \"SECURE_DELETE\",\r\n            \"SOUNDEX\",\r\n            \"TEMP_STORE=1\",\r\n            \"THREADSAFE=1\"\r\n        ]\r\n    }\r\n}\r\n
\r\n
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/859/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 648749062, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQyNTA1MDg4", "number": 883, "title": "Skip counting hidden tables", "user": {"value": 3243482, "label": "abdusco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-07-01T07:38:08Z", "updated_at": "2020-07-02T00:25:44Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/883", "body": "Potential fix for https://github.com/simonw/datasette/issues/859.\r\n\r\nDisabling table counts for hidden tables speeds up database page quite a bit. In my setup it reduced load time by 2/3 (~300 -> ~90ms)", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/883/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 688670158, "node_id": "MDU6SXNzdWU2ODg2NzAxNTg=", "number": 147, "title": "SQLITE_MAX_VARS maybe hard-coded too low", "user": {"value": 96218, "label": "simonwiles"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-08-30T07:26:45Z", "updated_at": "2021-02-15T21:27:55Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I came across this while about to open an issue and PR against the documentation for `batch_size`, which is a bit incomplete.\r\n\r\nAs mentioned in #145, while:\r\n\r\n> [`SQLITE_MAX_VARIABLE_NUMBER`](https://www.sqlite.org/limits.html#max_variable_number) ... defaults to 999 for SQLite versions prior to 3.32.0 (2020-05-22) or 32766 for SQLite versions after 3.32.0.\r\n\r\nit is common that it is increased at compile time. Debian-based systems, for example, seem to ship with a version of sqlite compiled with SQLITE_MAX_VARIABLE_NUMBER set to 250,000, and I believe this is the case for homebrew installations too.\r\n\r\nIn working to understand what `batch_size` was actually doing and why, I realized that by setting `SQLITE_MAX_VARS` in `db.py` to match the value my sqlite was compiled with (I'm on Debian), I was able to decrease the time to `insert_all()` my test data set (~128k records across 7 tables) from ~26.5s to ~3.5s. Given that this about .05% of my total dataset, this is time I am keen to save...\r\n\r\nUnfortunately, it seems that `sqlite3` in the python standard library doesn't expose the `get_limit()` C API (even though `pysqlite` used to), so it's hard to know what value sqlite has been compiled with (note that this could mean, I suppose, that it's less than 999, and even hardcoding `SQLITE_MAX_VARS` to the conservative default might not be adequate. It can also be lowered -- but not raised -- at runtime). 
The best I could come up with is `echo \"\" | sqlite3 -cmd \".limits variable_number\"` (only available in `sqlite >= 2015-05-07 (3.8.10)`).\r\n\r\nObviously this couldn't be relied upon in `sqlite_utils`, but I wonder what your opinion would be about exposing `SQLITE_MAX_VARS` as a user-configurable parameter (with suitable \"here be dragons\" warnings)? I'm going to go ahead and monkey-patch it for my purposes in any event, but it seems like it might be worth considering.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/147/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 698791218, "node_id": "MDU6SXNzdWU2OTg3OTEyMTg=", "number": 50, "title": "favorites --stop_after=N stops after min(N, 200)", "user": {"value": 370930, "label": "mikepqr"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-09-11T03:38:14Z", "updated_at": "2020-09-13T05:11:14Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "For any number greater than 200, `favorites --stop_after` stops after getting 200 tweets, e.g.\r\n```\r\n$ twitter-to-sqlite favorites tweets.db --stop_after=300\r\nImporting favorites [####################################] 199\r\n$\r\n```\r\nI don't _think_ this is a limitation of the API (if you omit `--stop_after` you get some very large number, possibly all of them), so I _think_ this is a bug.", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 751195017, "node_id": "MDU6SXNzdWU3NTExOTUwMTc=", "number": 1111, "title": "Accessing a database's `.json` is slow for very large SQLite files", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-11-26T00:27:27Z", "updated_at": "2021-01-04T19:57:53Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I have a SQLite DB that's pretty large, 23GB and something like 300 million rows. I expect that most queries I run on it will be slow, which is fine, but there are some things that Datasette does that makes working with the DB very slow. 
Specifically, when I access the `.json` metadata for a table (which I believe it comes from `datasette/views/database.py`, it takes 43 seconds for the request to come in:\r\n\r\n```bash\r\n$ time curl localhost:9999/out.json\r\n{\"database\": \"out\", \"size\": 24291454976, \"tables\": [{\"name\": \"PageviewsHour\", \"columns\": [\"file\", \"code\", \"page\", \"pageviews\"], \"primary_keys\": [], \"count\": null, \"hidden\": false, \"fts_table\": null, \"foreign_keys\": {\"incoming\": [], \"outgoing\": [{\"other_table\": \"PageviewsHourFiles\", \"column\": \"file\", \"other_column\": \"file_id\"}]}, \"private\": false}, {\"name\": \"PageviewsHourFiles\", \"columns\": [\"file_id\", \"filename\", \"sha256\", \"size\", \"day\", \"hour\"], \"primary_keys\": [\"file_id\"], \"count\": null, \"hidden\": false, \"fts_table\": null, \"foreign_keys\": {\"incoming\": [{\"other_table\": \"PageviewsHour\", \"column\": \"file_id\", \"other_column\": \"file\"}], \"outgoing\": []}, \"private\": false}, {\"name\": \"sqlite_sequence\", \"columns\": [\"name\", \"seq\"], \"primary_keys\": [], \"count\": 1, \"hidden\": false, \"fts_table\": null, \"foreign_keys\": {\"incoming\": [], \"outgoing\": []}, \"private\": false}], \"hidden_count\": 0, \"views\": [], \"queries\": [], \"private\": false, \"allow_execute_sql\": true, \"query_ms\": 43340.23213386536}\r\nreal\t0m43.417s\r\nuser\t0m0.006s\r\nsys\t0m0.016s\r\n```\r\n\r\nI suspect this is because a `COUNT(*)` is happening under the hood, which, when I run it through sqlite directly, does take around the same time:\r\n\r\n```bash\r\n$ time sqlite3 out.db < <(echo \"select count(*) from PageviewsHour;\")\r\n362794272\r\n\r\nreal\t0m44.523s\r\nuser\t0m2.497s\r\nsys\t0m6.703s\r\n```\r\n\r\nI'm using the `.json` request in the [Observable Datasette Client](https://observablehq.com/@asg017/datasette-client) to 1) verify that a link passed in is a reachable Datasette instance, and 2) a quick way to look at metadata for a db. A few different solutions I can think of:\r\n\r\n1. Have some other endpoint, like `/-/datasette.json` that the Observable Datasette client can fetch from to verify that the passed in URL is a valid Datasette (doesnt solve the slow problem, feel free to split this issue into 2)\r\n2. Have a way to turn off table counts when accessing a database's `.json` view, like `?no_count=1` or something\r\n3. Maybe have a timeout on the `table_counts()` function if it takes too long. 
which is odd, because it seems like it already does that (I think?), I can debug a little more if that's the case\r\n\r\nMore than happy to debug further, or send a PR if you like one of the proposals above!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1111/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 756875827, "node_id": "MDU6SXNzdWU3NTY4NzU4Mjc=", "number": 1129, "title": "Fix footer to the bottom of the page", "user": {"value": 3243482, "label": "abdusco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-12-04T07:28:07Z", "updated_at": "2020-12-04T16:04:29Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Footer doesn't stick to the bottom if the body content isn't long enough to reach the end of viewport.\r\n\r\n![before & after](https://user-images.githubusercontent.com/3243482/101134785-f6595a80-361b-11eb-81ce-b8b5cb9c5bc2.png)\r\n\r\n\r\nThis can be fixed using flexbox.\r\n\r\n```css\r\nbody {\r\n min-height: 100vh;\r\n display: flex;\r\n flex-direction: column;\r\n}\r\n\r\n.content {\r\n flex-grow: 1;\r\n}\r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1129/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 756876238, "node_id": "MDExOlB1bGxSZXF1ZXN0NTMyMzQ4OTE5", "number": 1130, "title": "Fix footer not sticking to bottom in short pages", "user": {"value": 3243482, "label": "abdusco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-12-04T07:29:01Z", "updated_at": "2021-06-15T13:27:48Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1130", "body": "Fixes https://github.com/simonw/datasette/issues/1129", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1130/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 771511344, "node_id": "MDExOlB1bGxSZXF1ZXN0NTQzMDE1ODI1", "number": 31, "title": "Update for Big Sur", "user": {"value": 41546558, "label": "RhetTbull"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2020-12-20T04:36:45Z", "updated_at": "2023-08-08T15:52:52Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/dogsheep-photos/pulls/31", "body": "Refactored out the SQL for extracting aesthetic scores to use osxphotos -- adds compatbility for Big Sur via osxphotos which has been updated for new table names in Big Sur. 
Have not yet refactored the SQL for extracting labels which is still compatible with Big Sur.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 779088071, "node_id": "MDU6SXNzdWU3NzkwODgwNzE=", "number": 54, "title": "Archive import appears to be broken on recent exports", "user": {"value": 21148, "label": "jacobian"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-01-05T14:18:01Z", "updated_at": "2023-01-04T11:06:55Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I requested a Twitter export yesterday, and unfortunately they seem to have changed it such that `twitter-to-sqlite import` can't handle it anymore \ud83d\ude22 \r\n\r\nSo far I've ran into two issues. The first was easy to work around, but the second will take more investigation. If I can find the time I'll keep working on it and update this issue accordingly.\r\n\r\nThe issues (so far):\r\n\r\n### 1. Data seems to have moved to a `data/` subdirectory\r\n\r\nRunning `twitter-to-sqlite import` on the raw zip file reports a bunch of \"not yet implemented\" errors, and then exits without actually importing anything:\r\n\r\n```\r\n\u276f twitter-to-sqlite import tarchive.db twitter.zip\r\n...\r\ndata/manifest: not yet implemented\r\ndata/account-creation-ip: not yet implemented\r\ndata/account-suspension: not yet implemented\r\n... (dozens of more lines like this, including critical stuff like data/tweets) ...\r\n```\r\n\r\n(`tarchive.db` now exists, but is empty)\r\n\r\nWorkaround: unpack the zip file, and run `twitter-to-sqlite import tarchive.db path/to/archive/data`\r\n\r\nThat gets further, but:\r\n\r\n### 2. Some schema(s?) 
have changed\r\n\r\nAt least, the `blocks` schema seems different now:\r\n\r\n```\r\n\u276f twitter-to-sqlite import tarchive.db archive/data\r\ndirect-messages-group: not yet implemented\r\nbranch-links: not yet implemented\r\nperiscope-expired-broadcasts: not yet implemented\r\ndirect-messages: not yet implemented\r\nmute: not yet implemented\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/bin/twitter-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/twitter_to_sqlite/cli.py\", line 772, in import_\r\n archive.import_from_file(db, filepath.name, open(filepath, \"rb\").read())\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/twitter_to_sqlite/archive.py\", line 215, in import_from_file\r\n to_insert = transformer(data)\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/twitter_to_sqlite/archive.py\", line 115, in lists_member\r\n return {\"lists-member\": _list_from_common(data)}\r\n File \"/Users/jacob/Library/Caches/pypoetry/virtualenvs/jacobian-dogsheep-4AXaN4tu-py3.8/lib/python3.8/site-packages/twitter_to_sqlite/archive.py\", line 200, in _list_from_common\r\n for url in block[\"userListInfo\"][\"urls\"]:\r\nKeyError: 'urls'\r\n```\r\n\r\nThat's as far as I got before I needed to work on something else. 
I'll report back if I get further!", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 797097140, "node_id": "MDU6SXNzdWU3OTcwOTcxNDA=", "number": 60, "title": "Use Data from SQLite in other commands", "user": {"value": 22578954, "label": "daniel-butler"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-01-29T18:35:52Z", "updated_at": "2021-02-12T18:29:43Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "As a total beginner here how could you access data from the sqlite table to run other commands.\r\n\r\nWhat I am thinking is I want to get all the repos in an organization then using the repo list pull all the commit messages for each repo. \r\n\r\nI love this project by the way!", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 797784080, "node_id": "MDU6SXNzdWU3OTc3ODQwODA=", "number": 62, "title": "Stargazers and workflows commands always require an auth file when using GITHUB_TOKEN ", "user": {"value": 631242, "label": "frosencrantz"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-31T18:56:05Z", "updated_at": "2021-01-31T18:56:05Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Requested fix in https://github.com/dogsheep/github-to-sqlite/pull/59\r\n\r\nThe stargazers and workflows commands always require an auth file, even when using a `GITHUB_TOKEN`. Other commands don't require the auth file.\r\n\r\n", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/62/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 817989436, "node_id": "MDU6SXNzdWU4MTc5ODk0MzY=", "number": 242, "title": "Async support", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 13, "created_at": "2021-02-27T18:29:38Z", "updated_at": "2021-10-28T14:37:56Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Following our conversation last week, want to note this here before I forget.\r\n\r\nI've had a couple situations where I'd like to do a bunch of updates in an async event loop, but I run into SQLite's issues with concurrent writes. This feels like something sqlite-utils could help with.\r\n\r\nPeeWee ORM has a [SQLite write queue](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq) that might be a good model. 
It's using threads or gevent, but I _think_ that approach would translate well enough to asyncio. \r\n\r\nHappy to help with this, too.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/242/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 843884745, "node_id": "MDU6SXNzdWU4NDM4ODQ3NDU=", "number": 1283, "title": "advanced #export causes unexpected scrolling", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-29T22:46:57Z", "updated_at": "2021-03-29T22:46:57Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "1. Visit a datasette table page\r\n2. Click on the \"(advanced)\" link. This adds a fragment identifier \"#export\" to the URL, and scrolls down to the \"Advanced export\" div with the \"export\" id.\r\n3. Manually scroll back up, and click on a suggested facet. The fragment identifier is still present, and the app scrolls back down to the \"Advanced export\" div. I think this is unwanted behavior.\r\n\r\nThe user remedy seems to be to manually remove the \"#export\" from the URL.\r\n\r\nThis behavior happens in my project, and in:\r\nhttps://covid-19.datasettes.com/covid/economist_excess_deaths (for instance) \r\nbut not in this table: \r\nhttps://global-power-plants.datasettes.com/global-power-plants/global-power-plants", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1283/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 845794436, "node_id": "MDU6SXNzdWU4NDU3OTQ0MzY=", "number": 1284, "title": "Feature or Documentation Request: Individual table as home page template", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-03-31T03:56:17Z", "updated_at": "2021-11-04T03:15:01Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be great to have a sample showing how to move a single database that has a single table, to the index page. I'm trying it now, and find there is a real depth of Datasette and Python understanding that's required to be successful. \r\n\r\nI've got all the basic jinja concepts down... variables, template control structures, template inheritance, template overrides, css, html, the --template-dir and --static arguments, etc. \r\n\r\nBut copying the table.html file to index.html doesn't work. There are undocumented functions and filters... I can figure some of them out (yay, url_builder.py and utils/__init__.py!) but it's a slog better handled by a much stronger Python developer. \r\n\r\nOne sample would make a world of difference. The ideal form of this documentation would be a diff between the default table.html and how that would look if essentially moved to index.html. 
The use case is for everyone who wants to create a public-facing website to explore a single table at the root directory. (Maybe a second bit of documentation for people who have a single database with multiple tables.)\r\n\r\n(Hmm... might be cool to have a setting for that, where it happens automagically! If only one table, then home page is at the table level. if only one database, then home page is at the database level.... as an option.)\r\n\r\nI suppose I could ignore this, and somehow do this in the DNS settings once I hook up Vercel to a domain name, maybe.. and remove the breadcrumbs in table.html... but for now, a documentation request in the form of a diff... for viewing a single table (or a single database) at the root.\r\n\r\n(Actually, there's probably room for a whole expanded section on templates. Noticed some nice table metadata in one of the datasette examples, for instance... Hmm... maybe a whole library of solutions in one place... maybe a documentation hackathon! If that's of interest, of course it's a separate issue. )\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1284/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 847700726, "node_id": "MDU6SXNzdWU4NDc3MDA3MjY=", "number": 1285, "title": "Feature Request or Plugin Request: Numeric Range Facets", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-01T01:50:20Z", "updated_at": "2021-04-01T02:28:19Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It would be great to offer facets for numeric data ranges. \r\n\r\nThe ranges could pull from typical GIS methods of creating choropleth maps. \r\nhttps://gisgeography.com/choropleth-maps-data-classification/\r\nOf the following, for mapping, I've always preferred a Jenks Natural Breaks, or a cross between Jenks and Pretty breaks.\r\n\r\n- Equal Intervals \r\n- Quantile (equal count) \r\n- Standard Deviation\r\n- Natural Breaks (Jenks) Classification \r\n- Pretty Breaks\r\n- Some sort of Aggregate Jenks Classification (this isn't standard, but it would be nice to be able to set classification ranges that work across tables.)\r\n\r\nHere are some links for Natural Breaks, in case this method is unfamiliar.\r\n\r\n- https://en.wikipedia.org/wiki/Jenks_natural_breaks_optimization\r\n- http://wiki.gis.com/wiki/index.php/Jenks_Natural_Breaks_Classification\r\n- https://medium.com/analytics-vidhya/jenks-natural-breaks-best-range-finder-algorithm-8d1907192051\r\n\r\nPer that last link, there is a Jenks Python module... They also describe it as data-intensive for larger datasets. Maybe this is a good plugin idea.\r\n\r\nAn example of equal Intervals would be \r\n0 \u2013 < 10\r\n10 \u2013 < 20\r\n20 \u2013 < 30\r\n30 \u2013 < 40\r\n\r\nIt's kind of confusing to have that less-than sign in there. it could also be displayed as:\r\n0 \u2013 10\r\n10 \u2013 20\r\n20 \u2013 30\r\n30 \u2013 40\r\n\r\nBut then it's not completely clear which category 10 is in, for instance.\r\n\r\n(Best to right-justify.. 
and use an \"en dash\" between numbers.)\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1285/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 849220154, "node_id": "MDU6SXNzdWU4NDkyMjAxNTQ=", "number": 1286, "title": "Better default display of arrays of items", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-04-02T13:31:40Z", "updated_at": "2021-06-12T12:36:15Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Would be great to have template filters that convert array fields to bullets and/or delimited lists upon table display:\r\n```\r\n|to_bullets\r\n|to_comma_delimited\r\n|to_semicolon_delimited\r\n```\r\nor maybe: \r\n```\r\n|join_array(\"bullet\")\r\n|join_array(\"bullet\",\"square\")\r\n|join_array(\";\")\r\n|join_array(\",\")\r\n```\r\nKeeping in mind that bullets show up in html as \\ while other delimiting characters appear after the value.\r\n\r\nOf course, the fields themselves would remain as facetable arrays. ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1286/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 853672224, "node_id": "MDU6SXNzdWU4NTM2NzIyMjQ=", "number": 1294, "title": "\"You can check out any time you like. But you can never leave!\"", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-08T17:02:15Z", "updated_at": "2021-04-08T18:35:50Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "(Feel free to rename this one.)\r\n\r\n- The column gear lets you \"Show not-blank rows.\" Then it places a parameter in the URL, which a web developer would notice, but a lot of users won't notice, or know to delete it. Would be good to toggle \"Show not-blank rows\" with \"Show all rows.\" (Also would be quite helpful to have a \"Show blank rows | Show all rows\" option)\r\n- The column gear lets you \"Sort ascending\" and \"Sort descending\" but then you're stuck with some sort of sorted version thereafter, unless you know to sort the ID column, or to remove the full _sort parameter and its value in the URL. Would be good to offer a \"Remove sort\" option in the gear.\r\n- These requests are in the same camp as: https://github.com/simonw/datasette-vega/issues/36\r\n- I suspect there are other url parameter instances where similar analysis would be helpful, but the three above are the use cases I've run across. 
\r\n\r\nUPDATE:\r\n- It would be helpful to have a \"Previous page\" available for all but the first table page.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1294/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 855451460, "node_id": "MDU6SXNzdWU4NTU0NTE0NjA=", "number": 1297, "title": "Documentation: json1, and introspection endpoints", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-12T00:38:00Z", "updated_at": "2021-04-12T01:29:33Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "https://docs.datasette.io/en/stable/facets.html notes that:\r\n> If your SQLite installation provides the json1 extension (you can check using /-/versions) Datasette will automatically detect columns that contain JSON arrays...\r\n\r\nWhen I check -/versions I see two sections relevant to json1:\r\n```\r\n \"extensions\": {\r\n \"json1\": null\r\n },\r\n \"compile_options\": [\r\n ...\r\n \"ENABLE_JSON1\",\r\n```\r\n \r\n The ENABLE_JSON1 makes me think json1 is likely available. But the `\"json1\": null` made me think it wasn't available (because of the `null`). It would help if the documentation provided clarity about how to know if json1 is installed. It would also be helpful if the `/-/versions` information signalled somehow that that is to be appended to the hostname or domain name (or whatever you want to call it, or simply show it, using `example.com/-/versions` instead of `/-/versions`. Likewise on that last point, for https://docs.datasette.io/en/stable/introspection.html#introspection , at least at some point on that page detailing where those introspection endpoints go. (Sometimes documentation can be so abbreviated that it's hard for new users to figure out what's going on.)\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1297/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 855476501, "node_id": "MDU6SXNzdWU4NTU0NzY1MDE=", "number": 1298, "title": "improve table horizontal scroll experience", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-04-12T01:55:16Z", "updated_at": "2022-08-30T21:11:49Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Wide tables aren't a huge problem if you know to click and drag right. But it's not at all obvious to do that. (it also tends to blue-select any content as it's dragging.) Depending on column widths, public users might entirely miss all the columns to the right. \r\n\r\nThere is a scrollbar at the bottom of the table, but I'm displaying ALL my records because it's the only way for datasette-vega to make accurate charts. So that bottom scrollbar is likely to be missed. 
I wonder if some sort of javascript-y mouseover to an arrow might help, similar to those seen in image carousels. Ah: here's a perfect example:\r\n\r\n1. Visit http://google.com\r\n2. Search for: animals endangered\r\n3. Note the 'g-right-button' (in the code) that looks like a right-facing caret in a circle. \r\n4. Click on that and the carousel scrolls right (and 'g-left-button' appears on the left).\r\n\r\nMight be tricky to do that on a table, rather than a one-row carousel, but it's worth experimenting with.\r\n\r\nAnother option is just to put the scrollbars at the top of the table, too. \r\n\r\nMeantime, I'm trying to build a button like the \"View/hide all columns on https://salaries.news.baltimoresun.com/salaries-be494cf/2019+Maryland+state+salaries\r\nMight be nice to have that available by default, with settings in the metadata showing which are on by default.\r\n\r\n(I saw some other closed issues related to horizontal scrolling, and admit I don't entirely understand them. For instance, the animated gif at https://github.com/simonw/datasette/issues/998#issuecomment-714117534 confuses me. )\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1298/reactions\", \"total_count\": 4, \"+1\": 4, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 860722711, "node_id": "MDU6SXNzdWU4NjA3MjI3MTE=", "number": 1301, "title": "Publishing to cloudrun with immutable mode?", "user": {"value": 5413548, "label": "louispotok"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-04-18T17:51:46Z", "updated_at": "2022-10-07T02:38:04Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm a bit confused about immutable mode and publishing to cloudrun. (I want to publish with immutable mode so that I can support database downloads.)\r\n\r\nRunning `datasette publish cloudrun --extra-options=\"-i example.db\"` leads to an error:\r\n> Error: Invalid value for '-i' / '--immutable': Path 'example.db' does not exist. \r\n\r\nHowever, running `datasette publish cloudrun example.db` not only works but seems to publish in immutable mode anyway! I'm seeing this both with `/-/databases.json` and the fact that downloads are working.\r\n\r\nWhen I just `datasette serve` locally, this succeeds both ways and works as expected.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1301/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 860734722, "node_id": "MDU6SXNzdWU4NjA3MzQ3MjI=", "number": 1302, "title": "Fix disappearing facets", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-18T18:42:33Z", "updated_at": "2021-04-20T07:40:15Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "1. Clone https://github.com/mroswell/list-N\r\n2. Run `datasette disinfectants.db -o`\r\n3. Select the `Safer_or_Toxic` facet.\r\n4. Select `Toxic`.\r\n5. 
Close out the `Safer_or_Toxic` facet.\r\n6. Examine `Suggested facets` list. `Safer_or_Toxic` is GONE.\r\n7. Try some other facets. When you select an element, and then close the list, in some cases, the facet properly returns to the `Suggested facet` list... Arrays and dates properly return to the list, but fields with strings don't return to the list. \r\n\r\nSince my site is devoted to whether disinfectants are Safer or Toxic, having the suggested facet disappear from the suggested facet list is very confusing* to end-users. This, along with a few other issues, unfortunately proved beyond my own programming ability to address. So I hired a Senior-level developer to address a number of issues, including this disappearing act.\r\n\r\n8. Open a new terminal. Run `datasette disinfectants.db -m metadata.json --static static:static/ --template-dir templates/ --plugins-dir plugins/ -p 8001 -o`\r\n9. Repeat steps 3-6, but this time, the Safer_or_Toxic facet returns to the list (and the related URL parameters are removed).\r\n\r\nI'm not sure how to do a pull request for this, because the plugin contains other functionality that goes beyond this bug. I wanted the facets sorted in a certain order (both in the suggested facet list, and the detail lists) (... the detail lists were hopping around all over the place before...) I wanted the duplicate facets removed (leaving only the one where you can facet by individual item in an array.) I wanted the arrays to be presented in a prettier fashion (I did that in the template... That could be moved over to the plugin at some point)\r\n\r\nI'm thinking it'll be very helpful if applicable parts of my project's plugin (sort_suggested_facets_plugin.py) will be able to be incorporated back into datasette, but I leave that to you to consider.\r\n\r\n(* The disappearing facet bug was especially confusing because I'm removing the filters and sql from the table page, at the request of the organization. The filters and sql detail created a lot of confusion for end users who try to find disinfectants used by Hospitals, for instance, as an '=' won't find them, since they are part of the Use_site array.) My disappearing-facet confusion was documented in my own issue: https://github.com/mroswell/list-N/issues/57 (addressed by the plugin). Other facet-related issues here: https://github.com/mroswell/list-N/issues/54 (addressed by the plugin); https://github.com/mroswell/list-N/issues/15 (addressed by template); https://github.com/mroswell/list-N/issues/53 (not yet addressed). \r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1302/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 904598267, "node_id": "MDExOlB1bGxSZXF1ZXN0NjU1NzQxNDI4", "number": 1348, "title": "DRAFT: add test and scan for docker images", "user": {"value": 10801138, "label": "blairdrummond"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-05-28T03:02:12Z", "updated_at": "2021-05-28T03:06:16Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1348", "body": "**NOTE: I don't think this PR is ready, since the arm/v6 and arm/v7 images are failing pytest due to missing dependencies (gcc and friends). 
But it's pretty close.**\r\n\r\nCloses https://github.com/simonw/datasette/issues/1344 . Using a build-matrix for the platforms and [this test](https://github.com/simonw/datasette/issues/1344#issuecomment-849820019), we test all the platforms in parallel. I also threw in container scanning.\r\n\r\n### Switch `pip install` to use either tags or commit shas\r\n\r\nNotably! This also [changes the Dockerfile](https://github.com/blairdrummond/datasette/blob/7fe5315d68e04fce64b5bebf4e2d7feec44f8546/Dockerfile#L20) so that it accepts tags or commit-shas.\r\n\r\n```\r\n# It's backwards compatible with tags, but also lets you use shas\r\nroot@712071df17af:/# pip install git+git://github.com/simonw/datasette.git@0.56 \r\nCollecting git+git://github.com/simonw/datasette.git@0.56 \r\n Cloning git://github.com/simonw/datasette.git (to revision 0.56) to /tmp/pip-req-build-u6dhm945 \r\n Running command git clone -q git://github.com/simonw/datasette.git /tmp/pip-req-build-u6dhm945 \r\n Running command git checkout -q af5a7f1c09f6a902bb2a25e8edf39c7034d2e5de \r\nCollecting Jinja2<2.12.0,>=2.10.3 \r\n Downloading Jinja2-2.11.3-py2.py3-none-any.whl (125 kB) \r\n```\r\n\r\nThis lets you build the containers in CI every push for testing, which maybe resolves [this problem](https://github.com/simonw/datasette/issues/1272#issuecomment-808648974)?\r\n\r\n# Workflow run example\r\n\r\nYou can see the results in my workflow [here](https://github.com/blairdrummond/datasette/pull/2/checks?check_run_id=2690570717). The commit history is different because I squashed this branch, also in the testing branch I had to change `github.com/simonw` to `github.com/blairdrummond` for the CI to pick up my git_sha.\r\n\r\n## Why did the builds fail?\r\n\r\n**NOTE:** The results of all the tests fail, but for different reasons! A few fail to install Rust, the amd64 passes the tests (phew!) but has critical CVEs which fail the container scan, the Arm/v6 and Arm/v7 seem to fail to install the test dependencies due to missing programs like `gcc`. 
(`gcc` is not sufficient though, as [this run](https://github.com/blairdrummond/datasette/pull/3/checks?check_run_id=2690672982) indicates) ", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1348/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 950664971, "node_id": "MDU6SXNzdWU5NTA2NjQ5NzE=", "number": 1401, "title": "unordered list is not rendering bullet points in description_html on database page", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-07-22T13:24:18Z", "updated_at": "2021-10-23T13:09:10Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Thanks for this tremendous package, @simonw!\r\n\r\nIn the `description_html` for a database, I [have an unordered list](https://github.com/labordata/warehouse/blob/fcea4502e5b615b0eb3e0bdcb45ec634abe20bb6/warehouse_metadata.yml#L19-L22).\r\n\r\nHowever, on the database page on the deployed site, it is not rendering this as a bulleted list.\r\n\r\n![Screenshot 2021-07-22 at 09-21-51 nlrb](https://user-images.githubusercontent.com/536941/126645923-2777b7f1-fd4c-4d2d-af70-a35e49a07675.png)\r\n\r\nPage here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5\r\n\r\nThe documentation gives an [example of using an unordered list](https://docs.datasette.io/en/stable/metadata.html#using-yaml-for-metadata) in a `description_html`, so I expected this will work.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1401/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 951185411, "node_id": "MDU6SXNzdWU5NTExODU0MTE=", "number": 1402, "title": "feature request: social meta tags", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-07-23T01:57:23Z", "updated_at": "2021-07-26T19:31:41Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "it would be very nice if the twitter, slack, and other social media could make rich cards when people post a link to a datasette instance\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1402/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 959137143, "node_id": "MDU6SXNzdWU5NTkxMzcxNDM=", "number": 1415, "title": "feature request: document minimum permissions for service account for cloudrun", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-08-03T13:48:43Z", "updated_at": "2023-11-05T16:46:59Z", "closed_at": null, "author_association": "CONTRIBUTOR", 
"pull_request": null, "body": "Thanks again for such a powerful project.\r\n\r\nFor deploying to cloudrun from github actions, I'd like to create a service account with minimal permissions.\r\n\r\nIt would be great to document what those minimum permission that need to be set in the IAM.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1415/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 959710008, "node_id": "MDU6SXNzdWU5NTk3MTAwMDg=", "number": 1419, "title": "`publish cloudrun` should deploy a more recent SQLite version", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-08-04T00:45:55Z", "updated_at": "2021-08-05T03:23:24Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I recently changed from deploying a datasette using `datasette publish heroku` to `datasette publish cloudrun`. [A query that ran on the heroku site](https://odpr.bunkum.us/odpr-6c2f4fc?sql=with+pivot_members+as+%28%0D%0A++select%0D%0A++++f_num%2C%0D%0A++++max%28union_name%29+as+union_name%2C%0D%0A++++max%28aff_abbr%29+as+abbreviation%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2007%0D%0A++++%29+as+%222007%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2008%0D%0A++++%29+as+%222008%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2009%0D%0A++++%29+as+%222009%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2010%0D%0A++++%29+as+%222010%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2011%0D%0A++++%29+as+%222011%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2012%0D%0A++++%29+as+%222012%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2013%0D%0A++++%29+as+%222013%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2014%0D%0A++++%29+as+%222014%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2015%0D%0A++++%29+as+%222015%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2016%0D%0A++++%29+as+%222016%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2017%0D%0A++++%29+as+%222017%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2018%0D%0A++++%29+as+%222018%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2019%0D%0A++++%29+as+%222019%22%0D%0A++from%0D%0A++++lm_data%0D%0A++++left+join+ar_membership+using+%28rpt_id%29%0D%0A++where%0D%0A++++members+%21%3D+%27%27%0D%0A++++and+trim%28desig_name%29+%3D+%27NHQ%27%0D%0A++++and+union_name+not+in+%28%0D%0A++++++%27AFL-CIO%27%2C%0D%0A++++++%27CHANGE+TO+WIN%27%2C%0D%0A++++++%27FOOD+ALLIED+SVC+TRADES+DEPT+AFL-CIO%27%0D%0A++++%29%0D%0A++++and+f_num+not+in+%28387%2C+296%2C+123%2C+347%2C+531897%2C+30410%2C+49%29%0D%0A++++and+lower%28ar_membership.category%29+IN+%28%0D%0A++++++%27active+members%27%2C%0D%0A++++++%27active+professional%27%2C%0D%0A++++++%27regular+members
%27%2C%0D%0A++++++%27full+time+member%27%2C%0D%0A++++++%27full+time+members%27%2C%0D%0A++++++%27active+education+support+professional%27%2C%0D%0A++++++%27active%27%2C%0D%0A++++++%27active+member%27%2C%0D%0A++++++%27members%27%2C%0D%0A++++++%27see+item+69%27%2C%0D%0A++++++%27regular%27%2C%0D%0A++++++%27dues+paying+members%27%2C%0D%0A++++++%27building+trades+journeyman%27%2C%0D%0A++++++%27full+per+capita+tax+payers%27%2C%0D%0A++++++%27active+postal%27%2C%0D%0A++++++%27active+membership%27%2C%0D%0A++++++%27full-time+member%27%2C%0D%0A++++++%27regular+active+member%27%2C%0D%0A++++++%27journeyman%27%2C%0D%0A++++++%27member%27%2C%0D%0A++++++%27journeymen%27%2C%0D%0A++++++%27one+half+per+capita+tax+payers%27%2C%0D%0A++++++%27schedule+1%27%2C%0D%0A++++++%27active+memebers%27%2C%0D%0A++++++%27active+members+-+us%27%2C%0D%0A++++++%27dues+paying+membership%27%0D%0A++++%29%0D%0A++GROUP+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++*%0D%0Afrom%0D%0A++pivot_members%0D%0Aorder+by%0D%0A++%222019%22+desc%3B), now throws a syntax error on the [cloudrun site](https://labordata.bunkum.us/odpr-6c2f4fc?sql=with+pivot_members+as+%28%0D%0A++select%0D%0A++++f_num%2C%0D%0A++++max%28union_name%29+as+union_name%2C%0D%0A++++max%28aff_abbr%29+as+abbreviation%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2007%0D%0A++++%29+as+%222007%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2008%0D%0A++++%29+as+%222008%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2009%0D%0A++++%29+as+%222009%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2010%0D%0A++++%29+as+%222010%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2011%0D%0A++++%29+as+%222011%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2012%0D%0A++++%29+as+%222012%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2013%0D%0A++++%29+as+%222013%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2014%0D%0A++++%29+as+%222014%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2015%0D%0A++++%29+as+%222015%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2016%0D%0A++++%29+as+%222016%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2017%0D%0A++++%29+as+%222017%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2018%0D%0A++++%29+as+%222018%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2019%0D%0A++++%29+as+%222019%22%0D%0A++from%0D%0A++++lm_data%0D%0A++++left+join+ar_membership+using+%28rpt_id%29%0D%0A++where%0D%0A++++members+%21%3D+%27%27%0D%0A++++and+trim%28desig_name%29+%3D+%27NHQ%27%0D%0A++++and+union_name+not+in+%28%0D%0A++++++%27AFL-CIO%27%2C%0D%0A++++++%27CHANGE+TO+WIN%27%2C%0D%0A++++++%27FOOD+ALLIED+SVC+TRADES+DEPT+AFL-CIO%27%0D%0A++++%29%0D%0A++++and+f_num+not+in+%28387%2C+296%2C+123%2C+347%2C+531897%2C+30410%2C+49%29%0D%0A++++and+lower%28ar_membership.category%29+IN+%28%0D%0A++++++%27active+members%27%2C%0D%0A++++++%27active+professional%27%2C%0D%0A++++++%27regular+members%27%2C%0D%0A++++++%27full+time+member%27%2C%0D%0A++++++%27full+time+members%27%2C%0D%0A++++++%27active+education+support+professional%27%2C%0D%0A++++++%27active%27%2C%0D%0A++++++%27active+member%27%2C%0D%0A++++++%27members
%27%2C%0D%0A++++++%27see+item+69%27%2C%0D%0A++++++%27regular%27%2C%0D%0A++++++%27dues+paying+members%27%2C%0D%0A++++++%27building+trades+journeyman%27%2C%0D%0A++++++%27full+per+capita+tax+payers%27%2C%0D%0A++++++%27active+postal%27%2C%0D%0A++++++%27active+membership%27%2C%0D%0A++++++%27full-time+member%27%2C%0D%0A++++++%27regular+active+member%27%2C%0D%0A++++++%27journeyman%27%2C%0D%0A++++++%27member%27%2C%0D%0A++++++%27journeymen%27%2C%0D%0A++++++%27one+half+per+capita+tax+payers%27%2C%0D%0A++++++%27schedule+1%27%2C%0D%0A++++++%27active+memebers%27%2C%0D%0A++++++%27active+members+-+us%27%2C%0D%0A++++++%27dues+paying+membership%27%0D%0A++++%29%0D%0A++GROUP+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++*%0D%0Afrom%0D%0A++pivot_members%0D%0Aorder+by%0D%0A++%222019%22+desc%3B).\r\n\r\nI suspect this is because they are running different versions of sqlite3.\r\n\r\n- Heroku: sqlite3 3.31.1 ([-/versions](https://odpr.bunkum.us/-/versions))\r\n- Cloudrun: sqlite3 3.27.2 ([-/versions](https://labordata.bunkum.us/-/versions))\r\n\r\nIf so, it would be great to\r\n\r\n1. harmonize the sqlite3 versions across platforms\r\n2. update the docker files so as to update the sqlite3 version for cloudrun", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1419/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 988552851, "node_id": "MDU6SXNzdWU5ODg1NTI4NTE=", "number": 1456, "title": "conda install results in non-functioning `datasette serve` due to out-of-date asgiref", "user": {"value": 51016, "label": "ctb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-05T16:59:55Z", "updated_at": "2021-09-05T16:59:55Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Over in https://github.com/ctb/2021-sourmash-datasette, I discovered that the following commands fail:\r\n```\r\nconda create -n datasette4 -y datasette=0.58.1\r\nconda activate datasette4\r\ndatasette gathertax.db\r\n```\r\nwith `ImportError: cannot import name 'WebSocketScope' from 'asgiref.typing'`.\r\n\r\nThis appears to be because asgiref 3.3.4 doesn't have WebSocketScope, but later versions do - a simple\r\n\r\n```\r\npip install asgiref==3.4.1\r\n```\r\n\r\nfixes the problem for me, at least to the point where I can run datasette and poke around as usual.\r\n\r\nI note that over in the conda-forge recipe, https://github.com/conda-forge/datasette-feedstock/blob/master/recipe/meta.yaml pins asgiref to < 3.4.0, but I'm not sure why - so I'm not sure how to best resolve this issue :).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1456/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 988556488, "node_id": "MDU6SXNzdWU5ODg1NTY0ODg=", "number": 1459, "title": "suggestion: allow `datasette --open` to take a relative URL", "user": {"value": 51016, "label": "ctb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": 
"2021-09-05T17:17:07Z", "updated_at": "2021-09-05T19:59:15Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "(soft suggestion because I'm not sure I'm using datasette right yet)\r\n\r\nOver at https://github.com/ctb/2021-sourmash-datasette, I'm playing around with datasette, and I'm creating some static pages to send people to the right facets. There may well be better ways of achieving this end goal, and I will find out if so, I'm sure!\r\n\r\nBut regardless I think it might be neat to support an option to allow `-o/--open` to take a relative URL, that then gets appended to the hostname and port. This would let me improve my documentation. I don't see any downsides, either, but \ud83e\udd37 there may well be some :)\r\n\r\nHappy to dig in and provide a PR if it's of interest. I'm not sure off the top of my head how to support an optional value to a parameter in argparse - the current `-o` behavior is kinda nice so it'd be suboptimal to require a url for `-o`. Maybe `--open-url=` or something would work?\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1459/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 989109888, "node_id": "MDU6SXNzdWU5ODkxMDk4ODg=", "number": 1460, "title": "Override column metadata with metadata from another column", "user": {"value": 72577720, "label": "MichaelTiemannOSC"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-06T12:13:33Z", "updated_at": "2021-09-06T12:13:33Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I have a table from the PUDL project (https://github.com/catalyst-cooperative/pudl) that looks like this:\r\n\r\n```\r\nCREATE TABLE fuel_ferc1 (\r\n\tid INTEGER NOT NULL, \r\n\trecord_id TEXT, \r\n\tutility_id_ferc1 INTEGER, \r\n\treport_year INTEGER, \r\n\tplant_name_ferc1 TEXT, \r\n\tfuel_type_code_pudl VARCHAR(7), \r\n\tfuel_unit VARCHAR(7),\r\n\tfuel_qty_burned FLOAT,\r\n\tfuel_mmbtu_per_unit FLOAT, \r\n\tfuel_cost_per_unit_burned FLOAT, \r\n\tfuel_cost_per_unit_delivered FLOAT, \r\n\tfuel_cost_per_mmbtu FLOAT, \r\n\tPRIMARY KEY (id), \r\n\tFOREIGN KEY(plant_name_ferc1, utility_id_ferc1) REFERENCES plants_ferc1 (plant_name_ferc1, utility_id_ferc1), \r\n\tCONSTRAINT fuel_ferc1_fuel_type_code_pudl_enum CHECK (fuel_type_code_pudl IN ('coal', 'oil', 'gas', 'solar', 'wind', 'hydro', 'nuclear', 'waste', 'unknown')), \r\n\tCONSTRAINT fuel_ferc1_fuel_unit_enum CHECK (fuel_unit IN ('ton', 'mcf', 'bbl', 'gal', 'kgal', 'gramsU', 'kgU', 'klbs', 'btu', 'mmbtu', 'mwdth', 'mwhth', 'unknown'))\r\n);\r\n```\r\n\r\nNote that `fuel_unit` is a unit that **pint** can understand, and that `fuel_qty_burned` is a column of data that could be expressed in terms of actual units, not merely as a dimensionless number. Ditto the `fuel_cost_per_unit_...` columns. 
Is there a way to give a column a default metadata unit (such as *tons* or *USD/ton*) and then let that be overridden when the metadata in another column says *barrels* or *USD/gramsU*?\r\n\r\n@catalyst-cooperative", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1460/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 991191951, "node_id": "MDU6SXNzdWU5OTExOTE5NTE=", "number": 1464, "title": "clean checkout & clean environment has test failures", "user": {"value": 51016, "label": "ctb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2021-09-08T14:16:23Z", "updated_at": "2021-09-13T22:17:17Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I followed the instructions [here](https://docs.datasette.io/en/stable/contributing.html#setting-up-a-development-environment), and even after running `python update-docs-help.py` I get the following failed tests -- any thoughts?\r\n\r\n```\r\nFAILED tests/test_api.py::test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3]\r\nFAILED tests/test_api.py::test_searchmode[table_metadata1-_search=te*+AND+do*-expected_rows1]\r\nFAILED tests/test_api.py::test_searchmode[table_metadata2-_search=te*+AND+do*&_searchmode=raw-expected_rows2]\r\n```\r\n\r\nThis is with python 3.9.7 and lots of other packages, as in attached environment listing from `conda list`.\r\n[conda-installed.txt](https://github.com/simonw/datasette/files/7129487/conda-installed.txt)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1464/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 991206402, "node_id": "MDExOlB1bGxSZXF1ZXN0NzI5NzA0NTM3", "number": 1465, "title": "add support for -o --get /path", "user": {"value": 51016, "label": "ctb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-08T14:30:42Z", "updated_at": "2021-09-08T14:31:45Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1465", "body": "Fixes https://github.com/simonw/datasette/issues/1459\r\n\r\nAdds support for `--open --get /path` to be used in combination.\r\n\r\nIf `--open` is provided alone, datasette will open a web page to a default URL.\r\nIf `--get ` is provided alone, datasette will output the result of doing a GET to that URL and then exit.\r\nIf `--open --get ` are provided together, datasette will open a web page to that URL.\r\n\r\nTODO items:\r\n- [ ] update documentation\r\n- [ ] print out error message when `--root --open --get ` is used\r\n- [ ] adjust code to require that `` start with a `/` when `-o --get ` is used\r\n- [ ] add test(s)\r\n\r\nnote, '@CTB' is used in this PR to flag code that needs revisiting.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/1465/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 1, "state_reason": null} {"id": 999902754, "node_id": "I_kwDOBm6k_c47mU4i", "number": 1473, "title": "base logo link visits `undefined` rather than href url", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-09-18T04:17:04Z", "updated_at": "2021-09-19T00:45:32Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I have two connected sites:\r\nhttp://www.SaferOrToxic.org \r\n(a Hugo website)\r\nand:\r\nhttp://disinfectants.SaferOrToxic.org/disinfectants/listN\r\n(a datasette table page)\r\n\r\nThe latter is linked as \"The List\" in the former's menu.\r\n(I'd love a prettier URL, but that's what I've got.)\r\n\r\nOn:\r\nhttp://disinfectants.SaferOrToxic.org/disinfectants/listN\r\n... all the other menu links should point back to:\r\nhttps://www.SaferOrToxic.org \r\nAnd they do!\r\n\r\nBut the logo, for some reason--though it has an href pointing to:\r\nhttps://www.SaferOrToxic.org\r\nKeeps going to this instead:\r\nhttps://disinfectants.saferortoxic.org/disinfectants/undefined\r\n\r\nWhat is causing that? How can I fix it?\r\n\r\nIn #1284 back in March, I was doing battle with the index.html template, in a still unresolved issue. (I wanted only a single table page at the root.)\r\n\r\nBut I thought, well, if I can't resolve that, at least I could just point the main website to the datasette page (\"The List,\") and then have the List point back to the home website.\r\n\r\nThe menu hrefs to https://www.SaferOrToxic.org work just fine, exactly as they should, from the datasette page. 
Even the Home link works properly.\r\n\r\nBut the logo link keeps rewriting to: https://disinfectants.saferortoxic.org/disinfectants/undefined\r\n\r\nThis is the HTML:\r\n```\r\n\"Logo:\r\n```\r\n\r\n\r\nIs this somehow related to cloudflare?\r\nOr something in the datasette code?\r\n\r\nI'm starting to think it's a cloudflare issue.\r\n\"Screen\r\n\r\nCan I at least rule out it being a datasette issue?\r\n\r\nMy repository is here:\r\nhttps://github.com/mroswell/list-N\r\n\r\n(BTW, I couldn't figure out how to reference a local image, either, on the datasette side, which is why I'm using the image from the www home page.)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1473/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1006781949, "node_id": "I_kwDOBm6k_c48AkX9", "number": 1478, "title": "Documentation Request: Feature alternative ID instead of default ID", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-24T19:56:13Z", "updated_at": "2021-09-25T16:18:54Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "My data already has an ID that comes from a federal agency.\r\nWould love to have documentation on how to modify the template to:\r\n- Remove the generated ID from the table\r\n- Link the federal ID to the detail page\r\n- and to ensure that the JSON file uses that as the ID. I'd be happy to include the database ID in the export, but not as a key.\r\n\r\nI don't want to remove the ID from the database, though, because my experience with the federal agency is that data often has anomalies. I don't want all hell to break loose if they end up applying the same ID to multiple rows (which they haven't done yet). I just don't want it to display in the table or the data exports.\r\n\r\nPerhaps this isn't a template issue, maybe more of a db manipulation...\r\n\r\nMargie", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1478/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1015646369, "node_id": "I_kwDOBm6k_c48iYih", "number": 1480, "title": "Exceeding Cloud Run memory limits when deploying a 4.8G database", "user": {"value": 110420, "label": "ghing"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2021-10-04T21:20:24Z", "updated_at": "2022-10-07T04:39:10Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When I try to deploy a 4.8G SQLite database to Google Cloud Run, I get this error message:\r\n\r\n> Memory limit of 8192M exceeded with 8826M used. 
Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits\r\n\r\nUnfortunately, the maximum amount of memory that can be allocated to an instance is 8192M.\r\n\r\nNaively profiling the memory usage of running Datasette with this database locally on my MacBook shows the following memory usage (using Activity Monitor) when I just start up Datasette locally:\r\n\r\n- Real Memory Size: 70.6 MB\r\n- Virtual Memory Size: 4.51 GB\r\n- Shared Memory Size: 2.5 MB\r\n- Private Memory Size: 57.4 MB\r\n\r\nI'm trying to understand if there's a query or other operation that gets run during container deployment that causes memory use to be so large and if this can be avoided somehow.\r\n\r\nThis is somewhat related to #1082, but on a different platform, so I decided to open a new issue.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1480/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1033678984, "node_id": "PR_kwDOBm6k_c4tjgJ8", "number": 1495, "title": "Allow routes to have extra options", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-10-22T15:00:45Z", "updated_at": "2021-11-19T15:36:27Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1495", "body": "Right now, datasette routes can only be a 2-tuple of `(regex, view_fn)`. \r\n\r\nIf it was possible for datasette to handle extra options, like [standard Django does](https://docs.djangoproject.com/en/3.2/topics/http/urls/#passing-extra-options-to-view-functions), it would add flexibility for plugin authors.\r\n\r\nFor example, if extra options were enabled, then it would be easy to make a single table the home page (#1284). 
This plugin would accomplish it.\r\n\r\n```python\r\nfrom datasette import hookimpl\r\nfrom datasette.views.table import TableView\r\n\r\n@hookimpl\r\ndef register_routes(datasette):\r\n return [\r\n (r\"^/$\", TableView.as_view(datasette), {'db_name': 'DB_NAME',\r\n 'table': 'TABLE_NAME'})\r\n ]\r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1495/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1060631257, "node_id": "I_kwDOBm6k_c4_N_LZ", "number": 1528, "title": "Add new `\"sql_file\"` key to Canned Queries in metadata?", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-11-22T21:58:01Z", "updated_at": "2022-06-10T03:23:08Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Currently for canned queries, you have to inline SQL in your `metadata.yaml` like so:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql: |-\r\n select neighborhood, facet_cities.name, state\r\n from facetable\r\n join facet_cities on facetable.city_id = facet_cities.id\r\n where neighborhood like '%' || :text || '%'\r\n order by neighborhood\r\n title: Search neighborhoods\r\n```\r\n\r\nThis works fine, but for a few reasons, I usually have my canned queries already written in separate `.sql` files. I'd like to instead re-use those instead of re-writing it. \r\n\r\nSo, I'd like to see a new `\"sql_file\"` key that works like so:\r\n\r\n`metadata.yaml`:\r\n\r\n```yaml\r\ndatabases:\r\n fixtures:\r\n queries:\r\n neighborhood_search:\r\n sql_file: neighborhood_search.sql\r\n title: Search neighborhoods\r\n```\r\n`neighborhood_search.sql`:\r\n```sql\r\nselect neighborhood, facet_cities.name, state\r\nfrom facetable\r\njoin facet_cities on facetable.city_id = facet_cities.id\r\nwhere neighborhood like '%' || :text || '%'\r\norder by neighborhood\r\n```\r\n\r\nBoth of these would work in the exact same way, where Datasette would instead open + include `neighborhood_search.sql` on startup. 
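\r\n\r\nPurely as an illustration of the idea (a hypothetical sketch, not actual Datasette internals), resolving `sql_file` relative to the metadata file at startup could look roughly like this:\r\n\r\n```python\r\nfrom pathlib import Path\r\n\r\nimport yaml\r\n\r\n\r\ndef load_canned_queries(metadata_path):\r\n    # Read metadata.yaml and swap any \"sql_file\" key for the referenced file's\r\n    # contents, resolved relative to the metadata file itself.\r\n    metadata_path = Path(metadata_path)\r\n    metadata = yaml.safe_load(metadata_path.read_text())\r\n    for database in (metadata.get(\"databases\") or {}).values():\r\n        for query in (database.get(\"queries\") or {}).values():\r\n            if not isinstance(query, dict):\r\n                continue\r\n            sql_file = query.pop(\"sql_file\", None)\r\n            if sql_file and \"sql\" not in query:\r\n                query[\"sql\"] = (metadata_path.parent / sql_file).read_text()\r\n    return metadata\r\n```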
\r\n\r\n\r\nA few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml:\r\n\r\n- Keeping SQL in standalone SQL files means syntax highlighting and other text editor integrations in my code\r\n- Multiline strings in yaml, while functional, are a tad cumbersome and are hard to edit\r\n- Works well with other tools (can pipe `.sql` files into the `sqlite3` CLI, or use with other SQLite clients easier)\r\n- Typically my canned queries are quite long compared to everything else in my metadata.yaml, so I'd love to separate it where possible\r\n\r\nLet me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1528/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1077620955, "node_id": "I_kwDOBm6k_c5AOzDb", "number": 1549, "title": "Redesign CSV export to improve usability", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 5, "created_at": "2021-12-11T19:02:12Z", "updated_at": "2022-04-04T11:17:13Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "*Original title: Set content type for CSV so that browsers will attempt to download instead opening in the browser*\r\n\r\nRight now, if the user clicks on the CSV related to a table or a query, the response header for the content type is \r\n\r\n\"content-type: text/plain; charset=utf-8\"\r\n\r\nMost browsers will try to open a file with this content-type in the browser. 
\r\n\r\nThis is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program they next need to save the page through their browser.\r\n\r\nIt would be great if the response header could be something like \r\n\r\n```\r\n'Content-type: text/csv');\r\n'Content-disposition: attachment;filename=MyVerySpecial.csv');\r\n```\r\n\r\nwhich would lead browsers to open a download dialog.\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1549/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1079111498, "node_id": "I_kwDOBm6k_c5AUe9K", "number": 1553, "title": "if csv export is truncated in non streaming mode set informative response header", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-13T22:50:44Z", "updated_at": "2021-12-16T19:17:28Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "streaming mode is currently not enabled for custom queries, so the queries will be truncated to the max row limit.\r\n\r\nit would be great if, when a response is truncated, a header signalling that truncation were set on the response.\r\n\r\ni need to write some pagination code for getting full results back for a custom query and it would make the code much better if i could reliably know when there is nothing more to limit/offset ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1553/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1090810196, "node_id": "I_kwDOBm6k_c5BBHFU", "number": 1583, "title": "consider adding deletion step of cloudbuild artifacts to gcloud publish", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-30T00:33:23Z", "updated_at": "2021-12-30T00:34:16Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "right now, as part of the publish process, images and other artifacts are stored to gcloud's cloud storage before being deployed to cloudrun.\r\n\r\nafter successfully deploying, it would be nice if the script deleted these artifacts. 
otherwise, if you have regularly scheduled build process, you can end up paying to store lots of out of date artifacts.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1583/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1096536240, "node_id": "I_kwDOBm6k_c5BW9Cw", "number": 1586, "title": "run analyze on all databases as part of start up or publishing", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-01-07T17:52:34Z", "updated_at": "2022-02-02T07:13:37Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Running `analyze;` lets sqlite's query planner make *much* better use of any indices.\r\n\r\nIt might be nice if the analyze was run as part of the start up of \"serve\" or \"publish\".", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1586/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1108671952, "node_id": "I_kwDOBm6k_c5CFP3Q", "number": 1605, "title": "Scripted exports", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2022-01-19T23:45:55Z", "updated_at": "2022-11-30T15:06:38Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Posting this while I'm thinking about it: I mentioned at the end of [this thread](https://twitter.com/eyeseast/status/1483893011658551299) that I'm usually doing `datasette --get` to export canned queries.\r\n\r\nI used to use a tool called [datafreeze](https://github.com/pudo/datafreeze) to do scripted exports, but that project looks dead now. The ergonomics of it are pretty nice, though, and the `Freezefile.yml` structure is actually not too far from Datasette's canned queries.\r\n\r\nThis is related to the idea for `datasette query` (#1356) but I think it's a distinct feature. It's most likely a plugin, but I want to raise it here because it's probably something other people have thought about.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1605/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1160034488, "node_id": "I_kwDOCGYnMM5FJLi4", "number": 411, "title": "Support for generated columns", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-03-04T20:41:33Z", "updated_at": "2022-03-11T22:32:43Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "This is a fairly new feature -- SQLite version 3.31.0 (2020-01-22) -- that I, admittedly, haven't gotten to work yet. 
But it looks _incredibly_ useful: https://dgl.cx/2020/06/sqlite-json-support\r\n\r\nI'm not sure if this is an option on `add-column` or a separate command like `add-generated-column`. Either way, it needs an argument to populate it. It could be something like this:\r\n\r\n```sh\r\nsqlite-utils add-column data.db table-name generated --as 'json_extract(data, \"$.field\")' --virtual\r\n```\r\n\r\nMore here: https://www.sqlite.org/gencol.html", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/411/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1163369515, "node_id": "I_kwDOBm6k_c5FV5wr", "number": 1655, "title": "query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-03-09T00:56:40Z", "updated_at": "2023-10-17T21:53:17Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "[this page](https://labordata.bunkum.us/opdr-8335ea3?sql=with+most_recent_lu+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from%0D%0A++++%28%0D%0A++++++select%0D%0A++++++++*%0D%0A++++++from%0D%0A++++++++lm_data%0D%0A++++++order+by%0D%0A++++++++f_num%2C%0D%0A++++++++receive_date+desc%0D%0A++++%29+t%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++aff_abbr+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+abbr_local_name%2C%0D%0A++coalesce%28%0D%0A++++regexp_match%28%27%28.*%3F%29%28%2C%3F+AFL-CIO%24%29%27%2C+union_name%29%2C%0D%0A++++regexp_match%28%27%28.*%3F%29%28+IND%24%29%27%2C+union_name%29%2C%0D%0A++++union_name%0D%0A++%29+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+full_local_name%2C%0D%0A++*%0D%0Afrom%0D%0A++most_recent_lu%0D%0Awhere+%28desig_num+IS+NOT+NULL+OR+unit_name+IS+NOT+NULL%29+AND+desig_name+%21%3D+%27HQ%27%0D%0Alimit%0D%0A++5000+offset+0)\r\n\r\nis using about 400 mb in firefox 97 on mac os x. 
if you download the html for the page, it's about 11mb and if you get the csv for the data its about 1mb.\r\n\r\nit's using over a 1G on chrome 99.\r\n\r\ni found this because, i was trying to figure out why editing the SQL was getting very slow.\r\n\r\n\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1655/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1182227211, "node_id": "I_kwDOBm6k_c5Gd1sL", "number": 1692, "title": "[plugins][feature request]: Support additional script tag attributes when loading custom JS", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-03-27T01:16:03Z", "updated_at": "2022-03-30T06:14:51Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "## Motivation\r\n\r\n- The build system for my new [plugin](https://github.com/hydrosquall/datasette-nteract-data-explorer) has two output JS files, one for browsers that support ES modules, one for browsers that don't. At present, I'm only passing one of them into Datasette.\r\n- I'd like to specify the non-es-module script as a fallback for older browsers. I don't want to load it by default, because browsers will only need one, and it's heavy, so for now I'm only supporting modern browsers. \r\n\r\nTo be able to support legacy browsers without slowing down users with modern browsers, I would like to be able to set additional HTML attributes on the tag fallback script, `nomodule` and `defer`. My injected scripts should look something like this:\r\n\r\n```html\r\n\r\n\r\n```\r\n\r\n## Proposal\r\n\r\nTo achieve this, I propose additional optional properties to the API accepted by the `extra_js_urls` hook and custom JS field the `metadata.json` [described here](https://docs.datasette.io/en/stable/custom_templates.html#custom-css-and-javascript). 
\r\n\r\nUnder this API, I'd write something like this to get the above HTML rendered in Datasette.\r\n\r\n```json\r\n{\r\n \"extra_js_urls\": [\r\n {\r\n \"url\": \"/index.my-es-module-bundle.js\",\r\n \"module\": true,\r\n },\r\n {\r\n \"url\": \"/index.my-legacy-fallback-bundle.js\",\r\n \"nomodule\": \"\",\r\n \"defer\": true\r\n }\r\n ]\r\n}\r\n```\r\n\r\n## Resources\r\n\r\n- [MDN on the script tag](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script)\r\n - There may be other properties that could be added that are potentially valuable, like `async` or `referrerpolicy`, but I don't have an immediate need for those.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1692/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1193090967, "node_id": "I_kwDOBm6k_c5HHR-X", "number": 1699, "title": "Proposal: datasette query", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2022-04-05T12:36:43Z", "updated_at": "2022-04-11T01:32:12Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I started sketching out a plugin to add a `datasette query` subcommand to export data from the command line. This is based on discussions in #1356 and #1605. Before I get too far down this rabbit hole, I figure it's worth getting some feedback here (unless this should happen in `Discussions`). Here's what I'm thinking:\r\n\r\nAt its most basic, it will write the results of a query to STDOUT.\r\n\r\n```sh\r\ndatasette query -d data.db 'select * from data' > results.json\r\n```\r\n\r\nThis isn't much improvement over using [sqlite-utils](https://github.com/simonw/sqlite-utils). To make better use of datasette and its ecosystem, run `datasette query` using a canned query defined in a `metadata.yml` file.\r\n\r\nFor example, using the metadata file from [alltheplaces-datasette](https://github.com/eyeseast/alltheplaces-datasette/blob/main/metadata.yml):\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query -d alltheplaces.db -m metadata.yml count_by_spider\r\n```\r\n\r\nThat query would be good to get as CSV, and we can auto-discover metadata and databases in the current directory:\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query count_by_spider -f csv\r\n```\r\n\r\nIn this case, `count_by_spider` is a canned query defined on the `alltheplaces` database. If the same query is defined on multiple databases or its otherwise unclear which database `query` should use, pass the `-d` or `--database` option.\r\n\r\nIf a query takes parameters, I can pass them in at runtime, using the `--param` or `-p` option:\r\n\r\n```sh\r\ndatasette query -d data.db -p value something 'select * from neighborhoods where some_column = :value'\r\n```\r\n\r\nI'm very interested in feedback on this, including whether it should be a plugin or in Datasette core. 
(I don't have a strong opinion about this, but I'm prototyping it as a plugin to start.)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1699/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1198822563, "node_id": "I_kwDOBm6k_c5HdJSj", "number": 1706, "title": "[feature] immutable mode for a directory, not just individual sqlite file", "user": {"value": 9020979, "label": "hydrosquall"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-04-10T00:50:57Z", "updated_at": "2022-12-09T19:11:40Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "## Motivation\r\n\r\n- I have a directory of sqlite databases\r\n- I'd like to use immutable mode when opening them for better performance [docs](https://docs.datasette.io/en/0.54/performance.html#immutable-mode)\r\n- Currently using this flag throws the following error\r\n\r\n IsADirectoryError: [Errno 21] Is a directory: '/name-of-directory'\r\n\r\n## Proposal\r\n\r\nImmutable flag works for both single files and directories\r\n\r\n datasette -i /folder-of-sqlite-files", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1706/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1200224939, "node_id": "I_kwDOBm6k_c5Hifqr", "number": 1707, "title": "[feature] expanded detail page", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-04-11T16:29:17Z", "updated_at": "2022-04-11T16:33:00Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Right now, if click on the detail page for a row you get the info for the row and links to related tables:\r\n![Screenshot 2022-04-11 at 12-27-26 lm20 filing](https://user-images.githubusercontent.com/536941/162786802-90ac1a71-4624-47c4-ae55-b783f4f6c92d.png)\r\n\r\nIt would be very cool if there was an option to expand the rows of the related tables from within this detail view.\r\n\r\nIf you had that then datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidate detail view about what you know about that entity.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1707/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1310243385, "node_id": "I_kwDOCGYnMM5OGLo5", "number": 456, "title": "feature request: pivot command", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-07-20T00:58:08Z", "updated_at": "2022-07-20T17:50:50Z", "closed_at": null, "author_association": 
"CONTRIBUTOR", "pull_request": null, "body": "pivoting long-format table to wide-format tables is pretty common and kind of pain. would love to see this feature in sqlite-utils!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/456/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1324659241, "node_id": "I_kwDOCGYnMM5O9LIp", "number": 459, "title": "Single quoted transform recipes on Windows do not work as expected ", "user": {"value": 19921, "label": "shakeel"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-08-01T16:14:54Z", "updated_at": "2022-08-01T16:14:54Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Trying to follow the tutorial for sqlite-utils and datasette https://datasette.io/tutorials/clean-data on Windows 11 OS `Microsoft Windows [Version 10.0.22622.440]`, with sqlite-utils and datasette installed using pipx.\r\n\r\n```\r\npipx list\r\npackage datasette 0.61.1, installed using Python 3.10.4\r\n - datasette.exe\r\npackage sqlite-utils 3.28, installed using Python 3.10.4\r\n - sqlite-utils.exe\r\n``` \r\n\r\nIn the step to transform dates into ISO dates the quoted value `'r.parsedatetime(value)'` is copied verbatim into the columns instead of applying the output of the Python recipe.\r\n\r\n```\r\nsqlite-utils convert manatees.db locations \\\r\n REPDATE created_date last_edited_date \\\r\n 'r.parsedatetime(value)' --dry-run\r\n\r\n1975/01/31 00:00:00+00\r\n --- becomes:\r\nr.parsedatetime(value)\r\n\r\nWould affect 13568 rows\r\n```\r\n\r\nHowever, if I change the code from single quotes to double quotes, it works as expected.\r\n\r\n```\r\nsqlite-utils convert manatees.db locations \\\r\n REPDATE created_date last_edited_date \\\r\n \"r.parsedatetime(value)\" --dry-run\r\n\r\n1975/01/31 00:00:00+00\r\n --- becomes:\r\n1975-01-31T00:00:00+00:00\r\n\r\nWould affect 13568 rows\r\n```\r\n\r\nSpecifying the transform code recipe should work with single quotes on Windows.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/459/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1355193529, "node_id": "I_kwDOCGYnMM5Qxpy5", "number": 479, "title": "OperationalError: cannot VACUUM from within a transaction", "user": {"value": 7908073, "label": "chapmanjacobd"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-08-30T05:34:24Z", "updated_at": "2022-08-30T05:34:24Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Maybe when calling `.vacuum()` and other DB-level write-lock operations `sqlite_utils` could guard against this error message by automatically committing first?\r\n\r\n```\r\n 46 db[\"media\"].optimize() # type: ignore\r\n---> 47 db.vacuum()\r\n\r\nFile ~/.local/lib/python3.10/site-packages/sqlite_utils/db.py:1047, in Database.vacuum(self)\r\n 1045 def vacuum(self):\r\n 1046 \"Run a SQLite 
``VACUUM`` against the database.\"\r\n-> 1047 self.execute(\"VACUUM;\")\r\n\r\nFile ~/.local/lib/python3.10/site-packages/sqlite_utils/db.py:470, in Database.execute(self, sql, parameters)\r\n 468 return self.conn.execute(sql, parameters)\r\n 469 else:\r\n--> 470 return self.conn.execute(sql)\r\n\r\nOperationalError: cannot VACUUM from within a transaction\r\n```\r\n\r\nIt might also be nice to add a sentence or two about how transactions are committed on the [docs page](https://sqlite-utils.datasette.io/en/latest/python-api.html#detect-fts). When I was swapping out my sqlite3 code for this library it was nice that everything was pretty much drop-in but I was/am unsure what to do about the places I explicitly call `.commit()` in my code\r\n\r\nRelated to https://github.com/simonw/sqlite-utils/issues/121", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/479/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1393202060, "node_id": "I_kwDOCGYnMM5TCpOM", "number": 496, "title": "devrel/python api: Pylance type hinting", "user": {"value": 7908073, "label": "chapmanjacobd"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-10-01T03:03:34Z", "updated_at": "2023-05-03T05:53:27Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Pylance is generally pretty good at figuring out stuff but `sqlite-utils` has some quirks which make type hinting kinda useless. Maybe you don't care but I thought I would bring it to your attention.\r\n\r\nFor example:\r\n\r\n```\r\ndb[\"subs\"].insert_all(subs, pk=\"index\")\r\n```\r\n\r\n```\r\nCannot access member \"insert_all\" for type \"View\"\r\n Member \"insert_all\" is unknown\r\n```\r\n\r\n`insert_all` and all the other methods show up as a type issues because the program can't know whether something is a View or a Table. Fair enough. But that basically throws all type checking out the window.\r\n\r\n`pk=\"index\"` also shows up as a type issue:\r\n\r\n```\r\nArgument of type \"Literal['index']\" cannot be assigned to parameter \"pk\" of type \"Default\" in function \"insert_all\"\r\n \"Literal['index']\" is incompatible with \"Default\"\r\n```\r\n\r\nI think this is because DEFAULT is an empty class? 
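\r\n\r\nFor illustration only (a simplified sketch, not the actual sqlite-utils source), a sentinel pattern like this reproduces the warning, since the checker ends up treating the parameter's type as `Default`:\r\n\r\n```python\r\nclass Default:\r\n    \"\"\"Sentinel meaning 'no value was passed' (distinct from None).\"\"\"\r\n\r\n\r\nDEFAULT = Default()\r\n\r\n\r\ndef insert_all(records, pk=DEFAULT):\r\n    # With no annotation like `pk: str | Default = DEFAULT`, Pyright/Pylance\r\n    # infers `pk: Default` from the default value, so `pk=\"index\"` is reported\r\n    # as incompatible even though it works fine at runtime.\r\n    return list(records), pk\r\n\r\n\r\ninsert_all([{\"index\": 1}], pk=\"index\")  # flagged by the type checker, runs fine\r\n```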
\r\n\r\nmaybe a few small changes could be made to make the library more type-friendly\r\n\r\nThe interim solution is of course to turn off type hints completely for the line\r\n```\r\ndb[\"subs\"].insert_all(subs, pk=\"index\") # type: ignore\r\n```\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/496/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1400374908, "node_id": "I_kwDOBm6k_c5TeAZ8", "number": 1836, "title": "docker image is duplicating db files somehow", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 13, "created_at": "2022-10-06T22:35:54Z", "updated_at": "2022-10-08T16:56:51Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "if you look into the docker image created by docker publish, the `datasette inspect` line is duplicating the db files.\r\n\r\nhere's the result of the inspect command:\r\n\r\n\"Screen\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1836/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1405196044, "node_id": "PR_kwDOCGYnMM5AmYzy", "number": 499, "title": "feat: recreate fts triggers after table transform", "user": {"value": 7908073, "label": "chapmanjacobd"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-10-11T20:35:39Z", "updated_at": "2022-10-26T17:54:51Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/499", "body": "https://github.com/simonw/sqlite-utils/pull/498\r\n\r\n\r\n----\r\n:books: Documentation preview :books:: https://sqlite-utils--499.org.readthedocs.build/en/499/\r\n\r\n\r\n\r\nalternatively, `self.disable_fts()`", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/499/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1410548368, "node_id": "I_kwDODFdgUs5UE0KQ", "number": 77, "title": "Feature: Support GitHub discussions", "user": {"value": 631242, "label": "frosencrantz"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-10-16T16:53:38Z", "updated_at": "2022-10-16T16:53:38Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Hi @simonw I've been a happy user of this tool. Thank you for writing it and sharing it.\r\n\r\nI wanted to suggest a feature request to support Discussions. 
For example the VisiData project has discussions https://github.com/saulpw/visidata/discussions , and it would be useful if there was a way to pull that data into the database.\r\n\r\nHowever, I'm not offering a pull request.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/77/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1426379903, "node_id": "PR_kwDOBm6k_c5BtJNn", "number": 1870, "title": "don't use immutable=1, only mode=ro", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-10-27T23:33:04Z", "updated_at": "2023-10-03T19:12:37Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1870", "body": "Opening db files in immutable mode sometimes leads to the file being mutated, which causes duplication in the docker image layers: see #1836, #1480\r\n\r\nThat this happens in \"immutable\" mode is surprising, because the sqlite docs say that setting this should open the database as read only. \r\n\r\nhttps://www.sqlite.org/c3ref/open.html\r\n\r\n> immutable: The immutable parameter is a boolean query parameter that indicates that the database file is stored on read-only media. When immutable is set, SQLite assumes that the database file cannot be changed, even by a process with higher privilege, and so the database is opened read-only and all locking and change detection is disabled. Caution: Setting the immutable property on a database file that does in fact change can result in incorrect query results and/or [SQLITE_CORRUPT](https://www.sqlite.org/rescode.html#corrupt) errors. 
See also: [SQLITE_IOCAP_IMMUTABLE](https://www.sqlite.org/c3ref/c_iocap_atomic.html).\r\n\r\nPerhaps this is a bug in sqlite?\r\n\r\n\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--1870.org.readthedocs.build/en/1870/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1870/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1439009231, "node_id": "I_kwDOBm6k_c5VxYnP", "number": 1884, "title": "Exclude virtual tables from datasette inspect", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2022-11-07T21:26:01Z", "updated_at": "2022-11-21T04:40:56Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Ran `inspect` on a spatialite database and got these warnings:\r\n\r\n```\r\nERROR: conn=, sql = 'select count(*) from [SpatialIndex]', params = None: no such module: VirtualSpatialIndex\r\nERROR: conn=, sql = 'select count(*) from [ElementaryGeometries]', params = None: no such module: VirtualElementary\r\nERROR: conn=, sql = 'select count(*) from [KNN]', params = None: no such module: VirtualKNN\r\n```\r\n\r\nIt still worked, but probably want to catch this.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1884/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1469796454, "node_id": "I_kwDOBm6k_c5Xm1Bm", "number": 1920, "title": "Document Datasette.metadata() method", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-30T15:10:36Z", "updated_at": "2022-11-30T15:10:36Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Code is here: https://github.com/simonw/datasette/blob/main/datasette/app.py#L503\r\n\r\nThis will be the official way to access metadata from plugins.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1920/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1469821027, "node_id": "I_kwDOBm6k_c5Xm7Bj", "number": 1921, "title": "Document methods to get canned queries", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-30T15:26:33Z", "updated_at": "2022-11-30T23:34:21Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Two methods will get canned queries for a Datasette instance:\r\n\r\n[`Datasette.get_canned_queries`](https://github.com/simonw/datasette/blob/main/datasette/app.py#L575) will return all canned queries for a database that an `actor` can 
see.\r\n\r\n[`Datasette.get_canned_query`](https://github.com/simonw/datasette/blob/main/datasette/app.py#L592) will return a single canned query by name.\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1921/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1509783085, "node_id": "I_kwDOBm6k_c5Z_XYt", "number": 1969, "title": "sql-formatter javascript is not now working with CloudFlare rocketloader", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-23T21:14:06Z", "updated_at": "2023-01-10T01:56:33Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "This is probably not a bug with datasette, but I thought you might want to know, @simonw.\r\n\r\nI noticed today that my CloudFlare proxied datasette instance lost the \"Format SQL\" option. I'm pretty sure it was there last week.\r\n\r\nIn the CloudFlare settings, if I turn off [Rocket Loader](https://developers.cloudflare.com/fundamentals/speed/rocket-loader/), I get the \"Format SQL\" option back.\r\n\r\nRocket Loader works by asynchronously loading the javascript, so maybe there was a recent change that doesn't play well with the asynch loading?\r\n\r\nI'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1969/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1515815014, "node_id": "I_kwDOBm6k_c5aWYBm", "number": 1973, "title": "render_cell plugin hook's row object is not a sqlite.Row", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2023-01-01T20:27:46Z", "updated_at": "2023-01-29T00:40:31Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "From https://docs.datasette.io/en/stable/plugin_hooks.html#render-cell-row-value-column-table-database-datasette:\r\n\r\n> row - sqlite.Row\r\n> The SQLite row object that the value being rendered is part of\r\n\r\nThis appears to actually be a [CustomRow](https://github.com/simonw/datasette/blob/f0fadc28ddb9f82e5cc1ecaa51e8a342eb6dc528/datasette/utils/__init__.py#L773-L789), but I think that's unrelated to my issue.\r\n\r\nI have a table:\r\n\r\n```sql\r\nCREATE TABLE IF NOT EXISTS \"dss_job_stats\"(\r\n job_id integer not null references dss_job(id) on delete cascade,\r\n host text not null,\r\n // other columns elided as irrelevant\r\n primary key (job_id, host)\r\n);\r\n```\r\n\r\nOn datasette 0.63.2, the `render_cell` hook receives a `row` value that looks like:\r\n\r\n```\r\nCustomRow([('job_id', {'value': 2, 'label': '2'}), ('host', 'cldellow.com')])\r\n```\r\n\r\nI expected the `job_id` value to be `2`, but it's actually `{'value': 2, 'label': '2'}`.\r\n\r\nI can work around this, but was wondering if this was intended 
behaviour?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1973/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1538342965, "node_id": "PR_kwDOBm6k_c5HpNYo", "number": 1996, "title": "Document custom json encoder", "user": {"value": 25778, "label": "eyeseast"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-01-18T16:54:14Z", "updated_at": "2023-01-19T12:55:57Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1996", "body": "Closes #1983 \r\n\r\nAll documentation here. Edits welcome.\r\n\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--1996.org.readthedocs.build/en/1996/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1996/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1552368054, "node_id": "I_kwDOBm6k_c5ch0G2", "number": 2000, "title": "rewrite_sql hook", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-01-23T01:02:52Z", "updated_at": "2023-01-23T06:08:01Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm not sold that this is a good idea, but thought it'd be worth writing up a ticket. Proposal: add a hook like\r\n\r\n```python\r\ndef rewrite_sql(datasette, database, request, fn, sql, params)\r\n```\r\n\r\nIt would be called from Database.execute, Database.execute_write, Database.execute_write_script, Database.execute_write_many before running the user's SQL. 
`fn` would indicate which method was being used, in case that's relevant for the SQL inspection -- for example `execute` only permits a single statement.\r\n\r\nThe hook could return a SQL statement to be executed instead, or an async function to be awaited on that returned the SQL to be executed.\r\n\r\nPlugins that could be written with this hook:\r\n\r\n- https://github.com/cldellow/datasette-ersatz-table-valued-functions would use this to avoid monkey-patching\r\n- a plugin to inspect and reject unsafe Spatialite function calls (reported by [Simon in Discord](https://discord.com/channels/823971286308356157/823971286941302908/1066438832293159004))\r\n- a plugin to do more general rewrites of queries to enforce table or row-level security, for example, based on the currently logged in actor's ID\r\n- a plugin to maintain audit tables when users write to a table\r\n- a plugin to cache expensive queries (eg the queries that drive facets) - these could allow stale reads if previously cached, then refresh them in an offline queue\r\n\r\nFlaws with this idea:\r\n\r\n`execute_fn` and `execute_write_fn` would not go through this hook, which limits the guarantees you can make about it for security purposes.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2000/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1555701851, "node_id": "PR_kwDOBm6k_c5IdsD7", "number": 2003, "title": "Show referring tables and rows when the referring foreign key is compound", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-01-24T21:31:31Z", "updated_at": "2023-01-25T18:44:42Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2003", "body": "sqlite foreign keys can be compound, but that is not as well supported by datasette as single column foreign keys.\r\n\r\nin particular, \r\n\r\n1. in a table view, there is not a link from the row to the referenced row if the foreign key is compound\r\n2. in a row view, there is no listing of tables and rows that refer to the focal row if those referencing foreign keys are compound.\r\n\r\nBoth of these issues are discussed in #1099. \r\n\r\nThis PR only fixes the second one, because it's not clear what the right UX is for the first issue.\r\n\r\n![Screenshot 2023-01-24 at 19-47-40 nlrb bargaining_unit](https://user-images.githubusercontent.com/536941/214454749-d53deead-4151-4329-a5d4-8a7a454de7d3.png)\r\n\r\nSome things that might not be desirable about this approach.\r\n\r\n1. it changes the external API, by changing `column` => `columns` and `other_column` => `other_columns` (see inline comment for more discussion.\r\n2. There are various places where the plural foreign keys have to be checked for length and discarded or transformed to singular. 
\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2003/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1556065335, "node_id": "PR_kwDOBm6k_c5Ie5nA", "number": 2004, "title": "use single quotes for string literals, fixes #2001", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-01-25T05:08:46Z", "updated_at": "2023-02-01T06:37:18Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2004", "body": "This modernizes some uses of double quotes for string literals to use only single quotes, fixes simonw/datasette#2001\r\n\r\nWhile developing it, I manually enabled the stricter mode by using the code snippet at https://gist.github.com/cldellow/85bba507c314b127f85563869cd94820\r\n\r\nI think that code snippet isn't generally safe/portable, so I haven't tried to automate it in the tests.\r\n\r\n\r\n----\r\n:books: Documentation preview :books:: https://datasette--2004.org.readthedocs.build/en/2004/\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2004/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1560651350, "node_id": "I_kwDOCGYnMM5dBaZW", "number": 523, "title": "Feature request: trim all leading and trailing white space for all columns for all tables in a database", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-01-28T02:40:10Z", "updated_at": "2023-01-28T02:41:14Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "It's pretty common that i need to trim leading or trailing white space from lots of columns in a database a part of an initial ETL.\r\n\r\nI use the following recipe a lot, and it would be great to include this functionality into sqlite-utils\r\n\r\n`trimify.sql`\r\n```sql\r\nselect 'select group_concat(''update [' || name || '] set ['' || name || ''] = trim(['' || name || ''])'', '';\r\n'') || '';\r\n'' as sql_to_run from pragma_table_info('''||name||''');' from sqlite_schema;\r\n```\r\n\r\nthen something like:\r\n\r\n```bash\r\n\tsqlite3 example.db < scripts/trimify.sql > table_trim.sql && \\\r\n sqlite3 $example.db < table_trim.sql > trim.sql && \\\r\n sqlite3 $example.db < trim.sql\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/523/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1560982210, "node_id": "PR_kwDOBm6k_c5IvYKw", "number": 2008, "title": "array facet: don't materialize unnecessary columns", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, 
"assignee": null, "milestone": null, "comments": 8, "created_at": "2023-01-28T19:33:40Z", "updated_at": "2023-01-29T18:17:40Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2008", "body": "The presence of `inner.*` causes SQLite to materialize a row with all the columns. Those columns will be discarded later.\r\n\r\nInstead, we can select only the column we'll use. This lets SQLite's optimizer realize that the other columns in the CTE definition aren't needed.\r\n\r\nOn a test table with 278K rows, 98K of which had an array, this speeds up the facet calculation from 4 sec to 1 sec.\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2008.org.readthedocs.build/en/2008/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2008/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1565179870, "node_id": "I_kwDOBm6k_c5dSr_e", "number": 2013, "title": "Datasette uses non-standard quoting for identifiers", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-01T00:05:39Z", "updated_at": "2023-02-01T00:06:30Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Related to #2001, but where #2001 was about literals, this is about identifiers\r\n\r\nFrom https://www.sqlite.org/lang_keywords.html:\r\n\r\n> \"keyword\" A keyword in double-quotes is an identifier.\r\n> [keyword] A keyword enclosed in square brackets is an identifier. This is not standard SQL. This quoting mechanism is used by MS Access and SQL Server and is included in SQLite for compatibility.\r\n\r\nDatasette uses this quoting here -- https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/utils/__init__.py#L345-L349, in some of the other DB access code, and in some of the test fixtures.\r\n\r\nMigrating to standard double quote identifiers would make it easier to get Datasette working with alternative backends", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2013/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1571711808, "node_id": "I_kwDOBm6k_c5drmtA", "number": 2018, "title": "`check_visibility` gives confusing (wrong?) 
results if permission is `None`", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-06T01:03:08Z", "updated_at": "2023-02-06T01:03:46Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm trying to gate access to an edit UI on the user having `update-row` on the underlying view or table.\r\n\r\nI expected [datasette.check_visibility](https://docs.datasette.io/en/latest/internals.html#await-check-visibility-actor-action-none-resource-none-permissions-none) to be a good way to do this:\r\n\r\n```python\r\n visible, private = await datasette.check_visibility(\r\n request.actor,\r\n permissions=[\r\n (\"update-row\", (database, table)),\r\n ],\r\n )\r\n\r\n if not visible:\r\n return None\r\n```\r\n\r\nBut `visible` is returning true, even when there is no explicit `update-row` permission. (In this case, `request.actor` is `None`.)\r\n\r\nBased on [the update-row permissions docs](https://docs.datasette.io/en/latest/authentication.html#update-row), I expected this to be default deny, and so no explicit permission would result in false.\r\n\r\nI think the root cause is that `check_visibility` calls `ensure_permissions` and expects it to throw if the permission is not available.\r\n\r\nBut `ensure_permissions` does not throw when `permission_allowed` returns None: https://github.com/simonw/datasette/blob/1.0a2/datasette/app.py#L825-L829", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2018/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1595340692, "node_id": "I_kwDOCGYnMM5fFveU", "number": 530, "title": "add ability to configure \"on delete\" and \"on update\" attributes of foreign keys:", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-02-22T15:44:14Z", "updated_at": "2023-05-08T20:39:01Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "sqlite supports these, and it would be quite nice to be able to add them with sqlite-utils.\r\n\r\nhttps://www.sqlite.org/foreignkeys.html#fk_actions", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/530/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1605959201, "node_id": "I_kwDOBm6k_c5fuP4h", "number": 2032, "title": "datasette errors when foreign key integrity is enabled", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-02T01:27:51Z", "updated_at": "2023-03-02T01:31:58Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "By default, [SQLite does not enforce foreign key constraints](https://www.sqlite.org/foreignkeys.html#fk_enable). 
I typically enable these checks by running:\r\n\r\n```sql\r\nPRAGMA foreign_keys = ON;\r\n```\r\n\r\ninside of a `prepare_connection` hook.\r\n\r\nIf a plugin causes the schema to change (eg datasette-scraper creating a new table, or datasette-edit-schema changing a column), then https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/utils/internal_db.py#L71-L77 will fail with:\r\n\r\n```\r\nFOREIGN KEY constraint failed\r\n```\r\n\r\nThis could be resolved by either:\r\n- deleting from the `tables` column last\r\n- changing the schema so that the foreign keys have [ON DELETE CASCADE](https://www.sqlite.org/foreignkeys.html#fk_actions)\r\n\r\nLet me know if you'd be open to a PR that addresses this -- since foreign key constraints aren't enabled by default, I guess it's questionable whether this is a bug. I think I can workaround this by inspecting the database parameter in `prepare_connection` and trying not to enable fkey checks on the `_internal` database.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2032/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1620515757, "node_id": "I_kwDOBm6k_c5glxut", "number": 2039, "title": "Subtle bug with `--load-extension` and `--static` flags with absolute Windows paths with`C:\\`", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-12T21:18:52Z", "updated_at": "2023-03-12T21:18:52Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "From the Datasette discord: A user tried running the following command on windows:\r\n\r\n```\r\ndatasette --load-extension=\"C:\\spatialite\\mod_spatialite-5.0.1-win-x86\\mod_spatialite.dll\"\r\n```\r\nThis failed with `\"The specified module could not be found\"`, because the entrypoint option introduced in #1789 splits the input differently. Instead of loading the extension found at `\"C:\\spatialite\\mod_spatialite-5.0.1-win-x86\\mod_spatialite.dll\"`, it instead tried to load the extension at `\"C\"` with entrypoint `\"\\spatialite\\mod_spatialite-5.0.1-win-x86\\mod_spatialite.dll\". \r\n\r\nThis is hard because most absolute windows paths have a colon in them, like `C:\\foo.txt` or `D:\\bar.txt`. I'd image the `--static` flag is also vulnerable to this type of bug.\r\n\r\n\r\nThe \"solution\" is to use a relative path instead, but that doesn't feel that great. 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2039/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1740026046, "node_id": "I_kwDOCGYnMM5ntrC-", "number": 556, "title": "Support storing incrementally piped values", "user": {"value": 601708, "label": "mcint"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-06-04T00:45:23Z", "updated_at": "2023-06-04T01:21:15Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm trying to use sqlite-utils to data generated incrementally. There are a few\naspects of this that I don't currently know how to handle. I would like an option\nto apply writes incrementally, line-by-line as they are received. I would like an\noption to echo incremental progress. And, it would be nice to have \n\nIn particular, I'm using CoreLocationCLI -w -j to generate, newline-delimited JSON.\n\nOne variant of the command \n\n`stdbuf -oL CoreLocationCLI -w -j | pee 'sqlite-utils insert loc.db loc -' nl`\n\n`pee`, from `moreutils`, is like `tee` but spawns and pipes to the processes\ncreated by invoking each of its arguments, so, for gratuitous demonstration,\n`pee 'sponge out.log' cat` would behave like `tee`.\n\nIt looks like I can get what I want with:\n`stdbuf -oL CoreLocationCLI -w -j | while read line; do <<<\"$line\" sqlite-utils insert loc.db loc -; echo \"$line\"; done | nl`\n\n\n\n\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/556/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1781530343, "node_id": "I_kwDOBm6k_c5qL_7n", "number": 2093, "title": "Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2023-06-29T21:18:23Z", "updated_at": "2023-09-11T20:19:32Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Very often I get tripped up when trying to configure my Datasette instances. For example: if I want to change the port my app listen too, do I do that with a CLI flag, a `--setting` flag, inside `metadata.json`, or an env var? If I want to up the time limit of SQL statements, is that under `metadata.json` or a setting? 
Where does my plugin configuration go?\r\n\r\nNormally I need to look it up in Datasette docs, and I quickly find my answer, but the number of places where \"config\" goes it overwhelming.\r\n\r\n- Flat CLI flags like `--port`, `--host`, `--cors`, etc.\r\n- `--setting`, like `default_page_size`, `sql_time_limit_ms` etc\r\n- Inside `metadata.json`, including plugin configuration\r\n\r\nTypically my Datasette deploys are extremely long shell commands, with multiple `--setting` and other CLI flags.\r\n\r\n## Proposal: Consolidate all \"config\" into `datasette.toml`\r\n\r\nI propose that we add a new `datasette.toml` that combines \"settings\", \"metadata\", and other common CLI flags like `--port` and `--cors` into a single file. It would be similar to \"Cargo.toml\" in Rust projects, \"package.json\" in Node projects, and \"pyproject.toml\" in Python, etc.\r\n\r\nA sample of what it could look like:\r\n\r\n```toml\r\n# \"top level\" configuration that are currently CLI flags on `datasette serve`\r\n[config]\r\nport = 8020\r\nhost = \"0.0.0.0\"\r\ncors = true\r\n\r\n# replaces multiple `--setting` flags\r\n[settings]\r\nbase_url = \"/app/datasette/\"\r\ndefault_allow_sql = true\r\nsql_time_limit_ms = 3500\r\n\r\n# replaces `metadata.json`.\r\n# The contents of datasette-metadata.json could be defined in this file instead, but supporting separate files is nice (since those are easy to machine-generate)\r\n[metadata]\r\ninclude=\"./datasette-metadata.json\"\r\n\r\n# plugin-specific \r\n[plugins]\r\n[plugins.datasette-auth-github]\r\nclient_id = {env = \"DATASETTE_AUTH_GITHUB_CLIENT_ID\"}\r\nclient_secret = {env = \"GITHUB_CLIENT_SECRET\"}\r\n\r\n[plugins.datasette-cluster-map]\r\n\r\nlatitude_column = \"lat\"\r\nlongitude_column = \"lon\"\r\n```\r\n\r\n## Pros\r\n- Instead of multiple files and CLI flags, everything could be in one tidy file\r\n- Editing config in a separate file is easier than editing CLI flags, since you don't have to kill a process + edit a command every time\r\n- New users will know \"just edit my `datasette.toml` instead of needing to learn metadata + settings + CLI flags\r\n- Better dev experience for multiple environment. For example, could have `datasette -c datasette-dev.toml` for local dev environments (enables SQL, debug plugins, long timeouts, etc.), and a `datasette -c datasette-prod.toml` for \"production\" (lower timeouts, less plugins, monitoring plugins, etc.)\r\n\r\n## Cons\r\n- Yet another config-management system. Now Datasette users will need to know about metadata, settings, CLI flags, _and_ `datasette.toml`. However with enough documentation + announcements + examples, I think we can get ahead of it.\r\n- If toml is chosen, would need to add a toml parser for Python version <3.11\r\n- Multiple sources of config require priority. For example: Would `--setting default_allow_sql off` override the value inside `[settings]`? What about `--port`? \r\n\r\n## Other Notes\r\n\r\n### Toml\r\n\r\nI chose toml over json because toml supports comments. I chose toml over yaml because Python 3.11 has builtin support for it. I also find toml easier to work with since it doesn't have the odd \"gotchas\" that YAML has (\"ex `3.10` resolving to `3.1`, Norway `NO` resolving to `false`, etc.). It also mimics `pyproject.toml` which is nice. Happy to change my mind about this however\r\n\r\n\r\n### Plugin config will be difficult\r\n\r\nPlugin config is currently in `metadata.json` in two places:\r\n\r\n1. Top level, under `\"plugins.[plugin-name]\"`. 
This fits well into `datasette.toml` as `[plugins.plugin-name]`\r\n2. Table level, under `\"databases.[db-name].tables.[table-name].plugins.[plugin-name]`. This doesn't fit that well into `datasette.toml`, unless it's nested under `[metadata]`?\r\n\r\n### Extensions, static, one-off plugins?\r\n\r\nWe could also include equivalents of `--plugins-dir`, `--static`, and `--load-extension` into `datasette.toml`, but I'd imagine there's a few security concerns there to think through. \r\n\r\n\r\n### Explicitly list with plugins to use?\r\n\r\nI believe Datasette by default will load all install plugins on startup, but maybe `datasette.toml` can specify a list of plugins to use? For example, a dev version of `datasette.toml` can specify `datasette-pretty-traces`, but the prod version can leave it out", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2093/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1783304750, "node_id": "I_kwDOBm6k_c5qSxIu", "number": 2094, "title": "JS Plugin Hooks for the Code Editor", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-01T00:51:57Z", "updated_at": "2023-07-01T00:51:57Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When #2052 merges, I'd like to add support to add extensions/functions to the Datasette code editor. \r\n\r\nI'd eventually like to build a JS plugin for [`sqlite-docs`](https://github.com/asg017/sqlite-docs), to add things like:\r\n\r\n- Inline documentation for tables/columns on hover\r\n- Inline docs for custom functions that are loaded in\r\n- More detailed autocomplete for tables/columns/functions\r\n\r\nI did some hacking to see what this would look like, see here:\r\n\r\n\"image\"\r\n\"image\"\r\n\r\nThere can be a new hook that allows JS plugins to add new \"extension\" in the CodeMirror editorview here:\r\n\r\nhttps://github.com/simonw/datasette/blob/8cd60fd1d899952f1153460469b3175465f33f80/datasette/static/cm-editor-6.0.1.js#L25\r\n\r\nWill need some more planning. For example, the Codemirror bundle in Datasette has functions that we could re-export for plugins to use (so we don't load 2 version of `\"@codemirror/autocomplete\"`, for example. ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2094/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1821108702, "node_id": "I_kwDOCGYnMM5si-ne", "number": 579, "title": "Special handling for SQLite column of type `JSON`", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-25T20:37:23Z", "updated_at": "2023-07-25T20:37:23Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "`sqlite-utils` should detect and have specially handling for column with a `JSON` column. 
For example:\r\n\r\n```sql\r\nCREATE TABLE \"dogs\" (\r\n id INTEGER PRIMARY KEY,\r\n name TEXT,\r\n friends JSON \r\n);\r\n```\r\n\r\n## Automatic Nesting\r\n\r\nAccording to [\"Nested JSON Values\"](https://sqlite-utils.datasette.io/en/stable/cli.html#nested-json-values), sqlite-utils will only expand JSON if the `--json-cols` flag is passed. It looks like it'll try to `json.load` all text column to test if its JSON, which can get expensive on non-json columns. \r\n\r\nInstead, `sqlite-utils` should be default (ie without the `--json-cols` flags) do the `maybe_json()` operation on columns with a declared `JSON` type. So the above table would expand the `\"friends\"` column as expected, withoutthe `--json-cols` flag:\r\n\r\n```bash\r\nsqlite-utils dogs.db \"select * from dogs\" | python -mjson.tool\r\n```\r\n\r\n```\r\n[\r\n {\r\n \"id\": 1,\r\n \"name\": \"Cleo\",\r\n \"friends\": [\r\n {\r\n \"name\": \"Pancakes\"\r\n },\r\n {\r\n \"name\": \"Bailey\"\r\n }\r\n ]\r\n }\r\n]\r\n```\r\n\r\n---\r\n\r\nI'm sure there's other ways `sqlite-utils` can specially handle JSON columns, so keeping this open while I think of more", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/579/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1822813627, "node_id": "I_kwDOBm6k_c5spe27", "number": 2108, "title": "some (many?) SQL syntax errors are not throwing errors with a .csv endpoint", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-26T16:57:45Z", "updated_at": "2023-07-26T16:58:07Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "here's a CTE query that should always fail with a syntax error:\r\n\r\n```sql\r\nwith foo as (nonsense)\r\nselect\r\n *\r\nfrom\r\n foo;\r\n```\r\n\r\nwhen we make this query against the default endpoint, we do indeed get a 400 status code the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B\r\n\r\nbut, if we use the csv endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B\r\n\r\nsame with this bad sql\r\n\r\n```sql\r\nselect\r\n a,\r\nfrom\r\n foo;\r\n```\r\n\r\nhttps://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B\r\n\r\nvs \r\n\r\nhttps://global-power-plants.datasettes.com/global-power-plants.csv?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B\r\n\r\nbut, datasette catches this bad sql at both endpoints:\r\n\r\n```sql\r\nslect\r\n a\r\nfrom\r\n foo;\r\n```\r\n\r\nhttps://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B\r\nhttps://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2108/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 
0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1855885427, "node_id": "I_kwDOBm6k_c5unpBz", "number": 2143, "title": "De-tangling Metadata before Datasette 1.0", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 24, "created_at": "2023-08-18T00:51:50Z", "updated_at": "2023-08-24T18:28:27Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Metadata in Datasette is a really powerful feature, but is a bit difficult to work with. It was initially a way to add \"metadata\" about your \"data\" in Datasette instances, like descriptions for databases/tables/columns, titles, source URLs, licenses, etc. But it later became the go-to spot for other Datasette features that have nothing to do with metadata, like permissions/plugins/canned queries. \r\n\r\nSpecifically, I've found the following problems when working with Datasette metadata:\r\n\r\n1. Metadata cannot be updated without re-starting the entire Datasette instance.\r\n2. The `metadata.json`/`metadata.yaml` has become a kitchen sink of unrelated (imo) features like plugin config, authentication config, canned queries\r\n3. The Python APIs for defining extra metadata are a bit awkward (the `datasette.metadata()` class, `get_metadata()` hook, etc.)\r\n\r\n## Possible solutions\r\n\r\nHere's a few ideas of Datasette core changes we can make to address these problems. \r\n\r\n### Re-vamp the Datasette Python metadata APIs\r\n\r\nThe Datasette object has a single `datasette.metadata()` method that's a bit difficult to work with. There's also no Python API for inserted new metadata, so plugins have to rely on the `get_metadata()` hook.\r\n\r\nThe `get_metadata()` hook can also be improved - it doesn't work with async functions yet, so you're quite limited to what you can do.\r\n\r\n(I'm a bit fuzzy on what to actually do here, but I imagine it'll be very small breaking changes to a few Python methods)\r\n\r\n### Add an optional `datasette_metadata` table\r\n\r\nDatasette should detect and use metadata stored in a new special table called `datasette_metadata`. This would be a regular table that a user can edit on their own, and would serve as a \"live updating\" source of metadata, than can be changed while the Datasette instance is running.\r\n\r\nNot too sure what the schema would look like, but I'd imagine:\r\n\r\n```sql\r\nCREATE TABLE datasette_metadata(\r\n level text,\r\n target any,\r\n key text,\r\n value any,\r\n primary key (level, target)\r\n)\r\n```\r\n\r\nEvery row in this table would map to a single metadata \"entry\".\r\n\r\n- `level` would be one of \"datasette\", \"database\", \"table\", \"column\", which is the \"level\" the entry describes. For example, `level=\"table\"` means it is metadata about a specific table, `level=\"database\"` for a specific database, or `level=\"datasette\"` for the entire Datasette instance.\r\n- `target` would \"point\" to the specific object the entry metadata is about, and would depend on what `level` is specific. \r\n - `level=\"database\"`: `target` would be the string name of the database that the metadata entry is about. ex `\"fixtures\"`\r\n - `level=\"table\"`: `target` would be a JSON array of two strings. The first element would be the database name, and the second would be the table name. 
ex `[\"fixtures\", \"students\"]`\r\n - `level=\"column\"`: `target` would be a JSON array of 3 strings: The database name, table name, and column name. Ex `[\"fixtures\", \"students\", \"student_id\"`]\r\n- `key` would be the type of metadata entry the row has, similar to the current \"keys\" that exist in `metadata.json`. Ex `\"about_url\"`, `\"source\"`, `\"description\"`, etc\r\n- `value` would be the text value of be metadata entry. The literal text value of a description, about_url, column_label, etc\r\n\r\nA quick sample:\r\n\r\nlevel | target | key | value\r\n-- | -- | -- | --\r\ndatasette | NULL | title | my datasette title...\r\ndb | fixtures | source | \r\ntable | [\"fixtures\", \"students\"] | label_column | student_name\r\ncolumn | [\"fixtures\", \"students\", \"birthdate\"] | description | \r\n\r\nThis `datasette_metadata` would be configured with other tools, and hopefully not manually by end users. Datasette Core could also offer a UI for editing entries in `datasette_metadata`, to update descriptions/columns on the fly.\r\n\r\n### Re-vamp `metadata.json` and move non-metadata config to another place\r\n\r\nThe motivation behind this is that it's awkward that `metadata.json` contains config about things that are not strictly metadata, including:\r\n\r\n- Plugin configuration\r\n- [Authentication/permissions](https://docs.datasette.io/en/latest/authentication.html#access-permissions-in-metadata) (ex the `allow` key on datasettes/databases/tables\r\n- Canned queries. might be controversial, but in my mind, canned queries are application-specific code and configuration, and don't describe the data that exists in SQLite databases. \r\n\r\nI think we should move these outside of `metadata.json` and into a different file. The `datasette.json` idea in #2093 may be a good solution here: plugin/permissions/canned queries can be defined in `datasette.json`, while `metadata.json`/`datasette_metadata` will strictly be about documenting databases/tables/columns. 
\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2143/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1864112887, "node_id": "PR_kwDOBm6k_c5Yo7bk", "number": 2151, "title": "Test Datasette on multiple SQLite versions", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-08-23T22:42:51Z", "updated_at": "2023-08-23T22:58:13Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2151", "body": "still testing, hope it works!\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2151.org.readthedocs.build/en/2151/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2151/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 1, "state_reason": null} {"id": 1865869205, "node_id": "I_kwDOBm6k_c5vNueV", "number": 2157, "title": "Proposal: Make the `_internal` database persistent, customizable, and hidden", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-08-24T20:54:29Z", "updated_at": "2023-08-31T02:45:56Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "The current `_internal` database is used by Datasette core to cache info about databases/tables/columns/foreign keys of databases in a Datasette instance. It's a temporary database created at startup, that can only be seen by the root user. See an [example `_internal` DB here](https://latest.datasette.io/_internal), after [logging in as root](https://latest.datasette.io/login-as-root).\r\n\r\nThe current `_internal` database has a few rough edges:\r\n\r\n- It's part of `datasette.databases`, so many plugins have to specifically exclude `_internal` from their queries [examples here](https://github.com/search?q=datasette+hookimpl+%22_internal%22+language%3APython+-path%3Adatasette%2F&ref=opensearch&type=code)\r\n- It's only used by Datasette core and can't be used by plugins or 3rd parties\r\n- It's created from scratch at startup and stored in memory. Why is fine, the performance is great, but persistent storage would be nice.\r\n\r\nAdditionally, it would be really nice if plugins could use this `_internal` database to store their own configuration, secrets, and settings. For example:\r\n\r\n- `datasette-auth-tokens` [creates a `_datasette_auth_tokens` table](https://github.com/simonw/datasette-auth-tokens/blob/main/datasette_auth_tokens/__init__.py#L15) to store auth token metadata. 
This could be moved into the `_internal` database to avoid writing to the gues database\r\n- `datasette-socrata` [creates a `socrata_imports`](https://github.com/simonw/datasette-socrata/blob/1409aa9b4d2fc3aff286b52e73af33b5786d56d0/datasette_socrata/__init__.py#L190-L198) table, which also can be in `_internal`\r\n- `datasette-upload-csvs` [creates a `_csv_progress_`](https://github.com/simonw/datasette-upload-csvs/blob/main/datasette_upload_csvs/__init__.py#L154) table, which can be in `_internal`\r\n- `datasette-write-ui` wants to have the ability for users to toggle whether a table appears editable, which can be either in `datasette.yaml` or on-the-fly by storing config in `_internal`\r\n\r\n\r\nIn general, these are specific features that Datasette plugins would have access to if there was a central internal database they could read/write to:\r\n\r\n- **Dynamic configuration**. Changing the `datasette.yaml` file works, but can be tedious to restart the server every time. Plugins can define their own configuration table in `_internal`, and could read/write to it to store configuration based on user actions (cell menu click, API access, etc.)\r\n- **Caching**. If a plugin or Datasette Core needs to cache some expensive computation, they can store it inside `_internal` (possibly as a temporary table) instead of managing their own caching solution.\r\n- **Audit logs**. If a plugin performs some sensitive operations, they can log usage info to `_internal` for others to audit later. \r\n- **Long running process status**. Many plugins (`datasette-upload-csvs`, `datasette-litestream`, `datasette-socrata`) perform tasks that run for a really long time, and want to give continue status updates to the user. They can store this info inside` _internal`\r\n- **Safer authentication**. Passwords and authentication plugins usually store credentials/hashed secrets in configuration files or environment variables, which can be difficult to handle. Now, they can store them in `_internal` \r\n\r\n## Proposal\r\n\r\n- We remove `_internal` from [`datasette.databases`](https://docs.datasette.io/en/latest/internals.html#databases) property.\r\n- We add new `datasette.get_internal_db()` method that returns the `_internal` database, for plugins to use\r\n- We add a new `--internal internal.db` flag. If provided, then the `_internal` DB will be sourced from that file, and further updates will be persisted to that file (instead of an in-memory database)\r\n- When creating internal.db, create a new `_datasette_internal` table to mark it a an \"datasette internal database\"\r\n- In `datasette serve`, we check for the existence of the `_datasette_internal` table. If it exists, we assume the user provided that file in error and raise an error. This is to limit the chance that someone accidentally publishes their internal database to the internet. We could optionally add a `--unsafe-allow-internal` flag (or database plugin) that allows someone to do this if they really want to.\r\n\r\n\r\n## New features unlocked with this\r\n\r\nThese features don't really need a standardized `_internal` table per-say (plugins could currently configure their own long-time storage features if they really wanted to), but it would make it much simpler to create these kinds of features with a persistent application database.\r\n\r\n- **`datasette-comments`** : A plugin for commenting on rows or specific values in a database. 
Comment contents + threads + email notification info can be stored in `_internal`\r\n- **Bookmarks**: \"Bookmarking\" an SQL query could be stored in `_internal`, or a URL link shortener\r\n- **Webhooks**: If a plugin wants to either consume a webhook or create a new one, they can store hashed credentials/API endpoints in `_internal`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2157/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1884330740, "node_id": "PR_kwDOBm6k_c5ZszDF", "number": 2174, "title": "Use $DATASETTE_INTERNAL in absence of --internal", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-09-06T16:07:15Z", "updated_at": "2023-09-08T00:46:13Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2174", "body": "#refs 2157, specifically [this comment](https://github.com/simonw/datasette/issues/2157#issuecomment-1700291967)\r\n\r\nPassing in `--internal my_internal.db` over and over again can get repetitive. \r\n\r\nThis PR adds a new configurable env variable `DATASETTE_INTERNAL_DB_PATH`. If it's defined, then it takes place as the path to the internal database. Users can still overwrite this behavior by passing in their own `--internal internal.db` flag.\r\n\r\nIn draft mode for now, needs tests and documentation. \r\n\r\nSide note: Maybe we can have a sections in the docs that lists all the \"configuration environment variables\" that Datasette respects? I did a quick grep and found:\r\n\r\n- `DATASETTE_LOAD_PLUGINS`\r\n- `DATASETTE_SECRETS`\r\n\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2174.org.readthedocs.build/en/2174/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2174/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1900026059, "node_id": "I_kwDOBm6k_c5xQBjL", "number": 2188, "title": "Plugin Hooks for \"compile to SQL\" languages", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-09-18T01:37:15Z", "updated_at": "2023-09-18T06:58:53Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "There's a ton of tools/languages that compile to SQL, which may be nice in Datasette. Some examples:\r\n\r\n- Logica https://logica.dev\r\n- PRQL https://prql-lang.org\r\n- Malloy, but not sure if it works with SQLite? 
https://github.com/malloydata/malloy\r\n\r\nIt would be cool if plugins could extend Datasette to use these languages, in both the code editor and API usage.\r\n\r\nA few things I'd imagine a `datasette-prql` or `datasette-logica` plugin would do:\r\n\r\n- `prql=` instead of `sql=`\r\n- Code editor support (syntax highlighting, autocomplete)\r\n- Hide/show SQL", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2188/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1983600865, "node_id": "PR_kwDOBm6k_c5e7WH7", "number": 2206, "title": "Bump the python-packages group with 1 update", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-11-08T13:18:56Z", "updated_at": "2023-12-08T13:46:24Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2206", "body": "Bumps the python-packages group with 1 update: [black](https://github.com/psf/black).\n\n
Release notes (sourced from black's releases):

**23.11.0**

Highlights:
- Support formatting ranges of lines with the new `--line-ranges` command-line option (#4020)

Stable style:
- Fix crash on formatting bytes strings that look like docstrings (#4003)
- Fix crash when whitespace followed a backslash before newline in a docstring (#4008)
- Fix standalone comments inside complex blocks crashing Black (#4016)
- Fix crash on formatting code like `await (a ** b)` (#3994)
- No longer treat leading f-strings as docstrings. This matches Python's behaviour and fixes a crash (#4019)

Preview style:
- Multiline dicts and lists that are the sole argument to a function are now indented less (#3964)
- Multiline unpacked dicts and lists as the sole argument to a function are now also indented less (#3992)
- In f-string debug expressions, quote types that are visible in the final string are now preserved (#4005)
- Fix a bug where long `case` blocks were not split into multiple lines. Also enable general trailing comma rules on `case` blocks (#4024)
- Keep requiring two empty lines between module-level docstring and first function or class definition (#4028)
- Add support for single-line format skip with other comments on the same line (#3959)

Configuration:
- Consistently apply force exclusion logic before resolving symlinks (#4015)
- Fix a bug in the matching of absolute path names in `--include` (#3976)

Performance:
- Fix mypyc builds on arm64 on macOS (#4017)

Integrations:
- Black's pre-commit integration will now run only on git hooks appropriate for a code formatter (#3940)

**23.10.1**

Highlights:
- Maintenance release to get a fix out for GitHub Action edge case (#3957)

Preview style:

... (truncated)
Changelog (sourced from black's changelog): same entries as the release notes above. ... (truncated)
Commits:
- `2a1c67e` Prepare release 23.11.0 (#4032)
- `72e7a2e` Remove redundant condition from `has_magic_trailing_comma` (#4023)
- `1a7d9c2` Preserve visible quote types for f-string debug expressions (#4005)
- `f4c7be5` docs: fix minor typo (#4030)
- `2e4fac9` Apply force exclude logic before symlink resolution (#4015)
- `66008fd` [563] Fix standalone comments inside complex blocks crashing Black (#4016)
- `50ed622` Fix long case blocks not split into multiple lines (#4024)
- `46be1f8` Support formatting specified lines (#4020)
- `ecbd9e8` Fix crash with f-string docstrings (#4019)
- `e808e61` Preview: Keep requiring two empty lines between module-level docstring and fi...
- Additional commits viewable in compare view
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=black&package-manager=pip&previous-version=23.9.1&new-version=23.11.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
\nDependabot commands and options\n
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency\n- `@dependabot ignore major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)\n- `@dependabot ignore minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)\n- `@dependabot ignore ` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)\n- `@dependabot unignore ` will remove all of the ignore conditions of the specified dependency\n- `@dependabot unignore ` will remove the ignore condition of the specified dependency and ignore conditions\n\n\n
\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2206.org.readthedocs.build/en/2206/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2206/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1994857251, "node_id": "I_kwDOBm6k_c525xsj", "number": 2208, "title": "No suggested facets when a column named 'value' is included", "user": {"value": 198537, "label": "rgieseke"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-11-15T14:11:17Z", "updated_at": "2023-11-15T14:18:59Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When a column named 'value' is included, no suggested facets are shown, because the query uses an alias of 'value'.\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L168-L174\r\n\r\nCurrently the following is shown (from https://latest.datasette.io/fixtures/facetable)\r\n\r\n![image](https://github.com/simonw/datasette/assets/198537/a919509a-ea88-461b-b25b-8b776720c7c5)\r\n\r\nWhen I add a column named 'value', only the JSON facets are processed.\r\n\r\n![image](https://github.com/simonw/datasette/assets/198537/092bd0b3-4c20-434e-88f8-47e2b8994a1d)\r\n\r\nI think that not using aliases could be a solution (unless someone wants to use a column named `count(*)`, though this seems unlikely). I'll open a PR with that.\r\n\r\nThere is also a TODO with a similar question in the same file. 
I have not looked into that yet.\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L512", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2208/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1994861266, "node_id": "PR_kwDOBm6k_c5fhgOS", "number": 2209, "title": "Fix query for suggested facets with column named value", "user": {"value": 198537, "label": "rgieseke"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2023-11-15T14:13:30Z", "updated_at": "2023-11-15T15:31:12Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2209", "body": "See discussion in https://github.com/simonw/datasette/issues/2208\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2209.org.readthedocs.build/en/2209/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2209/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 2028698018, "node_id": "I_kwDOBm6k_c5463mi", "number": 2213, "title": "feature request: gzip compression of database downloads", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-12-06T14:35:03Z", "updated_at": "2023-12-06T15:05:46Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed. \r\n\r\nthis is similar to #1213, but for me, i don't need datasette to compress html and json because my CDN layer does it for me, however, cloudflare at least, will not compress a mimetype of \"application\"\r\n\r\n(see list of mimetype: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2213/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 892383270, "node_id": "MDExOlB1bGxSZXF1ZXN0NjQ1MTAwODQ4", "number": 12, "title": "Recovering of malformed ENEX file", "user": {"value": 8431437, "label": "engdan77"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-05-15T07:49:31Z", "updated_at": "2021-05-15T19:57:50Z", "closed_at": null, "author_association": "FIRST_TIMER", "pull_request": "dogsheep/evernote-to-sqlite/pulls/12", "body": "Hey .. Awesome work developing this project, that I found very useful to me and saved me some work.. Thanks.. :)\r\n\r\nSome background to this PR... 
\r\nI've been searching around for a tool that would let me transform my personal collection of Evernote notes into a format that is easier to search and potentially easier to import into future services. \r\n\r\nI discovered a problem processing my large export (~5GB) with the existing source, which uses Python's built-in XML parser: it was unable to finish without an exception breaking the process. \r\n\r\nIn my first attempt I tried to adapt the code to the more robust lxml package, which handles huge data and has a \"recover\" mode, but even though it worked better it also failed to process the whole file. Even the memory-efficient etree.iterparse() unfortunately ran into trouble.\r\n\r\nHaving had no luck finding any other library that could successfully parse this enormous file, I instead chose to build a \"hugexmlparser\" module that parses the file using yield (on a byte-by-byte level), lets you set a maximum size to cater for potentially malformed or undesirably large attachments, and should succeed while covering potential exceptions. In some cases the parser discovers malformed XML inside a note; in those cases it tries to save as much as possible by escaping the content (to be dealt with at a later stage, which is better than nothing), and if an end tag is missing before a new (malformed?) one, it adds the missing end tag after encountering the new start tag.\r\n\r\nThe code for the recovery process is a bit rough and certainly has room for refactoring, but at the moment it seems to achieve what I wanted.\r\n\r\nWith the above in place, the output is passed to a slightly changed version of save_note_recovery() to ensure the existing behaviour still works.\r\nI also added this as a new recover-enex command to click and kept the original options. \r\nA couple of new tests were added as well to check this command.\r\n\r\nThis currently works for me, but I thought I might share a PR in case you find use for this yourself, or it proves useful to others finding this repository.\r\n\r\nAs a second step, when time allows, it would be nice to also be able to easily export from SQLite to formatted HTML/MD with attachments saved... but that might be better as a separate project ... 
or if you or someone else already has something that could be shared to save some trouble, I would be interested ;-) ", "repo": {"value": 303218369, "label": "evernote-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 355299310, "node_id": "MDExOlB1bGxSZXF1ZXN0MjExODYwNzA2", "number": 363, "title": "Search all apps during heroku publish", "user": {"value": 436032, "label": "kevboh"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-08-29T19:25:10Z", "updated_at": "2018-08-31T14:39:45Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/363", "body": "Adds the `-A` option to include apps from all organizations when searching app names for publish.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/363/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 359075028, "node_id": "MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx", "number": 364, "title": "Support for other types of databases using external connectors", "user": {"value": 11912854, "label": "jsancho-gpl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-09-11T14:31:47Z", "updated_at": "2018-09-11T14:31:47Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/364", "body": "This PR is related to #293, but now all commits have been merged.\r\n\r\nThe purpose is to support other file formats that aren't SQLite, like files with PyTables format. I've tried to accomplish that using external connectors published with entry points.\r\n\r\nThe modifications in the original datasette code are minimal and many are in a separate file.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/364/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 646448486, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQwNzM1ODE0", "number": 868, "title": "initial windows ci setup", "user": {"value": 702729, "label": "joshmgrant"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-06-26T18:49:13Z", "updated_at": "2021-07-10T23:41:43Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/868", "body": "Picking up the work done on #557 with a new PR. 
Seeing if I can get this working.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/868/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 655974395, "node_id": "MDExOlB1bGxSZXF1ZXN0NDQ4MzU1Njgw", "number": 30, "title": "Handle empty bucket on first upload. Allow specifying the endpoint_url for services other than S3 (like b2 and digitalocean spaces)", "user": {"value": 110038, "label": "scanner"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-13T16:15:26Z", "updated_at": "2020-07-13T16:15:26Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep-photos/pulls/30", "body": "Finally got around to trying dogsheep-photos but I want to use backblaze's b2 service instead of AWS S3.\r\nHad to add a way to optionally specify the endpoint_url to connect to. Then with the bucket being empty the initial key retrieval would fail. Probably a better way to see that the bucket is empty than doing a test inside the paginator loop.\r\n\r\nAlso probably a better way to specify the endpoint_url as we get and test for it twice using the same code in two different places but did not want to spend too much time worrying about it.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/30/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 718395987, "node_id": "MDExOlB1bGxSZXF1ZXN0NTAwNzk4MDkx", "number": 1008, "title": "Add json_loads and json_dumps jinja2 filters", "user": {"value": 649467, "label": "mhalle"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-10-09T20:11:34Z", "updated_at": "2020-12-15T02:30:28Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1008", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1008/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 723499985, "node_id": "MDExOlB1bGxSZXF1ZXN0NTA1MDc2NDE4", "number": 5, "title": "Add fitbit-to-sqlite", "user": {"value": 4632208, "label": "mrphil007"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-10-16T20:04:05Z", "updated_at": "2020-10-16T20:04:05Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/5", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, 
\"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}