{"id": 2023057255, "node_id": "I_kwDOBm6k_c54lWdn", "number": 2212, "title": "Can't filter with numbers", "user": {"value": 605070, "label": "fzakaria"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-12-04T05:26:29Z", "updated_at": "2023-12-04T05:26:29Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a schema that uses numbers for a column (actually it's a boolean 1 or 0 but SQLite doesn't have Boolean).\r\nI can't seem to get the facet to work or even filtering on this column.\r\n\r\nMy guess is that Datasette is \"stringifying\" the number and it's not matching?\r\nExample: https://debian-sqlelf.fly.dev/debian/elf_symbols?_sort_desc=name&_facet=exported&exported=0", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2212/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 2019811176, "node_id": "I_kwDOBm6k_c54Y99o", "number": 2211, "title": "Unreachable exception handlers for `sqlite3.OperationalError`", "user": {"value": 1214074, "label": "mattparmett"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-12-01T00:50:22Z", "updated_at": "2023-12-01T00:50:22Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There are several places where `sqlite3.OperationalError` is caught as part of an exception handler which catches multiple exceptions, but is then caught again immediately afterwards by a dedicated exception handler.\r\n\r\nBecause the exception will be caught by the first handler, the logic in the second handler is unreachable and will never be executed. If this is intended behavior, the second handler can be removed. If this is not intended, and the second handler should be the one that catches this exception, then `sqlite3.OperationalError` should be removed from the tuple of exceptions in the first handler.\r\n\r\nThis issue was found via a CodeQL query on the repository, and I've listed the occurrences found by the query below. There may be other instances of this issue in the code that were not surfaced by the query. 
I'd be happy to share the query if others would like to view or run it.\r\n\r\nOne example:\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/views/database.py#L534-L537\r\n\r\nOther instances:\r\n\r\nhttps://github.com/simonw/datasette/blob/main/datasette/views/base.py#L266-L270\r\nhttps://github.com/simonw/datasette/blob/main/datasette/views/base.py#L452-L456", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2211/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1994845152, "node_id": "I_kwDOBm6k_c525uvg", "number": 2207, "title": "ModuleNotFoundError: No module named 'click_default_group", "user": {"value": 283441, "label": "honzajavorek"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-15T14:04:32Z", "updated_at": "2023-11-15T14:04:32Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "No matter what I do, I'm getting this error:\r\n\r\n```\r\n$ datasette\r\nTraceback (most recent call last):\r\n File \"/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/bin/datasette\", line 5, in \r\n from datasette.cli import cli\r\n File \"/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/lib/python3.11/site-packages/datasette/cli.py\", line 6, in \r\n from click_default_group import DefaultGroup\r\nModuleNotFoundError: No module named 'click_default_group'\r\n```\r\n\r\nI have datasette in my dependencies like this:\r\n\r\n```toml\r\n[tool.poetry.group.dev.dependencies]\r\ndatasette = {version = \"1.0a7\", allow-prereleases = true}\r\n```\r\n\r\nI had the latest regular version (not pre-release) there originally, but the result was the same:\r\n\r\n```toml\r\n[tool.poetry.group.dev.dependencies]\r\ndatasette = \"0.64.5\"\r\n```\r\n\r\nFull pyproject.toml is at https://github.com/honzajavorek/junior.guru/ Previously datasette worked for me, but I guess something had to upgrade and now I can't even launch it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2207/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978603203, "node_id": "I_kwDOCGYnMM517xbD", "number": 602, "title": "`sqlite-utils transform` removes the `AUTOINCREMENT` keyword", "user": {"value": 4472046, "label": "ArsTapatun"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-06T08:48:43Z", "updated_at": "2023-11-06T08:48:43Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "### Context\r\n\r\nWe ran into this bug randomly, noticing that deleted `ROWID` would get reused after migrating the DB. 
Using `transform` to change any column in the table will also unexpectedly strip away the `AUTOINCREMENT` keyword from the primary key definition, even if it was not the transformation target.\r\n\r\n### Reproducible example\r\n\r\n**Original database**\r\n\r\n```sql\r\n$ sqlite3 test.db << EOF\r\nCREATE TABLE mytable (\r\n col1 INTEGER PRIMARY KEY AUTOINCREMENT,\r\n col2 TEXT NOT NULL\r\n)\r\nEOF\r\n\r\n$ sqlite3 test.db \".schema mytable\"\r\nCREATE TABLE mytable (\r\n col1 INTEGER PRIMARY KEY AUTOINCREMENT,\r\n col2 TEXT NOT NULL\r\n);\r\n```\r\n\r\n**Modified database after sqlite-utils**\r\n\r\n```sql\r\n$ sqlite-utils transform test.db mytable --rename col2 renamedcol2\r\n\r\n$ sqlite3 test.db \"SELECT sql FROM sqlite_master WHERE name = 'mytable';\"\r\nCREATE TABLE IF NOT EXISTS \"mytable\" (\r\n [col1] INTEGER PRIMARY KEY,\r\n [renamedcol2] TEXT NOT NULL\r\n);\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/602/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1978022687, "node_id": "I_kwDOBm6k_c515jsf", "number": 2204, "title": "request.post_body() can only be called once", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-05T23:22:03Z", "updated_at": "2023-11-05T23:23:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This code here:\r\n\r\nhttps://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L127-L135\r\n\r\nIt consumes the messages, which means if you try to call it a second time you won't be able to get at the body.\r\n\r\nThis is efficient - we don't end up with a `request` object property with potentially megabytes of content that we never look at again - but it's inconvenient for cases like middleware or functions where we don't know if the body has been consumed yet or not.\r\n\r\nPotential solution: set `request._body` the first time it is called, and return that on subsequent calls.\r\n\r\nPotential optimization: only do this for bodies that are shorter than a certain threshold - maybe 1MB - and raise an exception if you attempt to call `post_body()` multiple times against one of those larger bodies.\r\n\r\nI'm a bit nervous about that option though, since it could result in errors that don't show up in testing but do show up in production.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2204/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1977726056, "node_id": "I_kwDOBm6k_c514bRo", "number": 2203, "title": "custom plugin not seen as sql function", "user": {"value": 7113541, "label": "LyzardKing"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-05T10:30:19Z", "updated_at": "2023-11-05T10:30:19Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi, I'm not sure if this is the right repo 
for this issue.\r\n\r\nI'm using datasette with the parquet (to read a duckdb), and jellyfish plugins. Both work perfectly.\r\n\r\nNow I need to create a simple plugin that uses the python rouge package and returns a similarity score (similarly to how the jellyfish plugin works).\r\nIf I create a custom plugin, even the example hello_world one, copied directly from the tutorial, I get the following error:\r\n```duckdb.duckdb.CatalogException: Catalog Error: Scalar Function with name hello_world does not exist!```\r\n\r\nSince the jellyfish plugin doesn't do anything more complex, I'm wondering if there is some other kind of issue with my setup.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2203/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1977155641, "node_id": "I_kwDOCGYnMM512QA5", "number": 601, "title": "Move plugin directory into documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-11-04T04:07:52Z", "updated_at": "2023-11-04T04:07:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/sqlite-utils-plugins should be in the official documentation.\r\n\r\nI can use the same pattern as https://llm.datasette.io/en/stable/plugins/directory.html\r\n\r\nhttps://til.simonwillison.net/readthedocs/stable-docs", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/601/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1553425465, "node_id": "I_kwDOCGYnMM5cl2Q5", "number": 522, "title": "Add COLUMN_TYPE_MAPPING for timedelta", "user": {"value": 81377, "label": "maport"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-23T16:49:54Z", "updated_at": "2023-11-04T00:49:51Z", "closed_at": "2023-11-04T00:49:51Z", "author_association": "NONE", "pull_request": null, "body": "Currently trying to create a column with Python type `datetime.timedelta` results in an error:\r\n\r\n```\r\n>>> from sqlite_utils import Database\r\n>>> db = Database(\"test.db\")\r\n>>> test_tbl = db['test']\r\n>>> test_tbl.insert({'col1': datetime.timedelta()})\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 2979, in insert\r\n return self.insert_all(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 3082, in insert_all\r\n self.create(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 1574, in create\r\n self.db.create_table(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 961, in create_table\r\n sql = self.create_table_sql(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 852, in create_table_sql\r\n column_type=COLUMN_TYPE_MAPPING[column_type],\r\nKeyError: \r\n```\r\n\r\nThe reason this would be useful 
is that `MySQLdb` uses `timedelta` for MySQL `TIME` columns:\r\n\r\n```\r\n>>> import MySQLdb\r\n>>> conn = MySQLdb.connect(host='database', user='user', passwd='pw')\r\n>>> csr = conn.cursor()\r\n>>> csr.execute(\"SELECT CAST('11:20' AS TIME)\")\r\n>>> tuple(csr)\r\n((datetime.timedelta(seconds=40800),),)\r\n```\r\n\r\nSo currently any attempt to convert a MySQL DB with a `TIME` column using `db-to-sqlite` will result in the above error.\r\n\r\nI was rather surprised that `MySQLdb` uses `timedelta` for `TIME` columns but I see that [this column type](https://dev.mysql.com/doc/refman/8.0/en/time.html) is intended for time intervals as well as the time of day so it makes sense. \r\n\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/522/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1955676270, "node_id": "I_kwDOBm6k_c50kUBu", "number": 2201, "title": "Discord invite link is invalid", "user": {"value": 11708906, "label": "andrewsanchez"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-21T21:50:05Z", "updated_at": "2023-10-21T21:50:05Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw and returns the following:\r\n\r\n\"CleanShot\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2201/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1943259395, "node_id": "I_kwDOEhK-wc5z08kD", "number": 16, "title": " time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ'", "user": {"value": 3746270, "label": "linonetwo"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-14T13:24:39Z", "updated_at": "2023-10-14T13:24:39Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "\r\n```\r\nevernote-to-sqlite enex evernote.db ./\u6211\u7684\u7b14\u8bb0.enex\r\nImporting from ENEX [#####-------------------------------] 14%\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/evernote-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n ^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1157, in __call__\r\n return self.main(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1078, in main\r\n rv = self.invoke(ctx)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1688, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1434, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 783, in invoke\r\n return __callback(*args, **kwargs)\r\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/cli.py\", line 31, in enex\r\n save_note(db, note)\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py\", line 46, in save_note\r\n \"created\": convert_datetime(created),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py\", line 111, in convert_datetime\r\n return datetime.datetime.strptime(s, \"%Y%m%dT%H%M%SZ\").isoformat()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py\", line 568, in _strptime_datetime\r\n tt, fraction, gmtoff_fraction = _strptime(data_string, format)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py\", line 349, in _strptime\r\n raise ValueError(\"time data %r does not match format %r\" %\r\nValueError: time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ'\r\n```\r\n\r\nenex is exported by evernote mac client ", "repo": {"value": 303218369, "label": "evernote-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1931794126, "node_id": "I_kwDOBm6k_c5zJNbO", "number": 2198, "title": "--load-extension=spatialite not working with Windows", "user": {"value": 363004, "label": "hcarter333"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-08T12:50:22Z", "updated_at": "2023-10-08T12:50:22Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Using each of\r\n`python -m datasette counties.db -m metadata.yml --load-extension=SpatiaLite`\r\n\r\nand \r\n\r\n`python -m datasette counties.db --load-extension=\"C:\\Windows\\System32\\mod_spatialite.dll\"`\r\n\r\nand\r\n\r\n`python -m datasette counties.db --load-extension=C:\\Windows\\System32\\mod_spatialite.dll`\r\n\r\nI got the error:\r\n\r\n```\r\n File \"C:\\Users\\m3n7es\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\datasette\\database.py\", line 209, in in_thread\r\n self.ds._prepare_connection(conn, self.name)\r\n File \"C:\\Users\\m3n7es\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\datasette\\app.py\", line 596, in _prepare_connection\r\n conn.execute(\"SELECT load_extension(?, ?)\", [path, entrypoint])\r\nsqlite3.OperationalError: The specified module could not be found.\r\n\r\n```\r\n\r\nI finally tried modifying the code in app.py to read:\r\n\r\n```\r\n def _prepare_connection(self, conn, database):\r\n conn.row_factory = sqlite3.Row\r\n conn.text_factory = lambda x: str(x, \"utf-8\", \"replace\")\r\n if self.sqlite_extensions:\r\n conn.enable_load_extension(True)\r\n for extension in self.sqlite_extensions:\r\n # \"extension\" is either a string path to the extension\r\n # or a 2-item tuple that specifies which entrypoint to load.\r\n #if isinstance(extension, tuple):\r\n # path, entrypoint = extension\r\n # 
conn.execute(\"SELECT load_extension(?, ?)\", [path, entrypoint])\r\n #else:\r\n conn.execute(\"SELECT load_extension('C:\\Windows\\System32\\mod_spatialite.dll')\")\r\n\r\n```\r\nAt which point the counties example worked. \r\n\r\nIs there a correct way to install/use the extension on Windows? My method will cause issues if there's a second extension to be used.\r\n\r\nOn an unrelated note, my next step is to figure out how to write a query across the two loaded databases supplied from the command line:\r\n`python -m datasette rm_toucans_23_10_07.db counties.db -m metadata.yml --load-extension=SpatiaLite`\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2198/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1920416843, "node_id": "I_kwDOCGYnMM5ydzxL", "number": 597, "title": "sqlite-utils insert-files should be able to convert fields", "user": {"value": 1737541, "label": "grimnight"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-30T22:20:47Z", "updated_at": "2023-09-30T22:20:47Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Currently using both `insert-files` and `convert` is needed in order to create sqlar files, it would be more convenient if it could be done with just one command.\r\n\r\n```shell\r\n~\r\n\u276f cat test.py\r\nimport os\r\n\r\nclass Example:\r\n def __init__(self, arg1, arg2):\r\n self.arg1 = arg1\r\n\r\n~\r\n\u276f sqlite-utils insert-files test.sqlar sqlar test.py -c name:name -c data:content -c mode:mode -c mtime:mtime -c sz:size --pk=name\r\n [####################################] 100%\r\n\r\n~\r\n\u276f sqlite-utils convert test.sqlar sqlar data \"zlib.compress(value)\" --import=zlib --where \"name = 'test.py'\"\r\n[####################################] 100%\r\n\r\n~\r\n\u276f cat test.py | sqlite-utils convert test.sqlar sqlar data \"zlib.compress(sys.stdin.buffer.read())\" --import=zlib --import=sys --where \"name = 'test.py'\" # Alternative way\r\n [####################################] 100%\r\n\r\n~\r\n\u276f sqlite3 test.sqlar \"SELECT hex(data) FROM sqlar WHERE name = 'test.py';\" | python3 -c \"import sys, zlib; sys.stdout.buffer.write(zlib.decompress(bytes.fromhex(sys.stdin.read())))\"\r\nimport os\r\n\r\nclass Example:\r\n def __init__(self, arg1, arg2):\r\n self.arg1 = arg1\r\n\r\n~\r\n\u276f rm test.py\r\n\r\n~\r\n\u276f sqlar -l test.sqlar\r\ntest.py\r\n\r\n~\r\n\u276f sqlar -x test.sqlar\r\n\r\n~\r\n\u276f cat test.py\r\nimport os\r\n\r\nclass Example:\r\n def __init__(self, arg1, arg2):\r\n self.arg1 = arg1\r\n\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/597/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1899310542, "node_id": "I_kwDOBm6k_c5xNS3O", "number": 2187, "title": "Datasette for serving JSON only", "user": {"value": 19705106, "label": "geofinder"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, 
"comments": 0, "created_at": "2023-09-16T05:48:29Z", "updated_at": "2023-09-16T05:48:29Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi, is there any way to use datasette for serving json only without displaying webpage? I've tried to search about this in documentation but didn't get any information", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2187/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1895266807, "node_id": "I_kwDOBm6k_c5w93n3", "number": 2184, "title": "Design decision - should configuration be exposed at /-/config ?", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-13T21:07:08Z", "updated_at": "2023-09-13T21:07:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> This made me think. That `{\"$env\": \"ENV_VAR\"}` hack was introduced back here:\r\n>\r\n> - https://github.com/simonw/datasette/issues/538\r\n>\r\n> The problem it was solving was that metadata was visible to everyone with access to the instance at `/-/metadata` but plugins clearly needed a way to set secret settings.\r\n>\r\n> Now that this stuff is moving to config, we have some decisions to make:\r\n>\r\n> 1. Add `/-/config` to let people see the configuration of their instance, and keep the `$env` trick for secret settings.\r\n> 2. Say all configuration aside from metadata is secret and make `$env` optional or ditch it entirely.\r\n> 3. 
Allow plugins to announce which of their configuration options are secret so we can automatically redact them from `/-/config`\r\n>\r\n> I've found `/-/metadata` extraordinarily useful as a user of Datasette - it really helps me understand exactly what's going on if I run into any problems with a plugin, if I can quickly check what the settings look like.\r\n>\r\n> So I'm leaning towards option 1 or 3.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/pull/2183#discussion_r1325076924_\r\n\r\nAlso refs:\r\n- #2093", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2184/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1888477283, "node_id": "I_kwDOC8SPRc5wj-Bj", "number": 38, "title": "Run `rebuild_fts` after building the index", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-08T23:17:45Z", "updated_at": "2023-09-08T23:17:45Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "In:\r\n- https://github.com/simonw/datasette.io/issues/152#issuecomment-1712323347\r\n\r\nThis turned out to be the fix:\r\n\r\n```bash\r\ndogsheep-beta index dogsheep-index.db templates/dogsheep-beta.yml\r\nsqlite-utils rebuild-fts dogsheep-index.db\r\n```", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/38/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1886649402, "node_id": "I_kwDOBm6k_c5wc_w6", "number": 2179, "title": "Flaky test: test_hidden_sqlite_stat1_table", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-07T22:48:43Z", "updated_at": "2023-09-07T22:51:19Z", "closed_at": "2023-09-07T22:51:19Z", "author_association": "OWNER", "pull_request": null, "body": "This test here: https://github.com/simonw/datasette/blob/fbcb103c0cb6668018ace539a01a6a1f156e8d6a/tests/test_api.py#L1011-L1020\r\n\r\nIt failed for me like this:\r\n\r\n`E AssertionError: assert [('normal', False), ('sqlite_stat1', True), ('sqlite_stat4', True)] in ([('normal', False), ('sqlite_stat1', True)],)`\r\n\r\nLooks like some builds of SQLite include a `sqlite_stat4` table.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2179/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1884499674, "node_id": "PR_kwDODFE5qs5ZtYMc", "number": 13, "title": "use poetry for packages, asdf for versioning, and gh actions for ci", "user": {"value": 150855, "label": "iloveitaly"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-06T17:59:16Z", 
"updated_at": "2023-09-06T17:59:16Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/google-takeout-to-sqlite/pulls/13", "body": "- build: use poetry for package management, asdf for python version\n- build: cleanup poetry config, add keywords, ignore dist\n- ci: migrate circleci to gh actions\n- fix: dup method definition\n", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/13/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1884333600, "node_id": "PR_kwDOBm6k_c5Zszqk", "number": 2175, "title": "Test against Python 3.12 preview", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-06T16:09:05Z", "updated_at": "2023-09-06T16:16:28Z", "closed_at": "2023-09-06T16:16:27Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/2175", "body": "https://dev.to/hugovk/help-test-python-312-beta-1508/\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2175.org.readthedocs.build/en/2175/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2175/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1880968405, "node_id": "PR_kwDOJHON9s5ZhYny", "number": 14, "title": "fix: fix the problem of Chinese character garbling", "user": {"value": 2698003, "label": "barretlee"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-09-04T23:48:28Z", "updated_at": "2023-09-04T23:48:28Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/apple-notes-to-sqlite/pulls/14", "body": "1. The code uses two different ways of writing encoding formats, `mac_roman` and `macroman`. It is uncertain whether there are any typo errors.\r\n2. When there are Chinese characters in the content, exporting it results in garbled code. 
Changing it to `utf8` can fix the issue.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/14/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1876407598, "node_id": "I_kwDOBm6k_c5v17Uu", "number": 2169, "title": "execute-sql on a database should imply view-database/view-permission", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-31T22:45:56Z", "updated_at": "2023-08-31T22:46:28Z", "closed_at": "2023-08-31T22:46:28Z", "author_association": "OWNER", "pull_request": null, "body": "I noticed that a token with `execute-sql` permission alone did not work, because it was not allowed to view the instance of the database.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2169/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1875739055, "node_id": "I_kwDOBm6k_c5vzYGv", "number": 2167, "title": "Document return type of await ds.permission_allowed()", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-31T15:14:23Z", "updated_at": "2023-08-31T15:14:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The return type isn't documented here: https://github.com/simonw/datasette/blob/4c3ef033110407f3b3dbce501659d523724985e0/docs/internals.rst#L327-L350\r\n\r\nOn inspecting the code I'm not 100% sure if it's possible for this. method to return `None`, or if it can only return `True` or `False`. Need to confirm that.\r\n\r\nhttps://github.com/simonw/datasette/blob/4c3ef033110407f3b3dbce501659d523724985e0/datasette/app.py#L822C15-L853", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2167/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1871935751, "node_id": "I_kwDOD079W85vk3kH", "number": 40, "title": " ImportError: cannot import name 'formatargspec' from 'inspect'", "user": {"value": 36752421, "label": "hosslikw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-29T15:36:31Z", "updated_at": "2023-08-31T03:18:07Z", "closed_at": "2023-08-31T03:18:06Z", "author_association": "NONE", "pull_request": null, "body": "I get the following error when running \"pip3 install dogsheep-photos\"\r\n\" from inspect import ismethod, isclass, formatargspec\r\n ImportError: cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/inspect.py). 
Did you mean: 'formatargvalues'?\"\r\n \r\nPython 3.12.0rc1\r\nsqlite 3.43.0\r\ndatasette, version 0.64.3", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/40/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 594237015, "node_id": "MDU6SXNzdWU1OTQyMzcwMTU=", "number": 718, "title": "Plugin idea: datasette-redirects", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-05T03:41:38Z", "updated_at": "2023-08-30T22:17:31Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I just had to write a one-off custom plugin to redirect niche-musems.com to www.niche-museums.com (https://github.com/simonw/museums/issues/21) - it would be great if this kind of thing could be handled by a configurable plugin.\r\n\r\nhttps://github.com/simonw/museums/blob/6b1faf00c463b2228860d4d62d104b11935e01b1/plugins/redirect_www.py", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/718/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "reopened"} {"id": 1866815458, "node_id": "PR_kwDOBm6k_c5YyF-C", "number": 2159, "title": "Implement Dark Mode colour scheme", "user": {"value": 3315059, "label": "jamietanna"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-25T10:46:23Z", "updated_at": "2023-08-25T10:46:35Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2159", "body": "Closes #2095.\n\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2159.org.readthedocs.build/en/2159/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2159/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 1, "state_reason": null} {"id": 1865983069, "node_id": "PR_kwDOBm6k_c5YvQSi", "number": 2158, "title": "add brand option to metadata.json.", "user": {"value": 52261150, "label": "publicmatt"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-24T22:37:41Z", "updated_at": "2023-08-24T22:37:57Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2158", "body": "This adds a brand link to the top navbar if 'brand' key is populated in metadata.json. 
The link will be either '#' or use the contents of 'brand_url' in metadata.json for href.\r\n\r\nI was able to get this done on my own site by replacing `templates/_crumbs.html` with a custom version, but I thought it would be nice to incorporate this in the tool directly.\r\n\r\n![image](https://github.com/simonw/datasette/assets/52261150/fdfe9bb5-fee4-466c-8074-6132071d94e6)\r\n\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2158.org.readthedocs.build/en/2158/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2158/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1795051447, "node_id": "I_kwDOBm6k_c5q_k-3", "number": 2097, "title": "Drop Python 3.7", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-08T18:39:44Z", "updated_at": "2023-08-23T18:18:00Z", "closed_at": "2023-08-23T18:18:00Z", "author_association": "OWNER", "pull_request": null, "body": "> I'm going to drop Python 3.7.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1153#issuecomment-1627455892_\r\n\r\nIt's not supported any more: https://devguide.python.org/versions/", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2097/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1754174496, "node_id": "I_kwDOCGYnMM5ojpQg", "number": 558, "title": "Ability to define unique columns when creating a table", "user": {"value": 1910303, "label": "aguinane"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-13T06:56:19Z", "updated_at": "2023-08-18T01:06:03Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When creating a new table, it would be good to have an option to set unique columns similar to how not_null is set.\r\n\r\n```python\r\nfrom sqlite_utils import Database\r\n\r\ncolumns = {\"mRID\": str, \"name\": str}\r\ndb = Database(\"example.db\")\r\ndb[\"ExampleTable\"].create(columns, pk=\"mRID\", not_null=[\"mRID\"], if_not_exists=True)\r\ndb[\"ExampleTable\"].create_index([\"mRID\"], unique=True, if_not_exists=True)\r\n```\r\n\r\nSo something like this would add the UNIQUE flag to the table definition. 
\r\n\r\n```python\r\ndb[\"ExampleTable\"].create(columns, pk=\"mRID\", not_null=[\"mRID\"], unique=[\"mRID\"], if_not_exists=True)\r\n```\r\n\r\n```sql\r\nCREATE TABLE ExampleTable (\r\n mRID TEXT PRIMARY KEY\r\n NOT NULL\r\n UNIQUE,\r\n name TEXT\r\n);\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/558/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1802613340, "node_id": "PR_kwDOBm6k_c5VZhfw", "number": 2100, "title": "Make primary key view accessible to render_cell hook", "user": {"value": 1563881, "label": "meowcat"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-13T09:30:36Z", "updated_at": "2023-08-10T13:15:41Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2100", "body": "\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2100.org.readthedocs.build/en/2100/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2100/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1822940964, "node_id": "I_kwDOBm6k_c5sp98k", "number": 2115, "title": "Ensure all tests pass against new query view JSON", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 9700784, "label": "Datasette 1.0a3"}, "comments": 0, "created_at": "2023-07-26T18:25:20Z", "updated_at": "2023-08-08T02:01:39Z", "closed_at": "2023-08-08T02:01:38Z", "author_association": "OWNER", "pull_request": null, "body": "- #2109 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2115/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1839344979, "node_id": "I_kwDOCGYnMM5toi1T", "number": 582, "title": "Handling CSV/file input that contains NUL bytes", "user": {"value": 1448859, "label": "betatim"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-07T12:24:14Z", "updated_at": "2023-08-07T12:24:14Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I was using sqlite-utils to create a DB from a CSV and it turns out the CSV contains a NUL byte.\r\n\r\nWhen the processing reaches the line that contains the NUL an exception is raised.\r\n\r\nI'm wondering if there is something that can be done in `sqlite-utils` to say \"skip lines with encoding errors\" or some such. 
I think it isn't super straightforward though as the exception comes from inside the `csv` module that does all the parsing.\r\n\r\nConcretely the file is the `KernelVersions.csv` from https://www.kaggle.com/datasets/kaggle/meta-kaggle\r\n\r\nThis is the command and output:\r\n```\r\n$ sqlite-utils insert --csv kaggle.db kaggle KernelVersions.csv\r\n [------------------------------------] 0%\r\n [#####################---------------] 60% 00:04:24Traceback (most recent call last):\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/bin/sqlite-utils\", line 10, in \r\n sys.exit(cli())\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/sqlite_utils/cli.py\", line 1223, in insert\r\n insert_upsert_implementation(\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/sqlite_utils/cli.py\", line 1085, in insert_upsert_implementation\r\n db[table].insert_all(\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/sqlite_utils/db.py\", line 3198, in insert_all\r\n chunk = list(chunk)\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/sqlite_utils/db.py\", line 3742, in fix_square_braces\r\n for record in records:\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/sqlite_utils/cli.py\", line 1071, in \r\n docs = (decode_base64_values(doc) for doc in docs)\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/sqlite_utils/cli.py\", line 1068, in \r\n docs = (verify_is_dict(doc) for doc in docs)\r\n File \"/home/foobar/miniconda/envs/meta-kaggle/lib/python3.10/site-packages/sqlite_utils/cli.py\", line 1003, in \r\n docs = (dict(zip(headers, row)) for row in reader)\r\n_csv.Error: line contains NUL\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/582/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1827436260, "node_id": "PR_kwDOD079W85WtVyk", "number": 39, "title": "Missing option in datasette instructions", "user": {"value": 319473, "label": "coldclimate"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-29T10:34:48Z", "updated_at": "2023-07-29T10:34:48Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep-photos/pulls/39", "body": "Gotta tell it where to look", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "pull", 
"active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/39/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1824457306, "node_id": "I_kwDOBm6k_c5svwJa", "number": 2122, "title": "Parameters on canned queries: fixed or query-generated list?", "user": {"value": 1563881, "label": "meowcat"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-27T14:07:07Z", "updated_at": "2023-07-27T14:07:07Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi,\r\n\r\ncurrently parameters in canned queries are just text fields. It would be cool to have one of the options below. Would you accept a PR doing something in this direction? (Possibly this could even work as a plugin.)\r\n\r\n* adding facets, which would work like facets on tables or views, giving a list of selectable options (and leaving parameters as is)\r\n* making it possible to provide a query which returns selectable values for a parameter, e.g.\r\n``` \r\ncalendar_entries_current_instrument:\r\n sql: | \r\n select * from calendar_entries \r\n where \r\n DTEND_UNIX > UNIXEPOCH() and\r\n DTSTART_UNIX < UNIXEPOCH() + :days *24*60*60 and\r\n current = 1 and\r\n MACHINE = :instrument\r\n order by\r\n DTSTART_UNIX\r\n params:\r\n days: \r\n sql: \"SELECT VALUE FROM generate_series(1, 30, 1)\"\r\n # this obviously requires the corresponding sqlite extension\r\n instrument:\r\n sql: \"SELECT DISTINCT MACHINE FROM calendar_entries\"\r\n```\r\n* making it possible to provide a fixed list of parameters\r\n``` \r\ncalendar_entries_current_instrument:\r\n sql: | \r\n select * from calendar_entries \r\n where \r\n DTEND_UNIX > UNIXEPOCH() and\r\n DTSTART_UNIX < UNIXEPOCH() + :days *24*60*60 and\r\n current = 1 and\r\n MACHINE = :instrument\r\n order by\r\n DTSTART_UNIX\r\n params:\r\n days: \r\n values: [1, 2, 3, 5, 10, 20, 30]\r\n instrument:\r\n values: [supermachine, crappymachine, boringmachine]\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2122/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1823428714, "node_id": "I_kwDOBm6k_c5sr1Bq", "number": 2120, "title": "Add __all__ to datasette/__init__.py", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-27T01:07:10Z", "updated_at": "2023-07-27T01:07:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Currently looks like this: https://github.com/simonw/datasette/blob/08181823990a71ffa5a1b57b37259198eaa43e06/datasette/__init__.py#L1-L6\r\n\r\nAdding `__all__ = [\"Permission\", \"Forbidden\"...]` would let me get rid of those `# noqa` comments.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2120/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, 
\"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1822918995, "node_id": "I_kwDOCGYnMM5sp4lT", "number": 580, "title": "Add way to export to a csv file using the Python library", "user": {"value": 44324811, "label": "kevinlinxc"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-26T18:09:26Z", "updated_at": "2023-07-26T18:09:26Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "According to the documentation, we can make a csv output using the CLI tool, but not the Python library. Could we have the latter?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/580/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1822813627, "node_id": "I_kwDOBm6k_c5spe27", "number": 2108, "title": "some (many?) SQL syntax errors are not throwing errors with a .csv endpoint", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-26T16:57:45Z", "updated_at": "2023-07-26T16:58:07Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "here's a CTE query that should always fail with a syntax error:\r\n\r\n```sql\r\nwith foo as (nonsense)\r\nselect\r\n *\r\nfrom\r\n foo;\r\n```\r\n\r\nwhen we make this query against the default endpoint, we do indeed get a 400 status code the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B\r\n\r\nbut, if we use the csv endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B\r\n\r\nsame with this bad sql\r\n\r\n```sql\r\nselect\r\n a,\r\nfrom\r\n foo;\r\n```\r\n\r\nhttps://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B\r\n\r\nvs \r\n\r\nhttps://global-power-plants.datasettes.com/global-power-plants.csv?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B\r\n\r\nbut, datasette catches this bad sql at both endpoints:\r\n\r\n```sql\r\nslect\r\n a\r\nfrom\r\n foo;\r\n```\r\n\r\nhttps://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B\r\nhttps://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2108/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1821108702, "node_id": "I_kwDOCGYnMM5si-ne", "number": 579, "title": "Special handling for SQLite column of type `JSON`", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-25T20:37:23Z", "updated_at": "2023-07-25T20:37:23Z", "closed_at": null, 
"author_association": "CONTRIBUTOR", "pull_request": null, "body": "`sqlite-utils` should detect and have specially handling for column with a `JSON` column. For example:\r\n\r\n```sql\r\nCREATE TABLE \"dogs\" (\r\n id INTEGER PRIMARY KEY,\r\n name TEXT,\r\n friends JSON \r\n);\r\n```\r\n\r\n## Automatic Nesting\r\n\r\nAccording to [\"Nested JSON Values\"](https://sqlite-utils.datasette.io/en/stable/cli.html#nested-json-values), sqlite-utils will only expand JSON if the `--json-cols` flag is passed. It looks like it'll try to `json.load` all text column to test if its JSON, which can get expensive on non-json columns. \r\n\r\nInstead, `sqlite-utils` should be default (ie without the `--json-cols` flags) do the `maybe_json()` operation on columns with a declared `JSON` type. So the above table would expand the `\"friends\"` column as expected, withoutthe `--json-cols` flag:\r\n\r\n```bash\r\nsqlite-utils dogs.db \"select * from dogs\" | python -mjson.tool\r\n```\r\n\r\n```\r\n[\r\n {\r\n \"id\": 1,\r\n \"name\": \"Cleo\",\r\n \"friends\": [\r\n {\r\n \"name\": \"Pancakes\"\r\n },\r\n {\r\n \"name\": \"Bailey\"\r\n }\r\n ]\r\n }\r\n]\r\n```\r\n\r\n---\r\n\r\nI'm sure there's other ways `sqlite-utils` can specially handle JSON columns, so keeping this open while I think of more", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/579/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1786243905, "node_id": "I_kwDOCGYnMM5qd-tB", "number": 564, "title": "Document that running `db.transform()` tidies up the schema indentation", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-03T13:59:28Z", "updated_at": "2023-07-22T22:15:34Z", "closed_at": "2023-07-22T22:15:34Z", "author_association": "OWNER", "pull_request": null, "body": "> ... and it turns out running `.transform()` with no arguments still fixes the format of the schema!\r\n\r\n```pycon\r\n>>> db[\"log\"].add_column(\"foo\", str)\r\n\r\n>>> db[\"log\"].add_column(\"bar\", str)\r\n
\r\n>>> db[\"log\"].add_column(\"baz\", str)\r\n
\r\n>>> print(db[\"log\"].schema)\r\nCREATE TABLE \"log\" (\r\n [id] INTEGER PRIMARY KEY,\r\n [name2] TEXT,\r\n [age] INTEGER,\r\n [weight] FLOAT\r\n, [foo] TEXT, [bar] TEXT, [baz] TEXT)\r\n>>> db[\"log\"].transform()\r\n
\r\n>>> print(db[\"log\"].schema)\r\nCREATE TABLE \"log\" (\r\n [id] INTEGER PRIMARY KEY,\r\n [name2] TEXT,\r\n [age] INTEGER,\r\n [weight] FLOAT,\r\n [foo] TEXT,\r\n [bar] TEXT,\r\n [baz] TEXT\r\n)\r\n```\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/llm/issues/65#issuecomment-1618347727_\r\n ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/564/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 1}", "draft": null, "state_reason": "completed"} {"id": 1816857105, "node_id": "I_kwDOCGYnMM5sSwoR", "number": 570, "title": "`sqlite-utils install -e` option", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-22T18:32:23Z", "updated_at": "2023-07-22T18:55:59Z", "closed_at": "2023-07-22T18:32:56Z", "author_association": "OWNER", "pull_request": null, "body": "As seen in LLM.\r\n\r\nNeeded while working on:\r\n- #567", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/570/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1816830546, "node_id": "I_kwDODEm0Qs5sSqJS", "number": 73, "title": "Twitter v1 API shutdown", "user": {"value": 6341745, "label": "david-perez"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-22T16:57:41Z", "updated_at": "2023-07-22T16:57:41Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I've been using this project reliably over the past two years to periodically download my liked tweets, but unfortunately since 19th July I get:\r\n\r\n```\r\n[2023-07-19 21:00:04.937536] File \"/home/pi/code/liked-tweets/lib/python3.7/site-packages/twitter_to_sqlite/utils.py\", line 202, in fetch_timeline\r\n[2023-07-19 21:00:04.937606] raise Exception(str(tweets[\"errors\"]))\r\n[2023-07-19 21:00:04.937678] Exception: [{'message': 'You currently have access to a subset of Twitter API v2 endpoints and limited v1.1 endpoints (e.g. media post, oauth) only. If you need access to this endpoint, you may need a different access level. You can learn more here: https://developer.twitter.com/en/portal/product', 'code': 453}]\r\n```\r\n\r\nIt appears like Twitter has now shut down their v1 endpoints, which is rather gracious of them, considering they [announced they'd be deprecated on 29th April](https://twittercommunity.com/t/reminder-to-migrate-to-the-new-free-basic-or-enterprise-plans-of-the-twitter-api/189737).\r\n\r\nUnfortunately [retrieving likes using the v2 API](https://developer.twitter.com/en/docs/twitter-api/tweets/likes/introduction) is not part of their [free plan](https://developer.twitter.com/en/portal/products). In fact, with the free plan one can only post and delete tweets and retrieve information about oneself.\r\n\r\nSo I'm afraid this is the end of this very nice project. 
It was very useful, thank you!\r\n", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/73/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 1}", "draft": null, "state_reason": null} {"id": 1798901709, "node_id": "PR_kwDOBm6k_c5VM2MK", "number": 2099, "title": "Bump black from 23.3.0 to 23.7.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-11T13:05:53Z", "updated_at": "2023-07-21T21:19:25Z", "closed_at": "2023-07-21T21:19:24Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2099", "body": "Bumps [black](https://github.com/psf/black) from 23.3.0 to 23.7.0.\n
Release notes (sourced from black's releases):

## 23.7.0

### Highlights

- Runtime support for Python 3.7 has been removed. Formatting 3.7 code will still be supported until further notice (#3765)

### Stable style

- Fix a bug where an illegal trailing comma was added to return type annotations using PEP 604 unions (#3735)
- Fix several bugs and crashes where comments in stub files were removed or mishandled under some circumstances (#3745)
- Fix a crash with multi-line magic comments like `type: ignore` within parentheses (#3740)
- Fix error in AST validation when Black removes trailing whitespace in a type comment (#3773)

### Preview style

- Implicitly concatenated strings used as function args are no longer wrapped inside parentheses (#3640)
- Remove blank lines between a class definition and its docstring (#3692)

### Configuration

- The `--workers` argument to Black can now be specified via the `BLACK_NUM_WORKERS` environment variable (#3743)
- `.pytest_cache`, `.ruff_cache` and `.vscode` are now excluded by default (#3691)
- Fix Black not honouring `pyproject.toml` settings when running `--stdin-filename` and the `pyproject.toml` found isn't in the current working directory (#3719)
- Black will now error if `exclude` and `extend-exclude` have invalid data types in `pyproject.toml`, instead of silently doing the wrong thing (#3764)

### Packaging

- Upgrade mypyc from 0.991 to 1.3 (#3697)
- Remove patching of Click that mitigated errors on Python 3.6 with `LANG=C` (#3768)

### Parser

- Add support for the new PEP 695 syntax in Python 3.12 (#3703)

### Performance

- Speed up Black significantly when the cache is full (#3751)
- Avoid importing IPython in a case where we wouldn't need it (#3748)

### Output

... (truncated)
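For context on the #3735 stable-style fix above, here is a minimal illustration of a PEP 604 union used as a return type annotation; the function and data are invented for the example and are not code from Black or its release notes:

```python
# A return annotation using a PEP 604 union (int | None), the construct
# the #3735 fix concerns: earlier Black releases could append an illegal
# trailing comma when wrapping such an annotation across lines.
# Requires Python 3.10+ for the X | Y union syntax.
def lookup_user_id(
    username: str,
) -> int | None:
    # Hypothetical lookup table, purely so the example runs.
    return {"alice": 1, "bob": 2}.get(username)
```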
Changelog (sourced from black's changelog; same entries as the release notes above).

Commits
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=black&package-manager=pip&previous-version=23.3.0&new-version=23.7.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
\nDependabot commands and options\n
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n\n\n
\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2099.org.readthedocs.build/en/2099/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2099/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1808116827, "node_id": "I_kwDOBm6k_c5rxaxb", "number": 2103, "title": "data attribute on Datasette tables exposing the primary key of the row", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-17T16:18:25Z", "updated_at": "2023-07-17T16:18:25Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Maybe put it on the `` but probably better to go on the `td.type-pk`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2103/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1794604602, "node_id": "PR_kwDOBm6k_c5U-akg", "number": 2096, "title": "Clarify docs for descriptions in metadata", "user": {"value": 15906, "label": "garthk"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-08T01:57:58Z", "updated_at": "2023-07-08T01:58:13Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2096", "body": "G'day! I got confused while debugging, earlier today. That's on me, but it does strike me a little repetition in the metadata documentation might help those flicking around it rather than reading it from top to bottom. 
No worries if you think otherwise.\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2096.org.readthedocs.build/en/2096/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2096/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1794097871, "node_id": "I_kwDOBm6k_c5q78LP", "number": 2095, "title": "Introduce \"dark mode\" CSS", "user": {"value": 3315059, "label": "jamietanna"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-07T19:15:58Z", "updated_at": "2023-07-07T19:15:58Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Using [the CSS media query `prefers-color-scheme`](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-color-scheme) we can provide a dark-mode version of Datasette", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2095/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1783304750, "node_id": "I_kwDOBm6k_c5qSxIu", "number": 2094, "title": "JS Plugin Hooks for the Code Editor", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-01T00:51:57Z", "updated_at": "2023-07-01T00:51:57Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When #2052 merges, I'd like to add support to add extensions/functions to the Datasette code editor. \r\n\r\nI'd eventually like to build a JS plugin for [`sqlite-docs`](https://github.com/asg017/sqlite-docs), to add things like:\r\n\r\n- Inline documentation for tables/columns on hover\r\n- Inline docs for custom functions that are loaded in\r\n- More detailed autocomplete for tables/columns/functions\r\n\r\nI did some hacking to see what this would look like, see here:\r\n\r\n\"image\"\r\n\"image\"\r\n\r\nThere can be a new hook that allows JS plugins to add new \"extension\" in the CodeMirror editorview here:\r\n\r\nhttps://github.com/simonw/datasette/blob/8cd60fd1d899952f1153460469b3175465f33f80/datasette/static/cm-editor-6.0.1.js#L25\r\n\r\nWill need some more planning. For example, the Codemirror bundle in Datasette has functions that we could re-export for plugins to use (so we don't load 2 version of `\"@codemirror/autocomplete\"`, for example. 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2094/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1756975532, "node_id": "PR_kwDOBm6k_c5S_5Jl", "number": 2083, "title": "Bump blacken-docs from 1.13.0 to 1.14.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-14T13:57:52Z", "updated_at": "2023-06-29T14:31:55Z", "closed_at": "2023-06-29T14:31:54Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2083", "body": "Bumps [blacken-docs](https://github.com/asottile/blacken-docs) from 1.13.0 to 1.14.0.\n
Changelog (sourced from blacken-docs's changelog):

## 1.14.0 (2023-06-13)

- Support Python 3.12.

Commits
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=blacken-docs&package-manager=pip&previous-version=1.13.0&new-version=1.14.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
\nDependabot commands and options\n
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n\n\n
\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2083.org.readthedocs.build/en/2083/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2083/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1773450152, "node_id": "I_kwDOCGYnMM5ptLOo", "number": 559, "title": "sqlean support", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-25T19:27:26Z", "updated_at": "2023-06-25T23:25:53Z", "closed_at": "2023-06-25T23:25:53Z", "author_association": "OWNER", "pull_request": null, "body": "If sqlean is available, use that.\r\n\r\nRefs:\r\n- https://github.com/nalgeon/sqlean.py/issues/1#issuecomment-1605707788\r\n\r\nThis will provide a good workaround for:\r\n- #235 ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/559/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1764792125, "node_id": "I_kwDOBm6k_c5pMJc9", "number": 2086, "title": "Show information on startup in directory configuration mode", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-20T07:13:33Z", "updated_at": "2023-06-20T07:13:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://discord.com/channels/823971286308356157/823971286941302908/1120516587036889098\r\n\r\n> One thing that would be helpful would be message at launch indicating a metadata.json is getting picked up. 
I'm using directory mode and was editing the wrong file for awhile before I realize nothing I was doing was having any effect.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2086/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1762180409, "node_id": "I_kwDOBm6k_c5pCL05", "number": 2085, "title": "Interactive row selection in Datasette ", "user": {"value": 24938923, "label": "learning4life"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-18T08:29:45Z", "updated_at": "2023-06-18T08:31:23Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Simon did a excellent [prototype](https://til.simonwillison.net/datasette/row-selection-prototype) of an interactive row selection in Datasette.\r\n\r\nI hope this [functionality](https://camo.githubusercontent.com/3d4a0f31fb6a27fd279f809af5b53dc3b76faa63c7721e228951c5252b645a77/68747470733a2f2f7374617469632e73696d6f6e77696c6c69736f6e2e6e65742f7374617469632f323032332f6461746173657474652d7069636b65722e676966) can be turned into a Datasette plugin.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2085/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1761613778, "node_id": "I_kwDOBm6k_c5pABfS", "number": 2084, "title": "Support facets for columns that contain timestamps", "user": {"value": 19492893, "label": "devxpy"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-17T03:33:54Z", "updated_at": "2023-06-17T03:33:54Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "\r\nDjango has this very nice filter for datetime fields -\r\n\r\n\"image\"\r\n\r\nIt would be nice to have something similar to facet by a field that contains a timestamp in datasette too - Which doesn't seem to do anything with timestamps right now...\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2084/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1751214236, "node_id": "I_kwDOC8SPRc5oYWic", "number": 36, "title": "Getting sqlite_master may not be modified when creating dogsheep index", "user": {"value": 8711912, "label": "khushmeeet"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-11T03:21:53Z", "updated_at": "2023-06-11T03:21:53Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When creating a `dogsheep` index from `config.yml` file on pocket.db (created using pocket-to-sqlite), I am getting this error\r\n\r\n```\r\nTraceback (most recent call last):\r\n File 
\"/Users/khushmeeet/.pyenv/versions/3.11.2/bin/dogsheep-beta\", line 8, in \r\n sys.exit(cli())\r\n ^^^^^\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/click/core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/click/core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/click/core.py\", line 1657, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/click/core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/dogsheep_beta/cli.py\", line 36, in index\r\n run_indexer(\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/dogsheep_beta/utils.py\", line 32, in run_indexer\r\n ensure_table_and_indexes(db, tokenize)\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/dogsheep_beta/utils.py\", line 91, in ensure_table_and_indexes\r\n table.add_foreign_key(*fk)\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/sqlite_utils/db.py\", line 2155, in add_foreign_key\r\n self.db.add_foreign_keys([(self.name, column, other_table, other_column)])\r\n File \"/Users/khushmeeet/.pyenv/versions/3.11.2/lib/python3.11/site-packages/sqlite_utils/db.py\", line 1116, in add_foreign_keys\r\n cursor.execute(\r\nsqlite3.OperationalError: table sqlite_master may not be modified\r\n```\r\n\r\nCommand I ran to get this error\r\n```\r\ndogsheep-beta index pocket.db config.yml\r\n```\r\n\r\nDogsheep version\r\n```\r\ndogsheep-beta, version 0.10.2\r\n```\r\n\r\nPython version \r\n```\r\nPython 3.11.2\r\n```", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/36/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1734786661, "node_id": "PR_kwDOBm6k_c5R0fcK", "number": 2082, "title": "Catch query interrupted on facet suggest row count", "user": {"value": 10843208, "label": "redraw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-05-31T18:42:46Z", "updated_at": "2023-05-31T18:45:26Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2082", "body": "Just like facet's `suggest()` is trapping `QueryInterrupted` for facet columns, we also need to trap `get_row_count()`, which can reach timeout if database tables are big enough. 
\r\n\r\nI've included `get_columns()` inside the block as that's just another query, even though it's a really cheap one and might never raise the exception.\r\n\r\n\r\n----\r\n:books: Documentation preview :books:: https://datasette--2082.org.readthedocs.build/en/2082/\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2082/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1727478903, "node_id": "I_kwDOBm6k_c5m9zx3", "number": 2081, "title": "Update Endpoints defined in metadata throws 403 Forbidden after a while", "user": {"value": 15085007, "label": "cutmasta-kun"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-05-26T11:52:30Z", "updated_at": "2023-05-26T11:52:30Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hello. I expose an endpoint to update `tasks`:\r\n```\r\n{\r\n \"title\": \"My Datasette Instance\",\r\n \"databases\": {\r\n \"tasks\": {\r\n \"queries\": {\r\n \"update_task\": {\r\n \"sql\": \"UPDATE tasks SET status = :status, result = :result, systemMessage = :systemMessage WHERE queueID = :queueID\",\r\n \"write\": true,\r\n \"on_success_message\": \"Task updated\",\r\n \"on_success_redirect\": \"/tasks/tasks.json\",\r\n \"on_error_message\": \"Task update failed\",\r\n \"on_error_redirect\": \"/tasks.json\",\r\n \"params\": [\"queueID\", \"taskData\", \"status\", \"result\", \"systemMessage\"]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis works really well! But after a while, the Datasette instance answers with **403 Forbidden**.\r\nI have to delete the database and recreate it in order to make it work again.\r\n\r\nAny help here?
(\u00b4\u3002\uff3f\u3002\uff40)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2081/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1715468032, "node_id": "PR_kwDOBm6k_c5QzEAM", "number": 2076, "title": "Datsette gpt plugin", "user": {"value": 130708713, "label": "StudioCordillera"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-05-18T11:22:30Z", "updated_at": "2023-05-18T11:22:45Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2076", "body": "\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2076.org.readthedocs.build/en/2076/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2076/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1708981860, "node_id": "PR_kwDOBm6k_c5QdMea", "number": 2074, "title": "sort files by mtime", "user": {"value": 3919561, "label": "abbbi"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-05-14T15:25:15Z", "updated_at": "2023-05-14T15:25:29Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2074", "body": "serving multiple database files and getting tired by the default sort, changes so the sort order puts the latest changed databases to be on top of the list so don't have to scroll down, lazy as i am ;)\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2074.org.readthedocs.build/en/2074/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2074/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1701018909, "node_id": "I_kwDOCGYnMM5lY30d", "number": 543, "title": "Tests broken on Windows due to new convert() lambda names", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-05-08T22:11:29Z", "updated_at": "2023-05-08T22:19:04Z", "closed_at": "2023-05-08T22:19:04Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/sqlite-utils/actions/runs/4920084038/jobs/8788501314\r\n```python\r\nsql = 'update [example] set [dt] = lambda_-9223371942137158589([dt]);'\r\n```\r\nFrom:\r\n- #526", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/543/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, 
"state_reason": "completed"} {"id": 1576990618, "node_id": "PR_kwDOCGYnMM5JkkED", "number": 526, "title": "Fix repeated calls to `Table.convert()`", "user": {"value": 167893, "label": "mcarpenter"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-09T00:14:49Z", "updated_at": "2023-05-08T21:56:05Z", "closed_at": "2023-05-08T21:53:58Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/526", "body": "Fixes #525. All tests pass.\r\n\r\nThere's perhaps a better way to name lambdas? There could be a collision if a caller passes a function with name like `lambda_123456`.\r\n\r\nSQLite [documentation](https://www.sqlite.org/appfunc.html) is a little, ah, lite on function name specs. If there is a character that can be used in place of underscore in a SQLite function name that is not permitted in a Python function identifier then that could be a good way to prevent accidental collisions. (I tried dash, colon, dot, no joy).\r\n\r\nOtherwise, there is little chance of this happening and if it should happen the risk is mitigated by now throwing an exception in the case of a (name, arity) collision without `replace=True`.\r\n\r\n\r\n----\r\n:books: Documentation preview :books:: https://sqlite-utils--526.org.readthedocs.build/en/526/\r\n\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/526/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1432377191, "node_id": "I_kwDOCGYnMM5VYFdn", "number": 509, "title": "`sqlite-utils transform` breaks DEFAULT string values and STRFTIME()", "user": {"value": 2199875, "label": "kennysong"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-02T02:32:23Z", "updated_at": "2023-05-08T21:13:38Z", "closed_at": "2023-05-08T21:13:38Z", "author_association": "NONE", "pull_request": null, "body": "Very nice library! Our team found sqlite-utils through @simonw's [comment on the \"Simple declarative schema migration for SQLite\" article](https://news.ycombinator.com/item?id=31249823), and we were excited to use it, but unfortunately `sqlite-utils transform` seems to break our DB. 
\r\n\r\nRunning `sqlite-utils transform` to modify a column mangles their DEFAULT values:\r\n\r\n- Default string values are wrapped in extra single quotes\r\n- Function expressions such as [`STRFTIME()`](https://www.sqlite.org/lang_datefunc.html) are turned into strings!\r\n\r\n------\r\n\r\nHere are steps to reproduce:\r\n\r\n**Original database**\r\n\r\n```\r\n$ sqlite3 test.db << EOF\r\nCREATE TABLE mytable (\r\n col1 TEXT DEFAULT 'foo',\r\n col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))\r\n)\r\nEOF\r\n\r\n$ sqlite3 test.db \"SELECT sql FROM sqlite_master WHERE name = 'mytable';\"\r\nCREATE TABLE mytable (\r\n col1 TEXT DEFAULT 'foo',\r\n col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))\r\n)\r\n```\r\n\r\n**Modified database after sqlite-utils**\r\n\r\n```\r\n$ sqlite3 test.db \"INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;\"\r\nfoo|2022-11-02 02:26:58.038\r\n\r\n$ sqlite-utils transform test.db mytable --rename col1 renamedcol1\r\n\r\n$ sqlite3 test.db \"SELECT sql FROM sqlite_master WHERE name = 'mytable';\"\r\nCREATE TABLE \"mytable\" (\r\n [renamedcol1] TEXT DEFAULT '''foo''',\r\n [col2] TEXT DEFAULT 'STRFTIME(''%Y-%m-%d %H:%M:%f'', ''NOW'')'\r\n)\r\n\r\n$ sqlite3 test.db \"INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;\"\r\nfoo|2022-11-02 02:26:58.038\r\n'foo'|STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')\r\n```\r\n\r\n(Related: #336)", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/509/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1674322631, "node_id": "PR_kwDOBm6k_c5OpEz_", "number": 2061, "title": "Add \"Packaging a plugin using Poetry\" section in docs", "user": {"value": 1238873, "label": "rclement"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-04-19T07:23:28Z", "updated_at": "2023-04-19T07:27:18Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2061", "body": "This PR adds a new section about packaging a plugin using `poetry` within the \"Writing plugins\" page of the documentation.\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2061.org.readthedocs.build/en/2061/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2061/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1665510265, "node_id": "I_kwDOBm6k_c5jRat5", "number": 2060, "title": "Clean up a bunch of warnings from ruff", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-04-13T01:23:02Z", "updated_at": "2023-04-13T01:23:02Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "See:\r\n- #2056\r\n\r\n`ruff` spots a bunch of warnings about things like unused variables - would be good to clean up as many of these as possible.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", 
"active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2060/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1661617056, "node_id": "I_kwDODD6af85jCkOg", "number": 15, "title": "ambiguous column name: createdAt - on checkin_details view", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-04-11T01:07:47Z", "updated_at": "2023-04-11T03:16:37Z", "closed_at": "2023-04-11T03:16:37Z", "author_association": "MEMBER", "pull_request": null, "body": "It looks like Swarm changed their schema and now both `venues` and `checkins` have `createdAt` fields.\r\n\r\nWhich breaks this view: https://github.com/dogsheep/swarm-to-sqlite/blob/719b6e96a016d0ca8b316d3bed9c2a7a0cb499ee/swarm_to_sqlite/utils.py#L171-L188", "repo": {"value": 205429375, "label": "swarm-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/15/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1650984552, "node_id": "PR_kwDOJHON9s5NbyYN", "number": 13, "title": "use universal command", "user": {"value": 14314871, "label": "amlestin"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-04-02T15:10:54Z", "updated_at": "2023-04-02T15:37:34Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/apple-notes-to-sqlite/pulls/13", "body": null, "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/13/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1650981564, "node_id": "I_kwDOJHON9s5iZ_q8", "number": 12, "title": "Error running pytest", "user": {"value": 14314871, "label": "amlestin"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-04-02T15:02:36Z", "updated_at": "2023-04-02T15:07:10Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "`______________________________________________________ ERROR collecting tests/test_apple_notes_to_sqlite.py _______________________________________________________\r\nImportError while importing test module '/Users/lol/development/apple-notes-to-sqlite/tests/test_apple_notes_to_sqlite.py'.\r\nHint: make sure your test modules/packages have valid Python names.\r\nTraceback:\r\n/opt/homebrew/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py:127: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\ntests/test_apple_notes_to_sqlite.py:2: in \r\n from apple_notes_to_sqlite.cli import cli, COUNT_SCRIPT, FOLDERS_SCRIPT\r\nE ModuleNotFoundError: No module named 'apple_notes_to_sqlite'`\r\n\r\nSolution:\r\nThis is likely a PYTHONPATH issue due to having pytest installed 
both globally and in the venv. We can guarantee the tests run by adding the current directory to sys.path automatically using\r\n\r\n`python -m pytest`\r\n\r\nThe alternative is to activate the venv, install pytest, deactivate, then activate the venv again (https://stackoverflow.com/questions/35045038/how-do-i-use-pytest-with-virtualenv)", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1649793525, "node_id": "I_kwDOBm6k_c5iVdn1", "number": 2051, "title": "`?_extra=row_urls` for table pages", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-31T17:58:36Z", "updated_at": "2023-03-31T17:58:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Provides URLs to the JSON version of those rows. Maybe it persists the `?_shape=` option too? Not sure about that.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2051/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1646068413, "node_id": "I_kwDOBm6k_c5iHQK9", "number": 2048, "title": "Test failures encountered while packaging for GNU Guix", "user": {"value": 8332263, "label": "Apteryks"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-29T15:36:54Z", "updated_at": "2023-03-29T15:36:54Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hello,\r\n\r\nWhile reviewing a packaged submitted to Guix to add `datasette`, the test suite produces the following errors:\r\n```\r\n=================================== FAILURES ===================================\r\n_________________________ test_row_strange_table_name __________________________\r\n[gw21] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\n\r\n def test_row_strange_table_name(app_client):\r\n response = app_client.get(\r\n \"/fixtures/table~2Fwith~2Fslashes~2Ecsv/3.json?_shape=objects\"\r\n )\r\n> assert response.status == 200\r\nE assert 400 == 200\r\nE + where 400 = .status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:701: AssertionError\r\n----------------------------- Captured stderr call -----------------------------\r\nERROR: conn=, sql = 'select rowid, * from [table%7E2Fwith%7E2Fslashes%7E2Ecsv] where \"rowid\"=:p0', params = {'p0': '3'}: no such table: table%7E2Fwith%7E2Fslashes%7E2Ecsv\r\n_______________ test_database_page_for_database_with_dot_in_name _______________\r\n[gw15] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client_with_dot = \r\n\r\n def test_database_page_for_database_with_dot_in_name(app_client_with_dot):\r\n response = app_client_with_dot.get(\"/fixtures~2Edot.json\")\r\n> assert response.status == 200\r\nE assert 302 == 200\r\nE 
+ where 302 = .status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:633: AssertionError\r\n___________________ test_tilde_encoded_database_names[fo%o] ____________________\r\n[gw6] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\ndb_name = 'fo%o'\r\n\r\n @pytest.mark.asyncio\r\n @pytest.mark.parametrize(\"db_name\", (\"foo\", r\"fo%o\", \"f~/c.d\"))\r\n async def test_tilde_encoded_database_names(db_name):\r\n ds = Datasette()\r\n ds.add_memory_database(db_name)\r\n response = await ds.client.get(\"/.json\")\r\n assert db_name in response.json().keys()\r\n path = response.json()[db_name][\"path\"]\r\n # And the JSON for that database\r\n response2 = await ds.client.get(path + \".json\")\r\n> assert response2.status_code == 200\r\nE assert 302 == 200\r\nE + where 302 = .status_code\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:983: AssertionError\r\n__________________ test_tilde_encoded_database_names[f~/c.d] ___________________\r\n[gw7] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\ndb_name = 'f~/c.d'\r\n\r\n @pytest.mark.asyncio\r\n @pytest.mark.parametrize(\"db_name\", (\"foo\", r\"fo%o\", \"f~/c.d\"))\r\n async def test_tilde_encoded_database_names(db_name):\r\n ds = Datasette()\r\n ds.add_memory_database(db_name)\r\n response = await ds.client.get(\"/.json\")\r\n assert db_name in response.json().keys()\r\n path = response.json()[db_name][\"path\"]\r\n # And the JSON for that database\r\n response2 = await ds.client.get(path + \".json\")\r\n> assert response2.status_code == 200\r\nE assert 302 == 200\r\nE + where 302 = .status_code\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:983: AssertionError\r\n______________ test_database_with_space_in_name[/searchable.json] ______________\r\n[gw21] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client_two_attached_databases = \r\npath = '/searchable.json'\r\n\r\n @pytest.mark.parametrize(\r\n \"path\",\r\n (\r\n \"/\",\r\n \".json\",\r\n \"/searchable\",\r\n \"/searchable.json\",\r\n \"/searchable_view\",\r\n \"/searchable_view.json\",\r\n ),\r\n )\r\n def test_database_with_space_in_name(app_client_two_attached_databases, path):\r\n> response = app_client_two_attached_databases.get(\r\n \"/extra~20database\" + path, follow_redirects=True\r\n )\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in __call__\r\n return call_result.result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result\r\n return self.__get_result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result\r\n raise self._exception\r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap\r\n result = await self.awaitable(*args, **kwargs)\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get\r\n return await self._request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request\r\n httpx_response = await 
self.ds.client.request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request\r\n return await client.request(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request\r\n return await self.send(request, auth=auth, follow_redirects=follow_redirects)\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send\r\n response = await self._send_handling_auth(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth\r\n response = await self._send_handling_redirects(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = \r\nrequest = \r\nfollow_redirects = True\r\nhistory = [, , , , , , ...]\r\n\r\n async def _send_handling_redirects(\r\n self,\r\n request: Request,\r\n follow_redirects: bool,\r\n history: typing.List[Response],\r\n ) -> Response:\r\n while True:\r\n if len(history) > self.max_redirects:\r\n> raise TooManyRedirects(\r\n \"Exceeded maximum allowed redirects.\", request=request\r\n )\r\nE httpx.TooManyRedirects: Exceeded maximum allowed redirects.\r\n\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects\r\n___________________ test_database_with_space_in_name[.json] ____________________\r\n[gw19] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client_two_attached_databases = \r\npath = '.json'\r\n\r\n @pytest.mark.parametrize(\r\n \"path\",\r\n (\r\n \"/\",\r\n \".json\",\r\n \"/searchable\",\r\n \"/searchable.json\",\r\n \"/searchable_view\",\r\n \"/searchable_view.json\",\r\n ),\r\n )\r\n def test_database_with_space_in_name(app_client_two_attached_databases, path):\r\n> response = app_client_two_attached_databases.get(\r\n \"/extra~20database\" + path, follow_redirects=True\r\n )\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in __call__\r\n return call_result.result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result\r\n return self.__get_result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result\r\n raise self._exception\r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap\r\n result = await self.awaitable(*args, **kwargs)\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get\r\n return await self._request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request\r\n httpx_response = await self.ds.client.request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request\r\n return await client.request(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request\r\n return await self.send(request, auth=auth, 
follow_redirects=follow_redirects)\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send\r\n response = await self._send_handling_auth(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth\r\n response = await self._send_handling_redirects(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = \r\nrequest = \r\nfollow_redirects = True\r\nhistory = [, , , , , , ...]\r\n\r\n async def _send_handling_redirects(\r\n self,\r\n request: Request,\r\n follow_redirects: bool,\r\n history: typing.List[Response],\r\n ) -> Response:\r\n while True:\r\n if len(history) > self.max_redirects:\r\n> raise TooManyRedirects(\r\n \"Exceeded maximum allowed redirects.\", request=request\r\n )\r\nE httpx.TooManyRedirects: Exceeded maximum allowed redirects.\r\n\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects\r\n______________ test_database_with_space_in_name[/searchable_view] ______________\r\n[gw22] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client_two_attached_databases = \r\npath = '/searchable_view'\r\n\r\n @pytest.mark.parametrize(\r\n \"path\",\r\n (\r\n \"/\",\r\n \".json\",\r\n \"/searchable\",\r\n \"/searchable.json\",\r\n \"/searchable_view\",\r\n \"/searchable_view.json\",\r\n ),\r\n )\r\n def test_database_with_space_in_name(app_client_two_attached_databases, path):\r\n> response = app_client_two_attached_databases.get(\r\n \"/extra~20database\" + path, follow_redirects=True\r\n )\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in __call__\r\n return call_result.result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result\r\n return self.__get_result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result\r\n raise self._exception\r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap\r\n result = await self.awaitable(*args, **kwargs)\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get\r\n return await self._request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request\r\n httpx_response = await self.ds.client.request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request\r\n return await client.request(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request\r\n return await self.send(request, auth=auth, follow_redirects=follow_redirects)\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send\r\n response = await self._send_handling_auth(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth\r\n response = await self._send_handling_redirects(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = \r\nrequest = \r\nfollow_redirects = True\r\nhistory = [, , , , , , ...]\r\n\r\n async def _send_handling_redirects(\r\n self,\r\n request: Request,\r\n follow_redirects: bool,\r\n history: typing.List[Response],\r\n ) -> Response:\r\n while True:\r\n if len(history) > self.max_redirects:\r\n> raise TooManyRedirects(\r\n \"Exceeded maximum allowed redirects.\", request=request\r\n )\r\nE httpx.TooManyRedirects: Exceeded maximum allowed redirects.\r\n\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects\r\n_____________________ test_database_with_space_in_name[/] ______________________\r\n[gw18] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client_two_attached_databases = \r\npath = '/'\r\n\r\n @pytest.mark.parametrize(\r\n \"path\",\r\n (\r\n \"/\",\r\n \".json\",\r\n \"/searchable\",\r\n \"/searchable.json\",\r\n \"/searchable_view\",\r\n \"/searchable_view.json\",\r\n ),\r\n )\r\n def test_database_with_space_in_name(app_client_two_attached_databases, path):\r\n> response = app_client_two_attached_databases.get(\r\n \"/extra~20database\" + path, follow_redirects=True\r\n )\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in __call__\r\n return call_result.result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result\r\n return self.__get_result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result\r\n raise self._exception\r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap\r\n result = await self.awaitable(*args, **kwargs)\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get\r\n return await self._request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request\r\n httpx_response = await self.ds.client.request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request\r\n return await client.request(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request\r\n return await self.send(request, auth=auth, follow_redirects=follow_redirects)\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send\r\n response = await self._send_handling_auth(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth\r\n response = await self._send_handling_redirects(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = \r\nrequest = \r\nfollow_redirects = True\r\nhistory = [, , , , , , ...]\r\n\r\n async def _send_handling_redirects(\r\n self,\r\n request: Request,\r\n follow_redirects: bool,\r\n history: typing.List[Response],\r\n ) -> Response:\r\n while True:\r\n if len(history) > self.max_redirects:\r\n> raise TooManyRedirects(\r\n \"Exceeded maximum allowed redirects.\", request=request\r\n )\r\nE 
httpx.TooManyRedirects: Exceeded maximum allowed redirects.\r\n\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects\r\n________________ test_database_with_space_in_name[/searchable] _________________\r\n[gw20] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client_two_attached_databases = \r\npath = '/searchable'\r\n\r\n @pytest.mark.parametrize(\r\n \"path\",\r\n (\r\n \"/\",\r\n \".json\",\r\n \"/searchable\",\r\n \"/searchable.json\",\r\n \"/searchable_view\",\r\n \"/searchable_view.json\",\r\n ),\r\n )\r\n def test_database_with_space_in_name(app_client_two_attached_databases, path):\r\n> response = app_client_two_attached_databases.get(\r\n \"/extra~20database\" + path, follow_redirects=True\r\n )\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in __call__\r\n return call_result.result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result\r\n return self.__get_result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result\r\n raise self._exception\r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap\r\n result = await self.awaitable(*args, **kwargs)\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get\r\n return await self._request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request\r\n httpx_response = await self.ds.client.request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request\r\n return await client.request(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request\r\n return await self.send(request, auth=auth, follow_redirects=follow_redirects)\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send\r\n response = await self._send_handling_auth(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth\r\n response = await self._send_handling_redirects(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = \r\nrequest = \r\nfollow_redirects = True\r\nhistory = [, , , , , , ...]\r\n\r\n async def _send_handling_redirects(\r\n self,\r\n request: Request,\r\n follow_redirects: bool,\r\n history: typing.List[Response],\r\n ) -> Response:\r\n while True:\r\n if len(history) > self.max_redirects:\r\n> raise TooManyRedirects(\r\n \"Exceeded maximum allowed redirects.\", request=request\r\n )\r\nE httpx.TooManyRedirects: Exceeded maximum allowed redirects.\r\n\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects\r\n___________ test_database_with_space_in_name[/searchable_view.json] ____________\r\n[gw23] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client_two_attached_databases = \r\npath = 
'/searchable_view.json'\r\n\r\n @pytest.mark.parametrize(\r\n \"path\",\r\n (\r\n \"/\",\r\n \".json\",\r\n \"/searchable\",\r\n \"/searchable.json\",\r\n \"/searchable_view\",\r\n \"/searchable_view.json\",\r\n ),\r\n )\r\n def test_database_with_space_in_name(app_client_two_attached_databases, path):\r\n> response = app_client_two_attached_databases.get(\r\n \"/extra~20database\" + path, follow_redirects=True\r\n )\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in __call__\r\n return call_result.result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result\r\n return self.__get_result()\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result\r\n raise self._exception\r\n/gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap\r\n result = await self.awaitable(*args, **kwargs)\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get\r\n return await self._request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request\r\n httpx_response = await self.ds.client.request(\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request\r\n return await client.request(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request\r\n return await self.send(request, auth=auth, follow_redirects=follow_redirects)\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send\r\n response = await self._send_handling_auth(\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth\r\n response = await self._send_handling_redirects(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = \r\nrequest = \r\nfollow_redirects = True\r\nhistory = [, , , , , , ...]\r\n\r\n async def _send_handling_redirects(\r\n self,\r\n request: Request,\r\n follow_redirects: bool,\r\n history: typing.List[Response],\r\n ) -> Response:\r\n while True:\r\n if len(history) > self.max_redirects:\r\n> raise TooManyRedirects(\r\n \"Exceeded maximum allowed redirects.\", request=request\r\n )\r\nE httpx.TooManyRedirects: Exceeded maximum allowed redirects.\r\n\r\n/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects\r\n________________ test_weird_database_names[database (1).sqlite] ________________\r\n[gw7] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\ntmpdir = local('/tmp/guix-build-datasette-0.64.2.drv-0/pytest-of-nixbld/pytest-0/popen-gw7/test_weird_database_names_data0')\r\nfilename = 'database (1).sqlite'\r\n\r\n @pytest.mark.parametrize(\r\n \"filename\", [\"test-database (1).sqlite\", \"database (1).sqlite\"]\r\n )\r\n def test_weird_database_names(tmpdir, filename):\r\n # https://github.com/simonw/datasette/issues/1181\r\n runner = CliRunner()\r\n db_path = str(tmpdir / filename)\r\n 
sqlite3.connect(db_path).execute(\"vacuum\")\r\n result1 = runner.invoke(cli, [db_path, \"--get\", \"/\"])\r\n assert result1.exit_code == 0, result1.output\r\n filename_no_stem = filename.rsplit(\".\", 1)[0]\r\n expected_link = '{}'.format(\r\n tilde_encode(filename_no_stem), filename_no_stem\r\n )\r\n assert expected_link in result1.output\r\n # Now try hitting that database page\r\n result2 = runner.invoke(\r\n cli, [db_path, \"--get\", \"/{}\".format(tilde_encode(filename_no_stem))]\r\n )\r\n> assert result2.exit_code == 0, result2.output\r\nE AssertionError: \r\nE \r\nE assert 1 == 0\r\nE + where 1 = .exit_code\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_cli.py:321: AssertionError\r\n_____________ test_weird_database_names[test-database (1).sqlite] ______________\r\n[gw6] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\ntmpdir = local('/tmp/guix-build-datasette-0.64.2.drv-0/pytest-of-nixbld/pytest-0/popen-gw6/test_weird_database_names_test0')\r\nfilename = 'test-database (1).sqlite'\r\n\r\n @pytest.mark.parametrize(\r\n \"filename\", [\"test-database (1).sqlite\", \"database (1).sqlite\"]\r\n )\r\n def test_weird_database_names(tmpdir, filename):\r\n # https://github.com/simonw/datasette/issues/1181\r\n runner = CliRunner()\r\n db_path = str(tmpdir / filename)\r\n sqlite3.connect(db_path).execute(\"vacuum\")\r\n result1 = runner.invoke(cli, [db_path, \"--get\", \"/\"])\r\n assert result1.exit_code == 0, result1.output\r\n filename_no_stem = filename.rsplit(\".\", 1)[0]\r\n expected_link = '{}'.format(\r\n tilde_encode(filename_no_stem), filename_no_stem\r\n )\r\n assert expected_link in result1.output\r\n # Now try hitting that database page\r\n result2 = runner.invoke(\r\n cli, [db_path, \"--get\", \"/{}\".format(tilde_encode(filename_no_stem))]\r\n )\r\n> assert result2.exit_code == 0, result2.output\r\nE AssertionError: \r\nE \r\nE assert 1 == 0\r\nE + where 1 = .exit_code\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_cli.py:321: AssertionError\r\n_ test_row_html_compound_primary_key[/fixtures/compound_primary_key/a~2Fb,~2Ec~2Dd-expected1] _\r\n[gw11] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\npath = '/fixtures/compound_primary_key/a~2Fb,~2Ec~2Dd'\r\nexpected = [['', '', '']]\r\n\r\n @pytest.mark.parametrize(\r\n \"path,expected\",\r\n (\r\n (\r\n \"/fixtures/compound_primary_key/a,b\",\r\n [\r\n [\r\n '',\r\n '',\r\n '',\r\n ]\r\n ],\r\n ),\r\n (\r\n \"/fixtures/compound_primary_key/a~2Fb,~2Ec~2Dd\",\r\n [\r\n [\r\n '',\r\n '',\r\n '',\r\n ]\r\n ],\r\n ),\r\n ),\r\n )\r\n def test_row_html_compound_primary_key(app_client, path, expected):\r\n response = app_client.get(path)\r\n> assert response.status == 200\r\nE assert 302 == 200\r\nE + where 302 = .status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:370: AssertionError\r\n_ test_css_classes_on_body[/fixtures/table~2Fwith~2Fslashes~2Ecsv-expected_classes5] _\r\n[gw3] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\npath = '/fixtures/table~2Fwith~2Fslashes~2Ecsv'\r\nexpected_classes = ['table', 'db-fixtures', 'table-tablewithslashescsv-fa7563']\r\n\r\n @pytest.mark.parametrize(\r\n \"path,expected_classes\",\r\n [\r\n (\"/\", [\"index\"]),\r\n (\"/fixtures\", [\"db\", \"db-fixtures\"]),\r\n (\"/fixtures?sql=select+1\", [\"query\", \"db-fixtures\"]),\r\n (\r\n 
\"/fixtures/simple_primary_key\",\r\n [\"table\", \"db-fixtures\", \"table-simple_primary_key\"],\r\n ),\r\n (\r\n \"/fixtures/neighborhood_search\",\r\n [\"query\", \"db-fixtures\", \"query-neighborhood_search\"],\r\n ),\r\n (\r\n \"/fixtures/table~2Fwith~2Fslashes~2Ecsv\",\r\n [\"table\", \"db-fixtures\", \"table-tablewithslashescsv-fa7563\"],\r\n ),\r\n (\r\n \"/fixtures/simple_primary_key/1\",\r\n [\"row\", \"db-fixtures\", \"table-simple_primary_key\"],\r\n ),\r\n ],\r\n )\r\n def test_css_classes_on_body(app_client, path, expected_classes):\r\n response = app_client.get(path)\r\n> assert response.status == 200\r\nE assert 302 == 200\r\nE + where 302 = .status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:238: AssertionError\r\n_ test_templates_considered[/fixtures/table~2Fwith~2Fslashes~2Ecsv-table-fixtures-tablewithslashescsv-fa7563.html, *table.html] _\r\n[gw3] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\npath = '/fixtures/table~2Fwith~2Fslashes~2Ecsv'\r\nexpected_considered = 'table-fixtures-tablewithslashescsv-fa7563.html, *table.html'\r\n\r\n @pytest.mark.parametrize(\r\n \"path,expected_considered\",\r\n [\r\n (\"/\", \"*index.html\"),\r\n (\"/fixtures\", \"database-fixtures.html, *database.html\"),\r\n (\r\n \"/fixtures/simple_primary_key\",\r\n \"table-fixtures-simple_primary_key.html, *table.html\",\r\n ),\r\n (\r\n \"/fixtures/table~2Fwith~2Fslashes~2Ecsv\",\r\n \"table-fixtures-tablewithslashescsv-fa7563.html, *table.html\",\r\n ),\r\n (\r\n \"/fixtures/simple_primary_key/1\",\r\n \"row-fixtures-simple_primary_key.html, *row.html\",\r\n ),\r\n ],\r\n )\r\n def test_templates_considered(app_client, path, expected_considered):\r\n response = app_client.get(path)\r\n> assert response.status == 200\r\nE assert 302 == 200\r\nE + where 302 = .status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:264: AssertionError\r\n_ test_alternate_url_json[/fixtures/table~2Fwith~2Fslashes~2Ecsv-http://localhost/fixtures/table~2Fwith~2Fslashes~2Ecsv.json] _\r\n[gw21] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\npath = '/fixtures/table~2Fwith~2Fslashes~2Ecsv'\r\nexpected = 'http://localhost/fixtures/table~2Fwith~2Fslashes~2Ecsv.json'\r\n\r\n @pytest.mark.parametrize(\r\n \"path,expected\",\r\n (\r\n # Instance index page\r\n (\"/\", \"http://localhost/.json\"),\r\n # Table page\r\n (\"/fixtures/facetable\", \"http://localhost/fixtures/facetable.json\"),\r\n (\r\n \"/fixtures/table~2Fwith~2Fslashes~2Ecsv\",\r\n \"http://localhost/fixtures/table~2Fwith~2Fslashes~2Ecsv.json\",\r\n ),\r\n # Row page\r\n (\r\n \"/fixtures/no_primary_key/1\",\r\n \"http://localhost/fixtures/no_primary_key/1.json\",\r\n ),\r\n # Database index page\r\n (\r\n \"/fixtures\",\r\n \"http://localhost/fixtures.json\",\r\n ),\r\n # Custom query page\r\n (\r\n \"/fixtures?sql=select+*+from+facetable\",\r\n \"http://localhost/fixtures.json?sql=select+*+from+facetable\",\r\n ),\r\n # Canned query page\r\n (\r\n \"/fixtures/neighborhood_search?text=town\",\r\n \"http://localhost/fixtures/neighborhood_search.json?text=town\",\r\n ),\r\n # /-/ pages\r\n (\r\n \"/-/plugins\",\r\n \"http://localhost/-/plugins.json\",\r\n ),\r\n ),\r\n )\r\n def test_alternate_url_json(app_client, path, expected):\r\n response = app_client.get(path)\r\n> assert response.status == 200\r\nE assert 302 == 200\r\nE + where 302 = 
.status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:948: AssertionError\r\n_ test_edit_sql_link_on_canned_queries[/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC-/fixtures?sql=select+id%2C+name+from+facet_cities+order+by+id+limit+1%3B] _\r\n[gw18] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\npath = '/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC'\r\nexpected = '/fixtures?sql=select+id%2C+name+from+facet_cities+order+by+id+limit+1%3B'\r\n\r\n @pytest.mark.parametrize(\r\n \"path,expected\",\r\n [\r\n (\r\n \"/fixtures/neighborhood_search\",\r\n \"/fixtures?sql=%0Aselect+_neighborhood%2C+facet_cities.name%2C+state%0Afrom+facetable%0A++++join+facet_cities%0A++++++++on+facetable._city_id+%3D+facet_cities.id%0Awhere+_neighborhood+like+%27%25%27+%7C%7C+%3Atext+%7C%7C+%27%25%27%0Aorder+by+_neighborhood%3B%0A&text=\",\r\n ),\r\n (\r\n \"/fixtures/neighborhood_search?text=ber\",\r\n \"/fixtures?sql=%0Aselect+_neighborhood%2C+facet_cities.name%2C+state%0Afrom+facetable%0A++++join+facet_cities%0A++++++++on+facetable._city_id+%3D+facet_cities.id%0Awhere+_neighborhood+like+%27%25%27+%7C%7C+%3Atext+%7C%7C+%27%25%27%0Aorder+by+_neighborhood%3B%0A&text=ber\",\r\n ),\r\n (\"/fixtures/pragma_cache_size\", None),\r\n (\r\n # /fixtures/\ud835\udc1c\ud835\udc22\ud835\udc2d\ud835\udc22\ud835\udc1e\ud835\udc2c\r\n \"/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC\",\r\n \"/fixtures?sql=select+id%2C+name+from+facet_cities+order+by+id+limit+1%3B\",\r\n ),\r\n (\"/fixtures/magic_parameters\", None),\r\n ],\r\n )\r\n def test_edit_sql_link_on_canned_queries(app_client, path, expected):\r\n response = app_client.get(path)\r\n> assert response.status == 200\r\nE assert 302 == 200\r\nE + where 302 = .status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:841: AssertionError\r\n_______________________ test_table_with_slashes_in_name ________________________\r\n[gw9] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\n\r\n def test_table_with_slashes_in_name(app_client):\r\n response = app_client.get(\r\n \"/fixtures/table~2Fwith~2Fslashes~2Ecsv.json?_shape=objects\"\r\n )\r\n> assert response.status == 200\r\nE assert 302 == 200\r\nE + where 302 = .status\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:141: AssertionError\r\n__________________ test_custom_query_with_unicode_characters ___________________\r\n[gw8] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\n\r\n def test_custom_query_with_unicode_characters(app_client):\r\n # /fixtures/\ud835\udc1c\ud835\udc22\ud835\udc2d\ud835\udc22\ud835\udc1e\ud835\udc2c.json\r\n response = app_client.get(\r\n \"/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC.json?_shape=array\"\r\n )\r\n> assert [{\"id\": 1, \"name\": \"San Francisco\"}] == response.json\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:1042: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:40: in json\r\n return json.loads(self.text)\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/json/__init__.py:346: in loads\r\n 
return _default_decoder.decode(s)\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/json/decoder.py:337: in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = , s = '', idx = 0\r\n\r\n def raw_decode(self, s, idx=0):\r\n \"\"\"Decode a JSON document from ``s`` (a ``str`` beginning with\r\n a JSON document) and return a 2-tuple of the Python\r\n representation and the index in ``s`` where the document ended.\r\n \r\n This can be used to decode a JSON document from a string that may\r\n have extraneous data at the end.\r\n \r\n \"\"\"\r\n try:\r\n obj, end = self.scan_once(s, idx)\r\n except StopIteration as err:\r\n> raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\nE json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\n/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/json/decoder.py:355: JSONDecodeError\r\n_ test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3] _\r\n[gw13] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\napp_client = \r\npath = '/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw'\r\nexpected_rows = [[1, 'barry cat', 'terry dog', 'panther'], [2, 'terry dog', 'sara weasel', 'puma']]\r\n\r\n @pytest.mark.parametrize(\r\n \"path,expected_rows\",\r\n [\r\n (\r\n \"/fixtures/searchable.json?_search=dog\",\r\n [\r\n [1, \"barry cat\", \"terry dog\", \"panther\"],\r\n [2, \"terry dog\", \"sara weasel\", \"puma\"],\r\n ],\r\n ),\r\n (\r\n # Special keyword shouldn't break FTS query\r\n \"/fixtures/searchable.json?_search=AND\",\r\n [],\r\n ),\r\n (\r\n # Without _searchmode=raw this should return no results\r\n \"/fixtures/searchable.json?_search=te*+AND+do*\",\r\n [],\r\n ),\r\n (\r\n # _searchmode=raw\r\n \"/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw\",\r\n [\r\n [1, \"barry cat\", \"terry dog\", \"panther\"],\r\n [2, \"terry dog\", \"sara weasel\", \"puma\"],\r\n ],\r\n ),\r\n (\r\n # _searchmode=raw combined with _search_COLUMN\r\n \"/fixtures/searchable.json?_search_text2=te*&_searchmode=raw\",\r\n [\r\n [1, \"barry cat\", \"terry dog\", \"panther\"],\r\n ],\r\n ),\r\n (\r\n \"/fixtures/searchable.json?_search=weasel\",\r\n [[2, \"terry dog\", \"sara weasel\", \"puma\"]],\r\n ),\r\n (\r\n \"/fixtures/searchable.json?_search_text2=dog\",\r\n [[1, \"barry cat\", \"terry dog\", \"panther\"]],\r\n ),\r\n (\r\n \"/fixtures/searchable.json?_search_name%20with%20.%20and%20spaces=panther\",\r\n [[1, \"barry cat\", \"terry dog\", \"panther\"]],\r\n ),\r\n ],\r\n )\r\n def test_searchable(app_client, path, expected_rows):\r\n response = app_client.get(path)\r\n> assert expected_rows == response.json[\"rows\"]\r\nE AssertionError: assert [[1, 'barry cat', 'terry dog', 'panther'],\\n [2, 'terry dog', 'sara weasel', 'puma']] == []\r\nE Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']\r\nE Full diff:\r\nE [\r\nE - ,\r\nE + [1,\r\nE + 'barry cat',\r\nE + 'terry dog',\r\nE + 'panther'],\r\nE + [2,\r\nE + 'terry dog',\r\nE + 'sara weasel',\r\nE + 'puma'],\r\nE ]\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:402: AssertionError\r\n_____ test_searchmode[table_metadata1-_search=te*+AND+do*-expected_rows1] ______\r\n[gw20] linux -- Python 3.9.9 
/gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\ntable_metadata = {'searchmode': 'raw'}, querystring = '_search=te*+AND+do*'\r\nexpected_rows = [[1, 'barry cat', 'terry dog', 'panther'], [2, 'terry dog', 'sara weasel', 'puma']]\r\n\r\n @pytest.mark.parametrize(\r\n \"table_metadata,querystring,expected_rows\",\r\n [\r\n (\r\n {},\r\n \"_search=te*+AND+do*\",\r\n [],\r\n ),\r\n (\r\n {\"searchmode\": \"raw\"},\r\n \"_search=te*+AND+do*\",\r\n _SEARCHMODE_RAW_RESULTS,\r\n ),\r\n (\r\n {},\r\n \"_search=te*+AND+do*&_searchmode=raw\",\r\n _SEARCHMODE_RAW_RESULTS,\r\n ),\r\n # Can be over-ridden with _searchmode=escaped\r\n (\r\n {\"searchmode\": \"raw\"},\r\n \"_search=te*+AND+do*&_searchmode=escaped\",\r\n [],\r\n ),\r\n ],\r\n )\r\n def test_searchmode(table_metadata, querystring, expected_rows):\r\n with make_app_client(\r\n metadata={\"databases\": {\"fixtures\": {\"tables\": {\"searchable\": table_metadata}}}}\r\n ) as client:\r\n response = client.get(\"/fixtures/searchable.json?\" + querystring)\r\n> assert expected_rows == response.json[\"rows\"]\r\nE AssertionError: assert [[1, 'barry cat', 'terry dog', 'panther'],\\n [2, 'terry dog', 'sara weasel', 'puma']] == []\r\nE Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']\r\nE Full diff:\r\nE [\r\nE - ,\r\nE + [1,\r\nE + 'barry cat',\r\nE + 'terry dog',\r\nE + 'panther'],\r\nE + [2,\r\nE + 'terry dog',\r\nE + 'sara weasel',\r\nE + 'puma'],\r\nE ]\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:442: AssertionError\r\n_ test_searchmode[table_metadata2-_search=te*+AND+do*&_searchmode=raw-expected_rows2] _\r\n[gw20] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python\r\n\r\ntable_metadata = {}, querystring = '_search=te*+AND+do*&_searchmode=raw'\r\nexpected_rows = [[1, 'barry cat', 'terry dog', 'panther'], [2, 'terry dog', 'sara weasel', 'puma']]\r\n\r\n @pytest.mark.parametrize(\r\n \"table_metadata,querystring,expected_rows\",\r\n [\r\n (\r\n {},\r\n \"_search=te*+AND+do*\",\r\n [],\r\n ),\r\n (\r\n {\"searchmode\": \"raw\"},\r\n \"_search=te*+AND+do*\",\r\n _SEARCHMODE_RAW_RESULTS,\r\n ),\r\n (\r\n {},\r\n \"_search=te*+AND+do*&_searchmode=raw\",\r\n _SEARCHMODE_RAW_RESULTS,\r\n ),\r\n # Can be over-ridden with _searchmode=escaped\r\n (\r\n {\"searchmode\": \"raw\"},\r\n \"_search=te*+AND+do*&_searchmode=escaped\",\r\n [],\r\n ),\r\n ],\r\n )\r\n def test_searchmode(table_metadata, querystring, expected_rows):\r\n with make_app_client(\r\n metadata={\"databases\": {\"fixtures\": {\"tables\": {\"searchable\": table_metadata}}}}\r\n ) as client:\r\n response = client.get(\"/fixtures/searchable.json?\" + querystring)\r\n> assert expected_rows == response.json[\"rows\"]\r\nE AssertionError: assert [[1, 'barry cat', 'terry dog', 'panther'],\\n [2, 'terry dog', 'sara weasel', 'puma']] == []\r\nE Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']\r\nE Full diff:\r\nE [\r\nE - ,\r\nE + [1,\r\nE + 'barry cat',\r\nE + 'terry dog',\r\nE + 'panther'],\r\nE + [2,\r\nE + 'terry dog',\r\nE + 'sara weasel',\r\nE + 'puma'],\r\nE ]\r\n\r\n/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:442: AssertionError\r\n=========================== short test summary info ============================\r\nFAILED tests/test_api.py::test_row_strange_table_name - assert 400 == 200\r\nFAILED tests/test_api.py::test_database_page_for_database_with_dot_in_name - ...\r\nFAILED 
tests/test_api.py::test_tilde_encoded_database_names[fo%o] - assert 30...\r\nFAILED tests/test_api.py::test_tilde_encoded_database_names[f~/c.d] - assert ...\r\nFAILED tests/test_api.py::test_database_with_space_in_name[/searchable.json]\r\nFAILED tests/test_api.py::test_database_with_space_in_name[.json] - httpx.Too...\r\nFAILED tests/test_api.py::test_database_with_space_in_name[/searchable_view]\r\nFAILED tests/test_api.py::test_database_with_space_in_name[/] - httpx.TooMany...\r\nFAILED tests/test_api.py::test_database_with_space_in_name[/searchable] - htt...\r\nFAILED tests/test_api.py::test_database_with_space_in_name[/searchable_view.json]\r\nFAILED tests/test_cli.py::test_weird_database_names[database (1).sqlite] - As...\r\nFAILED tests/test_cli.py::test_weird_database_names[test-database (1).sqlite]\r\nFAILED tests/test_html.py::test_row_html_compound_primary_key[/fixtures/compound_primary_key/a~2Fb,~2Ec~2Dd-expected1]\r\nFAILED tests/test_html.py::test_css_classes_on_body[/fixtures/table~2Fwith~2Fslashes~2Ecsv-expected_classes5]\r\nFAILED tests/test_html.py::test_templates_considered[/fixtures/table~2Fwith~2Fslashes~2Ecsv-table-fixtures-tablewithslashescsv-fa7563.html, *table.html]\r\nFAILED tests/test_html.py::test_alternate_url_json[/fixtures/table~2Fwith~2Fslashes~2Ecsv-http://localhost/fixtures/table~2Fwith~2Fslashes~2Ecsv.json]\r\nFAILED tests/test_html.py::test_edit_sql_link_on_canned_queries[/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC-/fixtures?sql=select+id%2C+name+from+facet_cities+order+by+id+limit+1%3B]\r\nFAILED tests/test_table_api.py::test_table_with_slashes_in_name - assert 302 ...\r\nFAILED tests/test_table_api.py::test_custom_query_with_unicode_characters - j...\r\nFAILED tests/test_table_api.py::test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3]\r\nFAILED tests/test_table_api.py::test_searchmode[table_metadata1-_search=te*+AND+do*-expected_rows1]\r\nFAILED tests/test_table_api.py::test_searchmode[table_metadata2-_search=te*+AND+do*&_searchmode=raw-expected_rows2]\r\n=========== 22 failed, 1049 passed, 3 skipped in 1522.28s (0:25:22) ============\r\nerror: in phase 'check': uncaught exception:\r\n%exception #<&invoke-error program: \"/gnu/store/ziqwkzz6znb5d3c245xn0cq5ra2ly0w3-python-pytest-7.1.3/bin/pytest\" arguments: (\"-vv\" \"-n\" \"24\" \"-m\" \"not serial\") exit-status: 1 term-signal: #f stop-signal: #f> \r\nphase `check' failed after 1523.3 seconds\r\n```\r\nThe tests run in a private namespace without internet connectivity, and the Python dependencies are at:\r\n```\r\npython-aiofiles@0.6.0 python-asgi-csrf@0.9 python-asgiref@3.4.1\r\n+ python-beautifulsoup4@4.11.1 python-black@22.3.0 python-click-default-group@1.2.2 python-click@8.1.3\r\n+ python-cogapp@3.3.0 python-httpx@0.23.0 python-hupper@1.10.3 python-itsdangerous@2.0.1\r\n+ python-janus@1.0.0 python-jinja2@3.1.1 python-mergedeep@1.3.4 python-pint@0.20.1 python-pluggy@1.0.0\r\n+ python-pytest-asyncio@0.17.2 python-pytest-runner@5.2 python-pytest-timeout@2.0.2\r\n+ python-pytest-xdist@2.5.0 python-pytest@7.1.3 python-pyyaml@6.0 python-setuptools@64.0.3\r\n+ python-trustme@0.9.0 python-uvicorn@0.17.6\r\n```\r\nWith Python 3.9.9.\r\n\r\nThank you!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2048/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, 
\"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1645098678, "node_id": "PR_kwDOBm6k_c5NIQri", "number": 2047, "title": "Bump black from 22.12.0 to 23.3.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-29T06:09:06Z", "updated_at": "2023-03-29T06:12:21Z", "closed_at": "2023-03-29T06:12:05Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2047", "body": "Bumps [black](https://github.com/psf/black) from 22.12.0 to 23.3.0.\n
Release notes

Sourced from black's releases.

### 23.3.0

#### Highlights

This release fixes a longstanding confusing behavior in Black's GitHub action, where the version of the action did not determine the version of Black being run (issue #3382). In addition, there is a small bug fix around imports and a number of improvements to the preview style.

Please try out the preview style with `black --preview` and tell us your feedback. All changes in the preview style are expected to become part of Black's stable style in January 2024.

#### Stable style

- Import lines with `# fmt: skip` and `# fmt: off` no longer have an extra blank line added when they are right after another import line (#3610)

#### Preview style

- Add trailing commas to collection literals even if there's a comment after the last entry (#3393)
- `async def`, `async for`, and `async with` statements are now formatted consistently compared to their non-async version. (#3609)
- `with` statements that contain two context managers will be consistently wrapped in parentheses (#3589)
- Let string splitters respect East Asian Width (#3445)
- Now long string literals can be split after East Asian commas and periods (\u3001 U+3001 IDEOGRAPHIC COMMA, \u3002 U+3002 IDEOGRAPHIC FULL STOP, & \uff0c U+FF0C FULLWIDTH COMMA) besides before spaces (#3445)
- For stubs, enforce one blank line after a nested class with a body other than just `...` (#3564)
- Improve handling of multiline strings by changing line split behavior (#1879)

#### Parser

- Added support for formatting files with invalid type comments (#3594)

#### Integrations

- Update GitHub Action to use the version of Black equivalent to action's version if `version` input is not specified (#3543)
- Fix missing Python binary path in autoload script for vim (#3508)

#### Documentation

- Document that only the most recent release is supported for security issues; vulnerabilities should be reported through Tidelift (#3612)

... (truncated)
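As a quick illustration of the `with`-statement change listed under the preview style above (a sketch, not part of the release notes; assumes black 23.3.0 is installed):

```python
import black

src = "with open('a.txt') as a, open('b.txt') as b:\n    pass\n"

mode = black.Mode(
    target_versions={black.TargetVersion.PY310},  # parenthesized context managers need 3.9+
    line_length=40,  # short limit to force the wrap
    preview=True,  # opt in to the preview style
)

# With preview=True the two context managers are wrapped in
# parentheses instead of being left on one over-long line (#3589)
print(black.format_str(src, mode=mode))
```

The exact output depends on the installed black version, since preview-style behavior changes between releases.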
Changelog

Sourced from black's changelog (same content as the release notes above).

... (truncated)
Commits
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=black&package-manager=pip&previous-version=22.12.0&new-version=23.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
Dependabot commands and options
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n\n\n
\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2047.org.readthedocs.build/en/2047/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2047/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1528995601, "node_id": "PR_kwDOBm6k_c5HJ55o", "number": 1986, "title": "Bump sphinx from 6.1.2 to 6.1.3", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-11T13:02:36Z", "updated_at": "2023-03-29T06:09:50Z", "closed_at": "2023-03-29T06:09:49Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1986", "body": "Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 6.1.2 to 6.1.3.\n
Release notes

Sourced from sphinx's releases.

### v6.1.3

Changelog: https://www.sphinx-doc.org/en/master/changes.html
Changelog

Sourced from sphinx's changelog.

### Release 6.1.3 (released Jan 10, 2023)

#### Bugs fixed

- #11116: Reverted to previous Sphinx 5 node copying method
- #11117: Reverted changes to parallel image processing from Sphinx 6.1.0
- #11119: Suppress `ValueError` in the `linkcheck` builder
Commits

- 776d01e Bump to 6.1.3 final
- a2e922a CHANGES for Sphinx 6.1.3
- 31162a9 Handle exceptions for get_node_source and get_node_line
- dcb4429 Restore Sphinx 5 nodes.Element copying behaviour
- 2a7c40d Undo parallel image changes
- 7841d3d Ignore more checks in Ruff 0.0.214
- ddbc5b5 Bump version
- See full diff in compare view
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=sphinx&package-manager=pip&previous-version=6.1.2&new-version=6.1.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
Dependabot commands and options
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n\n\n
\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--1986.org.readthedocs.build/en/1986/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1986/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1644018605, "node_id": "PR_kwDOBm6k_c5NEqBO", "number": 2046, "title": "Bump furo from 2022.12.7 to 2023.3.27", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-28T13:58:14Z", "updated_at": "2023-03-29T06:08:02Z", "closed_at": "2023-03-29T06:08:01Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2046", "body": "Bumps [furo](https://github.com/pradyunsg/furo) from 2022.12.7 to 2023.3.27.\n
Changelog

Sourced from furo's changelog.

#### 2023.03.27 -- Tasty Tangerine

- Regenerate with newer version of sphinx-theme-builder, to fix RECORD hashes.
- Add missing class to Font Awesome examples

#### 2023.03.23 -- Sassy Saffron

- Update Python version classifiers.
- Increase the icon size in mobile header.
- Increase admonition title bg opacity.
- Change the default API background to transparent.
- Transition the API background change.
- Remove the "indent" of API entries which have a background.
- Break long inline code literals.

#### 2022.12.07 -- Reverent Raspberry

- \u2728 Add support for Sphinx 6.
- \u2728 Improve footnote presentation with docutils 0.18+.
- Drop support for Sphinx 4.
- Improve documentation about what the edit button does.
- Improve handling of empty-flexboxes for better print experience on Chrome.
- Improve styling for inline signatures.
- Replace the meta generator tag with a comment.
- Tweak labels with icons to prevent users selecting icons as text on touch.

#### 2022.09.29 -- Quaint Quartz

- Add ability to set arbitrary URLs for edit button.
- Add support for aligning text in MyST-parser generated tables.

#### 2022.09.15 -- Pragmatic Pistachio

- Add a minimum version constraint on pygments.
- Add an explicit dependency on sass.
- Change right sidebar title from "Contents" to "On this page".
- Correctly position sidebars on small screens.

... (truncated)
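For anyone trying out the new version, a minimal Sphinx `conf.py` sketch (illustrative; the project name and repository URLs are placeholders, and the options shown are furo's documented edit-button settings):

```python
# conf.py -- minimal Sphinx configuration using furo
# (illustrative; requires `pip install furo==2023.3.27`)
project = "My Project"

html_theme = "furo"

# Placeholder repository settings for the theme's edit button
html_theme_options = {
    "source_repository": "https://github.com/example/my-project/",
    "source_branch": "main",
    "source_directory": "docs/",
}
```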
Commits
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=furo&package-manager=pip&previous-version=2022.12.7&new-version=2023.3.27)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
Dependabot commands and options
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n\n\n
\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2046.org.readthedocs.build/en/2046/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2046/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1641013220, "node_id": "I_kwDOBm6k_c5hz9_k", "number": 2045, "title": "First column on a view page has no facet option in cog menu", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 0, "created_at": "2023-03-26T18:02:47Z", "updated_at": "2023-03-26T18:02:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "e.g. first column on this page - cog menu has no option to facet.\r\n\r\nhttps://datasette.io/content/tools\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2045/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1639873822, "node_id": "PR_kwDOBm6k_c5M29tt", "number": 2044, "title": "Expand labels in row view as well (patch for 0.64.x branch)", "user": {"value": 82332573, "label": "tmcl-it"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-24T18:44:44Z", "updated_at": "2023-03-24T18:44:57Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2044", "body": "This is a version of #2031 for the 0.64.x branch.\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2044.org.readthedocs.build/en/2044/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2044/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1636616315, "node_id": "I_kwDOBm6k_c5hjMh7", "number": 2042, "title": "Gather feedback on new ?_extra= design", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-22T23:07:43Z", "updated_at": "2023-03-22T23:08:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Now that I've landed:\r\n- #1999\r\n\r\nSee also:\r\n- #262\r\n\r\nI want to get some feedback from people on the design of the new `?_extra=` feature, before freezing it into Datasette 1.0.\r\n\r\nThe big change is that the default JSON representation is now MUCH slimmer - it only gives you keys for `\"next\"` and `\"rows\"`, where rows is a list of JSON objects (not a list of arrays as was previously the default) - for example https://latest.datasette.io/fixtures/sortable.json\r\n\r\nIf you want extra stuff you can ask for it with the new `?_extra=` parameter - e.g. 
https://latest.datasette.io/fixtures/sortable.json?_extra=columns&_extra=suggested_facets\r\n\r\nYou can use `?_extra=extras` to see a list of available extras: https://latest.datasette.io/fixtures/sortable.json?_extra=extras\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2042/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1620515757, "node_id": "I_kwDOBm6k_c5glxut", "number": 2039, "title": "Subtle bug with `--load-extension` and `--static` flags with absolute Windows paths with `C:\\`", "user": {"value": 15178711, "label": "asg017"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-12T21:18:52Z", "updated_at": "2023-03-12T21:18:52Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "From the Datasette discord: A user tried running the following command on Windows:\r\n\r\n```\r\ndatasette --load-extension=\"C:\\spatialite\\mod_spatialite-5.0.1-win-x86\\mod_spatialite.dll\"\r\n```\r\nThis failed with `\"The specified module could not be found\"`, because the entrypoint option introduced in #1789 splits the input differently. Instead of loading the extension found at `\"C:\\spatialite\\mod_spatialite-5.0.1-win-x86\\mod_spatialite.dll\"`, it instead tried to load the extension at `\"C\"` with entrypoint `\"\\spatialite\\mod_spatialite-5.0.1-win-x86\\mod_spatialite.dll\"`. \r\n\r\nThis is hard because most absolute Windows paths have a colon in them, like `C:\\foo.txt` or `D:\\bar.txt`. I'd imagine the `--static` flag is also vulnerable to this type of bug.\r\n\r\n\r\nThe \"solution\" is to use a relative path instead, but that doesn't feel that great. 
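A quick sketch of the failure mode, plus one possible drive-letter-aware fix (illustrative only, not Datasette's actual parsing code):

```python
# Illustrative only -- not Datasette's actual option parsing.
def naive_split(value):
    # Splits on the first colon, so a path like C:\spatialite\mod_spatialite.dll
    # is misread as path "C" with entrypoint "\spatialite\mod_spatialite.dll"
    if ":" in value:
        path, entrypoint = value.split(":", 1)
        return path, entrypoint
    return value, None

def drive_aware_split(value):
    # Hypothetical fix: split on the *last* colon, and treat a one-character
    # prefix (a Windows drive letter) as part of the path, not a separator
    path, sep, entrypoint = value.rpartition(":")
    if sep and len(path) > 1:
        return path, entrypoint
    return value, None

print(naive_split(r"C:\spatialite\mod_spatialite.dll"))
# -> ('C', '\\spatialite\\mod_spatialite.dll')
print(drive_aware_split(r"C:\spatialite\mod_spatialite.dll"))
# -> ('C:\\spatialite\\mod_spatialite.dll', None)
print(drive_aware_split("mod_spatialite.dll:sqlite3_modspatialite_init"))
# -> ('mod_spatialite.dll', 'sqlite3_modspatialite_init')
```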
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2039/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1617962395, "node_id": "I_kwDOJHON9s5gcCWb", "number": 10, "title": "Include schema in README", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-09T20:38:59Z", "updated_at": "2023-03-09T20:48:18Z", "closed_at": "2023-03-09T20:48:18Z", "author_association": "MEMBER", "pull_request": null, "body": "As seen in other tools like https://github.com/simonw/git-history", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/10/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1617938730, "node_id": "I_kwDOJHON9s5gb8kq", "number": 9, "title": "Default to just storing plaintext, store HTML if `--html` is passed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-09T20:19:06Z", "updated_at": "2023-03-09T20:19:06Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The full `body` version of the notes can get HUGE, due to embedded images. It turns out for my own purposes I'm usually happy with just the `plaintext` version.\r\n\r\nI'm tempted to say you don't get HTML unless you pass a `--html` option.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/9/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1616440856, "node_id": "I_kwDOJHON9s5gWO4Y", "number": 5, "title": "Configure full text search", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-09T05:20:46Z", "updated_at": "2023-03-09T05:20:46Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "FTS would be useful.\r\n\r\nMaybe even extract the plain text from the notes to make that index easier to create, rather than creating it against the HTML. 
Can use the `plaintext` property for that.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1616422013, "node_id": "I_kwDOJHON9s5gWKR9", "number": 3, "title": "`apple-notes-to-sqlite --dump` option", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-09T05:05:49Z", "updated_at": "2023-03-09T05:06:14Z", "closed_at": "2023-03-09T05:06:14Z", "author_association": "MEMBER", "pull_request": null, "body": "Option that doesn't write to the database at all, it just outputs all the notes to stdout as newline-delimited JSON.", "repo": {"value": 611552758, "label": "apple-notes-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/3/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1605959201, "node_id": "I_kwDOBm6k_c5fuP4h", "number": 2032, "title": "datasette errors when foreign key integrity is enabled", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-02T01:27:51Z", "updated_at": "2023-03-02T01:31:58Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "By default, [SQLite does not enforce foreign key constraints](https://www.sqlite.org/foreignkeys.html#fk_enable). I typically enable these checks by running:\r\n\r\n```sql\r\nPRAGMA foreign_keys = ON;\r\n```\r\n\r\ninside of a `prepare_connection` hook.\r\n\r\nIf a plugin causes the schema to change (eg datasette-scraper creating a new table, or datasette-edit-schema changing a column), then https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/utils/internal_db.py#L71-L77 will fail with:\r\n\r\n```\r\nFOREIGN KEY constraint failed\r\n```\r\n\r\nThis could be resolved by either:\r\n- deleting from the `tables` column last\r\n- changing the schema so that the foreign keys have [ON DELETE CASCADE](https://www.sqlite.org/foreignkeys.html#fk_actions)\r\n\r\nLet me know if you'd be open to a PR that addresses this -- since foreign key constraints aren't enabled by default, I guess it's questionable whether this is a bug. 
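For reference, a minimal sketch of the `prepare_connection` hook I use to turn the checks on (assuming the standard `hookimpl` plugin API):

```python
from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    # SQLite leaves foreign key enforcement off by default;
    # turn it on for every connection Datasette opens
    conn.execute("PRAGMA foreign_keys = ON;")
```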
I think I can workaround this by inspecting the database parameter in `prepare_connection` and trying not to enable fkey checks on the `_internal` database.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2032/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1592327343, "node_id": "I_kwDOBm6k_c5e6Pyv", "number": 2029, "title": "Sorry Simon, didn't know how else to contact you", "user": {"value": 5804626, "label": "llchristopherson"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-20T19:02:53Z", "updated_at": "2023-02-20T19:02:53Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi Simon,\r\n\r\nWould you be willing to chat with me about Datasette? I have some questions. I am working on a project to evaluate data ingestion tools for a research organization and I ran across Datasette. I have looked through a lot of your documentation, but still have some questions, which are very specific. If you would be willing to write me back about this, my email is laura@renci.org.\r\n\r\nThanks,\r\nLaura", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2029/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1586980089, "node_id": "PR_kwDOBm6k_c5KF-by", "number": 2026, "title": "Avoid repeating primary key columns if included in _col args", "user": {"value": 8513, "label": "runderwood"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-16T04:16:25Z", "updated_at": "2023-02-16T04:16:41Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2026", "body": "...while maintaining given order.\r\n\r\nFixes #1975 (if I'm understanding correctly).\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2026/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1581218043, "node_id": "PR_kwDOBm6k_c5JyqPy", "number": 2025, "title": "Add database metadata to index.html template context", "user": {"value": 9993, "label": "palewire"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-12T11:16:58Z", "updated_at": "2023-02-12T11:17:14Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2025", "body": "Fixes #2016 \r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2025.org.readthedocs.build/en/2025/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": 
null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2025/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1577548579, "node_id": "I_kwDOBm6k_c5eB3sj", "number": 2021, "title": "Docker images for 1.0 alphas?", "user": {"value": 1563881, "label": "meowcat"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-09T09:35:52Z", "updated_at": "2023-02-09T09:35:52Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi,\r\nwould you consider putting 1.0alpha images on Dockerhub? \r\n\r\n(Also, how usable are the alphas?)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2021/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1575880841, "node_id": "I_kwDOBm6k_c5d7giJ", "number": 2020, "title": "Documentation refers to \"off\" setting; doesn't seem to work, \"false\" does", "user": {"value": 1350673, "label": "dmick"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-08T10:38:10Z", "updated_at": "2023-02-08T10:38:10Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "https://docs.datasette.io/en/stable/settings.html#suggest-facets, among others, suggests using \"off\" to disable the setting; however, this doesn't appear to work in the JSON config files, where it apparently needs to be a \"JSON boolean\" and have the values \"true\" or \"false\". Perhaps the Python code is more flexible?...but either way, the documentation probably should mention it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2020/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1571711808, "node_id": "I_kwDOBm6k_c5drmtA", "number": 2018, "title": "`check_visibility` gives confusing (wrong?) results if permission is `None`", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-06T01:03:08Z", "updated_at": "2023-02-06T01:03:46Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I'm trying to gate access to an edit UI on the user having `update-row` on the underlying view or table.\r\n\r\nI expected [datasette.check_visibility](https://docs.datasette.io/en/latest/internals.html#await-check-visibility-actor-action-none-resource-none-permissions-none) to be a good way to do this:\r\n\r\n```python\r\n visible, private = await datasette.check_visibility(\r\n request.actor,\r\n permissions=[\r\n (\"update-row\", (database, table)),\r\n ],\r\n )\r\n\r\n if not visible:\r\n return None\r\n```\r\n\r\nBut `visible` is returning true, even when there is no explicit `update-row` permission. 
(In this case, `request.actor` is `None`.)\r\n\r\nBased on [the update-row permissions docs](https://docs.datasette.io/en/latest/authentication.html#update-row), I expected this to be default deny, and so no explicit permission would result in false.\r\n\r\nI think the root cause is that `check_visibility` calls `ensure_permissions` and expects it to throw if the permission is not available.\r\n\r\nBut `ensure_permissions` does not throw when `permission_allowed` returns None: https://github.com/simonw/datasette/blob/1.0a2/datasette/app.py#L825-L829", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2018/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1565179870, "node_id": "I_kwDOBm6k_c5dSr_e", "number": 2013, "title": "Datasette uses non-standard quoting for identifiers", "user": {"value": 193185, "label": "cldellow"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-01T00:05:39Z", "updated_at": "2023-02-01T00:06:30Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Related to #2001, but where #2001 was about literals, this is about identifiers\r\n\r\nFrom https://www.sqlite.org/lang_keywords.html:\r\n\r\n> \"keyword\" A keyword in double-quotes is an identifier.\r\n> [keyword] A keyword enclosed in square brackets is an identifier. This is not standard SQL. This quoting mechanism is used by MS Access and SQL Server and is included in SQLite for compatibility.\r\n\r\nDatasette uses this quoting here -- https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/utils/__init__.py#L345-L349, in some of the other DB access code, and in some of the test fixtures.\r\n\r\nMigrating to standard double quote identifiers would make it easier to get Datasette working with alternative backends", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2013/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1564774831, "node_id": "I_kwDOBm6k_c5dRJGv", "number": 2012, "title": "Missing space in database summary", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-31T18:01:13Z", "updated_at": "2023-01-31T18:01:13Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spotted this on an instance index page:\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2012/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1557599877, "node_id": "I_kwDODFE5qs5c1xaF", "number": 12, "title": "location history changes", "user": {"value": 14809320, "label": 
"gerardrbentley"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-26T03:57:25Z", "updated_at": "2023-01-26T03:57:25Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "not sure if each download is unique, but I had to change some things to work with the takeout zip I made 2023-01-25\r\n\r\nfilename changed from \"Location History.json\" to \"Records.json\"\r\n\r\n`\"timestampMs\"` is not present, `\"timestamp\"` is roughly iso timestamp\r\n\r\n```py\r\ndef get_timestamp_ms(raw_timestamp):\r\n try:\r\n return datetime.datetime.strptime(raw_timestamp, \"%Y-%m-%dT%H:%M:%SZ\").timestamp()\r\n except ValueError:\r\n return datetime.datetime.strptime(raw_timestamp, \"%Y-%m-%dT%H:%M:%S.%fZ\").timestamp()\r\n\r\ndef save_location_history(db, zf):\r\n location_history = json.load(\r\n zf.open(\"Takeout/Location History/Records.json\")\r\n )\r\n db[\"location_history\"].upsert_all(\r\n (\r\n {\r\n \"id\": id_for_location_history(row),\r\n \"latitude\": row[\"latitudeE7\"] / 1e7,\r\n \"longitude\": row[\"longitudeE7\"] / 1e7,\r\n \"accuracy\": row[\"accuracy\"],\r\n \"timestampMs\": get_timestamp_ms(row[\"timestamp\"]),\r\n \"when\": row[\"timestamp\"],\r\n }\r\n for row in location_history[\"locations\"]\r\n ),\r\n pk=\"id\",\r\n )\r\n\r\n\r\ndef id_for_location_history(row):\r\n # We want an ID that is unique but can be sorted by in\r\n # date order - so we use the isoformat date + the first\r\n # 6 characters of a hash of the JSON\r\n first_six = hashlib.sha1(\r\n json.dumps(row, separators=(\",\", \":\"), sort_keys=True).encode(\"utf8\")\r\n ).hexdigest()[:6]\r\n return \"{}-{}\".format(\r\n row['timestamp'],\r\n first_six,\r\n )\r\n```\r\n\r\nexample locations from mine\r\n\r\n```json\r\n{\r\n \"latitudeE7\": 427220206,\r\n \"longitudeE7\": -923423972,\r\n \"accuracy\": 10,\r\n \"deviceTag\": -1312429967,\r\n \"deviceDesignation\": \"PRIMARY\",\r\n \"timestamp\": \"2019-01-08T23:31:50.867Z\"\r\n }\r\n```\r\n\r\n```json\r\n{\r\n \"latitudeE7\": 427011317,\r\n \"longitudeE7\": -923448300,\r\n \"accuracy\": 5,\r\n \"deviceTag\": -1312429967,\r\n \"deviceDesignation\": \"PRIMARY\",\r\n \"timestamp\": \"2019-01-08T23:33:53Z\"\r\n }, \r\n```", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/12/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 2}", "draft": null, "state_reason": null} {"id": 1554032168, "node_id": "I_kwDOBm6k_c5coKYo", "number": 2002, "title": "Document how actors are displayed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-24T00:08:49Z", "updated_at": "2023-01-24T00:08:49Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/e4ebef082de90db4e1b8527abc0d582b7ae0bc9d/datasette/utils/__init__.py#L1052-L1056\r\n\r\nThis logic should be reflected in the documentation on https://docs.datasette.io/en/stable/authentication.html#actors", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/simonw/datasette/issues/2002/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1550536442, "node_id": "I_kwDOCGYnMM5ca076", "number": 521, "title": "Custom JSON encoder", "user": {"value": 31504, "label": "janrito"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-20T09:19:40Z", "updated_at": "2023-01-20T09:19:40Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "It would be nice if we could specify a custom encoder (and decoder) for types that will need extra deserialisation \u2013 e.g., sets, enums or sparse matrices \u2013 or even project-specific types", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/521/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1533673397, "node_id": "I_kwDOBm6k_c5baf-1", "number": 1991, "title": "fts5 tables are not auto-detected and hidden", "user": {"value": 83819, "label": "keturn"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-15T06:00:42Z", "updated_at": "2023-01-20T04:54:24Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I set up a [Datasette instance](https://huggingface.co/spaces/Sygil/INE-dataset-explorer/tree/main) and was following the docs on full-text search.\r\n\r\nWhen I used fts4, datasette automatically hid the FTS tables and added the FTS search box where appropriate, but when I changed to fts5 it no longer does either.\r\n\r\nIf I [manually set](https://huggingface.co/spaces/keturn/INED-datasette/blob/main/metadata.json#L9) `fts_table` for a view, then search does work as expected.\r\n\r\nMy table and view creation code looks like this:\r\n```py\r\nconnection.execute(\"\"\"CREATE TABLE IF NOT EXISTS\r\n captions(image_key text PRIMARY KEY, caption text NOT NULL)\r\n\"\"\")\r\n\u00a0\r\nconnection.execute(\"\"\"CREATE VIRTUAL TABLE\r\n captions_fts USING\r\n fts5(caption, image_key UNINDEXED, content=captions)\r\n\"\"\")\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1991/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1538197093, "node_id": "I_kwDOBm6k_c5brwZl", "number": 1995, "title": "foreign_keys error 500", "user": {"value": 137183, "label": "jonschoning"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-18T15:27:36Z", "updated_at": "2023-01-18T16:44:01Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "**Error 500 expected string or bytes-like object**\r\n\r\n[espial-new.sqlite3.zip](https://github.com/simonw/datasette/files/10447965/espial-new.sqlite3.zip)\r\n\r\nrun `datasette espial-new.sqlite3` & click on any table other than 
`User`\r\n\r\n```\r\n/home/jon/.local/lib/python3.10/site-packages/datasette/app.py:814 in \u2502\r\n\u2502 expand_foreign_keys \u2502\r\n\u2502 \u2502\r\n\u2502 811 \u2502 \u2502 \u2502 from {other_table} \u2502\r\n\u2502 812 \u2502 \u2502 \u2502 where {other_column} in ({placeholders}) \u2502\r\n\u2502 813 \u2502 \u2502 \"\"\".format( \u2502\r\n\u2502 \u2771 814 \u2502 \u2502 \u2502 other_column=escape_sqlite(fk[\"other_column\"]), \u2502\r\n\u2502 815 \u2502 \u2502 \u2502 label_column=escape_sqlite(label_column), \u2502\r\n\u2502 816 \u2502 \u2502 \u2502 other_table=escape_sqlite(fk[\"other_table\"]), \u2502\r\n\u2502 817 \u2502 \u2502 \u2502 placeholders=\", \".join([\"?\"] * len(set(values))), \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 locals \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 column = 'user_id' \u2502 \u2502\r\n\u2502 \u2502 database = 'espial-new' \u2502 \u2502\r\n\u2502 \u2502 db = \u2502 \u2502\r\n\u2502 \u2502 fk = { \u2502 \u2502\r\n\u2502 \u2502 \u2502 'column': 'user_id', \u2502 \u2502\r\n\u2502 \u2502 \u2502 'other_table': 'user', \u2502 \u2502\r\n\u2502 \u2502 \u2502 'other_column': None \u2502 \u2502\r\n\u2502 \u2502 } \u2502 \u2502\r\n\u2502 \u2502 foreign_keys = [ \u2502 \u2502\r\n\u2502 \u2502 \u2502 { \u2502 \u2502\r\n\u2502 \u2502 \u2502 \u2502 'column': 'user_id', \u2502 \u2502\r\n\u2502 \u2502 \u2502 \u2502 'other_table': 'user', \u2502 \u2502\r\n\u2502 \u2502 \u2502 \u2502 'other_column': None \u2502 \u2502\r\n\u2502 \u2502 \u2502 } \u2502 \u2502\r\n\u2502 \u2502 ] \u2502 \u2502\r\n\u2502 \u2502 label_column = 'name' \u2502 \u2502\r\n\u2502 \u2502 labeled_fks = {} \u2502 \u2502\r\n\u2502 \u2502 self = \u2502 \u2502\r\n\u2502 \u2502 table = 'bookmark' \u2502 \u2502\r\n\u2502 \u2502 values = [] \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 /home/jon/.local/lib/python3.10/site-packages/datasette/utils/__init__.py:346 \u2502\r\n\u2502 in escape_sqlite \u2502\r\n\u2502 \u2502\r\n\u2502 343 \u2502\r\n\u2502 344 \u2502\r\n\u2502 345 def escape_sqlite(s): \u2502\r\n\u2502 \u2771 346 \u2502 if _boring_keyword_re.match(s) and (s.lower() not in reserved_words) \u2502\r\n\u2502 347 \u2502 \u2502 return s \u2502\r\n\u2502 348 \u2502 else: \u2502\r\n\u2502 349 \u2502 \u2502 return f\"[{s}]\" \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500 locals \u2500\u256e \u2502\r\n\u2502 \u2502 s = None \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f 
\u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nTypeError: expected string or bytes-like object\r\nTraceback (most recent call last):\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/app.py\", line 1354, in route_path\r\n response = await view(request, send)\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/views/base.py\", line 134, in view\r\n return await self.dispatch_request(request)\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/views/base.py\", line 91, in dispatch_request\r\n return await handler(request)\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/views/base.py\", line 361, in get\r\n response_or_template_contexts = await self.data(request, **data_kwargs)\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/views/table.py\", line 158, in data\r\n return await self._data_traced(request, default_labels, _next, _size)\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/views/table.py\", line 603, in _data_traced\r\n await self.ds.expand_foreign_keys(\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/app.py\", line 814, in expand_foreign_keys\r\n other_column=escape_sqlite(fk[\"other_column\"]),\r\n File \"/home/jon/.local/lib/python3.10/site-packages/datasette/utils/__init__.py\", line 346, in escape_sqlite\r\n if _boring_keyword_re.match(s) and (s.lower() not in reserved_words):\r\nTypeError: expected string or bytes-like object\r\nINFO: 127.0.0.1:38574 - \"GET /espial-new/bookmark HTTP/1.1\" 500 Internal Server Error\r\nINFO: 127.0.0.1:38574 - \"GET /-/static/app.css?d59929 HTTP/1.1\" 200 OK\r\n```\r\n\r\nSchema:\r\n```\r\nCREATE TABLE IF NOT EXISTS \"user\"\r\n (\r\n \"id\" INTEGER PRIMARY KEY,\r\n \"name\" VARCHAR NOT NULL,\r\n \"password_hash\" VARCHAR NOT NULL,\r\n \"api_token\" VARCHAR NULL,\r\n \"private_default\" BOOLEAN NOT NULL,\r\n \"archive_default\" BOOLEAN NOT NULL,\r\n \"privacy_lock\" BOOLEAN NOT NULL,\r\n CONSTRAINT \"unique_user_name\" UNIQUE (\"name\")\r\n );\r\n\r\nCREATE TABLE IF NOT EXISTS \"bookmark\"\r\n (\r\n \"id\" INTEGER PRIMARY KEY,\r\n \"user_id\" INTEGER NOT NULL REFERENCES \"user\" ON DELETE RESTRICT ON UPDATE RESTRICT,\r\n \"slug\" VARCHAR NOT NULL DEFAULT (Lower(Hex(Randomblob(6)))),\r\n \"href\" VARCHAR NOT NULL,\r\n \"description\" VARCHAR NOT NULL,\r\n \"extended\" VARCHAR NOT NULL,\r\n \"time\" TIMESTAMP NOT NULL,\r\n \"shared\" BOOLEAN NOT NULL,\r\n \"to_read\" BOOLEAN NOT NULL,\r\n \"selected\" BOOLEAN NOT NULL,\r\n \"archive_href\" VARCHAR NULL,\r\n CONSTRAINT \"unique_user_href\" UNIQUE (\"user_id\", \"href\"),\r\n CONSTRAINT \"unique_user_slug\" UNIQUE (\"user_id\", \"slug\")\r\n );\r\n\r\nCREATE TABLE IF NOT EXISTS \"bookmark_tag\"\r\n (\r\n \"id\" INTEGER PRIMARY KEY,\r\n \"user_id\" INTEGER NOT NULL REFERENCES \"user\" ON DELETE RESTRICT ON UPDATE RESTRICT,\r\n \"tag\" VARCHAR NOT NULL,\r\n \"bookmark_id\" INTEGER NOT NULL REFERENCES \"bookmark\" ON DELETE RESTRICT ON UPDATE RESTRICT,\r\n \"seq\" INTEGER NOT NULL,\r\n CONSTRAINT \"unique_user_tag_bookmark_id\" 
UNIQUE (\"user_id\", \"tag\", \"bookmark_id\"),\r\n CONSTRAINT \"unique_user_bookmark_id_tag_seq\" UNIQUE (\"user_id\", \"bookmark_id\", \"tag\", \"seq\")\r\n );\r\n\r\nCREATE TABLE IF NOT EXISTS \"note\"\r\n (\r\n \"id\" INTEGER PRIMARY KEY,\r\n \"user_id\" INTEGER NOT NULL REFERENCES \"user\" ON DELETE RESTRICT ON UPDATE RESTRICT,\r\n \"slug\" VARCHAR NOT NULL DEFAULT (Lower(Hex(Randomblob(10)))),\r\n \"length\" INTEGER NOT NULL,\r\n \"title\" VARCHAR NOT NULL,\r\n \"text\" VARCHAR NOT NULL,\r\n \"is_markdown\" BOOLEAN NOT NULL,\r\n \"shared\" BOOLEAN NOT NULL DEFAULT false,\r\n \"created\" TIMESTAMP NOT NULL,\r\n \"updated\" TIMESTAMP NOT NULL\r\n );\r\nCREATE INDEX idx_bookmark_time ON bookmark (user_id, time DESC);\r\nCREATE INDEX idx_bookmark_tag_bookmark_id ON bookmark_tag (bookmark_id, id, tag, seq);\r\nCREATE INDEX idx_note_user_created ON note (user_id, created DESC); \r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1995/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1532000914, "node_id": "I_kwDOBm6k_c5bUHqS", "number": 1990, "title": "Suggestion: Highlight error messages ('These facets timed out')", "user": {"value": 116795, "label": "pax"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-13T09:40:58Z", "updated_at": "2023-01-13T09:40:58Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I had trouble figuring out why faceting didn't work in some instances, it took a while before I noticed the _These facets timed out_ notice. \r\n\r\nIt might help if that would be highlighted, or fading out highlight - if one might think it would be too visually disturbing.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1990/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1509783085, "node_id": "I_kwDOBm6k_c5Z_XYt", "number": 1969, "title": "sql-formatter javascript is not now working with CloudFlare rocketloader", "user": {"value": 536941, "label": "fgregg"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-23T21:14:06Z", "updated_at": "2023-01-10T01:56:33Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "This is probably not a bug with datasette, but I thought you might want to know, @simonw.\r\n\r\nI noticed today that my CloudFlare proxied datasette instance lost the \"Format SQL\" option. 
I'm pretty sure it was there last week.\r\n\r\nIn the CloudFlare settings, if I turn off [Rocket Loader](https://developers.cloudflare.com/fundamentals/speed/rocket-loader/), I get the \"Format SQL\" option back.\r\n\r\nRocket Loader works by asynchronously loading the javascript, so maybe there was a recent change that doesn't play well with the async loading?\r\n\r\nI'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1969/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1524431805, "node_id": "I_kwDODEm0Qs5a3Pu9", "number": 72, "title": "Import thread, including self- and others' replies", "user": {"value": 601708, "label": "mcint"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-08T09:51:06Z", "updated_at": "2023-01-08T09:51:06Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "statuses-lookup, home-timeline, mentions (only for auth'ed user) don't cover this.\r\n\r\n`twitter-to-sqlite fetch-thread tw-group1.db 1234123412341234`\r\n\r\ntwitter-to-sqlite focuses on archiving users, but does not easily support archiving conversations or community activity.\r\n\r\nFor reference, this is [implemented in twarc](https://sourcegraph.com/github.com/DocNow/twarc/-/blob/twarc/client.py?L708-766&subtree=true), using a search, optionally recursively.\r\n\r\nOther research suggests that this formerly, or currently, requires a [search query](https://stackoverflow.com/a/30480103/1020467), use of [undocumented `related_results` api](https://stackoverflow.com/a/9419346/1020467), or with requested inclusion of [newer conversation_id](https://stackoverflow.com/a/68115718/1020467) with subsequent query.\r\n\r\n", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/72/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1516815571, "node_id": "I_kwDOBm6k_c5aaMTT", "number": 1975, "title": "_col=id can cause id column to export twice in CSV export", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-03T00:25:15Z", "updated_at": "2023-01-03T00:25:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://datasette.simonwillison.net/simonwillisonblog/blog_entry.csv?_col=id&_col=title&_col=body&_labels=on&_size=1\r\n\r\n```csv\r\nid,id,title,body\r\n1,1,WaSP Phase II,\"

The Web Standards project has launched Phase II.

\"\r\n```\r\nThat should not have two `id` columns.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1975/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1515717718, "node_id": "PR_kwDOC8tyDs5Gc-VH", "number": 23, "title": "Include workout statistics", "user": {"value": 2129, "label": "badboy"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-01T17:29:57Z", "updated_at": "2023-01-01T17:29:57Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/healthkit-to-sqlite/pulls/23", "body": "Not sure when this changed (iOS 16 maybe?), but the `WorkoutStatistics` now has a whole bunch of information about workouts, e.g. for runs it contains the distance (as a `` element).\r\n\r\nAdding it as another column at leat allows me to pull these out (using SQLite's JSON support).\r\nI'm running with this patch on my own data now.", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/23/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1515186569, "node_id": "I_kwDOBm6k_c5aT-mJ", "number": 1972, "title": "Fix Sphinx warning about extlink extension", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-31T19:12:04Z", "updated_at": "2022-12-31T19:13:26Z", "closed_at": "2022-12-31T19:13:26Z", "author_association": "OWNER", "pull_request": null, "body": "```\r\n[sphinx-autobuild] > sphinx-build -b html /Users/simon/Dropbox/Development/datasette/docs /Users/simon/Dropbox/Development/datasette/docs/_build\r\nRunning Sphinx v5.3.0\r\nloading pickled environment... done\r\nWARNING: extlinks: Sphinx-6.0 will require a caption string to contain exactly one '%s' and all other '%' need to be escaped as '%%'.\r\n```\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1971#issuecomment-1368266904_\r\n ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1972/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1000275035, "node_id": "PR_kwDOCGYnMM4r7n-9", "number": 327, "title": "Extract expand: Support JSON Arrays", "user": {"value": 101753, "label": "phaer"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-09-19T10:34:30Z", "updated_at": "2022-12-29T09:05:36Z", "closed_at": "2022-12-29T09:05:36Z", "author_association": "NONE", "pull_request": "simonw/sqlite-utils/pulls/327", "body": "Hi,\r\n\r\nI needed to extract data in JSON Arrays to normalize data imports. 
I've quickly hacked the following together based on #241 which refers to #239 where you, @simonw, wrote:\r\n\r\n> Could this handle lists of objects too? That would be pretty amazing - if the column has a [{...}, {...}] list in it could turn that into a many-to-many.\r\n\r\nThe way this works in my work is that many-to-many relationships are created for anything that maps to a dictionary in a list, and many-to-one relations for everything else (assumed to be scalar values). Not sure what the best approach here would be? Are many-to-one relationships at all useful here?\r\n\r\nWhat do you think about this approach? I could try to add it to the cli interface and documentation if wanted.\r\n\r\nThanks for this awesome piece of software in any case! :sun_with_face: ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/327/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1513238455, "node_id": "PR_kwDODEm0Qs5GUoPm", "number": 71, "title": "Archive: Fix \"ni devices\" typo in importer", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:33:31Z", "updated_at": "2022-12-28T23:33:31Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/71", "body": null, "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/71/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1513238314, "node_id": "PR_kwDODEm0Qs5GUoN6", "number": 70, "title": "Archive: Import Twitter Circle data", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:33:09Z", "updated_at": "2022-12-28T23:33:09Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/70", "body": null, "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/70/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1513238152, "node_id": "PR_kwDODEm0Qs5GUoMM", "number": 69, "title": "Archive: Import new tweets table name", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:32:44Z", "updated_at": "2022-12-28T23:32:44Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/69", "body": "Given the code here, it seems like in the past this file was named \"tweet.js\". 
In recent exports, it's named \"tweets.js\". The archive importer needs to be modified to take this into account. Existing logic is reused for importing this table. (However, the resulting table name will be different, matching the different file name -- archive_tweets, rather than archive_tweet).", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/69/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1513237982, "node_id": "PR_kwDODEm0Qs5GUoKL", "number": 68, "title": "Archive: Import mute table", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:32:06Z", "updated_at": "2022-12-28T23:32:06Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/68", "body": null, "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/68/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
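
Two of the reports above converge on the same function: #1995 crashes because `escape_sqlite` receives `None` for `other_column` when a foreign key declares no explicit target column, and #2013 notes that the `[...]` fallback quoting it uses is non-standard SQL. Below is a minimal sketch of what an identifier escaper addressing both points could look like: standard double-quote quoting, plus a loud failure on non-string input. The name `escape_identifier` and the abbreviated `RESERVED` set are hypothetical illustrations, not Datasette's actual code.

```python
import re

# Bare-word identifiers that need no quoting at all.
_plain_re = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")

# Abbreviated stand-in for SQLite's full reserved-word list.
RESERVED = {"select", "table", "where", "group", "order", "index", "user"}


def escape_identifier(name):
    """Quote a SQLite identifier with standard double quotes.

    Raises a descriptive TypeError instead of letting re.match(None)
    fail with "expected string or bytes-like object", which is the
    opaque symptom reported in #1995.
    """
    if not isinstance(name, str):
        raise TypeError(f"identifier must be a string, got {name!r}")
    if _plain_re.match(name) and name.lower() not in RESERVED:
        return name
    # Standard SQL quoting: wrap in double quotes, double any embedded quotes.
    return '"{}"'.format(name.replace('"', '""'))


print(escape_identifier("name"))         # name
print(escape_identifier("user"))         # "user"
print(escape_identifier('odd"column'))   # "odd""column"
```

A guard like this would not fix the underlying problem in #1995 (the caller still needs to fall back to the referenced table's primary key when the foreign key names no target column), but it would turn the opaque 500 into an actionable error, and the double-quote form addresses the portability concern raised in #2013.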