{"id": 351017365, "node_id": "MDExOlB1bGxSZXF1ZXN0MjA4NzE5MDQz", "number": 361, "title": " Import pysqlite3 if available, closes #360 ", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-08-16T00:52:21Z", "updated_at": "2018-08-16T00:58:57Z", "closed_at": "2018-08-16T00:58:57Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/361", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/361/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1871935751, "node_id": "I_kwDOD079W85vk3kH", "number": 40, "title": " ImportError: cannot import name 'formatargspec' from 'inspect'", "user": {"value": 36752421, "label": "hosslikw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-29T15:36:31Z", "updated_at": "2023-08-31T03:18:07Z", "closed_at": "2023-08-31T03:18:06Z", "author_association": "NONE", "pull_request": null, "body": "I get the following error when running \"pip3 install dogsheep-photos\"\r\n\" from inspect import ismethod, isclass, formatargspec\r\n ImportError: cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/inspect.py). Did you mean: 'formatargvalues'?\"\r\n \r\nPython 3.12.0rc1\r\nsqlite 3.43.0\r\ndatasette, version 0.64.3", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/40/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1943259395, "node_id": "I_kwDOEhK-wc5z08kD", "number": 16, "title": " time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ'", "user": {"value": 3746270, "label": "linonetwo"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-14T13:24:39Z", "updated_at": "2023-10-14T13:24:39Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "\r\n```\r\nevernote-to-sqlite enex evernote.db ./\u6211\u7684\u7b14\u8bb0.enex\r\nImporting from ENEX [#####-------------------------------] 14%\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/evernote-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n ^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1157, in __call__\r\n return self.main(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1078, in main\r\n rv = self.invoke(ctx)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1688, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1434, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/usr/local/lib/python3.11/site-packages/click/core.py\", line 783, in invoke\r\n return __callback(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/cli.py\", line 31, in enex\r\n save_note(db, note)\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py\", line 46, in save_note\r\n \"created\": convert_datetime(created),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py\", line 111, in convert_datetime\r\n return datetime.datetime.strptime(s, \"%Y%m%dT%H%M%SZ\").isoformat()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py\", line 568, in _strptime_datetime\r\n tt, fraction, gmtoff_fraction = _strptime(data_string, format)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py\", line 349, in _strptime\r\n raise ValueError(\"time data %r does not match format %r\" %\r\nValueError: time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ'\r\n```\r\n\r\nenex is exported by evernote mac client ", "repo": {"value": 303218369, "label": "evernote-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 761713079, "node_id": "MDU6SXNzdWU3NjE3MTMwNzk=", "number": 1138, "title": "\"Powered by Datasette\" should link to new datasette.io site", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-12-10T23:33:41Z", "updated_at": "2020-12-15T02:28:10Z", "closed_at": "2020-12-10T23:37:14Z", "author_association": "OWNER", "pull_request": null, "body": "https://datasette.io/", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1138/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1084007781, "node_id": "I_kwDOBm6k_c5AnKVl", "number": 1572, "title": "\"Query took\" should be \"Queries took\"", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 7571612, "label": "Datasette 0.60"}, "comments": 0, "created_at": "2021-12-19T04:03:00Z", "updated_at": "2022-01-13T22:27:43Z", "closed_at": "2021-12-19T04:03:24Z", "author_association": "OWNER", "pull_request": null, "body": "This is misleading, since usually there have been more than one query executed:\r\n\r\n![CleanShot 2021-12-18 at 20 02 35@2x](https://user-images.githubusercontent.com/9599/146663457-9c4c2900-5cc0-4650-a565-bb1ff0b8a725.png)\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1572/reactions\", \"total_count\": 0, \"+1\": 0, 
\"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 275228834, "node_id": "MDU6SXNzdWUyNzUyMjg4MzQ=", "number": 136, "title": "\"Reformat SQL\" button next to SQL editor textarea", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-11-20T03:42:19Z", "updated_at": "2019-10-14T03:46:13Z", "closed_at": "2019-10-14T03:46:13Z", "author_association": "OWNER", "pull_request": null, "body": "Can use this:\r\n\r\nhttps://github.com/zeroturnaround/sql-formatter\r\nhttps://zeroturnaround.github.io/sql-formatter/\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/136/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 853672224, "node_id": "MDU6SXNzdWU4NTM2NzIyMjQ=", "number": 1294, "title": "\"You can check out any time you like. But you can never leave!\"", "user": {"value": 192568, "label": "mroswell"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-04-08T17:02:15Z", "updated_at": "2021-04-08T18:35:50Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "(Feel free to rename this one.)\r\n\r\n- The column gear lets you \"Show not-blank rows.\" Then it places a parameter in the URL, which a web developer would notice, but a lot of users won't notice, or know to delete it. Would be good to toggle \"Show not-blank rows\" with \"Show all rows.\" (Also would be quite helpful to have a \"Show blank rows | Show all rows\" option)\r\n- The column gear lets you \"Sort ascending\" and \"Sort descending\" but then you're stuck with some sort of sorted version thereafter, unless you know to sort the ID column, or to remove the full _sort parameter and its value in the URL. Would be good to offer a \"Remove sort\" option in the gear.\r\n- These requests are in the same camp as: https://github.com/simonw/datasette-vega/issues/36\r\n- I suspect there are other url parameter instances where similar analysis would be helpful, but the three above are the use cases I've run across. 
\r\n\r\nUPDATE:\r\n- It would be helpful to have a \"Previous page\" available for all but the first table page.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1294/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 679637501, "node_id": "MDU6SXNzdWU2Nzk2Mzc1MDE=", "number": 934, "title": "--get doesn't fully invoke the startup routine", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-08-15T20:30:25Z", "updated_at": "2020-08-15T20:53:49Z", "closed_at": "2020-08-15T20:53:49Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/7702ea602188899ee9b0446a874a6a9b546b564d/datasette/cli.py#L417-L433\r\n\r\nSpotted this working on https://github.com/simonw/latest-datasette-with-all-plugins/issues/3 - I'd like to be able to use `datasette --get /` as a sanity checking test, but that doesn't work if the init hooks aren't fully executed.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/934/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1931794126, "node_id": "I_kwDOBm6k_c5zJNbO", "number": 2198, "title": "--load-extension=spatialite not working with Windows", "user": {"value": 363004, "label": "hcarter333"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-08T12:50:22Z", "updated_at": "2023-10-08T12:50:22Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Using each of\r\n`python -m datasette counties.db -m metadata.yml --load-extension=SpatiaLite`\r\n\r\nand \r\n\r\n`python -m datasette counties.db --load-extension=\"C:\\Windows\\System32\\mod_spatialite.dll\"`\r\n\r\nand\r\n\r\n`python -m datasette counties.db --load-extension=C:\\Windows\\System32\\mod_spatialite.dll`\r\n\r\nI got the error:\r\n\r\n```\r\n File \"C:\\Users\\m3n7es\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\datasette\\database.py\", line 209, in in_thread\r\n self.ds._prepare_connection(conn, self.name)\r\n File \"C:\\Users\\m3n7es\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\datasette\\app.py\", line 596, in _prepare_connection\r\n conn.execute(\"SELECT load_extension(?, ?)\", [path, entrypoint])\r\nsqlite3.OperationalError: The specified module could not be found.\r\n\r\n```\r\n\r\nI finally tried modifying the code in app.py to read:\r\n\r\n```\r\n def _prepare_connection(self, conn, database):\r\n conn.row_factory = sqlite3.Row\r\n conn.text_factory = lambda x: str(x, \"utf-8\", \"replace\")\r\n if self.sqlite_extensions:\r\n conn.enable_load_extension(True)\r\n for extension in self.sqlite_extensions:\r\n # \"extension\" is either a string path to the extension\r\n # or a 2-item tuple that specifies which 
entrypoint to load.\r\n #if isinstance(extension, tuple):\r\n # path, entrypoint = extension\r\n # conn.execute(\"SELECT load_extension(?, ?)\", [path, entrypoint])\r\n #else:\r\n conn.execute(\"SELECT load_extension('C:\\Windows\\System32\\mod_spatialite.dll')\")\r\n\r\n```\r\nAt which point the counties example worked. \r\n\r\nIs there a correct way to install/use the extension on Windows? My method will cause issues if there's a second extension to be used.\r\n\r\nOn an unrelated note, my next step is to figure out how to write a query across the two loaded databases supplied from the command line:\r\n`python -m datasette rm_toucans_23_10_07.db counties.db -m metadata.yml --load-extension=SpatiaLite`\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2198/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 665701216, "node_id": "MDU6SXNzdWU2NjU3MDEyMTY=", "number": 123, "title": "--raw option for outputting binary content", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-26T03:35:39Z", "updated_at": "2020-07-26T16:44:11Z", "closed_at": "2020-07-26T16:44:11Z", "author_association": "OWNER", "pull_request": null, "body": "Related to the `insert-files` work in #122 - it should be easy to get binary data back out of the database again.\r\n\r\nOne way to do that could be:\r\n\r\n sqlite-utils files.db \"select content from files where key = 'foo.jpg'\" --raw\r\n\r\nThe `--raw` option would cause just the contents of the first column to be output directly to stdout.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/123/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 758944006, "node_id": "MDU6SXNzdWU3NTg5NDQwMDY=", "number": 57, "title": "--readme throws 404 error if README does not exist in repo", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-12-07T23:58:49Z", "updated_at": "2020-12-16T18:17:54Z", "closed_at": "2020-12-16T18:17:54Z", "author_association": "MEMBER", "pull_request": null, "body": "It should fail silently (populate the column with a null) instead.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/57/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 610853393, "node_id": "MDU6SXNzdWU2MTA4NTMzOTM=", "number": 104, "title": "--schema option to \"sqlite-utils tables\"", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-01T16:55:49Z", 
"updated_at": "2020-05-01T17:12:37Z", "closed_at": "2020-05-01T17:12:37Z", "author_association": "OWNER", "pull_request": null, "body": "Adds output showing the table schema.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/104/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 806861312, "node_id": "MDExOlB1bGxSZXF1ZXN0NTcyMjA5MjQz", "number": 1222, "title": "--ssl-keyfile and --ssl-certfile, refs #1221", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-02-12T00:45:58Z", "updated_at": "2021-02-12T00:52:18Z", "closed_at": "2021-02-12T00:52:17Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1222", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1222/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 735648209, "node_id": "MDU6SXNzdWU3MzU2NDgyMDk=", "number": 193, "title": "--tsv output format option", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6079500, "label": "3.0"}, "comments": 0, "created_at": "2020-11-03T21:31:18Z", "updated_at": "2020-11-07T00:09:52Z", "closed_at": "2020-11-07T00:09:52Z", "author_association": "OWNER", "pull_request": null, "body": "We already support `--csv` for output, and the `insert` command accepts `--tsv`. The output format options should accept `--tsv` too.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/193/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 783910901, "node_id": "MDU6SXNzdWU3ODM5MTA5MDE=", "number": 221, "title": ".add_missing_columns() does not take case insensitivity into account", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-12T05:01:00Z", "updated_at": "2021-01-12T23:17:33Z", "closed_at": "2021-01-12T23:17:33Z", "author_association": "OWNER", "pull_request": null, "body": "SQLite columns are case insensitive - but the `.add_missing_columns()` method doesn't know that. This means that it can crash if it identifies a column that is a case-insensitive duplicate of an existing column. 
https://github.com/simonw/sqlite-utils/blob/4cc82fd0bccc9d2eeb3510beb4e691d7da099f84/sqlite_utils/db.py#L1974-L1980", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/221/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 787900412, "node_id": "MDU6SXNzdWU3ODc5MDA0MTI=", "number": 222, "title": ".m2m() should accept alter=True parameter", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-18T04:15:43Z", "updated_at": "2021-01-18T04:26:10Z", "closed_at": "2021-01-18T04:26:10Z", "author_association": "OWNER", "pull_request": null, "body": "Needed by https://github.com/dogsheep/swarm-to-sqlite/issues/11", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/222/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 561460274, "node_id": "MDU6SXNzdWU1NjE0NjAyNzQ=", "number": 84, "title": ".upsert() with hash_id throws error", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-02-07T07:08:19Z", "updated_at": "2020-02-07T07:17:11Z", "closed_at": "2020-02-07T07:17:11Z", "author_association": "OWNER", "pull_request": null, "body": "```python\r\ndb[table_name].upsert_all(rows, hash_id=\"pk\")\r\n```\r\nThis throws an error: `PrimaryKeyRequired('upsert() requires a pk')`\r\n\r\nThe problem is, if you try this:\r\n\r\n```python\r\ndb[table_name].upsert_all(rows, hash_id=\"pk\", pk=\"pk\")\r\n```\r\nYou get this error: `AssertionError('Use either pk= or hash_id=')`\r\n\r\n`hash_id=` should imply that `pk=` that column.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/84/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 315548495, "node_id": "MDU6SXNzdWUzMTU1NDg0OTU=", "number": 225, "title": "/-/(inspect|metadata|plugins)(.json)? 
introspection", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-18T16:14:58Z", "updated_at": "2018-04-19T05:25:33Z", "closed_at": "2018-04-19T05:25:33Z", "author_association": "OWNER", "pull_request": null, "body": "3 pages (and accompanying .json endpoints) for viewing:\r\n\r\n* the metadata.json that datasette was loaded with\r\n* the output of ds.inspect()\r\n* a list of installed plugins, detected by pluggy", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/225/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 778530523, "node_id": "MDU6SXNzdWU3Nzg1MzA1MjM=", "number": 1172, "title": "/-/static should be excluded from auth and permission checks", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-01-05T02:53:41Z", "updated_at": "2021-01-05T02:53:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I want to set far future / immutable cache headers on everything served from `/-/static` and `/-/static-plugins`\r\n\r\nThis has security implications since it will be possible to see what plugins are installed by checking for known static URLs. I'm fine with that - performance is more important here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1172/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 321624016, "node_id": "MDU6SXNzdWUzMjE2MjQwMTY=", "number": 252, "title": "/-/versions should report the FTS version supported by SQLite", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-09T15:43:47Z", "updated_at": "2018-05-11T13:19:52Z", "closed_at": "2018-05-11T13:19:52Z", "author_association": "OWNER", "pull_request": null, "body": "I can copy this function from `csvs-to-sqlite`: https://github.com/simonw/csvs-to-sqlite/blob/dccbf65b37bc9eed50e9edb80a42f257e93edb1f/csvs_to_sqlite/utils.py#L283-L293", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/252/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 267886865, "node_id": "MDU6SXNzdWUyNjc4ODY4NjU=", "number": 28, "title": "/database?sql= should redirect correctly", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 0, "created_at": "2017-10-24T03:38:44Z", "updated_at": "2017-10-24T23:54:30Z", "closed_at": "2017-10-24T23:54:30Z", "author_association": "OWNER", "pull_request": null, "body": 
"Needs to redirect to the location with the hash while retaining the query string. This should also work with the .json extension.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/28/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 647095808, "node_id": "MDU6SXNzdWU2NDcwOTU4MDg=", "number": 874, "title": "/favicon.ico 500 error", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5533512, "label": "Datasette 0.45"}, "comments": 0, "created_at": "2020-06-29T04:04:22Z", "updated_at": "2020-06-29T04:27:18Z", "closed_at": "2020-06-29T04:27:18Z", "author_association": "OWNER", "pull_request": null, "body": "```\r\nTraceback (most recent call last):\r\n File \"...datasette/datasette/app.py\", line 969, in route_path\r\n response = await view(request, send)\r\nTypeError: favicon() missing 1 required positional argument: 'send'\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/874/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 802583450, "node_id": "MDU6SXNzdWU4MDI1ODM0NTA=", "number": 226, "title": "3.4 release is broken - includes a rogue line", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-02-06T02:08:01Z", "updated_at": "2021-02-06T02:10:26Z", "closed_at": "2021-02-06T02:10:26Z", "author_association": "OWNER", "pull_request": null, "body": "I started seeing weird errors, caused by this line: https://github.com/simonw/sqlite-utils/blob/f8010ca78fed8c5fca6cde19658ec09fdd468420/sqlite_utils/cli.py#L1-L3\r\n\r\nThat was added by accident in 1b666f9315d4ea6bb332b2e75e48480c26100199\r\n\r\nI'm surprised the tests didn't catch this!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/226/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 405801771, "node_id": "MDExOlB1bGxSZXF1ZXN0MjQ5NjgwOTQ0", "number": 9, "title": ":pencil: Updates my_database.py to my_database.db", "user": {"value": 50527, "label": "jefftriplett"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-02-01T17:35:43Z", "updated_at": "2019-02-24T03:55:04Z", "closed_at": "2019-02-24T03:55:04Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/9", "body": "I noticed that both `.py` and `.db` were used in the docs and assumed you'd prefer `.db`. 
", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/9/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 426722204, "node_id": "MDU6SXNzdWU0MjY3MjIyMDQ=", "number": 423, "title": "?_search_col=X not reflected correctly in the UI", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-03-28T21:48:19Z", "updated_at": "2020-11-03T19:01:59Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "e.g. https://latest.datasette.io/fixtures/searchable?_search_text1=barry\r\n\r\n![2019-03-28 at 2 47 PM](https://user-images.githubusercontent.com/9599/55195035-84ebb800-5168-11e9-910b-fc9868bcd93e.png)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/423/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 570327466, "node_id": "MDExOlB1bGxSZXF1ZXN0Mzc5Mzc4Nzgw", "number": 686, "title": "?_searchmode=raw option", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-02-25T05:45:50Z", "updated_at": "2020-02-25T05:56:09Z", "closed_at": "2020-02-25T05:56:04Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/686", "body": "Closes #676", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/686/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 610342575, "node_id": "MDU6SXNzdWU2MTAzNDI1NzU=", "number": 748, "title": "?_searchmode=raw should be documented on full-text search page", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-30T19:50:06Z", "updated_at": "2020-04-30T21:06:12Z", "closed_at": "2020-04-30T21:06:12Z", "author_association": "OWNER", "pull_request": null, "body": "It's currently documented here: https://datasette.readthedocs.io/en/stable/json_api.html#special-table-arguments\r\n\r\nBut it should also be described here: https://datasette.readthedocs.io/en/stable/full_text_search.html#the-table-view-api", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/748/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 319371036, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg1MzA3NDA3", "number": 246, "title": "?_shape=array and _timelimit=", "user": {"value": 9599, "label": "simonw"}, "state": "closed", 
"locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-02T00:18:54Z", "updated_at": "2018-05-02T00:20:41Z", "closed_at": "2018-05-02T00:20:40Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/246", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/246/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 432792459, "node_id": "MDExOlB1bGxSZXF1ZXN0MjcwMTkxMDg0", "number": 430, "title": "?_where= parameter on table views, closes #429", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-04-13T01:15:09Z", "updated_at": "2019-04-13T01:37:23Z", "closed_at": "2019-04-13T01:37:23Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/430", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/430/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 433297989, "node_id": "MDU6SXNzdWU0MzMyOTc5ODk=", "number": 433, "title": "?column__in=value1,value2,value3 filter", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-04-15T13:58:24Z", "updated_at": "2019-04-15T23:00:20Z", "closed_at": "2019-04-15T23:00:20Z", "author_association": "OWNER", "pull_request": null, "body": "Support for the SQL `where column in (...)` construct, inspired by the new design for facet configuration in #427\r\n\r\n`?column__in=value1,value2,value3` will map to `where column in (\"value1\", \"value2\", \"value3\")`\r\n\r\nIf comma separation won't work (because the values themselves contain commas) you can do this instead:\r\n\r\n`?column__in=[\"value1\",\"value2\",\"value3,with-comma\"]`\r\n\r\nSee also #288", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/433/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1345561209, "node_id": "I_kwDOBm6k_c5QM6J5", "number": 1790, "title": "A better HTML title for canned query pages", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-08-21T18:27:46Z", "updated_at": "2022-08-21T18:27:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://scotrail.datasette.io/scotrail/assemble_sentence?terms=This+train+is+formed+of%2Cbomb+which\r\n\r\nCurrent title is:\r\n\r\n`scotrail: with phrases as ( select key, value from json_each('["' || replace(:terms, ',', '","') || '"]')),matches as (select phrases.key, phrases.value, ( select File from announcements where announcements.Transcription like '%' || 
trim(phrases.value) || '%' order by length(announcements.Transcription) limit 1 ) as Filefrom phrases),results as ( select key, announcements.Transcription, announcements.mp3 from announcements join matches on announcements.File = matches.File order by key)select 'Combined sentence:' as mp3, group_concat(Transcription, ' ') as Transcription, -1 as keyfrom results unionselect mp3, Transcription, keyfrom resultsorder by key`\r\n\r\nI think a better title would be:\r\n\r\n`scotrail: assemble_sentence, terms = This train is formed of,bomb which`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1790/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 413857257, "node_id": "MDU6SXNzdWU0MTM4NTcyNTc=", "number": 15, "title": "Ability to add columns to tables", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-02-24T19:20:51Z", "updated_at": "2019-02-24T20:04:40Z", "closed_at": "2019-02-24T20:04:40Z", "author_association": "OWNER", "pull_request": null, "body": "Makes sense to do this before foreign keys in #2\r\n\r\nPython:\r\n\r\n db[\"table\"].add_column(\"new_column\", int)\r\n\r\nCLI:\r\n\r\n $ sqlite-utils add-column table new_column INTEGER\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/15/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 906355849, "node_id": "MDExOlB1bGxSZXF1ZXN0NjU3MzczNzI2", "number": 262, "title": "Ability to add descending order indexes", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-05-29T04:51:04Z", "updated_at": "2021-05-29T05:01:42Z", "closed_at": "2021-05-29T05:01:39Z", "author_association": "OWNER", "pull_request": "simonw/sqlite-utils/pulls/262", "body": "Refs #260", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/262/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1447465004, "node_id": "I_kwDOBm6k_c5WRpAs", "number": 1889, "title": "Ability to create new tokens via the API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 8755003, "label": "Datasette 1.0a-next"}, "comments": 0, "created_at": "2022-11-14T06:21:36Z", "updated_at": "2022-12-13T05:29:08Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Refs:\r\n- #1850\r\n\r\nInitially I decided that the API shouldn't be able to create new tokens at all - I don't like the idea of an API token holder creating themselves additional tokens.\r\n\r\nThen I realized that two of the API features are specifically more 
useful if you can generate fresh tokens via the API:\r\n\r\n- Tokes that expire after a time limit are MUCH more useful if they can be automatically generated\r\n- Likewise, tokens that are restricted to a subset of permissions (see #1855) make more sense to be generated like this, especially in conjunction with expiry times", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1889/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1754174496, "node_id": "I_kwDOCGYnMM5ojpQg", "number": 558, "title": "Ability to define unique columns when creating a table", "user": {"value": 1910303, "label": "aguinane"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-13T06:56:19Z", "updated_at": "2023-08-18T01:06:03Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When creating a new table, it would be good to have an option to set unique columns similar to how not_null is set.\r\n\r\n```python\r\nfrom sqlite_utils import Database\r\n\r\ncolumns = {\"mRID\": str, \"name\": str}\r\ndb = Database(\"example.db\")\r\ndb[\"ExampleTable\"].create(columns, pk=\"mRID\", not_null=[\"mRID\"], if_not_exists=True)\r\ndb[\"ExampleTable\"].create_index([\"mRID\"], unique=True, if_not_exists=True)\r\n```\r\n\r\nSo something like this would add the UNIQUE flag to the table definition. \r\n\r\n```python\r\ndb[\"ExampleTable\"].create(columns, pk=\"mRID\", not_null=[\"mRID\"], unique=[\"mRID\"], if_not_exists=True)\r\n```\r\n\r\n```sql\r\nCREATE TABLE ExampleTable (\r\n mRID TEXT PRIMARY KEY\r\n NOT NULL\r\n UNIQUE,\r\n name TEXT\r\n);\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/558/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 975049826, "node_id": "MDExOlB1bGxSZXF1ZXN0NzE2MjYyODI5", "number": 1444, "title": "Ability to deploy demos of branches", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-08-19T21:08:04Z", "updated_at": "2021-08-19T21:09:44Z", "closed_at": "2021-08-19T21:09:39Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/1444", "body": "See #1442.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1444/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 897212458, "node_id": "MDU6SXNzdWU4OTcyMTI0NTg=", "number": 63, "title": "Ability to fetch commits from branches other than the default", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-05-20T17:58:08Z", "updated_at": "2021-05-20T17:58:08Z", "closed_at": 
null, "author_association": "MEMBER", "pull_request": null, "body": "This tool is currently almost entirely ignorant of the concept of branches. One example: you can't retrieve commits from any branch other than the default (usually main).", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/63/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 508024032, "node_id": "MDU6SXNzdWU1MDgwMjQwMzI=", "number": 22, "title": "Ability to import from uncompressed archive or from specific files", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-16T18:31:57Z", "updated_at": "2019-10-16T18:53:36Z", "closed_at": "2019-10-16T18:53:36Z", "author_association": "MEMBER", "pull_request": null, "body": "Currently you can only import like this:\r\n\r\n $ twitter-to-sqlite import path-to-twitter.zip\r\n\r\nIt would be useful if you could import from a folder that was decompressed from that zip:\r\n\r\n $ twitter-to-sqlite import path-to-twitter/\r\n\r\nAND from individual files within that folder - since that would allow you to e.g. selectively import certain files:\r\n\r\n $ twitter-to-sqlite import path-to-twitter/favorites.js path-to-twitter/tweets.js", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/22/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 488338965, "node_id": "MDU6SXNzdWU0ODgzMzg5NjU=", "number": 59, "title": "Ability to introspect triggers", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-09-02T23:47:16Z", "updated_at": "2019-09-03T01:52:36Z", "closed_at": "2019-09-03T00:09:42Z", "author_association": "OWNER", "pull_request": null, "body": "Now that we're creating triggers (thanks to @amjith in #57) it would be neat if we could introspect them too.\r\n\r\nI'm thinking:\r\n\r\n`db.triggers` - lists all triggers for the database\r\n`db[\"tablename\"].triggers` - lists triggers for that table\r\n\r\nThe underlying query for this is `select * from sqlite_master where type = 'trigger'`\r\n\r\nI'll return the trigger information in a new namedtuple, similar to how Indexes and ForeignKeys work.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/59/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 312395790, "node_id": "MDU6SXNzdWUzMTIzOTU3OTA=", "number": 197, "title": "Ability to sort by more than one column", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": 
"2018-04-09T05:13:30Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189.\r\n\r\nI'd like to support \"sort by X descending, then by Y ascending if there are dupes for X\" as well. Suggested syntax for that:\r\n\r\n ?_sort_desc=X&_sort=Y\r\n\r\nwe currently only allow one argument to be sent. We should allow as many arguments as there are columns, for example:\r\n\r\n ?_sort=department&_sort_desc=precinct&_sort=age&_sort_desc=size", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/197/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 312396095, "node_id": "MDU6SXNzdWUzMTIzOTYwOTU=", "number": 198, "title": "Ability to sort with nulls last", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-09T05:15:40Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189\r\n\r\nHere's how to do that in SQL: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+rowid%2C+*+from+%5Bnfl-wide-receivers%2Fadvanced-historical%5D%0D%0Aorder+by+case+when+career_ranypa+is+null+then+1+else+0+end%2C+career_ranypa%2C+rowid\r\n\r\n order by case when career_ranypa is null then 1 else 0 end, career_ranypa", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/198/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1399933513, "node_id": "I_kwDOBm6k_c5TcUpJ", "number": 1833, "title": "Ability to submit long queries by POST", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-10-06T16:03:26Z", "updated_at": "2022-10-06T16:18:00Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette doesn't limit URL lengths but some common web proxies do - the one in front of Google Cloud Run for example limits to 8KB total for incoming request headers: https://cloud.google.com/load-balancing/docs/quotas#https-lb-header-limits\r\n\r\nThis means longer SQL queries can break!\r\n\r\nNeed an optional mechanism for submitting queries by POST instead.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1833/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1674322631, "node_id": "PR_kwDOBm6k_c5OpEz_", "number": 2061, "title": "Add \"Packaging a plugin using Poetry\" section in docs", "user": {"value": 1238873, "label": "rclement"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": 
"2023-04-19T07:23:28Z", "updated_at": "2023-04-19T07:27:18Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2061", "body": "This PR adds a new section about packaging a plugin using `poetry` within the \"Writing plugins\" page of the documentation.\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2061.org.readthedocs.build/en/2061/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2061/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1243715381, "node_id": "I_kwDOCGYnMM5KIZc1", "number": 436, "title": "Add \"copy to clipboard\" button to code examples in documentation", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-05-20T21:53:23Z", "updated_at": "2022-05-20T21:57:53Z", "closed_at": "2022-05-20T21:57:53Z", "author_association": "OWNER", "pull_request": null, "body": "Follows:\r\n- #435\r\n\r\nImitates:\r\n- https://github.com/simonw/datasette/issues/1748\r\n\r\nI'll use https://github.com/executablebooks/sphinx-copybutton - here's the Datasette commit: https://github.com/simonw/datasette/commit/1465fea4798599eccfe7e8f012bd8d9adfac3039", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/436/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 668064778, "node_id": "MDU6SXNzdWU2NjgwNjQ3Nzg=", "number": 912, "title": "Add \"publishing to Vercel\" to the publish docs", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-07-29T18:50:58Z", "updated_at": "2020-07-31T17:06:35Z", "closed_at": "2020-07-31T17:06:35Z", "author_association": "OWNER", "pull_request": null, "body": "https://datasette.readthedocs.io/en/0.45/publish.html#datasette-publish currently only lists Cloud Run, Heroku and Fly. 
It should list Vercel too.\r\n\r\n(I should probably rename `datasette-publish-now` to `datasette-publish-vercel`)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/912/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 924991194, "node_id": "MDU6SXNzdWU5MjQ5OTExOTQ=", "number": 280, "title": "Add --encoding option to sqlite-utils memory", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-18T15:03:32Z", "updated_at": "2021-06-18T15:29:46Z", "closed_at": "2021-06-18T15:29:46Z", "author_association": "OWNER", "pull_request": null, "body": "Follow-on from #272 - this will work like `--encoding` on `sqlite-utils insert` and will affect all CSV files processed by `sqlite-utils memory`.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/280/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273895344, "node_id": "MDU6SXNzdWUyNzM4OTUzNDQ=", "number": 92, "title": "Add --license --license_url --source --source_url --title arguments to datasette publish", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-11-14T18:27:07Z", "updated_at": "2017-11-15T05:04:41Z", "closed_at": "2017-11-15T05:04:41Z", "author_association": "OWNER", "pull_request": null, "body": "I keep on using the `echo '{\"source\": \"...\"}' | datasette publish now --metadata=-` pattern, which suggests it makes sense for us to support these as optional arguments.\r\n\r\nhttps://gist.github.com/simonw/9f8bf23b37a42d7628c4dcc4bba10253", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/92/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1553425465, "node_id": "I_kwDOCGYnMM5cl2Q5", "number": 522, "title": "Add COLUMN_TYPE_MAPPING for timedelta", "user": {"value": 81377, "label": "maport"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-23T16:49:54Z", "updated_at": "2023-11-04T00:49:51Z", "closed_at": "2023-11-04T00:49:51Z", "author_association": "NONE", "pull_request": null, "body": "Currently trying to create a column with Python type `datetime.timedelta` results in an error:\r\n\r\n```\r\n>>> from sqlite_utils import Database\r\n>>> db = Database(\"test.db\")\r\n>>> test_tbl = db['test']\r\n>>> test_tbl.insert({'col1': datetime.timedelta()})\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 2979, in insert\r\n return self.insert_all(\r\n File 
\"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 3082, in insert_all\r\n self.create(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 1574, in create\r\n self.db.create_table(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 961, in create_table\r\n sql = self.create_table_sql(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 852, in create_table_sql\r\n column_type=COLUMN_TYPE_MAPPING[column_type],\r\nKeyError: \r\n```\r\n\r\nThe reason this would be useful is that `MySQLdb` uses `timedelta` for MySQL `TIME` columns:\r\n\r\n```\r\n>>> import MySQLdb\r\n>>> conn = MySQLdb.connect(host='database', user='user', passwd='pw')\r\n>>> csr = conn.cursor()\r\n>>> csr.execute(\"SELECT CAST('11:20' AS TIME)\")\r\n>>> tuple(csr)\r\n((datetime.timedelta(seconds=40800),),)\r\n```\r\n\r\nSo currently any attempt to convert a MySQL DB with a `TIME` column using `db-to-sqlite` will result in the above error.\r\n\r\nI was rather surprised that `MySQLdb` uses `timedelta` for `TIME` columns but I see that [this column type](https://dev.mysql.com/doc/refman/8.0/en/time.html) is intended for time intervals as well as the time of day so it makes sense. \r\n\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/522/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 635076066, "node_id": "MDU6SXNzdWU2MzUwNzYwNjY=", "number": 821, "title": "Add Response class to internals documentation", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5512395, "label": "Datasette 0.44"}, "comments": 0, "created_at": "2020-06-09T03:11:06Z", "updated_at": "2020-06-09T03:32:16Z", "closed_at": "2020-06-09T03:32:16Z", "author_association": "OWNER", "pull_request": null, "body": "> I'll need to add documentation of the `Response` object (and `Response.html()` and `Response.text()` class methods - I should add `Response.json()` too) to the internals page https://datasette.readthedocs.io/en/stable/internals.html\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/215#issuecomment-640971470_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/821/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273537940, "node_id": "MDU6SXNzdWUyNzM1Mzc5NDA=", "number": 77, "title": "Add Travis CI badge to README", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 0, "created_at": "2017-11-13T18:52:25Z", "updated_at": "2017-11-13T21:24:15Z", "closed_at": "2017-11-13T21:24:15Z", "author_association": "OWNER", "pull_request": null, "body": "Also fix this newline issue:\r\n\r\n\"simonw_datasette__instant_json_api_for_your_sqlite_database\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", 
"active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/77/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1823428714, "node_id": "I_kwDOBm6k_c5sr1Bq", "number": 2120, "title": "Add __all__ to datasette/__init__.py", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-27T01:07:10Z", "updated_at": "2023-07-27T01:07:10Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Currently looks like this: https://github.com/simonw/datasette/blob/08181823990a71ffa5a1b57b37259198eaa43e06/datasette/__init__.py#L1-L6\r\n\r\nAdding `__all__ = [\"Permission\", \"Forbidden\"...]` would let me get rid of those `# noqa` comments.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2120/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 931752773, "node_id": "MDU6SXNzdWU5MzE3NTI3NzM=", "number": 294, "title": "Add a `sqlite-utils memory` example to the README", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-28T16:35:59Z", "updated_at": "2021-08-18T21:40:03Z", "closed_at": "2021-08-18T21:40:03Z", "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/294/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 830901133, "node_id": "MDExOlB1bGxSZXF1ZXN0NTkyMzY0MjU1", "number": 16, "title": "Add a fallback ID, print if no ID found", "user": {"value": 1234956, "label": "n8henrie"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-13T13:38:29Z", "updated_at": "2021-03-13T14:44:04Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/healthkit-to-sqlite/pulls/16", "body": "Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/14\n", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 439487648, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1MjgxMzA3", "number": 444, "title": "Add a max-line-length setting for flake8", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-02T08:58:57Z", "updated_at": "2019-05-04T09:44:48Z", "closed_at": "2019-05-03T13:11:28Z", 
"author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/444", "body": "This stops my automatic editor linting from flagging lines which are too\r\nlong. It's been lingering in my checkout for ages.\r\n\r\n160 is an arbitrary large number - we could alter it if we have any\r\nopinions (but I find the line length limit to be my least favourite part\r\nof PEP8).", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/444/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 689800307, "node_id": "MDU6SXNzdWU2ODk4MDAzMDc=", "number": 1, "title": "Add an index on the timestamp column", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-01T04:33:37Z", "updated_at": "2020-09-01T04:49:23Z", "closed_at": "2020-09-01T04:49:23Z", "author_association": "MEMBER", "pull_request": null, "body": "Since default view will likely be ordered by timestamp descending.", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1102966378, "node_id": "I_kwDOBm6k_c5Bve5q", "number": 1599, "title": "Add architecture documentation", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-01-14T04:55:38Z", "updated_at": "2022-01-14T04:56:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Inspired by https://matklad.github.io/2021/02/06/ARCHITECTURE.md.html\r\n\r\nGood example: https://github.com/rust-analyzer/rust-analyzer/blob/d7c99931d05e3723d878bea5dc26766791fa4e69/docs/dev/architecture.md", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1599/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 558715564, "node_id": "MDExOlB1bGxSZXF1ZXN0MzcwMDI0Njk3", "number": 4, "title": "Add beeminder-to-sqlite", "user": {"value": 706257, "label": "bcongdon"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-02-02T15:51:36Z", "updated_at": "2020-10-12T00:36:16Z", "closed_at": "2020-10-12T00:36:16Z", "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/4", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/4/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} 
{"id": 927789811, "node_id": "MDU6SXNzdWU5Mjc3ODk4MTE=", "number": 292, "title": "Add contributing documentation", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-23T02:13:05Z", "updated_at": "2021-06-25T17:53:51Z", "closed_at": "2021-06-25T17:53:51Z", "author_association": "OWNER", "pull_request": null, "body": "Like https://docs.datasette.io/en/latest/contributing.html (but simpler) - should cover how to run `black` and `flake8` and `mypy` and how to run the tests.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/292/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1581218043, "node_id": "PR_kwDOBm6k_c5JyqPy", "number": 2025, "title": "Add database metadata to index.html template context", "user": {"value": 9993, "label": "palewire"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-12T11:16:58Z", "updated_at": "2023-02-12T11:17:14Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2025", "body": "Fixes #2016 \r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2025.org.readthedocs.build/en/2025/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2025/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 464994105, "node_id": "MDU6SXNzdWU0NjQ5OTQxMDU=", "number": 548, "title": "Add datasette-cors and datasette-auth-github plugins to Ecosystem page", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4471010, "label": "Datasette 0.29"}, "comments": 0, "created_at": "2019-07-07T21:14:14Z", "updated_at": "2019-07-08T02:02:36Z", "closed_at": "2019-07-08T02:02:36Z", "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/548/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 836064851, "node_id": "MDExOlB1bGxSZXF1ZXN0NTk2NjI3Nzgw", "number": 18, "title": "Add datetime parsing", "user": {"value": 1234956, "label": "n8henrie"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-03-19T14:34:22Z", "updated_at": "2021-03-19T14:34:22Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/healthkit-to-sqlite/pulls/18", "body": "Parses the datetime columns so they are subsequently properly recognized as\ndatetime.\n\nFixes https://github.com/dogsheep/healthkit-to-sqlite/issues/17\n", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, 
"type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/18/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 291451116, "node_id": "MDExOlB1bGxSZXF1ZXN0MTY1MDI5ODA3", "number": 182, "title": "Add db filesize next to download link", "user": {"value": 3433657, "label": "raynae"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-01-25T04:58:56Z", "updated_at": "2019-03-22T13:50:57Z", "closed_at": "2019-02-06T04:59:38Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/182", "body": "Took a stab at #172, will this do the trick?", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/182/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1396977994, "node_id": "I_kwDOBm6k_c5TRDFK", "number": 1830, "title": "Add documentation for writing tests with signed actor cookies", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-10-04T23:51:26Z", "updated_at": "2022-10-04T23:51:26Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I use this pattirn in a lot of plugin tests, e.g. https://github.com/simonw/datasette-edit-templates/blob/087f6a6cabc20020f2b0524f11aa3a7836320848/tests/test_edit_templates.py#L55-L58\r\n```python\r\n actor = ds.sign({\"a\": {\"id\": \"root\"}}, \"actor\")\r\n response1 = await ds.client.get(\r\n \"/-/edit-templates/_footer.html\", cookies={\"ds_actor\": actor}\r\n )\r\n```\r\nI should add this to the documentation on this page: https://docs.datasette.io/en/latest/testing_plugins.html", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1830/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1099897648, "node_id": "I_kwDOCGYnMM5Bjxsw", "number": 384, "title": "Add examples to every `--help`", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-01-12T05:31:25Z", "updated_at": "2022-01-26T03:15:02Z", "closed_at": "2022-01-26T03:15:02Z", "author_association": "OWNER", "pull_request": null, "body": "Everything on https://sqlite-utils.datasette.io/en/stable/cli-reference.html would benefit from an example.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/384/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 723499985, "node_id": 
"MDExOlB1bGxSZXF1ZXN0NTA1MDc2NDE4", "number": 5, "title": "Add fitbit-to-sqlite", "user": {"value": 4632208, "label": "mrphil007"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-10-16T20:04:05Z", "updated_at": "2020-10-16T20:04:05Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/5", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1178456794, "node_id": "I_kwDOCGYnMM5GPdLa", "number": 418, "title": "Add generated files to .gitignore", "user": {"value": 25778, "label": "eyeseast"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-23T17:48:12Z", "updated_at": "2022-03-24T21:01:44Z", "closed_at": "2022-03-24T21:01:44Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "I end up with these in my local directory:\r\n\r\n\t.hypothesis/\r\n\tPipfile\r\n\tPipfile.lock\r\n\tpyproject.toml\r\n\r\nMight as well gitignore them.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/418/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 925384329, "node_id": "MDExOlB1bGxSZXF1ZXN0NjczODcyOTc0", "number": 7, "title": "Add instagram-to-sqlite", "user": {"value": 36654812, "label": "gavindsouza"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-19T12:26:16Z", "updated_at": "2021-07-28T07:58:59Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/7", "body": "The tool covers only chat imports at the time of opening this PR but I'm planning to import everything else that I feel inquisitive about\r\n\r\nref: https://github.com/gavindsouza/instagram-to-sqlite", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/7/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 541274681, "node_id": "MDU6SXNzdWU1NDEyNzQ2ODE=", "number": 2, "title": "Add linkedin-to-sqlite", "user": {"value": 881925, "label": "mnp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-12-21T03:13:40Z", "updated_at": "2019-12-21T03:13:40Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There is an API available. 
https://developer.linkedin.com/docs/rest-api#\r\n\r\nAt the minimum, I would think contact list and messages would be of interest.", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 504720731, "node_id": "MDU6SXNzdWU1MDQ3MjA3MzE=", "number": 1, "title": "Add more details on how to request data from google takeout correctly.", "user": {"value": 1055831, "label": "dazzag24"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-09T15:17:34Z", "updated_at": "2019-10-09T15:17:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:\r\n\r\n- My Activity\r\n- Location History\r\n\r\nIn addition unless you specify that \"My Activity\" is downloaded in JSON format the default is HTML. This then causes the \r\n\r\n`google-takeout-to-sqlite my-activity takeout.db takeout.zip`\r\n\r\ncommand to fail as it only contains html files not json files.\r\n\r\nThanks", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 697162939, "node_id": "MDU6SXNzdWU2OTcxNjI5Mzk=", "number": 20, "title": "Add more tags so people can find your project.", "user": {"value": 7902810, "label": "ran88dom99"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-09T21:14:09Z", "updated_at": "2020-09-09T21:14:09Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "quantified-self habit-tracking google-fit time-tracking wearables quantifiedself \r\nfor example", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/20/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 1, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1453134846, "node_id": "I_kwDOCGYnMM5WnRP-", "number": 513, "title": "Add or document streamlined workflow for importing Datasette csv / json exports", "user": {"value": 19328961, "label": "henry501"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-17T10:54:47Z", "updated_at": "2022-11-17T10:54:47Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm working on some small front-end enhancements to the laion-aesthetic-datasette project, and I wanted to partially populate a database directly using exports from the existing Datasette instance instead of downloading the parquet files and creating my own multi-GB database.\r\n\r\nThere have been a 
number of small issues that are certainly related to my relative lack of familiarity with the toolkit, but that are still surprising. \r\n\r\nFor example: a CSV export of the images table (http://laion-aesthetic.datasette.io/laion-aesthetic-6pls.csv?sql=select+rowid%2C+url%2C+text%2C+domain_id%2C+width%2C+height%2C+similarity%2C+punsafe%2C+pwatermark%2C+aesthetic%2C+hash%2C+__index_level_0__+from+images+order+by+random%28%29+limit+100) has nested single quotes, double quotes, and commas that aren't handled by rows_from_file. Similarly, the json output has to be manually transformed to add the column names and remove extraneous information before sqlite_utils can import it.\r\n\r\nI was able to work through these issues, but as an enhancement it would be really helpful to create or document a clear workflow that avoids the friction of this data transformation.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/513/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 519979091, "node_id": "MDExOlB1bGxSZXF1ZXN0MzM4NjQ3Mzc4", "number": 1, "title": "Add parkrun-to-sqlite", "user": {"value": 1101318, "label": "mrw34"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-11-08T12:05:32Z", "updated_at": "2020-10-12T00:35:16Z", "closed_at": "2020-10-12T00:35:16Z", "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/1", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 915488244, "node_id": "MDU6SXNzdWU5MTU0ODgyNDQ=", "number": 1372, "title": "Add section to \"writing plugins\" about security, e.g. 
avoiding XSS", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-06-08T20:49:33Z", "updated_at": "2021-06-08T20:49:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://docs.datasette.io/en/stable/writing_plugins.html should have tips on writing secure plugins.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1372/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1513237712, "node_id": "PR_kwDODEm0Qs5GUoG_", "number": 67, "title": "Add support for app-only bearer tokens", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:31:20Z", "updated_at": "2022-12-28T23:31:20Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/67", "body": "Previously, twitter-to-sqlite only supported OAuth1 authentication, and the token must be on behalf of a user. However, Twitter also supports application-only bearer tokens, documented here:\r\nhttps://developer.twitter.com/en/docs/authentication/oauth-2-0/bearer-tokens This PR adds support to twitter-to-sqlite for using application-only bearer tokens. To use, the auth.json file just needs to contain a \"bearer_token\" key instead of \"api_key\", \"api_secret_key\", etc.", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/67/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1013506559, "node_id": "PR_kwDODFdgUs4skaNS", "number": 68, "title": "Add support for retrieving teams / members", "user": {"value": 68329, "label": "philwills"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2021-10-01T15:55:02Z", "updated_at": "2021-10-01T15:59:53Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/github-to-sqlite/pulls/68", "body": "Adds a method for retrieving all the teams within an organisation and all the members in those teams. 
The latter is stored as a join table `team_members` beteween `teams` and `users`.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/68/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 543717994, "node_id": "MDExOlB1bGxSZXF1ZXN0MzU3OTc0MzI2", "number": 3, "title": "Add todoist-to-sqlite", "user": {"value": 706257, "label": "bcongdon"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-12-30T04:02:59Z", "updated_at": "2020-10-12T00:35:58Z", "closed_at": "2020-10-12T00:35:57Z", "author_association": "CONTRIBUTOR", "pull_request": "dogsheep/dogsheep.github.io/pulls/3", "body": "Really enjoying getting into the dogsheep/datasette ecosystem. I made a downloader for Todoist, and I think/hope others might find this useful", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/3/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1822918995, "node_id": "I_kwDOCGYnMM5sp4lT", "number": 580, "title": "Add way to export to a csv file using the Python library", "user": {"value": 44324811, "label": "kevinlinxc"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-26T18:09:26Z", "updated_at": "2023-07-26T18:09:26Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "According to the documentation, we can make a csv output using the CLI tool, but not the Python library. Could we have the latter?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/580/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 453829910, "node_id": "MDU6SXNzdWU0NTM4Mjk5MTA=", "number": 505, "title": "Add white-space: pre-wrap to SQL create statement", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": {"value": 9599, "label": "simonw"}, "milestone": {"value": 4471010, "label": "Datasette 0.29"}, "comments": 0, "created_at": "2019-06-08T19:59:56Z", "updated_at": "2019-07-07T20:26:55Z", "closed_at": "2019-07-07T20:26:55Z", "author_association": "OWNER", "pull_request": null, "body": "Right now a super-long CREATE TABLE statement causes the table page to be even wider than the table itself:\r\n\r\n\"many-photos-tables__RKMaster__1_row_where_where_modelId___18479_and_New_Issue_\u00b7_simonw_datasette_and_many-photos-tables__RKPlace__1_274_rows\"\r\n\r\nAdding `white-space: pre-wrap` to that `
` element is an easy fix:\r\n\r\n\"many-photos-tables__RKMaster__28_260_rows_and_New_Issue_\u00b7_simonw_datasette\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/505/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}
{"id": 463531894, "node_id": "MDExOlB1bGxSZXF1ZXN0MjkzOTkyMzgy", "number": 535, "title": "Added asgi_wrapper plugin hook, closes #520", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-07-03T03:58:00Z", "updated_at": "2019-07-03T04:06:26Z", "closed_at": "2019-07-03T04:06:26Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/535", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/535/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1244082183, "node_id": "PR_kwDODEm0Qs44PPLy", "number": 66, "title": "Ageinfo workaround", "user": {"value": 11887, "label": "ashanan"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-05-21T21:08:29Z", "updated_at": "2022-05-21T21:09:16Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/66", "body": "I'm not sure if this is due to a new format or just because my ageinfo file is blank, but trying to import an archive would crash when it got to that file.  This PR adds a guard clause in the `ageinfo` transformer and sets a default value that doesn't throw an exception.  Seems likely to be the same issue mentioned by danp in https://github.com/dogsheep/twitter-to-sqlite/issues/54, my ageinfo file looks the same.  Added that same ageinfo file to the test archive as well to help confirm my workaround didn't break anything.\r\n\r\nLet me know if you want any changes!", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/66/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1251700382, "node_id": "I_kwDOBm6k_c5Km26e", "number": 1750, "title": "Allow `label_column` to specify array of columns", "user": {"value": 408765, "label": "knutwannheden"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-05-28T18:45:48Z", "updated_at": "2022-05-28T18:45:48Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I think it would be great if the Datasette metadata would allow the `label_column` table key to list multiple columns. Something like:\r\n```json\r\n    \"tables\": {\r\n        \"person\": {\r\n            \"label_column\": [\"first_name\", \"last_name\"]\r\n        },\r\n```\r\nIt would even be interesting with a \"label expression\" similar to a Python f-string. E.g. `{row.last_name}, {row.first_name}`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1750/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
{"id": 1504352503, "node_id": "I_kwDOBm6k_c5Zqpj3", "number": 1968, "title": "Allow to hide some queries in metadata.yml", "user": {"value": 562352, "label": "CharlesNepote"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-20T10:45:41Z", "updated_at": "2022-12-20T10:45:41Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "By default all queries are displayed.\r\n\r\nBut there are many cases where it would be interesting to hide the queries by default:\r\n* the website is targeting non-tech people\r\n* the query is veeeeeery long ([eg.](https://mirabelle.openfoodfacts.org/products/energy_calculator))\r\n* reading the query is not important for the users, they only want to see the result\r\n\r\nOf course, the user still could have the option to see the query.\r\n\r\nIt could be an option in the metadata file:\r\n```yml\r\ndatabases:\r\n  awesome_db:\r\n    tables:\r\n      products:\r\n        hide_sql: true\r\n    queries:\r\n      great_query:\r\n        hide_sql: true\r\n        sql: select * from products where code = :barcode\r\n```\r\n\r\nThe priority could be:\r\n* no option in the metadata and nothing in the URL: query displayed\r\n* hide_sql in the metadata and nothing in the URL: query displayed as asked in the metadata\r\n* hide_sql in the metadata and &_hide_sql= in the URL: query as asked in the URL\r\n\r\nSee also: #1824\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1968/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
{"id": 440237422, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1ODYxNTU5", "number": 449, "title": "Apply black to everything", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-03T21:57:26Z", "updated_at": "2019-05-04T02:17:14Z", "closed_at": "2019-05-04T02:15:15Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/449", "body": "I've been hesitating on this for literally months, because I'm not at all excited about the giant diff that will result. But I've been using black on many of my other projects (most actively [sqlite-utils](https://github.com/simonw/sqlite-utils)) and the productivity boost is undeniable: I don't have to spend a single second thinking about code formatting any more!\r\n\r\nSo it's worth swallowing the one-off pain and moving on in a new, black-enabled world.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/449/reactions\", \"total_count\": 4, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 4, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1513238455, "node_id": "PR_kwDODEm0Qs5GUoPm", "number": 71, "title": "Archive: Fix \"ni devices\" typo in importer", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:33:31Z", "updated_at": "2022-12-28T23:33:31Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/71", "body": null, "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/71/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1513238314, "node_id": "PR_kwDODEm0Qs5GUoN6", "number": 70, "title": "Archive: Import Twitter Circle data", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:33:09Z", "updated_at": "2022-12-28T23:33:09Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/70", "body": null, "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/70/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1513237982, "node_id": "PR_kwDODEm0Qs5GUoKL", "number": 68, "title": "Archive: Import mute table", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:32:06Z", "updated_at": "2022-12-28T23:32:06Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/68", "body": null, "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/68/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1513238152, "node_id": "PR_kwDODEm0Qs5GUoMM", "number": 69, "title": "Archive: Import new tweets table name", "user": {"value": 26161409, "label": "sometimes-i-send-pull-requests"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-28T23:32:44Z", "updated_at": "2022-12-28T23:32:44Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "dogsheep/twitter-to-sqlite/pulls/69", "body": "Given the code here, it seems like in the past this file was named \"tweet.js\".  In recent exports, it's named \"tweets.js\".  The archive importer needs to be modified to take this into account.  Existing logic is reused for importing this table.  (However, the resulting table name will be different, matching the different file name -- archive_tweets, rather than archive_tweet).", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/69/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 627836898, "node_id": "MDExOlB1bGxSZXF1ZXN0NDI1NTMxMjA1", "number": 783, "title": "Authentication: plugin hooks plus default --root auth mechanism", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-30T22:25:47Z", "updated_at": "2020-06-01T01:16:44Z", "closed_at": "2020-06-01T01:16:43Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/783", "body": "See #699", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/783/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1586980089, "node_id": "PR_kwDOBm6k_c5KF-by", "number": 2026, "title": "Avoid repeating primary key columns if included in _col args", "user": {"value": 8513, "label": "runderwood"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-02-16T04:16:25Z", "updated_at": "2023-02-16T04:16:41Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2026", "body": "...while maintaining given order.\r\n\r\nFixes #1975 (if I'm understanding correctly).\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2026/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1452360613, "node_id": "I_kwDOBm6k_c5WkUOl", "number": 1895, "title": "Avoid using host name when building absolute URLs?", "user": {"value": 14294, "label": "hubgit"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-16T22:21:27Z", "updated_at": "2022-11-16T22:21:27Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When deploying Datasette to Cloud Run and rewriting certain routes from a Firebase app to the Cloud Run service, some of the URLs in the page start with `https://[service].run.app` rather than the (custom) domain of the Firebase app. \r\n\r\nI guess this is because a) the custom domain of the Firebase app isn't being passed through in the `host` header of the request to the Cloud Run instance and b) the `absolute_url` function in Datasette is using information from the request to build the URL.\r\n\r\nWould it be possible to not use the host name when building the absolute URLs, i.e. only include the path in the URL?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1895/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
{"id": 625922239, "node_id": "MDExOlB1bGxSZXF1ZXN0NDI0MDMyNDQ1", "number": 769, "title": "Backport of Python 3.8 shutil.copytree", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5471110, "label": "Datasette 0.43"}, "comments": 0, "created_at": "2020-05-27T18:17:15Z", "updated_at": "2020-05-27T20:21:56Z", "closed_at": "2020-05-27T18:17:44Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/769", "body": "Closes #744", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/769/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1196327155, "node_id": "I_kwDOBm6k_c5HToDz", "number": 1702, "title": "Be more consistent with column quoting", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-04-07T16:59:20Z", "updated_at": "2022-04-07T16:59:20Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This tutorial made me notice that Datasette is pretty inconsistent with how column quoting works: https://datasette.io/tutorials/learn-sql\r\n\r\nIt has examples of each of `\"table_name\"` and `[table_name]` and `table_name`, and it uses single quoted values too.\r\n\r\nDatasette should generate SQL as consistently as possible to support learners.\r\n\r\nThat tutorial should also provide a tiny bit of extra information about what's going on here.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1702/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
{"id": 1107557831, "node_id": "I_kwDOCGYnMM5CA_3H", "number": 386, "title": "Better \"contributing\" documentation", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-01-19T02:11:48Z", "updated_at": "2022-01-19T02:15:21Z", "closed_at": "2022-01-19T02:15:21Z", "author_association": "OWNER", "pull_request": null, "body": "This page jumps straight into running the tests: https://sqlite-utils.datasette.io/en/latest/contributing.html\r\n\r\nIt should add a little more about expected collaboration styles - opening an issue before filing a pull request - and probably link to https://simonwillison.net/2022/Jan/12/how-i-build-a-feature/", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/386/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}
{"id": 267516329, "node_id": "MDU6SXNzdWUyNjc1MTYzMjk=", "number": 6, "title": "Better JSON response options", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 0, "created_at": "2017-10-23T01:18:47Z", "updated_at": "2017-10-24T15:07:58Z", "closed_at": "2017-10-24T15:07:58Z", "author_association": "OWNER", "pull_request": null, "body": "Default returns this:\r\n\r\n    {\r\n        \u201cColumns\u201d: [\u201cid\u201d, \u201cname\u201d, \u201cage\u201d],\r\n        \u201cRows\u201d: [\r\n             [45, \u201cSimon\u201d, 36]\r\n        ]\r\n    }\r\n\r\n.jsono instead returns a list of objects each duplicating the headers in its keys.\r\n\r\nThey both probably share the same pagination mechanism so it might not be a jsono flat list.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}
{"id": 1182143895, "node_id": "I_kwDOBm6k_c5GdhWX", "number": 1691, "title": "Bug in pytest-httpx example", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-03-26T22:45:30Z", "updated_at": "2022-03-26T22:46:09Z", "closed_at": "2022-03-26T22:46:09Z", "author_association": "OWNER", "pull_request": null, "body": "https://docs.datasette.io/en/0.61.1/testing_plugins.html#testing-outbound-http-calls-with-pytest-httpx says:\r\n\r\n```python\r\nasync def test_outbound_http_call(httpx_mock):\r\n    httpx_mock.add_response(\r\n        url='https://www.example.com/',\r\n        data='Hello world',\r\n    )\r\n```\r\nThat's wrong - `data=` should be `text=`.\r\n\r\nhttps://github.com/Colin-b/pytest_httpx/blob/v0.20.0/README.md#reply-with-custom-body", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1691/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}
{"id": 703962917, "node_id": "MDU6SXNzdWU3MDM5NjI5MTc=", "number": 22, "title": "Bug: UI says sorted by relevance in timeline view", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-17T23:02:07Z", "updated_at": "2020-09-17T23:13:14Z", "closed_at": "2020-09-17T23:13:14Z", "author_association": "MEMBER", "pull_request": null, "body": "In regular timeline view sort defaults to newest, not relevance - so this UI is incorrect:\r\n\r\n\"Dogsheep_Beta_and_The_latest_news_from_the_Datasette_ecosystem_of_tools_-_Datasette_Newsletter_and_Ability_to_invite_people_to_a_team_who_don_t_have_acounts_yet_\u00b7_Issue__307_\u00b7_simonw_datasettecloud\"\r\n", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/22/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}
{"id": 591613579, "node_id": "MDU6SXNzdWU1OTE2MTM1Nzk=", "number": 41, "title": "Bug: recorded a since_id for None, None", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-01T04:29:43Z", "updated_at": "2020-04-01T04:31:11Z", "closed_at": "2020-04-01T04:31:11Z", "author_association": "MEMBER", "pull_request": null, "body": "This shouldn't happen in the `since_ids` table (relates to #39):\r\n\r\n\"twitter__since_ids__2_rows\"\r\n", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/41/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}
{"id": 340733753, "node_id": "MDExOlB1bGxSZXF1ZXN0MjAxMDc1NTMy", "number": 341, "title": "Bump aiohttp to fix compatibility with Python 3.7", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-07-12T17:41:24Z", "updated_at": "2018-07-12T18:07:38Z", "closed_at": "2018-07-12T18:07:38Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/341", "body": "Tests failed here: https://travis-ci.org/simonw/datasette/jobs/403223333", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/341/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}
{"id": 1491840863, "node_id": "PR_kwDOBm6k_c5FMKSG", "number": 1944, "title": "Bump black from 22.10.0 to 22.12.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-12T13:05:11Z", "updated_at": "2022-12-13T05:23:31Z", "closed_at": "2022-12-13T05:23:30Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/1944", "body": "Bumps [black](https://github.com/psf/black) from 22.10.0 to 22.12.0.\n
Release notes (sourced from black's releases):

22.12.0

Preview style
  • Enforce empty lines before classes and functions with sticky leading comments (#3302)
  • Reformat empty and whitespace-only files as either an empty file (if no newline is present) or as a single newline character (if a newline is present) (#3348)
  • Implicitly concatenated strings used as function args are now wrapped inside parentheses (#3307)
  • Correctly handle trailing commas that are inside a line's leading non-nested parens (#3370)

Configuration
  • Fix incorrectly applied .gitignore rules by considering the .gitignore location and the relative path to the target file (#3338)
  • Fix incorrectly ignoring .gitignore presence when more than one source directory is specified (#3336)

Parser
  • Parsing support has been added for walruses inside generator expressions that are passed as function args (for example, any(match := my_re.match(text) for text in texts)) (#3327).

Integrations
  • Vim plugin: Optionally allow using the system installation of Black via let g:black_use_virtualenv = 0 (#3309)

Changelog (sourced from black's changelog): same entries as the release notes above.

Commits
  • 2ddea29 Prepare release 22.12.0 (#3413)
  • 5b1443a release: skip bad macos wheels for now (#3411)
  • 9ace064 Bump peter-evans/find-comment from 2.0.1 to 2.1.0 (#3404)
  • 19c5fe4 Fix CI with latest flake8-bugbear (#3412)
  • d4a8564 Bump sphinx-copybutton from 0.5.0 to 0.5.1 in /docs (#3390)
  • 2793249 Wordsmith current_style.md (#3383)
  • d97b789 Remove whitespaces of whitespace-only files (#3348)
  • c23a5c1 Clarify that Black runs with --safe by default (#3378)
  • 8091b25 Correctly handle trailing commas that are inside a line's leading non-nested ...
  • ffaaf48 Compare each .gitignore found with an appropiate relative path (#3338)
  • Additional commits viewable in compare view
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=black&package-manager=pip&previous-version=22.10.0&new-version=22.12.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1944/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1645098678, "node_id": "PR_kwDOBm6k_c5NIQri", "number": 2047, "title": "Bump black from 22.12.0 to 23.3.0", "user": {"value": 49699333, "label": "dependabot[bot]"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-03-29T06:09:06Z", "updated_at": "2023-03-29T06:12:21Z", "closed_at": "2023-03-29T06:12:05Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/2047", "body": "Bumps [black](https://github.com/psf/black) from 22.12.0 to 23.3.0.\n
Release notes

Sourced from black's releases.


23.3.0


Highlights


This release fixes a longstanding confusing behavior in Black's GitHub action, where the version of the action did not determine the version of Black being run (issue #3382). In addition, there is a small bug fix around imports and a number of improvements to the preview style.


Please try out the preview style with black --preview and tell us your feedback. All changes in the preview style are expected to become part of Black's stable style in January 2024.


Stable style

  • Import lines with # fmt: skip and # fmt: off no longer have an extra blank line added when they are right after another import line (#3610) (sketched below)
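A small sketch of the # fmt: skip change above. The module below runs as-is; per the changelog entry, Black 23.3.0 no longer inserts an extra blank line between the two import lines just because the second one carries a # fmt: skip marker. The specific imports are illustrative.

import os
import sys  # fmt: skip

# Per #3610, the two imports above are now left adjacent; earlier releases are
# described as adding an extra blank line because of the "# fmt: skip" marker.
print(os.name, sys.platform)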

Preview style

  • Add trailing commas to collection literals even if there's a comment after the last entry (#3393) (see the sketch after this list)
  • async def, async for, and async with statements are now formatted consistently compared to their non-async version. (#3609)
  • with statements that contain two context managers will be consistently wrapped in parentheses (#3589)
  • Let string splitters respect East Asian Width (#3445)
  • Now long string literals can be split after East Asian commas and periods (、 U+3001 IDEOGRAPHIC COMMA, 。 U+3002 IDEOGRAPHIC FULL STOP, & ， U+FF0C FULLWIDTH COMMA) besides before spaces (#3445)
  • For stubs, enforce one blank line after a nested class with a body other than just ... (#3564)
  • Improve handling of multiline strings by changing line split behavior (#1879)
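As a sketch of the first preview-style bullet above (trailing commas even when a comment follows the last entry): the literal below is plain Python that runs as-is, and the comment shows roughly the result the changelog describes under black --preview. The variable name and values are illustrative assumptions.

SUPPORTED_SUFFIXES = [
    ".py",
    ".pyi"  # stub files
]

# With the 23.3.0 preview style, a trailing comma is reported to be added after
# ".pyi" even though a comment follows the last entry, roughly:
#
#   SUPPORTED_SUFFIXES = [
#       ".py",
#       ".pyi",  # stub files
#   ]

print(SUPPORTED_SUFFIXES)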

Parser

  • Added support for formatting files with invalid type comments (#3594) (sketched below)
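A short sketch of the parser change above: the function below carries a deliberately malformed type comment (the bracket is unbalanced). Type comments are inert at runtime, so the module still runs; per #3594, Black 23.3.0 can now format such a file rather than rejecting it. The function itself is an illustrative assumption.

def scale(values, factor):
    # type: (list[int, int) -> list[int]
    # The type comment above is intentionally invalid; per the changelog, Black
    # 23.3.0 formats the file anyway instead of reporting it as unparseable.
    return [value * factor for value in values]


print(scale([1, 2, 3], 2))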

Integrations

  • Update GitHub Action to use the version of Black equivalent to action's version if version input is not specified (#3543)
  • Fix missing Python binary path in autoload script for vim (#3508)

Documentation

  • Document that only the most recent release is supported for security issues; vulnerabilities should be reported through Tidelift (#3612)

... (truncated)

Changelog

Sourced from black's changelog.


23.3.0


Highlights


This release fixes a longstanding confusing behavior in Black's GitHub action, where the version of the action did not determine the version of Black being run (issue #3382). In addition, there is a small bug fix around imports and a number of improvements to the preview style.


Please try out the preview style with black --preview and tell us your feedback. All changes in the preview style are expected to become part of Black's stable style in January 2024.


Stable style

  • Import lines with # fmt: skip and # fmt: off no longer have an extra blank line added when they are right after another import line (#3610)

Preview style

  • Add trailing commas to collection literals even if there's a comment after the last entry (#3393)
  • async def, async for, and async with statements are now formatted consistently compared to their non-async version. (#3609)
  • with statements that contain two context managers will be consistently wrapped in parentheses (#3589)
  • Let string splitters respect East Asian Width (#3445)
  • Now long string literals can be split after East Asian commas and periods (、 U+3001 IDEOGRAPHIC COMMA, 。 U+3002 IDEOGRAPHIC FULL STOP, & ， U+FF0C FULLWIDTH COMMA) besides before spaces (#3445)
  • For stubs, enforce one blank line after a nested class with a body other than just ... (#3564)
  • Improve handling of multiline strings by changing line split behavior (#1879)

Parser

  • Added support for formatting files with invalid type comments (#3594)

Integrations

  • Update GitHub Action to use the version of Black equivalent to action's version if version input is not specified (#3543)
  • Fix missing Python binary path in autoload script for vim (#3508)

Documentation

  • Document that only the most recent release is supported for security issues; vulnerabilities should be reported through Tidelift (#3612)

... (truncated)

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=black&package-manager=pip&previous-version=22.12.0&new-version=23.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--2047.org.readthedocs.build/en/2047/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2047/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null}