{"id": 803356942, "node_id": "MDU6SXNzdWU4MDMzNTY5NDI=", "number": 1218, "title": " /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory", "user": {"value": 11855322, "label": "robmarkcole"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-02-08T09:07:00Z", "updated_at": "2021-02-23T12:12:17Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Error as above, however I do have python3.8 and the readme indicates this is supported.\r\n\r\n```\r\n(venv) (base) Robins-MacBook:datasette robin$ ls /usr/local/opt/python3/bin/\r\n\r\n.. pip3 python3 python3.8\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1218/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1871935751, "node_id": "I_kwDOD079W85vk3kH", "number": 40, "title": " ImportError: cannot import name 'formatargspec' from 'inspect'", "user": {"value": 36752421, "label": "hosslikw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-08-29T15:36:31Z", "updated_at": "2023-08-31T03:18:07Z", "closed_at": "2023-08-31T03:18:06Z", "author_association": "NONE", "pull_request": null, "body": "I get the following error when running \"pip3 install dogsheep-photos\"\r\n\" from inspect import ismethod, isclass, formatargspec\r\n ImportError: cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/inspect.py). 
Did you mean: 'formatargvalues'?\"\r\n \r\nPython 3.12.0rc1\r\nsqlite 3.43.0\r\ndatasette, version 0.64.3", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/40/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1622640374, "node_id": "I_kwDOCGYnMM5gt4b2", "number": 534, "title": " ResourceWarning: unclosed file", "user": {"value": 1244826, "label": "djhenderson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-03-14T03:02:18Z", "updated_at": "2023-05-08T19:56:29Z", "closed_at": "2023-05-08T19:56:29Z", "author_association": "NONE", "pull_request": null, "body": "Issuing either\r\n\r\n```\r\npy -Wdefault -m sqlite_utils insert dogs.db dogs dogs0.csv --csv\r\n [#############-----------------------] 36%\r\n [####################################] 100%C:\\Users\\Doug\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlite_utils\\cli.py:1187: ResourceWarning: unclosed file <_io.TextIOWrapper name='dogs0.csv' encoding='utf-8-sig'>\r\n insert_upsert_implementation(\r\nResourceWarning: Enable tracemalloc to get the object allocation traceback\r\n```\r\nor\r\n```\r\nset pythonwarnings=default\r\nsqlite-utils insert dogs.db dogs dogs0.csv --csv\r\n [#############-----------------------] 36%\r\n [####################################] 100%C:\\Users\\Doug\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlite_utils\\cli.py:1187: ResourceWarning: unclosed file <_io.TextIOWrapper name='dogs0.csv' encoding='utf-8-sig'>\r\n insert_upsert_implementation(\r\nResourceWarning: Enable tracemalloc to get the object allocation traceback\r\n```\r\n\r\nexhibits a ResourceWarning indicating that the CSV file being loaded is not closed.\r\n\r\nsqlite-utils --version\r\nsqlite-utils, version 3.30\r\npy --version\r\nPython 3.11.2\r\nWindows Version 10.0.19045 Build 19045\r\nSQLite version 3.41.0 2023-02-21 18:09:37\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/534/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1943259395, "node_id": "I_kwDOEhK-wc5z08kD", "number": 16, "title": " time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ'", "user": {"value": 3746270, "label": "linonetwo"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-14T13:24:39Z", "updated_at": "2023-10-14T13:24:39Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "\r\n```\r\nevernote-to-sqlite enex evernote.db ./\u6211\u7684\u7b14\u8bb0.enex\r\nImporting from ENEX [#####-------------------------------] 14%\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/evernote-to-sqlite\", line 8, in \r\n sys.exit(cli())\r\n ^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1157, in __call__\r\n return self.main(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1078, in main\r\n rv = self.invoke(ctx)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1688, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 1434, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/click/core.py\", line 783, in invoke\r\n return __callback(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/cli.py\", line 31, in enex\r\n save_note(db, note)\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py\", line 46, in save_note\r\n \"created\": convert_datetime(created),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py\", line 111, in convert_datetime\r\n return datetime.datetime.strptime(s, \"%Y%m%dT%H%M%SZ\").isoformat()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py\", line 568, in _strptime_datetime\r\n tt, fraction, gmtoff_fraction = _strptime(data_string, format)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py\", line 349, in _strptime\r\n raise ValueError(\"time data %r does not match format %r\" %\r\nValueError: time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ'\r\n```\r\n\r\nenex is exported by evernote mac client ", "repo": {"value": 303218369, "label": "evernote-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1665053646, "node_id": "I_kwDOBm6k_c5jPrPO", "number": 2059, "title": "\"Deceptive site ahead\" alert on Heroku deployment", "user": {"value": 1186275, "label": "mtdukes"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-04-12T18:34:51Z", "updated_at": "2023-04-13T01:13:01Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I deployed a fairly basic instance of Datasette (`datasette-auth-passwords` is the only plugin) using Heroku. The deployed URL now gives a \"Deceptive site ahead\" warning to users.\r\n\r\nIs there way around this? Maybe a way to add ownership verification [through Google's search console](https://search.google.com/search-console/welcome)? 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2059/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1180427792, "node_id": "I_kwDOCGYnMM5GW-YQ", "number": 421, "title": "\"Error: near \"(\": syntax error\" when using sqlite-utils indexes CLI", "user": {"value": 24938923, "label": "learning4life"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2022-03-25T07:12:51Z", "updated_at": "2022-04-13T22:41:59Z", "closed_at": "2022-04-13T22:41:59Z", "author_association": "NONE", "pull_request": null, "body": "This bug relates to https://github.com/simonw/sqlite-utils/issues/408#issuecomment-1066139147\r\n\r\n**New error when using CLI: \"sqlite-utils indexes global.db --table\"**\r\n\r\n```\r\n(app-root) sqlite-utils indexes global.db --table\r\nError: near \"(\": syntax error\r\n(app-root) sqlite-utils --version\r\nsqlite-utils, version 3.25.1\r\n(app-root) sqlite3 --version\r\n3.36.0 2021-06-18 18:36:39\r\n(app-root) python --version\r\nPython 3.8.11\r\n```\r\n\r\n\r\nDockerfile\r\n```\r\nFROM centos/python-38-centos7\r\n\r\nUSER root\r\n\r\nRUN yum update -y\r\nRUN yum upgrade -y\r\n\r\n\r\n# epel\r\nRUN yum -y install epel-release && yum clean all\r\n\r\n# SQLite\r\nRUN yum -y install zlib-devel geos geos-devel proj proj-devel freexl freexl-devel libxml2-devel \r\n\r\nWORKDIR /build/\r\nCOPY sqlite-autoconf-3360000.tar.gz ./\r\nRUN tar -zxf sqlite-autoconf-3360000.tar.gz\r\nWORKDIR /build/sqlite-autoconf-3360000\r\nRUN ./configure\r\nRUN make\r\nRUN make install\r\n\r\n# \r\nRUN /opt/app-root/bin/python3.8 -m pip install --upgrade pip\r\nRUN pip install sqlite-utils\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/421/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 959999095, "node_id": "MDU6SXNzdWU5NTk5OTkwOTU=", "number": 1421, "title": "\"Query parameters\" form shows wrong input fields if query contains \"03:31\" style times", "user": {"value": 6988, "label": "j4mie"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 11, "created_at": "2021-08-04T07:29:04Z", "updated_at": "2021-08-09T03:41:07Z", "closed_at": "2021-08-09T03:33:02Z", "author_association": "NONE", "pull_request": null, "body": "Datasette version `0.58.1`.\r\n\r\nI'm guessing this is a bug in the code that looks for `:param`-style query parameters..\r\n\r\n\"image\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1421/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 573583971, "node_id": "MDU6SXNzdWU1NzM1ODM5NzE=", "number": 689, "title": "\"Templates considered\" comment broken in >=0.35", "user": {"value": 35075, 
"label": "chrishas35"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2020-03-01T17:31:21Z", "updated_at": "2020-04-05T19:39:44Z", "closed_at": "2020-04-05T19:39:44Z", "author_association": "NONE", "pull_request": null, "body": "Noticed that the \"Templates Considered\" comment is missing in 0.37. Believe I traced it back to #664 as you can see it in https://v0-34.datasette.io/ but not https://v0-35.datasette.io/. Looking at the template context debug between the two you can see what is missing from 0.35 vs. 0.34:\r\n\r\n```diff\r\n< \"datasette_version\": \"0.34\",\r\n< \"app_css_hash\": \"ffa51a\",\r\n< \"select_templates\": [\r\n< \"*index.html\"\r\n< ],\r\n< \"zip\": \"\",\r\n< \"body_scripts\": [],\r\n< \"extra_css_urls\": \"\",\r\n< \"extra_js_urls\": \"\",\r\n< \"format_bytes\": \"\",\r\n< \"database_url\": \">\",\r\n< \"database_color\": \">\"\r\n---\r\n> \"datasette_version\": \"0.35\",\r\n> \"database_url\": \">\",\r\n> \"database_color\": \">\"\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/689/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 760312579, "node_id": "MDU6SXNzdWU3NjAzMTI1Nzk=", "number": 1134, "title": "\"_searchmode=raw\" throws an index out of range error when combined with \"_search_COLUMN\"", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-12-09T13:05:37Z", "updated_at": "2020-12-10T05:57:17Z", "closed_at": "2020-12-09T19:56:55Z", "author_association": "NONE", "pull_request": null, "body": "Hi Simon!\r\nMaybe it's just me, but when [using _searchmode=raw (trying to enable wildcard-searching) in combination with the \"_search_COLUMN\"-table argument](https://byraadsarkivet.aarhus.dk/db/cases?_searchmode=raw&_search_title=sundhedsfrem*), I get a list index out of range error. [When combining with the simpler \"_search\"-argument everything works, including wildcard-seaches.](https://byraadsarkivet.aarhus.dk/db/cases?_search=sundhedsfrem*&_searchmode=raw). 
Here's the traceback:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 122, in route_path\r\n return await view(new_scope, receive, send)\r\n File \"/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 196, in view\r\n request, **scope[\"url_route\"][\"kwargs\"]\r\n File \"/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py\", line 204, in get\r\n request, database, hash, correct_hash_provided, **kwargs\r\n File \"/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py\", line 342, in view_get\r\n request, database, hash, **kwargs\r\n File \"/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/table.py\", line 393, in data\r\n search_col = key.split(\"_search_\", 1)[1]\r\nIndexError: list index out of range\r\n\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1134/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 457147936, "node_id": "MDU6SXNzdWU0NTcxNDc5MzY=", "number": 512, "title": "\"about\" parameter in metadata does not appear when alone", "user": {"value": 7936571, "label": "chrismp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-06-17T21:04:20Z", "updated_at": "2019-10-11T15:49:13Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Here's an example of metadata I have for one database on datasette.\r\n\r\n```\r\n\"Records-requests\": {\r\n\t\"tables\": {\r\n\t\t\"Some table\": {\r\n\t\t\t\"about\": \"This table has data.\"\r\n\t\t}\r\n\t}\r\n}\r\n```\r\n\r\nThe text in `about` does not show up when I publish the data. 
But it shows up after I add a `\"source\"` parameter in the metadata.\r\n\r\nIs this intended?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/512/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1303169663, "node_id": "I_kwDOCGYnMM5NrMp_", "number": 453, "title": "'unclosed file' warning when using insert_upsert_implementation from Python", "user": {"value": 311257, "label": "makkus"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-07-13T09:34:35Z", "updated_at": "2022-07-15T21:52:25Z", "closed_at": "2022-07-15T21:52:21Z", "author_association": "NONE", "pull_request": null, "body": "I'm using the `[insert_upsert_implementation](https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/cli.py)` function directly in my Python code to import a csv file with all the bells and whistles `sqlite-utils` provides, but I'm getting a resource warning that an io.TextIOWrapper object is not closed.\r\n\r\nThe warning goes away when wrapping the code from [this line](https://github.com/simonw/sqlite-utils/blob/42440d6345c242ee39778045e29143fb550bd2c2/sqlite_utils/cli.py#L924) in a try/finally block like:\r\n\r\n```\r\ntry:\r\n ...\r\n ...\r\nfinally:\r\n decoded.close()\r\n```\r\n(might be that `sniff_buffer` must also be closed if non-null, but I might be wrong)\r\n\r\nI suspect Python closes the reference automatically when the sqlite-utils cli run is done, but since my code doesn't exit, I'm getting the warning.\r\n\r\nAlternatively, it'd be cool if the 'import csv/tsv' functionality could be added directly to the Database class.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/453/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 567902704, "node_id": "MDU6SXNzdWU1Njc5MDI3MDQ=", "number": 675, "title": "--cp option for datasette publish and datasette package for shipping additional files and directories", "user": {"value": 141844, "label": "aviflax"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 12, "created_at": "2020-02-19T22:55:56Z", "updated_at": "2020-12-28T18:49:21Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I\u2019m working on integrating Datasette into a documentation-oriented publishing workflow internally in my company, and in order to deploy the Docker image created by `datasette package` I need to add an additional file to the image \u2014 in my case, it\u2019s a sort of a deployment directive. 
I\u2019ve worked out a way to do this after the image has been created, but it\u2019s convoluted and brittle.\r\n\r\nSo it\u2019d be excellent if there was an additional option for this command, something like `--copy`.\r\n\r\nI\u2019d envision it looking something like:\r\n\r\n```shell\r\n$ datasette package --copy /the/source/path:/the/target/path data.db\r\n```\r\n\r\nI\u2019d be happy to help design, specify, implement, and test this feature, if you\u2019d be interested.\r\n\r\nThanks for the fantastic tools!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/675/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1931794126, "node_id": "I_kwDOBm6k_c5zJNbO", "number": 2198, "title": "--load-extension=spatialite not working with Windows", "user": {"value": 363004, "label": "hcarter333"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-10-08T12:50:22Z", "updated_at": "2023-10-08T12:50:22Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Using each of\r\n`python -m datasette counties.db -m metadata.yml --load-extension=SpatiaLite`\r\n\r\nand \r\n\r\n`python -m datasette counties.db --load-extension=\"C:\\Windows\\System32\\mod_spatialite.dll\"`\r\n\r\nand\r\n\r\n`python -m datasette counties.db --load-extension=C:\\Windows\\System32\\mod_spatialite.dll`\r\n\r\nI got the error:\r\n\r\n```\r\n File \"C:\\Users\\m3n7es\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\datasette\\database.py\", line 209, in in_thread\r\n self.ds._prepare_connection(conn, self.name)\r\n File \"C:\\Users\\m3n7es\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\datasette\\app.py\", line 596, in _prepare_connection\r\n conn.execute(\"SELECT load_extension(?, ?)\", [path, entrypoint])\r\nsqlite3.OperationalError: The specified module could not be found.\r\n\r\n```\r\n\r\nI finally tried modifying the code in app.py to read:\r\n\r\n```\r\n def _prepare_connection(self, conn, database):\r\n conn.row_factory = sqlite3.Row\r\n conn.text_factory = lambda x: str(x, \"utf-8\", \"replace\")\r\n if self.sqlite_extensions:\r\n conn.enable_load_extension(True)\r\n for extension in self.sqlite_extensions:\r\n # \"extension\" is either a string path to the extension\r\n # or a 2-item tuple that specifies which entrypoint to load.\r\n #if isinstance(extension, tuple):\r\n # path, entrypoint = extension\r\n # conn.execute(\"SELECT load_extension(?, ?)\", [path, entrypoint])\r\n #else:\r\n conn.execute(\"SELECT load_extension('C:\\Windows\\System32\\mod_spatialite.dll')\")\r\n\r\n```\r\nAt which point the counties example worked. \r\n\r\nIs there a correct way to install/use the extension on Windows? 
My method will cause issues if there's a second extension to be used.\r\n\r\nOn an unrelated note, my next step is to figure out how to write a query across the two loaded databases supplied from the command line:\r\n`python -m datasette rm_toucans_23_10_07.db counties.db -m metadata.yml --load-extension=SpatiaLite`\r\n\r\n\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2198/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 555832585, "node_id": "MDU6SXNzdWU1NTU4MzI1ODU=", "number": 661, "title": "--port option to expose a port other than 8001 in \"datasette package\"", "user": {"value": 134771, "label": "dvhthomas"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-01-27T21:05:56Z", "updated_at": "2020-01-30T04:17:52Z", "closed_at": "2020-01-29T22:46:45Z", "author_association": "NONE", "pull_request": null, "body": "I see how to alter the port using `datasette serve -p XXX` per the docs. However, I'm packaging it up to serve the container on AppEngine flexible, which [requires](https://cloud.google.com/appengine/docs/flexible/custom-runtimes/build#listening_to_port_8080) that the container is serving traffic on port 8080.\r\n\r\nhttps://github.com/simonw/datasette/blob/7950105c278b140e6cb665c68b59df219870f9bc/Dockerfile#L41\r\n\r\nIs there a way to inject a non-default port into the Dockerfile, or should I just do something like `sed` to replace 8001 with 8080 after `datasette package` has done its thing? 
Thanks for the advice.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/661/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 771324837, "node_id": "MDU6SXNzdWU3NzEzMjQ4Mzc=", "number": 53, "title": "--since support for favorites", "user": {"value": 27, "label": "anotherjesse"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-12-19T07:08:23Z", "updated_at": "2020-12-19T07:47:11Z", "closed_at": "2020-12-19T07:47:11Z", "author_association": "NONE", "pull_request": null, "body": "Having support for `--since` for updating your favorites would be ideal as the api is both slow and it only returns ~3k most recent favorites.\r\n\r\nhttps://twittercommunity.com/t/cant-get-all-favorite-tweets-by-rest-api/22007/3\r\n\r\nThe api seems to take an optional `since_id` parameter - https://developer.twitter.com/en/docs/twitter-api/v1/tweets/post-and-engage/api-reference/get-favorites-list", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1345452427, "node_id": "I_kwDODLZ_YM5QMfmL", "number": 11, "title": "-a option is used for \"--auth\" and for \"--all\"", "user": {"value": 2467, "label": "fernand0"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-08-21T10:50:48Z", "updated_at": "2022-08-21T21:11:57Z", "closed_at": "2022-08-21T21:11:57Z", "author_association": "NONE", "pull_request": null, "body": "I'm not sure which option is best, instead of -a -all.", "repo": {"value": 213286752, "label": "pocket-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/11/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 702386948, "node_id": "MDU6SXNzdWU3MDIzODY5NDg=", "number": 159, "title": ".delete_where() does not auto-commit (unlike .insert() or .upsert())", "user": {"value": 11712349, "label": "spdkils"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2020-09-16T01:55:52Z", "updated_at": "2023-04-01T17:21:05Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When you use the delete_where() function on a table, it never commits....\r\n\r\nIs that intentional?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/159/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1199158210, 
"node_id": "I_kwDOCGYnMM5HebPC", "number": 423, "title": ".extract() doesn't set foreign key when extracted columns contain NULL value", "user": {"value": 37447552, "label": "jlieth"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-04-10T20:05:30Z", "updated_at": "2022-08-27T14:45:04Z", "closed_at": "2022-08-27T14:45:04Z", "author_association": "NONE", "pull_request": null, "body": "I've run into an issue with `extract` and I don't believe this is the intended behaviour.\r\n\r\nI'm working with a database with music listening information. Currently it has one large table `listens` that contains all information. I'm trying to normalize the database by extracting relevant columns to separate tables (`artists`, `tracks`, `albums`). Not every track has an album.\r\n\r\nA simplified demonstration with just `track_title` and `album_title` columns:\r\n```ipython\r\nIn [1]: import sqlite_utils\r\n\r\nIn [2]: db = sqlite_utils.Database(memory=True)\r\n\r\nIn [3]: db[\"listens\"].insert_all([\r\n ...: {\"id\": 1, \"track_title\": \"foo\", \"album_title\": \"bar\"},\r\n ...: {\"id\": 2, \"track_title\": \"baz\", \"album_title\": None}\r\n ...: ], pk=\"id\")\r\nOut[3]: \r\n```\r\n\r\nThe track in the first row has an album, the second track doesn't. Now I extract album information into a separate column:\r\n```ipython\r\nIn [4]: db[\"listens\"].extract(columns=[\"album_title\"], table=\"albums\", fk_column=\"album_id\")\r\nOut[4]:
\r\n\r\nIn [5]: list(db[\"albums\"].rows)\r\nOut[5]: [{'id': 1, 'album_title': 'bar'}, {'id': 2, 'album_title': None}]\r\n\r\nIn [6]: list(db[\"listens\"].rows)\r\nOut[6]: \r\n[{'id': 1, 'track_title': 'foo', 'album_id': 1},\r\n {'id': 2, 'track_title': 'baz', 'album_id': None}]\r\n```\r\n\r\nThis behaves as expected -- the `albums` table contains entries for both the existing album and the NULL album. The `listens` table has a foreign key only for the first row (since the album in the second row was empty).\r\n\r\nNow I want to extract the track information as well. Album information belongs to the track so I want to extract both columns to a new table.\r\n```ipython\r\nIn [7]: db[\"listens\"].extract(columns=[\"track_title\", \"album_id\"], table=\"tracks\", fk_column=\"track_id\")\r\nOut[7]: <Table listens (id, track_id)>
\r\n\r\nIn [8]: list(db[\"tracks\"].rows)\r\nOut[8]: \r\n[{'id': 1, 'track_title': 'foo', 'album_id': 1},\r\n {'id': 2, 'track_title': 'baz', 'album_id': None}]\r\n\r\nIn [9]: list(db[\"listens\"].rows)\r\nOut[9]: [{'id': 1, 'track_id': 1}, {'id': 2, 'track_id': None}]\r\n```\r\n\r\nExtracting to the `tracks` table worked fine (both tracks are present with correct columns). However, the `listens` table only has a foreign key to the newly created tracks for the first row, the foreign key in the second row is NULL.\r\n\r\nChanging the order of extracts doesn't help.\r\n\r\nI poked around in the source a bit and I believe [this line](https://github.com/simonw/sqlite-utils/blob/433813612ff9b4b501739fd7543bef0040dd51fe/sqlite_utils/db.py#L1737) (essentially comparing `NULL = NULL`) is the problem, but I don't know enough about SQL to create a reliable fix myself.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/423/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 435819321, "node_id": "MDU6SXNzdWU0MzU4MTkzMjE=", "number": 436, "title": "400 Error when trying to register new user via https://publish.datasettes.com/", "user": {"value": 317694, "label": "nniiicc"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-04-22T17:55:00Z", "updated_at": "2021-01-04T20:15:42Z", "closed_at": "2021-01-04T20:15:41Z", "author_association": "NONE", "pull_request": null, "body": "Behavior: When registering a new user via Zeit - confirmation is sent and screen acknowledges registered user... When clicking grant access the next screen is a white 400 error message. 
\r\n\r\nReplicated: Chrome and Firefox; 2 different email accounts", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/436/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 951817328, "node_id": "MDU6SXNzdWU5NTE4MTczMjg=", "number": 12, "title": "403 when getting token", "user": {"value": 285352, "label": "treyhunner"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-07-23T18:43:26Z", "updated_at": "2021-10-12T18:31:57Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I tried to use https://your-foursquare-oauth-token.glitch.me/ to get my Swarm auth token and got a 403 after I clicked the Allow button:\r\n\r\n![image](https://user-images.githubusercontent.com/285352/126826478-60e53614-263d-40bb-9f1d-c1a676644eb0.png)\r\n\r\nI'm not sure if this is the right repo to report this in", "repo": {"value": 205429375, "label": "swarm-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 810397025, "node_id": "MDU6SXNzdWU4MTAzOTcwMjU=", "number": 1228, "title": "500 error caused by faceting if a column called `n` exists", "user": {"value": 7107523, "label": "Kabouik"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-02-17T17:41:20Z", "updated_at": "2022-03-19T06:44:40Z", "closed_at": "2022-03-19T01:38:04Z", "author_association": "NONE", "pull_request": null, "body": "I recently discovered `datasette` thanks to your great talk at FOSDEM and would like to use it for some projects. However, when trying to use it on databases created from some csv or tsv files, I am sometimes getting this issue when going to http://127.0.0.1:8001/databasetest/databasetest and I don't exactly understand what it refers to.\r\n\r\nSo far, I couldn't find anything relevant when reviewing the raw text files that could explain this issue, nor could I find something obvious between the files that generate this issue and those that don't. 
Does the error ring a bell and, if so, could you please point me to the right direction?\r\n\r\n```\r\n$ datasette databasetest.db \r\nINFO: Started server process [1408482]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nINFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)\r\nINFO: 127.0.0.1:56394 - \"GET / HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56394 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56396 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56398 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nTraceback (most recent call last):\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/app.py\", line 1099, in route_path\r\n response = await view(request, send)\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 147, in view\r\n request, **request.scope[\"url_route\"][\"kwargs\"]\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 121, in dispatch_request\r\n return await handler(request, *args, **kwargs)\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 260, in get\r\n request, database, hash, correct_hash_provided, **kwargs\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/base.py\", line 434, in view_get\r\n request, database, hash, **kwargs\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/views/table.py\", line 782, in data\r\n suggested_facets.extend(await facet.suggest())\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/facets.py\", line 168, in suggest\r\n and any(r[\"n\"] > 1 for r in distinct_values)\r\n File \"/home/kabouik/.local/lib/python3.7/site-packages/datasette/facets.py\", line 168, in \r\n and any(r[\"n\"] > 1 for r in distinct_values)\r\nTypeError: '>' not supported between instances of 'str' and 'int'\r\nINFO: 127.0.0.1:56402 - \"GET /databasetest/databasetest HTTP/1.1\" 500 Internal Server Error\r\nINFO: 127.0.0.1:56402 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET / HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56406 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /databasetest HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56406 - \"GET /-/static/codemirror-5.57.0.min.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56410 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56414 - \"GET /-/static/codemirror-5.57.0-sql.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56412 - \"GET /-/static/codemirror-5.57.0.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static/sql-formatter-2.3.3.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /databasetest?sql=select+*+from+databasetest HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56410 - \"GET /-/static/app.css?4e362c HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static-plugins/datasette_vega/main.2acbb312.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56412 - \"GET 
/-/static/codemirror-5.57.0.min.css HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56404 - \"GET /-/static/sql-formatter-2.3.3.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56406 - \"GET /-/static/codemirror-5.57.0.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56414 - \"GET /-/static-plugins/datasette_vega/main.08f5d3d8.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56408 - \"GET /-/static/codemirror-5.57.0-sql.min.js HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:56410 - \"GET /databasetest.json?sql=select+*+from+databasetest&_shape=array&_shape=array HTTP/1.1\" 200 OK\r\n^CINFO: Shutting down\r\nINFO: Waiting for application shutdown.\r\nINFO: Application shutdown complete.\r\nINFO: Finished server process [1408482]\r\n```\r\n\r\nNote that there is no error if I go to http://127.0.0.1:8001/databasetest and then click on `Run SQL`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1228/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 292011379, "node_id": "MDU6SXNzdWUyOTIwMTEzNzk=", "number": 184, "title": "500 from missing table name", "user": {"value": 222245, "label": "carlmjohnson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2018-01-26T19:46:45Z", "updated_at": "2019-05-21T16:17:29Z", "closed_at": "2018-04-13T18:18:59Z", "author_association": "NONE", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/56623e48da5412b25fb39cc26b9c743b684dd968/datasette/app.py#L517-L519 throws an error if it gets an empty list back. Simplest solution is to write a helper func that just says \r\n\r\n```python\r\nresult = list(await self.execute(name, sql, params))\r\nif result:\r\n return result[0][0]\r\n```\r\n\r\nand use it anywhere `[0][0]` is now.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/184/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 568091133, "node_id": "MDU6SXNzdWU1NjgwOTExMzM=", "number": 676, "title": "?_searchmode=raw option for running FTS searches without escaping characters", "user": {"value": 58088336, "label": "tunguyenatwork"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2020-02-20T06:56:57Z", "updated_at": "2020-02-25T05:57:24Z", "closed_at": "2020-02-25T05:56:04Z", "author_association": "NONE", "pull_request": null, "body": "After version 0.34, I am not able to use a wildcard in the _search option (or the full text search). It will not return any results unless I specify the whole word for text search. 
\r\n\r\nIf I use 'match :search || \"*\" ' in the sql statement then it will work as expected.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/676/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 995098231, "node_id": "MDU6SXNzdWU5OTUwOTgyMzE=", "number": 1470, "title": "?_sort=rowid with _next= returns error", "user": {"value": 19851673, "label": "eigenfoo"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-09-13T16:36:15Z", "updated_at": "2021-10-18T19:30:15Z", "closed_at": "2021-10-10T01:15:03Z", "author_association": "NONE", "pull_request": null, "body": "For example:\r\n\r\n- Go to https://cryptics.eigenfoo.xyz/clues/clues?_next=100 (this is the second page of results in a Datasette site)\r\n- Search anything using the FTS search bar. For example, searching for `hello` will take you to https://cryptics.eigenfoo.xyz/clues/clues?_search=hello&_sort=rowid&_next=100\r\n- A `500 Error: list index out of range` is raised.\r\n\r\nThis is because the search URL includes the `&_next=100` query parameter, carried over from where the FTS search was run. However, there isn't a second page in the search results, so a `list index out of range` error is raised. You can confirm that removing this query parameter from the URL returns the appropriate search results.\r\n\r\nThe FTS search request should strip any `_next` query parameter.\r\n\r\n---\r\n\r\n```bash\r\ndatasette, version 0.58.1\r\nsqlite-utils, version 3.17\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1470/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 277589569, "node_id": "MDU6SXNzdWUyNzc1ODk1Njk=", "number": 155, "title": "A primary key column that has a foreign key restriction associated won't render the label column", "user": {"value": 388154, "label": "wsxiaoys"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2949431, "label": "Custom templates edition"}, "comments": 4, "created_at": "2017-11-29T00:40:02Z", "updated_at": "2017-12-07T05:39:53Z", "closed_at": "2017-12-07T05:39:53Z", "author_association": "NONE", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/155/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 448189298, "node_id": "MDU6SXNzdWU0NDgxODkyOTg=", "number": 486, "title": "Ability to add extra routes and related templates", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-05-24T14:04:25Z", "updated_at": "2019-05-24T14:43:28Z", "closed_at": "2019-05-24T14:43:09Z", "author_association": 
"NONE", "pull_request": null, "body": "Hi Simon\r\n\r\nThank for an excellent job! Datasette is such an obviously good idea (once you have that idea!) and so well done. The only thing that I miss, is the ability to add extras routes (with associated jinja2-templates). For most of the datasets, that I would like to publish, I would also like at least a page, that describes the data (semantics, provenance, biases...) and a page explaining our cookie- and privacy-policies (which would allows us to use something like Goggle Analytics).\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/486/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 276842536, "node_id": "MDU6SXNzdWUyNzY4NDI1MzY=", "number": 153, "title": "Ability to customize presentation of specific columns in HTML view", "user": {"value": 20264, "label": "ftrain"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2949431, "label": "Custom templates edition"}, "comments": 14, "created_at": "2017-11-26T17:46:11Z", "updated_at": "2017-12-10T02:08:45Z", "closed_at": "2017-12-07T06:17:33Z", "author_association": "NONE", "pull_request": null, "body": "This ties into https://github.com/simonw/datasette/issues/3 in some ways. It would be great to have some adaptability in the HTML views and to specific some columns as displaying in certain ways.\r\n\r\n- [x] 1. **Auto-parsing URIs into in-browser links.** Why? Lots of public data around cultural commons stuff links to a specific URL. This would be a great utility to turn on at the command line, just parse everything for URLs. Maybe they need to be underlined or represented in a different way than internal URLs.\r\n- [x] 2. **Ability to identify a column as plain/preformatted text.** Why? Was trying to import the Enron emails, the body collapses. Hard to read. These fields also tend to screw up the ability to scan a table view. If you knew it was text the system could set an `overflow` property on the relevant CSS, so you could still scan.\r\n- [x] 3. **Ability to identify a column as HTML.** Why? 
I want to spider some stuff and drop sections into SQLite, and just keep them as HTML.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/153/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1754174496, "node_id": "I_kwDOCGYnMM5ojpQg", "number": 558, "title": "Ability to define unique columns when creating a table", "user": {"value": 1910303, "label": "aguinane"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-06-13T06:56:19Z", "updated_at": "2023-08-18T01:06:03Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When creating a new table, it would be good to have an option to set unique columns similar to how not_null is set.\r\n\r\n```python\r\nfrom sqlite_utils import Database\r\n\r\ncolumns = {\"mRID\": str, \"name\": str}\r\ndb = Database(\"example.db\")\r\ndb[\"ExampleTable\"].create(columns, pk=\"mRID\", not_null=[\"mRID\"], if_not_exists=True)\r\ndb[\"ExampleTable\"].create_index([\"mRID\"], unique=True, if_not_exists=True)\r\n```\r\n\r\nSo something like this would add the UNIQUE flag to the table definition. \r\n\r\n```python\r\ndb[\"ExampleTable\"].create(columns, pk=\"mRID\", not_null=[\"mRID\"], unique=[\"mRID\"], if_not_exists=True)\r\n```\r\n\r\n```sql\r\nCREATE TABLE ExampleTable (\r\n mRID TEXT PRIMARY KEY\r\n NOT NULL\r\n UNIQUE,\r\n name TEXT\r\n);\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/558/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1382457780, "node_id": "I_kwDOCGYnMM5SZqG0", "number": 490, "title": "Ability to insert multi-line files", "user": {"value": 6180701, "label": "jeqo"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-09-22T13:29:22Z", "updated_at": "2022-09-26T18:24:44Z", "closed_at": "2022-09-23T16:37:58Z", "author_association": "NONE", "pull_request": null, "body": "I was looking into how to parse application log files that contain multiline text (e.g. Java stack traces) into sqlite. \r\nI can see that at the moment `--lines` helps, but falls short when processing multi-line texts.\r\n\r\nI wonder if this functionality would be useful for sqlite-utils. A similar approach to Elastic logstash/filebeat can be adopted: https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html \r\n\r\nPotential changes:\r\n\r\n- add a `--multiline` option\r\n- additional properties for\r\n - multiline-pattern (regex expression)\r\n - multiline-negate: true/false\r\n - multiline-what: previous or next\r\n\r\nOr if this is achievable in a different way, please share. 
Thanks!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/490/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 480961330, "node_id": "MDU6SXNzdWU0ODA5NjEzMzA=", "number": 54, "title": "Ability to list views, and to access db[\"view_name\"].rows / rows_where / etc", "user": {"value": 20264, "label": "ftrain"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2019-08-15T02:00:28Z", "updated_at": "2019-08-23T12:41:09Z", "closed_at": "2019-08-23T12:20:15Z", "author_association": "NONE", "pull_request": null, "body": "The docs show me how to create a view via `db.create_view()` but I can't seem to get back to that view post-creation; if I query it as a table it returns `None`, and it doesn't appear in the table listing, even though querying the view works fine from inside the sqlite3 command-line.\r\n\r\nIt'd be great to have the view as a pseudo-table, or if the python/sqlite3 module makes that hard to pull off (I couldn't figure it out), to have that edge-case documented next to the `db.create_view()` docs.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/54/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1383646615, "node_id": "I_kwDOCGYnMM5SeMWX", "number": 491, "title": "Ability to merge databases and tables", "user": {"value": 8904453, "label": "sgraaf"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2022-09-23T11:10:55Z", "updated_at": "2023-06-14T22:14:24Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi! Let me firstly say that I am a big fan of your work -- I follow your tweets and blog posts with great interest \ud83d\ude04.\r\n\r\nNow onto the matter at hand: I think it would be great if `sqlite-utils` included a `merge` or `combine` command, with the purpose of combining different SQLite databases into a single SQLite database. This way, the newly \"merged\" database would contain all differently named tables contained in the databases to be merged as-is, as well a concatenation of all tables of the same name.\r\n\r\nThis could look something like this:\r\n\r\n```bash\r\nsqlite-utils merge cats.db dogs.db > animals.db\r\n```\r\n\r\nI imagine this is rather straightforward if all databases involved in the merge contain differently named tables (i.e. no chance of conflicts), but things get slightly more complicated if two or more of the databases to be merged contain tables with the same name. 
Not only do you have to \"do something\" with the primary key(s), but these tables could also simply have different schemas (and therefore be incompatible for concatenation to begin with).\r\n\r\nAnyhow, I would love your thoughts on this, and, if you are open to it, work together on the design and implementation!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/491/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 723708310, "node_id": "MDU6SXNzdWU3MjM3MDgzMTA=", "number": 188, "title": "About loading spatialite", "user": {"value": 30607, "label": "aborruso"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-10-17T08:47:02Z", "updated_at": "2022-02-05T00:04:26Z", "closed_at": "2020-10-17T08:52:58Z", "author_association": "NONE", "pull_request": null, "body": "Hi @simonw ,\r\nIf I run\r\n\r\n```\r\nsqlite3\r\n.load /usr/local/lib/mod_spatialite.so\r\nselect spatialite_version();\r\n```\r\n\r\nI have `5.0.0`.\r\n\r\n![image](https://user-images.githubusercontent.com/30607/96332706-d8cd3300-1065-11eb-906b-daf99963198e.png)\r\n\r\n\r\nIf I run\r\n\r\n```\r\nsqlite-utils :memory: \"select spatialite_version()\" --load-extension=spatialite\r\n```\r\n\r\nI have\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/aborruso/.local/bin/sqlite-utils\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/home/aborruso/.local/lib/python3.8/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.8/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/aborruso/.local/lib/python3.8/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.8/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/aborruso/.local/lib/python3.8/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.8/site-packages/sqlite_utils/cli.py\", line 936, in query\r\n _load_extensions(db, load_extension)\r\n File \"/home/aborruso/.local/lib/python3.8/site-packages/sqlite_utils/cli.py\", line 1326, in _load_extensions\r\n db.conn.load_extension(ext)\r\nTypeError: argument 1 must be str, not None\r\n```\r\n\r\nHow do I properly load the spatialite extension in sqlite-utils?\r\n\r\nThank you very much", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/188/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 791237799, "node_id": "MDU6SXNzdWU3OTEyMzc3OTk=", "number": 1196, "title": "Access Denied Error in Windows", "user": {"value": 2826376, "label": "QAInsights"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-01-21T15:40:40Z", "updated_at": "2021-04-14T19:28:38Z", 
"closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I am trying to publish a db to vercel. But while issuing the below command throwing `Access Denied` error which is leading to `RecursionError: maximum recursion depth exceeded while calling a Python object`.\r\n\r\nI am using PyCharm and Python 3.9. I have reinstalled both and launched PyCharm as Admin in Windows 10. But still the issue persists.\r\n\r\nIssued command `datasette publish vercel jmeter.db --project jmeter --install datasette-vega`\r\n\r\nPS: localhost is working fine.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1196/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 451585764, "node_id": "MDU6SXNzdWU0NTE1ODU3NjQ=", "number": 499, "title": "Accessibility for non-techie newsies? ", "user": {"value": 7936571, "label": "chrismp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-06-03T16:49:37Z", "updated_at": "2019-06-05T21:22:55Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datsette provide this out-of-the-box, or as a plugin? ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/499/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 705057955, "node_id": "MDU6SXNzdWU3MDUwNTc5NTU=", "number": 969, "title": "Add --tar option to \"datasette publish heroku\"", "user": {"value": 1448859, "label": "betatim"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5971510, "label": "Datasette 0.50"}, "comments": 3, "created_at": "2020-09-20T06:54:53Z", "updated_at": "2020-10-08T23:55:59Z", "closed_at": "2020-10-08T23:30:59Z", "author_association": "NONE", "pull_request": null, "body": "This issue is about how best to pass additional options to tools used for publishing datasettes. A concrete example is wanting to pass the `--tar` flag to the heroku CLI tool. I think there are at least two options for doing this: documentation for each publishing tool to explain how to set flags via env variables (if possible) or building a mechanism that lets users pass additional flags through datasette.\r\n\r\nWhen using `datasette publish heroku binder-launches.db --extra-options=\"--config facet_time_limit_ms:35000 --config sql_time_limit_ms:35000\" --name=binderlytics --install=datasette-vega` to publish https://binderlytics.herokuapp.com/ the following error happens:\r\n\r\n```\r\n \u203a Warning: heroku update available from 7.42.1 to 7.43.0.\r\n \u203a Warning: heroku update available from 7.42.1 to 7.43.0.\r\n \u203a Warning: heroku update available from 7.42.1 to 7.43.0.\r\nSetting WEB_CONCURRENCY and restarting \u2b22 binderlytics... 
done, v13\r\nWEB_CONCURRENCY: 1\r\n \u203a Warning: heroku update available from 7.42.1 to 7.43.0.\r\n \u25b8 Couldn't detect GNU tar. Builds could fail due to decompression errors\r\n \u25b8 See https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive\r\n \u25b8 Please install it, or specify the '--tar' option\r\n \u25b8 Falling back to node's built-in compressor\r\nbuffer.js:358\r\n throw new ERR_INVALID_OPT_VALUE.RangeError('size', size);\r\n ^\r\n\r\nRangeError [ERR_INVALID_OPT_VALUE]: The value \"3303763968\" is invalid for option \"size\"\r\n at Function.alloc (buffer.js:367:3)\r\n at new Buffer (buffer.js:281:19)\r\n at Readable. (/Users/thead/.local/share/heroku/node_modules/archiver-utils/index.js:39:15)\r\n at Readable.emit (events.js:322:22)\r\n at endReadableNT (/Users/thead/.local/share/heroku/node_modules/readable-stream/lib/_stream_readable.js:1010:12)\r\n at processTicksAndRejections (internal/process/task_queues.js:84:21) {\r\n code: 'ERR_INVALID_OPT_VALUE'\r\n}\r\n```\r\n\r\nAfter installing GNU tar with `brew install gnu-tar` and modifying `datasette/publish/heroku.py` to include the `--tar=/path/to/gnu-tar` publishing works.\r\n\r\nI think the problem occurs once your heroku slug reaches a certain size. At least when I add only a few hundred entries to the datasette, the error does not occur.\r\n\r\ndatasette version 0.49.1\r\nOSX 10.14.6 (18G103)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/969/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 991575770, "node_id": "MDExOlB1bGxSZXF1ZXN0NzMwMDIwODY3", "number": 1467, "title": "Add Authorization header when CORS flag is set", "user": {"value": 3058200, "label": "jameslittle230"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-09-08T22:14:41Z", "updated_at": "2021-10-17T02:29:07Z", "closed_at": "2021-10-14T18:54:18Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/1467", "body": "This PR adds the [`Access-Control-Allow-Headers`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers) flag when CORS mode is enabled.\r\n\r\nThis would fix https://github.com/simonw/datasette-auth-tokens/issues/4. When making cross-origin requests, the server must respond with all allowable HTTP headers. A Datasette instance using auth tokens must accept the `Authorization` HTTP header in order for cross-origin authenticated requests to take place.\r\n\r\nPlease let me know if there's a better way of doing this! I couldn't figure out a way to change the app's response from the plugin itself, so I'm starting here. 
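For context, a rough sketch of the response headers involved in the preflight exchange (the header names are the standard CORS ones; the exact values Datasette emits may differ):

```python
# Illustration only, not Datasette's actual implementation: without
# "Authorization" listed in Access-Control-Allow-Headers, browsers refuse
# to send that header on cross-origin requests.
cors_headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Authorization",
}
```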
If you'd rather this logic live in the plugin, I'd love any guidance you're able to give.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1467/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1553425465, "node_id": "I_kwDOCGYnMM5cl2Q5", "number": 522, "title": "Add COLUMN_TYPE_MAPPING for timedelta", "user": {"value": 81377, "label": "maport"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-23T16:49:54Z", "updated_at": "2023-11-04T00:49:51Z", "closed_at": "2023-11-04T00:49:51Z", "author_association": "NONE", "pull_request": null, "body": "Currently trying to create a column with Python type `datetime.timedelta` results in an error:\r\n\r\n```\r\n>>> from sqlite_utils import Database\r\n>>> db = Database(\"test.db\")\r\n>>> test_tbl = db['test']\r\n>>> test_tbl.insert({'col1': datetime.timedelta()})\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 2979, in insert\r\n return self.insert_all(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 3082, in insert_all\r\n self.create(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 1574, in create\r\n self.db.create_table(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 961, in create_table\r\n sql = self.create_table_sql(\r\n File \"/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py\", line 852, in create_table_sql\r\n column_type=COLUMN_TYPE_MAPPING[column_type],\r\nKeyError: \r\n```\r\n\r\nThe reason this would be useful is that `MySQLdb` uses `timedelta` for MySQL `TIME` columns:\r\n\r\n```\r\n>>> import MySQLdb\r\n>>> conn = MySQLdb.connect(host='database', user='user', passwd='pw')\r\n>>> csr = conn.cursor()\r\n>>> csr.execute(\"SELECT CAST('11:20' AS TIME)\")\r\n>>> tuple(csr)\r\n((datetime.timedelta(seconds=40800),),)\r\n```\r\n\r\nSo currently any attempt to convert a MySQL DB with a `TIME` column using `db-to-sqlite` will result in the above error.\r\n\r\nI was rather surprised that `MySQLdb` uses `timedelta` for `TIME` columns but I see that [this column type](https://dev.mysql.com/doc/refman/8.0/en/time.html) is intended for time intervals as well as the time of day so it makes sense. \r\n\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/522/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 285168503, "node_id": "MDU6SXNzdWUyODUxNjg1MDM=", "number": 176, "title": "Add GraphQL endpoint", "user": {"value": 173848, "label": "yozlet"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2017-12-29T23:21:01Z", "updated_at": "2020-04-21T14:16:24Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Would make it much easier to build React & similar frontends. 
Maybe with https://github.com/graphql-python/sanic-graphql ?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/176/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1353074021, "node_id": "I_kwDOCGYnMM5QpkVl", "number": 474, "title": "Add an option for specifying column names when inserting CSV data", "user": {"value": 14294, "label": "hubgit"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-08-27T15:29:59Z", "updated_at": "2022-08-31T03:42:36Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "https://sqlite-utils.datasette.io/en/stable/cli.html#csv-files-without-a-header-row\r\n\r\n> The first row of any CSV or TSV file is expected to contain the names of the columns in that file.\r\n\r\n> If your file does not include this row, you can use the `--no-headers` option to specify that the tool should not use that fist row as headers.\r\n\r\n> If you do this, the table will be created with column names called `untitled_1` and `untitled_2` and so on. You can then rename them using the `sqlite-utils transform ... --rename` command.\r\n\r\nIt would be nice to be able to specify the column names when importing CSV/TSV without a header row, via an extra command line option.\r\n\r\n(renaming a column of a large table can take a long time, which makes it an inconvenient workaround)", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/474/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 476852861, "node_id": "MDU6SXNzdWU0NzY4NTI4NjE=", "number": 568, "title": "Add database_color as a configurable option", "user": {"value": 50906992, "label": "LBHELewis"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-08-05T13:14:45Z", "updated_at": "2023-08-11T05:19:42Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "This would be really useful as it would allow us to tie in with colour schemes.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/568/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 792297010, "node_id": "MDExOlB1bGxSZXF1ZXN0NTYwMjA0MzA2", "number": 224, "title": "Add fts offset docs.", "user": {"value": 37962604, "label": "polyrand"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2021-01-22T20:50:58Z", "updated_at": "2021-02-14T19:31:06Z", "closed_at": "2021-02-14T19:31:06Z", "author_association": "NONE", "pull_request": "simonw/sqlite-utils/pulls/224", "body": "The limit can be passed as a string to the query builder to have an offset. 
I have tested it using the shorthand `limit=f\"15, 30\"`; the standard syntax should work too.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/224/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1039037439, "node_id": "PR_kwDOCGYnMM4t0uaI", "number": 333, "title": "Add functionality to read Parquet files.", "user": {"value": 2118708, "label": "Florents-Tselai"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-10-28T23:43:19Z", "updated_at": "2021-11-25T19:47:35Z", "closed_at": "2021-11-25T19:47:35Z", "author_association": "NONE", "pull_request": "simonw/sqlite-utils/pulls/333", "body": "I needed this for a project of mine, and I thought it'd be useful to have it in sqlite-utils (it's also mentioned in #248).\r\nThe current implementation works (data is read & data types are inferred correctly).\r\nI've added a single straightforward test case, but @simonw please let me know if there are any non-obvious flags/combinations I should test too.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/333/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 541274681, "node_id": "MDU6SXNzdWU1NDEyNzQ2ODE=", "number": 2, "title": "Add linkedin-to-sqlite", "user": {"value": 881925, "label": "mnp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-12-21T03:13:40Z", "updated_at": "2019-12-21T03:13:40Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There is an API available. https://developer.linkedin.com/docs/rest-api#\r\n\r\nAt the minimum, I would think contact list and messages would be of interest.", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 504720731, "node_id": "MDU6SXNzdWU1MDQ3MjA3MzE=", "number": 1, "title": "Add more details on how to request data from google takeout correctly.", "user": {"value": 1055831, "label": "dazzag24"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-09T15:17:34Z", "updated_at": "2019-10-09T15:17:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:\r\n\r\n- My Activity\r\n- Location History\r\n\r\nIn addition, unless you specify that \"My Activity\" is downloaded in JSON format, the default is HTML. 
This then causes the \r\n\r\n`google-takeout-to-sqlite my-activity takeout.db takeout.zip`\r\n\r\ncommand to fail as it only contains html files not json files.\r\n\r\nThanks", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 697162939, "node_id": "MDU6SXNzdWU2OTcxNjI5Mzk=", "number": 20, "title": "Add more tags so people can find your project.", "user": {"value": 7902810, "label": "ran88dom99"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-09-09T21:14:09Z", "updated_at": "2020-09-09T21:14:09Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "quantified-self habit-tracking google-fit time-tracking wearables quantifiedself \r\nfor example", "repo": {"value": 197431109, "label": "dogsheep-beta"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-beta/issues/20/reactions\", \"total_count\": 1, \"+1\": 0, \"-1\": 1, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 322741659, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg3NzcwMzQ1", "number": 258, "title": "Add new metadata key persistent_urls which removes the hash from all database urls", "user": {"value": 247131, "label": "philroche"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-05-14T09:39:18Z", "updated_at": "2018-05-21T07:38:15Z", "closed_at": "2018-05-21T07:38:15Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/258", "body": "Add new metadata key \"persistent_urls\" which removes the hash from all database urls when set to \"true\"\r\n\r\nThis PR is just to gauge if this, or something like it, is something you would consider merging?\r\n\r\nI understand the reason why the substring of the hash is included in the url but\r\nthere are some use cases where the urls should persist across deployments. For bookmarks\r\nfor example or for scripts that use the JSON API.\r\n\r\nThis is the initial commit for this feature. 
Tests and documentation updates to follow.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/258/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 352768017, "node_id": "MDU6SXNzdWUzNTI3NjgwMTc=", "number": 362, "title": "Add option to include/exclude columns in search filters", "user": {"value": 78156, "label": "annapowellsmith"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-08-22T01:32:08Z", "updated_at": "2020-11-03T19:01:59Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a dataset with many columns, of which only some are likely to be of interest for searching.\r\n\r\nIt would be great for usability if the search filters in the UI could be configured to include/exclude columns.\r\n\r\nSee also: https://github.com/simonw/datasette/issues/292", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/362/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1453134846, "node_id": "I_kwDOCGYnMM5WnRP-", "number": 513, "title": "Add or document streamlined workflow for importing Datasette csv / json exports", "user": {"value": 19328961, "label": "henry501"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-17T10:54:47Z", "updated_at": "2022-11-17T10:54:47Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm working on some small front-end enhancements to the laion-aesthetic-datasette project, and I wanted to partially populate a database directly using exports from the existing Datasette instance instead of downloading the parquet files and creating my own multi-GB database.\r\n\r\nThere have been a number of small issues that are certainly related to my relative lack of familiarity with the toolkit, but that are still surprising. \r\n\r\nFor example: a CSV export of the images table (http://laion-aesthetic.datasette.io/laion-aesthetic-6pls.csv?sql=select+rowid%2C+url%2C+text%2C+domain_id%2C+width%2C+height%2C+similarity%2C+punsafe%2C+pwatermark%2C+aesthetic%2C+hash%2C+__index_level_0__+from+images+order+by+random%28%29+limit+100) has nested single quotes, double quotes, and commas that aren't handled by rows_from_file. 
Similarly, the json output has to be manually transformed to add the column names and remove extraneous information before sqlite_utils can import it.\r\n\r\nI was able to work through these issues, but as an enhancement it would be really helpful to create or document a clear workflow that avoids the friction of this data transformation.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/513/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 282971961, "node_id": "MDU6SXNzdWUyODI5NzE5NjE=", "number": 175, "title": "Add project topic \"automatic-api\"", "user": {"value": 3179832, "label": "dbohdan"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-12-18T18:09:17Z", "updated_at": "2017-12-21T18:33:55Z", "closed_at": "2017-12-21T18:33:55Z", "author_association": "NONE", "pull_request": null, "body": "Hi there! Could you add the ~~tag~~ topic `automatic-api` to your repository? I am [making a list](https://github.com/dbohdan/automatic-api) of all projects that automatically expose APIs to databases. (Your Show HN made me do it. :-) I knew about PostgREST and PostGraphQL, but it took adding Datasette to sell me on the concept.) They will be easier to discover if there is a standard GitHub tag, and `automatic-api` seems as good a candidate as any. Two projects [already use it](https://github.com/topics/automatic-api).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/175/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 418329842, "node_id": "MDU6SXNzdWU0MTgzMjk4NDI=", "number": 415, "title": "Add query parameter to hide SQL textarea", "user": {"value": 36796532, "label": "ad-si"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-03-07T14:11:30Z", "updated_at": "2019-03-15T09:30:57Z", "closed_at": "2019-03-15T05:22:43Z", "author_association": "NONE", "pull_request": null, "body": "It would be cool if there was a query parameter to hide / remove the SQL textarea. 
Then I could simply save a bookmark for a certain query and open it to see the data without having to scroll below the (long) SQL query first.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/415/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1251710928, "node_id": "I_kwDOBm6k_c5Km5fQ", "number": 1751, "title": "Add scrollbars to table presentation in default layout", "user": {"value": 408765, "label": "knutwannheden"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-05-28T19:44:57Z", "updated_at": "2022-05-28T19:52:17Z", "closed_at": "2022-05-28T19:52:17Z", "author_association": "NONE", "pull_request": null, "body": "(As you will be able to tell from the terminology I use, I am not a frontend guy, but I hope you will understand.)\r\n\r\nWhen a table is wide and needs horizontal scrolling to see the columns towards the end, the user needs to scroll horizontally. However, since the container for the HTML table (`div` with class `table-wrapper`) isn't limited by the window size, I first need to vertically scroll near to the bottom of the page in order to scroll horizontally. Then I can scroll back up again. This isn't very user friendly. Instead, I think it would make sense to constrain the table's size (when necessary), so that the vertical and horizontal scrollbars either always are visible or at least not far out of reach.\r\n\r\nI understand that I could provide my own template and / or CSS, but I think it would probably make sense to adjust the default in this regard.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1751/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 893537744, "node_id": "MDU6SXNzdWU4OTM1Mzc3NDQ=", "number": 1331, "title": "Add support for Jinja2 version 3.0", "user": {"value": 475613, "label": "MarkusH"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2021-05-17T17:14:36Z", "updated_at": "2021-05-23T00:57:39Z", "closed_at": "2021-05-23T00:57:39Z", "author_association": "NONE", "pull_request": null, "body": "A week ago, [The Pallets Project](https://github.com/pallets) released [new major versions of several of its projects](https://palletsprojects.com/blog/flask-2-0-released/). 
Among those updates is one for Jinja2, which bumps it to version 3.0.0.\r\n\r\nI'd like datasette to support Jinja2 version 3.0.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1331/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 842765105, "node_id": "MDExOlB1bGxSZXF1ZXN0NjAyMjYxMDky", "number": 6, "title": "Add testres-db tool", "user": {"value": 1151557, "label": "ligurio"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-03-28T15:43:23Z", "updated_at": "2022-02-16T05:12:05Z", "closed_at": "2022-02-16T05:12:05Z", "author_association": "NONE", "pull_request": "dogsheep/dogsheep.github.io/pulls/6", "body": "", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/6/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 1822918995, "node_id": "I_kwDOCGYnMM5sp4lT", "number": 580, "title": "Add way to export to a csv file using the Python library", "user": {"value": 44324811, "label": "kevinlinxc"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-07-26T18:09:26Z", "updated_at": "2023-07-26T18:09:26Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "According to the documentation, we can make a csv output using the CLI tool, but not the Python library. 
Could we have the latter?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/580/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1351949898, "node_id": "PR_kwDOBm6k_c492dPw", "number": 1793, "title": "Added a useful resource", "user": {"value": 111973926, "label": "MobiWancode"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-08-26T08:41:26Z", "updated_at": "2022-09-06T00:41:25Z", "closed_at": "2022-09-06T00:41:24Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/1793", "body": "Have added a useful resource about the types of databases in SQL i.e SQLite, PostgreSQL, MySQL &, etc from the scaler topics.\r\n\r\n\r\n----\n:books: Documentation preview :books:: https://datasette--1793.org.readthedocs.build/en/1793/\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1793/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 499954048, "node_id": "MDExOlB1bGxSZXF1ZXN0MzIyNTI5Mzgx", "number": 578, "title": "Added support for multi arch builds", "user": {"value": 887095, "label": "heussd"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-09-29T18:43:03Z", "updated_at": "2019-11-13T19:13:15Z", "closed_at": "2019-11-13T19:13:15Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/578", "body": "Minor changes in Dockerfile and new Makefile to support Docker multi architecture builds. `make`will build one image per architecture and push them as one Docker manifest to Docker Hub. Feel free to change `IMAGE_NAME ` to `datasetteproject/datasette` to update your official Docker Hub image(s).", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/578/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 593751293, "node_id": "MDU6SXNzdWU1OTM3NTEyOTM=", "number": 97, "title": "Adding a \"recreate\" flag to the `Database` constructor", "user": {"value": 1448859, "label": "betatim"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-04-04T05:41:10Z", "updated_at": "2020-04-15T14:29:31Z", "closed_at": "2020-04-13T03:52:29Z", "author_association": "NONE", "pull_request": null, "body": "I have a [script](https://github.com/betatim/binder-datasette/blob/master/create-db.ipynb) that imports data into a sqlite DB. When I re-run that script I'd like to remove the existing sqlite DB, instead of adding to it. The pragmatic answer is to add the check and file deletion to my script.\r\n\r\nHowever I thought it would be easy and useful for others to add a `recreate=True` flag to `db = sqlite_utils.Database(\"binder-launches.db\")`. 
After taking a look at the code for it I am not so sure any more. This is because the connection string could be a URL (or \"connection string\") like `\"file:///tmp/foo.db\"`. I don't know what the equivalent of `os.path.exists()` is for a connection string or how to detect that something is a connection string and raise an error \"can't use recreate=True and conn_string at the same time\".\r\n\r\nDoes anyone have an idea/suggestion where to start investigating?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/97/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 449818897, "node_id": "MDU6SXNzdWU0NDk4MTg4OTc=", "number": 24, "title": "Additional Column Constraints?", "user": {"value": 98555, "label": "IgnoredAmbience"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2019-05-29T13:47:03Z", "updated_at": "2019-06-13T06:47:17Z", "closed_at": "2019-06-13T06:30:26Z", "author_association": "NONE", "pull_request": null, "body": "I'm looking to import data from XML with a pre-defined schema that maps fairly closely to a relational database.\r\nIn particular, it has explicit annotations for when fields are required, optional, or when a default value should be inferred.\r\n\r\nWould there be value in adding the ability to define `NOT NULL` and `DEFAULT` column constraints to sqlite-utils?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/24/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 450862577, "node_id": "MDU6SXNzdWU0NTA4NjI1Nzc=", "number": 496, "title": "Additional options to gcloud build command in cloudrun - timeout", "user": {"value": 1740337, "label": "costrouc"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-05-31T15:43:55Z", "updated_at": "2019-05-31T23:05:05Z", "closed_at": "2019-05-31T23:05:05Z", "author_association": "NONE", "pull_request": null, "body": "I am trying to deploy a 3.1 GB dataset to cloudrun with datasette. Currently the docker build times out. It would be nice to have a timeout flag or additional gcloud commands that could be specified. \r\n\r\nHere is the line https://github.com/simonw/datasette/blob/f825e2012109247fa246e2b938f8174069e574f1/datasette/publish/cloudrun.py#L78\r\n\r\nI would be happy to submit a PR to allow for a timeout option. 
What are your ideas of allowing the user additional build publishing flag options?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/496/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1251700382, "node_id": "I_kwDOBm6k_c5Km26e", "number": 1750, "title": "Allow `label_column` to specify array of columns", "user": {"value": 408765, "label": "knutwannheden"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-05-28T18:45:48Z", "updated_at": "2022-05-28T18:45:48Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I think it would be great if the Datasette metadata would allow the `label_column` table key to list multiple columns. Something like:\r\n```json\r\n \"tables\": {\r\n \"person\": {\r\n \"label_column\": [\"first_name\", \"last_name\"]\r\n },\r\n```\r\nIt would even be interesting with a \"label expression\" similar to a Python f-string. E.g. `{row.last_name}, {row.first_name}`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1750/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 828858421, "node_id": "MDU6SXNzdWU4Mjg4NTg0MjE=", "number": 1258, "title": "Allow canned query params to specify default values", "user": {"value": 1385831, "label": "wdccdw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2021-03-11T07:19:02Z", "updated_at": "2023-02-20T23:39:58Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "If I call a canned query that includes named parameters, without passing any parameters, datasette runs the query anyway, resulting in an HTTP status code 400, and a visible error in the browser, with only a link back to home. This means that one of the default links on https://site/database/ will lead to a broken page with no apparent way out.\r\n\r\n![image](https://user-images.githubusercontent.com/1385831/110748683-13e72300-820e-11eb-855c-32e03dfef5bf.png)\r\n\r\nIs there any way to skip performing the query when parameters aren't supplied, but otherwise render the usual canned query page? 
Alternatively, can I supply default values for my parameters, either when defining my canned queries or when linking to the canned query page from the default database template?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1258/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 548591089, "node_id": "MDU6SXNzdWU1NDg1OTEwODk=", "number": 657, "title": "Allow creation of virtual tables at startup", "user": {"value": 1055831, "label": "dazzag24"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-01-12T16:10:55Z", "updated_at": "2021-01-15T20:24:35Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi,\r\n \r\nI've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow.\r\nhttps://cldellow.com/2018/06/22/sqlite-parquet-vtable.html\r\nhttps://github.com/cldellow/sqlite-parquet-vtable\r\n\r\nThis works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the spatialite extension I knew that datasette supports loading extensions in the underlying sqlite instance. However, I hit a blocker as the current design only allows SELECT statements to be executed and so I am unable to execute the crucial \r\n\r\nCREATE VIRTUAL TABLE .........\r\n\r\ncommand that is required to load the data from the parquet file into the table.\r\n\r\nIt seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself. Could this be done as a datasette plugin, or would this require more fundamental changes at initialisation time?\r\n\r\nMy thoughts are that something at init time could detect that the user was loading a *.parquet file and then switch to a mode where it loads that via the \"CREATE VIRTUAL TABLE...\" rather than loading the *.db file in the default case?\r\n\r\nI'm happy to contribute code and testing, I just need some pointers on the best approach.\r\n\r\nThanks\r\nDarren", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/657/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 712889459, "node_id": "MDExOlB1bGxSZXF1ZXN0NDk2Mjk4MTgw", "number": 986, "title": "Allow facet by primary keys, fixes #985", "user": {"value": 39452697, "label": "MrNaif2018"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-10-01T14:18:55Z", "updated_at": "2020-10-01T16:51:45Z", "closed_at": "2020-10-01T16:51:45Z", "author_association": "NONE", "pull_request": "simonw/datasette/pulls/986", "body": "Hello! This PR makes it possible to facet by primary keys.\r\nDid I get it right that just removing the condition on the UI side is enough? 
From testing it works fine with primary keys, just as with normal keys.\r\nIf so, should I also remove unused `data-is-pk`?", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/986/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 814591962, "node_id": "MDU6SXNzdWU4MTQ1OTE5NjI=", "number": 1240, "title": "Allow facetting on custom queries", "user": {"value": 7107523, "label": "Kabouik"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-02-23T15:52:19Z", "updated_at": "2021-02-26T18:19:46Z", "closed_at": "2021-02-26T18:18:18Z", "author_association": "NONE", "pull_request": null, "body": "Facets are a tremendously useful feature, especially for people peeking at the database for the first time and still having little knowledge about the details of the data. It is of great assistance to discover interesting features to explore further in advanced queries.\r\n\r\nYet, it seems it's impossible to use facets when running a custom SQL query, be it from the little gear icons in column names, the facet suggestions at the top (hidden when performing a custom query), or by appending a facet code to the URL. \r\n\r\nIs there a technical limitation, or is this something that could be unlocked easily?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1240/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 562787785, "node_id": "MDU6SXNzdWU1NjI3ODc3ODU=", "number": 667, "title": "Allow injecting configuration data from plugins", "user": {"value": 870184, "label": "xrotwang"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-02-10T19:50:15Z", "updated_at": "2020-02-12T16:18:22Z", "closed_at": "2020-02-12T09:21:22Z", "author_association": "NONE", "pull_request": null, "body": "I'm trying to customize datasette as an explorer for [CLDF](https://cldf.clld.org) datasets. Such datasets can be converted automatically to SQLite, which then can be fed to datasette (e.g. https://github.com/cldf/cookbook/blob/master/recipes/datasette/README.md).\r\n\r\nPart of this customization would be support for the \"special\" data types described in the [CLDF ontology](https://cldf.clld.org/v1.0/terms.rdf). But while rendering of the values can be customized via the `render_cell` hook in a plugin, e.g. 
custom labels for foreign keys must be specified through the config file.\r\n\r\nIt would be nice to be able to programmatically inject config data from plugins as well.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/667/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1227571375, "node_id": "I_kwDOCGYnMM5JK0Cv", "number": 431, "title": "Allow making m2m relation of a table to itself", "user": {"value": 738408, "label": "rafguns"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-05-06T08:30:43Z", "updated_at": "2022-06-23T14:12:51Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I am building a database, in which one of the tables has a many-to-many relationship to itself. As far as I can see, this is not (yet) possible using `.m2m()` in sqlite-utils. This may be a bit of a niche use case, so feel free to close this issue if you feel it would introduce too much complexity compared to the benefits.\r\n\r\nExample: suppose I have a table of people, and I want to store the information that John and Mary have two children, Michael and Suzy. It would be neat if I could do something like this:\r\n\r\n```python\r\nfrom sqlite_utils import Database\r\n\r\ndb = Database(memory=True)\r\ndb[\"people\"].insert({\"name\": \"John\"}, pk=\"name\").m2m(\r\n \"people\", [{\"name\": \"Michael\"}, {\"name\": \"Suzy\"}], m2m_table=\"parent_child\", pk=\"name\"\r\n)\r\ndb[\"people\"].insert({\"name\": \"Mary\"}, pk=\"name\").m2m(\r\n \"people\", [{\"name\": \"Michael\"}, {\"name\": \"Suzy\"}], m2m_table=\"parent_child\", pk=\"name\"\r\n)\r\n```\r\n\r\nBut if I do that, the many-to-many table `parent_child` has only one column:\r\n```\r\nCREATE TABLE [parent_child] (\r\n [people_id] TEXT REFERENCES [people]([name]),\r\n PRIMARY KEY ([people_id], [people_id])\r\n)\r\n```\r\n\r\nThis could be solved by adding one or two keyword_arguments to `.m2m()`, e.g. 
`.m2m(..., left_name=None, right_name=None)` or `.m2m(..., names=(None, None))`.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/431/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1504352503, "node_id": "I_kwDOBm6k_c5Zqpj3", "number": 1968, "title": "Allow to hide some queries in metadata.yml", "user": {"value": 562352, "label": "CharlesNepote"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-12-20T10:45:41Z", "updated_at": "2022-12-20T10:45:41Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "By default all queries are displayed.\r\n\r\nBut there are many cases where it would be interesting to hide the queries by default:\r\n* the website is targeting non-tech people\r\n* the query is veeeeeery long ([eg.](https://mirabelle.openfoodfacts.org/products/energy_calculator))\r\n* reading the query is not important for the users, they only want to see the result\r\n\r\nOf course, the user still could have the option to see the query.\r\n\r\nIt could be an option in the metadata file:\r\n```yml\r\ndatabases:\r\n awesome_db:\r\n tables:\r\n products:\r\n hide_sql: true\r\n queries:\r\n great_query:\r\n hide_sql: true\r\n sql: select * from products where code = :barcode\r\n```\r\n\r\nThe priority could be:\r\n* no option in the metadata and nothing in the URL: query displayed\r\n* hide_sql in the metadata and nothing in the URL: query displayed as asked in the metadata\r\n* hide_sql in the metadata and &_hide_sql= in the URL: query as asked in the URL\r\n\r\nSee also: #1824\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1968/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 544571092, "node_id": "MDU6SXNzdWU1NDQ1NzEwOTI=", "number": 15, "title": "Assets table with downloads", "user": {"value": 2029, "label": "garethr"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5225818, "label": "1.0"}, "comments": 4, "created_at": "2020-01-02T13:05:28Z", "updated_at": "2020-03-28T12:17:01Z", "closed_at": "2020-03-23T19:17:32Z", "author_association": "NONE", "pull_request": null, "body": "The `releases` command extracts the releases table, but data about the individual assets are locked up in the JSON document in the `assets` field. My main interest is in individual and aggregate download counts. I was wondering if creating a new table with a record per asset may be useful?\r\nIf so I'm happy to send a PR when I get a moment. 
Do you have opinions about that simply being part of the `releases` command or would you prefer a separate command as well?", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1452360613, "node_id": "I_kwDOBm6k_c5WkUOl", "number": 1895, "title": "Avoid using host name when building absolute URLs?", "user": {"value": 14294, "label": "hubgit"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2022-11-16T22:21:27Z", "updated_at": "2022-11-16T22:21:27Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "When deploying Datasette to Cloud Run and rewriting certain routes from a Firebase app to the Cloud Run service, some of the URLs in the page start with `https://[service].run.app` rather than the (custom) domain of the Firebase app. \r\n\r\nI guess this is because a) the custom domain of the Firebase app isn't being passed through in the `host` header of the request to the Cloud Run instance and b) the `absolute_url` function in Datasette is using information from the request to build the URL.\r\n\r\nWould it be possible to not use the host name when building the absolute URLs, i.e. only include the path in the URL?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1895/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 783778672, "node_id": "MDU6SXNzdWU3ODM3Nzg2NzI=", "number": 220, "title": "Better error message for *_fts methods against views", "user": {"value": 649467, "label": "mhalle"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-01-11T23:24:00Z", "updated_at": "2021-02-22T20:44:51Z", "closed_at": "2021-02-14T22:34:26Z", "author_association": "NONE", "pull_request": null, "body": "enable_fts and its related methods only work on tables, not views. 
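A minimal sketch of the failure mode (the table, view and column names are invented, and the exact exception raised is an assumption):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["articles"].insert({"title": "hello", "body": "world"})
db.create_view("recent_articles", "select * from articles limit 10")

db["articles"].enable_fts(["title", "body"])  # works: "articles" is a Table
db["recent_articles"].enable_fts(["title"])   # fails: views have no enable_fts
```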
\r\n\r\nCould those methods and possibly others move up to the Queryable superclass?\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/220/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 711649325, "node_id": "MDU6SXNzdWU3MTE2NDkzMjU=", "number": 182, "title": "Better handling of encodings other than utf-8 for \"sqlite-utils insert\"", "user": {"value": 765871, "label": "kaihendry"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2020-09-30T05:43:48Z", "updated_at": "2020-10-16T17:20:41Z", "closed_at": "2020-10-16T17:18:52Z", "author_association": "NONE", "pull_request": null, "body": "Makefile:\r\n```\r\ndata.db:\r\n curl -O http://maps.natalian.org/data.txt\r\n go run csv-write.go > data.csv\r\n sqlite-utils insert data.db travels data.csv --csv\r\n\r\nclean:\r\n rm data*\r\n```\r\n[csv-write.go](https://gist.github.com/kaihendry/dff2442de20d73f900026d13bf7a11d9)\r\n\r\n\r\nError message is:\r\n\r\n```\r\nsqlite-utils insert data.db travels data.csv --csv\r\nTraceback (most recent call last):\r\n File \"/home/hendry/.local/bin/sqlite-utils\", line 8, in \r\n sys.exit(cli())\r\n File \"/home/hendry/.local/lib/python3.8/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/hendry/.local/lib/python3.8/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/hendry/.local/lib/python3.8/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/hendry/.local/lib/python3.8/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/hendry/.local/lib/python3.8/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/home/hendry/.local/lib/python3.8/site-packages/sqlite_utils/cli.py\", line 614, in insert\r\n insert_upsert_implementation(\r\n File \"/home/hendry/.local/lib/python3.8/site-packages/sqlite_utils/cli.py\", line 553, in insert_upsert_implementation\r\n headers = next(reader)\r\n File \"/usr/lib/python3.8/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 1234: invalid continuation byte\r\nmake: *** [Makefile:4: data.db] Error 1\r\n[hendry@t14s datasette-map]$ sqlite-utils --version\r\nsqlite-utils, version 2.19\r\n```\r\n\r\nLittle bit surprised if Go is spewing out bad Unicode, but I'm not sure how to grok `position 1234`..\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/182/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1501900064, "node_id": "I_kwDOBm6k_c5ZhS0g", "number": 1966, "title": "Broken link to live demo in Getting started docs", "user": {"value": 7551922, "label": "lbellomo"}, "state": "closed", "locked": 0, 
"assignee": null, "milestone": null, "comments": 1, "created_at": "2022-12-18T13:17:00Z", "updated_at": "2022-12-31T19:15:19Z", "closed_at": "2022-12-31T19:15:10Z", "author_association": "NONE", "pull_request": null, "body": "The link in [Play with a live demo in Getting started](https://github.com/simonw/datasette/blob/main/docs/getting_started.rst#play-with-a-live-demo) to [https://fivethirtyeight.datasettes.com/fivethirtyeight](https://fivethirtyeight.datasettes.com/fivethirtyeight) is broken and the datasette is no longer working (maybe due to the end of the free tier).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1966/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 314665147, "node_id": "MDU6SXNzdWUzMTQ2NjUxNDc=", "number": 216, "title": "Bug: Sort by column with NULL in next_page URL", "user": {"value": 222245, "label": "carlmjohnson"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 15, "created_at": "2018-04-16T14:03:18Z", "updated_at": "2018-04-17T01:45:24Z", "closed_at": "2018-04-17T01:45:24Z", "author_association": "NONE", "pull_request": null, "body": "Copy-pasting from https://github.com/simonw/datasette/issues/189#issuecomment-381429213, since that issue is closed:\r\n\r\nI think I found a bug. I tried to sort by middle initial in my salaries set, and many middle initials are null. The `next_url` gets set by Datasette to:\r\n\r\nhttp://localhost:8001/salaries-d3a5631/2017+Maryland+state+salaries?_next=None%2C391&_sort=middle_initial\r\n\r\nBut then None is interpreted literally and it tries to find a name with the middle initial \"None\" and ends up skipping ahead to O on page 2.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/216/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 770712149, "node_id": "MDExOlB1bGxSZXF1ZXN0NTQyNDA2OTEw", "number": 10, "title": "BugFix for encoding and not update info.", "user": {"value": 1277270, "label": "riverzhou"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-12-18T08:58:54Z", "updated_at": "2021-02-11T22:37:56Z", "closed_at": "2021-02-11T22:37:56Z", "author_association": "NONE", "pull_request": "dogsheep/evernote-to-sqlite/pulls/10", "body": "Bugfix 1:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\anaconda3\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"d:\\anaconda3\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"D:\\Anaconda3\\Scripts\\evernote-to-sqlite.exe\\__main__.py\", line 7, in \r\n File \"d:\\anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n File \"d:\\anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"d:\\anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n 
return ctx.invoke(self.callback, **ctx.params)\r\n File \"d:\\anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"d:\\anaconda3\\lib\\site-packages\\evernote_to_sqlite\\cli.py\", line 30, in enex\r\n for tag, note in find_all_tags(fp, [\"note\"], progress_callback=bar.update):\r\n File \"d:\\anaconda3\\lib\\site-packages\\evernote_to_sqlite\\utils.py\", line 11, in find_all_tags\r\n chunk = fp.read(1024 * 1024)\r\nUnicodeDecodeError: 'gbk' codec can't decode byte 0xa4 in position 383: illegal multibyte sequence\r\n\r\nBugfix 2:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\Anaconda3\\Scripts\\evernote-to-sqlite-script.py\", line 33, in \r\n sys.exit(load_entry_point('evernote-to-sqlite==0.3', 'console_scripts', 'evernote-to-sqlite')())\r\n File \"D:\\Anaconda3\\lib\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"D:\\Anaconda3\\lib\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"D:\\Anaconda3\\lib\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"D:\\Anaconda3\\lib\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"D:\\Anaconda3\\lib\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"D:\\Anaconda3\\lib\\site-packages\\evernote_to_sqlite-0.3-py3.8.egg\\evernote_to_sqlite\\cli.py\", line 31, in enex\r\n File \"D:\\Anaconda3\\lib\\site-packages\\evernote_to_sqlite-0.3-py3.8.egg\\evernote_to_sqlite\\utils.py\", line 28, in save_note\r\nAttributeError: 'NoneType' object has no attribute 'text'", "repo": {"value": 303218369, "label": "evernote-to-sqlite"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/10/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 2029908157, "node_id": "I_kwDOBm6k_c54_fC9", "number": 2214, "title": "CSV export fails for some `text` foreign key references", "user": {"value": 2874, "label": "precipice"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-12-07T05:04:34Z", "updated_at": "2023-12-07T07:36:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm starting this issue without a clear reproduction in case someone else has seen this behavior, and to use the issue as a notebook for research. \r\n\r\nI'm using Datasette with the [SWITRS](https://iswitrs.chp.ca.gov/) data set, which is a California Highway Patrol collection of traffic incident data from the past decade or so. I receive data from them in CSV and want to work with it in Datasette, then export it to CSV for mapping in Felt.com.\r\n\r\nTheir data makes extensive use of codes for incident column data (`1` for `Monday` and so on), some of it integer codes and some of it letter/text codes. The text codes are sometimes blank or `-`. 
During import, I'm creating lookup tables for foreign key references to make the Datasette UI presentation of the data easier to read.\r\n\r\nIf I import the data and set up the integer foreign keys, everything works fine, but if I set up the text foreign keys, CSV export starts to fail. \r\n\r\nThe foreign key configuration is as follows:\r\n\r\n```\r\n# Some tables use integer ids, like sensible tables do. Let's import them first\r\n# since we favor them.\r\n\r\nfor TABLE in DAY_OF_WEEK CHP_SHIFT POPULATION SPECIAL_COND BEAT_TYPE COLLISION_SEVERITY\r\ndo\r\n\tsqlite-utils create-table records.db $TABLE id integer name text --pk=id\r\n\tsqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv\r\n\tsqlite-utils add-foreign-key records.db collisions $TABLE $TABLE id\r\n\tsqlite-utils create-index records.db collisions $TABLE\r\ndone\r\n\r\n# *Other* tables use letter keys, like they were raised by WOLVES. Let's put them\r\n# at the end of the import queue.\r\n\r\nfor TABLE in WEATHER_1 WEATHER_2 LOCATION_TYPE RAMP_INTERSECTION SIDE_OF_HWY \\\r\nPRIMARY_COLL_FACTOR PCF_CODE_OF_VIOL PCF_VIOL_CATEGORY TYPE_OF_COLLISION MVIW \\\r\nPED_ACTION ROAD_SURFACE ROAD_COND_1 ROAD_COND_2 LIGHTING CONTROL_DEVICE \\\r\nSTWD_VEHTYPE_AT_FAULT CHP_VEHTYPE_AT_FAULT PRIMARY_RAMP SECONDARY_RAMP\r\ndo\r\n\tsqlite-utils create-table records.db $TABLE key text name text --pk=key\r\n\tsqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv\r\n\tsqlite-utils add-foreign-key records.db collisions $TABLE $TABLE key\r\n\tsqlite-utils create-index records.db collisions $TABLE\r\ndone\r\n```\r\n\r\nYou can see the full code and import script here: https://github.com/radical-bike-lobby/switrs-db\r\n\r\nIf I run this code and then hit the CSV export link in the Datasette interface (the simple link or the \"advanced\" dialog), export fails after a small number of CSV rows are written. I am not seeing any detailed error messages, but this appears in the logging output:\r\n\r\n```\r\nINFO: 127.0.0.1:57885 - \"GET /records/collisions.csv?_facet=PRIMARY_RD&PRIMARY_RD=ASHBY+AV&_labels=on&_size=max HTTP/1.1\" 200 OK\r\nCaught this error: \r\n\r\n```\r\n\r\n(No other output follows `error:` other than a blank line.)\r\n\r\nI've stared at the rows directly after the error occurs and can't yet see what is causing the problem. I'm going to set up a development environment and see if I get any more detailed error output, and then stare more at some problematic lines to see if I can get a simple reproduction.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2214/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 395236066, "node_id": "MDU6SXNzdWUzOTUyMzYwNjY=", "number": 393, "title": "CSV export in \"Advanced export\" pane doesn't respect query", "user": {"value": 1727065, "label": "ltrgoddard"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2019-01-02T12:39:41Z", "updated_at": "2021-06-17T18:14:24Z", "closed_at": "2019-01-03T02:44:10Z", "author_association": "NONE", "pull_request": null, "body": "It looks like there's an inconsistency when exporting to CSV via the web interface. 
Say I'm looking at [songs released in 1989](https://fivethirtyeight.datasettes.com/fivethirtyeight-c300360/classic-rock%2Fclassic-rock-song-list?Release+Year__exact=1989) in the `classic-rock/classic-rock-song-list` table from the Five Thirty Eight data. The JSON and CSV export links at the top of the page both give me filtered data using `Release+Year__exact=1989` in the URL. In the `Advanced export` tab, though, the CSV option gives me the whole data set, while the JSON options preserve the query.\r\n\r\nIt may be that this is intended behaviour related to the streaming CSV stuff [discussed here](https://github.com/simonw/datasette/issues/266), but if that's the case then I think it should be a little clearer.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/393/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1250629388, "node_id": "I_kwDOCGYnMM5KixcM", "number": 440, "title": "CSV files with too many values in a row cause errors", "user": {"value": 4068, "label": "frafra"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 20, "created_at": "2022-05-27T10:54:44Z", "updated_at": "2022-06-14T22:23:01Z", "closed_at": "2022-06-14T20:12:46Z", "author_association": "NONE", "pull_request": null, "body": "*Original title: csv.DictReader can have None as key*\r\n\r\nIn some cases, `csv.DictReader` can have `None` as key for unnamed columns, and a list of values as value.\r\n`sqlite_utils.utils.rows_from_file` cannot handle that:\r\n\r\n```python\r\nfrom urllib.request import urlopen\r\n\r\nimport sqlite_utils\r\n\r\nurl = \"https://artsdatabanken.no/Fab2018/api/export/csv\"\r\ndb = sqlite_utils.Database(\":memory:\")\r\n\r\nwith urlopen(url) as fab:\r\n    reader, _ = sqlite_utils.utils.rows_from_file(fab, encoding=\"utf-16le\")\r\n    db[\"fab2018\"].insert_all(reader, pk=\"Id\")\r\n```\r\n\r\nResult:\r\n```\r\nTraceback (most recent call last):\r\n  File \"\", line 3, in \r\n  File \"/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py\", line 2924, in insert_all\r\n    chunk = list(chunk)\r\n  File \"/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py\", line 3454, in fix_square_braces\r\n    if any(\"[\" in key or \"]\" in key for key in record.keys()):\r\n  File \"/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py\", line 3454, in \r\n    if any(\"[\" in key or \"]\" in key for key in record.keys()):\r\nTypeError: argument of type 'NoneType' is not iterable\r\n```\r\n\r\nCode:\r\nhttps://github.com/simonw/sqlite-utils/blob/59be60c471fd7a2c4be7f75e8911163e618ff5ca/sqlite_utils/db.py#L3454\r\n\r\n`sqlite-utils insert` from the command line is not affected by this issue.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/440/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 2023057255, "node_id": "I_kwDOBm6k_c54lWdn", "number": 2212, "title": "Can't filter with numbers", "user": {"value": 605070, "label": "fzakaria"}, "state": "open", 
"locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-12-04T05:26:29Z", "updated_at": "2023-12-04T05:26:29Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a schema that uses numbers for a column (actually it's a boolean 1 or 0 but SQLite doesn't have Boolean).\r\nI can't seem to get the facet to work or even filtering on this column.\r\n\r\nMy guess is that Datasette is \"stringifying\" the number and it's not matching?\r\nExample: https://debian-sqlelf.fly.dev/debian/elf_symbols?_sort_desc=name&_facet=exported&exported=0", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/2212/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1434911255, "node_id": "I_kwDOCGYnMM5VhwIX", "number": 510, "title": "Cannot enable FTS5 despite it being available", "user": {"value": 1176293, "label": "ar-jan"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-11-03T16:03:49Z", "updated_at": "2022-11-18T18:37:52Z", "closed_at": "2022-11-17T10:36:28Z", "author_association": "NONE", "pull_request": null, "body": "When I do `sqlite-utils enable-fts my.db table_name column_name` (with or without `--fts5`), I get an FTS4 virtual table instead of the expected FTS5.\r\n\r\nFTS5 is however available and Python/SQLite versions do not seem to be the issue. I can manually create the FTS5 virtual table, and then Datasette also works with it from this same Python environment.\r\n\r\n`>>> sqlite3.version`\r\n`2.6.0`\r\n`>>> sqlite3.sqlite_version`\r\n`3.39.4`\r\n\r\n`PRAGMA compile_options;` includes `ENABLE_FTS5`.\r\n\r\n`sqlite-utils, version 3.30`.\r\n\r\nAny ideas what's happening and how to fix?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 919314806, "node_id": "MDU6SXNzdWU5MTkzMTQ4MDY=", "number": 270, "title": "Cannot set type JSON", "user": {"value": 4068, "label": "frafra"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-06-11T23:53:22Z", "updated_at": "2021-06-16T17:34:49Z", "closed_at": "2021-06-16T15:47:06Z", "author_association": "NONE", "pull_request": null, "body": "It would be great if the column type could be set to JSON. That would not be different from handling a regular string. 
It would be something like `repr(value)` and it would work with both JSON and CSV inputs, no matter if `value` is a real list or just a string representing a list.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/270/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 826064552, "node_id": "MDU6SXNzdWU4MjYwNjQ1NTI=", "number": 1253, "title": "Capture \"Ctrl + Enter\" or \"\u2318 + Enter\" to send SQL query?", "user": {"value": 9308268, "label": "rayvoelker"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-03-09T15:00:50Z", "updated_at": "2021-10-30T16:00:42Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "It appears as though \"Shift + Enter\" triggers the form submit action to submit SQL, but could that action be bound to the \"Ctrl + Enter\" or \"\u2318 + Enter\" action?\r\n\r\nI feel like that pattern already exists in a number of similar tools and could improve usability of the editor.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1253/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1907281675, "node_id": "I_kwDOCGYnMM5xrs8L", "number": 595, "title": "Cascading DELETE not working with Table.delete(pk)", "user": {"value": 123451970, "label": "cycle-data"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2023-09-21T15:46:41Z", "updated_at": "2023-09-25T09:38:57Z", "closed_at": "2023-09-25T09:38:13Z", "author_association": "NONE", "pull_request": null, "body": "Hi!\r\nI noticed that when I am trying to use the delete method of the Table object,\r\nthe record gets properly deleted from the table, but the cascading delete triggers on foreign keys do not activate.\r\n\r\n`self.db[\"contact\"].delete(contact_id)`\r\n\r\nI tried querying the database directly via DB Browser and the triggers work without any issue.\r\nI looked up the source code, and behind the scenes this method is just querying the database normally, so I'm not exactly sure where this behavior comes from.\r\n\r\nThank you in advance for your time!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/595/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 569317377, "node_id": "MDU6SXNzdWU1NjkzMTczNzc=", "number": 681, "title": "Cache-header missing in http-response", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-02-22T10:50:45Z", "updated_at": "2020-02-24T20:53:57Z", "closed_at": "2020-02-24T20:53:56Z", "author_association": "NONE", "pull_request": null, "body": "Hi Simon. 
I need some help with both understanding and adding HTTP headers. If I call datasette on localhost with --config default_cache_ttl:120 and --cors, I only get the following response headers:\r\n\r\naccess-control-allow-origin: *\r\ncontent-type: text/html; charset=utf-8\r\ndate: Sat, 22 Feb 2020 10:32:15 GMT\r\nreferrer-policy: no-referrer\r\nserver: uvicorn\r\ntransfer-encoding: chunked\r\n\r\nCORS works, but no caching header is set? Same thing happens if I use the command in a Dockerfile and run datasette with docker.\r\n\r\nSecond, how can one add headers to uvicorn? I've tried to add uvicorn commands to the Dockerfile, before the final datasette command, but it doesn't work. Is there any way to add headers to the uvicorn.run() command in datasette? In particular, I would like to add some of the missing security headers:\r\n\r\n[Screenshot]\r\n\r\nThank you for a great product!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/681/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 781262510, "node_id": "MDU6SXNzdWU3ODEyNjI1MTA=", "number": 1181, "title": "Certain database names result in 404: \"Database not found: None\"", "user": {"value": 1470389, "label": "jieter"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 6346396, "label": "Datasette 0.54"}, "comments": 4, "created_at": "2021-01-07T12:01:16Z", "updated_at": "2021-12-21T18:25:15Z", "closed_at": "2021-01-25T05:13:19Z", "author_association": "NONE", "pull_request": null, "body": "I have a file named `test-database (1).sqlite`. 
When requesting the home route `/`, I see datasette is able to read it correctly:\r\n\r\n[Screenshot]\r\n\r\nHowever, if I click any of the links, datasette replies with: `Error 404 Database not found: None`\r\n\r\nIt seems the hash is crucial, as renaming the file to `database (1).sqlite` makes the error go away.\r\n\r\nThis line checks for a single dash:\r\nhttps://github.com/simonw/datasette/blob/97fb10c17dd007a275ab743742e93e932335ad67/datasette/views/base.py#L184\r\n\r\n```\r\n$ datasette test-database\\ \\(1\\).sqlite \r\nINFO: Started server process [68314]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nINFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)\r\nINFO: 127.0.0.1:54043 - \"GET /favicon.ico HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:54043 - \"GET / HTTP/1.1\" 200 OK\r\n...\r\nINFO: 127.0.0.1:54044 - \"GET /favicon.ico HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:54044 - \"GET /test-database (1) HTTP/1.1\" 404 Not Found\r\n\r\n```\r\nVersion:\r\n```\r\n$ datasette --version\r\ndatasette, version 0.53\r\n```\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1181/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 508100844, "node_id": "MDU6SXNzdWU1MDgxMDA4NDQ=", "number": 598, "title": "Character encoding bug with CSV export", "user": {"value": 46313, "label": "JoeGermuska"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-10-16T21:09:30Z", "updated_at": "2021-06-17T18:13:20Z", "closed_at": "2019-10-18T22:52:21Z", "author_association": "NONE", "pull_request": null, "body": "I was just poking around, and at [this URL](https://sql-murder-mystery.datasette.io/sql-murder-mystery/crime_scene_report.csv?_stream=on&type=arson&_size=max), I encountered this error:\r\n\r\n```\r\n'latin-1' codec can't encode character '\\u2019' in position 27: ordinal not in range(256)\r\n```\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/598/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1257724585, "node_id": "I_kwDOCGYnMM5K91qp", "number": 441, "title": "Combining `rows_where()` and `search()` to limit which rows are searched", "user": {"value": 1448859, "label": "betatim"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2022-06-02T06:01:55Z", "updated_at": "2022-06-14T21:57:57Z", "closed_at": "2022-06-14T21:54:38Z", "author_association": "NONE", "pull_request": null, "body": "What is the right way to limit a full text search query to some rows of a table?\r\n\r\nFor example, I have a table that contains the following columns: `title`, `content`, `owner` (each row represents a document). The `owner` column is a username. It feels right to store all documents in one table, instead of having one table per owner. 
In particular because I'd like to full-text search all documents, only documents owned by one user, and documents owned by a set of users.\r\n\r\nI tried to combine `.rows_where(\"owner = ?\", \"1234\")` and `.search()` from the `Table` class but I don't think that is meant to work. I discovered `.search_sql()` as a way to generate the FTS SQL statement. By hand I can edit it to add an `AND [original].[owner] = :owner` to the `where` clause. This seems to do what I want.\r\n\r\nMy two questions:\r\n1. is adding an `AND ...` to the `where` clause actually the right thing to do or should I be doing something else (my SQL skills are low)?\r\n2. is there a built-in sqlite-utils way to achieve this?\r\n\r\nRight now I am thinking I will make my own version of `search_sql()` that generates a query that contains an additional `owner = :owner` for my particular use-case.\r\n\r\nBonus question: is this generally useful/something to add to sqlite-utils or too niche?", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/441/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1353441389, "node_id": "I_kwDOCGYnMM5Qq-Bt", "number": 477, "title": "Conda Forge", "user": {"value": 49702524, "label": "thewchan"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-08-28T19:03:08Z", "updated_at": "2022-09-07T03:46:55Z", "closed_at": "2022-09-07T03:46:55Z", "author_association": "NONE", "pull_request": null, "body": "Hello! I have successfully put this package onto Conda Forge, and I am extending the invitation for the owner/maintainers of this package to be maintainers on Conda Forge as well. Let me know if you are interested! Thanks.\r\nhttps://github.com/conda-forge/sqlite-utils-feedstock", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/477/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1157182254, "node_id": "I_kwDOBm6k_c5E-TMu", "number": 1646, "title": "Configuration directory mode does not pick up other file extensions than .db", "user": {"value": 15640196, "label": "dnsos"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2022-03-02T13:15:23Z", "updated_at": "2022-10-07T23:06:17Z", "closed_at": "2022-10-07T23:03:35Z", "author_association": "NONE", "pull_request": null, "body": "Hello, I've been trying to run Datasette with the [configuration directory mode](https://docs.datasette.io/en/stable/settings.html#configuration-directory-mode) with a structure such as this one:\r\n\r\n```plain\r\nsome-directory/\r\n    example.sqlite3\r\n    another-example.db\r\n    one-more.custom\r\n    [...]\r\n```\r\n\r\n(In my scenario I can't just change the filename extension without other problems arising)\r\n\r\nNow databases with the `.sqlite3` or the custom filename extension are ignored by Datasette in this case. 
I'm aware that the docs state that a `.db` extension is required, but I was wondering if there is a reason for restricting this or any workaround available? When I run `datasette example.sqlite3` or `datasette one-more.custom` the databases are served by Datasette without a problem. \r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1646/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 473307794, "node_id": "MDU6SXNzdWU0NzMzMDc3OTQ=", "number": 565, "title": "Conflict between datasette and uvicorn click versions", "user": {"value": 440503, "label": "jonheslop"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-07-26T11:13:40Z", "updated_at": "2020-10-02T00:09:55Z", "closed_at": "2020-10-02T00:09:55Z", "author_association": "NONE", "pull_request": null, "body": "Hello! Datasette is awesome, thanks so much!\r\n\r\nI'm not very familiar with Python, but I think there is a problem with datasette Docker builds.\r\n\r\nI keep getting this error:\r\n\r\n```\r\nERROR: uvicorn 0.8.4 has requirement click==7.*, but you'll have click 6.0 which is incompatible.\r\nERROR: datasette 0.29.2 has requirement click~=7.0, but you'll have click 6.0 which is incompatible.\r\n```\r\n\r\nThe full log from the docker build is here - https://gist.github.com/jonheslop/e01cd322e761cfaf34f0cb83f86411b0\r\n\r\nJust in case it\u2019s helpful, this is my setup - https://github.com/dotwatcher/dotwatcher-data", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/565/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1387712501, "node_id": "I_kwDOBm6k_c5Sts_1", "number": 1824, "title": "Convert &_hide_sql=1 to #_hide_sql", "user": {"value": 562352, "label": "CharlesNepote"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2022-09-27T12:53:31Z", "updated_at": "2022-10-05T12:56:27Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hiding the SQL textarea with `&_hide_sql=1` forces a page reload, which can take several seconds and use server resources (which is annoying for big databases or complex queries).\r\n\r\nIt could probably be done with a few lines of Javascript (I'm going to see if I can do that).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1824/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1171599874, "node_id": "I_kwDOCGYnMM5F1TIC", "number": 415, "title": "Convert with `--multi` and `--dry-run` flag does not work", "user": {"value": 3976183, "label": "dotcs"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": 
"2022-03-16T21:59:46Z", "updated_at": "2022-03-21T04:18:24Z", "closed_at": "2022-03-21T04:18:24Z", "author_association": "NONE", "pull_request": null, "body": "It's not possible to combine `--multi` and `--dry-run` flag in the `convert` command.\r\n\r\nLet's first create a simple database from JSON string\r\n\r\n```console\r\n$ echo '[{\"foo\": \"abc\"}]' | sqlite-utils insert demo.db demo -\r\n$ sqlite-utils query demo.db \"SELECT * FROM demo\" \r\n[{\"foo\": \"abc\"}]\r\n```\r\n\r\nand then try to convert the \"foo\" column with a static value \"bar\" (see docs [Converting a column into multiple columns](https://sqlite-utils.datasette.io/en/stable/cli.html#converting-a-column-into-multiple-columns))\r\n\r\n```console\r\n$ sqlite-utils convert demo.db demo foo '{\"foo\": \"bar\"}' --multi --dry-run\r\nTraceback (most recent call last):\r\n File \"/home/dotcs/anaconda3/envs/tools/bin/sqlite-utils\", line 8, in \r\n sys.exit(cli())\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/sqlite_utils/cli.py\", line 2686, in convert\r\n for row in db.conn.execute(sql, where_args).fetchall():\r\nsqlite3.OperationalError: user-defined function raised exception\r\n```\r\n\r\nBut without the `--dry-run` flag it does work as expected:\r\n\r\n```console\r\n$ sqlite-utils convert demo.db demo foo '{\"foo\": \"bar\"}' --multi\r\n$ sqlite-utils query demo.db \"SELECT * FROM demo\" \r\n[{\"foo\": \"bar\"}]\r\n```\r\n\r\n```console\r\n$ sqlite-utils --version\r\nsqlite-utils, version 3.25.1\r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/415/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1128466114, "node_id": "I_kwDOCGYnMM5DQwbC", "number": 406, "title": "Creating tables with custom datatypes", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2022-02-09T12:16:31Z", "updated_at": "2022-09-15T18:13:50Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Via https://stackoverflow.com/a/18622264/454773 I note the ability to register custom handlers for novel datatypes that can map into and out of things like sqlite `BLOB`s.\r\n\r\nFrom a quick look and a quick play, I didn't spot a way to do this in `sqlite_utils`?\r\n\r\nFor example:\r\n\r\n```python\r\n# Via https://stackoverflow.com/a/18622264/454773\r\nimport sqlite3\r\nimport numpy as np\r\nimport io\r\n\r\ndef adapt_array(arr):\r\n 
\"\"\"\r\n http://stackoverflow.com/a/31312102/190597 (SoulNibbler)\r\n \"\"\"\r\n out = io.BytesIO()\r\n np.save(out, arr)\r\n out.seek(0)\r\n return sqlite3.Binary(out.read())\r\n\r\ndef convert_array(text):\r\n out = io.BytesIO(text)\r\n out.seek(0)\r\n return np.load(out)\r\n\r\n\r\n# Converts np.array to TEXT when inserting\r\nsqlite3.register_adapter(np.ndarray, adapt_array)\r\n\r\n# Converts TEXT to np.array when selecting\r\nsqlite3.register_converter(\"array\", convert_array)\r\n```\r\n\r\n```python\r\nfrom sqlite_utils import Database\r\ndb = Database('test.db')\r\n\r\n# Reset the database connection to used the parsed datatype\r\n# sqlite_utils doesn't seem to support eg:\r\n# Database('test.db', detect_types=sqlite3.PARSE_DECLTYPES)\r\ndb.conn = sqlite3.connect(db_name, detect_types=sqlite3.PARSE_DECLTYPES)\r\n\r\n# Create a table the old fashioned way\r\n# but using the new custom data type\r\nvector_table_create = \"\"\"\r\nCREATE TABLE dummy \r\n (title TEXT, vector array );\r\n\"\"\"\r\n\r\ncur = db.conn.cursor()\r\ncur.execute(vector_table_create)\r\n\r\n\r\n# sqlite_utils doesn't appear to support custom types (yet?!)\r\n# The following errors on the \"array\" datatype\r\n\"\"\"\r\ndb[\"dummy\"].create({\r\n \"title\": str,\r\n \"vector\": \"array\",\r\n})\r\n\"\"\"\r\n```\r\n\r\nWe can then add / retrieve records from the database where the datatype of the `vector` field is a custom registered `array` type (which is to say, a `numpy` array):\r\n\r\n```python\r\nimport numpy as np\r\n\r\ndb[\"dummy\"].insert({'title':\"test1\", 'vector':np.array([1,2,3])})\r\n\r\nfor row in db.query(\"SELECT * FROM dummy\"):\r\n print(row['title'], row['vector'], type(row['vector']))\r\n\r\n\"\"\"\r\ntest1 [1 2 3] \r\n\"\"\"\r\n```\r\n\r\nIt would be handy to be able to do this idiomatically in `sqlite_utils`.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/406/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 699947574, "node_id": "MDU6SXNzdWU2OTk5NDc1NzQ=", "number": 963, "title": "Currently selected array facets are not correctly persisted through hidden form fields", "user": {"value": 649467, "label": "mhalle"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 5818042, "label": "Datasette 0.49"}, "comments": 1, "created_at": "2020-09-12T01:49:17Z", "updated_at": "2020-09-12T21:54:29Z", "closed_at": "2020-09-12T21:54:09Z", "author_association": "NONE", "pull_request": null, "body": "Faceted search uses JSON array elements as facets rather than the arrays. However, if a search is \"Apply\"ed (using the Apply button), the array itself rather than its elements used. \r\n\r\nTo reproduce:\r\nhttps://latest.datasette.io/fixtures/facetable?_sort=pk&_facet=created&_facet=tags&_facet_array=tags\r\n\r\nPress \"Apply\", which might be done when removing a filter. Notice that the \"tags\" facet values are now arrays, not array elements. 
It appears the \"&_facet_array=tags\" element of the query string is dropped.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/963/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1550536442, "node_id": "I_kwDOCGYnMM5ca076", "number": 521, "title": "Custom JSON encoder", "user": {"value": 31504, "label": "janrito"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2023-01-20T09:19:40Z", "updated_at": "2023-01-20T09:19:40Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "It would be nice if we could specify a custom encoder (and decoder) for types that will need extra deserialisation \u2013 e.g., sets, enums or sparse matrices \u2013 or even project-specific types", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/521/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1221849746, "node_id": "I_kwDOBm6k_c5I0_KS", "number": 1732, "title": "Custom page variables aren't decoded", "user": {"value": 52649, "label": "tannewt"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2022-04-30T14:55:46Z", "updated_at": "2022-05-03T01:50:45Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I have a page `templates/filer/{filer_id}.html`. It uses `filer_id` in a `sql()` call to fetch data. With 0.61.1 this no longer works because the spaces in IDs isn't preserved. 
Instead, the escaped version is passed into the template and the id isn't present in my db.\r\n\r\nDatasette should unescape the url component before passing it into the template.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1732/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1075893249, "node_id": "I_kwDOBm6k_c5AINQB", "number": 1545, "title": "Custom pages don't work on windows", "user": {"value": 559711, "label": "ryascott"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2021-12-09T18:53:05Z", "updated_at": "2022-02-03T02:08:31Z", "closed_at": "2022-02-03T01:58:35Z", "author_association": "NONE", "pull_request": null, "body": "It seems that custom pages don't work when put in templates/pages.\r\n\r\nTo reproduce on datasette version 0.59.4 using PowerShell on Windows 10 with Python 3.10.0\r\n\r\n    mkdir -p templates/pages\r\n\r\n    echo \"hello world\" >> templates/pages/about.html\r\n\r\nStart datasette\r\n    \r\n    datasette --template-dir templates/\r\n\r\nNavigate to [http://127.0.0.1:8001/about](url) and receive:\r\n    \r\n    Error 404:\r\n    Database not found: about\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1545/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 813899472, "node_id": "MDU6SXNzdWU4MTM4OTk0NzI=", "number": 1238, "title": "Custom pages don't work with base_url setting", "user": {"value": 79913, "label": "tsibley"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 9, "created_at": "2021-02-22T21:58:58Z", "updated_at": "2021-06-05T18:59:55Z", "closed_at": "2021-06-05T18:59:55Z", "author_association": "NONE", "pull_request": null, "body": "It seems that custom pages aren't routing properly when the `base_url` setting is used.\r\n\r\nTo reproduce, with Datasette 0.55.\r\n\r\nCreate a `templates/pages/custom.html` with some text.\r\n```\r\nmkdir -p templates/pages/\r\necho \"Hello, world!\" > templates/pages/custom.html\r\n```\r\n\r\nStart Datasette.\r\n\r\n```\r\ndatasette --template-dir templates/\r\n```\r\n\r\nVisit http://localhost:8001/custom and see \"Hello, world!\".\r\n\r\nStart Datasette with a `base_url`.\r\n\r\n```\r\ndatasette --template-dir templates/ --setting base_url /prefix/\r\n```\r\n\r\nVisit http://localhost:8001/prefix/custom and see a \"Database not found: custom\" 404.\r\n\r\nNote that like all routes, http://localhost:8001/custom still works when run with `base_url`.\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1238/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 791381623, "node_id": "MDU6SXNzdWU3OTEzODE2MjM=", "number": 1197, 
"title": "DB size limit for publishing with Heroku", "user": {"value": 1186275, "label": "mtdukes"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-01-21T18:08:43Z", "updated_at": "2021-01-24T20:53:44Z", "closed_at": "2021-01-24T20:53:44Z", "author_association": "NONE", "pull_request": null, "body": "Hello,\r\nI tried searching for this, but can't seem to get a great answer: Does anybody know the size limit for databases deploying to Heroku? The files I'm working with are pretty large, but I might be able to pare them down if I have a limit in mind.\r\n\r\n I'm getting the following error when running `datasette heroku publish`:\r\n\r\n`RangeError [ERR_INVALID_OPT_VALUE]: The value \"14504095744\" is invalid for option \"size\"`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/1197/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 1515883470, "node_id": "I_kwDOC8tyDs5aWovO", "number": 24, "title": "DOC: xml.etree.ElementTree.ParseError due to healthkit version 12 ", "user": {"value": 6231413, "label": "mmngreco"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2023-01-01T23:00:38Z", "updated_at": "2023-03-30T10:17:31Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi @simonw \r\n\r\nI hope you find this issue ok, the idea is provide some documentation to other users like me about how to solve this problem and save some time.\r\n\r\nFollowing the instructions from the `README.md` I've faced this error:\r\n\r\n```bash\r\n(venv) mgreco@pop-os apple-health master* (23:44|0s)\r\n$ healthkit-to-sqlite apple_health_export/export.xml healthkit.db --xml\r\nImporting from HealthKit [------------------------------------] 0%\r\nTraceback (most recent call last):\r\n File \"/home/mgreco/github/mmngreco/apple-health/venv/bin/healthkit-to-sqlite\", line 33, in \r\n sys.exit(load_entry_point('healthkit-to-sqlite', 'console_scripts', 'healthkit-to-sqlite')())\r\n File \"/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/mgreco/github/mmngreco/apple-health/.deps/healthkit-to-sqlite/healthkit_to_sqlite/cli.py\", line 57, in cli\r\n convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf)\r\n File \"/home/mgreco/github/mmngreco/apple-health/.deps/healthkit-to-sqlite/healthkit_to_sqlite/utils.py\", line 25, in convert_xml_to_sqlite\r\n for tag, el in find_all_tags(\r\n File \"/home/mgreco/github/mmngreco/apple-health/.deps/healthkit-to-sqlite/healthkit_to_sqlite/utils.py\", line 12, in find_all_tags\r\n for event, el in parser.read_events():\r\n File 
\"/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/xml/etree/ElementTree.py\", line 1324, in read_events\r\n raise event\r\n File \"/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/xml/etree/ElementTree.py\", line 1296, in feed\r\n self._parser.feed(data)\r\nxml.etree.ElementTree.ParseError: syntax error: line 156, column 0\r\n```\r\n\r\nSo, after debugging and searching on internet I found this useful link: https://discussions.apple.com/thread/254202523 (etresoft, the real hero). Which basically says that the xml given by the health app (healthkit version 12) has some bugs but fortunately, they can be solved with a couple of commads:\r\n\r\n1. Uncompress the zip and move the new folder where `export.xml` is.\r\n1. Create a `patch.txt` with the following content\r\n\r\n ```diff\r\n --- export.xml\t2022-09-18 15:17:09.000000000 -0400\r\n +++ export-fixed.xml\t2022-09-18 16:37:08.000000000 -0400\r\n @@ -15,6 +15,7 @@\r\n HKCharacteristicTypeIdentifierBiologicalSex CDATA #REQUIRED\r\n HKCharacteristicTypeIdentifierBloodType CDATA #REQUIRED\r\n HKCharacteristicTypeIdentifierFitzpatrickSkinType CDATA #REQUIRED\r\n + HKCharacteristicTypeIdentifierCardioFitnessMedicationsUse CDATA #IMPLIED\r\n >\r\n \r\n \r\n -\r\n +\r\n \r\n -\r\n +\r\n \r\n \r\n \r\n \r\n \r\n - device CDATA #IMPLIED\r\n -\r\n -\r\n ->\r\n ]>\r\n \r\n \r\n ```\r\n1. Apply the path with the command: `patch < patch.txt`\r\n1. Fix endDates with the command `sed 's/startDate/endDate/2' export.xml > export-fixed.xml`\r\n1. Try again `healthkit-to-sqlite export-fixed.xml healthkit.db --xml`", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/24/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 1077560091, "node_id": "I_kwDODEm0Qs5AOkMb", "number": 61, "title": "Data Pull fails for \"Essential\" level access to the Twitter API (for Documentation)", "user": {"value": 57161638, "label": "jmnickerson05"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2021-12-11T14:59:41Z", "updated_at": "2022-10-31T14:47:58Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Per Twitter documentation:\r\nhttps://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api#v2-access-leve\r\n\r\nThis isn't any fault of twitter-to-sqlite of course, but it should probably be documented as a side-note.\r\n\r\n![image](https://user-images.githubusercontent.com/57161638/145681272-8c85b3b9-be95-44ff-9760-1bafa4917ce2.png)\r\n\r\nAnd this is how I'm surfacing the message from utils.py:\r\n![image](https://user-images.githubusercontent.com/57161638/145681005-2776c0ad-9822-4461-b43a-450ab2e828eb.png)\r\n", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/61/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}