id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason
2007893839,I_kwDOCGYnMM53rgdP,605,Insert fails with `Error: Python int too large to convert to SQLite INTEGER`; can we use `NUMERIC` here?,12229877,closed,0,,,1,2023-11-23T10:19:46Z,2023-12-08T05:07:54Z,2023-12-08T05:07:54Z,NONE,,"I'm currently working on a new feature for Hypothesis, where we can dump a tidy jsonlines table of all the test cases we tried - including arguments, outcomes, timings, coverage, etc. Exploring this seems like a perfect case for `sqlite-utils` and `datasette`, but I pretty quickly ran into an integer overflow problem and don't want to recommend that experience to my users.

I originally went to report this as a bug... and then found https://github.com/simonw/sqlite-utils/issues/309#issuecomment-895581038 almost exactly matched my repro 😅

https://github.com/simonw/sqlite-utils/issues/110#issuecomment-626391063 suggests that using `NUMERIC` would avoid this overflow error, although ""If the TEXT value is a well-formed integer literal that is too large to fit in a 64-bit signed integer, it is converted to REAL."" suggests that this would come at the cost of rounding to the nearest float value. Maybe I should just convert large integers to float before writing out my json?

After a bit more hacking, ""manually cast large integers to float"" seems like a decent solution for my particular case, but having written it up I thought I might as well post this issue anyway - I hope it's useful feedback, and won't mind at all if you close as wontfix if it's not.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/605/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1553425465,I_kwDOCGYnMM5cl2Q5,522,Add COLUMN_TYPE_MAPPING for timedelta,81377,closed,0,,,0,2023-01-23T16:49:54Z,2023-11-04T00:49:51Z,2023-11-04T00:49:51Z,NONE,,"Currently trying to create a column with Python type `datetime.timedelta` results in an error:

```
>>> from sqlite_utils import Database
>>> db = Database(""test.db"")
>>> test_tbl = db['test']
>>> test_tbl.insert({'col1': datetime.timedelta()})
Traceback (most recent call last):
  File ""<stdin>"", line 1, in <module>
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 2979, in insert
    return self.insert_all(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 3082, in insert_all
    self.create(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 1574, in create
    self.db.create_table(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 961, in create_table
    sql = self.create_table_sql(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 852, in create_table_sql
    column_type=COLUMN_TYPE_MAPPING[column_type],
KeyError: <class 'datetime.timedelta'>
```

The reason this would be useful is that `MySQLdb` uses `timedelta` for MySQL `TIME` columns:

```
>>> import MySQLdb
>>> conn = MySQLdb.connect(host='database', user='user', passwd='pw')
>>> csr = conn.cursor()
>>> csr.execute(""SELECT CAST('11:20' AS TIME)"")
>>> tuple(csr)
((datetime.timedelta(seconds=40800),),)
```

So currently any attempt to convert a MySQL DB with a `TIME` column using `db-to-sqlite` will result in the above error.
I was rather surprised that `MySQLdb` uses `timedelta` for `TIME` columns, but I see that [this column type](https://dev.mysql.com/doc/refman/8.0/en/time.html) is intended for time intervals as well as the time of day, so it makes sense.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/522/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
410384988,MDU6SXNzdWU0MTAzODQ5ODg=,411,How to pass named parameter into spatialite MakePoint() function,1055831,closed,0,,,3,2019-02-14T16:30:22Z,2023-10-25T13:23:04Z,2019-05-05T12:25:04Z,NONE,,"Hi,

datasette version: ""0.26.2""
extensions: spatialite: ""4.4.0-RC0""
sqlite version: ""3.22.0""

I have a table of airports with latitude and longitude columns. I've added spatialite (with KNN support). After creating the db using csvs-to-sqlite, I run these commands to set up the spatialite tables:

```
conn.execute('SELECT InitSpatialMetadata(1)')
conn.execute(""SELECT AddGeometryColumn('airports', 'point_geom', 4326, 'POINT', 2);"")
conn.execute('''UPDATE airports SET point_geom = GeomFromText('POINT('||""longitude""||' '||""latitude""||')',4326);''')
conn.execute(""SELECT CreateSpatialIndex('airports', 'point_geom');"")
```

I'm attempting to create a canned query and have this in my metadata.json file:

```
""find_airports_nearest_to_point"":{
  ""sql"":""SELECT a.pos AS rank, b.id, b.name, b.country, b.latitude AS latitude, b.longitude AS longitude, a.distance / 1000.0 AS dist_km FROM KNN AS a JOIN airports AS b ON (b.rowid = a.fid) WHERE f_table_name = \""airports\"" AND ref_geometry = MakePoint( :Long , :Lat ) AND max_items = 10;""}
```

which doesn't seem to perform the templating of the named parameters correctly and I get no results.

Have also tried:

```
MakePoint( || :Long || , || :Lat || )
```

which returns this error:

```
near ""||"": syntax error
```

However I cannot seem to find the correct combination of named parameter syntax (:Lat) or sqlite concatenation operator to make it work. Any ideas if using named parameters inside functions is supported?

Thanks
Darren",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/411/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1910269679,I_kwDOBm6k_c5x3Gbv,2196,Discord invite link returns 401,1892194,closed,0,,,2,2023-09-24T15:16:54Z,2023-10-13T00:07:08Z,2023-10-12T21:54:54Z,NONE,,"I found the link to the datasette discord channel via [this query](https://github.com/search?q=repo%3Asimonw%2Fdatasette%20discord&type=code).
The following video should be self-explanatory: https://github.com/simonw/datasette/assets/1892194/8cd33e88-bcaa-41f3-9818-ab4d589c3f02

Link for reference: https://discord.com/invite/ktd74dm5mw",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2196/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1930008379,I_kwDOBm6k_c5zCZc7,2197,click-default-group-wheel dependency conflict,1176293,closed,0,,,3,2023-10-06T11:49:20Z,2023-10-12T21:53:17Z,2023-10-12T21:53:17Z,NONE,,"I upgraded my dependencies, then ran into this problem running `datasette inspect`:

> env/lib/python3.9/site-packages/datasette/cli.py"", line 6, in <module>
> from click_default_group import DefaultGroup
> ModuleNotFoundError: No module named 'click_default_group'

Turns out the released version of datasette still depends on `click-default-group-wheel`, so `click-default-group` doesn't get installed/recognized:

```
$ virtualenv venv
$ source venv/bin/activate
$ pip install datasette
$ pip list | grep click-default-group
click-default-group       1.2.4
click-default-group-wheel 1.2.3
$ python -c ""from click_default_group import DefaultGroup""
Traceback (most recent call last):
  File ""<string>"", line 1, in <module>
ModuleNotFoundError: No module named 'click_default_group'
$ pip install --force-reinstall click-default-group
...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasette 0.64.4 requires click-default-group-wheel>=1.2.2, which is not installed.
Successfully installed click-8.1.7 click-default-group-1.2.4
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2197/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1907281675,I_kwDOCGYnMM5xrs8L,595,Cascading DELETE not working with Table.delete(pk),123451970,closed,0,,,1,2023-09-21T15:46:41Z,2023-09-25T09:38:57Z,2023-09-25T09:38:13Z,NONE,,"Hi! I noticed that when I am trying to use the delete method of the Table object, the record gets properly deleted from the table, but the cascading delete triggers on foreign keys do not activate.

`self.db[""contact""].delete(contact_id)`

I tried querying the database directly via DB Browser and the triggers work without any issue. Looked up the source code, and behind the scenes this method is just querying the database normally, so I'm not exactly sure where this behavior comes from.

Thank you in advance for your time!",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/595/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1871935751,I_kwDOD079W85vk3kH,40, ImportError: cannot import name 'formatargspec' from 'inspect',36752421,closed,0,,,0,2023-08-29T15:36:31Z,2023-08-31T03:18:07Z,2023-08-31T03:18:06Z,NONE,,"I get the following error when running ""pip3 install dogsheep-photos"":

"" from inspect import ismethod, isclass, formatargspec
ImportError: cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/inspect.py).
Did you mean: 'formatargvalues'?""

Python 3.12.0rc1
sqlite 3.43.0
datasette, version 0.64.3",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/40/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
787098146,MDU6SXNzdWU3ODcwOTgxNDY=,1190,`datasette publish upload` mechanism for uploading databases to an existing Datasette instance,1024355,closed,0,,,3,2021-01-15T18:18:42Z,2023-08-30T22:16:39Z,2023-08-30T22:16:38Z,NONE,,"If I have a self-hosted instance of Datasette up and running, I'd like to be able to use the CLI to publish databases to that instance, not only Google or Heroku. Ideally there'd be a `url` parameter or something similar with which one could point the publish command to their instance.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1190/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1838266862,I_kwDOBm6k_c5tkbnu,2126,Permissions in metadata.yml / metadata.json,36199671,closed,0,,,3,2023-08-06T16:24:10Z,2023-08-11T05:52:30Z,2023-08-11T05:52:29Z,NONE,,"https://docs.datasette.io/en/latest/authentication.html#other-permissions-in-metadata says the following:

> For all other permissions, you can use one or more ""permissions"" blocks in your metadata.

> To grant access to the permissions debug tool to all signed in users you can grant permissions-debug to any actor with an id matching the wildcard * by adding this at the root of your metadata:

```yaml
permissions:
  debug-menu:
    id: '*'
```

I tried this. My `metadata.yml` file looks like:

```yaml
permissions:
  debug-menu:
    id: '*'
  permissions-debug:
    id: '*'
plugins:
  datasette-auth-passwords:
    myuser_password_hash:
      $env: ""PASSWORD_HASH_MYUSER""
```

And then I run

```zsh
datasette -m metadata.yml tiddlywiki.db --root
```

And I open a session for the ""root"" user of datasette with the link given.

I open a private browser session and log in as ""myuser"" from http://127.0.0.1:8001/-/login

Then I check http://127.0.0.1:8001/-/actor which confirms that I am logged in as the ""myuser"" actor:

```json
{
  ""actor"": {
    ""id"": ""myuser""
  }
}
```

In the session where I am logged in as ""myuser"" I then try to go to http://127.0.0.1:8001/-/permissions

But all I get there as the logged in user ""myuser"" is

> Forbidden
>
> Permission denied

And then if I check http://127.0.0.1:8001/-/permissions as the datasette ""root"" user from another browser session, I see:

> permissions-debug checked at 2023-08-06T16:22:58.997841 ✗ (used default)
>
> Actor: {""id"": ""myuser""}

It seems that in spite of having tried to give the `permissions-debug` permission to the ""myuser"" user in my `metadata.yml` file, datasette does not agree that ""myuser"" has permission `permissions-debug`.
What do I need to do differently so that my ""myuser"" user is able to access http://127.0.0.1:8001/-/permissions ?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2126/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1841501975,I_kwDOBm6k_c5twxcX,2133,[feature request]`datasette install plugins.json` options,54462,closed,0,,,9,2023-08-08T15:06:50Z,2023-08-10T00:31:24Z,2023-08-09T22:04:46Z,NONE,,"Hi, simon ❤️

`datasette plugins --all > plugins.json` could generate info on all installed plugins. On another machine, it would be great to install all of those plugins just by running `datasette install plugins.json`.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2133/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1817281557,I_kwDOC8SPRc5sUYQV,37,cannot use jinja filters in display?,10352819,closed,0,,,1,2023-07-23T20:09:54Z,2023-07-23T20:18:27Z,2023-07-23T20:18:26Z,NONE,,"Hi, I'm trying to have a display function in Dogsheep's `config.yml` that includes something like this:

```

{{ display.title }} (source)

{{ display.snippet|safe }}

```

Unfortunately, rendering fails with the message 'urls is undefined'. The same happens if I'm trying to build a row URL manually, using filters like `quote_plus` (as my keys are URLs).

Any hints? Thanks!",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/37/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
810618495,MDU6SXNzdWU4MTA2MTg0OTU=,235,Extract columns cannot create foreign key relation: sqlite3.OperationalError: table sqlite_master may not be modified,6913891,closed,0,,,18,2021-02-17T23:33:23Z,2023-06-26T01:47:01Z,2023-06-25T23:25:53Z,NONE,,"Thanks for what seems like a truly great suite of libraries. I wanted to try out Datasette, but never got more than halfway through your YouTube video with the SF tree dataset. Whenever I try to extract a column, I get a `sqlite3.OperationalError: table sqlite_master may not be modified` error from Python.

This snippet reproduces the error on my system, Python 3.9.1 and sqlite-utils 3.5 on an M1 MacBook Pro running in rosetta mode:

```
curl ""https://data.nasa.gov/resource/y77d-th95.json"" | \
  sqlite-utils insert meteorites.db meteorites - --pk=id
sqlite-utils extract meteorites.db meteorites recclass
```

I have tried googling the problem, but all I've found is that this *might* be a problem with the sqlite3 database running in defensive mode, but I definitely can't know for sure.

Does the problem seem familiar to you?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/235/reactions"", ""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1279144769,I_kwDOCGYnMM5MPjNB,448,Reading rows from a file => AttributeError: '_io.StringIO' object has no attribute 'readinto',236907,closed,0,,,5,2022-06-21T21:48:27Z,2023-05-08T22:01:00Z,2023-05-08T22:01:00Z,NONE,,"Attempting to run the example given here (without extra bracket ;-): https://sqlite-utils.datasette.io/en/stable/python-api.html#reading-rows-from-a-file

```
from sqlite_utils.utils import rows_from_file
import io

rows, format = rows_from_file(io.StringIO(""id,name\n1,Cleo""))
print(list(rows), format)
# Outputs [{'id': '1', 'name': 'Cleo'}] Format.CSV
```

Gives error

```
>""c:\Program Files\Python37\python.exe"" test2.py
Traceback (most recent call last):
  File ""test2.py"", line 4, in <module>
    rows, format = rows_from_file(io.StringIO(""id,name\n1,Cleo""))
  File ""C:\Users\swood\Downloads\sqlite-utils-main-20220621\sqlite-utils-main\sqlite_utils\utils.py"", line 300, in rows_from_file
    first_bytes = buffered.peek(2048).strip()
AttributeError: '_io.StringIO' object has no attribute 'readinto'
```

I am running Python on Windows.

```
>""c:\Program Files\Python37\python.exe""
Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32
Type ""help"", ""copyright"", ""credits"" or ""license"" for more information.
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/448/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1465194249,I_kwDOCGYnMM5XVRcJ,514,upsert of new row with check constraints fails,193185,closed,0,,,5,2022-11-26T16:12:23Z,2023-05-08T21:50:52Z,2023-05-08T21:50:51Z,NONE,,"(I originally opened this in https://github.com/simonw/datasette-insert/issues/20, but I see that that library depends on sqlite-utils.)

In the case of a new row, upsert first adds the row, specifying only its pkeys: https://github.com/simonw/sqlite-utils/blob/965ca0d5f5bffe06cc02cd7741344d1ddddf9d56/sqlite_utils/db.py#L2783-L2787

This means that a table with NOT NULL (or other constraint) columns that aren't part of the pkey can't have new rows upserted.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/514/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1465194930,PR_kwDOCGYnMM5DvZxa,515,"upsert new rows with constraints, fixes #514",193185,closed,0,,,1,2022-11-26T16:15:21Z,2023-05-08T21:27:11Z,2023-05-08T21:27:10Z,NONE,simonw/sqlite-utils/pulls/515,"This fixes #514 by making the initial insert for upserts include all columns, so that new rows can be added to tables with non-pkey columns that have constraints.

(aside: I'm not a python programmer. `pip`? `pipenv`? `venv`? These are mystical incantations to me. The process to set up this repo for local development and testing was _so easy_. Thank you for the excellent contributing documentation!)

----
:books: Documentation preview :books:: https://sqlite-utils--515.org.readthedocs.build/en/515/",140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/515/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
1432377191,I_kwDOCGYnMM5VYFdn,509,`sqlite-utils transform` breaks DEFAULT string values and STRFTIME(),2199875,closed,0,,,0,2022-11-02T02:32:23Z,2023-05-08T21:13:38Z,2023-05-08T21:13:38Z,NONE,,"Very nice library! Our team found sqlite-utils through @simonw's [comment on the ""Simple declarative schema migration for SQLite"" article](https://news.ycombinator.com/item?id=31249823), and we were excited to use it, but unfortunately `sqlite-utils transform` seems to break our DB.

Running `sqlite-utils transform` to modify a column mangles the columns' DEFAULT values:

- Default string values are wrapped in extra single quotes
- Function expressions such as [`STRFTIME()`](https://www.sqlite.org/lang_datefunc.html) are turned into strings!

------

Here are steps to reproduce:

**Original database**

```
$ sqlite3 test.db << EOF
CREATE TABLE mytable (
    col1 TEXT DEFAULT 'foo',
    col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
)
EOF
$ sqlite3 test.db ""SELECT sql FROM sqlite_master WHERE name = 'mytable';""
CREATE TABLE mytable (
    col1 TEXT DEFAULT 'foo',
    col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
)
```

**Modified database after sqlite-utils**

```
$ sqlite3 test.db ""INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;""
foo|2022-11-02 02:26:58.038
$ sqlite-utils transform test.db mytable --rename col1 renamedcol1
$ sqlite3 test.db ""SELECT sql FROM sqlite_master WHERE name = 'mytable';""
CREATE TABLE ""mytable"" (
    [renamedcol1] TEXT DEFAULT '''foo''',
    [col2] TEXT DEFAULT 'STRFTIME(''%Y-%m-%d %H:%M:%f'', ''NOW'')'
)
$ sqlite3 test.db ""INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;""
foo|2022-11-02 02:26:58.038
'foo'|STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
```

(Related: #336)",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/509/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1620254998,I_kwDOCGYnMM5gkyEW,532,Show more information when JSON can't be imported with sqlite-utils insert,83080728,closed,0,,,2,2023-03-12T06:41:44Z,2023-05-08T20:32:16Z,2023-05-08T20:32:02Z,NONE,,"I am currently trying to import the [JSON export of my data from Discord](https://support.discord.com/hc/en-us/articles/360004027692-Requesting-a-Copy-of-your-Data), specifically `activity/reporting/events-*.json`

```
sqlite-utils.exe insert test.db reporting events-2023-00000-of-00001.json
[###################################-] 99% 00:00:00
Error: Invalid JSON - use --csv for CSV or --tsv for TSV files
```

Please show more information as to *why* this is invalid, if possible.

I am using version 3.30 with Python 3.10 on Windows 11.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/532/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1695428235,I_kwDOCGYnMM5lDi6L,538,`table.upsert_all` fails to write rows when `not_null` is present,1231935,closed,0,,,9,2023-05-04T07:30:38Z,2023-05-08T20:06:35Z,2023-05-08T19:27:02Z,NONE,,"I found an odd bug today, where calls to `table.upsert_all` don't write rows if you include the `not_null` kwarg.

## Repro Example

```py
from sqlite_utils import Database

db = Database(""upsert-test.db"")

db[""comments""].upsert_all(
    [{""id"": 1, ""name"": ""david""}],
    pk=""id"",
    not_null=[""name""],
)

assert list(db[""comments""].rows)  # err!
```

The schema is correctly created:

```sql
CREATE TABLE [comments] (
    [id] INTEGER PRIMARY KEY,
    [name] TEXT NOT NULL
)
```

But no rows are created. Removing the `not_null` kwarg works as expected, as does an `insert_all` call.

## Version Info

- Python: `3.11.0`
- sqlite-utils: `3.30`
- sqlite: `3.39.5 2022-10-14`",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/538/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1622640374,I_kwDOCGYnMM5gt4b2,534, ResourceWarning: unclosed file,1244826,closed,0,,,1,2023-03-14T03:02:18Z,2023-05-08T19:56:29Z,2023-05-08T19:56:29Z,NONE,,"Issuing either

```
py -Wdefault -m sqlite_utils insert dogs.db dogs dogs0.csv --csv
[#############-----------------------] 36%
[####################################] 100%C:\Users\Doug\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlite_utils\cli.py:1187: ResourceWarning: unclosed file <_io.TextIOWrapper name='dogs0.csv' encoding='utf-8-sig'>
  insert_upsert_implementation(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
```

or

```
set pythonwarnings=default
sqlite-utils insert dogs.db dogs dogs0.csv --csv
[#############-----------------------] 36%
[####################################] 100%C:\Users\Doug\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlite_utils\cli.py:1187: ResourceWarning: unclosed file <_io.TextIOWrapper name='dogs0.csv' encoding='utf-8-sig'>
  insert_upsert_implementation(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
```

exhibits a ResourceWarning indicating that the CSV file being loaded is not closed.

sqlite-utils --version
sqlite-utils, version 3.30
py --version
Python 3.11.2
Windows Version 10.0.19045 Build 19045
SQLite version 3.41.0 2023-02-21 18:09:37",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/534/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1617823309,I_kwDOJHON9s5gbgZN,8,Increase performance using macnotesapp,41546558,closed,0,,,1,2023-03-09T18:51:05Z,2023-03-14T22:00:22Z,2023-03-14T22:00:21Z,NONE,,"Neat project! You can probably increase performance using my python interface to Notes, [macnotesapp](https://github.com/RhetTbull/macnotesapp), which uses Scripting Bridge and bulk queries for much better performance than AppleScript. Another related project is [PyXA](https://github.com/SKaplanOfficial/PyXA) which uses Scripting Bridge to access Notes (and many other apps) and can return all the notes at once as opposed to calling AppleScript for each note.

macnotesapp allows you to access multiple accounts and folders as well.

```python
from macnotesapp import NotesApp

# NotesApp() provides interface to Notes.app
notesapp = NotesApp()

# Get list of notes (Note objects for each note)
notes = notesapp.notes()
note = notes[0]
print(
    note.id,
    note.account,
    note.folder,
    note.name,
    note.body,
    note.plaintext,
    note.password_protected,
)

print(note.asdict())
```",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/8/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1594383280,I_kwDOBm6k_c5fCFuw,2030,How to use Datasette with apache webserver on GCP?,19700859,closed,0,,,2,2023-02-22T03:08:49Z,2023-02-22T21:54:39Z,2023-02-22T21:54:39Z,NONE,,"Hi Simon and Datasette team - I have installed the apache2 webserver inside a GCP VM using apt.
I can see my ""Hello World"" index.html if I use the external IP of this GCP VM in a browser.

However, when I try to run datasette with different combinations of -h and -p, I am still unable to access the webpage. I cannot install Docker on this VM. Any pointers on using datasette with an already existing apache2 webserver on GCP are appreciated.

Thanks.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2030/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1572766460,I_kwDOCGYnMM5dvoL8,524,Transformation type `--type DATETIME`,21095447,closed,0,,,15,2023-02-06T15:18:42Z,2023-02-15T12:10:54Z,2023-02-15T12:10:54Z,NONE,,"Hey. Currently I do transformations with the type `--type TEXT`, but using the SQLAlchemy-based library [dataset](https://github.com/pudo/dataset) I noticed that reading and writing differ depending on the column types `TEXT` and `DATETIME`.

Is it possible to alter a column type to `DATETIME` somehow using sqlite-utils?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/524/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1579695809,I_kwDOBm6k_c5eKD7B,2023,Error: Invalid setting 'hash_urls' in settings.json in 0.64.1,80409402,closed,0,,,2,2023-02-10T13:35:01Z,2023-02-10T15:40:00Z,2023-02-10T15:39:59Z,NONE,,"On a Debian machine, using datasette 0.64.1 installed with `pip3`, I am getting a `datasette[114272]: Error: Invalid setting 'hash_urls' in settings.json` in `journalctl -xe`. The same settings work on 0.54.1 on another Debian server.

This is my `settings.json`:

```json
{
    ""default_page_size"": 200,
    ""max_returned_rows"": 8000,
    ""num_sql_threads"": 3,
    ""sql_time_limit_ms"": 1000,
    ""default_facet_size"": 30,
    ""facet_time_limit_ms"": 200,
    ""facet_suggest_time_limit_ms"": 50,
    ""hash_urls"": false,
    ""allow_facet"": true,
    ""allow_download"": true,
    ""suggest_facets"": true,
    ""default_cache_ttl"": 5,
    ""default_cache_ttl_hashed"": 31536000,
    ""cache_size_kb"": 0,
    ""allow_csv_stream"": true,
    ""max_csv_mb"": 100,
    ""truncate_cells_html"": 2048,
    ""force_https_urls"": false,
    ""template_debug"": false,
    ""base_url"": ""/pclim/db/""
}
```

This looks ok to me. Would you have any ideas?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2023/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1578609658,I_kwDOBm6k_c5eF6v6,2022,Error 500 - not clear the cause,1667631,closed,0,,,1,2023-02-09T20:57:17Z,2023-02-09T21:13:50Z,2023-02-09T21:13:50Z,NONE,,"On the database that I have sent via LinkedIn, datasette works great, but the following URL gives a 500 error:

http://127.0.0.1:8001/literature/authors_papers?authorId=100550354

The cause of the error is not apparent. Is this expected behaviour?
David",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2022/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1501900064,I_kwDOBm6k_c5ZhS0g,1966,Broken link to live demo in Getting started docs,7551922,closed,0,,,1,2022-12-18T13:17:00Z,2022-12-31T19:15:19Z,2022-12-31T19:15:10Z,NONE,,The link in [Play with a live demo in Getting started](https://github.com/simonw/datasette/blob/main/docs/getting_started.rst#play-with-a-live-demo) to [https://fivethirtyeight.datasettes.com/fivethirtyeight](https://fivethirtyeight.datasettes.com/fivethirtyeight) is broken and the datasette is no longer working (maybe due to the end of the free tier).,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1966/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1000275035,PR_kwDOCGYnMM4r7n-9,327,Extract expand: Support JSON Arrays,101753,closed,0,,,0,2021-09-19T10:34:30Z,2022-12-29T09:05:36Z,2022-12-29T09:05:36Z,NONE,simonw/sqlite-utils/pulls/327,"Hi,

I needed to extract data in JSON Arrays to normalize data imports. I've quickly hacked the following together based on #241, which refers to #239 where you, @simonw, wrote:

> Could this handle lists of objects too? That would be pretty amazing - if the column has a [{...}, {...}] list in it could turn that into a many-to-many.

The way this works in my work is that many-to-many relationships are created for anything that maps to a dictionary in a list, and many-to-one relations for everything else (assumed to be scalar values). Not sure what the best approach here would be? Are many-to-one relationships at all useful here?

What do you think about this approach? I could try to add it to the cli interface and documentation if wanted.

Thanks for this awesome piece of software in any case! :sun_with_face:",140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/327/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
1496652622,I_kwDOBm6k_c5ZNRtO,1955,"invoke_startup() is not run in some conditions, e.g. gunicorn/uvicorn workers, breaking lots of things",32839123,closed,0,,,36,2022-12-14T13:39:56Z,2022-12-19T04:34:16Z,2022-12-18T02:45:18Z,NONE,,"In the past (pre-September 14, #1809) I had a running deployment of Datasette on Azure WebApps by emulating the call in cli.py to Gunicorn: `gunicorn -w 2 -k uvicorn.workers.UvicornWorker app:app`.

My most recent deployment, however, fails loudly by shouting that `Datasette.invoke_startup()` was not called. It does not seem to be possible to call `invoke_startup` when running using a uvicorn command directly like this (I've reproduced this locally using `uvicorn`).

Two candidates that I have tried:

* Uvicorn has a `--factory` option, but the app factory has to be synchronous, so no `await invoke_startup` there
* `asyncio.get_event_loop().run_until_complete` is also not an option because `uvicorn` already has the event loop running.

One additional option is:

* Use Gunicorn's [server hooks](https://docs.gunicorn.org/en/stable/settings.html#server-hooks) to call `invoke_startup`. These are also synchronous, but I might be able to get ahead of the event loop starting here.

In my current deployment setup, it does not appear to be possible to use `datasette serve` directly, so I'm stuck either

* Trying to rework my complete deployment setup, for instance, using Azure functions as described [here](https://github.com/simonw/azure-functions-datasette)
* Or digging into the ASGI spec and writing a wrapper for the sole purpose of launching Datasette using a direct Uvicorn invocation.

Questions for the maintainers:

* Is this intended behaviour/will not support/etc.? If so, I'd be happy to add a PR with a couple lines in the documentation.
* If this is not intended behaviour, what is a good way to fix it?

I could have a go at the ASGI spec thing (I think the Azure Functions thing is related) and provide a PR with the wrapper here, but I'm all ears!

Almost forgot, minimal reproducer:

```python
from datasette import Datasette

ds = Datasette(files=['./global-power-plants.db'])
app = ds.app()
```

Save as app.py in the same folder as global-power-plants.db, and then try running `uvicorn app:app`. Opening the resulting Datasette instance in the browser will show the error message.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1955/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1306984363,I_kwDOBm6k_c5N5v-r,1771,minor a11y: