id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 1181236173,I_kwDOCGYnMM5GaDvN,422,Reconsider not running convert functions against null values,9599,open,0,,,1,2022-03-25T20:22:40Z,2022-03-25T20:23:21Z,,OWNER,,"I just got caught out by the fact that `None` values are not processed by the `.convert()` mechanism https://github.com/simonw/sqlite-utils/blob/0b7b80bd40fe86e4d66a04c9f607d94991c45c0b/sqlite_utils/db.py#L2504-L2510 I had run this code while working on #420 and I wasn't sure why it didn't work: ``` $ sqlite-utils add-column content.db articles score float $ sqlite-utils convert content.db articles score ' import random random.seed(10) def convert(value): global random return random.random() ' ``` The reason it didn't work is that the newly added `score` column was full of `null` values. I fixed it by doing this instead: $ sqlite-utils add-column content.db articles score float --not-null-default 1.0 But this indicates to me that the design of `convert()` here may be incorrect.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/422/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 688351054,MDU6SXNzdWU2ODgzNTEwNTQ=,140,Idea: insert-files mechanism for adding extra columns with fixed values,9599,open,0,,,1,2020-08-28T20:57:36Z,2022-03-20T19:45:45Z,,OWNER,,"Say for example you want to populate a `file_type` column with the value `gif`. That could work like this: ``` sqlite-utils insert-files gifs.db images *.gif \ -c path -c md5 -c last_modified:mtime \ -c file_type:text:gif --pk=path ``` So a column defined as a `text` column with a value that follows a second colon.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/140/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 675753042,MDU6SXNzdWU2NzU3NTMwNDI=,131,sqlite-utils insert: options for column types,9599,open,0,,,5,2020-08-09T18:59:11Z,2022-03-15T13:21:42Z,,OWNER,,"The `insert` command currently results in string types for every column - at least when used against CSV or TSV inputs. It would be useful if you could do the following: - automatically detects the column types based on eg the first 1000 records - explicitly state the rule for specific columns `--detect-types` could work for the former - or it could do that by default and allow opt-out using `--no-detect-types` For specific columns maybe this: sqlite-utils insert db.db images images.tsv \ --tsv \ -c id int \ -c score float",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/131/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1160034488,I_kwDOCGYnMM5FJLi4,411,Support for generated columns,25778,open,0,,,8,2022-03-04T20:41:33Z,2022-03-11T22:32:43Z,,CONTRIBUTOR,,"This is a fairly new feature -- SQLite version 3.31.0 (2020-01-22) -- that I, admittedly, haven't gotten to work yet. But it looks _incredibly_ useful: https://dgl.cx/2020/06/sqlite-json-support I'm not sure if this is an option on `add-column` or a separate command like `add-generated-column`. Either way, it needs an argument to populate it. 
It could be something like this: ```sh sqlite-utils add-column data.db table-name generated --as 'json_extract(data, ""$.field"")' --virtual ``` More here: https://www.sqlite.org/gencol.html",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/411/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1125297737,I_kwDOCGYnMM5DEq5J,402,Advanced class-based `conversions=` mechanism,9599,open,0,,,14,2022-02-06T19:47:41Z,2022-02-16T10:18:55Z,,OWNER,,"The `conversions=` parameter works like this at the moment: https://sqlite-utils.datasette.io/en/3.23/python-api.html#converting-column-values-using-sql-functions ```python db[""places""].insert( {""name"": ""Wales"", ""geometry"": wkt}, conversions={""geometry"": ""GeomFromText(?, 4326)""}, ) ``` This proposal is to support values in that dictionary that are objects, not strings, which can represent more complex conversions - spun out from #399. New proposed mechanism: ```python from sqlite_utils.utils import LongitudeLatitude db[""places""].insert( { ""name"": ""London"", ""point"": (-0.118092, 51.509865) }, conversions={""point"": LongitudeLatitude}, ) ``` Here `LongitudeLatitude` is a magical value which does TWO things: it sets up the `GeomFromText(?, 4326)` SQL function, and it handles converting the `(51.509865, -0.118092)` tuple into a `POINT({} {})` string. This would involve a change to the `conversions=` contract - where it usually expects a SQL string fragment, but it can also take an object which combines that SQL string fragment with a Python conversion function. Best of all... this resolves the `lat, lon` v.s. `lon, lat` dilemma because you can use `from sqlite_utils.utils import LongitudeLatitude` OR `from sqlite_utils.utils import LatitudeLongitude` depending on which you prefer! _Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030739566_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/402/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1072792507,I_kwDOCGYnMM4_8YO7,352,`sqlite-utils insert --extract colname`,9599,open,0,,,4,2021-12-07T00:55:44Z,2022-02-03T22:59:36Z,,OWNER,,"Is there a reason I've not added `--extract` as an option for `sqlite-utils insert` next? 
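A sketch of what the equivalent Python API call looks like today (the table and column names here are invented):

```python
import sqlite_utils

db = sqlite_utils.Database('trees.db')
# pulls the 'species' values out into a separate lookup table and
# replaces the column with an integer foreign key to that table
db['trees'].insert(
    {'name': 'Palm', 'species': 'Washingtonia filifera'},
    extracts=['species'],
)
```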
There's a `extracts=` option for the various `table.insert()` etc methods - last line in this code block: https://github.com/simonw/sqlite-utils/blob/213a0ff177f23a35f3b235386366ff132eb879f1/sqlite_utils/db.py#L2483-L2495",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/352/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1122446693,I_kwDOCGYnMM5C5y1l,394,Test against Python 3.11-dev,9599,open,0,,,1,2022-02-02T22:21:03Z,2022-02-03T21:06:35Z,,OWNER,,"Same as: - https://github.com/simonw/datasette/issues/1621",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/394/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1090798237,I_kwDOCGYnMM5BBEKd,359,Use RETURNING if available to populate last_pk,9599,open,0,,,0,2021-12-29T23:43:23Z,2021-12-29T23:43:23Z,,OWNER,,"Inspired by this: https://news.ycombinator.com/item?id=29729283 > Because SQLite is effectively serializing all the writes for us, we have zero locking in our code. We used to have to lock when inserting new items (to get the LastInsertRowId), but the newer version of SQLite supports the RETURNING keyword, so we don't even have to lock on inserts now.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/359/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 706001517,MDU6SXNzdWU3MDYwMDE1MTc=,163,Idea: conversions= could take Python functions,9599,open,0,,,4,2020-09-22T00:37:12Z,2021-12-20T00:56:52Z,,OWNER,,"Right now you use `conversions=` like this: ```python db[""example""].insert({ ""name"": ""The Bigfoot Discovery Museum"" }, conversions={""name"": ""upper(?)""}) ``` How about if you could optionally provide a Python function (or a lambda) like this? ```python db[""example""].insert({ ""name"": ""The Bigfoot Discovery Museum"" }, conversions={""name"": lambda s: s.upper()}) ``` This would work by creating a random name for that function, registering it (similar to #162), executing the SQL and then un-registering the custom function at the end.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/163/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1066603133,PR_kwDOCGYnMM4vKAzW,347,Test against pysqlite3 running SQLite 3.37,9599,open,0,,,9,2021-11-29T23:17:57Z,2021-12-11T01:02:19Z,,OWNER,simonw/sqlite-utils/pulls/347,Refs #346 and #344.,140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/347/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1071531082,I_kwDOCGYnMM4_3kRK,349,A way of creating indexes on newly created tables,9599,open,0,,,3,2021-12-05T18:56:12Z,2021-12-07T01:04:37Z,,OWNER,,"I'm writing code for https://github.com/simonw/git-history/issues/33 that creates a table inside a loop: ```python item_pk = db[item_table].lookup( {""_item_id"": item_id}, item_to_insert, column_order=(""_id"", ""_item_id""), pk=""_id"", ) ``` I need to look things up by `_item_id` on this table, which means I need an index on that column (the table can get very big). 
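I can work around it by hand - check `table.exists()` before the first call and create the index immediately afterwards (a sketch reusing the hypothetical variables from the snippet above):

```python
table = db[item_table]
first_time = not table.exists()
item_pk = table.lookup(
    {'_item_id': item_id},
    item_to_insert,
    column_order=('_id', '_item_id'),
    pk='_id',
)
if first_time:
    # the lookup() above just created the table, so index it exactly once
    table.create_index(['_item_id'])
```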
But there's no mechanism in SQLite utils to detect if the table was created for the first time and add an index to it. And I don't want to run `CREATE INDEX IF NOT EXISTS` every time through the loop. This should work like the `foreign_keys=` mechanism. ",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/349/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1072435124,I_kwDOCGYnMM4_7A-0,350,Optional caching mechanism for table.lookup(),9599,open,0,,,3,2021-12-06T17:54:25Z,2021-12-06T17:56:57Z,,OWNER,,"Inspired by work on `git-history` where I used this pattern: ```python column_name_to_id = {} def column_id(column): if column not in column_name_to_id: id = db[""columns""].lookup( {""namespace"": namespace_id, ""name"": column}, foreign_keys=((""namespace"", ""namespaces"", ""id""),), ) column_name_to_id[column] = id return column_name_to_id[column] ``` If you're going to be doing a large number of `table.lookup(...)` calls and you know that no other script will be modifying the database at the same time you can presumably get a big speedup using a Python in-memory cache - maybe even a LRU one to avoid memory bloat.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/350/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1066563554,I_kwDOCGYnMM4_knfi,346,Way to test SQLite 3.37 (and potentially other versions) in CI,9599,open,0,,,5,2021-11-29T22:21:06Z,2021-11-29T23:12:49Z,,OWNER,,"> Need to figure out a good pattern for testing this in CI too - it will currently skip the new tests if it doesn't have SQLite 3.37 or higher. _Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/344#issuecomment-982076924_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/346/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 836829560,MDU6SXNzdWU4MzY4Mjk1NjA=,248,support for Apache Arrow / parquet files I/O,649467,open,0,,,1,2021-03-20T14:59:30Z,2021-10-28T23:46:48Z,,NONE,,"I just started looking at Apache Arrow using pyarrow for import and export of tabular datasets, and it looks quite compelling. It might be worth looking at for sqlite-utils and/or datasette. As a test, I took a random jsonl data dump of a dataset I have with floats, strings, and ints and converted it to arrow's parquet format using the naive `pyarrow.parquet.write_file()` command, which has automatic type inferrence. It compressed down to 7% of the original size. Conversion of a 26MB JSON file and serializing it to parquet was eyeblink instantaneous. Parquet files are portable and can be directly imported into pandas and other analytics software. The only hangup is the automatic type inference of the naive reader. It's great for general laziness and for parsing JSON columns (it correctly interpreted a table of mine with a JSON array). However, I did get an exception for a string column where most entries looked integer-like but had a couple values that weren't -- the reader tried to coerce all of them for some reason, even though the JSON type is string. Since the writer optionally takes a schema, it shouldn't be too hard to grab the sqlite header types. 
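To sketch what that could look like, hand-rolled with pyarrow (the table and column names are invented, and this is not something sqlite-utils does today):

```python
import sqlite3
import pyarrow as pa
import pyarrow.parquet as pq

conn = sqlite3.connect('content.db')
conn.row_factory = sqlite3.Row
rows = [dict(row) for row in conn.execute('select id, name, score from articles')]

# explicit schema mirroring the sqlite column types, so the writer
# never has to guess and mis-coerce string columns
schema = pa.schema([('id', pa.int64()), ('name', pa.string()), ('score', pa.float64())])
pq.write_table(pa.Table.from_pylist(rows, schema=schema), 'articles.parquet')
```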
With some additional hinting, you might get datetime columns and JSON, which are native Arrow types. Somewhat tangentially, someone even wrote an sqlite vfs extension for Parquet: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html ",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/248/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 817989436,MDU6SXNzdWU4MTc5ODk0MzY=,242,Async support,25778,open,0,,,13,2021-02-27T18:29:38Z,2021-10-28T14:37:56Z,,CONTRIBUTOR,,"Following our conversation last week, want to note this here before I forget. I've had a couple situations where I'd like to do a bunch of updates in an async event loop, but I run into SQLite's issues with concurrent writes. This feels like something sqlite-utils could help with. PeeWee ORM has a [SQLite write queue](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq) that might be a good model. It's using threads or gevent, but I _think_ that approach would translate well enough to asyncio. Happy to help with this, too.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/242/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 974067156,MDU6SXNzdWU5NzQwNjcxNTY=,318,Research: handle gzipped CSV directly,9599,open,0,,,2,2021-08-18T21:23:04Z,2021-08-18T21:25:30Z,,OWNER,,"Would it be worthwhile for the `sqlite-utils` command-line tool to grow features to efficiently directly interact with gzipped CSV data? Maybe add `--gz` options to both `insert` and to the various commands that output query results.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/318/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 722816436,MDU6SXNzdWU3MjI4MTY0MzY=,186,.extract() shouldn't extract null values,9599,open,0,,,7,2020-10-16T02:41:08Z,2021-08-12T12:32:14Z,,OWNER,,"This almost works, but it creates a rogue `type` record with a value of None. ``` In [1]: import sqlite_utils In [2]: db = sqlite_utils.Database(memory=True) In [5]: db[""creatures""].insert_all([ {""id"": 1, ""name"": ""Simon"", ""type"": None}, {""id"": 2, ""name"": ""Natalie"", ""type"": None}, {""id"": 3, ""name"": ""Cleo"", ""type"": ""dog""}], pk=""id"") Out[5]: In [7]: db[""creatures""].extract(""type"") Out[7]:
<Table creatures (id, name, type_id)>
In [8]: list(db[""creatures""].rows) Out[8]: [{'id': 1, 'name': 'Simon', 'type_id': None}, {'id': 2, 'name': 'Natalie', 'type_id': None}, {'id': 3, 'name': 'Cleo', 'type_id': 2}] In [9]: db[""type""] Out[9]: <Table type (id, type)>
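# the rogue row shows up when listing the lookup table: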
In [10]: list(db[""type""].rows) Out[10]: [{'id': 1, 'type': None}, {'id': 2, 'type': 'dog'}] ```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/186/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 961008507,MDU6SXNzdWU5NjEwMDg1MDc=,308,Add an interactive tutorial as a Jupyter notebook,9599,open,0,,,2,2021-08-04T20:34:22Z,2021-08-04T21:30:59Z,,OWNER,,Can show people how to open this up in Binder.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/308/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 915421499,MDU6SXNzdWU5MTU0MjE0OTk=,267,row.update() or row.pk,12721157,open,0,,,4,2021-06-08T19:56:00Z,2021-06-22T17:27:27Z,,NONE,,"Hi, fantastic framework for working with Sqlite3 databases!!! I tried to update spezific rows in a table and used for row in db[tablename]: newValue = row[""counter""] * row[""prize""] row.update({""Fieldname"": newValue}) print(row) This updates the value in the printet row, but not in the database. So I switched to db[tablename].update(id, {""Filedname"": newValue}) This works fine. But row.update would be nicer, because no need for the id (its that row), no need for the tablename and the db (all defined in the for row ... loop). Thx ",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/267/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 818684978,MDU6SXNzdWU4MTg2ODQ5Nzg=,243,How can i use this utils to deal with fts on column meta of tables ?,27874014,open,0,,,0,2021-03-01T09:45:05Z,2021-03-01T09:45:05Z,,NONE,,"Thank you to release this bravo project. When i use this project on multi table db, I want to implement convenient search on column name from different tables. I want to develop a meta table to save the meta data of different columns of different tables and search on this meta table to get rows from the data table (which the meta table describes) does this project provide some simple function on it ? You can think a have a knowledge graph about the table in the db, and i save this knowledge graph into the db with fts enabled.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/243/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 816601354,MDExOlB1bGxSZXF1ZXN0NTgwMjM1NDI3,241,Extract expand - work in progress,9599,open,0,,,0,2021-02-25T16:36:38Z,2021-02-25T16:36:38Z,,OWNER,simonw/sqlite-utils/pulls/241,Refs #239. Still needs documentation and CLI implementation.,140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/241/reactions"", ""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1, 688670158,MDU6SXNzdWU2ODg2NzAxNTg=,147,SQLITE_MAX_VARS maybe hard-coded too low,96218,open,0,,,7,2020-08-30T07:26:45Z,2021-02-15T21:27:55Z,,CONTRIBUTOR,,"I came across this while about to open an issue and PR against the documentation for `batch_size`, which is a bit incomplete. As mentioned in #145, while: > [`SQLITE_MAX_VARIABLE_NUMBER`](https://www.sqlite.org/limits.html#max_variable_number) ... 
defaults to 999 for SQLite versions prior to 3.32.0 (2020-05-22) or 32766 for SQLite versions after 3.32.0. it is common that it is increased at compile time. Debian-based systems, for example, seem to ship with a version of sqlite compiled with SQLITE_MAX_VARIABLE_NUMBER set to 250,000, and I believe this is the case for homebrew installations too. In working to understand what `batch_size` was actually doing and why, I realized that by setting `SQLITE_MAX_VARS` in `db.py` to match the value my sqlite was compiled with (I'm on Debian), I was able to decrease the time to `insert_all()` my test data set (~128k records across 7 tables) from ~26.5s to ~3.5s. Given that this about .05% of my total dataset, this is time I am keen to save... Unfortunately, it seems that `sqlite3` in the python standard library doesn't expose the `get_limit()` C API (even though `pysqlite` used to), so it's hard to know what value sqlite has been compiled with (note that this could mean, I suppose, that it's less than 999, and even hardcoding `SQLITE_MAX_VARS` to the conservative default might not be adequate. It can also be lowered -- but not raised -- at runtime). The best I could come up with is `echo """" | sqlite3 -cmd "".limits variable_number""` (only available in `sqlite >= 2015-05-07 (3.8.10)`). Obviously this couldn't be relied upon in `sqlite_utils`, but I wonder what your opinion would be about exposing `SQLITE_MAX_VARS` as a user-configurable parameter (with suitable ""here be dragons"" warnings)? I'm going to go ahead and monkey-patch it for my purposes in any event, but it seems like it might be worth considering.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/147/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 472115381,MDU6SXNzdWU0NzIxMTUzODE=,49,extracts= should support multiple-column extracts,9599,open,0,,,10,2019-07-24T07:06:41Z,2020-10-16T19:18:19Z,,OWNER,,"Lookup tables can be constructed on compound columns, but the `extracts=` option doesn't currently support that. Right now extracts can be defined in two ways: ```python # Extract these columns into tables with the same name: dogs = db.table(""dogs"", extracts=[""breed"", ""most_recent_trophy""]) # Same as above but with custom table names: dogs = db.table(""dogs"", extracts={""breed"": ""Breeds"", ""most_recent_trophy"": ""Trophies""}) ``` Need some kind of syntax for much more complicated extractions, like when two columns (say ""source"" and ""source_version"") are extracted into a single table.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/49/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 573578548,MDU6SXNzdWU1NzM1Nzg1NDg=,89,Ability to customize columns used by extracts= feature,9599,open,0,,,3,2020-03-01T16:54:48Z,2020-10-16T19:17:50Z,,OWNER,,"@simonw any thoughts on allow extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the ""value"" column (the documentation uses name), but when you use `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named ""value"". 
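(For contrast, a direct `.lookup()` call does let you name the column - a quick sketch, assuming a `Database` instance `db`:

```python
species_id = db['Species'].lookup({'name': 'Quercus agrifolia'})
```

It's specifically the `extracts=` code path that hard-codes ""value"".)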
I have an existing lookup table that I've populated with columns ""id"" and ""name"" as opposed to ""id"" and ""value"", and it seems I can't use `extracts=`, unless I'm missing something... Initial thought on how to do this would be to allow the dictionary value to be a (table name, column name) tuple... so: ``` table = db.table(""trees"", extracts={""species_id"": (""Species"", ""name"")}) ``` I haven't dug too much into the existing code yet, but does this make sense? Worth doing? _Originally posted by @chrishas35 in https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/89/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 581795570,MDU6SXNzdWU1ODE3OTU1NzA=,93,Support more string values for types in .add_column(),9599,open,0,,,0,2020-03-15T19:32:49Z,2020-09-24T20:36:46Z,,OWNER,,"https://sqlite-utils.readthedocs.io/en/2.4.2/python-api.html#adding-columns says: > SQLite types you can specify are ""TEXT"", ""INTEGER"", ""FLOAT"" or ""BLOB"". As discovered in #92 this isn't the right list of values. I should expand this to match https://www.sqlite.org/datatype3.html",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/93/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 652961907,MDU6SXNzdWU2NTI5NjE5MDc=,121,Improved (and better documented) support for transactions,9599,open,0,,,3,2020-07-08T04:56:51Z,2020-09-24T20:36:46Z,,OWNER,,"_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655283393_ We should put some thought into how this library supports and encourages smart use of transactions.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/121/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 688352145,MDU6SXNzdWU2ODgzNTIxNDU=,141,insert-files support for compressed values,9599,open,0,,,0,2020-08-28T20:59:46Z,2020-09-24T20:36:08Z,,OWNER,,"The `sqlar` format supports this; it would be useful if `insert-files` could support it too. 
https://www.sqlite.org/sqlar.html",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/141/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 695441530,MDU6SXNzdWU2OTU0NDE1MzA=,154,OperationalError: cannot change into wal mode from within a transaction,9599,open,0,,,2,2020-09-07T23:42:44Z,2020-09-07T23:47:10Z,,OWNER,,"I'm getting this error when running: sqlite-utils enable-wal beta.db `OperationalError: cannot change into wal mode from within a transaction` I'm worried that maybe that's because of this new code from #152: https://github.com/simonw/sqlite-utils/blob/deb2eb013ff85bbc828ebc244a9654f0d9c3139e/sqlite_utils/db.py#L128-L129",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/154/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 644161221,MDU6SXNzdWU2NDQxNjEyMjE=,117,Support for compound (composite) foreign keys,9599,open,0,,,3,2020-06-23T21:33:42Z,2020-06-23T21:40:31Z,,OWNER,,"It turns out SQLite supports composite foreign keys: https://www.sqlite.org/foreignkeys.html#fk_composite Their example looks like this: ```sql CREATE TABLE album( albumartist TEXT, albumname TEXT, albumcover BINARY, PRIMARY KEY(albumartist, albumname) ); CREATE TABLE song( songid INTEGER, songartist TEXT, songalbum TEXT, songname TEXT, FOREIGN KEY(songartist, songalbum) REFERENCES album(albumartist, albumname) ); ``` Here's what that looks like in sqlite-utils: ``` In [1]: import sqlite_utils In [2]: import sqlite3 In [3]: conn = sqlite3.connect("":memory:"") In [4]: conn Out[4]: In [5]: conn.executescript("""""" ...: CREATE TABLE album( ...: albumartist TEXT, ...: albumname TEXT, ...: albumcover BINARY, ...: PRIMARY KEY(albumartist, albumname) ...: ); ...: ...: CREATE TABLE song( ...: songid INTEGER, ...: songartist TEXT, ...: songalbum TEXT, ...: songname TEXT, ...: FOREIGN KEY(songartist, songalbum) REFERENCES album(albumartist, albumname) ...: ); ...: """""") Out[5]: In [6]: db = sqlite_utils.Database(conn) In [7]: db.tables Out[7]: [
<Table album (albumartist, albumname, albumcover)>,
 <Table song (songid, songartist, songalbum, songname)>
] In [8]: db.tables[0].foreign_keys Out[8]: [] In [9]: db.tables[1].foreign_keys Out[9]: [ForeignKey(table='song', column='songartist', other_table='album', other_column='albumartist'), ForeignKey(table='song', column='songalbum', other_table='album', other_column='albumname')] ``` The table appears to have two separate foreign keys, when actually it has a single compound composite foreign key.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/117/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 539204432,MDU6SXNzdWU1MzkyMDQ0MzI=,70,Implement ON DELETE and ON UPDATE actions for foreign keys,26292069,open,0,,,2,2019-12-17T17:19:10Z,2020-02-27T04:18:53Z,,NONE,,"Hi! I did not find any mention on the library about ON DELETE and ON UPDATE actions for foreign keys. Are those expected to be implemented? If not, it would be a nice thing to include!",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/70/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 546073980,MDU6SXNzdWU1NDYwNzM5ODA=,74,Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column,15092,open,0,,,3,2020-01-07T04:35:50Z,2020-01-12T07:21:17Z,,CONTRIBUTOR,,"openSUSE 15.1 is using python 3.6.5 and click-7.0 , however it has test failures while openSUSE Tumbleweed on py37 passes. Most fail on the cli exit code like ```py [ 74s] =================================== FAILURES =================================== [ 74s] _________________________________ test_tables __________________________________ [ 74s] [ 74s] db_path = '/tmp/pytest-of-abuild/pytest-0/test_tables0/test.db' [ 74s] [ 74s] def test_tables(db_path): [ 74s] result = CliRunner().invoke(cli.cli, [""tables"", db_path]) [ 74s] > assert '[{""table"": ""Gosh""},\n {""table"": ""Gosh2""}]' == result.output.strip() [ 74s] E assert '[{""table"": ""...e"": ""Gosh2""}]' == '' [ 74s] E - [{""table"": ""Gosh""}, [ 74s] E - {""table"": ""Gosh2""}] [ 74s] [ 74s] tests/test_cli.py:28: AssertionError ``` packaging project at https://build.opensuse.org/package/show/home:jayvdb:py-new/python-sqlite-utils I'll keep digging into this after I have github-to-sqlite working on Tumbleweed, as I'll need openSUSE Leap 15.1 working before I can submit this into the main python repo.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/74/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,