html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app
https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1590531892,https://api.github.com/repos/simonw/sqlite-utils/issues/557,1590531892,IC_kwDOCGYnMM5ezZc0,7908073,2023-06-14T06:09:21Z,2023-06-14T06:09:21Z,CONTRIBUTOR,"I put together a [simple script](https://github.com/chapmanjacobd/library/blob/42129c5ebe15f9d74653c0f5ca4ed0c991d383e0/xklb/scripts/dedupe_db.py) to upsert and remove duplicate rows based on business keys. If anyone has a similar problem to the above, this might help:

```sql
CREATE TABLE my_table (
    id INTEGER PRIMARY KEY,
    column1 TEXT,
    column2 TEXT,
    column3 TEXT
);

INSERT INTO my_table (column1, column2, column3)
VALUES
    ('Value 1', 'Duplicate 1', 'Duplicate A'),
    ('Value 2', 'Duplicate 2', 'Duplicate B'),
    ('Value 3', 'Duplicate 2', 'Duplicate C'),
    ('Value 4', 'Duplicate 3', 'Duplicate D'),
    ('Value 5', 'Duplicate 3', 'Duplicate E'),
    ('Value 6', 'Duplicate 3', 'Duplicate F');
```

```
library dedupe-db test.db my_table --bk column2
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740150327,
https://github.com/simonw/sqlite-utils/issues/555#issuecomment-1592047502,https://api.github.com/repos/simonw/sqlite-utils/issues/555,1592047502,IC_kwDOCGYnMM5e5LeO,7908073,2023-06-14T22:00:10Z,2023-06-14T22:01:57Z,CONTRIBUTOR,"You may want to try doing a performance comparison between this and just selecting all the ids with few constraints and then doing the filtering within Python. That might seem like a lazy, inefficient way to do it, but queries with large result sets have a different performance profile than what databases like SQLite are designed for. That is not to say that SQLite is slow or that Python is always faster, but when you start reading more than 20% of an index an equilibrium is reached, especially once you add in writing extra temp tables to memory/disk, and especially given the `NOT IN` style of query...

You may also try chunking like this:

```py
from typing import Generator


def chunks(lst, n) -> Generator:
    # yield successive n-sized chunks from lst
    for i in range(0, len(lst), n):
        yield lst[i : i + n]


SQLITE_PARAM_LIMIT = 32765  # keep each query under SQLite's bound-parameter limit

data = []
chunked = chunks(video_ids, SQLITE_PARAM_LIMIT)
for ids in chunked:
    data.extend(
        list(
            db.query(
                ""SELECT * FROM videos WHERE id IN ("" + "","".join([""?""] * len(ids)) + "")"",
                (*ids,),
            )
        )
    )
```

but that actually won't work with your `NOT IN` requirement. You need to query the full result set to check any row.

Since you are doing stuff with files/videos in SQLite, you might be interested in my side project: https://github.com/chapmanjacobd/library","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1733198948,
https://github.com/simonw/sqlite-utils/issues/535#issuecomment-1592052320,https://api.github.com/repos/simonw/sqlite-utils/issues/535,1592052320,IC_kwDOCGYnMM5e5Mpg,7908073,2023-06-14T22:05:28Z,2023-06-14T22:05:28Z,CONTRIBUTOR,Piping to `jq` is usually good enough.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1655860104,
https://github.com/simonw/sqlite-utils/issues/529#issuecomment-1592110694,https://api.github.com/repos/simonw/sqlite-utils/issues/529,1592110694,IC_kwDOCGYnMM5e5a5m,7908073,2023-06-14T23:11:47Z,2023-06-14T23:12:12Z,CONTRIBUTOR,"Sorry, I was wrong.
`sqlite-utils --raw-lines` works correctly:

```
sqlite-utils --raw-lines :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A
test$
line2$

sqlite-utils --csv --no-headers :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A
test$
line2$
```

I think this was fixed somewhat recently.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1581090327,