html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app
https://github.com/simonw/sqlite-utils/issues/529#issuecomment-1592110694,https://api.github.com/repos/simonw/sqlite-utils/issues/529,1592110694,IC_kwDOCGYnMM5e5a5m,7908073,chapmanjacobd,2023-06-14T23:11:47Z,2023-06-14T23:12:12Z,CONTRIBUTOR,"Sorry, I was wrong. `sqlite-utils --raw-lines` works correctly

```
sqlite-utils --raw-lines :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A
test$
line2$

sqlite-utils --csv --no-headers :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A
test$
line2$
```

I think this was fixed somewhat recently","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1581090327,Microsoft line endings,
https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1264218914,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1264218914,IC_kwDOCGYnMM5LWnMi,7908073,chapmanjacobd,2022-10-01T03:18:36Z,2023-06-14T22:14:24Z,CONTRIBUTOR,"> some good concrete use-cases in mind

I actually found myself wanting something like this the past couple of days. The use-case was databases with slightly different schemas but the same table names. Here is a full script:

```
import argparse
from pathlib import Path

from sqlite_utils import Database


def connect(args, conn=None, **kwargs) -> Database:
    db = Database(conn or args.database, **kwargs)
    with db.conn:
        db.conn.execute(""PRAGMA main.cache_size = 8000"")
    return db


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument(""database"")
    parser.add_argument(""dbs_folder"")
    parser.add_argument(""--db"", ""-db"", help=argparse.SUPPRESS)
    parser.add_argument(""--verbose"", ""-v"", action=""count"", default=0)
    args = parser.parse_args()

    if args.db:
        args.database = args.db

    Path(args.database).touch()
    args.db = connect(args)
    return args


def merge_db(args, source_db):
    source_db = str(Path(source_db).resolve())
    s_db = connect(argparse.Namespace(database=source_db, verbose=args.verbose))
    for table in s_db.table_names():
        data = s_db[table].rows
        args.db[table].insert_all(data, alter=True, replace=True)

    args.db.conn.commit()


def merge_directory():
    args = parse_args()
    source_dbs = list(Path(args.dbs_folder).glob('*.db'))
    for s_db in source_dbs:
        merge_db(args, s_db)


if __name__ == '__main__':
    merge_directory()
```

edit: I've made some improvements to this and put it on PyPI:

```
$ pip install xklb
$ lb merge-db -h
usage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...]

    Merge-DBs will insert new rows from source dbs to target db, table by table.
    If primary key(s) are provided, and there is an existing row with the same PK,
    the default action is to delete the existing row and insert the new row replacing all existing fields.

    Upsert mode will update matching PK rows such that if a source row has a NULL field and
    the destination row has a value then the value will be preserved instead of changed to the source row's NULL value.

    Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db

    Test first by using temp databases as the destination db.
    Try out different modes / flags until you are satisfied with the behavior of the program

        library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db

    Merge database data and tables

        library merge-dbs --upsert --pk path video.db tv.db movies.db
        library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path audio-fts.db audio.db

        library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db
        library merge-dbs --only-new-rows --pk subreddit,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v

positional arguments:
  database
  source_dbs
```

Also if you want to dedupe a table based on a ""business key"" which isn't explicitly your primary key(s), you can run this:

```
$ lb dedupe-db -h
usage: library dedupe-dbs DATABASE TABLE --bk BUSINESS_KEYS [--pk PRIMARY_KEYS] [--only-columns COLUMNS]

    Dedupe your database (not to be confused with the dedupe subcommand)

    It should not need to be said but *backup* your database before trying this tool!

    Dedupe-DB will help remove duplicate rows based on non-primary-key business keys

        library dedupe-db ./video.db media --bk path

    If --primary-keys is not provided table metadata primary keys will be used
    If --only-columns is not provided all non-primary and non-business key columns will be upserted

positional arguments:
  database
  table

options:
  -h, --help            show this help message and exit
  --skip-0
  --only-columns ONLY_COLUMNS
                        Comma separated column names to upsert
  --primary-keys PRIMARY_KEYS, --pk PRIMARY_KEYS
                        Comma separated primary keys
  --business-keys BUSINESS_KEYS, --bk BUSINESS_KEYS
                        Comma separated business keys
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables,
https://github.com/simonw/sqlite-utils/issues/535#issuecomment-1592052320,https://api.github.com/repos/simonw/sqlite-utils/issues/535,1592052320,IC_kwDOCGYnMM5e5Mpg,7908073,chapmanjacobd,2023-06-14T22:05:28Z,2023-06-14T22:05:28Z,CONTRIBUTOR,piping to `jq` is good enough usually,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1655860104,rows: --transpose or psql extended view-like functionality,
https://github.com/simonw/sqlite-utils/issues/555#issuecomment-1592047502,https://api.github.com/repos/simonw/sqlite-utils/issues/555,1592047502,IC_kwDOCGYnMM5e5LeO,7908073,chapmanjacobd,2023-06-14T22:00:10Z,2023-06-14T22:01:57Z,CONTRIBUTOR,"You may want to try doing a performance comparison between this and just selecting all the ids with few constraints and then doing the filtering within python. That might seem like a lazy, inefficient approach, but queries with large result sets are a different profile than what databases like SQLite are designed for. That is not to say that SQLite is slow or that python is always faster, but when you start reading more than 20% of an index an equilibrium is reached, especially once you add in writing extra temp tables to memory/disk. And especially given the `NOT IN` style of query... 
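For example, a minimal sketch of that select-then-filter approach (`video.db`, the `videos` table, and `excluded_ids` are hypothetical stand-ins for this issue's data, not names from the actual report):

```py
from sqlite_utils import Database

db = Database('video.db')
excluded_ids = set()  # hypothetical: the large bunch of ids you would otherwise put in NOT IN (...)

# stream every row once and apply the NOT IN check in python,
# instead of asking SQLite to probe a huge parameter list for each row
rows = [row for row in db.query('SELECT * FROM videos') if row['id'] not in excluded_ids]
```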
You may also try chunking like this:

```py
from typing import Generator


def chunks(lst, n) -> Generator:
    for i in range(0, len(lst), n):
        yield lst[i : i + n]


SQLITE_PARAM_LIMIT = 32765

data = []
chunked = chunks(video_ids, SQLITE_PARAM_LIMIT)
for ids in chunked:
    data.extend(
        list(
            db.query(
                f""""""SELECT * from videos WHERE id in ("""""" + "","".join([""?""] * len(ids)) + "")"",
                (*ids,),
            )
        )
    )
```

but that actually won't work with your `NOT IN` requirements. You need to query the full resultset to check any row.

Since you are doing stuff with files/videos in SQLite you might be interested in my side project: https://github.com/chapmanjacobd/library","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1733198948,Filter table by a large bunch of ids,
https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1590531892,https://api.github.com/repos/simonw/sqlite-utils/issues/557,1590531892,IC_kwDOCGYnMM5ezZc0,7908073,chapmanjacobd,2023-06-14T06:09:21Z,2023-06-14T06:09:21Z,CONTRIBUTOR,"I put together a [simple script](https://github.com/chapmanjacobd/library/blob/42129c5ebe15f9d74653c0f5ca4ed0c991d383e0/xklb/scripts/dedupe_db.py) to upsert and remove duplicate rows based on business keys. If anyone has similar problems with the above, this might help

```
CREATE TABLE my_table (
    id INTEGER PRIMARY KEY,
    column1 TEXT,
    column2 TEXT,
    column3 TEXT
);

INSERT INTO my_table (column1, column2, column3)
VALUES
    ('Value 1', 'Duplicate 1', 'Duplicate A'),
    ('Value 2', 'Duplicate 2', 'Duplicate B'),
    ('Value 3', 'Duplicate 2', 'Duplicate C'),
    ('Value 4', 'Duplicate 3', 'Duplicate D'),
    ('Value 5', 'Duplicate 3', 'Duplicate E'),
    ('Value 6', 'Duplicate 3', 'Duplicate F');
```

```
library dedupe-db test.db my_table --bk column2
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740150327,Aliased ROWID option for tables created from alter=True commands,
https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1577355134,https://api.github.com/repos/simonw/sqlite-utils/issues/557,1577355134,IC_kwDOCGYnMM5eBId-,7908073,chapmanjacobd,2023-06-05T19:26:26Z,2023-06-05T19:26:26Z,CONTRIBUTOR,"this isn't really actionable... I'm just being a whiny baby. I have tasted the milk of being able to use `upsert_all`, `insert_all`, etc. without having to write DDL to create tables. The meat of the issue is that SQLite doesn't make rowid stable between vacuums, so it is not possible to take shortcuts","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740150327,Aliased ROWID option for tables created from alter=True commands,
https://github.com/simonw/sqlite-utils/pull/508#issuecomment-1297788531,https://api.github.com/repos/simonw/sqlite-utils/issues/508,1297788531,IC_kwDOCGYnMM5NWq5z,7908073,chapmanjacobd,2022-10-31T22:54:33Z,2022-11-17T15:11:16Z,CONTRIBUTOR,"Maybe this is actually a problem in the python sqlite bindings. Given [SQLite's stance on this](https://www.sqlite.org/invalidutf.html), they should probably use `encode('utf-8', 'surrogatepass')`. As far as I understand, the error here won't actually be resolved by this PR as-is. We would need to modify the data with `surrogateescape`... 
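For reference, a quick illustrative sketch of the difference between the two error handlers being discussed (the sample string here is made up, not from the PR):

```py
# illustrative only: '\udcff' is a lone surrogate of the kind that triggers this error
s = 'hello \udcff'
s.encode('utf-8', 'surrogatepass')    # b'hello \xed\xb3\xbf' (round-trips the surrogate as bytes)
s.encode('utf-8', 'surrogateescape')  # b'hello \xff' (restores the original invalid byte)
# s.encode('utf-8') raises UnicodeEncodeError: 'utf-8' codec can't encode character '\udcff'
```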
:/ or modify the sqlite3 module to use `surrogatepass`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1430563092,Allow surrogates in parameters,
https://github.com/simonw/sqlite-utils/issues/510#issuecomment-1318777114,https://api.github.com/repos/simonw/sqlite-utils/issues/510,1318777114,IC_kwDOCGYnMM5OmvEa,7908073,chapmanjacobd,2022-11-17T15:09:47Z,2022-11-17T15:09:47Z,CONTRIBUTOR,"Why close? Is the only problem that the _config table incorrectly says 4 for fts5? If so, that's still something that should be fixed","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1434911255,Cannot enable FTS5 despite it being available,
https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304320521,https://api.github.com/repos/simonw/sqlite-utils/issues/511,1304320521,IC_kwDOCGYnMM5NvloJ,7908073,chapmanjacobd,2022-11-04T22:54:09Z,2022-11-04T22:59:54Z,CONTRIBUTOR,I ran `PRAGMA integrity_check` and it returned `ok`. But then I tried restoring from a backup and I didn't get this `IntegrityError: constraint failed` error. So I think it was just something wrong with my database. If it happens again I will first try to reindex and see if that fixes the issue,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1436539554,"[insert_all, upsert_all] IntegrityError: constraint failed",
https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304078945,https://api.github.com/repos/simonw/sqlite-utils/issues/511,1304078945,IC_kwDOCGYnMM5Nuqph,7908073,chapmanjacobd,2022-11-04T19:38:36Z,2022-11-04T20:13:17Z,CONTRIBUTOR,"Even more bizarre, the source db only has one record and the target table has no conflicting record:

```
875 0.3s lb:/ (main|✚2) [0|0]🌺 sqlite-utils tube_71.db 'select * from media where path = ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz""' | jq
[
  {
    ""size"": null,
    ""time_created"": null,
    ""play_count"": 1,
    ""language"": null,
    ""view_count"": null,
    ""width"": null,
    ""height"": null,
    ""fps"": null,
    ""average_rating"": null,
    ""live_status"": null,
    ""age_limit"": null,
    ""uploader"": null,
    ""time_played"": 0,
    ""path"": ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"",
    ""id"": ""088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz/074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv"",
    ""ie_key"": ""ArchiveOrg"",
    ""playlist_path"": ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"",
    ""duration"": 1424.05,
    ""tags"": null,
    ""title"": ""074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv""
  }
]
875 0.3s lb:/ (main|✚2) [0|0]🥧 sqlite-utils video.db 'select * from media where path = ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz""' | jq
[]
```

I've been able to use this code successfully several times before, so I'm not sure what's causing the issue. I guess the way that I'm handling multiple databases is an issue, though it has never inserted into the source db before, so I'm not sure what's different. The only reasonable explanation is that it is trying to insert into the source db from the source db for some reason? 
Or maybe sqlite3 is checking the source db for a primary key violation because the table name is the same","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1436539554,"[insert_all, upsert_all] IntegrityError: constraint failed",
https://github.com/simonw/sqlite-utils/issues/50#issuecomment-1303660293,https://api.github.com/repos/simonw/sqlite-utils/issues/50,1303660293,IC_kwDOCGYnMM5NtEcF,7908073,chapmanjacobd,2022-11-04T14:38:36Z,2022-11-04T14:38:36Z,CONTRIBUTOR,"where did you see the limit as 999? I believe the limit has been 32766 for quite some time. If you could detect which one applies, this could speed up batch inserts of some types of data significantly","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",473083260,"""Too many SQL variables"" on large inserts",
https://github.com/simonw/sqlite-utils/issues/507#issuecomment-1297859539,https://api.github.com/repos/simonw/sqlite-utils/issues/507,1297859539,IC_kwDOCGYnMM5NW8PT,7908073,chapmanjacobd,2022-11-01T00:40:16Z,2022-11-01T00:40:16Z,CONTRIBUTOR,"Ideally people could fix their data if they run into this issue. If you are using filenames, try [convmv](https://linux.die.net/man/1/convmv)

```
convmv --preserve-mtimes -f utf8 -t utf8 --notest -i -r .
```

maybe this script will also help:

```py
import argparse, shutil
from pathlib import Path

import ftfy

from xklb import utils
from xklb.utils import log


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument(""paths"", nargs='*')
    parser.add_argument(""--verbose"", ""-v"", action=""count"", default=0)
    args = parser.parse_args()

    log.info(utils.dict_filter_bool(args.__dict__))
    return args


def rename_invalid_paths() -> None:
    args = parse_args()
    for path in args.paths:
        log.info(path)
        for p in sorted([str(p) for p in Path(path).rglob(""*"")], key=len):
            fixed = ftfy.fix_text(p, uncurl_quotes=False).replace(""\r\n"", ""\n"").replace(""\r"", ""\n"").replace(""\n"", """")
            if p != fixed:
                try:
                    shutil.move(p, fixed)
                except FileNotFoundError:
                    log.warning(""FileNotFound. %s"", p)
                else:
                    log.info(fixed)


if __name__ == ""__main__"":
    rename_invalid_paths()
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1430325103,conn.execute: UnicodeEncodeError: 'utf-8' codec can't encode character,
https://github.com/simonw/sqlite-utils/pull/499#issuecomment-1292401308,https://api.github.com/repos/simonw/sqlite-utils/issues/499,1292401308,IC_kwDOCGYnMM5NCHqc,7908073,chapmanjacobd,2022-10-26T17:54:26Z,2022-10-26T17:54:51Z,CONTRIBUTOR,"The problem with how it currently works is that the transformed fts table _will_ return incorrect results (unless the table only has 1 row or something), even if create_triggers was enabled previously. Maybe the simplest solution is to disable fts on a transformed table rather than try to recreate it? Thoughts?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1405196044,feat: recreate fts triggers after table transform,
https://github.com/dogsheep/twitter-to-sqlite/issues/60#issuecomment-1279249898,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/60,1279249898,IC_kwDODEm0Qs5MP83q,7908073,chapmanjacobd,2022-10-14T16:58:26Z,2022-10-14T16:58:26Z,NONE,"You could try using `msys2`. 
I've had better luck running python CLIs within that system on Windows. Here is a guide: https://github.com/chapmanjacobd/lb/blob/main/Windows.md#prep","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1063982712,Execution on Windows,
https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-1279224780,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51,1279224780,IC_kwDODFdgUs5MP2vM,7908073,chapmanjacobd,2022-10-14T16:34:07Z,2022-10-14T16:34:07Z,NONE,"also, it says that authenticated requests have a much higher ""rate limit"". Unauthenticated requests only get 60 req/hour?? That seems more like a quota than a ""rate limit"" (although I guess they're semantically equivalent)

You would want to use `x-ratelimit-reset`

```
time.sleep(int(r['x-ratelimit-reset']) + 1 - time.time())
```

But a more complete solution would bring authenticated requests to the other subcommands. I'm surprised only `github-to-sqlite get` is using the `--auth=` CLI flag","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",703246031,github-to-sqlite should handle rate limits better,
https://github.com/simonw/sqlite-utils/pull/498#issuecomment-1274153135,https://api.github.com/repos/simonw/sqlite-utils/issues/498,1274153135,IC_kwDOCGYnMM5L8giv,7908073,chapmanjacobd,2022-10-11T06:34:31Z,2022-10-11T06:34:31Z,CONTRIBUTOR,"Nevermind, it was because I was running `db[table].transform`. The fts tables would still be there but the triggers would be dropped","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1404013495,fix: enable-fts permanently save triggers,
https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223554,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223554,IC_kwDOCGYnMM5LWoVC,7908073,chapmanjacobd,2022-10-01T03:42:50Z,2022-10-01T03:42:50Z,CONTRIBUTOR,oh weird. it inserts into db2,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489,`with db:` for transactions,
https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223363,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223363,IC_kwDOCGYnMM5LWoSD,7908073,chapmanjacobd,2022-10-01T03:41:45Z,2022-10-01T03:41:45Z,CONTRIBUTOR,"```
pytest xklb/check.py --pdb

xklb/check.py:11: in test_transaction
    assert list(db2[""t""].rows) == []
E   AssertionError: assert [{'foo': 1}] == []
E    +  where [{'foo': 1}] = list()
E    +    where  = .rows
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /home/xk/github/xk/lb/xklb/check.py(11)test_transaction()
      9     with db1.conn:
     10         db1[""t""].insert({""foo"": 1})
---> 11     assert list(db2[""t""].rows) == []
     12     assert list(db2[""t""].rows) == [{""foo"": 1}]
```

It fails because it is already inserted. 
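For context, a minimal stdlib-only sketch of the autocommit behavior that would explain this, assuming the connections were opened with `isolation_level=None` (an assumption about how these `Database` objects are configured, not something confirmed here):

```py
import sqlite3

# isolation_level=None means autocommit: each statement commits immediately,
# so a second connection sees the row before the `with` block even exits
db1 = sqlite3.connect('t.db', isolation_level=None)
db2 = sqlite3.connect('t.db')

db1.execute('CREATE TABLE IF NOT EXISTS t (foo INTEGER)')
with db1:
    db1.execute('INSERT INTO t VALUES (1)')
    print(db2.execute('SELECT foo FROM t').fetchall())  # already [(1,)]
```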
btw if you put these two lines in your pyproject.toml you can get `ipdb` in pytest

```
[tool.pytest.ini_options]
addopts = ""--pdbcls=IPython.terminal.debugger:TerminalPdb --ignore=tests/data --capture=tee-sys --log-cli-level=ERROR""
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489,`with db:` for transactions,
https://github.com/simonw/sqlite-utils/issues/493#issuecomment-1264219650,https://api.github.com/repos/simonw/sqlite-utils/issues/493,1264219650,IC_kwDOCGYnMM5LWnYC,7908073,chapmanjacobd,2022-10-01T03:22:50Z,2022-10-01T03:23:58Z,CONTRIBUTOR,"this is likely what you are looking for: https://stackoverflow.com/a/51076749/697964

but yeah I would say just disable smart quotes","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1386562662,Tiny typographical error in install/uninstall docs,
https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1256858763,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1256858763,IC_kwDOCGYnMM5K6iSL,7908073,chapmanjacobd,2022-09-24T04:50:59Z,2022-09-24T04:52:08Z,CONTRIBUTOR,"Instead of outputting binary data to stdout, the interface might be better like this

```
sqlite-utils merge animals.db cats.db dogs.db
```

similar to `zip`, `ogr2ogr`, etc

Actually I think this might already be possible within `ogr2ogr`. I don't believe spatial data is a requirement, though it might add an `ogc_id` column or something

```
cp cats.db animals.db
ogr2ogr -append animals.db dogs.db
ogr2ogr -append animals.db another.db
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables,
https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1252898131,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1252898131,IC_kwDOCGYnMM5KrbVT,7908073,chapmanjacobd,2022-09-20T20:51:21Z,2022-09-20T20:56:07Z,CONTRIBUTOR,"When I run `reset` it fixes my terminal. I suspect it is related to the progress bar

https://linux.die.net/man/1/reset

```
950 1s /m/d/03_Downloads 🐑 echo $TERM
xterm-kitty

▓░▒░ /m/d/03_Downloads 🌏 kitty -v
kitty 0.26.2 created by Kovid Goyal

$ sqlite-utils insert test.db facility facility-boundary-us-all.csv --csv
blah blah blah (no offense)
$
$ reset
$
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903,CLI eats my cursor,
https://github.com/simonw/sqlite-utils/pull/480#issuecomment-1232356302,https://api.github.com/repos/simonw/sqlite-utils/issues/480,1232356302,IC_kwDOCGYnMM5JdEPO,7908073,chapmanjacobd,2022-08-31T01:51:49Z,2022-08-31T01:51:49Z,CONTRIBUTOR,Thanks for pointing me to the right place,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1355433619,search_sql add include_rank option,