

issue_comments


22 rows where user = 7908073 sorted by updated_at descending




issue 18

  • `with db:` for transactions 2
  • Ability to merge databases and tables 2
  • [insert_all, upsert_all] IntegrityError: constraint failed 2
  • Aliased ROWID option for tables created from alter=True commands 2
  • "Too many SQL variables" on large inserts 1
  • github-to-sqlite should handle rate limits better 1
  • Execution on Windows 1
  • CLI eats my cursor 1
  • search_sql add include_rank option 1
  • Tiny typographical error in install/uninstall docs 1
  • fix: enable-fts permanently save triggers 1
  • feat: recreate fts triggers after table transform 1
  • conn.execute: UnicodeEncodeError: 'utf-8' codec can't encode character 1
  • Allow surrogates in parameters 1
  • Cannot enable FTS5 despite it being available 1
  • Microsoft line endings 1
  • rows: --transpose or psql extended view-like functionality 1
  • Filter table by a large bunch of ids 1

author_association 2

  • CONTRIBUTOR 20
  • NONE 2

user 1

  • chapmanjacobd · 22
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
1592110694 https://github.com/simonw/sqlite-utils/issues/529#issuecomment-1592110694 https://api.github.com/repos/simonw/sqlite-utils/issues/529 IC_kwDOCGYnMM5e5a5m chapmanjacobd 7908073 2023-06-14T23:11:47Z 2023-06-14T23:12:12Z CONTRIBUTOR

Sorry, I was wrong. sqlite-utils --raw-lines works correctly:

```
sqlite-utils --raw-lines :memory: "SELECT * FROM (VALUES ('test'), ('line2'))" | cat -A
test$
line2$

sqlite-utils --csv --no-headers :memory: "SELECT * FROM (VALUES ('test'), ('line2'))" | cat -A
test$
line2$
```

I think this was fixed somewhat recently

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Microsoft line endings 1581090327  
1264218914 https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1264218914 https://api.github.com/repos/simonw/sqlite-utils/issues/491 IC_kwDOCGYnMM5LWnMi chapmanjacobd 7908073 2022-10-01T03:18:36Z 2023-06-14T22:14:24Z CONTRIBUTOR

> some good concrete use-cases in mind

I actually found myself wanting something like this the past couple days. The use-case was databases with slightly different schema but same table names.

here is a full script:

```
import argparse
from pathlib import Path

from sqlite_utils import Database


def connect(args, conn=None, **kwargs) -> Database:
    db = Database(conn or args.database, **kwargs)
    with db.conn:
        db.conn.execute("PRAGMA main.cache_size = 8000")
    return db


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument("database")
    parser.add_argument("dbs_folder")
    parser.add_argument("--db", "-db", help=argparse.SUPPRESS)
    parser.add_argument("--verbose", "-v", action="count", default=0)
    args = parser.parse_args()

    if args.db:
        args.database = args.db
    Path(args.database).touch()
    args.db = connect(args)

    return args


def merge_db(args, source_db):
    source_db = str(Path(source_db).resolve())

    s_db = connect(argparse.Namespace(database=source_db, verbose=args.verbose))
    for table in s_db.table_names():
        data = s_db[table].rows
        args.db[table].insert_all(data, alter=True, replace=True)

    args.db.conn.commit()


def merge_directory():
    args = parse_args()
    source_dbs = list(Path(args.dbs_folder).glob('*.db'))
    for s_db in source_dbs:
        merge_db(args, s_db)


if __name__ == '__main__':
    merge_directory()
```

edit: I've made some improvements to this and put it on PyPI:

```
$ pip install xklb
$ lb merge-db -h
usage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...]

Merge-DBs will insert new rows from source dbs to target db, table by table. If primary key(s) are provided,
and there is an existing row with the same PK, the default action is to delete the existing row and insert the new row
replacing all existing fields.

Upsert mode will update matching PK rows such that if a source row has a NULL field and
the destination row has a value then the value will be preserved instead of changed to the source row's NULL value.

Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db

Test first by using temp databases as the destination db.
Try out different modes / flags until you are satisfied with the behavior of the program

    library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db

Merge database data and tables

    library merge-dbs --upsert --pk path video.db tv.db movies.db
    library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path audio-fts.db audio.db

    library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db
    library merge-dbs --only-new-rows --pk subreddit,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v

positional arguments:
  database
  source_dbs
```

Also if you want to dedupe a table based on a "business key" which isn't explicitly your primary key(s) you can run this:

```
$ lb dedupe-db -h
usage: library dedupe-dbs DATABASE TABLE --bk BUSINESS_KEYS [--pk PRIMARY_KEYS] [--only-columns COLUMNS]

Dedupe your database (not to be confused with the dedupe subcommand)

It should not need to be said but *backup* your database before trying this tool!

Dedupe-DB will help remove duplicate rows based on non-primary-key business keys

    library dedupe-db ./video.db media --bk path

If --primary-keys is not provided table metadata primary keys will be used
If --only-columns is not provided all non-primary and non-business key columns will be upserted

positional arguments:
  database
  table

options:
  -h, --help            show this help message and exit
  --skip-0
  --only-columns ONLY_COLUMNS
                        Comma separated column names to upsert
  --primary-keys PRIMARY_KEYS, --pk PRIMARY_KEYS
                        Comma separated primary keys
  --business-keys BUSINESS_KEYS, --bk BUSINESS_KEYS
                        Comma separated business keys
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Ability to merge databases and tables 1383646615  
1592052320 https://github.com/simonw/sqlite-utils/issues/535#issuecomment-1592052320 https://api.github.com/repos/simonw/sqlite-utils/issues/535 IC_kwDOCGYnMM5e5Mpg chapmanjacobd 7908073 2023-06-14T22:05:28Z 2023-06-14T22:05:28Z CONTRIBUTOR

piping to jq is good enough usually

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
rows: --transpose or psql extended view-like functionality 1655860104  
1592047502 https://github.com/simonw/sqlite-utils/issues/555#issuecomment-1592047502 https://api.github.com/repos/simonw/sqlite-utils/issues/555 IC_kwDOCGYnMM5e5LeO chapmanjacobd 7908073 2023-06-14T22:00:10Z 2023-06-14T22:01:57Z CONTRIBUTOR

You may want to try doing a performance comparison between this and just selecting all the ids with few constraints and then doing the filtering within python.

That might seem like a lazy, inefficient approach, but queries with large result sets have a different performance profile than what databases like SQLite are designed for. That is not to say that SQLite is slow or that Python is always faster, but once you start reading more than ~20% of an index an equilibrium is reached, especially when you add in writing extra temp tables to memory/disk, and especially given the NOT IN style of query...

You may also try chunking like this:

```py
from typing import Generator

def chunks(lst, n) -> Generator:
    for i in range(0, len(lst), n):
        yield lst[i : i + n]

SQLITE_PARAM_LIMIT = 32765

data = []
chunked = chunks(video_ids, consts.SQLITE_PARAM_LIMIT)
for ids in chunked:
    data.extend(  # note: the original snippet used data.expand(), which is not a list method
        list(
            db.query(
                "SELECT * FROM videos WHERE id IN ("
                + ",".join(["?"] * len(ids))
                + ")",
                (*ids,),
            )
        )
    )
```

But that actually won't work with your NOT IN requirements: you need to query the full result set to check any row.
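To illustrate the first suggestion (select broadly, then filter in Python), here is a minimal sketch; `known_ids`, the `videos` table, and `video.db` are hypothetical stand-ins for your data:

```py
import sqlite3

# hypothetical: the ids you would otherwise put into a NOT IN (...) clause
known_ids = {"abc", "def"}

conn = sqlite3.connect("video.db")
# select with few constraints, then do the NOT IN filtering in Python
rows = conn.execute("SELECT id, path FROM videos")
new_rows = [(vid, path) for vid, path in rows if vid not in known_ids]
```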

Since you are doing stuff with files/videos in SQLITE you might be interested in my side project: https://github.com/chapmanjacobd/library

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Filter table by a large bunch of ids 1733198948  
1590531892 https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1590531892 https://api.github.com/repos/simonw/sqlite-utils/issues/557 IC_kwDOCGYnMM5ezZc0 chapmanjacobd 7908073 2023-06-14T06:09:21Z 2023-06-14T06:09:21Z CONTRIBUTOR

I put together a simple script to upsert and remove duplicate rows based on business keys. If anyone has similar problems to the above, this might help:

```
CREATE TABLE my_table (
    id INTEGER PRIMARY KEY,
    column1 TEXT,
    column2 TEXT,
    column3 TEXT
);

INSERT INTO my_table (column1, column2, column3)
VALUES
    ('Value 1', 'Duplicate 1', 'Duplicate A'),
    ('Value 2', 'Duplicate 2', 'Duplicate B'),
    ('Value 3', 'Duplicate 2', 'Duplicate C'),
    ('Value 4', 'Duplicate 3', 'Duplicate D'),
    ('Value 5', 'Duplicate 3', 'Duplicate E'),
    ('Value 6', 'Duplicate 3', 'Duplicate F');
```

library dedupe-db test.db my_table --bk column2

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Aliased ROWID option for tables created from alter=True commands 1740150327  
1577355134 https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1577355134 https://api.github.com/repos/simonw/sqlite-utils/issues/557 IC_kwDOCGYnMM5eBId- chapmanjacobd 7908073 2023-06-05T19:26:26Z 2023-06-05T19:26:26Z CONTRIBUTOR

This isn't really actionable... I'm just being a whiny baby. I have tasted the milk of being able to use upsert_all, insert_all, etc. without having to write DDL to create tables. The meat of the issue is that SQLite doesn't keep rowid stable between vacuums, so it is not possible to take shortcuts.
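To illustrate (a minimal sketch with a hypothetical throwaway file; per the SQLite docs, VACUUM may change the rowids of tables that lack an explicit INTEGER PRIMARY KEY):

```py
import sqlite3

conn = sqlite3.connect("rowid_demo.db")  # hypothetical throwaway database
conn.execute("CREATE TABLE t (x TEXT)")  # no explicit INTEGER PRIMARY KEY, so rowid is implicit
conn.executemany("INSERT INTO t (x) VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM t WHERE x = 'b'")
conn.commit()
print(conn.execute("SELECT rowid, x FROM t").fetchall())  # [(1, 'a'), (3, 'c')]
conn.execute("VACUUM")
print(conn.execute("SELECT rowid, x FROM t").fetchall())  # rowids may be renumbered, e.g. [(1, 'a'), (2, 'c')]
```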

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Aliased ROWID option for tables created from alter=True commands 1740150327  
1297788531 https://github.com/simonw/sqlite-utils/pull/508#issuecomment-1297788531 https://api.github.com/repos/simonw/sqlite-utils/issues/508 IC_kwDOCGYnMM5NWq5z chapmanjacobd 7908073 2022-10-31T22:54:33Z 2022-11-17T15:11:16Z CONTRIBUTOR

Maybe this is actually a problem in the Python sqlite3 bindings. Given SQLite's stance on this, they should probably use encode('utf-8', 'surrogatepass'). As far as I understand, the error here won't actually be resolved by this PR as-is. We would need to modify the data with surrogateescape... :/ or modify the sqlite3 module to use surrogatepass.
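To illustrate the difference between the two error handlers mentioned (a minimal sketch, not tied to this PR's code):

```py
# a lone surrogate like this is what triggers the UnicodeEncodeError
s = "\udcff"

# s.encode("utf-8")                   # raises UnicodeEncodeError
s.encode("utf-8", "surrogatepass")    # b'\xed\xb3\xbf' -- keeps the surrogate as (invalid) UTF-8
s.encode("utf-8", "surrogateescape")  # b'\xff' -- maps it back to the original undecodable byte
```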

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow surrogates in parameters 1430563092  
1318777114 https://github.com/simonw/sqlite-utils/issues/510#issuecomment-1318777114 https://api.github.com/repos/simonw/sqlite-utils/issues/510 IC_kwDOCGYnMM5OmvEa chapmanjacobd 7908073 2022-11-17T15:09:47Z 2022-11-17T15:09:47Z CONTRIBUTOR

Why close? Is the only problem that the _config table incorrectly says 4 for fts5? If so, that's still something that should be fixed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Cannot enable FTS5 despite it being available 1434911255  
1304320521 https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304320521 https://api.github.com/repos/simonw/sqlite-utils/issues/511 IC_kwDOCGYnMM5NvloJ chapmanjacobd 7908073 2022-11-04T22:54:09Z 2022-11-04T22:59:54Z CONTRIBUTOR

I ran PRAGMA integrity_check and it returned ok. But then I tried restoring from a backup and I didn't get this IntegrityError: constraint failed error, so I think it was just something wrong with my database. If it happens again I will first try to reindex and see if that fixes the issue.
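For reference, a minimal sketch of the two checks mentioned (the database path is a hypothetical example; run it against a copy first):

```py
import sqlite3

conn = sqlite3.connect("video.db")  # hypothetical path to the affected database
print(conn.execute("PRAGMA integrity_check").fetchall())  # [('ok',)] if nothing is wrong
conn.execute("REINDEX")  # rebuild all indexes, in case a corrupt index caused the constraint failure
conn.commit()
```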

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[insert_all, upsert_all] IntegrityError: constraint failed 1436539554  
1304078945 https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304078945 https://api.github.com/repos/simonw/sqlite-utils/issues/511 IC_kwDOCGYnMM5Nuqph chapmanjacobd 7908073 2022-11-04T19:38:36Z 2022-11-04T20:13:17Z CONTRIBUTOR

Even more bizarre, the source db only has one record and the target table has no conflicting record:

    $ sqlite-utils tube_71.db 'select * from media where path = "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"' | jq
    [
      {
        "size": null,
        "time_created": null,
        "play_count": 1,
        "language": null,
        "view_count": null,
        "width": null,
        "height": null,
        "fps": null,
        "average_rating": null,
        "live_status": null,
        "age_limit": null,
        "uploader": null,
        "time_played": 0,
        "path": "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz",
        "id": "088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz/074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv",
        "ie_key": "ArchiveOrg",
        "playlist_path": "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz",
        "duration": 1424.05,
        "tags": null,
        "title": "074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv"
      }
    ]

    $ sqlite-utils video.db 'select * from media where path = "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"' | jq
    []

I've been able to use this code successfully several times before so not sure what's causing the issue.

I guess the way that I'm handling multiple databases is an issue, though it hasn't ever inserted into the source db, not sure what's different. The only reasonable explanation is that it is trying to insert into the source db from the source db for some reason? Or maybe sqlite3 is checking the source db for primary key violation because the table name is the same

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[insert_all, upsert_all] IntegrityError: constraint failed 1436539554  
1303660293 https://github.com/simonw/sqlite-utils/issues/50#issuecomment-1303660293 https://api.github.com/repos/simonw/sqlite-utils/issues/50 IC_kwDOCGYnMM5NtEcF chapmanjacobd 7908073 2022-11-04T14:38:36Z 2022-11-04T14:38:36Z CONTRIBUTOR

Where did you see the limit as 999? I believe the limit has been 32766 for quite some time. If you could detect which one applies, this could speed up batch inserts of some types of data significantly.
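A minimal sketch of detecting the limit at runtime (assumes Python 3.11+, where the sqlite3 module exposes Connection.getlimit):

```py
import sqlite3

conn = sqlite3.connect(":memory:")
limit = conn.getlimit(sqlite3.SQLITE_LIMIT_VARIABLE_NUMBER)
print(limit)  # 999 on older SQLite builds, 32766 on SQLite 3.32.0+
```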

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Too many SQL variables" on large inserts 473083260  
1297859539 https://github.com/simonw/sqlite-utils/issues/507#issuecomment-1297859539 https://api.github.com/repos/simonw/sqlite-utils/issues/507 IC_kwDOCGYnMM5NW8PT chapmanjacobd 7908073 2022-11-01T00:40:16Z 2022-11-01T00:40:16Z CONTRIBUTOR

Ideally people could fix their data if they run into this issue.

If you are using filenames, try convmv:

convmv --preserve-mtimes -f utf8 -t utf8 --notest -i -r .

maybe this script will also help:

```py
import argparse, shutil
from pathlib import Path

import ftfy

from xklb import utils
from xklb.utils import log


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument("paths", nargs='*')
    parser.add_argument("--verbose", "-v", action="count", default=0)
    args = parser.parse_args()

    log.info(utils.dict_filter_bool(args.__dict__))
    return args


def rename_invalid_paths() -> None:
    args = parse_args()

    for path in args.paths:
        log.info(path)
        for p in sorted([str(p) for p in Path(path).rglob("*")], key=len):
            fixed = ftfy.fix_text(p, uncurl_quotes=False).replace("\r\n", "\n").replace("\r", "\n").replace("\n", "")
            if p != fixed:
                try:
                    shutil.move(p, fixed)
                except FileNotFoundError:
                    log.warning("FileNotFound. %s", p)
                else:
                    log.info(fixed)


if __name__ == "__main__":
    rename_invalid_paths()
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
conn.execute: UnicodeEncodeError: 'utf-8' codec can't encode character 1430325103  
1292401308 https://github.com/simonw/sqlite-utils/pull/499#issuecomment-1292401308 https://api.github.com/repos/simonw/sqlite-utils/issues/499 IC_kwDOCGYnMM5NCHqc chapmanjacobd 7908073 2022-10-26T17:54:26Z 2022-10-26T17:54:51Z CONTRIBUTOR

The problem with how it currently works is that the transformed FTS table will return incorrect results (unless the table only had one row or something), even if create_triggers was enabled previously. Maybe the simplest solution is to disable FTS on a transformed table rather than try to recreate it? Thoughts?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
feat: recreate fts triggers after table transform 1405196044  
1279249898 https://github.com/dogsheep/twitter-to-sqlite/issues/60#issuecomment-1279249898 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/60 IC_kwDODEm0Qs5MP83q chapmanjacobd 7908073 2022-10-14T16:58:26Z 2022-10-14T16:58:26Z NONE

You could try using msys2. I've had better luck running python CLIs within that system on Windows.

Here is a guide: https://github.com/chapmanjacobd/lb/blob/main/Windows.md#prep

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Execution on Windows 1063982712  
1279224780 https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-1279224780 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51 IC_kwDODFdgUs5MP2vM chapmanjacobd 7908073 2022-10-14T16:34:07Z 2022-10-14T16:34:07Z NONE

Also, it says that authenticated requests have a much higher "rate limit". Unauthenticated requests only get 60 req/hour?? That seems more like a quota than a "rate limit" (although I guess they're semantically equivalent).

You would want to use x-ratelimit-reset

time.sleep(r['x-ratelimit-reset'] + 1 - time.time())
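A minimal sketch of what waiting on x-ratelimit-reset could look like (a hypothetical helper using requests, not github-to-sqlite's actual code):

```py
import time
import requests

def get_with_rate_limit(url, headers=None):
    r = requests.get(url, headers=headers)
    if r.status_code == 403 and r.headers.get("x-ratelimit-remaining") == "0":
        reset = int(r.headers["x-ratelimit-reset"])  # epoch seconds when the quota resets
        time.sleep(max(reset + 1 - time.time(), 0))
        r = requests.get(url, headers=headers)  # retry once after the window resets
    return r
```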

But a more complete solution would bring authenticated requests to the other subcommands. I'm surprised only github-to-sqlite get is using the --auth= CLI flag

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
github-to-sqlite should handle rate limits better 703246031  
1274153135 https://github.com/simonw/sqlite-utils/pull/498#issuecomment-1274153135 https://api.github.com/repos/simonw/sqlite-utils/issues/498 IC_kwDOCGYnMM5L8giv chapmanjacobd 7908073 2022-10-11T06:34:31Z 2022-10-11T06:34:31Z CONTRIBUTOR

Never mind, it was because I was running db[table].transform. The FTS tables would still be there but the triggers would be dropped.
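For reference, a minimal sketch of how to observe that behaviour (a hypothetical throwaway database; trigger names follow the convention used by sqlite-utils' enable_fts):

```py
from sqlite_utils import Database

db = Database("fts_demo.db")  # hypothetical throwaway database
db["media"].insert({"path": "a.mkv", "title": "A"})
db["media"].enable_fts(["title"], create_triggers=True)

def trigger_names(db):
    return [r[0] for r in db.execute(
        "SELECT name FROM sqlite_master WHERE type = 'trigger'").fetchall()]

print(trigger_names(db))               # the media_ai/media_ad/media_au triggers from enable_fts
db["media"].transform(types={"title": str})
print(trigger_names(db))               # [] -- transform rebuilt the table and the triggers are gone
```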

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
fix: enable-fts permanently save triggers 1404013495  
1264223554 https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223554 https://api.github.com/repos/simonw/sqlite-utils/issues/409 IC_kwDOCGYnMM5LWoVC chapmanjacobd 7908073 2022-10-01T03:42:50Z 2022-10-01T03:42:50Z CONTRIBUTOR

oh weird. it inserts into db2

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
`with db:` for transactions 1149661489  
1264223363 https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223363 https://api.github.com/repos/simonw/sqlite-utils/issues/409 IC_kwDOCGYnMM5LWoSD chapmanjacobd 7908073 2022-10-01T03:41:45Z 2022-10-01T03:41:45Z CONTRIBUTOR

```
pytest xklb/check.py --pdb

xklb/check.py:11: in test_transaction
    assert list(db2["t"].rows) == []
E   AssertionError: assert [{'foo': 1}] == []
E    +  where [{'foo': 1}] = list(<generator object Queryable.rows_where at 0x7f2d84d1f0d0>)
E    +    where <generator object Queryable.rows_where at 0x7f2d84d1f0d0> = <Table t (foo)>.rows

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>
> /home/xk/github/xk/lb/xklb/check.py(11)test_transaction()
      9     with db1.conn:
     10         db1["t"].insert({"foo": 1})
---> 11     assert list(db2["t"].rows) == []
     12     assert list(db2["t"].rows) == [{"foo": 1}]
```

It fails because it is already inserted.

By the way, if you put these two lines in your pyproject.toml you can get ipdb in pytest:

    [tool.pytest.ini_options]
    addopts = "--pdbcls=IPython.terminal.debugger:TerminalPdb --ignore=tests/data --capture=tee-sys --log-cli-level=ERROR"

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
`with db:` for transactions 1149661489  
1264219650 https://github.com/simonw/sqlite-utils/issues/493#issuecomment-1264219650 https://api.github.com/repos/simonw/sqlite-utils/issues/493 IC_kwDOCGYnMM5LWnYC chapmanjacobd 7908073 2022-10-01T03:22:50Z 2022-10-01T03:23:58Z CONTRIBUTOR

this is likely what you are looking for: https://stackoverflow.com/a/51076749/697964

but yeah I would say just disable smart quotes

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Tiny typographical error in install/uninstall docs 1386562662  
1256858763 https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1256858763 https://api.github.com/repos/simonw/sqlite-utils/issues/491 IC_kwDOCGYnMM5K6iSL chapmanjacobd 7908073 2022-09-24T04:50:59Z 2022-09-24T04:52:08Z CONTRIBUTOR

Instead of outputting binary data to stdout, the interface might be better like this:

sqlite-utils merge animals.db cats.db dogs.db

similar to zip, ogr2ogr, etc

Actually I think this might already be possible within ogr2ogr. I don't believe spatial data is a requirement though it might add an ogc_id column or something

    cp cats.db animals.db
    ogr2ogr -append animals.db dogs.db
    ogr2ogr -append animals.db another.db

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Ability to merge databases and tables 1383646615  
1252898131 https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1252898131 https://api.github.com/repos/simonw/sqlite-utils/issues/433 IC_kwDOCGYnMM5KrbVT chapmanjacobd 7908073 2022-09-20T20:51:21Z 2022-09-20T20:56:07Z CONTRIBUTOR

When I run reset it fixes my terminal. I suspect it is related to the progress bar

https://linux.die.net/man/1/reset

    $ echo $TERM
    xterm-kitty
    $ kitty -v
    kitty 0.26.2 created by Kovid Goyal
    $ sqlite-utils insert test.db facility facility-boundary-us-all.csv --csv
    blah blah blah (no offense)
    $ <no cursor>
    $ reset
    $ <cursor lives again (resurrection [explicit])>

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
CLI eats my cursor 1239034903  
1232356302 https://github.com/simonw/sqlite-utils/pull/480#issuecomment-1232356302 https://api.github.com/repos/simonw/sqlite-utils/issues/480 IC_kwDOCGYnMM5JdEPO chapmanjacobd 7908073 2022-08-31T01:51:49Z 2022-08-31T01:51:49Z CONTRIBUTOR

Thanks for pointing me to the right place

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
search_sql add include_rank option 1355433619  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
Powered by Datasette · Queries took 28.354ms · About: github-to-sqlite