id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason
403625674,MDU6SXNzdWU0MDM2MjU2NzQ=,7,.insert_all() should accept a generator and process it efficiently,9599,closed,0,,,3,2019-01-28T02:11:58Z,2019-01-28T06:26:53Z,2019-01-28T06:26:53Z,OWNER,,"Right now you have to load every record into memory before passing the list to `.insert_all()` and friends.

If you want to process millions of rows, this is inefficient. Python has generators - we should use them!

The only catch here is that part of the magic of `sqlite-utils` is that it guesses the column types and creates the table for you. This code will need to be updated to notice if the table needs creating and, if it does, create it using the first X records (where X defaults to 1,000 but can be customized).

If a record outside of those first 1,000 has a rogue column, we can crash with an error.
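
One way to implement that (just a sketch - the `peek` helper below is made up, not part of the library) is to pull the first X records off the generator for type detection, then chain them back on so the full stream still gets inserted:

```python
import itertools

def peek(records, n=1000):
    # Illustrative helper, not part of sqlite-utils: read the first n records for
    # column type guessing, then reattach them so the rest of the stream is kept
    iterator = iter(records)
    first = list(itertools.islice(iterator, n))
    return first, itertools.chain(first, iterator)
```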

This will free us up to make the `--nl` option added in #6 much more efficient.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/7/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
787900412,MDU6SXNzdWU3ODc5MDA0MTI=,222,.m2m() should accept alter=True parameter,9599,closed,0,,,0,2021-01-18T04:15:43Z,2021-01-18T04:26:10Z,2021-01-18T04:26:10Z,OWNER,,Needed by https://github.com/dogsheep/swarm-to-sqlite/issues/11,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/222/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
738128913,MDU6SXNzdWU3MzgxMjg5MTM=,201,.search(columns=) and sqlite-utils search -c ... bug,9599,closed,0,,6079500,1,2020-11-07T01:27:26Z,2020-11-08T16:54:15Z,2020-11-08T16:54:15Z,OWNER,,"Both `table.search(columns=)` and the `sqlite-utils search -c` option do not work as expected - they always return both the `rowid` and the `rank` columns even if those have not been requested.

This should be fixed before the 3.0 non-alpha release.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/201/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1856075668,I_kwDOCGYnMM5uoXeU,586,.transform() fails to drop column if table is part of a view,9599,open,0,,,3,2023-08-18T05:25:22Z,2023-08-18T06:13:47Z,,OWNER,,"I got this error trying to drop a column from a table that was part of a SQL view:

> error in view plugins: no such table: main.pypi_releases

Upon further investigation I found that this pattern seemed to fix it:
```python
import sqlite_utils

# table, types, rename, drop and order_pairs are defined in the enclosing scope
def transform_the_table(conn):
    # Run this in a transaction:
    with conn:
        # We have to read all the views first, because we need to drop and recreate them
        db = sqlite_utils.Database(conn)
        views = {v.name: v.schema for v in db.views if table.lower() in v.schema.lower()}
        for view in views.keys():
            db[view].drop()
        db[table].transform(
            types=types,
            rename=rename,
            drop=drop,
            column_order=[p[0] for p in order_pairs],
        )
        # Now recreate the views
        for name, schema in views.items():
            db.create_view(name, schema)
```
So: grab a copy of any view that might reference this table, start a transaction, drop those views, run the transform, then recreate the views.

> I wonder if this should become an option in `sqlite-utils`? Maybe a `recreate_views=True` argument for `table.transform(...)`? Should it be opt-in or opt-out?

_Originally posted by @simonw in https://github.com/simonw/datasette-edit-schema/issues/35#issuecomment-1683370548_
            ",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/586/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
925320167,MDU6SXNzdWU5MjUzMjAxNjc=,284,.transform(types=) turns rowid into a concrete column,9599,closed,0,,,5,2021-06-19T05:25:27Z,2021-06-19T15:28:30Z,2021-06-19T15:28:30Z,OWNER,,"Noticed this in the tests for `sqlite-utils memory` in #282 - is it possible to fix this?

https://github.com/simonw/sqlite-utils/commit/ec5174ed40fa283cb06f25ee0c0136297ec313ae",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/284/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
561460274,MDU6SXNzdWU1NjE0NjAyNzQ=,84,.upsert() with hash_id throws error,9599,closed,0,,,0,2020-02-07T07:08:19Z,2020-02-07T07:17:11Z,2020-02-07T07:17:11Z,OWNER,,"```python
db[table_name].upsert_all(rows, hash_id=""pk"")
```
This throws an error: `PrimaryKeyRequired('upsert() requires a pk')`

The problem is, if you try this:

```python
db[table_name].upsert_all(rows, hash_id=""pk"", pk=""pk"")
```
You get this error: `AssertionError('Use either pk= or hash_id=')`

`hash_id=` should imply that `pk=` is that column.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/84/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
598640234,MDU6SXNzdWU1OTg2NDAyMzQ=,99,.upsert_all() should maybe error if dictionaries passed to it do not have the same keys,9599,closed,0,,,2,2020-04-13T03:02:25Z,2020-04-13T03:05:20Z,2020-04-13T03:05:04Z,OWNER,,"While investigating #98 I stumbled across this:
```
    def test_upsert_compound_primary_key(fresh_db):
        table = fresh_db[""table""]
        table.upsert_all(
            [
                {""species"": ""dog"", ""id"": 1, ""name"": ""Cleo"", ""age"": 4},
                {""species"": ""cat"", ""id"": 1, ""name"": ""Catbag""},
            ],
            pk=(""species"", ""id""),
        )
        table.upsert_all(
            [
                {""species"": ""dog"", ""id"": 1, ""age"": 5},
                {""species"": ""dog"", ""id"": 2, ""name"": ""New Dog"", ""age"": 1},
            ],
            pk=(""species"", ""id""),
        )
>       assert [
            {""species"": ""dog"", ""id"": 1, ""name"": ""Cleo"", ""age"": 5},
            {""species"": ""cat"", ""id"": 1, ""name"": ""Catbag"", ""age"": None},
            {""species"": ""dog"", ""id"": 2, ""name"": ""New Dog"", ""age"": 1},
        ] == list(table.rows)
E       AssertionError: assert [{'age': 5, '...cies': 'dog'}] == [{'age': 5, '...cies': 'dog'}]
E         At index 0 diff: {'species': 'dog', 'id': 1, 'name': 'Cleo', 'age': 5} != {'species': 'dog', 'id': 1, 'name': None, 'age': 5}
E         Full diff:
E         - [{'age': 5, 'id': 1, 'name': 'Cleo', 'species': 'dog'},
E         ?                              ^^^ --
E         + [{'age': 5, 'id': 1, 'name': None, 'species': 'dog'},
E         ?                              ^^^
E         {'age': None, 'id': 1, 'name': 'Catbag', 'species': 'cat'},
E         {'age': 1, 'id': 2, 'name': 'New Dog', 'species': 'dog'}]
```
If you run `.upsert_all()` with dictionaries that do not all share the same keys, it doesn't quite have the effect you might expect - the keys missing from a record end up overwriting the existing values with `None`.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/99/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
735650864,MDU6SXNzdWU3MzU2NTA4NjQ=,194,3.0 release with some minor breaking changes,9599,closed,0,,6079500,3,2020-11-03T21:36:31Z,2020-11-08T17:19:35Z,2020-11-08T17:19:34Z,OWNER,,"While working on search (#192) I've spotted a few small changes I would like to make that would break backwards compatibility in minor ways, hence requiring a 3.x release.

`db[table].search()` - I would like this to default to sorting by rank

I'd also like to free up the `-c` and `-f` options, currently used for the standard output formats here, so they can be used for other purposes:

https://github.com/simonw/sqlite-utils/blob/43eae8b193d362f2b292df73e087ed6f10838144/sqlite_utils/cli.py#L48-L58

I'd like `-f` to be used to indicate a full-text search column during an insert and `-c` to indicate a column (so you can specify which columns you want to output).",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/194/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
802583450,MDU6SXNzdWU4MDI1ODM0NTA=,226,3.4 release is broken - includes a rogue line,9599,closed,0,,,0,2021-02-06T02:08:01Z,2021-02-06T02:10:26Z,2021-02-06T02:10:26Z,OWNER,,"I started seeing weird errors, caused by this line: https://github.com/simonw/sqlite-utils/blob/f8010ca78fed8c5fca6cde19658ec09fdd468420/sqlite_utils/cli.py#L1-L3

That was added by accident in 1b666f9315d4ea6bb332b2e75e48480c26100199

I'm surprised the tests didn't catch this!",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/226/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
737855731,MDU6SXNzdWU3Mzc4NTU3MzE=,199,"@db.register_function(..., replace=False) to avoid double-registering custom functions",9599,closed,0,,,1,2020-11-06T15:39:21Z,2020-11-06T18:30:44Z,2020-11-06T18:30:44Z,OWNER,,"I'd like a mechanism to optionally avoid registering a custom function if it has already been registered.

SQLite doesn't seem to offer a way to introspect registered custom functions so I'll need to track what has already been registered in `sqlite-utils` instead.
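
Roughly what I have in mind for that tracking (a standalone sketch, not the eventual `register_function` implementation):

```python
_registered = set()

def register_function(db, fn, replace=False):
    # Sketch only: track (name, arity) pairs ourselves, since SQLite
    # can't be asked which custom functions are already registered
    key = (fn.__name__, fn.__code__.co_argcount)
    if key in _registered and not replace:
        return fn
    db.conn.create_function(fn.__name__, fn.__code__.co_argcount, fn)
    _registered.add(key)
    return fn
```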

> Should I register the custom `rank_bm25` SQLite function for every connection, or should I register it against the connection just the first time the user attempts an FTS4 search? I think I'd rather register it only if it is needed.

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/198#issuecomment-723145383_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/199/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
705995722,MDU6SXNzdWU3MDU5OTU3MjI=,162,A decorator for registering custom SQL functions,9599,closed,0,,,2,2020-09-22T00:18:32Z,2020-09-22T00:40:44Z,2020-09-22T00:32:17Z,OWNER,,"Syntactic sugar for `db.conn.create_function` - it would work something like this:

```python
db = sqlite_utils.Database(""mydb.db"")

@db.register_function
def scramble(text):
    chars = list(text)
    random.shuffle(chars)
    return """".join(chars)
```
The decorator would inspect the function to find its name and arity (number of arguments). Having run the above you could then do:
```python
db.execute(""select scramble('hello')"").fetchall()
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/162/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1071531082,I_kwDOCGYnMM4_3kRK,349,A way of creating indexes on newly created tables,9599,open,0,,,3,2021-12-05T18:56:12Z,2021-12-07T01:04:37Z,,OWNER,,"I'm writing code for https://github.com/simonw/git-history/issues/33 that creates a table inside a loop:

```python
item_pk = db[item_table].lookup(
    {""_item_id"": item_id},
    item_to_insert,
    column_order=(""_id"", ""_item_id""),
    pk=""_id"",
)
```
I need to look things up by `_item_id` on this table, which means I need an index on that column (the table can get very big).

But there's no mechanism in SQLite utils to detect if the table was created for the first time and add an index to it. And I don't want to run `CREATE INDEX IF NOT EXISTS` every time through the loop.

This should work like the `foreign_keys=` mechanism.
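
Something along these lines, perhaps - the `indexes=` parameter here is hypothetical, mirroring how `foreign_keys=` works:

```python
item_pk = db[item_table].lookup(
    {""_item_id"": item_id},
    item_to_insert,
    column_order=(""_id"", ""_item_id""),
    pk=""_id"",
    # hypothetical parameter: only applied if this call ends up creating the table
    indexes=[[""_item_id""]],
)
```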
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/349/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
413857257,MDU6SXNzdWU0MTM4NTcyNTc=,15,Ability to add columns to tables,9599,closed,0,,,0,2019-02-24T19:20:51Z,2019-02-24T20:04:40Z,2019-02-24T20:04:40Z,OWNER,,"Makes sense to do this before foreign keys in #2

Python:

    db[""table""].add_column(""new_column"", int)

CLI:

    $ sqlite-utils add-column table new_column INTEGER
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/15/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
413779210,MDU6SXNzdWU0MTM3NzkyMTA=,13,Ability to automatically create IDs from content hash of row,9599,closed,0,,,1,2019-02-24T04:07:08Z,2019-02-24T04:36:48Z,2019-02-24T04:36:48Z,OWNER,,"Sometimes when you are importing data the underlying source provides records without IDs that can be uniquely identified by their contents.

A utility mechanism for calculating a sha1 hash of the contents and using that as a unique ID would be useful.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/13/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
573578548,MDU6SXNzdWU1NzM1Nzg1NDg=,89,Ability to customize columns used by extracts= feature,9599,open,0,,,3,2020-03-01T16:54:48Z,2020-10-16T19:17:50Z,,OWNER,,"@simonw any thoughts on allowing extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the ""value"" column (the documentation uses name), but when you use the `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named ""value"". I have an existing lookup table that I've populated with columns ""id"" and ""name"" as opposed to ""id"" and ""value"", and it seems I can't use `extracts=`, unless I'm missing something...

Initial thought on how to do this would be to allow the dictionary value to be a (table name, column name) pair... so:
```
table = db.table(""trees"", extracts={""species_id"": (""Species"", ""name"")})
```

I haven't dug too much into the existing code yet, but does this make sense? Worth doing?

_Originally posted by @chrishas35 in https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/89/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1754174496,I_kwDOCGYnMM5ojpQg,558,Ability to define unique columns when creating a table,1910303,open,0,,,0,2023-06-13T06:56:19Z,2023-08-18T01:06:03Z,,NONE,,"When creating a new table, it would be good to have an option to set unique columns similar to how not_null is set.

```python
from sqlite_utils import Database

columns = {""mRID"": str, ""name"": str}
db = Database(""example.db"")
db[""ExampleTable""].create(columns, pk=""mRID"", not_null=[""mRID""], if_not_exists=True)
db[""ExampleTable""].create_index([""mRID""], unique=True, if_not_exists=True)
```

So something like this would add the UNIQUE flag to the table definition. 

```python
db[""ExampleTable""].create(columns, pk=""mRID"", not_null=[""mRID""], unique=[""mRID""], if_not_exists=True)
```

```sql
CREATE TABLE ExampleTable (
    mRID TEXT PRIMARY KEY
              NOT NULL
              UNIQUE,
    name TEXT
);
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/558/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
637889964,MDU6SXNzdWU2Mzc4ODk5NjQ=,115,Ability to execute insert/update statements with the CLI,9599,closed,0,,,1,2020-06-12T17:01:17Z,2020-06-12T17:51:11Z,2020-06-12T17:41:10Z,OWNER,,"```
$ sqlite-utils github.db ""update stars set starred_at = ''""
Traceback (most recent call last):
  File ""/Users/simon/.local/bin/sqlite-utils"", line 8, in <module>
    sys.exit(cli())
  File ""/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/core.py"", line 829, in __call__
    return self.main(*args, **kwargs)
  File ""/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/core.py"", line 782, in main
    rv = self.invoke(ctx)
  File ""/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/core.py"", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/core.py"", line 610, in invoke
    return callback(*args, **kwargs)
  File ""/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/cli.py"", line 673, in query
    headers = [c[0] for c in cursor.description]
TypeError: 'NoneType' object is not iterable
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/115/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
665819048,MDU6SXNzdWU2NjU4MTkwNDg=,126,Ability to insert binary data on the CLI using JSON,9599,closed,0,,,2,2020-07-26T16:54:14Z,2020-07-27T04:00:33Z,2020-07-27T03:59:45Z,OWNER,,"> I could solve round tripping (at least a bit) by allowing insert to be run with a flag that says ""these columns are base64 encoded, store the decoded data in a BLOB"".
>
> That would solve inserting binary data using JSON too.
_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/125#issuecomment-664012247_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/126/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
557825032,MDU6SXNzdWU1NTc4MjUwMzI=,77,Ability to insert data that is transformed by a SQL function,9599,closed,0,,,2,2020-01-30T23:45:55Z,2022-02-05T00:04:25Z,2020-01-31T00:24:32Z,OWNER,,"I want to be able to run the equivalent of this SQL insert:
```python
# Convert to ""Well Known Text"" format
wkt = shape(geojson['geometry']).wkt
# Insert and commit the record
conn.execute(""INSERT INTO places (id, name, geom) VALUES(null, ?, GeomFromText(?, 4326))"", (
   ""Wales"", wkt
))
conn.commit()
```
From the Datasette SpatiaLite docs: https://datasette.readthedocs.io/en/stable/spatialite.html

To do this, I need a way of telling `sqlite-utils` that a specific column should be wrapped in `GeomFromText(?, 4326)`.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/77/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
666040390,MDU6SXNzdWU2NjYwNDAzOTA=,127,Ability to insert files piped to insert-files stdin,9599,closed,0,,,3,2020-07-27T07:09:33Z,2020-07-30T03:08:52Z,2020-07-30T03:08:18Z,OWNER,,"> Inserting files by piping them in should work - but since a filename cannot be derived this will need a `--name blah.gif` option.
>
>    cat blah.gif | sqlite-utils insert-files files.db files - --name=blah.gif
>
_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/122#issuecomment-664128071_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/127/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1382457780,I_kwDOCGYnMM5SZqG0,490,Ability to insert multi-line files,6180701,closed,0,,,4,2022-09-22T13:29:22Z,2022-09-26T18:24:44Z,2022-09-23T16:37:58Z,NONE,,"I was looking into how to parse application log files that contain multiline text (e.g. Java stack traces) into sqlite. 
I can see that at the moment `--lines` helps, but falls short when processing multi-line texts.

I wonder if this functionality would be useful for sqlite-utils. A similar approach to Elastic logstash/filebeat can be adopted: https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html 

Potential changes:

- add a `--multiline` option
- additional properties for
  - multiline-pattern (regex expression)
  - multiline-negate: true/false
  - multiline-what: previous or next
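
As a sketch of the merging behaviour those options describe (the pattern here is only an example - a line matching it starts a new record, anything else is appended to the previous one):

```python
import re

pattern = re.compile(r'^\d{4}-\d{2}-\d{2}')  # example: each new log record starts with a date

def merge_multiline(lines):
    # Sketch only: lines matching the pattern start a new record, everything else is appended
    record = []
    for line in lines:
        if pattern.match(line) and record:
            yield '\n'.join(record)
            record = []
        record.append(line.rstrip('\n'))
    if record:
        yield '\n'.join(record)
```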

Or if this is achievable in a different way, please share. Thanks!",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/490/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
488338965,MDU6SXNzdWU0ODgzMzg5NjU=,59,Ability to introspect triggers,9599,closed,0,,,0,2019-09-02T23:47:16Z,2019-09-03T01:52:36Z,2019-09-03T00:09:42Z,OWNER,,"Now that we're creating triggers (thanks to @amjith in #57) it would be neat if we could introspect them too.

I'm thinking:

`db.triggers` - lists all triggers for the database
`db[""tablename""].triggers` - lists triggers for that table

The underlying query for this is `select * from sqlite_master where type = 'trigger'`
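
A minimal sketch of what that introspection could return, written as a standalone function here (the real thing would be a `db.triggers` property returning the namedtuple described below):

```python
def triggers(db):
    # Sketch only: returns plain (name, tbl_name, sql) tuples for now
    return db.execute(
        ""select name, tbl_name, sql from sqlite_master where type = 'trigger'""
    ).fetchall()
```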

I'll return the trigger information in a new namedtuple, similar to how Indexes and ForeignKeys work.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/59/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
480961330,MDU6SXNzdWU0ODA5NjEzMzA=,54,"Ability to list views, and to access db[""view_name""].rows / rows_where / etc",20264,closed,0,,,5,2019-08-15T02:00:28Z,2019-08-23T12:41:09Z,2019-08-23T12:20:15Z,NONE,,"The docs show me how to create a view via `db.create_view()` but I can't seem to get back to that view post-creation; if I query it as a table it returns `None`, and it doesn't appear in the table listing, even though querying the view works fine from inside the sqlite3 command-line.

It'd be great to have the view as a pseudo-table, or if the python/sqlite3 module makes that hard to pull off (I couldn't figure it out), to have that edge-case documented next to the `db.create_view()` docs.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/54/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1374939463,I_kwDOCGYnMM5R8-lH,489,Ability to load JSON records held in a file with a single top level key that is a list of objects,9599,open,0,,,9,2022-09-15T18:46:03Z,2022-09-15T20:56:10Z,,OWNER,,"It's very common for JSON to look like this:
```json
{
  ""Version"": ""5.5.52.6"",
  ""List"": [
    {
      ""Description"": ""Nonpartisan"",
      ""Id"": 1,
      ""ExternalId"": """"
    },
    {
      ""Description"": ""Undeclared"",
      ""Id"": 2,
      ""ExternalId"": """"
    }
  ]
}
```
This example taken from the records downloaded from https://www.elections.alaska.gov/election-results/e/

Right now you can't import this into `sqlite-utils` - you need to run it through `jq .List` first.

But since this is so common, it would be neat if `sqlite-utils` could have a rule of thumb that says ""if it's an object, but it has a single key that is a list of objects, use that instead"".",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/489/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1383646615,I_kwDOCGYnMM5SeMWX,491,Ability to merge databases and tables,8904453,open,0,,,7,2022-09-23T11:10:55Z,2023-06-14T22:14:24Z,,NONE,,"Hi! Let me firstly say that I am a big fan of your work -- I follow your tweets and blog posts with great interest 😄.

Now onto the matter at hand: I think it would be great if `sqlite-utils` included a `merge` or `combine` command, with the purpose of combining different SQLite databases into a single SQLite database. This way, the newly ""merged"" database would contain all differently named tables contained in the databases to be merged as-is, as well as a concatenation of all tables of the same name.

This could look something like this:

```bash
sqlite-utils merge cats.db dogs.db > animals.db
```

I imagine this is rather straightforward if all databases involved in the merge contain differently named tables (i.e. no chance of conflicts), but things get slightly more complicated if two or more of the databases to be merged contain tables with the same name. Not only do you have to ""do something"" with the primary key(s), but these tables could also simply have different schemas (and therefore be incompatible for concatenation to begin with).

Anyhow, I would love your thoughts on this, and, if you are open to it, work together on the design and implementation!",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/491/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
652700770,MDU6SXNzdWU2NTI3MDA3NzA=,119,Ability to remove a foreign key,9599,closed,0,,,3,2020-07-07T22:31:37Z,2020-09-24T20:36:59Z,2020-09-24T20:36:59Z,OWNER,,Useful if you add one but make a mistake and need to undo it without recreating the database from scratch.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/119/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
952154468,MDU6SXNzdWU5NTIxNTQ0Njg=,299,Ability to see just specific table schemas with `sqlite-utils schema`,9599,closed,0,,,1,2021-07-24T22:00:05Z,2021-07-24T22:12:01Z,2021-07-24T22:08:46Z,OWNER,,"It currently accepts no arguments. Allowing for optional arguments specifying tables would be useful:

    sqlite-utils schema fixtures.db facetable searchable
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/299/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1879214365,I_kwDOCGYnMM5wAokd,590,Ability to tell if a Database is an in-memory one,9599,open,0,,,1,2023-09-03T19:50:15Z,2023-09-03T19:50:36Z,,OWNER,,"Currently the constructor accepts `memory=True` or `memory_name=...` and uses those to create a connection, but does not record what those values were:

https://github.com/simonw/sqlite-utils/blob/1260bdc7bfe31c36c272572c6389125f8de6ef71/sqlite_utils/db.py#L307-L349

This makes it hard to tell if a database object is to an in-memory or a file-based database, which is sometimes useful to know.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/590/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
723708310,MDU6SXNzdWU3MjM3MDgzMTA=,188,About loading spatialite,30607,closed,0,,,1,2020-10-17T08:47:02Z,2022-02-05T00:04:26Z,2020-10-17T08:52:58Z,NONE,,"Hi @simonw ,
If I run

```
sqlite3
.load /usr/local/lib/mod_spatialite.so
select spatialite_version();
```

I have `5.0.0`.

![image](https://user-images.githubusercontent.com/30607/96332706-d8cd3300-1065-11eb-906b-daf99963198e.png)


If I run

```
sqlite-utils :memory: ""select spatialite_version()"" --load-extension=spatialite
```

I have

```
Traceback (most recent call last):
  File ""/home/aborruso/.local/bin/sqlite-utils"", line 8, in <module>
    sys.exit(cli())
  File ""/home/aborruso/.local/lib/python3.8/site-packages/click/core.py"", line 829, in __call__
    return self.main(*args, **kwargs)
  File ""/home/aborruso/.local/lib/python3.8/site-packages/click/core.py"", line 782, in main
    rv = self.invoke(ctx)
  File ""/home/aborruso/.local/lib/python3.8/site-packages/click/core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/home/aborruso/.local/lib/python3.8/site-packages/click/core.py"", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/home/aborruso/.local/lib/python3.8/site-packages/click/core.py"", line 610, in invoke
    return callback(*args, **kwargs)
  File ""/home/aborruso/.local/lib/python3.8/site-packages/sqlite_utils/cli.py"", line 936, in query
    _load_extensions(db, load_extension)
  File ""/home/aborruso/.local/lib/python3.8/site-packages/sqlite_utils/cli.py"", line 1326, in _load_extensions
    db.conn.load_extension(ext)
TypeError: argument 1 must be str, not None
```

How do I properly load the spatialite extension in sqlite-utils?

Thank you very much.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/188/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1243715381,I_kwDOCGYnMM5KIZc1,436,"Add ""copy to clipboard"" button to code examples in documentation",9599,closed,0,,,0,2022-05-20T21:53:23Z,2022-05-20T21:57:53Z,2022-05-20T21:57:53Z,OWNER,,"Follows:
- #435

Imitates:
- https://github.com/simonw/datasette/issues/1748

I'll use https://github.com/executablebooks/sphinx-copybutton - here's the Datasette commit: https://github.com/simonw/datasette/commit/1465fea4798599eccfe7e8f012bd8d9adfac3039",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/436/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
924991194,MDU6SXNzdWU5MjQ5OTExOTQ=,280,Add --encoding option to sqlite-utils memory,9599,closed,0,,,0,2021-06-18T15:03:32Z,2021-06-18T15:29:46Z,2021-06-18T15:29:46Z,OWNER,,Follow-on from #272 - this will work like `--encoding` on `sqlite-utils insert` and will affect all CSV files processed by `sqlite-utils memory`.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/280/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1292060682,I_kwDOCGYnMM5NA0gK,450,Add --ignore option to more commands,9599,closed,0,,,9,2022-07-02T13:52:02Z,2022-07-15T22:39:09Z,2022-07-15T22:37:45Z,OWNER,,"As seen in https://sqlite-utils.datasette.io/en/stable/cli-reference.html#add-foreign-key

Could make this TIL trick unnecessary: https://til.simonwillison.net/bash/ignore-errors",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/450/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1553425465,I_kwDOCGYnMM5cl2Q5,522,Add COLUMN_TYPE_MAPPING for timedelta,81377,closed,0,,,0,2023-01-23T16:49:54Z,2023-11-04T00:49:51Z,2023-11-04T00:49:51Z,NONE,,"Currently trying to create a column with Python type `datetime.timedelta` results in an error:

```
>>> from sqlite_utils import Database
>>> db = Database(""test.db"")
>>> test_tbl = db['test']
>>> test_tbl.insert({'col1': datetime.timedelta()})
Traceback (most recent call last):
  File ""<stdin>"", line 1, in <module>
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 2979, in insert
    return self.insert_all(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 3082, in insert_all
    self.create(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 1574, in create
    self.db.create_table(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 961, in create_table
    sql = self.create_table_sql(
  File ""/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py"", line 852, in create_table_sql
    column_type=COLUMN_TYPE_MAPPING[column_type],
KeyError: <class 'datetime.timedelta'>
```

The reason this would be useful is that `MySQLdb` uses `timedelta` for MySQL `TIME` columns:

```
>>> import MySQLdb
>>> conn = MySQLdb.connect(host='database', user='user', passwd='pw')
>>> csr = conn.cursor()
>>> csr.execute(""SELECT CAST('11:20' AS TIME)"")
>>> tuple(csr)
((datetime.timedelta(seconds=40800),),)
```

So currently any attempt to convert a MySQL DB with a `TIME` column using `db-to-sqlite` will result in the above error.
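
As a workaround sketch in the meantime (assuming `row` is one of the dicts coming out of MySQL), the `timedelta` values can be converted to something `sqlite-utils` already understands before inserting:

```python
import datetime

# assumes row is a dict of values read from MySQL; store intervals as total seconds
row = {
    key: value.total_seconds() if isinstance(value, datetime.timedelta) else value
    for key, value in row.items()
}
test_tbl.insert(row)
```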

I was rather surprised that `MySQLdb` uses `timedelta` for `TIME` columns but I see that [this column type](https://dev.mysql.com/doc/refman/8.0/en/time.html) is intended for time intervals as well as the time of day so it makes sense. 

",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/522/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1124237013,I_kwDOCGYnMM5DAn7V,398,Add SpatiaLite helpers to CLI,25778,closed,0,,,9,2022-02-04T14:01:28Z,2022-02-16T01:02:29Z,2022-02-16T00:58:07Z,CONTRIBUTOR,,"Now that #385 is merged, add CLI versions of those methods.

```sh
# init spatialite
sqlite-utils init-spatialite database.db

# or maybe/also
sqlite-utils create database.db --enable-wal --spatialite

# add geometry columns
# needs a database, table, geometry column name, type, with optional SRID and not-null
# this needs to create a table if it doesn't already exist
sqlite-utils add-geometry-column database.db table-name geometry --srid 4326 --not-null

# spatial index an existing table/column
sqlite-utils create-spatial-index database.db table-name geometry
```

Should be mostly straightforward. The one thing worth highlighting in docs is that geometry columns can only be added to existing tables. Trying to add a geometry column to a table that doesn't exist yet might mean you have a schema like `{""rowid"": int, ""geometry"": bytes}`. Might be worth nudging people to explicitly create a table first, then add geometry columns.
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/398/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
931752773,MDU6SXNzdWU5MzE3NTI3NzM=,294,Add a `sqlite-utils memory` example to the README,9599,closed,0,,,0,2021-06-28T16:35:59Z,2021-08-18T21:40:03Z,2021-08-18T21:40:03Z,OWNER,,,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/294/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
961008507,MDU6SXNzdWU5NjEwMDg1MDc=,308,Add an interactive tutorial as a Jupyter notebook,9599,open,0,,,2,2021-08-04T20:34:22Z,2021-08-04T21:30:59Z,,OWNER,,Can show people how to open this up in Binder.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/308/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1353074021,I_kwDOCGYnMM5QpkVl,474,Add an option for specifying column names when inserting CSV data,14294,open,0,,,3,2022-08-27T15:29:59Z,2022-08-31T03:42:36Z,,NONE,,"https://sqlite-utils.datasette.io/en/stable/cli.html#csv-files-without-a-header-row

> The first row of any CSV or TSV file is expected to contain the names of the columns in that file.

> If your file does not include this row, you can use the `--no-headers` option to specify that the tool should not use that first row as headers.

> If you do this, the table will be created with column names called `untitled_1` and `untitled_2` and so on. You can then rename them using the `sqlite-utils transform ... --rename` command.

It would be nice to be able to specify the column names when importing CSV/TSV without a header row, via an extra command line option.

(renaming a column of a large table can take a long time, which makes it an inconvenient workaround)",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/474/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
927789811,MDU6SXNzdWU5Mjc3ODk4MTE=,292,Add contributing documentation,9599,closed,0,,,0,2021-06-23T02:13:05Z,2021-06-25T17:53:51Z,2021-06-25T17:53:51Z,OWNER,,Like https://docs.datasette.io/en/latest/contributing.html (but simpler) - should cover how to run `black` and `flake8` and `mypy` and how to run the tests.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/292/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
708261775,MDU6SXNzdWU3MDgyNjE3NzU=,175,Add docs for .transform(column_order=),9599,closed,0,,,3,2020-09-24T15:19:04Z,2020-09-24T20:35:48Z,2020-09-24T16:00:56Z,OWNER,,"> Need to update docs for `.transform()` now that `column_order=` is available.
_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/pull/174#discussion_r494403327_

Maybe also add this as an option to `sqlite-utils transform` - since reordering columns is actually a pretty nice capability.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/175/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1099586786,I_kwDOCGYnMM5Bilzi,383,Add documentation page with the output of `--help`,9599,closed,0,,,4,2022-01-11T20:25:58Z,2022-01-11T22:55:05Z,2022-01-11T21:44:05Z,OWNER,,"Can be maintained using `cog` from #373. Similar in purpose to the API reference page, but this is for the CLI.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/383/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1128120451,I_kwDOCGYnMM5DPcCD,404,Add example of `--convert` to the help for `sqlite-utils insert`,9599,closed,0,,,2,2022-02-09T06:49:09Z,2022-02-09T06:56:35Z,2022-02-09T06:55:16Z,OWNER,,"https://sqlite-utils.datasette.io/en/3.23/cli-reference.html#insert would be more useful if it included an example of `--convert` in action.

I can maybe use an example from https://simonwillison.net/2022/Jan/11/sqlite-utils/",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/404/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1099897648,I_kwDOCGYnMM5Bjxsw,384,Add examples to every `--help`,9599,closed,0,,,0,2022-01-12T05:31:25Z,2022-01-26T03:15:02Z,2022-01-26T03:15:02Z,OWNER,,Everything on https://sqlite-utils.datasette.io/en/stable/cli-reference.html would benefit from an example.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/384/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1178456794,I_kwDOCGYnMM5GPdLa,418,Add generated files to .gitignore,25778,closed,0,,,0,2022-03-23T17:48:12Z,2022-03-24T21:01:44Z,2022-03-24T21:01:44Z,CONTRIBUTOR,,"I end up with these in my local directory:

	.hypothesis/
	Pipfile
	Pipfile.lock
	pyproject.toml

Might as well gitignore them.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/418/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
925487946,MDU6SXNzdWU5MjU0ODc5NDY=,286,Add installation instructions,9599,closed,0,,,1,2021-06-19T23:55:36Z,2021-06-20T18:47:13Z,2021-06-20T18:47:13Z,OWNER,,"`pip install sqlite-utils`, `pipx install sqlite-utils` and `brew install sqlite-utils`",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/286/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1453134846,I_kwDOCGYnMM5WnRP-,513,Add or document streamlined workflow for importing Datasette csv / json exports,19328961,open,0,,,0,2022-11-17T10:54:47Z,2022-11-17T10:54:47Z,,NONE,,"I'm working on some small front-end enhancements to the laion-aesthetic-datasette project, and I wanted to partially populate a database directly using exports from the existing Datasette instance instead of downloading the parquet files and creating my own multi-GB database.

There have been a number of small issues that are certainly related to my relative lack of familiarity with the toolkit, but that are still surprising. 

For example: a CSV export of the images table (http://laion-aesthetic.datasette.io/laion-aesthetic-6pls.csv?sql=select+rowid%2C+url%2C+text%2C+domain_id%2C+width%2C+height%2C+similarity%2C+punsafe%2C+pwatermark%2C+aesthetic%2C+hash%2C+__index_level_0__+from+images+order+by+random%28%29+limit+100) has nested single quotes, double quotes, and commas that aren't handled by rows_from_file. Similarly, the json output has to be manually transformed to add the column names and remove extraneous information before sqlite_utils can import it.

I was able to work through these issues, but as an enhancement it would be really helpful to create or document a clear workflow that avoids the friction of this data transformation.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/513/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
965102534,MDU6SXNzdWU5NjUxMDI1MzQ=,311,Add reference documentation generated from docstrings,9599,closed,0,,,4,2021-08-10T16:04:00Z,2021-08-11T12:03:50Z,2021-08-11T12:03:50Z,OWNER,,"Using https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html

I'm not a big fan of this kind of documentation because it so often comes in place of narrative documentation - but the library has great narrative documentation now, so the reference documentation can link to it in places.

This will also encourage me to add good docstrings everywhere, useful for IDEs and suchlike.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/311/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
913135723,MDU6SXNzdWU5MTMxMzU3MjM=,266,"Add some types, enforce with mypy",9599,closed,0,,,3,2021-06-07T06:05:56Z,2021-08-18T22:25:38Z,2021-08-18T22:25:38Z,OWNER,,"A good starting point would be adding type information to the members of these named tuples and the introspection methods that return them:

https://github.com/simonw/sqlite-utils/blob/9dff7a38831d471b1dff16d40d89eb5c3b4e84d6/sqlite_utils/db.py#L51-L75",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/266/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
531583658,MDU6SXNzdWU1MzE1ODM2NTg=,68,Add support for porter stemming in FTS,9599,closed,0,,,1,2019-12-02T22:35:52Z,2020-09-20T04:25:53Z,2020-09-20T04:25:47Z,OWNER,,FTS5 can have porter stemming enabled.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/68/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1822918995,I_kwDOCGYnMM5sp4lT,580,Add way to export to a csv file using the Python library,44324811,open,0,,,0,2023-07-26T18:09:26Z,2023-07-26T18:09:26Z,,NONE,,"According to the documentation, we can make a csv output using the CLI tool, but not the Python library. Could we have the latter?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/580/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
593751293,MDU6SXNzdWU1OTM3NTEyOTM=,97,"Adding a ""recreate"" flag to the `Database` constructor",1448859,closed,0,,,4,2020-04-04T05:41:10Z,2020-04-15T14:29:31Z,2020-04-13T03:52:29Z,NONE,,"I have a [script](https://github.com/betatim/binder-datasette/blob/master/create-db.ipynb) that imports data into a sqlite DB. When I re-run that script I'd like to remove the existing sqlite DB, instead of adding to it. The pragmatic answer is to add the check and file deletion to my script.

However I thought it would be easy and useful for others to add a `recreate=True` flag to `db = sqlite_utils.Database(""binder-launches.db"")`. After taking a look at the code for it I am not so sure any more. This is because the connection string could be a URL (or ""connection string"") like `""file:///tmp/foo.db""`. I don't know what the equivalent of `os.path.exists()` is for a connection string or how to detect that something is a connection string and raise an error ""can't use recreate=True and conn_string at the same time"".
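
The simplest thing I can think of (just a sketch, and I'm not sure it's the right behaviour) would be to only support `recreate=True` for plain file paths and refuse everything else:

```python
import os

def maybe_recreate(filename_or_conn, recreate):
    # Illustrative helper: delete the file for plain paths, refuse connection strings
    if not recreate:
        return
    if not isinstance(filename_or_conn, str) or filename_or_conn.startswith('file:'):
        raise ValueError('recreate=True is only supported for plain file paths')
    if os.path.exists(filename_or_conn):
        os.remove(filename_or_conn)
```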

Does anyone have an idea/suggestion where to start investigating?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/97/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
449818897,MDU6SXNzdWU0NDk4MTg4OTc=,24,Additional Column Constraints?,98555,closed,0,,,6,2019-05-29T13:47:03Z,2019-06-13T06:47:17Z,2019-06-13T06:30:26Z,NONE,,"I'm looking to import data from XML with a pre-defined schema that maps fairly closely to a relational database.
In particular, it has explicit annotations for when fields are required, optional, or when a default value should be inferred.

Would there be value in adding the ability to define `NOT NULL` and `DEFAULT` column constraints to sqlite-utils?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/24/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
927766296,MDU6SXNzdWU5Mjc3NjYyOTY=,291,Adopt flake8,9599,closed,0,,,2,2021-06-23T01:19:37Z,2021-06-24T17:50:27Z,2021-06-24T17:50:27Z,OWNER,,,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/291/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1125297737,I_kwDOCGYnMM5DEq5J,402,Advanced class-based `conversions=` mechanism,9599,open,0,,,14,2022-02-06T19:47:41Z,2022-02-16T10:18:55Z,,OWNER,,"The `conversions=` parameter works like this at the moment: https://sqlite-utils.datasette.io/en/3.23/python-api.html#converting-column-values-using-sql-functions

```python
db[""places""].insert(
    {""name"": ""Wales"", ""geometry"": wkt},
    conversions={""geometry"": ""GeomFromText(?, 4326)""},
)
```
This proposal is to support values in that dictionary that are objects, not strings, which can represent more complex conversions - spun out from #399.

New proposed mechanism:
```python
from sqlite_utils.utils import LongitudeLatitude

db[""places""].insert(
    {
        ""name"": ""London"",
        ""point"": (-0.118092, 51.509865)
    },
    conversions={""point"": LongitudeLatitude},
)
```
Here `LongitudeLatitude` is a magical value which does TWO things: it sets up the `GeomFromText(?, 4326)` SQL function, and it handles converting the `(51.509865, -0.118092)` tuple into a `POINT({} {})` string.
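
A rough sketch of what such an object could look like (the attribute and method names here are illustrative, not a settled contract):

```python
class LongitudeLatitude:
    # Sketch: pairs the SQL fragment with a Python-side conversion of the (longitude, latitude) tuple
    sql = 'GeomFromText(?, 4326)'

    @staticmethod
    def convert(value):
        longitude, latitude = value
        return 'POINT({} {})'.format(longitude, latitude)
```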

This would involve a change to the `conversions=` contract - where it usually expects a SQL string fragment, but it can also take an object which combines that SQL string fragment with a Python conversion function.

Best of all... this resolves the `lat, lon` vs. `lon, lat` dilemma because you can use `from sqlite_utils.utils import LongitudeLatitude` OR `from sqlite_utils.utils import LatitudeLongitude` depending on which you prefer!

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030739566_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/402/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1740150327,I_kwDOCGYnMM5nuJY3,557,Aliased ROWID option for tables created from alter=True commands,7908073,closed,0,,,2,2023-06-04T05:29:28Z,2023-06-14T06:09:21Z,2023-06-05T19:26:26Z,CONTRIBUTOR,,"> If you use INTEGER PRIMARY KEY column, the VACUUM does not change the values of that column. However, if you use unaliased rowid, the VACUUM command will reset the rowid values.

ROWID should never be used with foreign keys but the simple act of aliasing rowid to id (which is what happens when one does `id integer primary key` DDL) makes it OK.

It would be convenient if there were more options to use a string column (e.g. filepath) as the PK, and be able to use it during upserts, but, when creating a foreign key, to create an integer column which aliases rowid.

I made an attempt to switch to integer primary keys here but it is not going well... In my use case the path column is a business key. Yes, it should be as simple as including the `id` column in any select statement where I plan on using `upsert`, but it would be nice if this could be abstracted away somehow: https://github.com/chapmanjacobd/library/commit/788cd125be01d76f0fe2153335d9f6b21db1343c

https://github.com/chapmanjacobd/library/actions/runs/5173602136/jobs/9319024777",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/557/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
449848803,MDU6SXNzdWU0NDk4NDg4MDM=,25,"Allow .insert(..., foreign_keys=()) to auto-detect table and primary key",9599,closed,0,,,4,2019-05-29T14:39:22Z,2019-06-13T05:32:32Z,2019-06-13T05:32:32Z,OWNER,,"The `foreign_keys=` argument currently takes a list of triples:
```python
db[""usages""].insert_all(
    usages_to_insert,
    foreign_keys=(
        (""line_id"", ""lines"", ""id""),
        (""definition_id"", ""definitions"", ""id""),
    ),
)
```
As of #16 we have a mechanism for detecting the primary key column (the third item in this triple) - we should use that here too, so foreign keys can be optionally defined as a list of pairs.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/25/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1227571375,I_kwDOCGYnMM5JK0Cv,431,Allow making m2m relation of a table to itself,738408,open,0,,,3,2022-05-06T08:30:43Z,2022-06-23T14:12:51Z,,NONE,,"I am building a database, in which one of the tables has a many-to-many relationship to itself. As far as I can see, this is not (yet) possible using `.m2m()` in sqlite-utils. This may be a bit of a niche use case, so feel free to close this issue if you feel it would introduce too much complexity compared to the benefits.

Example: suppose I have a table of people, and I want to store the information that John and Mary have two children, Michael and Suzy. It would be neat if I could do something like this:

```python
from sqlite_utils import Database

db = Database(memory=True)
db[""people""].insert({""name"": ""John""}, pk=""name"").m2m(
    ""people"", [{""name"": ""Michael""}, {""name"": ""Suzy""}], m2m_table=""parent_child"", pk=""name""
)
db[""people""].insert({""name"": ""Mary""}, pk=""name"").m2m(
    ""people"", [{""name"": ""Michael""}, {""name"": ""Suzy""}], m2m_table=""parent_child"", pk=""name""
)
```

But if I do that, the many-to-many table `parent_child` has only one column:
```
CREATE TABLE [parent_child] (
   [people_id] TEXT REFERENCES [people]([name]),
   PRIMARY KEY ([people_id], [people_id])
)
```

This could be solved by adding one or two keyword_arguments to `.m2m()`, e.g. `.m2m(..., left_name=None, right_name=None)` or `.m2m(..., names=(None, None))`.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/431/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1077102934,I_kwDOCGYnMM5AM0lW,353,"Allow passing a file of code to ""sqlite-utils convert""",536941,closed,0,,,8,2021-12-10T18:06:14Z,2021-12-11T01:38:29Z,2021-12-11T01:09:39Z,CONTRIBUTOR,,"sqlite-utils is so nice, but the ergonomics of the multiline code are kind of tough. It's really hard (maybe impossible) to make the newlines play well with Makefiles.

It would be great to write your code fragment in a separate file and direct it into sqlite-utils,

either like

```sqlite-utils convert my.db my_table my_column < custom_code.py```

or

```sqlite-utils convert my.db my_table my_column --custom-code=custom_code.py```

Thanks, as ever, for these great tools!",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/353/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1077322009,I_kwDOCGYnMM5ANqEZ,355,Allow users to pass a full convert() function definition,9599,closed,0,,,4,2021-12-10T23:59:58Z,2021-12-11T00:51:15Z,2021-12-11T00:49:31Z,OWNER,,"> I think the fix for this is to change the rules about what code is accepted in both the `-` mode and the literal code string mode: you can pass in a Python expression, OR a fragment that gets turned into a function, OR code that implements its own `def convert(value)` function. So this would work too:
> ```sh
> sqlite-utils convert my.db mytable col1 '
> def convert(value):
>     return value.upper()
> '
> ```
_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/353#issuecomment-991381679_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/355/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
817989436,MDU6SXNzdWU4MTc5ODk0MzY=,242,Async support,25778,open,0,,,13,2021-02-27T18:29:38Z,2021-10-28T14:37:56Z,,CONTRIBUTOR,,"Following our conversation last week, want to note this here before I forget.

I've had a couple situations where I'd like to do a bunch of updates in an async event loop, but I run into SQLite's issues with concurrent writes. This feels like something sqlite-utils could help with.

PeeWee ORM has a [SQLite write queue](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq) that might be a good model. It's using threads or gevent, but I _think_ that approach would translate well enough to asyncio. 

Happy to help with this, too.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/242/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1718595700,I_kwDOCGYnMM5mb7B0,550,AttributeError: 'EntryPoints' object has no attribute 'get' for flake8 on Python 3.7,9599,closed,0,,,3,2023-05-21T18:24:39Z,2023-05-21T18:42:25Z,2023-05-21T18:41:58Z,OWNER,,"https://github.com/simonw/sqlite-utils/actions/runs/5039064797/jobs/9036965488

```
Traceback (most recent call last):
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/bin/flake8"", line 8, in <module>
    sys.exit(main())
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/cli.py"", line 22, in main
    app.run(argv)
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py"", line 363, in run
    self._run(argv)
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py"", line 350, in _run
    self.initialize(argv)
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py"", line 330, in initialize
    self.find_plugins(config_finder)
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/main/application.py"", line 153, in find_plugins
    self.check_plugins = plugin_manager.Checkers(local_plugins.extension)
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/plugins/manager.py"", line 357, in __init__
    self.namespace, local_plugins=local_plugins
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/plugins/manager.py"", line 238, in __init__
    self._load_entrypoint_plugins()
  File ""/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/flake8/plugins/manager.py"", line 254, in _load_entrypoint_plugins
    eps = importlib_metadata.entry_points().get(self.namespace, ())
AttributeError: 'EntryPoints' object has no attribute 'get'
Error: Process completed with exit code 1.
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/550/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
706167456,MDU6SXNzdWU3MDYxNjc0NTY=,168,Automate (as much as possible) updates published to Homebrew,9599,closed,0,,,2,2020-09-22T08:08:37Z,2020-11-09T07:43:30Z,2020-11-09T07:43:30Z,OWNER,,I'd like to get new `sqlite-utils` (and Datasette) releases submitted to Homebrew as painlessly as possible.,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/168/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
925305186,MDU6SXNzdWU5MjUzMDUxODY=,282,Automatic type detection for CSV data,9599,closed,0,,,4,2021-06-19T03:33:21Z,2021-06-19T04:42:03Z,2021-06-19T04:38:00Z,OWNER,,"I've touched on this before in #179 - but now that I've added `sqlite-utils memory` this is much more important - because unlike with `sqlite-utils insert` the in-memory command doesn't give you the opportunity to fix any types you imported from CSV, so queries like `select * from stdin where age > 3` are never going to work correctly against these temporary in-memory tables.

Teaching `sqlite-utils insert` to detect types for columns in a CSV file would be a backwards-compatibility breaking change. Teaching `sqlite-utils memory` that trick would not be, since it hasn't been included in a release yet.

It's a little inconsistent, but I'm going to have `sqlite-utils memory` default to detecting types while `sqlite-utils insert` does not. In each case this can be controlled by a new command-line option:

    cat file.csv | sqlite-utils memory - --no-detect-types

To opt-in for `sqlite-utils insert`:

    cat file.csv | sqlite-utils insert blah.db blah - --detect-types
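
For reference, a minimal sketch of the kind of per-column detection this implies (my assumed logic, not necessarily how the final implementation will work):

```python
def detect_type(values):
    # Try int for every value, then float; otherwise fall back to text
    for candidate in (int, float):
        try:
            for value in values:
                candidate(value)
            return candidate
        except ValueError:
            continue
    return str

# detect_type(['1', '2']) -> int, detect_type(['1.5', '2']) -> float,
# detect_type(['1', 'x']) -> str
```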

I'll have short options for these too: `-n` for `--no-detect-types` and `-d` for `--detect-types`.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/282/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,completed
1816997390,I_kwDOCGYnMM5sTS4O,576,Backfill the release notes prior to 0.4,9599,closed,0,,,2,2023-07-23T05:41:42Z,2023-07-23T05:49:51Z,2023-07-23T05:48:21Z,OWNER,,"Currently the changelog starts at 0.4:

https://sqlite-utils.datasette.io/en/3.34/changelog.html#id115

I want the other releases - according to https://pypi.org/project/sqlite-utils/#history there are three missing:

<img width=""663"" alt=""image"" src=""https://github.com/simonw/sqlite-utils/assets/9599/4ebc036b-7bb1-477c-95c1-a2c7e26bcb62"">
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/576/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1107557831,I_kwDOCGYnMM5CA_3H,386,"Better ""contributing"" documentation",9599,closed,0,,,0,2022-01-19T02:11:48Z,2022-01-19T02:15:21Z,2022-01-19T02:15:21Z,OWNER,,"This page jumps straight into running the tests: https://sqlite-utils.datasette.io/en/latest/contributing.html

It should add a little more about expected collaboration styles - opening an issue before filing a pull request - and probably link to https://simonwillison.net/2022/Jan/12/how-i-build-a-feature/",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/386/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1118585417,I_kwDOCGYnMM5CrEJJ,393,Better documentation for insert-replace,9599,closed,0,,,1,2022-01-30T15:40:23Z,2022-02-03T22:13:24Z,2022-02-03T22:13:24Z,OWNER,,"Currently: https://sqlite-utils.datasette.io/en/stable/python-api.html#insert-replacing-data

> If you want to insert a record or replace an existing record with the same primary key, using the replace=True argument to .insert() or .insert_all():
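
For illustration, a minimal sketch of the pattern the docs should walk through (table and column names here are invented):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
people = db['people']
people.insert({'id': 1, 'name': 'Cleo'}, pk='id')

# Inserting the same primary key again raises an IntegrityError...
# people.insert({'id': 1, 'name': 'Cleopatra'}, pk='id')

# ...while replace=True replaces the existing row instead:
people.insert({'id': 1, 'name': 'Cleopatra'}, pk='id', replace=True)
```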

Should describe the exception you get first, then how to use replace to avoid it.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/393/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
783778672,MDU6SXNzdWU3ODM3Nzg2NzI=,220,Better error message for *_fts methods against views,649467,closed,0,,,3,2021-01-11T23:24:00Z,2021-02-22T20:44:51Z,2021-02-14T22:34:26Z,NONE,,"enable_fts and its related methods only work on tables, not views. 

Could those methods and possibly others move up to the Queryable superclass?
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/220/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1094981339,I_kwDOCGYnMM5BRBbb,363,Better error message if `--convert` code fails to return a dict,9599,closed,0,,,4,2022-01-06T05:26:28Z,2022-02-03T22:52:30Z,2022-02-03T22:51:30Z,OWNER,,"Here's the traceback if your `--convert` function doesn't return a dict right now:
```
% sqlite-utils insert /tmp/all.db blah /tmp/log.log --convert 'all.upper()' --all         

Traceback (most recent call last):
  File ""/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/bin/sqlite-utils"", line 33, in <module>
    sys.exit(load_entry_point('sqlite-utils', 'console_scripts', 'sqlite-utils')())
  File ""/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py"", line 1137, in __call__
    return self.main(*args, **kwargs)
  File ""/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py"", line 1062, in main
    rv = self.invoke(ctx)
  File ""/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py"", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py"", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py"", line 763, in invoke
    return __callback(*args, **kwargs)
  File ""/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/cli.py"", line 949, in insert
    insert_upsert_implementation(
  File ""/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/cli.py"", line 834, in insert_upsert_implementation
    db[table].insert_all(
  File ""/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/db.py"", line 2602, in insert_all
    first_record = next(records)
  File ""/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/db.py"", line 3044, in fix_square_braces
    for record in records:
  File ""/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/cli.py"", line 831, in <genexpr>
    docs = (decode_base64_values(doc) for doc in docs)
  File ""/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/utils.py"", line 86, in decode_base64_values
    to_fix = [
  File ""/Users/simon/Dropbox/Development/sqlite-utils/sqlite_utils/utils.py"", line 89, in <listcomp>
    if isinstance(doc[k], dict)
TypeError: string indices must be integers
```
It would be nicer if that returned a more useful error message.

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/361#issuecomment-1006295276_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/363/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1200866134,I_kwDOCGYnMM5Hk8NW,424,Better error message if you try to create a table with no columns,9599,closed,0,,,1,2022-04-12T02:43:20Z,2022-04-13T22:40:15Z,2022-04-13T22:40:10Z,OWNER,,"Seen here:

- https://github.com/simonw/geojson-to-sqlite/issues/30

Attempting to create a table with no columns produced this confusing error:

```
File ""/Users/simon/.local/pipx/venvs/geojson-to-sqlite/lib/python3.9/site-packages/geojson_to_sqlite/utils.py"", line 69, in import_features
    db[table].create(column_types, pk=pk)
  File ""/Users/simon/.local/pipx/venvs/geojson-to-sqlite/lib/python3.9/site-packages/sqlite_utils/db.py"", line 863, in create
    self.db.create_table(
  File ""/Users/simon/.local/pipx/venvs/geojson-to-sqlite/lib/python3.9/site-packages/sqlite_utils/db.py"", line 517, in create_table
    self.execute(sql)
  File ""/Users/simon/.local/pipx/venvs/geojson-to-sqlite/lib/python3.9/site-packages/sqlite_utils/db.py"", line 236, in execute
    return self.conn.execute(sql)
sqlite3.OperationalError: near "")"": syntax error
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/424/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
711649325,MDU6SXNzdWU3MTE2NDkzMjU=,182,"Better handling of encodings other than utf-8 for ""sqlite-utils insert""",765871,closed,0,,,5,2020-09-30T05:43:48Z,2020-10-16T17:20:41Z,2020-10-16T17:18:52Z,NONE,,"Makefile:
```
data.db:
        curl -O http://maps.natalian.org/data.txt
        go run csv-write.go > data.csv
        sqlite-utils insert data.db travels data.csv --csv

clean:
        rm data*
```
[csv-write.go](https://gist.github.com/kaihendry/dff2442de20d73f900026d13bf7a11d9)


Error message is:

```
sqlite-utils insert data.db travels data.csv --csv
Traceback (most recent call last):
  File ""/home/hendry/.local/bin/sqlite-utils"", line 8, in <module>
    sys.exit(cli())
  File ""/home/hendry/.local/lib/python3.8/site-packages/click/core.py"", line 829, in __call__
    return self.main(*args, **kwargs)
  File ""/home/hendry/.local/lib/python3.8/site-packages/click/core.py"", line 782, in main
    rv = self.invoke(ctx)
  File ""/home/hendry/.local/lib/python3.8/site-packages/click/core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/home/hendry/.local/lib/python3.8/site-packages/click/core.py"", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/home/hendry/.local/lib/python3.8/site-packages/click/core.py"", line 610, in invoke
    return callback(*args, **kwargs)
  File ""/home/hendry/.local/lib/python3.8/site-packages/sqlite_utils/cli.py"", line 614, in insert
    insert_upsert_implementation(
  File ""/home/hendry/.local/lib/python3.8/site-packages/sqlite_utils/cli.py"", line 553, in insert_upsert_implementation
    headers = next(reader)
  File ""/usr/lib/python3.8/codecs.py"", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 1234: invalid continuation byte
make: *** [Makefile:4: data.db] Error 1
[hendry@t14s datasette-map]$ sqlite-utils --version
sqlite-utils, version 2.19
```

I'm a little bit surprised if Go is spewing out bad Unicode, but I'm not sure how to grok `position 1234`.
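
A rough workaround sketch, assuming the file is actually Latin-1/Windows-1252 rather than broken UTF-8: re-encode it to UTF-8 before handing it to `sqlite-utils insert` (newer releases also grew an `--encoding` option for CSV input, if your installed version has it).

```python
# Re-encode data.csv so the default UTF-8 decoder is happy
with open('data.csv', 'rb') as source:
    raw = source.read()

with open('data-utf8.csv', 'w', encoding='utf-8') as dest:
    dest.write(raw.decode('latin-1'))
```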
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/182/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
688659182,MDU6SXNzdWU2ODg2NTkxODI=,145,Bug when first record contains fewer columns than subsequent records,96218,closed,0,,,2,2020-08-30T05:44:44Z,2020-09-08T23:21:23Z,2020-09-08T23:21:23Z,CONTRIBUTOR,,"`insert_all()` selects the maximum batch size based on the number of fields in the first record.  If the first record has fewer fields than subsequent records (and `alter=True` is passed), this can result in SQL statements with more than the maximum permitted number of host parameters.  This situation is perhaps unlikely to occur, but could happen if the first record had, say, 10 columns, such that `batch_size` (based on  `SQLITE_MAX_VARIABLE_NUMBER = 999`) would be 99.  If the next 98 rows had 11 columns, the resulting SQL statement for the first batch would have `10 * 1 + 11 * 98 = 1088` host parameters (and subsequent batches, if the data were consistent from thereon out, would have `99 * 11 = 1089`).

I suspect that this bug is masked somewhat by the fact that while:
> [`SQLITE_MAX_VARIABLE_NUMBER`](https://www.sqlite.org/limits.html#max_variable_number) ... defaults to 999 for SQLite versions prior to 3.32.0 (2020-05-22) or 32766 for SQLite versions after 3.32.0.

it is common that it is increased at compile time.  Debian-based systems, for example, seem to ship with a version of sqlite compiled with `SQLITE_MAX_VARIABLE_NUMBER` set to 250,000, and I believe this is the case for homebrew installations too.

A test for this issue might look like this:
```python
def test_columns_not_in_first_record_should_not_cause_batch_to_be_too_large(fresh_db):
    # sqlite on homebrew and Debian/Ubuntu etc. is typically compiled with
    #  SQLITE_MAX_VARIABLE_NUMBER set to 250,000, so we need to exceed this value to
    #  trigger the error on these systems.
    THRESHOLD = 250000
    extra_columns = 1 + (THRESHOLD - 1) // 99
    records = [
        {""c0"": ""first record""},  # one column in first record -> batch_size = 100
        # fill out the batch with 99 records with enough columns to exceed THRESHOLD
        *[
            dict([(""c{}"".format(i), j) for i in range(extra_columns)])
            for j in range(99)
        ]
    ]
    # This should not raise sqlite3.OperationalError (too many SQL variables)
    fresh_db[""too_many_columns""].insert_all(records, alter=True)
```

The best solution, I think, is simply to process all the records when determining columns, column types, and the batch size.  In my tests this doesn't seem to be particularly costly at all, and cuts out a lot of complications (including obviating my implementation of #139 at #142).  I'll raise a PR for your consideration.

",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/145/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1306548397,I_kwDOCGYnMM5N4Fit,454,CLI command for duplicating tables,9599,closed,0,,,1,2022-07-15T21:31:27Z,2022-07-15T21:48:23Z,2022-07-15T21:45:51Z,OWNER,,"CLI equivalent of:
- #449",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/454/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1205687423,I_kwDOCGYnMM5H3VR_,426,CLI docs should link to Python docs and vice versa,9599,closed,0,9599,,1,2022-04-15T16:05:15Z,2023-07-22T22:13:22Z,2023-07-22T22:13:22Z,OWNER,,"For every command/API method there should be a link to the equivalent in the other form factor.

Maybe also link to the API and CLI reference pages too.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/426/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1239034903,I_kwDOCGYnMM5J2iwX,433,CLI eats my cursor,7908073,closed,0,,,10,2022-05-17T18:52:52Z,2023-11-04T00:46:30Z,2023-11-04T00:46:30Z,CONTRIBUTOR,,"I'm not sure why this happens but `sqlite-utils` makes my terminal cursor disappear after running commands like `sqlite-utils insert`. I've only noticed this behavior in `sqlite-utils`, not in any other CLI tools

I can still type commands after it runs but the text cursor is invisible",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/433/reactions"", ""total_count"": 5, ""+1"": 5, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1855894222,I_kwDOCGYnMM5unrLO,585,CLI equivalents to `transform(add_foreign_keys=)`,9599,closed,0,,,7,2023-08-18T01:07:15Z,2023-08-18T01:51:16Z,2023-08-18T01:51:15Z,OWNER,,"The new options added in:
- #577
Deserve consideration in the CLI as well.

https://github.com/simonw/sqlite-utils/blob/d2bcdc00c6ecc01a6e8135e775ffdb87572b802b/sqlite_utils/db.py#L1706-L1708",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/585/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1098544628,I_kwDOCGYnMM5BenX0,379,CLI options for running ANALYZE,9599,closed,0,,7558727,0,2022-01-11T01:09:16Z,2022-01-11T01:38:01Z,2022-01-11T01:36:48Z,OWNER,,"> The Python methods are all done now, next step is the CLI options. I'll do those in a separate issue.

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/366#issuecomment-1009508865_

- [x] `sqlite-utils analyze` command
- [x] `sqlite-utils create-index --analyze` option (see #365)
- [x] `sqlite-utils insert --analyze` option
- [x] `sqlite-utils upsert --analyze` option

In #378 I also added `.delete_where(..., analyze=True)` but there isn't currently a `sqlite-utils delete-where` CLI command - deletions via CLI are expected to be handled using SQL queries.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/379/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
665700495,MDU6SXNzdWU2NjU3MDA0OTU=,122,CLI utility for inserting binary files into SQLite,9599,closed,0,,,10,2020-07-26T03:27:39Z,2020-07-27T07:10:41Z,2020-07-27T07:09:03Z,OWNER,,"SQLite BLOB columns can store entire binary files. The challenge is inserting them, since they don't neatly fit into JSON objects.

It would be great if the `sqlite-utils` CLI had a trick for helping with this.
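
For context, a minimal sketch of how this already works through the Python API (the file name is illustrative), which is the behaviour a CLI trick would need to mirror:

```python
import pathlib
import sqlite_utils

db = sqlite_utils.Database('files.db')
path = pathlib.Path('photo.jpg')

# bytes values are stored as SQLite BLOBs
db['files'].insert({
    'path': str(path),
    'content': path.read_bytes(),
}, pk='path')
```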

Inspired by https://github.com/simonw/datasette-media/issues/14",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/122/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1271426387,I_kwDOCGYnMM5LyG1T,444,CSV `extras_key=` and `ignore_extras=` equivalents for CLI tool,9599,open,0,,,5,2022-06-14T22:22:47Z,2022-07-07T16:39:18Z,,OWNER,,"> I forgot to add equivalents of `extras_key=` and `ignore_extras=` to the CLI tool - will do that in a separate issue.

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155767915_",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/444/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1250629388,I_kwDOCGYnMM5KixcM,440,CSV files with too many values in a row cause errors,4068,closed,0,,,20,2022-05-27T10:54:44Z,2022-06-14T22:23:01Z,2022-06-14T20:12:46Z,NONE,,"*Original title: csv.DictReader can have None as key*

In some cases, `csv.DictReader` can have `None` as key for unnamed columns, and a list of values as value.
`sqlite_utils.utils.rows_from_file` cannot handle that:

```python
url=""https://artsdatabanken.no/Fab2018/api/export/csv""
db = sqlite_utils.Database("":memory"")

with urlopen(url) as fab:
    reader, _ = sqlite_utils.utils.rows_from_file(fab, encoding=""utf-16le"")   
    db[""fab2018""].insert_all(reader, pk=""Id"")
```

Result:
```
Traceback (most recent call last):
  File ""<stdin>"", line 3, in <module>
  File ""/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py"", line 2924, in insert_all
    chunk = list(chunk)
  File ""/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py"", line 3454, in fix_square_braces
    if any(""["" in key or ""]"" in key for key in record.keys()):
  File ""/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py"", line 3454, in <genexpr>
    if any(""["" in key or ""]"" in key for key in record.keys()):
TypeError: argument of type 'NoneType' is not iterable
```

Code:
https://github.com/simonw/sqlite-utils/blob/59be60c471fd7a2c4be7f75e8911163e618ff5ca/sqlite_utils/db.py#L3454

`sqlite-utils insert` from command line is not affected by this issue.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/440/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
602569315,MDU6SXNzdWU2MDI1NjkzMTU=,102,Can't store an array or dictionary containing a bytes value,9599,closed,0,,,0,2020-04-18T22:49:21Z,2020-05-01T20:45:45Z,2020-05-01T20:45:45Z,OWNER,,"```
In [1]: import sqlite_utils                                                     

In [2]: db = sqlite_utils.Database(memory=True)                                 

In [3]: db[""t""].insert({""id"": 1, ""data"": {""foo"": b""bytes""}})                    
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-a8ab1f72c72c> in <module>
----> 1 db[""t""].insert({""id"": 1, ""data"": {""foo"": b""bytes""}})

~/Dropbox/Development/sqlite-utils/sqlite_utils/db.py in insert(self, record, pk, foreign_keys, column_order, not_null, defaults, hash_id, alter, ignore, replace, extracts, conversions, columns)
    950             extracts=extracts,
    951             conversions=conversions,
--> 952             columns=columns,
    953         )
    954 

~/Dropbox/Development/sqlite-utils/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, conversions, columns, upsert)
   1052                 for key in all_columns:
   1053                     value = jsonify_if_needed(
-> 1054                         record.get(key, None if key != hash_id else _hash(record))
   1055                     )
   1056                     if key in extracts:

~/Dropbox/Development/sqlite-utils/sqlite_utils/db.py in jsonify_if_needed(value)
   1318 def jsonify_if_needed(value):
   1319     if isinstance(value, (dict, list, tuple)):
-> 1320         return json.dumps(value)
   1321     elif isinstance(value, (datetime.time, datetime.date, datetime.datetime)):
   1322         return value.isoformat()

/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    229         cls is None and indent is None and separators is None and
    230         default is None and not sort_keys and not kw):
--> 231         return _default_encoder.encode(obj)
    232     if cls is None:
    233         cls = JSONEncoder

/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in encode(self, o)
    197         # exceptions aren't as detailed.  The list call should be roughly
    198         # equivalent to the PySequence_Fast that ''.join() would do.
--> 199         chunks = self.iterencode(o, _one_shot=True)
    200         if not isinstance(chunks, (list, tuple)):
    201             chunks = list(chunks)

/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in iterencode(self, o, _one_shot)
    255                 self.key_separator, self.item_separator, self.sort_keys,
    256                 self.skipkeys, _one_shot)
--> 257         return _iterencode(o, 0)
    258 
    259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in default(self, o)
    177 
    178         """"""
--> 179         raise TypeError(f'Object of type {o.__class__.__name__} '
    180                         f'is not JSON serializable')
    181 

TypeError: Object of type bytes is not JSON serializable
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/102/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
573740712,MDU6SXNzdWU1NzM3NDA3MTI=,90,Cannot .enable_fts() for columns with spaces in their names,9599,closed,0,,,0,2020-03-02T06:06:03Z,2020-03-02T06:10:49Z,2020-03-02T06:10:49Z,OWNER,,"```
import sqlite_utils
db = sqlite_utils.Database(memory=True)                                 
db[""test""].insert({""space in name"": ""hello""})                           
db[""test""].enable_fts([""space in name""])                                
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
<ipython-input-8-ce4b87dd1c7a> in <module>
----> 1 db['test'].enable_fts([""space in name""])

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in enable_fts(self, columns, fts_version, create_triggers)
    755         )
    756         self.db.conn.executescript(sql)
--> 757         self.populate_fts(columns)
    758 
    759         if create_triggers:

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in populate_fts(self, columns)
    787             table=self.name, columns="", "".join(columns)
    788         )
--> 789         self.db.conn.executescript(sql)
    790         return self
    791 

OperationalError: near ""in"": syntax error
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/90/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1434911255,I_kwDOCGYnMM5VhwIX,510,Cannot enable FTS5 despite it being available,1176293,closed,0,,,3,2022-11-03T16:03:49Z,2022-11-18T18:37:52Z,2022-11-17T10:36:28Z,NONE,,"When I do `sqlite-utils enable-fts my.db table_name column_name` (with or without `--fts5`), I get an FTS4 virtual table instead of the expected FTS5.

FTS5 is however available and Python/SQLite versions do not seem to be the issue. I can manually create the FTS5 virtual table, and then Datasette also works with it from this same Python environment.

```
>>> sqlite3.version
2.6.0
>>> sqlite3.sqlite_version
3.39.4
```

`PRAGMA compile_options;` includes `ENABLE_FTS5`.

`sqlite-utils, version 3.30`.
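
For what it's worth, one way to double-check FTS5 from the same Python environment, independent of sqlite-utils (a quick sketch):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
try:
    conn.execute('CREATE VIRTUAL TABLE fts5_check USING fts5(content)')
    print('FTS5 is available')
except sqlite3.OperationalError:
    print('FTS5 is not available')
```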

Any ideas what's happening and how to fix?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/510/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1976986318,I_kwDOCGYnMM511mrO,599,Cannot find spatialite on arm64 linux,37802088,closed,0,,,1,2023-11-03T22:05:51Z,2023-11-04T01:06:31Z,2023-11-04T00:33:28Z,CONTRIBUTOR,,"Initially, I found an issue in `datasette` where it wouldn’t find `spatialite` when running on my Radxa Rock 5B - an RK3588 powered SBC, running the arm64 build of Debian Bullseye. I confirmed the same behaviour on my Raspberry Pi 4 - a BCM2711 powered SBC, running the arm64 build of Debian Bookworm.

```
$ datasette --load-extension=spatialite example.db
Error: Could not find SpatiaLite extension
```

I did some digging and realised the issue originates in this project. Even with the `libsqlite3-mod-spatialite` package installed, `pytest` skips all of the GIS tests in the project.

```
$ apt list --installed | grep spatial
[…]
libsqlite3-mod-spatialite/stable,now 5.0.1-3 arm64 [installed]

$ ls -l /usr/lib/*/*spatial*
lrwxrwxrwx 1 root root      23 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so -> mod_spatialite.so.7.1.0
lrwxrwxrwx 1 root root      23 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so.7 -> mod_spatialite.so.7.1.0
-rw-r--r-- 1 root root 7348584 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so.7.1.0
```

```
$ pytest
tests/test_get.py ......                                                 [ 73%]
tests/test_gis.py ssssssssssss                                           [ 75%]
tests/test_hypothesis.py ....                                            [ 75%]
```

I tracked the issue down to the [`find_sqlite()` function in the `utils.py`](https://github.com/simonw/sqlite-utils/blob/622c3a5a7dd53a09c029e2af40c2643fe7579340/sqlite_utils/utils.py#L60) file. The [`SPATIALITE_PATHS`](https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/utils.py#L34-L39) array doesn’t have an entry for the location of this module on arm64 linux.
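
A possible fix, sketched here with the existing entries abridged, would be to add the aarch64 Debian/Ubuntu location (taken from the `ls` output above) to that array:

```python
SPATIALITE_PATHS = (
    # ...existing x86_64 Linux / macOS entries...
    '/usr/lib/x86_64-linux-gnu/mod_spatialite.so',
    '/usr/local/lib/mod_spatialite.dylib',
    # arm64 Debian/Ubuntu ships the module here:
    '/usr/lib/aarch64-linux-gnu/mod_spatialite.so',
)
```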
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/599/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
919314806,MDU6SXNzdWU5MTkzMTQ4MDY=,270,Cannot set type JSON,4068,closed,0,,,4,2021-06-11T23:53:22Z,2021-06-16T17:34:49Z,2021-06-16T15:47:06Z,NONE,,"It would be great if the column type could be set to JSON. That would not be different from handling a regular string. It would be something like `repr(value)` and it would work with both JSON and CSV inputs, no matter if `value` is a real list or just a string representing a list.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/270/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1907281675,I_kwDOCGYnMM5xrs8L,595,Cascading DELETE not working with Table.delete(pk),123451970,closed,0,,,1,2023-09-21T15:46:41Z,2023-09-25T09:38:57Z,2023-09-25T09:38:13Z,NONE,,"Hi !
I noticed that when I am trying to use the delete method of the Table object,
the record gets properly deleted from the table, but the cascading delete triggers on foreign keys do not activate.

`self.db[""contact""].delete(contact_id)`

I tried querying the database directly via DB Browser and the triggers work without any issue.
Looked up the source code and behind the scenes this method is just querying the database normally, so I'm not exactly sure where this behavior comes from.
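
One workaround sketch, assuming the cascades are declared as `ON DELETE CASCADE` foreign keys: SQLite only enforces those when `PRAGMA foreign_keys` is switched on for the connection (it is off by default), so turning it on before calling `.delete()` may be what is missing. The database path and id below are placeholders:

```python
import sqlite_utils

db = sqlite_utils.Database('contacts.db')
db.execute('PRAGMA foreign_keys = ON')  # off by default in SQLite

contact_id = 1  # placeholder
db['contact'].delete(contact_id)
```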

Thank you in advance for your time ! ",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/595/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
683805434,MDU6SXNzdWU2ODM4MDU0MzQ=,135,Code for finding SpatiaLite in the usual locations,9599,closed,0,,,3,2020-08-21T20:15:34Z,2022-02-05T00:04:26Z,2020-08-21T20:30:13Z,OWNER,,"I built this for `shapefile-to-sqlite` but it would be useful in `sqlite-utils` too:

https://github.com/simonw/shapefile-to-sqlite/blob/e754d0747ca2facf9a7433e2d5d15a6a37a9cf6e/shapefile_to_sqlite/utils.py#L16-L19

```python
SPATIALITE_PATHS = (
    ""/usr/lib/x86_64-linux-gnu/mod_spatialite.so"",
    ""/usr/local/lib/mod_spatialite.dylib"",
)
```

https://github.com/simonw/shapefile-to-sqlite/blob/e754d0747ca2facf9a7433e2d5d15a6a37a9cf6e/shapefile_to_sqlite/utils.py#L105-L109

```python
def find_spatialite():
    for path in SPATIALITE_PATHS:
        if os.path.exists(path):
            return path
    return None
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/135/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
586486367,MDU6SXNzdWU1ODY0ODYzNjc=,95,Columns with only null values are no longer created in the database,9599,closed,0,,,0,2020-03-23T20:07:42Z,2020-03-23T20:31:15Z,2020-03-23T20:31:15Z,OWNER,,"Bug introduced in #94, and released in `2.4.3`.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/95/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1257724585,I_kwDOCGYnMM5K91qp,441,Combining `rows_where()` and `search()` to limit which rows are searched,1448859,closed,0,,,4,2022-06-02T06:01:55Z,2022-06-14T21:57:57Z,2022-06-14T21:54:38Z,NONE,,"What is the right way to limit a full text search query to some rows of a table?

For example, I have a table that contains the following columns: `title`, `content`, `owner` (each row represents a document). The `owner` column is a username. It feels right to store all documents in one table, instead of having one table per owner. In particular because I'd like to full text search all documents, only documents owned by one user and documents owned by a set of users.

I tried to combine `.rows_where(""owner = ?"", ""1234"")` and `.search()` from the `Table` class but I don't think that is meant to work. I discovered `.search_sql()` as a way to generate the FTS SQL statement. By hand I can edit it to add an `AND [original].[owner] = :owner` to the `where` clause. This seems to do what I want.

My two questions:
1. is adding an `AND ...` to the `where` clause actually the right thing to do or should I be doing something else (my SQL skills are low)?
2. is there a built-in to sqlite-utils way to achieve this?

Right now I am thinking I will make my own version of `search_sql()` that generates a query that contains an additional `owner = :owner` for my particular use-case.
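
For reference, a hand-rolled sketch of the combined query described above (this assumes an FTS5 table named `documents_fts`, following the usual `<table>_fts` convention, and is not a built-in sqlite-utils feature):

```python
import sqlite_utils

db = sqlite_utils.Database('docs.db')
sql = '''
select documents.title, documents.content, documents.owner
from documents
join documents_fts on documents.rowid = documents_fts.rowid
where documents_fts match :query
  and documents.owner = :owner
order by documents_fts.rank
'''
rows = list(db.query(sql, {'query': 'search terms', 'owner': '1234'}))
```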

Bonus question: is this generally useful/something to add to sqlite-utils or too niche?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/441/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1067771698,I_kwDOCGYnMM4_pOcy,348,Command for creating an empty database,9599,closed,0,,7558727,6,2021-11-30T23:24:27Z,2022-01-13T07:06:59Z,2022-01-09T20:33:20Z,OWNER,,"I sometimes find the need to create an empty SQLite database file - for example if I want to enable WAL on it before using it with another script. I currently do that like this:

    sqlite3 my.db vacuum
    sqlite-utils enable-wal my.db
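
For comparison, the Python-API equivalent of those two commands looks something like this (a sketch using existing `Database` methods):

```python
import sqlite_utils

db = sqlite_utils.Database('my.db')
db.vacuum()      # writes out a valid (empty) database file
db.enable_wal()  # switches journal_mode to WAL
```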

It would be nice if `sqlite-utils` had a convenience command for doing this.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/348/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1353441389,I_kwDOCGYnMM5Qq-Bt,477,Conda Forge,49702524,closed,0,,,2,2022-08-28T19:03:08Z,2022-09-07T03:46:55Z,2022-09-07T03:46:55Z,NONE,,"Hello! I have successfully put this package onto Conda Forge, and I am extending the invitation for the owner/maintainers of this package to be maintainers on Conda Forge as well. Let me know if you are interested! Thanks.
https://github.com/conda-forge/sqlite-utils-feedstock",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/477/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
958516743,MDU6SXNzdWU5NTg1MTY3NDM=,306,Configure sphinx.ext.extlinks for issues,9599,closed,0,,,2,2021-08-02T21:19:19Z,2021-08-02T21:39:34Z,2021-08-02T21:29:22Z,OWNER,,As seen in Datasette: https://github.com/simonw/datasette/issues/1227,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/306/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1326349129,I_kwDOCGYnMM5PDntJ,461,Consider including animated SVG console demos,9599,open,0,,,1,2022-08-02T20:10:04Z,2022-08-02T20:12:14Z,,OWNER,,"I recorded this one using https://github.com/nbedos/termtosvg - with `pipx install termtosvg` and then `termtosvg` - execute demo - `exit` to save.

![sqlite-utils-insert-json](https://user-images.githubusercontent.com/9599/182464206-f4976af4-eda8-4020-8257-4ada1867fb44.svg)

```json
[
  {
    ""id"": 1,
    ""name"": ""Catimus""
  },
  {
    ""id"": 2,
    ""name"": ""Feliopia""
  }
]
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/461/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1171599874,I_kwDOCGYnMM5F1TIC,415,Convert with `--multi` and `--dry-run` flag does not work,3976183,closed,0,,,2,2022-03-16T21:59:46Z,2022-03-21T04:18:24Z,2022-03-21T04:18:24Z,NONE,,"It's not possible to combine `--multi` and `--dry-run` flag in the `convert` command.

Let's first create a simple database from JSON string

```console
$ echo '[{""foo"": ""abc""}]' | sqlite-utils insert demo.db demo -
$ sqlite-utils query demo.db ""SELECT * FROM demo""             
[{""foo"": ""abc""}]
```

and then try to convert the ""foo"" column with a static value ""bar"" (see docs [Converting a column into multiple columns](https://sqlite-utils.datasette.io/en/stable/cli.html#converting-a-column-into-multiple-columns))

```console
$ sqlite-utils convert demo.db demo foo '{""foo"": ""bar""}' --multi --dry-run
Traceback (most recent call last):
  File ""/home/dotcs/anaconda3/envs/tools/bin/sqlite-utils"", line 8, in <module>
    sys.exit(cli())
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1128, in __call__
    return self.main(*args, **kwargs)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1053, in main
    rv = self.invoke(ctx)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 754, in invoke
    return __callback(*args, **kwargs)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/sqlite_utils/cli.py"", line 2686, in convert
    for row in db.conn.execute(sql, where_args).fetchall():
sqlite3.OperationalError: user-defined function raised exception
```

But without the `--dry-run` flag it does work as expected:

```console
$ sqlite-utils convert demo.db demo foo '{""foo"": ""bar""}' --multi
$ sqlite-utils query demo.db ""SELECT * FROM demo""               
[{""foo"": ""bar""}]
```

```console
$ sqlite-utils --version
sqlite-utils, version 3.25.1
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/415/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
562911863,MDU6SXNzdWU1NjI5MTE4NjM=,85,Create index doesn't work for columns containing spaces,9599,closed,0,,,1,2020-02-11T00:34:46Z,2020-02-11T05:13:20Z,2020-02-11T05:13:20Z,OWNER,,,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/85/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1128466114,I_kwDOCGYnMM5DQwbC,406,Creating tables with custom datatypes,82988,open,0,,,5,2022-02-09T12:16:31Z,2022-09-15T18:13:50Z,,NONE,,"Via https://stackoverflow.com/a/18622264/454773 I note the ability to register custom handlers for novel datatypes that can map into and out of things like sqlite `BLOB`s.

From a quick look and a quick play, I didn't spot a way to do this in `sqlite_utils`?

For example:

```python
# Via https://stackoverflow.com/a/18622264/454773
import sqlite3
import numpy as np
import io

def adapt_array(arr):
    """"""
    http://stackoverflow.com/a/31312102/190597 (SoulNibbler)
    """"""
    out = io.BytesIO()
    np.save(out, arr)
    out.seek(0)
    return sqlite3.Binary(out.read())

def convert_array(text):
    out = io.BytesIO(text)
    out.seek(0)
    return np.load(out)


# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)

# Converts TEXT to np.array when selecting
sqlite3.register_converter(""array"", convert_array)
```

```python
from sqlite_utils import Database
db = Database('test.db')

# Reset the database connection to used the parsed datatype
# sqlite_utils doesn't seem to support eg:
#  Database('test.db', detect_types=sqlite3.PARSE_DECLTYPES)
db.conn = sqlite3.connect('test.db', detect_types=sqlite3.PARSE_DECLTYPES)

# Create a table the old fashioned way
# but using the new custom data type
vector_table_create = """"""
CREATE TABLE dummy 
    (title TEXT, vector array );
""""""

cur = db.conn.cursor()
cur.execute(vector_table_create)


# sqlite_utils doesn't appear to support custom types (yet?!)
# The following errors on the ""array"" datatype
""""""
db[""dummy""].create({
    ""title"": str,
    ""vector"": ""array"",
})
""""""
```

We can then add / retrieve records from the database where the datatype of the `vector` field is a custom registered `array` type (which is to say, a `numpy` array):

```python
import numpy as np

db[""dummy""].insert({'title':""test1"", 'vector':np.array([1,2,3])})

for row in db.query(""SELECT * FROM dummy""):
    print(row['title'], row['vector'], type(row['vector']))

""""""
test1 [1 2 3] <class 'numpy.ndarray'>
""""""
```

It would be handy to be able to do this idiomatically in `sqlite_utils`.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/406/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1550536442,I_kwDOCGYnMM5ca076,521,Custom JSON encoder,31504,open,0,,,0,2023-01-20T09:19:40Z,2023-01-20T09:19:40Z,,NONE,,"It would be nice if we could specify a custom encoder (and decoder) for types that will need extra deserialisation – e.g., sets, enums or sparse matrices – or even project-specific types",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/521/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
413740684,MDU6SXNzdWU0MTM3NDA2ODQ=,11,Detect numpy types when creating tables,9599,closed,0,,,2,2019-02-23T21:09:35Z,2019-02-24T04:02:20Z,2019-02-24T04:02:20Z,OWNER,,Inspired by #8,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/11/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1166587040,I_kwDOCGYnMM5FiLSg,413,Display autodoc type information more legibly,9599,closed,0,,,5,2022-03-11T15:58:20Z,2022-03-11T18:07:10Z,2022-03-11T18:07:10Z,OWNER,,"https://sqlite-utils.datasette.io/en/3.25/reference.html#sqlite_utils.db.Table.insert looks like this at the moment:

<img width=""703"" alt=""image"" src=""https://user-images.githubusercontent.com/9599/157902622-368935a8-93f2-42e9-98ad-94a45c818e80.png"">
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/413/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1126692066,I_kwDOCGYnMM5DJ_Ti,403,Document how to add a primary key to a rowid table using `sqlite-utils transform --pk`,536941,closed,0,,,4,2022-02-08T01:39:40Z,2022-02-09T04:22:43Z,2022-02-08T19:33:59Z,CONTRIBUTOR,,"*Original title: Add option for adding a new, serial, primary key*

sometimes we have tables that don't have primary keys, but ought to have them. we *can* use rowid for that, but it would often be nicer to have an explicit primary key. using the current value of rowid would be fine.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/403/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1718612569,I_kwDOCGYnMM5mb_JZ,552,Document how to setup shell auto-completion,9599,closed,0,,,1,2023-05-21T19:20:41Z,2023-05-21T21:05:16Z,2023-05-21T21:03:40Z,OWNER,,"https://click.palletsprojects.com/en/8.1.x/shell-completion/

This works for `zsh`:

    eval ""$(_SQLITE_UTILS_COMPLETE=zsh_source sqlite-utils)""

This will probably work for `bash`:

    eval ""$(_SQLITE_UTILS_COMPLETE=bash_source sqlite-utils)""

Need to add this to the installation docs here: https://sqlite-utils.datasette.io/en/stable/installation.html - along with the pattern for adding that to `.zshrc` or whatever.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/552/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1386593843,I_kwDOCGYnMM5Spb4z,494,Document how to use Just,9599,closed,0,,,2,2022-09-26T19:25:12Z,2022-09-26T19:32:36Z,2022-09-26T19:26:39Z,OWNER,,"I'm using `just` a lot now, based on this file - I should add that to https://sqlite-utils.datasette.io/en/latest/contributing.html

https://github.com/simonw/sqlite-utils/blob/afbd2b2cba45cccb305c3d4638d18db4dd3d4bbd/Justfile#L1-L24",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/494/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1224112817,I_kwDOCGYnMM5I9nqx,430,Document how to use `PRAGMA temp_store` to avoid errors when running VACUUM against huge databases,9308268,open,0,,,2,2022-05-03T13:33:58Z,2022-06-14T23:26:37Z,,NONE,,"I'm trying to figure out a way to get the `table.extract()` method to complete successfully -- I'm not sure if maybe the cause (and a possible solution) of this on Ubuntu Server 22.04 is to adjust some of the PRAGMA values within SQLite itself ... on another Linux system (PopOS), using this method on this same database appears to work just fine.

Here's the bit that's causing the error, and the resulting error output:
```python
# combine these columns into 1 table ""bib_properties"" :
# best_title
# bib_level_code
# mat_type
# material_code
# best_author
db[""circ_trans""].extract(
    [""best_title"", ""bib_level_code"", ""mat_type"", ""material_code"", ""best_author""], 
    table=""bib_properties"", 
    fk_column=""bib_properties_id""
)

db[""circ_trans""].extract(
    [""call_number""], 
    table=""call_number"", 
    fk_column=""call_number_id"",
    rename={""call_number"": ""value""}
)
```

```python
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
Input In [17], in <cell line: 7>()
      1 # combine these columns into 1 table ""bib_properties"" :
      2 # best_title
      3 # bib_level_code
      4 # mat_type
      5 # material_code
      6 # best_author
----> 7 db[""circ_trans""].extract(
      8     [""best_title"", ""bib_level_code"", ""mat_type"", ""material_code"", ""best_author""], 
      9     table=""bib_properties"", 
     10     fk_column=""bib_properties_id""
     11 )
     13 db[""circ_trans""].extract(
     14     [""call_number""], 
     15     table=""call_number"", 
     16     fk_column=""call_number_id"",
     17     rename={""call_number"": ""value""}
     18 )

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1764, in Table.extract(self, columns, table, fk_column, rename)
   1761         column_order.append(c.name)
   1763 # Drop the unnecessary columns and rename lookup column
-> 1764 self.transform(
   1765     drop=set(columns),
   1766     rename={magic_lookup_column: fk_column},
   1767     column_order=column_order,
   1768 )
   1770 # And add the foreign key constraint
   1771 self.add_foreign_key(fk_column, table, ""id"")

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1526, in Table.transform(self, types, rename, drop, pk, not_null, defaults, drop_foreign_keys, column_order)
   1524 with self.db.conn:
   1525     for sql in sqls:
-> 1526         self.db.execute(sql)
   1527     # Run the foreign_key_check before we commit
   1528     if pragma_foreign_keys_was_on:

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:465, in Database.execute(self, sql, parameters)
    463     return self.conn.execute(sql, parameters)
    464 else:
--> 465     return self.conn.execute(sql)

OperationalError: database or disk is full
```

This database is about 17G in total size, so I'm assuming the error is coming from the vacuum step ... which is presumably trying to do its temporary storage in a location that doesn't have sufficient room. Disk space is more than ample on the host in question (1.8T is free in the directory where the SQLite db resides). The `/tmp` directory, however, is on a smaller disk associated with the OS.

I'm trying to work out whether it's `PRAGMA temp_store` or perhaps `temp_store_directory` that I'm after ... something that would make SQLite use the same local directory as the database file (maybe this is a property of the version of SQLite on the system?)

```python
# SET the temp file store to be a file ...
print(db.execute('PRAGMA temp_store').fetchall())
print(db.execute('PRAGMA temp_store=FILE').fetchall())

print(db.execute('PRAGMA temp_store').fetchall())

# the users home directory ...
print(db.execute(""PRAGMA temp_store_directory='/home/plchuser/'"").fetchall())
print(db.execute(""PRAGMA sqlite3_temp_directory='/home/plchuser/'"").fetchall())

print(db.execute(""PRAGMA temp_store_directory"").fetchall())
print(db.execute(""PRAGMA sqlite3_temp_directory"").fetchall())
```
```text
[(1,)]
[]
[(1,)]
[]
[]
[('/home/plchuser/',)]
[]
```
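
One thing that might be worth trying (my assumption, based on the temp-file docs linked below): point the `SQLITE_TMPDIR` environment variable at a roomier filesystem before the connection is opened, since SQLite consults it on Linux when choosing where to put temporary files.

```python
import os
import sqlite_utils

# SQLITE_TMPDIR is read when SQLite creates a temp file, so set it up front;
# both paths below are placeholders.
os.environ['SQLITE_TMPDIR'] = '/home/plchuser/sqlite-tmp'

db = sqlite_utils.Database('circ_trans.db')
db['circ_trans'].extract(
    ['best_title', 'bib_level_code', 'mat_type', 'material_code', 'best_author'],
    table='bib_properties',
    fk_column='bib_properties_id',
)
```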

Here's the docs on the Temporary File Storage Locations 
https://www.sqlite.org/tempfiles.html",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/430/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,