id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason
1795219865,I_kwDOCGYnMM5rAOGZ,566,`--no-headers` doesn't work on most formats,33625,open,0,,,2,2023-07-09T03:43:36Z,2023-07-09T04:13:35Z,,NONE,,"Version 3.33

```
sqlite-utils query library.db 'select asin from audible' --fmt plain --no-headers | head -3
asin
0062804006
0062891421
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/566/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1620254998,I_kwDOCGYnMM5gkyEW,532,Show more information when JSON can't be imported with sqlite-utils insert,83080728,closed,0,,,2,2023-03-12T06:41:44Z,2023-05-08T20:32:16Z,2023-05-08T20:32:02Z,NONE,,"I am currently trying to import the [JSON export of my data from Discord](https://support.discord.com/hc/en-us/articles/360004027692-Requesting-a-Copy-of-your-Data), specifically `activity/reporting/events-*.json`

```
sqlite-utils.exe insert test.db reporting events-2023-00000-of-00001.json
  [###################################-]   99%  00:00:00
Error: Invalid JSON - use --csv for CSV or --tsv for TSV files
```

Please show more information as to *why* this is invalid, if possible.

I am using version 3.30 with Python 3.10 on Windows 11.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/532/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1367835380,I_kwDOCGYnMM5Rh4L0,487,Specify foreign key against compound key in other table,540968,closed,0,,,2,2022-09-09T13:32:09Z,2022-09-11T04:00:44Z,2022-09-11T04:00:44Z,NONE,,"When inserting rows via the library, is it possible to specify a foreign key to a compound primary key?

For example, suppose I create a table:
```
db = Database('events.db')
db['events'].insert_all([
    {'venue': 'Times Square', 'date': '2022-12-31', 'title': 'Rockin New Year Eve'},
    {'venue': 'Wembley Stadium', 'date': '2022-06-05', 'title': 'FA Cup'},
    {'venue': 'Times Square', 'date': '2021-12-31', 'title': 'Rockin New Year Eve'},
], pk=('date', 'venue'))
```

And I want to add related data in another table:
```
act = {'name': 'Rick Astley', 'venue': 'Times Square', 'date': '2021-12-31' }
db['performers'].insert(act, pk=<???>)
```

Is it possible to specify a value for `pk` that will point to the compound primary key in `events`?

SQLite does support it:
https://www.sqlite.org/foreignkeys.html#fk_composite",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/487/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1353441389,I_kwDOCGYnMM5Qq-Bt,477,Conda Forge,49702524,closed,0,,,2,2022-08-28T19:03:08Z,2022-09-07T03:46:55Z,2022-09-07T03:46:55Z,NONE,,"Hello! I have successfully put this package onto Conda Forge, and I have extended the invitation for the owner/maintainers of this package to be maintainers on Conda Forge as well. Let me know if you are interested! Thanks.
https://github.com/conda-forge/sqlite-utils-feedstock",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/477/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1224112817,I_kwDOCGYnMM5I9nqx,430,Document how to use `PRAGMA temp_store` to avoid errors when running VACUUM against huge databases,9308268,open,0,,,2,2022-05-03T13:33:58Z,2022-06-14T23:26:37Z,,NONE,,"I'm trying to figure out a way to get the `table.extract()` method to complete successfully -- I'm not sure if maybe the cause of this (and a possible solution) on Ubuntu Server 22.04 is to adjust some of the PRAGMA values within SQLite itself ... on another Linux system (PopOS), using this method on the same database appears to work just fine.

Here's the bit that's causing the error, and the resulting error output:
```python
# combine these columns into 1 table ""bib_properties"" :
# best_title
# bib_level_code
# mat_type
# material_code
# best_author
db[""circ_trans""].extract(
    [""best_title"", ""bib_level_code"", ""mat_type"", ""material_code"", ""best_author""], 
    table=""bib_properties"", 
    fk_column=""bib_properties_id""
)

db[""circ_trans""].extract(
    [""call_number""], 
    table=""call_number"", 
    fk_column=""call_number_id"",
    rename={""call_number"": ""value""}
)
```

```python
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
Input In [17], in <cell line: 7>()
      1 # combine these columns into 1 table ""bib_properties"" :
      2 # best_title
      3 # bib_level_code
      4 # mat_type
      5 # material_code
      6 # best_author
----> 7 db[""circ_trans""].extract(
      8     [""best_title"", ""bib_level_code"", ""mat_type"", ""material_code"", ""best_author""], 
      9     table=""bib_properties"", 
     10     fk_column=""bib_properties_id""
     11 )
     13 db[""circ_trans""].extract(
     14     [""call_number""], 
     15     table=""call_number"", 
     16     fk_column=""call_number_id"",
     17     rename={""call_number"": ""value""}
     18 )

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1764, in Table.extract(self, columns, table, fk_column, rename)
   1761         column_order.append(c.name)
   1763 # Drop the unnecessary columns and rename lookup column
-> 1764 self.transform(
   1765     drop=set(columns),
   1766     rename={magic_lookup_column: fk_column},
   1767     column_order=column_order,
   1768 )
   1770 # And add the foreign key constraint
   1771 self.add_foreign_key(fk_column, table, ""id"")

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1526, in Table.transform(self, types, rename, drop, pk, not_null, defaults, drop_foreign_keys, column_order)
   1524 with self.db.conn:
   1525     for sql in sqls:
-> 1526         self.db.execute(sql)
   1527     # Run the foreign_key_check before we commit
   1528     if pragma_foreign_keys_was_on:

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:465, in Database.execute(self, sql, parameters)
    463     return self.conn.execute(sql, parameters)
    464 else:
--> 465     return self.conn.execute(sql)

OperationalError: database or disk is full
```

This database is about 17G in total size, so I'm assuming the error is coming from the vacuum ... where it's presumably trying to do the temp storage in a location that doesn't have sufficient room. The disk space is more than ample on the host in question (1.8T is free in the directory where the sqlite db resides). The `/tmp` directory, however, is limited to a smaller disk associated with the OS.

I'm trying to think if there's a way to set the `PRAGMA temp_store` -- or maybe it's `temp_store_directory` that I'm after -- to use the same local directory where the file is located (maybe this is a property of the version of sqlite on the system?)

```python
# SET the temp file store to be a file ...
print(db.execute('PRAGMA temp_store').fetchall())
print(db.execute('PRAGMA temp_store=FILE').fetchall())

print(db.execute('PRAGMA temp_store').fetchall())

# the users home directory ...
print(db.execute(""PRAGMA temp_store_directory='/home/plchuser/'"").fetchall())
print(db.execute(""PRAGMA sqlite3_temp_directory='/home/plchuser/'"").fetchall())

print(db.execute(""PRAGMA temp_store_directory"").fetchall())
print(db.execute(""PRAGMA sqlite3_temp_directory"").fetchall())
```
```text
[(1,)]
[]
[(1,)]
[]
[]
[('/home/plchuser/',)]
[]
```

Here's the docs on the Temporary File Storage Locations 
https://www.sqlite.org/tempfiles.html",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/430/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1250161887,I_kwDOCGYnMM5Kg_Tf,438,illegal UTF-16 surrogate,4068,closed,0,,,2,2022-05-26T22:49:52Z,2022-05-27T08:21:53Z,2022-05-27T08:21:53Z,NONE,,"I am trying to insert `https://artsdatabanken.no/Fab2018/api/export/csv` into a SQLite database, but I get an error when using `sqlite-utils`:

```
sqlite-utils insert --csv --delimiter "";"" --encoding=""utf-16-le"" --pk ""Id"" csv fremmedart test.db
  [------------------------------------]    0%
Error: 'utf-16-le' codec can't decode bytes in position 98-99: illegal UTF-16 surrogate

The input you provided uses a character encoding other than utf-8.

You can fix this by passing the --encoding= option with the encoding of the file.

If you do not know the encoding, running 'file filename.csv' may tell you.

It's often worth trying: --encoding=latin-1
```

I tried to convert the file using `iconv -f ""utf-16le"" -t ""utf-8""`, but I still get a similar error (slightly different position):

```
sqlite-utils insert --csv --delimiter "";"" --encoding=utf-8 --pk ""Id"" csv_utf8 fremmedart test.db
  [------------------------------------]    0%
Error: 'utf-8' codec can't decode byte 0xd9 in position 99: invalid continuation byte

The input you provided uses a character encoding other than utf-8.

You can fix this by passing the --encoding= option with the encoding of the file.

If you do not know the encoding, running 'file filename.csv' may tell you.

It's often worth trying: --encoding=latin-1
```

I have no issues reading such a file using this Python code:
```python
content = open('csv', encoding='utf-16-le').read()
```

`in2csv` works too.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/438/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1171599874,I_kwDOCGYnMM5F1TIC,415,Convert with `--multi` and `--dry-run` flag does not work,3976183,closed,0,,,2,2022-03-16T21:59:46Z,2022-03-21T04:18:24Z,2022-03-21T04:18:24Z,NONE,,"It's not possible to combine the `--multi` and `--dry-run` flags in the `convert` command.

Let's first create a simple database from JSON string

```console
$ echo '[{""foo"": ""abc""}]' | sqlite-utils insert demo.db demo -
$ sqlite-utils query demo.db ""SELECT * FROM demo""             
[{""foo"": ""abc""}]
```

and then try to convert the ""foo"" column with a static value ""bar"" (see docs [Converting a column into multiple columns](https://sqlite-utils.datasette.io/en/stable/cli.html#converting-a-column-into-multiple-columns))

```console
$ sqlite-utils convert demo.db demo foo '{""foo"": ""bar""}' --multi --dry-run
Traceback (most recent call last):
  File ""/home/dotcs/anaconda3/envs/tools/bin/sqlite-utils"", line 8, in <module>
    sys.exit(cli())
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1128, in __call__
    return self.main(*args, **kwargs)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1053, in main
    rv = self.invoke(ctx)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/click/core.py"", line 754, in invoke
    return __callback(*args, **kwargs)
  File ""/home/dotcs/anaconda3/envs/tools/lib/python3.9/site-packages/sqlite_utils/cli.py"", line 2686, in convert
    for row in db.conn.execute(sql, where_args).fetchall():
sqlite3.OperationalError: user-defined function raised exception
```

But without the `--dry-run` flag it does work as expected:

```console
$ sqlite-utils convert demo.db demo foo '{""foo"": ""bar""}' --multi
$ sqlite-utils query demo.db ""SELECT * FROM demo""               
[{""foo"": ""bar""}]
```

```console
$ sqlite-utils --version
sqlite-utils, version 3.25.1
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/415/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
1026794056,I_kwDOCGYnMM49M6JI,331,Mypy error: found module but no type hints or library stubs,53032010,closed,0,,,2,2021-10-14T20:29:50Z,2021-11-14T23:21:08Z,2021-11-14T23:21:08Z,NONE,,"```
Python 3.9.5
mypy 0.910
sqlite-utils 3.17.1
```

While using sqlite-utils as a library, when I use mypy for static type checking, it throws an error:

```
mypy .
src/etl.py:5: error: Skipping analyzing ""sqlite_utils"": found module but no type hints or library stubs
    import sqlite_utils
    ^
src/etl.py:5: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
test/test_etl.py:4: error: Skipping analyzing ""sqlite_utils"": found module but no type hints or library stubs
    import sqlite_utils
    ^
Found 2 errors in 2 files (checked 7 source files)
```


When I add a `py.typed` file to the sqlite-utils package to mark it as PEP 561 compatible, the error goes away.

```
al@nbal ..b/python3.9/site-packages/sqlite_utils (git)-[main] % la
total 200
drwx------   3 al al   4096 Oct 14 22:00 .
drwx------ 117 al al   4096 Oct 12 21:12 ..
-rw-------   1 al al  64409 Oct 12 21:11 cli.py
-rw-------   1 al al 109092 Oct 12 21:11 db.py
-rw-------   1 al al      0 Oct 14 22:00 py.typed
-rw-------   1 al al    684 Oct 12 21:11 recipes.py
-rw-------   1 al al   7988 Oct 12 21:11 utils.py
-rw-------   1 al al    113 Oct 12 21:11 __init__.py
```

I would like to suggest adding a `py.typed` file to the repository.

See also the mypy docs on creating PEP 561 compatible packages:
https://mypy.readthedocs.io/en/stable/installed_packages.html#creating-pep-561-compatible-packages

",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/331/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
951581763,MDU6SXNzdWU5NTE1ODE3NjM=,298,Read lines with JSON object,2172260,closed,0,,,2,2021-07-23T13:28:52Z,2021-08-03T06:50:47Z,2021-08-02T21:55:16Z,NONE,,"I found this posted on HN a while ago and love it -- thank you!

As a minor improvement, it would be great to have the ability to parse a file with line-separated JSON objects. Currently the parser obviously requires an array wrapping all these objects.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/298/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
847423559,MDU6SXNzdWU4NDc0MjM1NTk=,253,fixtures.db example error in sql-utils blog post,192568,closed,0,,,2,2021-03-31T22:07:36Z,2021-05-19T03:31:48Z,2021-05-19T03:31:47Z,NONE,,"En route to trying to understand column order transform documentation, I tried the instructions here:
https://simonwillison.net/2020/Sep/23/sqlite-advanced-alter-table/
I get a malformed database schema syntax error.

```
$ wget https://latest.datasette.io/fixtures.db
--2021-03-31 18:00:23--  https://latest.datasette.io/fixtures.db
Resolving latest.datasette.io (latest.datasette.io)... 2607:f8b0:4004:801::2013, 142.250.73.211
Connecting to latest.datasette.io (latest.datasette.io)|2607:f8b0:4004:801::2013|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: ‘fixtures.db’

fixtures.db                                                             [ <=>                                                                                                                                                              ] 260.00K  --.-KB/s    in 0.1s

2021-03-31 18:00:23 (2.41 MB/s) - ‘fixtures.db’ saved [266240]

$ sqlite3 fixtures.db '.schema facetable'
Error: malformed database schema (generated_columns) - near ""AS"": syntax error

$ sqlite3 fixtures.db
SQLite version 3.28.0 2019-04-15 14:49:49
Enter "".help"" for usage hints.
sqlite> .schema
Error: malformed database schema (generated_columns) - near ""AS"": syntax error
```
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/253/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
861622839,MDU6SXNzdWU4NjE2MjI4Mzk=,256,inserting with --nl errors with: sqlite3.OperationalError: table <table> has no column named <column>,279769,closed,0,,,2,2021-04-19T18:01:03Z,2021-05-19T03:26:54Z,2021-05-19T03:26:54Z,NONE,,"I have a `jsonl` file, it is 10,000 lines long.

Inserting from the cli with `sqlite-utils insert db table file --nl --batch-size 10000` fails with this missing column error, even though I'm telling it to use the whole file in the first batch.

This seems similar to #18 and #139, but maybe it's unique to `--nl`?",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/256/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
836963850,MDU6SXNzdWU4MzY5NjM4NTA=,249,Full text search possibly broken?,36287,closed,0,,,2,2021-03-21T02:03:44Z,2021-03-21T02:43:32Z,2021-03-21T02:43:32Z,NONE,,"I'm not quite sure if this is an issue with sqlite-utils or datasette.

**Background**
I was previously using sqlite-utils version < 3.6. I have a bunch of csv files that have some data scraped from a website.

```
sqlite-utils create-table mydb.db post \
    posted_date text \
    url text \
    title text \
    raw_text text \
    --not-null posted_date \
    --not-null url \
    --pk=url
```
FTS is enabled via
`sqlite-utils enable-fts ./mydb.db post title raw_text`

Data is loaded to the table via
`sqlite-utils insert ./mydb.db post ${filename} --csv`

Note that the data contains text in my language Tamil.

Loading happens just fine.
datasette serves the db file just fine. It recognizes FTS and shows the ""search"" box. However, none of the queries work. Whatever text I supply, it always returns 0 rows. I literally copy words from the row listing on the screen and paste them into the search box.

Interestingly, the only thing I can remember changing is switching to sqlite-utils 3.6. I had to do this because the prior version had an issue with column size.

I have attached one of the csv files that can be loaded to the table. Substitute ""${filename}"" with that file for the sqlite-utils insert command.
[posts_20200417-20201231.csv.zip](https://github.com/simonw/sqlite-utils/files/6176697/posts_20200417-20201231.csv.zip)

Interestingly, the FTS-based search from datasette worked just fine before this version upgrade. That is, the queries returned results. I will try to downgrade just to see if the theory is correct.

I appreciate any help here. Thanks.
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/249/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
797159961,MDExOlB1bGxSZXF1ZXN0NTY0MjE1MDEx,225,fix for problem in Table.insert_all on search for columns per chunk of rows,261237,closed,0,,,2,2021-01-29T20:16:07Z,2021-02-14T21:04:13Z,2021-02-14T21:04:13Z,NONE,simonw/sqlite-utils/pulls/225,"Hi,

I ran into a problem when trying to create a database from my Apple Healthkit data using [healthkit-to-sqlite](https://github.com/dogsheep/healthkit-to-sqlite). The program crashed because of an invalid insert statement that was generated for table `rDistanceCycling`. 

The actual problem turned out to be in [sqlite-utils](https://github.com/simonw/sqlite-utils). `Table.insert_all` processes the data to be inserted in chunks of rows and checks, for each chunk, which columns are used, collecting all column names in the variable `all_columns`. The collection of columns is done using a nested list comprehension that is not completely correct.

I'm using a Windows machine and had to make a few adjustments to the tests in order to be able to run them because they had a posix dependency.

Thanks, kind regards,

Frans

```
# this is a (condensed) chunk of data from my Apple healthkit export that caused the problem.
# the 3 last items in the chunk have additional keys: metadata_HKMetadataKeySyncVersion and metadata_HKMetadataKeySyncIdentifier

chunk = [{'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '7.0.1',
          'device': '<<HKDevice: 0x281cf6c70>, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:7.0.1>',
          'unit': 'km', 'creationDate': '2020-10-10 12:29:09 +0100', 'startDate': '2020-10-10 12:29:06 +0100',
          'endDate': '2020-10-10 12:29:07 +0100', 'value': '0.00518016'},
         {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '7.0.1',
          'device': '<<HKDevice: 0x281cf6c70>, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:7.0.1>',
          'unit': 'km', 'creationDate': '2020-10-10 12:29:10 +0100', 'startDate': '2020-10-10 12:29:07 +0100',
          'endDate': '2020-10-10 12:29:08 +0100', 'value': '0.00544049'},
         {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '6.2.6',
          'device': '<<HKDevice: 0x281cf83e0>, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>',
          'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:40:50 +0100',
          'endDate': '2020-07-15 16:42:49 +0100', 'value': '0.952092', 'metadata_HKMetadataKeySyncVersion': '1',
          'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520450.99823:616520569.99360:119'},
         {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '6.2.6',
          'device': '<<HKDevice: 0x281cf83e0>, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>',
          'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:42:49 +0100',
          'endDate': '2020-07-15 16:44:51 +0100', 'value': '0.848983', 'metadata_HKMetadataKeySyncVersion': '1',
          'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520569.99360:616520691.98826:119'},
         {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '6.2.6',
          'device': '<<HKDevice: 0x281cf83e0>, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>',
          'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:44:51 +0100',
          'endDate': '2020-07-15 16:46:50 +0100', 'value': '0.834403', 'metadata_HKMetadataKeySyncVersion': '1',
          'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520691.98826:616520810.98305:119'}]



def all_columns_old():
    all_columns = [col for col in chunk[0]]
    all_columns += [column for record in chunk
                           for column in record if column not in all_columns]
    return all_columns


def all_columns_new():
    all_columns = [col for col in chunk[0]]
    for record in chunk:
        all_columns += [column for column in record if column not in all_columns]
    return all_columns



if __name__ == '__main__':
    from pprint import pprint

    print('problem: ')
    pprint(all_columns_old())
    print('\nfix: ')
    pprint(all_columns_new())

```
",140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/225/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
792297010,MDExOlB1bGxSZXF1ZXN0NTYwMjA0MzA2,224,Add fts offset docs.,37962604,closed,0,,,2,2021-01-22T20:50:58Z,2021-02-14T19:31:06Z,2021-02-14T19:31:06Z,NONE,simonw/sqlite-utils/pulls/224,"The limit can be passed as a string to the query builder to have an offset. I have tested it using the shorthand `limit=f""15, 30""`, the standard syntax should work too.",140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/224/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
760960559,MDU6SXNzdWU3NjA5NjA1NTk=,205,"sqlite3.OperationalError: near ""("": syntax error",765871,closed,0,,,2,2020-12-10T06:44:40Z,2020-12-10T19:18:22Z,2020-12-10T07:24:23Z,NONE,,"The sqlite version is 3.22.0 2018-01-22 18:45:57 0c55d179733b46d8d0ba4d88e01a25e10677046ee3da1d5b1581e86726f2alt1
sqlite-utils, version 3.0

It fails here: https://github.com/kaihendry/aws-partners-datasette/runs/1528432635?check_suite_focus=true

I'm not sure where the problem is, since it works _fine locally_ on an Archlinux system running 3.34.0 2020-12-01 16:14:00 a26b6597e3ae272231b96f9982c3bcc17ddec2f2b6eb4df06a224b91089fed5b


https://github.com/kaihendry/aws-partners-datasette/blob/main/create-summary-view.sh

Maybe I need to bump up from ubuntu-latest to ?
",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/205/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
707407567,MDU6SXNzdWU3MDc0MDc1Njc=,171,Idea: transitive closure tables for tree structures,649467,closed,0,,,2,2020-09-23T14:17:33Z,2020-10-22T04:38:35Z,2020-10-22T04:07:14Z,NONE,,"I just read that sqlite has a transitive closure table extension using a virtual table in order to represent trees:

https://charlesleifer.com/blog/querying-tree-structures-in-sqlite-using-python-and-the-transitive-closure-extension/

Even without this extension, though, a util to build a transitive closure table would allow trees to be queried easily. Since it relies on self-referential foreign keys, the relationships might even be detectable automatically.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/171/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed
539204432,MDU6SXNzdWU1MzkyMDQ0MzI=,70,Implement ON DELETE and ON UPDATE actions for foreign keys,26292069,open,0,,,2,2019-12-17T17:19:10Z,2020-02-27T04:18:53Z,,NONE,,"Hi! I did not find any mention in the library documentation of ON DELETE and ON UPDATE actions for foreign keys. Are those expected to be implemented? If not, it would be a nice thing to include!",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/70/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
491219910,MDU6SXNzdWU0OTEyMTk5MTA=,61,importing CSV to SQLite as library,17739,closed,0,,,2,2019-09-09T17:12:40Z,2019-11-04T16:25:01Z,2019-11-04T16:25:01Z,NONE,,"CSV can be imported to SQLite when using the CLI, but I don't see documentation for doing it when using sqlite-utils as a library.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/61/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed