id,node_id,number,state,locked,title,user,body,created_at,updated_at,closed_at,merged_at,merge_commit_sha,assignee,milestone,draft,head,base,author_association,repo,url,merged_by,auto_merge
560204306,MDExOlB1bGxSZXF1ZXN0NTYwMjA0MzA2,224,closed,0,Add fts offset docs.,37962604,"The limit can be passed as a string to the query builder to apply an offset as well. I have tested it using the shorthand `limit=f""15, 30""`; the standard syntax should work too.",2021-01-22T20:50:58Z,2021-02-14T19:31:06Z,2021-02-14T19:31:06Z,,4d6ff040770119fb2c1bcbc97678d9deca752f2f,,,0,341f50d2d95ba1d69ad64ba8c0ec0ffa9a68d063,36dc7e3909a44878681c266b90f9be76ac749f2d,NONE,140912432,https://github.com/simonw/sqlite-utils/pull/224,,
564215011,MDExOlB1bGxSZXF1ZXN0NTY0MjE1MDEx,225,closed,0,fix for problem in Table.insert_all on search for columns per chunk of rows,261237,"Hi,

I ran into a problem when trying to create a database from my Apple Healthkit data using [healthkit-to-sqlite](https://github.com/dogsheep/healthkit-to-sqlite). The program crashed because of an invalid insert statement that was generated for table `rDistanceCycling`. The actual problem turned out to be in [sqlite-utils](https://github.com/simonw/sqlite-utils): `Table.insert_all` processes the data to be inserted in chunks of rows, checks for every chunk which columns are used, and collects all column names in the variable `all_columns`. That collection of columns is done using a nested list comprehension that is not completely correct: because `all_columns` is not updated while the comprehension runs, a column that is missing from the first record but present in several later records of a chunk is added once per record, producing duplicate column names in the generated insert statement.

I'm using a Windows machine and had to make a few adjustments to the tests in order to be able to run them, because they had a POSIX dependency.

Thanks, kind regards,
Frans

```
# this is a (condensed) chunk of data from my Apple healthkit export that caused the problem.
# the 3 last items in the chunk have additional keys: metadata_HKMetadataKeySyncVersion and metadata_HKMetadataKeySyncIdentifier
chunk = [
    {'sourceName': 'Apple\xa0Watch van Frans', 'sourceVersion': '7.0.1',
     'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:7.0.1>',
     'unit': 'km', 'creationDate': '2020-10-10 12:29:09 +0100', 'startDate': '2020-10-10 12:29:06 +0100',
     'endDate': '2020-10-10 12:29:07 +0100', 'value': '0.00518016'},
    {'sourceName': 'Apple\xa0Watch van Frans', 'sourceVersion': '7.0.1',
     'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:7.0.1>',
     'unit': 'km', 'creationDate': '2020-10-10 12:29:10 +0100', 'startDate': '2020-10-10 12:29:07 +0100',
     'endDate': '2020-10-10 12:29:08 +0100', 'value': '0.00544049'},
    {'sourceName': 'Apple\xa0Watch van Frans', 'sourceVersion': '6.2.6',
     'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>',
     'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:40:50 +0100',
     'endDate': '2020-07-15 16:42:49 +0100', 'value': '0.952092',
     'metadata_HKMetadataKeySyncVersion': '1',
     'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520450.99823:616520569.99360:119'},
    {'sourceName': 'Apple\xa0Watch van Frans', 'sourceVersion': '6.2.6',
     'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>',
     'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:42:49 +0100',
     'endDate': '2020-07-15 16:44:51 +0100', 'value': '0.848983',
     'metadata_HKMetadataKeySyncVersion': '1',
     'metadata_HKMetadataKeySyncIdentifier':
         '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520569.99360:616520691.98826:119'},
    {'sourceName': 'Apple\xa0Watch van Frans', 'sourceVersion': '6.2.6',
     'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>',
     'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:44:51 +0100',
     'endDate': '2020-07-15 16:46:50 +0100', 'value': '0.834403',
     'metadata_HKMetadataKeySyncVersion': '1',
     'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520691.98826:616520810.98305:119'}]


def all_columns_old():
    all_columns = [col for col in chunk[0]]
    all_columns += [column for record in chunk for column in record if column not in all_columns]
    return all_columns


def all_columns_new():
    all_columns = [col for col in chunk[0]]
    for record in chunk:
        all_columns += [column for column in record if column not in all_columns]
    return all_columns


if __name__ == '__main__':
    from pprint import pprint
    print('problem: ')
    pprint(all_columns_old())
    print('\nfix: ')
    pprint(all_columns_new())
```
",2021-01-29T20:16:07Z,2021-02-14T21:04:13Z,2021-02-14T21:04:13Z,,1cba965a1ddc2bd77db3bc3912aa7e8467e2fa2f,,,0,929ea7551135df0cc2ac9d67f4fbbecf701a11f6,36dc7e3909a44878681c266b90f9be76ac749f2d,NONE,140912432,https://github.com/simonw/sqlite-utils/pull/225,,
737050557,PR_kwDOCGYnMM4r7n-9,327,closed,0,Extract expand: Support JSON Arrays,101753,"Hi,

I needed to extract data in JSON Arrays to normalize data imports. I've quickly hacked the following together based on #241, which refers to #239 where you, @simonw, wrote:

> Could this handle lists of objects too? That would be pretty amazing - if the column has a [{...}, {...}] list in it could turn that into a many-to-many.

The way this works in my implementation is that many-to-many relationships are created for anything that maps to a dictionary in a list, and many-to-one relations for everything else (assumed to be scalar values). Not sure what the best approach here would be. Are many-to-one relationships at all useful here?

What do you think about this approach? I could try to add it to the CLI interface and documentation if wanted. Thanks for this awesome piece of software in any case! :sun_with_face:",2021-09-19T10:34:30Z,2022-12-29T09:05:36Z,2022-12-29T09:05:36Z,,f0105cde23452cb4c8a15fc6096154b15d9b7c5a,,,0,2840c697aa9817462d864ed5f8a7696d749fe039,8d641ab08ac449081e96f3e25bd6c0226870948a,NONE,140912432,https://github.com/simonw/sqlite-utils/pull/327,,
768796296,PR_kwDOCGYnMM4t0uaI,333,closed,0,Add functionality to read Parquet files.,2118708,"I needed this for a project of mine, and I thought it'd be useful to have it in sqlite-utils (it's also mentioned in #248). The current implementation works (data is read & data types are inferred correctly). I've added a single straightforward test case, but @simonw please let me know if there are any non-obvious flags/combinations I should test too.",2021-10-28T23:43:19Z,2021-11-25T19:47:35Z,2021-11-25T19:47:35Z,,eda2b1f8d2670c6ca8512e3e7c0150866bd0bdc6,,,0,50ec2e49dee3b09a48a7aef55eceaa3f752a52e7,fda4dad23a0494890267fbe8baf179e2b56ee914,NONE,140912432,https://github.com/simonw/sqlite-utils/pull/333,,
774610166,PR_kwDOCGYnMM4uK5z2,337,closed,0,Default values for `--attach` and `--param` options,771193,"It seems that `click` 8.x uses `None` as the default value for `multiple=True` options. This change makes the code forward-compatible with `click` 8.x.
See this build failure for more info: https://hydra.nixos.org/build/156926608",2021-11-05T21:57:53Z,2021-11-05T22:33:03Z,2021-11-05T22:33:02Z,,eb8bf28da1794638a5693043cd5268f506a674d3,,,0,095fc64c5399d75d44d304571a21293d06d817f0,fda4dad23a0494890267fbe8baf179e2b56ee914,NONE,140912432,https://github.com/simonw/sqlite-utils/pull/337,,
1136499802,PR_kwDOCGYnMM5DvZxa,515,closed,0,"upsert new rows with constraints, fixes #514",193185,"This fixes #514 by making the initial insert for upserts include all columns, so that new rows can be added to tables with non-pkey columns that have constraints.

(aside: I'm not a python programmer. `pip`? `pipenv`? `venv`? These are mystical incantations to me. The process to set up this repo for local development and testing was _so easy_. Thank you for the excellent contributing documentation!)

----
:books: Documentation preview :books:: https://sqlite-utils--515.org.readthedocs.build/en/515/
",2022-11-26T16:15:21Z,2023-05-08T21:27:11Z,2023-05-08T21:27:10Z,,c3713ef6944cbeacf36e462712cecac2176db692,,,0,32f8173a8fe830c224e39a0a514cd12e78de7028,965ca0d5f5bffe06cc02cd7741344d1ddddf9d56,NONE,140912432,https://github.com/simonw/sqlite-utils/pull/515,,