
issues


22 rows where comments = 7 and repo = 140912432 sorted by updated_at descending


type 2

  • issue 18
  • pull 4

state 2

  • closed 17
  • open 5

repo 1

  • sqlite-utils 22
id node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
1855894222 I_kwDOCGYnMM5unrLO 585 CLI equivalents to `transform(add_foreign_keys=)` simonw 9599 closed 0     7 2023-08-18T01:07:15Z 2023-08-18T01:51:16Z 2023-08-18T01:51:15Z OWNER  

The new options added in #577 deserve consideration in the CLI as well.

https://github.com/simonw/sqlite-utils/blob/d2bcdc00c6ecc01a6e8135e775ffdb87572b802b/sqlite_utils/db.py#L1706-L1708
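For context, a minimal sketch of the Python-level API this issue refers to, assuming sqlite-utils 3.35+ (where `transform(add_foreign_keys=)` is available), the illustrative `books`/`authors` tables, and the (column, other_table, other_column) tuple form shown in the linked code:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["authors"].insert({"id": 1, "name": "Ursula K. Le Guin"}, pk="id")
db["books"].insert({"id": 1, "title": "A Wizard of Earthsea", "author_id": 1}, pk="id")

# The Python API added in #577: add foreign keys as part of a single transform
db["books"].transform(add_foreign_keys=[("author_id", "authors", "id")])

print(db["books"].schema)
```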

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/585/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1816851056 I_kwDOCGYnMM5sSvJw 568 table.create(..., replace=True) simonw 9599 closed 0     7 2023-07-22T18:12:22Z 2023-07-22T19:25:35Z 2023-07-22T19:15:44Z OWNER  

Found myself using this pattern to quickly prototype a schema:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)

print(db["answers_chunks"].create({
    "id": int,
    "content": str,
    "embedding_type_id": int,
    "embedding": bytes,
    "embedding_content_md5": str,
    "source": str,
}, pk="id", transform=True).schema)
```

Using replace=True to drop and then recreate the table would be neat here, and would be consistent with other places that use replace=True.
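A sketch of how the proposed option might read; `replace=True` here is the suggestion being made, not necessarily an option in the release this issue was filed against:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["answers_chunks"].create({"id": int, "content": str}, pk="id")

# Proposed: drop and recreate the table in one call while iterating on a schema,
# instead of reaching for transform=True
db["answers_chunks"].create(
    {"id": int, "content": str, "source": str},
    pk="id",
    replace=True,  # hypothetical at the time of this issue
)
print(db["answers_chunks"].schema)
```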

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/568/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1383646615 I_kwDOCGYnMM5SeMWX 491 Ability to merge databases and tables sgraaf 8904453 open 0     7 2022-09-23T11:10:55Z 2023-06-14T22:14:24Z   NONE  

Hi! Let me firstly say that I am a big fan of your work -- I follow your tweets and blog posts with great interest 😄.

Now onto the matter at hand: I think it would be great if sqlite-utils included a merge or combine command, with the purpose of combining different SQLite databases into a single SQLite database. This way, the newly "merged" database would contain all differently named tables contained in the databases to be merged as-is, as well as a concatenation of all tables of the same name.

This could look something like this:

```bash
sqlite-utils merge cats.db dogs.db > animals.db
```

I imagine this is rather straightforward if all databases involved in the merge contain differently named tables (i.e. no chance of conflicts), but things get slightly more complicated if two or more of the databases to be merged contain tables with the same name. Not only do you have to "do something" with the primary key(s), but these tables could also simply have different schemas (and therefore be incompatible for concatenation to begin with).

Anyhow, I would love your thoughts on this, and, if you are open to it, work together on the design and implementation!
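For illustration, a rough sketch of the naive case described above (differently named tables copied as-is, same-named tables concatenated), ignoring primary key conflicts and schema mismatches entirely; the `merge()` helper and the file names are hypothetical, not a proposed implementation:

```python
import sqlite_utils

def merge(output_path, *source_paths):
    # Naive merge: copy every table from each source database into the output,
    # concatenating rows for tables that share a name.
    out = sqlite_utils.Database(output_path)
    for path in source_paths:
        source = sqlite_utils.Database(path)
        for table in source.table_names():
            rows = list(source[table].rows)
            if rows:
                out[table].insert_all(rows, alter=True)
    return out

# Usage (assumes cats.db and dogs.db exist on disk)
merge("animals.db", "cats.db", "dogs.db")
```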

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/491/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1114543475 I_kwDOCGYnMM5CbpVz 388 Link to stable docs from older versions simonw 9599 closed 0     7 2022-01-26T01:55:46Z 2023-03-26T23:43:12Z 2022-01-26T02:00:22Z OWNER  

https://sqlite-utils.datasette.io/en/2.14.1/ isn't showing a link to the stable release right now.

I should also apply the same fix I used for Datasette in: - https://github.com/simonw/datasette/issues/1608

TIL: https://til.simonwillison.net/readthedocs/link-from-latest-to-stable

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/388/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
743384829 MDExOlB1bGxSZXF1ZXN0NTIxMjg3OTk0 203 changes to allow for compound foreign keys drkane 1049910 open 0     7 2020-11-16T00:30:10Z 2023-01-25T18:47:18Z   FIRST_TIME_CONTRIBUTOR simonw/sqlite-utils/pulls/203

Add support for compound foreign keys, as per issue #117

Not sure if this is the right approach. In particular I'm unsure about:

  • the new ForeignKey class, which replaces the namedtuple in order to ensure that column and other_column are forced into tuples. The class does the job, but doesn't feel very elegant.
  • I haven't rewritten guess_foreign_table to take account of multiple columns, so it just checks for the first column in the foreign key definition. This isn't ideal.
  • I haven't added any ability to the CLI to add compound foreign keys; it's only in the Python API at the moment.

The PR also contains a minor related change that columns and tables are always quoted in foreign key definitions.
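To make the first bullet concrete, here is a toy sketch (not the PR's actual code) of the normalization idea: a small class that accepts either a single column name or a tuple of names and always stores tuples:

```python
from typing import Tuple, Union

Columns = Union[str, Tuple[str, ...]]

class ForeignKey:
    # Toy illustration only: force column / other_column into tuples so
    # single-column and compound foreign keys share one representation.
    def __init__(self, table: str, column: Columns, other_table: str, other_column: Columns):
        self.table = table
        self.column = (column,) if isinstance(column, str) else tuple(column)
        self.other_table = other_table
        self.other_column = (
            (other_column,) if isinstance(other_column, str) else tuple(other_column)
        )

fk = ForeignKey("books", ("author_id", "series_id"), "author_series", ("author_id", "series_id"))
print(fk.column)  # ('author_id', 'series_id')
```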

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/203/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1082651698 I_kwDOCGYnMM5Ah_Qy 358 Support for CHECK constraints luxint 11597658 open 0     7 2021-12-16T21:19:45Z 2022-09-25T07:15:59Z   NONE  

Hi,

I noticed the table.transform() method doesn't have an option to add, change or drop a CHECK constraint (see https://sqlite.org/lang_createtable.html -> 3.7 Check Constraints). Would be great to have this as an option!
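For illustration, a minimal sketch of the current workaround: the constraint has to be written as raw SQL at creation time, because transform() offers no way to add, change, or drop it afterwards (the table and column names here are made up):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)

# CHECK constraints can only go in via raw SQL when the table is created
db.execute(
    "CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL CHECK (price > 0))"
)
db["products"].insert({"id": 1, "price": 9.99})   # fine
# db["products"].insert({"id": 2, "price": -1})   # would raise sqlite3.IntegrityError
```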

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/358/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1366512990 PR_kwDOCGYnMM4-nBs9 486 progressbar for inserts/upserts of all fileformats, closes #485 MischaU8 99098079 closed 0     7 2022-09-08T14:58:02Z 2022-09-15T20:40:03Z 2022-09-15T20:37:51Z CONTRIBUTOR simonw/sqlite-utils/pulls/486

:books: Documentation preview :books:: https://sqlite-utils--486.org.readthedocs.build/en/486/

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/486/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1166731361 I_kwDOCGYnMM5Fiuhh 414 I forgot to include the changelog in the 3.25.1 release simonw 9599 closed 0     7 2022-03-11T18:32:36Z 2022-03-11T18:40:39Z 2022-03-11T18:40:39Z OWNER  

I pushed a release for https://github.com/simonw/sqlite-utils/releases/tag/3.25.1 but forgot to include the release notes in docs/changelog.rst

This means https://sqlite-utils.datasette.io/en/stable/changelog.html isn't showing them.

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/414/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1138948786 PR_kwDOCGYnMM4y3yW0 407 Add SpatiaLite helpers to CLI eyeseast 25778 closed 0     7 2022-02-15T16:50:17Z 2022-02-16T01:49:40Z 2022-02-16T00:58:08Z CONTRIBUTOR simonw/sqlite-utils/pulls/407

Closes #398

This adds SpatiaLite helpers to the CLI.

```sh
# init spatialite when creating a database
sqlite-utils create database.db --enable-wal --init-spatialite

# add geometry columns
# needs a database, table, geometry column name, type, with optional SRID and not-null
# this will throw an error if the table doesn't already exist
sqlite-utils add-geometry-column database.db table-name geometry --srid 4326 --not-null

# spatial index an existing table/column
# this will throw an error if the table and column don't exist
sqlite-utils create-spatial-index database.db table-name geometry
```

Docs and tests are included.

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/407/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1077243232 I_kwDOCGYnMM5ANW1g 354 Test failure in test_rebuild_fts simonw 9599 closed 0     7 2021-12-10T21:27:55Z 2021-12-11T01:08:46Z 2021-12-11T01:08:46Z OWNER  

Not sure why this has only just started failing, but I'm getting this: https://github.com/simonw/sqlite-utils/runs/4488687639

```
E       sqlite3.DatabaseError: database disk image is malformed

sqlite_utils/db.py:425: DatabaseError
_________________ test_rebuild_fts[searchable_fts] _________________

fresh_db = <Database <sqlite3.Connection object at 0x1084ea9d0>>, table_to_fix = 'searchable_fts'

    @pytest.mark.parametrize("table_to_fix", ["searchable", "searchable_fts"])
    def test_rebuild_fts(fresh_db, table_to_fix):
        table = fresh_db["searchable"]
        table.insert(search_records[0])
        table.enable_fts(["text", "country"])
        # Run a search
        rows = list(table.search("tanuki"))
        assert len(rows) == 1
        assert {
            "rowid": 1,
            "text": "tanuki are running tricksters",
            "country": "Japan",
            "not_searchable": "foo",
        }.items() <= rows[0].items()
        # Delete from searchable_fts_data
        fresh_db["searchable_fts_data"].delete_where()
        # This should have broken the index
        with pytest.raises(sqlite3.DatabaseError):
            list(table.search("tanuki"))
        # Running rebuild_fts() should fix it
>       fresh_db[table_to_fix].rebuild_fts()
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/354/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1058196641 I_kwDOCGYnMM4_Esyh 342 Extra options to `lookup()` which get passed to `insert()` simonw 9599 closed 0     7 2021-11-19T06:53:03Z 2021-11-19T07:26:54Z 2021-11-19T07:26:54Z OWNER  

For https://github.com/simonw/git-history/issues/12 I found myself wanting to pass extra options to lookup() to set the column order, primary key etc.
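The kind of call being asked for, sketched under the assumption that lookup() would forward keyword arguments such as column_order= through to insert(); at the time this issue was filed that pass-through did not exist yet:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)

# Hypothetical at the time of this issue: extra options forwarded to insert()
license_id = db["licenses"].lookup(
    {"key": "apache2", "name": "Apache 2.0"},
    column_order=("key", "name"),
)
print(license_id)
```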

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/342/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
597671518 MDU6SXNzdWU1OTc2NzE1MTg= 98 Only set .last_rowid and .last_pk for single update/inserts, not for .insert_all()/.upsert_all() with multiple records simonw 9599 closed 0     7 2020-04-10T03:19:40Z 2021-09-28T04:38:44Z 2020-04-13T03:29:15Z OWNER  
sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/98/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
722816436 MDU6SXNzdWU3MjI4MTY0MzY= 186 .extract() shouldn't extract null values simonw 9599 open 0     7 2020-10-16T02:41:08Z 2021-08-12T12:32:14Z   OWNER  

This almost works, but it creates a rogue `type` record with a value of None.

```python
In [1]: import sqlite_utils
In [2]: db = sqlite_utils.Database(memory=True)
In [5]: db["creatures"].insert_all([{"id": 1, "name": "Simon", "type": None}, {"id": 2, "name": "Natalie", "type": None}, {"id": 3, "name": "Cleo", "type": "dog"}], pk="id")
Out[5]: <Table creatures (id, name, type)>
In [7]: db["creatures"].extract("type")
Out[7]: <Table creatures (id, name, type_id)>
In [8]: list(db["creatures"].rows)
Out[8]: [{'id': 1, 'name': 'Simon', 'type_id': None}, {'id': 2, 'name': 'Natalie', 'type_id': None}, {'id': 3, 'name': 'Cleo', 'type_id': 2}]
In [9]: db["type"]
Out[9]: <Table type (id, type)>
In [10]: list(db["type"].rows)
Out[10]: [{'id': 1, 'type': None}, {'id': 2, 'type': 'dog'}]
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/186/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
925410305 MDU6SXNzdWU5MjU0MTAzMDU= 285 Introspection property for telling if a table is a rowid table simonw 9599 closed 0     7 2021-06-19T14:56:16Z 2021-06-19T15:12:33Z 2021-06-19T15:12:33Z OWNER  

Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/284#issuecomment-864416785

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/285/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
924990677 MDU6SXNzdWU5MjQ5OTA2Nzc= 279 sqlite-utils memory should handle TSV and JSON in addition to CSV simonw 9599 closed 0     7 2021-06-18T15:02:54Z 2021-06-19T03:11:59Z 2021-06-19T03:11:59Z OWNER  
  • Use sniff to detect CSV or TSV (if :tsv or :csv was not specified) and delimiters

Follow-on from #272
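A small sketch of the detection step suggested above, using csv.Sniffer from the standard library to guess the delimiter when neither :csv nor :tsv is given explicitly (the sample data is made up):

```python
import csv
import io

sample = "id\tname\n1\tCleo\n2\tPancakes\n"

# Guess the delimiter from the first chunk of input, then parse accordingly
dialect = csv.Sniffer().sniff(sample, delimiters=",\t")
rows = list(csv.DictReader(io.StringIO(sample), dialect=dialect))
print(dialect.delimiter, rows)
```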

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/279/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
816560819 MDU6SXNzdWU4MTY1NjA4MTk= 240 table.pks_and_rows_where() method returning primary keys along with the rows simonw 9599 closed 0     7 2021-02-25T15:49:28Z 2021-02-25T16:39:23Z 2021-02-25T16:28:23Z OWNER  

Original title: Easier way to update a row returned from .rows

Here's a surprisingly hard problem I ran into while trying to implement #239 - given a row returned by db[table].rows how can you update that row?

The problem is that the db[table].update(...) method requires a primary key. But if you have a row from the db[table].rows iterator it might not even contain the primary key - provided the table is a rowid table.

Instead, currently, you need to introspect the table and, if rowid is a primary key, explicitly include that in the select= argument to table.rows_where(...) - otherwise it will not be returned.

A utility mechanism to make this easier would be very welcome.
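A short sketch of the workaround described above, assuming a rowid table with no explicit primary key (the table and values are made up):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["dogs"].insert({"name": "Cleo"})  # rowid table: no explicit primary key

# Explicitly select rowid so each row can be passed back to .update()
for row in list(db["dogs"].rows_where(select="rowid, *")):
    db["dogs"].update(row["rowid"], {"name": row["name"].upper()})

print(list(db["dogs"].rows))
```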

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
688670158 MDU6SXNzdWU2ODg2NzAxNTg= 147 SQLITE_MAX_VARS maybe hard-coded too low simonwiles 96218 open 0     7 2020-08-30T07:26:45Z 2021-02-15T21:27:55Z   CONTRIBUTOR  

I came across this while about to open an issue and PR against the documentation for batch_size, which is a bit incomplete.

As mentioned in #145, while:

SQLITE_MAX_VARIABLE_NUMBER ... defaults to 999 for SQLite versions prior to 3.32.0 (2020-05-22) or 32766 for SQLite versions after 3.32.0.

it is common that it is increased at compile time. Debian-based systems, for example, seem to ship with a version of sqlite compiled with SQLITE_MAX_VARIABLE_NUMBER set to 250,000, and I believe this is the case for homebrew installations too.

In working to understand what batch_size was actually doing and why, I realized that by setting SQLITE_MAX_VARS in db.py to match the value my sqlite was compiled with (I'm on Debian), I was able to decrease the time to insert_all() my test data set (~128k records across 7 tables) from ~26.5s to ~3.5s. Given that this is about 0.05% of my total dataset, this is time I am keen to save...

Unfortunately, it seems that sqlite3 in the python standard library doesn't expose the get_limit() C API (even though pysqlite used to), so it's hard to know what value sqlite has been compiled with (note that this could mean, I suppose, that it's less than 999, and even hardcoding SQLITE_MAX_VARS to the conservative default might not be adequate. It can also be lowered -- but not raised -- at runtime). The best I could come up with is echo "" | sqlite3 -cmd ".limits variable_number" (only available in sqlite >= 2015-05-07 (3.8.10)).

Obviously this couldn't be relied upon in sqlite_utils, but I wonder what your opinion would be about exposing SQLITE_MAX_VARS as a user-configurable parameter (with suitable "here be dragons" warnings)? I'm going to go ahead and monkey-patch it for my purposes in any event, but it seems like it might be worth considering.
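A Python wrapper around the shell one-liner above, for anyone who wants to probe the compiled-in limit programmatically; it assumes the sqlite3 CLI is on PATH, and the figure in the comment is just the Debian example from this issue:

```python
import subprocess

# Equivalent of: echo "" | sqlite3 -cmd ".limits variable_number"
result = subprocess.run(
    ["sqlite3", "-cmd", ".limits variable_number"],
    input="",
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "variable_number 250000" on the Debian build described above
```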

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/147/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
707944044 MDExOlB1bGxSZXF1ZXN0NDkyMjU3NDA1 174 Much, much faster extract() implementation simonw 9599 closed 0     7 2020-09-24T07:52:31Z 2020-09-24T15:44:00Z 2020-09-24T15:43:56Z OWNER simonw/sqlite-utils/pulls/174

Takes my test down from ten minutes to four seconds. Refs #172.

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/174/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
686978131 MDU6SXNzdWU2ODY5NzgxMzE= 139 insert_all(..., alter=True) should work for new columns introduced after the first 100 records simonwiles 96218 closed 0     7 2020-08-27T06:25:25Z 2020-08-28T22:48:51Z 2020-08-28T22:30:14Z CONTRIBUTOR  

Is there a way to make .insert_all() work properly when new columns are introduced outside the first 100 records (with or without the alter=True argument)?

I'm using .insert_all() to bulk insert ~3-4k records at a time and it is common for records to need to introduce new columns. However, if new columns are introduced after the first 100 records, sqlite_utils doesn't even raise the OperationalError: table ... has no column named ... exception; it just silently drops the extra data and moves on.

It took me a while to find this little snippet in the documentation for .insert_all() (it's not mentioned under Adding columns automatically on insert/update):

The column types used in the CREATE TABLE statement are automatically derived from the types of data in that first batch of rows. Any additional or missing columns in subsequent batches will be ignored.

I tried changing the batch_size argument to the total number of records, but it seems only to affect the number of rows that are committed at a time, and has no influence on this problem.

Is there a way around this that you would suggest? It seems like it should raise an exception at least.
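A minimal reproduction of the behaviour described, as of the version this issue was filed against (the table name and values are made up):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)

# "b" only appears after the first batch of 100 records
records = [{"a": i} for i in range(100)] + [{"a": i, "b": i} for i in range(100, 110)]
db["demo"].insert_all(records, alter=True)

# At the time this issue was filed the extra column was silently dropped;
# the eventual fix makes alter=True pick up "b" in later batches.
print(db["demo"].columns_dict)
```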

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/139/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
581339961 MDU6SXNzdWU1ODEzMzk5NjE= 92 .columns_dict doesn't work for all possible column types simonw 9599 closed 0     7 2020-03-14T19:30:35Z 2020-03-15T18:37:43Z 2020-03-14T20:04:14Z OWNER  

Got this error:

```
File ".../python3.7/site-packages/sqlite_utils/db.py", line 462, in <dictcomp>
    for column in self.columns
KeyError: 'REAL'
```

.columns_dict uses REVERSE_COLUMN_TYPE_MAPPING: https://github.com/simonw/sqlite-utils/blob/43f1c6ab4e3a6b76531fb6f5447adb83d26f3971/sqlite_utils/db.py#L457-L463

REVERSE_COLUMN_TYPE_MAPPING defines FLOAT, not REAL: https://github.com/simonw/sqlite-utils/blob/43f1c6ab4e3a6b76531fb6f5447adb83d26f3971/sqlite_utils/db.py#L68-L74

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/92/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
564579430 MDU6SXNzdWU1NjQ1Nzk0MzA= 86 Problem with square bracket in CSV column name foscoj 8149512 closed 0     7 2020-02-13T10:19:57Z 2020-02-27T04:16:08Z 2020-02-27T04:16:07Z NONE  

Testing some data from European power information (entsoe.eu): the title of the CSV contains square brackets. As I am playing with Glitch, sqlite-utils is used for creating the db.

```
Traceback (most recent call last):
  File "/app/.local/bin/sqlite-utils", line 8, in <module>
    sys.exit(cli())
  File "/app/.local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/app/.local/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/app/.local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/app/.local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/app/.local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/app/.local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 434, in insert
    default=default,
  File "/app/.local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 384, in insert_upsert_implementation
    docs, pk=pk, batch_size=batch_size, alter=alter, **extra_kwargs
  File "/app/.local/lib/python3.7/site-packages/sqlite_utils/db.py", line 997, in insert_all
    extracts=extracts,
  File "/app/.local/lib/python3.7/site-packages/sqlite_utils/db.py", line 618, in create
    extracts=extracts,
  File "/app/.local/lib/python3.7/site-packages/sqlite_utils/db.py", line 310, in create_table
    self.conn.execute(sql)
sqlite3.OperationalError: unrecognized token: "]"
```

Attachment: entsoe_2016.txt (the original entsoe_2016.csv, renamed to .txt for uploading compatibility)

code is remixed directly from your https://glitch.com/edit/#!/datasette-csvs repo

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/86/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
413868452 MDU6SXNzdWU0MTM4Njg0NTI= 17 Improve and document foreign_keys=... argument to insert/create/etc simonw 9599 closed 0     7 2019-02-24T21:09:11Z 2019-02-24T23:45:48Z 2019-02-24T23:45:48Z OWNER  

The foreign_keys= argument to table.insert_all() and friends can be used to specify foreign key relationships that should be created.

It is not yet documented. It also requires you to specify the SQLite type of each column, even though this can be detected by introspecting the referenced table:

```python
cols = [c for c in self.db[other_table].columns if c.name == other_column]
cols[0].type
```

Relates to #2

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/17/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);