issues

1 row where state = "closed" and user = 16622642 sorted by updated_at descending

  • id: 963897111
  • node_id: MDU6SXNzdWU5NjM4OTcxMTE=
  • number: 309
  • title: sqlite-utils insert errors should show SQL and parameters, if possible
  • user: scaleoutsean (16622642)
  • state: closed
  • locked: 0
  • assignee: (none)
  • milestone: (none)
  • comments: 6
  • created_at: 2021-08-09T11:24:14Z
  • updated_at: 2021-08-09T23:40:29Z
  • closed_at: 2021-08-09T22:25:58Z
  • author_association: NONE
  • pull_request: (none)
  • body:

I've tried several approaches, but this is the current one:

    echo $json-line | sqlite-utils insert json.db jsontable --truncate --alter --detect-types -

In all cases, I get this error:

    OverflowError: Python int too large to convert to SQLite INTEGER
    Traceback (most recent call last):
      File "/home/sean/.local/bin/sqlite-utils", line 8, in <module>
        sys.exit(cli())
      File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
        return self.main(*args, **kwargs)
      File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
        rv = self.invoke(ctx)
      File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
        return callback(*args, **kwargs)
      File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/cli.py", line 841, in insert
        insert_upsert_implementation(
      File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/cli.py", line 780, in insert_upsert_implementation
        db[table].insert_all(
      File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/db.py", line 2145, in insert_all
        self.insert_chunk(
      File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/db.py", line 1957, in insert_chunk
        result = self.db.execute(query, params)
      File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/db.py", line 257, in execute
        return self.conn.execute(sql, parameters)
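
For context, a minimal sketch of the underlying failure (not from the original report; table and column names are made up), using only the standard-library sqlite3 module: SQLite stores INTEGER values as signed 64-bit numbers, so binding any Python int above 2**63 - 1 raises this exact OverflowError inside conn.execute(), which is where sqlite_utils/db.py ends up in the traceback above.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE jsontable (big_value INTEGER)")  # illustrative names

    try:
        # 2**63 is one past SQLite's signed 64-bit maximum of 2**63 - 1
        conn.execute("INSERT INTO jsontable (big_value) VALUES (?)", (2**63,))
    except OverflowError as exc:
        print(exc)  # Python int too large to convert to SQLite INTEGER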

I googled the error and checked Stack Overflow answers and advice, which were all reasonable. I changed my JSON file to not use integers, so I no longer get this error. Of course, that makes using the database a bit harder, so I also tried to solve the problem by modifying the DB structure (while keeping integers in the JSON).

If I change all INTEGER data types to something else (STRING, TEXT) and try to import again using --truncate, I still get this error. I suppose I should tell sqlite-utils which columns should use a non-INTEGER data type rather than rely on it to check my SQL table configuration.
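
That behaviour is consistent with how the sqlite3 driver binds parameters: the Python int overflows during binding, before column affinity is ever consulted, so declaring the column TEXT does not help on its own. A sketch of the distinction (illustrative names again, not from the report): the same oversized value fails when bound as an int but succeeds once converted to a string.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE jsontable (big_value TEXT)")

    too_big = 2**64  # well outside SQLite's signed 64-bit INTEGER range

    try:
        # Bound as a Python int, the value overflows at binding time,
        # so the TEXT declaration never gets a chance to matter.
        conn.execute("INSERT INTO jsontable (big_value) VALUES (?)", (too_big,))
    except OverflowError as exc:
        print("still fails:", exc)

    # Converting the value to a string before binding sidesteps the overflow.
    conn.execute("INSERT INTO jsontable (big_value) VALUES (?)", (str(too_big),))
    print(conn.execute("SELECT big_value FROM jsontable").fetchone())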

If that is the case, can this error be a bit more specific to make troubleshooting easier - maybe tell us which record caused the problem when the error is thrown?
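
One possible shape for that request, sketched as a generic wrapper rather than anything sqlite-utils actually does (the function name and behaviour are assumptions for illustration):

    import sqlite3

    def execute_with_context(conn, sql, parameters):
        # Hypothetical helper: re-raise binding/execution errors with the
        # SQL statement and parameters that caused them attached.
        try:
            return conn.execute(sql, parameters)
        except (OverflowError, sqlite3.Error) as exc:
            raise type(exc)(
                f"{exc}\nSQL: {sql}\nparameters: {parameters!r}"
            ) from exc

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE jsontable (big_value INTEGER)")
    try:
        execute_with_context(
            conn, "INSERT INTO jsontable (big_value) VALUES (?)", (2**63,)
        )
    except OverflowError as exc:
        print(exc)  # message now includes the offending SQL and parameters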

My table has 60+ columns, many of which use 64-bit integers (not all records are large or known in advance), so while I can modify the JSON to use strings instead of integers, that decreases usability, and finding out which records have values that don't fit in a SQLite integer requires some work (I'm thinking about parsing all integers with jq and sorting the output by length to identify those columns, but I'd prefer it if sqlite-utils could tell me that).
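
Along the lines of the jq idea, a small sketch (not part of the report; assumes flat, newline-delimited JSON records on stdin) that reports which keys hold integers outside SQLite's signed 64-bit range:

    import json
    import sys

    # SQLite INTEGER is a signed 64-bit value.
    SQLITE_INT_MAX = 2**63 - 1
    SQLITE_INT_MIN = -(2**63)

    for line_number, line in enumerate(sys.stdin, start=1):
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        for key, value in record.items():
            # bool is a subclass of int, so exclude it explicitly
            if isinstance(value, int) and not isinstance(value, bool):
                if not SQLITE_INT_MIN <= value <= SQLITE_INT_MAX:
                    print(f"line {line_number}: {key} = {value}")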

My environment:

  • Python 3.8.10
  • sqlite-utils 3.14
  • pandas 1.3.1
  • numpy 1.21.1
  • sqlite-fts4 1.0.1
  • sqlite 3.31.1-4ubuntu0.2
  • repo: sqlite-utils (140912432)
  • type: issue
  • active_lock_reason: (none)
  • performed_via_github_app: (none)
  • reactions:
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/309/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  • draft: (none)
  • state_reason: completed

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);