issue_comments

8 comments on issue 1250629388 (CSV files with too many values in a row cause errors) where "created_at" is on 2022-06-14, sorted by updated_at descending. All 8 comments are by simonw (author_association OWNER).

simonw (OWNER) · 2022-06-14T22:22:27Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155767915

I forgot to add equivalents of extras_key= and ignore_extras= to the CLI tool - will do that in a separate issue.

Reactions: none
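
For context, extras_key= and ignore_extras= are keyword arguments on sqlite_utils.utils.rows_from_file, the API documented in the next comment's link. A minimal sketch of how they behave - the exact output shape is an assumption based on that documentation:

import io
from sqlite_utils.utils import rows_from_file

# A CSV whose data row has more values than there are headers
fp = io.BytesIO(b"id,name\r\n1,Cleo,oops")

# extras_key= collects the surplus values in a list under that key instead of raising;
# ignore_extras=True would drop them silently instead
rows, detected_format = rows_from_file(fp, extras_key="rest")
print(list(rows))
# e.g. [{"id": "1", "name": "Cleo", "rest": ["oops"]}]
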

simonw (OWNER) · 2022-06-14T20:19:07Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155672675

Documentation: https://sqlite-utils.datasette.io/en/latest/python-api.html#reading-rows-from-a-file

Reactions: rocket 1

simonw (OWNER) · 2022-06-14T20:11:52Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155666672

I'm going to rename restkey to extras_key for consistency with ignore_extras.

Reactions: none
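
The old restkey name presumably mirrored csv.DictReader in the standard library, which uses a parameter of that name for the same job:

import csv
import io

# DictReader stores any surplus values in a list under the restkey fieldname
reader = csv.DictReader(io.StringIO("id,name\r\n1,Cleo,oops"), restkey="extras")
print(list(reader))
# [{'id': '1', 'name': 'Cleo', 'extras': ['oops']}]
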

simonw (OWNER) · 2022-06-14T15:54:03Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155389614

Filed an issue against python/typeshed:

  • https://github.com/python/typeshed/issues/8075

Reactions: none

simonw (OWNER) · 2022-06-14T15:31:34Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155358637

Getting this past mypy is really hard!

% mypy sqlite_utils
sqlite_utils/utils.py:189: error: No overload variant of "pop" of "MutableMapping" matches argument type "None"
sqlite_utils/utils.py:189: note: Possible overload variants:
sqlite_utils/utils.py:189: note:     def pop(self, key: str) -> str
sqlite_utils/utils.py:189: note:     def [_T] pop(self, key: str, default: Union[str, _T] = ...) -> Union[str, _T]

That's because of this line:

row.pop(key=None)

Which is legit here - we have a dictionary where one of the keys is None and we want to remove that key. But the baked in type is apparently def pop(self, key: str) -> str.

Reactions: none
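
One way to get a call like that past a strict DictReader row type is to widen the type before popping the None key - a sketch of the general workaround, not necessarily the fix that landed:

import csv
import io
from typing import Any, Dict, cast

reader = csv.DictReader(io.StringIO("id,name\r\n1,Cleo,oops"))
for row in reader:
    # With the default restkey=None, surplus values sit under the None key,
    # but typeshed's stubs type the keys as str - casting sidesteps the overload error
    extras = cast(Dict[Any, Any], row).pop(None, None)
    print(row, extras)
    # {'id': '1', 'name': 'Cleo'} ['oops']
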

simonw (OWNER) · 2022-06-14T15:25:18Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155350755

That broke mypy:

sqlite_utils/utils.py:229: error: Incompatible types in assignment (expression has type "Iterable[Dict[Any, Any]]", variable has type "DictReader[str]")

Reactions: none
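
That error usually means the variable was first inferred as DictReader[str] and then rebound to a plain iterable of dicts; annotating it with the broader type up front is one pattern that keeps mypy happy - a sketch under that assumption, not the actual code in question:

import csv
import io
from typing import Any, Dict, Iterable

fp = io.StringIO("id,name\r\n1,Cleo,oops")
# Declared with the wider Iterable type so it can later be rebound to a generator
rows: Iterable[Dict[str, Any]] = csv.DictReader(fp)
rows = ({k: v for k, v in row.items() if k is not None} for row in rows)
print(list(rows))
# [{'id': '1', 'name': 'Cleo'}]
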

simonw (OWNER) · 2022-06-14T15:04:01Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155317293

I think that's unavoidable: it looks like csv.Sniffer only works if you feed it a CSV file with an equal number of values in each row, which is understandable.

Reactions: none
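
A quick illustration with the standard library: on a sample where every row has the same number of values, the Sniffer settles on the comma without complaint:

import csv

# A well-formed sample: two values per row, so the comma frequency is consistent
dialect = csv.Sniffer().sniff("id,name\r\n1,Cleo\r\n2,Pancakes")
print(dialect.delimiter)
# ","
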

simonw (OWNER) · 2022-06-14T14:58:50Z · https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155310521

Interesting challenge in writing tests for this: if you give csv.Sniffer a short example with an invalid row in it, sometimes it picks the wrong delimiter!

id,name\r\n1,Cleo,oops

It decided the delimiter there was e.

Reactions: none
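
That misdetection reproduces with the standard library alone (the comment above reports "e" being chosen for this input):

import csv

# The ragged second row makes the comma count inconsistent between rows,
# so the Sniffer falls back to a character that does appear a consistent
# number of times per row
print(csv.Sniffer().sniff("id,name\r\n1,Cleo,oops").delimiter)
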

Table schema:
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);