issue_comments


2 rows where author_association = "MEMBER", issue = 488835586 and user = 9599 sorted by updated_at descending

id: 540879620
html_url: https://github.com/dogsheep/twitter-to-sqlite/issues/4#issuecomment-540879620
issue_url: https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/4
node_id: MDEyOklzc3VlQ29tbWVudDU0MDg3OTYyMA==
user: simonw (9599)
created_at: 2019-10-11T02:59:16Z
updated_at: 2019-10-11T02:59:16Z
author_association: MEMBER
body:

Also import ad preferences and all that other junk.

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: Command for importing data from a Twitter Export file (488835586)
performed_via_github_app: (none)
id: 527682713
html_url: https://github.com/dogsheep/twitter-to-sqlite/issues/4#issuecomment-527682713
issue_url: https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/4
node_id: MDEyOklzc3VlQ29tbWVudDUyNzY4MjcxMw==
user: simonw (9599)
created_at: 2019-09-03T23:48:57Z
updated_at: 2019-09-03T23:48:57Z
author_association: MEMBER
body:

One interesting challenge here is that the JSON format for tweets in the archive is subtly different from the JSON format currently returned by the API.

If we want to keep the tweets in the same database table (which feels like the right thing to me) we'll need to handle this.

One thing we can do is have a from_archive column which is set to 1 for tweets that were recovered from the archive.

We can also ensure that tweets from the API always overwrite the version that came from the archive (using .upsert()), while tweets from the archive use .insert(..., ignore=True) to avoid overwriting a better version that came from the API.

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: Command for importing data from a Twitter Export file (488835586)
performed_via_github_app: (none)
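The archive-vs-API strategy described in the comment above can be sketched with Python's built-in sqlite3. This is a minimal illustration, not the twitter-to-sqlite implementation: the table name and columns are made up for the example, and sqlite-utils' .upsert() and .insert(..., ignore=True) behave roughly like SQLite's INSERT OR REPLACE and INSERT OR IGNORE used here.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE tweets (id INTEGER PRIMARY KEY, full_text TEXT, from_archive INTEGER)"
)

# Archive import: INSERT OR IGNORE inserts the row only if the id is new,
# so it can never clobber a version already fetched from the API.
db.execute(
    "INSERT OR IGNORE INTO tweets VALUES (?, ?, ?)",
    (1, "text from the archive", 1),
)

# API import: INSERT OR REPLACE always wins, overwriting any archive copy.
db.execute(
    "INSERT OR REPLACE INTO tweets VALUES (?, ?, ?)",
    (1, "text from the API", 0),
)

# A later re-import from the archive must not overwrite the API version:
db.execute(
    "INSERT OR IGNORE INTO tweets VALUES (?, ?, ?)",
    (1, "text from the archive", 1),
)

row = db.execute(
    "SELECT full_text, from_archive FROM tweets WHERE id = 1"
).fetchone()
print(row)  # the API version survives both archive imports
```

The ordering of imports stops mattering under this scheme: whichever source wrote last, the API copy is the one that remains.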

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
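The filtered view described at the top of the page ("2 rows where author_association = MEMBER, issue = 488835586 and user = 9599 sorted by updated_at descending") corresponds to a straightforward query against this schema. A sketch using Python's sqlite3, seeding only the two rows shown above with just the columns the query touches (the REFERENCES clauses are omitted because the users and issues tables aren't shown here):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER,
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Seed the two comments from the table above (other columns left NULL).
db.executemany(
    "INSERT INTO issue_comments (id, [user], author_association, issue, updated_at) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (540879620, 9599, "MEMBER", 488835586, "2019-10-11T02:59:16Z"),
        (527682713, 9599, "MEMBER", 488835586, "2019-09-03T23:48:57Z"),
    ],
)

# The page's filter, expressed as SQL; ISO 8601 timestamps sort correctly
# as plain strings, so ORDER BY updated_at DESC gives newest first.
rows = db.execute("""
    SELECT id, updated_at FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 488835586
      AND [user] = 9599
    ORDER BY updated_at DESC
""").fetchall()
print(rows)
```

The idx_issue_comments_issue and idx_issue_comments_user indexes exist precisely to make this kind of per-issue, per-user filtering cheap.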
Powered by Datasette · Queries took 333.98ms · About: github-to-sqlite