html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app
https://github.com/dogsheep/twitter-to-sqlite/issues/4#issuecomment-540879620,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/4,540879620,MDEyOklzc3VlQ29tbWVudDU0MDg3OTYyMA==,9599,simonw,2019-10-11T02:59:16Z,2019-10-11T02:59:16Z,MEMBER,Also import ad preferences and all that other junk.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",488835586,Command for importing data from a Twitter Export file,
https://github.com/dogsheep/twitter-to-sqlite/issues/4#issuecomment-527682713,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/4,527682713,MDEyOklzc3VlQ29tbWVudDUyNzY4MjcxMw==,9599,simonw,2019-09-03T23:48:57Z,2019-09-03T23:48:57Z,MEMBER,"One interesting challenge here is that the JSON format for tweets in the archive is subtly different from the JSON format currently returned by the API. If we want to keep the tweets in the same database table (which feels like the right thing to me) we'll need to handle this. One thing we can do is have a column for `from_archive` which is set to 1 for tweets that were recovered from the archive. We can also ensure that tweets from the API always over-write the version that came from the archive (using `.upsert()`) while tweets from the archive use `.insert(..., ignore=True)` to avoid over-writing a better version that came from the API.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",488835586,Command for importing data from a Twitter Export file,