issue_comments: 606998669
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/dogsheep/twitter-to-sqlite/issues/39#issuecomment-606998669 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/39 | 606998669 | MDEyOklzc3VlQ29tbWVudDYwNjk5ODY2OQ== | 9599 | 2020-04-01T02:57:36Z | 2020-04-01T02:57:36Z | MEMBER | The tricky thing here is thinking about the interaction between the recorded `since_id` and a desire to run the initial import. The first time you run the command, we need to record the maximum ID from those tweets as the `since_id`. But what happens if our initial import is cancelled after only a few tweets? We risk never pulling in the rest of the tweets. Not sure if I need to solve this at all, or if I should instead trust users to run the command a second time without it. I had considered letting … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 590666760 | |
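The comment is reasoning about a `since_id` cursor: record the highest tweet ID seen during an import, then pass it back on later runs so only newer tweets are fetched. The sketch below illustrates that bookkeeping and why a cancelled initial import is awkward; the `fetch_timeline()` helper and the `since_ids` table are hypothetical stand-ins, not the actual twitter-to-sqlite implementation.

```python
import sqlite3

# Fake data standing in for the Twitter API, newest tweet first.
FAKE_TIMELINE = [
    {"id": 103, "text": "c"},
    {"id": 102, "text": "b"},
    {"id": 101, "text": "a"},
]


def fetch_timeline(since_id=None):
    """Hypothetical API call: return tweets with an ID greater than since_id."""
    return [t for t in FAKE_TIMELINE if since_id is None or t["id"] > since_id]


def import_timeline(db, screen_name):
    db.execute("CREATE TABLE IF NOT EXISTS tweets (id INTEGER PRIMARY KEY, text TEXT)")
    db.execute(
        "CREATE TABLE IF NOT EXISTS since_ids (screen_name TEXT PRIMARY KEY, since_id INTEGER)"
    )
    row = db.execute(
        "SELECT since_id FROM since_ids WHERE screen_name = ?", (screen_name,)
    ).fetchone()
    since_id = row[0] if row else None

    max_id = since_id or 0
    for tweet in fetch_timeline(since_id=since_id):
        db.execute(
            "INSERT OR REPLACE INTO tweets (id, text) VALUES (?, ?)",
            (tweet["id"], tweet["text"]),
        )
        max_id = max(max_id, tweet["id"])

    # Record the new cursor only after the whole batch has been written.
    db.execute(
        "INSERT OR REPLACE INTO since_ids (screen_name, since_id) VALUES (?, ?)",
        (screen_name, max_id),
    )
    db.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    import_timeline(conn, "example_user")
    print(conn.execute("SELECT since_id FROM since_ids").fetchone())  # (103,)
```

Recording the cursor only after the loop finishes means a cancelled run never advances `since_id` past tweets it skipped, but it also means a cancelled initial import records nothing and must be re-run from scratch, which is the trade-off the comment is weighing.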