issue_comments: 389572201
html_url: https://github.com/simonw/datasette/issues/266#issuecomment-389572201
issue_url: https://api.github.com/repos/simonw/datasette/issues/266
id: 389572201
node_id: MDEyOklzc3VlQ29tbWVudDM4OTU3MjIwMQ==
user: 9599
created_at: 2018-05-16T15:58:43Z
updated_at: 2018-05-16T16:00:47Z
author_association: OWNER

body:

This will likely be implemented in the ...

This means it will take ALL arguments that are available to the ...

In streaming mode, things will behave a little bit differently - in particular, if ...

It can't include a length header because we don't know how many bytes it will be.

CSV output will throw an error if the endpoint doesn't have rows and columns keys, e.g. ...

So the implementation...
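As a rough illustration of that implementation idea, here is a minimal client-side sketch of the pagination-driven CSV streaming, assuming the table's `.json` endpoint exposes `columns`, `rows` and a `next` token (as Datasette's table JSON does); the `httpx` client, the function name and the fixtures URL are illustrative, not part of the proposal:

```python
import csv
import io

import httpx  # illustrative HTTP client; any client would do


def stream_table_as_csv(json_url: str):
    """Yield CSV text chunk by chunk: fetch one page of .json,
    transpose its columns/rows to CSV, then follow the "next"
    token until no further pages remain. Because pages arrive
    incrementally, the total byte count is unknown up front,
    which is why no length header can be sent."""
    next_token = None
    header_written = False
    while True:
        params = {"_next": next_token} if next_token else {}
        data = httpx.get(json_url, params=params).json()
        # Endpoints without "rows"/"columns" keys fail here,
        # matching the error case described above.
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        if not header_written:
            writer.writerow(data["columns"])
            header_written = True
        writer.writerows(data["rows"])
        yield buffer.getvalue()
        next_token = data.get("next")
        if not next_token:
            break


if __name__ == "__main__":
    for chunk in stream_table_as_csv(
        "https://latest.datasette.io/fixtures/facetable.json"
    ):
        print(chunk, end="")
```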
I like that this takes advantage of efficient pagination. It may not work so well for views, which use offset/limit, though. It won't work at all for custom SQL, because custom SQL doesn't support `_next=` pagination. That's fine. For views... the easiest fix is to cut off after the first X000 records. That seems OK. The view JSON would need to include a property that the mechanism can identify.
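The cut-off idea for views could look something like the sketch below: the same page-walking loop, but with a hard cap on emitted rows. The 1,000-row cap and the helper name are placeholders (the exact "first X000 records" number is undecided above), and a real version would also need the view JSON to expose the identifying property just mentioned:

```python
import csv
import io

import httpx  # illustrative HTTP client


def stream_view_as_csv(json_url: str, max_rows: int = 1000):
    """Stream a SQL view as CSV, cutting off after max_rows.

    Views paginate with offset/limit, so each successive page
    gets more expensive; a hard cap bounds the total work.
    Returning one string keeps the sketch short - a streaming
    response would yield chunks instead."""
    out = io.StringIO()
    writer = csv.writer(out)
    next_token = None
    header_written = False
    emitted = 0
    while True:
        params = {"_next": next_token} if next_token else {}
        data = httpx.get(json_url, params=params).json()
        if not header_written:
            writer.writerow(data["columns"])
            header_written = True
        for row in data["rows"]:
            if emitted >= max_rows:
                return out.getvalue()  # hit the cap: truncate here
            writer.writerow(row)
            emitted += 1
        next_token = data.get("next")
        if not next_token:
            return out.getvalue()  # view exhausted before the cap
```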
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
issue: 323681589
performed_via_github_app: