issue_comments
1 row where issue = 323681589 and "updated_at" is on date 2018-06-04 sorted by updated_at descending
| Column | Value |
|---|---|
| id | 394417567 |
| html_url | https://github.com/simonw/datasette/issues/266#issuecomment-394417567 |
| issue_url | https://api.github.com/repos/simonw/datasette/issues/266 |
| node_id | MDEyOklzc3VlQ29tbWVudDM5NDQxNzU2Nw== |
| user | simonw 9599 |
| created_at | 2018-06-04T16:30:48Z |
| updated_at | 2018-06-04T16:32:55Z |
| author_association | OWNER |
| body | When serving streaming responses, I need to check that a large CSV file doesn't completely max out the CPU in a way that is harmful to the rest of the instance. If it does, one option may be to insert an async sleep call in between each chunk that is streamed back. This could be controlled by a That's only if testing proves that this is a necessary mechanism. |
| reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| issue | 323681589 |
| performed_via_github_app | |
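The mechanism described in the comment body above (an async sleep between each streamed chunk so that one large CSV export cannot monopolise the CPU) could look roughly like the sketch below. This is a minimal, hypothetical illustration, not Datasette's actual streaming code; the `stream_csv` helper, the chunk size, and the hard-coded `PAUSE_BETWEEN_CHUNKS` value are assumptions made for the example (per the comment, the pause would be controlled by a setting).

```python
import asyncio
import csv
import io

# Hypothetical pause between chunks; a real implementation would make this
# configurable rather than hard-coding it.
PAUSE_BETWEEN_CHUNKS = 0.05  # seconds


async def stream_csv(rows, chunk_size=1000):
    """Yield CSV text in chunks, sleeping between chunks so one large
    export cannot monopolise the event loop."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    for i, row in enumerate(rows, 1):
        writer.writerow(row)
        if i % chunk_size == 0:
            yield buffer.getvalue()
            buffer.seek(0)
            buffer.truncate(0)
            # Hand control back to the event loop before the next chunk.
            await asyncio.sleep(PAUSE_BETWEEN_CHUNKS)
    if buffer.tell():
        yield buffer.getvalue()


async def main():
    rows = ([i, f"value-{i}"] for i in range(10_000))
    async for chunk in stream_csv(rows):
        pass  # a real app would write each chunk to the HTTP response


asyncio.run(main())
```

Whether such a pause is needed at all is exactly what the comment says testing should establish.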
    CREATE TABLE [issue_comments] (
        [html_url] TEXT,
        [issue_url] TEXT,
        [id] INTEGER PRIMARY KEY,
        [node_id] TEXT,
        [user] INTEGER REFERENCES [users]([id]),
        [created_at] TEXT,
        [updated_at] TEXT,
        [author_association] TEXT,
        [body] TEXT,
        [reactions] TEXT,
        [issue] INTEGER REFERENCES [issues]([id]),
        [performed_via_github_app] TEXT
    );
    CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
    CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
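The page itself is the result of a filtered query against this schema. A rough equivalent using Python's sqlite3 module is sketched below; the `github.db` filename is an assumption (whichever database file holds this table), and the column list is trimmed for readability. The `idx_issue_comments_issue` index covers the `issue = ?` filter.

```python
import sqlite3

# "github.db" is a placeholder for the database file containing issue_comments.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Comments on issue 323681589 updated on 2018-06-04, newest first.
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ?
      AND date(updated_at) = ?
    ORDER BY updated_at DESC
    """,
    (323681589, "2018-06-04"),
).fetchall()

for row in rows:
    print(dict(row))
```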