issue_comments
1 row where issue = 919508498 and user = 9599, sorted by updated_at descending
| column | value |
|---|---|
| id | 860230385 |
| html_url | https://github.com/simonw/datasette/issues/1375#issuecomment-860230385 |
| issue_url | https://api.github.com/repos/simonw/datasette/issues/1375 |
| node_id | MDEyOklzc3VlQ29tbWVudDg2MDIzMDM4NQ== |
| user | simonw 9599 |
| created_at | 2021-06-13T15:37:49Z |
| updated_at | 2021-06-13T15:37:49Z |
| author_association | OWNER |
| body | There is a feature for this at the moment, but it's a little bit hidden: you can use `?_json=COLNAME` to request that a specific column be returned as nested JSON. I considered trying to make this automatic - so it detects columns that appear to contain valid JSON and outputs them as nested objects - but the problem with that is that it can lead to inconsistent results: you might hit the API and find that not every column contains valid JSON (compared to the previous day), resulting in the API returning a string instead of the expected dictionary and breaking your code. |
| reactions | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| issue | JSON export dumps JSON fields as TEXT 919508498 |
| performed_via_github_app | |
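The `?_json=COLNAME` querystring argument described in the comment body is part of Datasette's table JSON API. Below is a minimal sketch of fetching this same row over HTTP with the `reactions` column expanded into a nested object; the instance URL is an assumption (any Datasette serving a github-to-sqlite database would work).

```python
# Minimal sketch: fetch this row from a Datasette JSON endpoint, asking for the
# "reactions" column to be returned as a nested object rather than a TEXT string.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed base URL -- substitute the Datasette instance serving your database.
BASE = "https://github-to-sqlite.dogsheep.net/github/issue_comments.json"

params = urlencode({
    "issue": 919508498,        # filter: issue column equals this id
    "user": 9599,              # filter: user column equals this id
    "_sort_desc": "updated_at",
    "_shape": "array",         # return a plain JSON array of row objects
    "_json": "reactions",      # the feature mentioned in the comment above
})

with urlopen(f"{BASE}?{params}") as resp:
    rows = json.load(resp)

# With _json=reactions the value is already a dict, not a JSON-encoded string.
print(rows[0]["reactions"]["total_count"])
```

Without `_json=reactions`, the same request returns the `reactions` value as a JSON-encoded string, which is the behaviour the linked issue is about.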
    CREATE TABLE [issue_comments] (
        [html_url] TEXT,
        [issue_url] TEXT,
        [id] INTEGER PRIMARY KEY,
        [node_id] TEXT,
        [user] INTEGER REFERENCES [users]([id]),
        [created_at] TEXT,
        [updated_at] TEXT,
        [author_association] TEXT,
        [body] TEXT,
        [reactions] TEXT,
        [issue] INTEGER REFERENCES [issues]([id]),
        [performed_via_github_app] TEXT
    );
    CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
    CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
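Because `reactions` is declared as TEXT in the schema above, a plain SQL client gets the raw JSON string back and has to decode it itself. Here is a sketch, assuming a local `github.db` file built with github-to-sqlite, that reproduces the filtered view on this page and decodes that column manually:

```python
# Sketch, assuming a local github.db built with github-to-sqlite: reproduce the
# filtered view above with plain SQL, then decode the TEXT "reactions" column
# by hand with json.loads (the manual step the issue is about).
import json
import sqlite3

conn = sqlite3.connect("github.db")  # assumed database file name
conn.row_factory = sqlite3.Row       # access columns by name

rows = conn.execute(
    """
    SELECT id, updated_at, author_association, body, reactions
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (919508498, 9599),
).fetchall()

for row in rows:
    reactions = json.loads(row["reactions"])  # stored as TEXT, so decode it
    print(row["id"], row["updated_at"], reactions["total_count"])

conn.close()
```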