issue_comments
4 rows where "updated_at" is on date 2017-11-10 and user = 9599, sorted by updated_at descending
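The row selection above can be reproduced with a plain SQL query against the table. A minimal sketch, assuming Datasette's date and equality filters compile to roughly this shape:

```sql
-- Approximate SQL behind the "updated_at is on date 2017-11-10 and user = 9599" filter
select *
from issue_comments
where date(updated_at) = '2017-11-10'
  and "user" = 9599
order by updated_at desc;
```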
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app
---|---|---|---|---|---|---|---|---|---|---|---
343581332 | https://github.com/simonw/datasette/issues/21#issuecomment-343581332 | https://api.github.com/repos/simonw/datasette/issues/21 | MDEyOklzc3VlQ29tbWVudDM0MzU4MTMzMg== | simonw 9599 | 2017-11-10T20:45:42Z | 2017-11-10T20:45:42Z | OWNER | I'm not going to use Sanic's mechanism for this. I'll use arguments passed to my cli instead. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Use Sanic configuration mechanism 267769034 | 
343581130 | https://github.com/simonw/datasette/issues/20#issuecomment-343581130 | https://api.github.com/repos/simonw/datasette/issues/20 | MDEyOklzc3VlQ29tbWVudDM0MzU4MTEzMA== | simonw 9599 | 2017-11-10T20:44:38Z | 2017-11-10T20:44:38Z | OWNER | I'm going to handle this a different way. I'm going to support a local history of your own queries stored in localStorage, but if you want to share a query you have to do it with a URL. If people really want canned query support, they can do that using custom templates - see #12 - or by adding views to their database before they publish it. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Config file with support for defining canned queries 267759136 | 
343557070 | https://github.com/simonw/datasette/issues/52#issuecomment-343557070 | https://api.github.com/repos/simonw/datasette/issues/52 | MDEyOklzc3VlQ29tbWVudDM0MzU1NzA3MA== | simonw 9599 | 2017-11-10T18:57:47Z | 2017-11-10T18:57:47Z | OWNER | https://file.io/ looks like it could be good for this. It's been around since 2015, and lets you upload a temporary file which can be downloaded once. Downloading from that URL serves up the data with a | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Solution for temporarily uploading DB so it can be built by docker 273026602 | 
343551356 | https://github.com/simonw/datasette/issues/49#issuecomment-343551356 | https://api.github.com/repos/simonw/datasette/issues/49 | MDEyOklzc3VlQ29tbWVudDM0MzU1MTM1Ng== | simonw 9599 | 2017-11-10T18:33:22Z | 2017-11-10T18:33:22Z | OWNER | I'm going with datasette. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Pick a name 272661336 | 
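The issue #20 comment above suggests adding views to the database as an alternative to canned queries. A minimal sketch of that idea, using a hypothetical view name; a view defined before publishing simply shows up as another queryable table:

```sql
-- Hypothetical example: package a frequently-run query as a view
-- before publishing the database.
create view if not exists comments_by_simonw as
select id, issue, created_at, body
from issue_comments
where "user" = 9599
order by created_at desc;
```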
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
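The `user` and `issue` columns are foreign keys to `users` and `issues`, and both are indexed. A sketch of a join that resolves them to readable labels; the `login` and `title` columns on the referenced tables are assumptions, since their schemas are not shown here:

```sql
select
  issue_comments.id,
  users.login,    -- assumed column on the referenced users table
  issues.title,   -- assumed column on the referenced issues table
  issue_comments.created_at,
  issue_comments.body
from issue_comments
join users on users.id = issue_comments."user"
join issues on issues.id = issue_comments.issue
where date(issue_comments.updated_at) = '2017-11-10';
```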