issues
5 rows where milestone = 2859414, state = "closed" and type = "issue" sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
268087542 | MDU6SXNzdWUyNjgwODc1NDI= | 31 | Idea: colour scheme based on sha256 of db | simonw 9599 | closed | 0 |  | v1 stretch goals 2859414 | 1 | 2017-10-24T15:52:38Z | 2018-05-28T18:10:45Z | 2017-11-09T14:14:59Z | OWNER |  |  | datasette 107914493 | issue |  |  | { "url": "https://api.github.com/repos/simonw/datasette/issues/31/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed
268592894 | MDU6SXNzdWUyNjg1OTI4OTQ= | 43 | While running, server should spot new db files added to its directory | simonw 9599 | closed | 0 |  | v1 stretch goals 2859414 | 1 | 2017-10-26T00:32:37Z | 2017-11-14T08:25:53Z | 2017-11-14T08:25:37Z | OWNER |  | Maybe in each request it checks the time and if 5s has elapsed since it last scanned the directory it scans it again. This would allow people with dedicated hosting to run the app there and just upload new datasets whenever they want. It would also be very convenient for development. | datasette 107914493 | issue |  |  | { "url": "https://api.github.com/repos/simonw/datasette/issues/43/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed
267769034 | MDU6SXNzdWUyNjc3NjkwMzQ= | 21 | Use Sanic configuration mechanism | simonw 9599 | closed | 0 |  | v1 stretch goals 2859414 | 1 | 2017-10-23T18:25:14Z | 2017-11-10T20:45:42Z | 2017-11-10T20:45:42Z | OWNER |  |  | datasette 107914493 | issue |  |  | { "url": "https://api.github.com/repos/simonw/datasette/issues/21/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed
268106803 | MDU6SXNzdWUyNjgxMDY4MDM= | 32 | Try running SQLite queries in a separate thread | simonw 9599 | closed | 0 |  | v1 stretch goals 2859414 | 1 | 2017-10-24T16:48:42Z | 2017-11-09T14:05:56Z | 2017-11-09T14:05:56Z | OWNER |  | https://pymotw.com/3/asyncio/executors.html Would be good to have some actual benchmarks so I can evaluate if this is worth it or not. | datasette 107914493 | issue |  |  | { "url": "https://api.github.com/repos/simonw/datasette/issues/32/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed
267517381 | MDU6SXNzdWUyNjc1MTczODE= | 10 | Set up Travis | simonw 9599 | closed | 0 |  | v1 stretch goals 2859414 | 1 | 2017-10-23T01:29:07Z | 2017-11-04T23:48:57Z | 2017-11-04T23:48:57Z | OWNER |  |  | datasette 107914493 | issue |  |  | { "url": "https://api.github.com/repos/simonw/datasette/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed
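The filtered view above is a plain filter on the issues table. A minimal SQL sketch of the equivalent query, using only columns that appear in the CREATE TABLE statement below:

```sql
-- Sketch: reproduce the 5-row view shown above
-- (closed issues in milestone 2859414, newest update first).
select id, number, title, state, updated_at, state_reason
from issues
where milestone = 2859414
  and state = 'closed'
  and type = 'issue'
order by updated_at desc;
```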
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [pull_request] TEXT,
    [body] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT,
    [active_lock_reason] TEXT,
    [performed_via_github_app] TEXT,
    [reactions] TEXT,
    [draft] INTEGER,
    [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
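The user, milestone and repo columns are integer foreign keys; the labels in the table above (for example "simonw 9599", "v1 stretch goals 2859414", "datasette 107914493") pair a looked-up display value with the raw id. A sketch of how those labels could be resolved with joins, assuming the referenced users, milestones and repos tables expose login, title and name columns respectively (those schemas are not shown on this page, so treat the column names as illustrative):

```sql
-- Sketch: join out the foreign keys to display labels.
-- Assumes users.login, milestones.title and repos.name exist;
-- only the [issues] schema is shown here.
select
    issues.number,
    issues.title,
    users.login as user_login,            -- rendered as "simonw 9599"
    milestones.title as milestone_title,  -- rendered as "v1 stretch goals 2859414"
    repos.name as repo_name,              -- rendered as "datasette 107914493"
    issues.updated_at
from issues
join users on users.id = issues.user
join milestones on milestones.id = issues.milestone
join repos on repos.id = issues.repo
where issues.milestone = 2859414
  and issues.state = 'closed'
  and issues.type = 'issue'
order by issues.updated_at desc;
```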