pull_requests
4 rows where draft = 1, repo = 107914493 and state = "closed"
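The filtered view below is equivalent to a plain SELECT over the table. A minimal sketch of that query, assuming the page's filters map directly onto a WHERE clause (the ascending sort on id matches the header's sort indicator):

```sql
-- Sketch of the SQL behind this filtered view
-- (assumes the filters translate directly to a WHERE clause)
select *
from pull_requests
where draft = 1
  and repo = 107914493
  and state = 'closed'
order by id;
```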
id ▼ | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | repo | url | merged_by | auto_merge |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
391924509 | MDExOlB1bGxSZXF1ZXN0MzkxOTI0NTA5 | 703 | closed | 0 | WIP implementation of writable canned queries | simonw 9599 | Refs #698. | 2020-03-21T22:23:51Z | 2020-06-03T00:08:14Z | 2020-06-02T23:57:35Z | | 80c5a74a947e63673389604de12e80fa27305454 | | | 1 | 61e40e917efc43a8aea5298a22badbb6eaea3fa1 | 89c4ddd4828623888e91a1d2cb396cba12d4e7b4 | OWNER | datasette 107914493 | https://github.com/simonw/datasette/pull/703 | | |
782105066 | PR_kwDOBm6k_c4unfnq | 1512 | closed | 0 | New pattern for async view classes | simonw 9599 | Refs #878 - starting out with the new `AsyncBase` class implementing a pytest-inspired `asyncio` parallel execution mechanism. | 2021-11-16T21:55:44Z | 2021-11-17T01:39:54Z | 2021-11-17T01:39:44Z | | fb57d4474cb1fdaef260e244b1b6f470f1992e40 | | | 1 | 8f757da0750fe7f27b4ed3839bc3ef3650832ad9 | 0156c6b5e52d541e93f0d68e9245f20ae83bc933 | OWNER | datasette 107914493 | https://github.com/simonw/datasette/pull/1512 | | |
1067479608 | PR_kwDOBm6k_c4_oHI4 | 1820 | closed | 0 | [SPIKE] Don't truncate query CSVs | fgregg 536941 | Relates to #526 This is a minimal set of changes needed for having *query* CSVs attempt to download all the rows. What's good about it is the minimalism. What's bad about it: 1. We are abusing the `_size` argument to indicate we don't want truncation, which isn't the most obvious thing. Additionally, there are various checks that make sure the "_size" URL parameter is a positive integer, which we are relying on to prevent overloading. 2. The default CSV on a table page will use the max_returned_rows argument. Changing this could be a breaking change, since that's currently a place that has some facilities for pagination. Additionally, i think there's a limit under the hood somewhere which if we removed could lead to sql timeouts 3. There are similar reasons for leaving the current streaming method alone, as the current methods could allow for downloading very large files that could have a sql timeout if we tried to get them in one go. <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--1820.org.readthedocs.build/en/1820/ <!-- readthedocs-preview datasette end --> | 2022-09-26T17:27:01Z | 2022-10-07T16:12:17Z | 2022-10-07T16:12:17Z | | bd62037d5cdf72c06fd4d78da162cbc1526c1ab6 | | | 1 | 9bead2a95b74f3a2e0be2a9f1cb1f624aec22c2f | eff112498ecc499323c26612d707908831446d25 | CONTRIBUTOR | datasette 107914493 | https://github.com/simonw/datasette/pull/1820 | | |
1303909190 | PR_kwDOBm6k_c5NuBNG | 2053 | closed | 0 | WIP new JSON for queries | simonw 9599 | Refs: - #2049 TODO: - [x] Read queries JSON - Implement error display with `"ok": false` and an errors key - Read queries HTML - Read queries other formats (plugins) - Canned read queries (dispatched to from table) - Write queries (a canned query thing) - Implement different shapes, refactoring to share code with table - Implement a sensible subset of extras, also refactoring to share code with table - Get all tests passing <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--2053.org.readthedocs.build/en/2053/ <!-- readthedocs-preview datasette end --> | 2023-04-05T23:26:15Z | 2023-07-26T18:28:59Z | 2023-07-26T18:26:45Z | | c69f7961e42c1103e281ca061edbe041e212cbb0 | | | 1 | ee24ea94525ace221f1b4d141d01cf56410c2c6d | dda99fc09fb0b5523948f6d481c6c051c1c7b5de | OWNER | datasette 107914493 | https://github.com/simonw/datasette/pull/2053 | | |
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id]),
   [auto_merge] TEXT
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
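Since `user`, `assignee`, `milestone`, `repo` and `merged_by` are foreign keys, resolving them to readable labels takes a join. A hedged sketch: it assumes the companion `users` table exposes a `login` column (adjust the column name to whatever label column that table actually has):

```sql
-- Sketch: resolve the [user] foreign key to an author name.
-- Assumes [users] has a [login] column; this is an assumption,
-- not guaranteed by the schema shown above.
select
  pull_requests.number,
  pull_requests.title,
  users.login as author,
  pull_requests.closed_at
from pull_requests
join users on users.id = pull_requests.[user]
where pull_requests.draft = 1
  and pull_requests.repo = 107914493
  and pull_requests.state = 'closed'
order by pull_requests.closed_at desc;
```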