
pull_requests


8 rows where user = 536941


This data is also available as JSON or CSV.

Suggested facets: state, draft, base, repo, created_at (date), updated_at (date), closed_at (date), merged_at (date)
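The "8 rows where user = 536941" filter above is just a SQL `WHERE` clause under the hood. A minimal sketch with Python's `sqlite3`, using an in-memory stand-in for the `pull_requests` table (the inserted rows are illustrative samples, not the full dataset):

```python
import sqlite3

# In-memory stand-in for the pull_requests table; the real data lives in a
# github-to-sqlite database file. Only a few columns are kept here.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE pull_requests "
    "(id INTEGER PRIMARY KEY, number INTEGER, state TEXT, user INTEGER)"
)
conn.executemany(
    "INSERT INTO pull_requests VALUES (?, ?, ?, ?)",
    [
        (764281468, 1495, "open", 536941),
        (811088967, 1582, "closed", 536941),
        (1, 1, "open", 9599),  # a row by a different user, to show filtering
    ],
)

def prs_by_user(conn, user_id):
    """Equivalent of the page's filter: rows where user = user_id."""
    return conn.execute(
        "SELECT number, state FROM pull_requests WHERE user = ? ORDER BY id",
        (user_id,),
    ).fetchall()
```

For example, `prs_by_user(conn, 536941)` returns only the two rows whose `user` column matches that id.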

(The columns assignee, milestone, merged_by, and auto_merge are empty for all eight rows and are omitted below.)

#1495 · Allow routes to have extra options · open
  id: 764281468 · node_id: PR_kwDOBm6k_c4tjgJ8 · draft: 0 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: datasette (107914493)
  created_at: 2021-10-22T15:00:45Z · updated_at: 2021-11-19T15:36:27Z
  merge_commit_sha: 44969c5654748fb26ad05ab37245678f245f32e5
  head: fe7fa14b39846b919dfed44514a7d18d67e01dfd · base: ff9ccfb0310501a3b4b4ca24d73246a8eb3e7914
  url: https://github.com/simonw/datasette/pull/1495
  body: Right now, datasette routes can only be a 2-tuple of `(regex, view_fn)`. If it were possible for datasette to handle extra options, like [standard Django does](https://docs.djangoproject.com/en/3.2/topics/http/urls/#passing-extra-options-to-view-functions), it would add flexibility for plugin authors. For example, if extra options were enabled, then it would be easy to make a single table the home page (#1284). This plugin would accomplish it:

  ```python
  from datasette import hookimpl
  from datasette.views.table import TableView

  @hookimpl
  def register_routes(datasette):
      return [
          (r"^/$", TableView.as_view(datasette), {'db_name': 'DB_NAME', 'table': 'TABLE_NAME'})
      ]
  ```

#1582 · don't set far expiry if hash is '000' · closed
  id: 811088967 · node_id: PR_kwDOBm6k_c4wWDxH · draft: 0 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: datasette (107914493)
  created_at: 2021-12-28T18:16:13Z · updated_at: 2022-03-24T04:07:58Z · closed_at: 2022-03-24T04:07:58Z
  merge_commit_sha: e7249b52558b4ddcd92e68a13bd02fb54a2b92f8
  head: 216f3b32b88d85b33e45937ed89ac919d82c23b4 · base: 8c401ee0f054de2f568c3a8302c9223555146407
  url: https://github.com/simonw/datasette/pull/1582
  body: This will close #1581. I couldn't find any unit tests related to testing hashed urls, and I know that you want to break that code out of the core application (#1561), so I'm not quite sure what you would like me to do for testing.

#455 · in extract code, check equality with IS instead of = for nulls · closed (merged)
  id: 1000800283 · node_id: PR_kwDOCGYnMM47pwAb · draft: 0 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: sqlite-utils (140912432)
  created_at: 2022-07-19T13:40:25Z · updated_at: 2022-08-27T14:45:03Z · closed_at: 2022-08-27T14:45:03Z · merged_at: 2022-08-27T14:45:03Z
  merge_commit_sha: c5f8a2eb1a81a18b52825cc649112f71fe419b12
  head: 1b35a92e3ede76f0f29f6f8dcd899f44b2abbb02 · base: 855bce8c3823718def13e0b8928c58bf857e41b2
  url: https://github.com/simonw/sqlite-utils/pull/455
  body: sqlite "IS" is equivalent to SQL "IS NOT DISTINCT FROM". Closes #423.

#1820 · [SPIKE] Don't truncate query CSVs · closed
  id: 1067479608 · node_id: PR_kwDOBm6k_c4_oHI4 · draft: 1 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: datasette (107914493)
  created_at: 2022-09-26T17:27:01Z · updated_at: 2022-10-07T16:12:17Z · closed_at: 2022-10-07T16:12:17Z
  merge_commit_sha: bd62037d5cdf72c06fd4d78da162cbc1526c1ab6
  head: 9bead2a95b74f3a2e0be2a9f1cb1f624aec22c2f · base: eff112498ecc499323c26612d707908831446d25
  url: https://github.com/simonw/datasette/pull/1820
  body: Relates to #526. This is a minimal set of changes needed for having *query* CSVs attempt to download all the rows. What's good about it is the minimalism. What's bad about it:
    1. We are abusing the `_size` argument to indicate we don't want truncation, which isn't the most obvious thing. Additionally, there are various checks that make sure the "_size" URL parameter is a positive integer, which we are relying on to prevent overloading.
    2. The default CSV on a table page will use the max_returned_rows argument. Changing this could be a breaking change, since that's currently a place that has some facilities for pagination. Additionally, I think there's a limit under the hood somewhere which, if we removed it, could lead to sql timeouts.
    3. There are similar reasons for leaving the current streaming method alone, as the current methods could allow for downloading very large files that could have a sql timeout if we tried to get them in one go.
  Documentation preview: https://datasette--1820.org.readthedocs.build/en/1820/

#1835 · use inspect data for hash and file size · closed (merged)
  id: 1079437524 · node_id: PR_kwDOBm6k_c5AVujU · draft: 0 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: datasette (107914493)
  created_at: 2022-10-06T18:25:24Z · updated_at: 2022-10-27T20:51:30Z · closed_at: 2022-10-06T20:06:07Z · merged_at: 2022-10-06T20:06:07Z
  merge_commit_sha: eff112498ecc499323c26612d707908831446d25
  head: b4b92df38c8ca8a6faeec4daaf803cee80e0dbed · base: bbf33a763537a1d913180b22bd3b5fe4a5e5b252
  url: https://github.com/simonw/datasette/pull/1835
  body: `inspect_data` should already include the hash and the db file size, so this PR takes advantage of using those instead of always recalculating. Should help a lot on startup with large DBs. Closes #1834.

#1837 · Make hash and size a lazy property · closed (merged)
  id: 1079714826 · node_id: PR_kwDOBm6k_c5AWyQK · draft: 0 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: datasette (107914493)
  created_at: 2022-10-06T23:51:22Z · updated_at: 2022-10-27T20:51:21Z · closed_at: 2022-10-27T20:51:20Z · merged_at: 2022-10-27T20:51:20Z
  merge_commit_sha: b912d92b651c4f0b5137da924d135654511f0fe0
  head: c12447e484036ace9a685bd04b9f0e1fa66541c8 · base: eff112498ecc499323c26612d707908831446d25
  url: https://github.com/simonw/datasette/pull/1837
  body: Many apologies, @simonw. My previous PR #1835 did not really solve the problem, because the name of the database is often not known to the database object in its init method. I took a cue from how you dealt with this issue and made hash a lazy property, and did something similar with size.
  Documentation preview: https://datasette--1837.org.readthedocs.build/en/1837/

#1870 · don't use immutable=1, only mode=ro · open
  id: 1102353255 · node_id: PR_kwDOBm6k_c5BtJNn · draft: 0 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: datasette (107914493)
  created_at: 2022-10-27T23:33:04Z · updated_at: 2023-10-03T19:12:37Z
  merge_commit_sha: fc2d316f9e22593d48036e9d81fe972bb5973016
  head: 4faa4fd3b3e7f5eae758b713d0a121b960e2e261 · base: bf00b0b59b6692bdec597ac9db4e0b497c5a47b4
  url: https://github.com/simonw/datasette/pull/1870
  body: Opening db files in immutable mode sometimes leads to the file being mutated, which causes duplication in the docker image layers: see #1836, #1480. That this happens in "immutable" mode is surprising, because the sqlite docs say that setting this should open the database as read only. https://www.sqlite.org/c3ref/open.html
    > immutable: The immutable parameter is a boolean query parameter that indicates that the database file is stored on read-only media. When immutable is set, SQLite assumes that the database file cannot be changed, even by a process with higher privilege, and so the database is opened read-only and all locking and change detection is disabled. Caution: Setting the immutable property on a database file that does in fact change can result in incorrect query results and/or [SQLITE_CORRUPT](https://www.sqlite.org/rescode.html#corrupt) errors. See also: [SQLITE_IOCAP_IMMUTABLE](https://www.sqlite.org/c3ref/c_iocap_atomic.html).
  Perhaps this is a bug in sqlite?
  Documentation preview: https://datasette--1870.org.readthedocs.build/en/1870/

#2003 · Show referring tables and rows when the referring foreign key is compound · open
  id: 1215742203 · node_id: PR_kwDOBm6k_c5IdsD7 · draft: 0 · locked: 0
  user: fgregg (536941) · author_association: CONTRIBUTOR · repo: datasette (107914493)
  created_at: 2023-01-24T21:31:31Z · updated_at: 2023-01-25T18:44:42Z
  merge_commit_sha: fb3abeceb2785a582d2c120c7c1bf7dc3cd1de05
  head: 1e5b42f9d6490926300953837cbaa571ef81d772 · base: e4ebef082de90db4e1b8527abc0d582b7ae0bc9d
  url: https://github.com/simonw/datasette/pull/2003
  body: sqlite foreign keys can be compound, but that is not as well supported by datasette as single-column foreign keys. In particular:
    1. in a table view, there is no link from the row to the referenced row if the foreign key is compound;
    2. in a row view, there is no listing of tables and rows that refer to the focal row if those referencing foreign keys are compound.
  Both of these issues are discussed in #1099. This PR only fixes the second one, because it's not clear what the right UX is for the first issue.
  ![Screenshot 2023-01-24 at 19-47-40 nlrb bargaining_unit](https://user-images.githubusercontent.com/536941/214454749-d53deead-4151-4329-a5d4-8a7a454de7d3.png)
  Some things that might not be desirable about this approach:
    1. it changes the external API, by changing `column` => `columns` and `other_column` => `other_columns` (see inline comment for more discussion);
    2. there are various places where the plural foreign keys have to be checked for length and discarded or transformed to singular.

Advanced export

JSON shape: default, array, newline-delimited, object
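Of the JSON shapes listed, the newline-delimited one emits one object per line, which is convenient for streaming large exports. A minimal sketch of consuming that shape in Python (the sample payload below is illustrative, not real export output):

```python
import json

# Two sample records in newline-delimited JSON form: one object per line.
ndjson = (
    '{"number": 1495, "state": "open"}\n'
    '{"number": 1582, "state": "closed"}\n'
)

# Parse each non-empty line independently; a real consumer could do this
# line by line over a streaming HTTP response instead of a string.
rows = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
```

This is the main advantage over the default and array shapes: a consumer never has to hold the whole document in memory to start parsing.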


CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id]),
   [auto_merge] TEXT
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
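To see how the `user` foreign key and its index behave in practice, here is a minimal sketch against a cut-down version of this schema (only a few columns kept; table and column names as above, data illustrative):

```python
import sqlite3

# Cut-down version of the schema above: users plus pull_requests with the
# user foreign key and its index.
SCHEMA = """
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    login TEXT
);
CREATE TABLE pull_requests (
    id INTEGER PRIMARY KEY,
    number INTEGER,
    state TEXT,
    user INTEGER REFERENCES users(id)
);
CREATE INDEX idx_pull_requests_user ON pull_requests (user);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default
conn.executescript(SCHEMA)

conn.execute("INSERT INTO users VALUES (536941, 'fgregg')")
conn.execute("INSERT INTO pull_requests VALUES (764281468, 1495, 'open', 536941)")

# The idx_pull_requests_user index is what makes filters and joins on the
# user column cheap as the table grows.
row = conn.execute(
    "SELECT pr.number, u.login "
    "FROM pull_requests pr JOIN users u ON pr.user = u.id"
).fetchone()
```

With `PRAGMA foreign_keys = ON`, inserting a pull request whose `user` value has no matching `users.id` row would raise an `IntegrityError`.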
Powered by Datasette · Queries took 34.419ms · About: github-to-sqlite