issues
17 rows where comments = 1, state = "open" and type = "pull" sorted by updated_at descending
Suggested facets: user, author_association, draft, updated_at (date)
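For reference, the row selection described above (comments = 1, state "open", type "pull", sorted by updated_at descending) can be reproduced against a local copy of this database. The snippet below is a minimal sketch only: the file name github.db is a placeholder, and it assumes a SQLite file containing the issues table defined by the schema at the bottom of this page.

import sqlite3

# Placeholder path: any SQLite file containing the issues table from the
# schema at the bottom of this page will work here.
conn = sqlite3.connect("github.db")
sql = """
    SELECT id, number, title, user, updated_at
    FROM issues
    WHERE comments = 1 AND state = 'open' AND type = 'pull'
    ORDER BY updated_at DESC
"""
for row in conn.execute(sql):
    print(row)
conn.close()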
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1983600865 | PR_kwDOBm6k_c5e7WH7 | 2206 | Bump the python-packages group with 1 update | dependabot[bot] 49699333 | open | 0 | | | 1 | 2023-11-08T13:18:56Z | 2023-12-08T13:46:24Z | | CONTRIBUTOR | simonw/datasette/pulls/2206 | Bumps the python-packages group with 1 update: black. Release notes: Sourced from black's releases. ... (truncated) Changelog: Sourced from black's changelog. ... (truncated) Commits. Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting. Dependabot commands and options: You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions :books: Documentation preview :books:: https://datasette--2206.org.readthedocs.build/en/2206/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1864112887 | PR_kwDOBm6k_c5Yo7bk | 2151 | Test Datasette on multiple SQLite versions | asg017 15178711 | open | 0 | | | 1 | 2023-08-23T22:42:51Z | 2023-08-23T22:58:13Z | | CONTRIBUTOR | simonw/datasette/pulls/2151 | still testing, hope it works! :books: Documentation preview :books:: https://datasette--2151.org.readthedocs.build/en/2151/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1 | |
1641117021 | PR_kwDODtX3eM5M66op | 6 | Add permalink virtual field to items table | xavdid 1231935 | open | 0 | | | 1 | 2023-03-26T22:22:38Z | 2023-03-29T18:38:52Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/hacker-news-to-sqlite/pulls/6 | I added a virtual column (no storage overhead) to the output that easily links back to the source. It works nicely out of the box with datasette: I got bit a bit by https://github.com/simonw/sqlite-utils/issues/411, so I went with a manual I also added my best-guess instructions for local development on this package. I'm shooting in the dark, so feel free to replace with how you work on it locally. | hacker-news-to-sqlite 248903544 | pull | | | { "url": "https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/6/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
473288428 | MDExOlB1bGxSZXF1ZXN0MzAxNDgzNjEz | 564 | First proof-of-concept of Datasette Library | simonw 9599 | open | 0 | | | 1 | 2019-07-26T10:22:26Z | 2023-02-07T15:14:11Z | | OWNER | simonw/datasette/pulls/564 | Refs #417. Run it like this: Uses a new plugin hook - available_databases() | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1 | |
1556065335 | PR_kwDOBm6k_c5Ie5nA | 2004 | use single quotes for string literals, fixes #2001 | cldellow 193185 | open | 0 | | | 1 | 2023-01-25T05:08:46Z | 2023-02-01T06:37:18Z | | CONTRIBUTOR | simonw/datasette/pulls/2004 | This modernizes some uses of double quotes for string literals to use only single quotes, fixes simonw/datasette#2001 While developing it, I manually enabled the stricter mode by using the code snippet at https://gist.github.com/cldellow/85bba507c314b127f85563869cd94820 I think that code snippet isn't generally safe/portable, so I haven't tried to automate it in the tests. :books: Documentation preview :books:: https://datasette--2004.org.readthedocs.build/en/2004/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1538342965 | PR_kwDOBm6k_c5HpNYo | 1996 | Document custom json encoder | eyeseast 25778 | open | 0 | | | 1 | 2023-01-18T16:54:14Z | 2023-01-19T12:55:57Z | | CONTRIBUTOR | simonw/datasette/pulls/1996 | Closes #1983 All documentation here. Edits welcome. :books: Documentation preview :books:: https://datasette--1996.org.readthedocs.build/en/1996/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1363280254 | PR_kwDODFdgUs4-cIa_ | 76 | Add organization support to repos command | OverkillGuy 2757699 | open | 0 | | | 1 | 2022-09-06T13:21:42Z | 2022-09-06T13:59:08Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/76 | New --organization flag to signify all given "usernames" are private orgs. Adapts API URL to the organization path instead. Not the best implementation, but a first draft to talk around Fixes #75 (badly, no tests, overly vague, untested) | github-to-sqlite 207052882 | pull | | | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/76/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1268121674 | PR_kwDOBm6k_c45fz-O | 1757 | feat: add a wildcard for _json columns | ytjohn 163156 | open | 0 | | | 1 | 2022-06-11T01:01:17Z | 2022-09-06T00:51:21Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1757 | This allows _json to accept a wildcard for when there are many JSON columns that the user wants to convert. I hope this is useful. I've tested it on our datasette and haven't ran into any issues. I imagine on a large set of results, there could be some performance issues, but it will probably be negligible for most use cases. On a side note, I ran into an issue where I had to upgrade black on my system beyond the pinned version in setup.py. Here is the upstream issue: https://github.com/psf/black/issues/2964. I didn't include this in the PR yet since I didn't look into the issue too far, but I can if you would like. | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
988493790 | MDExOlB1bGxSZXF1ZXN0NzI3MzkwODM1 | 36 | Correct naming of tool in readme | badboy 2129 | open | 0 | | | 1 | 2021-09-05T12:05:40Z | 2022-01-06T16:04:46Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/36 | | dogsheep-photos 256834907 | pull | | | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
975161924 | MDExOlB1bGxSZXF1ZXN0NzE2MzU3OTgy | 66 | Add --merged-by flag to pull-requests sub command | sarcasticadmin 30531572 | open | 0 | | | 1 | 2021-08-20T00:57:55Z | 2021-09-28T21:50:31Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/66 | Description: Proposing a solution to the API limitation for This approach might cause larger repos to hit rate limits called out in https://github.com/dogsheep/github-to-sqlite/issues/51 but seems to work well in the repos I tested and included below. Old Behavior New Behavior Testing: Picking some repo that has more than one merger (datasette only has 1 😉 ) Without the flag the Individual PRs passed via | github-to-sqlite 207052882 | pull | | | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/66/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } | 0 | |
743071410 | MDExOlB1bGxSZXF1ZXN0NTIxMDU0NjEy | 13 | SQLite does not have case sensitive columns | tomaskrehlik 1689944 | open | 0 | | | 1 | 2020-11-14T20:12:32Z | 2021-08-24T13:28:26Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/13 | This solves a weird issue when there is record with metadata key that is only different in letter cases. See the test for details. | healthkit-to-sqlite 197882382 | pull | | | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
978086284 | MDExOlB1bGxSZXF1ZXN0NzE4NzM0MTkx | 22 | Make sure that case-insensitive column names are unique | FabianHertwig 32016596 | open | 0 | | | 1 | 2021-08-24T13:13:38Z | 2021-08-24T13:26:20Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/22 | This closes #21. When there are metadata entries with the same case insensitive string, then there is an error when trying to create a new column for that metadata entry in the database table, because a column with that case insensitive name already exists. The code added in this PR checks if a key already exists in a record and if so adds a number at its end. The resulting column names look like the example below then. Interestingly, the column names viewed with Datasette are not case insensitive. | healthkit-to-sqlite 197882382 | pull | | | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/22/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
970463436 | MDExOlB1bGxSZXF1ZXN0NzEyNDEyODgz | 1434 | Enrich arbitrary query results with foreign key links and column descriptions | simonw 9599 | open | 0 | | | 1 | 2021-08-13T14:43:01Z | 2021-08-19T21:18:58Z | | OWNER | simonw/datasette/pulls/1434 | Refs #1293, follows #942. | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
718395987 | MDExOlB1bGxSZXF1ZXN0NTAwNzk4MDkx | 1008 | Add json_loads and json_dumps jinja2 filters | mhalle 649467 | open | 0 | | | 1 | 2020-10-09T20:11:34Z | 2020-12-15T02:30:28Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1008 | | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
723982480 | MDExOlB1bGxSZXF1ZXN0NTA1NDUzOTAw | 1030 | Make `package` command deal with a configuration directory argument | frankier 299380 | open | 0 | | | 1 | 2020-10-18T11:07:02Z | 2020-10-19T08:01:51Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1030 | Currently if we run | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
501773982 | MDExOlB1bGxSZXF1ZXN0MzIzOTgzNzMy | 579 | New connection pooling | simonw 9599 | open | 0 | | | 1 | 2019-10-02T23:22:19Z | 2019-11-15T22:57:21Z | | OWNER | simonw/datasette/pulls/579 | See #569 | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
355299310 | MDExOlB1bGxSZXF1ZXN0MjExODYwNzA2 | 363 | Search all apps during heroku publish | kevboh 436032 | open | 0 | | | 1 | 2018-08-29T19:25:10Z | 2018-08-31T14:39:45Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/363 | Adds the | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
Advanced export
JSON shape: default, array, newline-delimited, object
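The same rows can also be fetched programmatically through the JSON export. The sketch below is illustrative only: the base URL is a placeholder for wherever this Datasette instance is hosted, and the _sort_desc/_shape query parameters follow Datasette's table JSON API (the newline-delimited shape is requested with _nl=on instead of _shape).

import json
from urllib.request import urlopen

# Placeholder URL: replace with the actual host and database path of this instance.
base_url = "https://example.com/github/issues.json"
params = "?comments=1&state=open&type=pull&_sort_desc=updated_at&_shape=array"
with urlopen(base_url + params) as response:
    rows = json.load(response)
print(len(rows), "rows exported")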
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);