issues
47 rows where comments = 0, state = "open" and type = "pull", sorted by updated_at descending
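As SQL, a minimal sketch of the filter behind this view (against the [issues] schema shown at the end of the page):

```sql
-- Open pull requests with no comments, most recently updated first.
select *
from issues
where comments = 0
  and state = 'open'
  and [type] = 'pull'
order by updated_at desc;
```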
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1884499674 | PR_kwDODFE5qs5ZtYMc | 13 | use poetry for packages, asdf for versioning, and gh actions for ci | iloveitaly 150855 | open | 0 | | | 0 | 2023-09-06T17:59:16Z | 2023-09-06T17:59:16Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/13 | | google-takeout-to-sqlite 206649770 | pull | | | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1880968405 | PR_kwDOJHON9s5ZhYny | 14 | fix: fix the problem of Chinese character garbling | barretlee 2698003 | open | 0 | | | 0 | 2023-09-04T23:48:28Z | 2023-09-04T23:48:28Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/apple-notes-to-sqlite/pulls/14 | | apple-notes-to-sqlite 611552758 | pull | | | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1866815458 | PR_kwDOBm6k_c5YyF-C | 2159 | Implement Dark Mode colour scheme | jamietanna 3315059 | open | 0 | | | 0 | 2023-08-25T10:46:23Z | 2023-08-25T10:46:35Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2159 | Closes #2095. :books: Documentation preview :books:: https://datasette--2159.org.readthedocs.build/en/2159/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1 | |
1865983069 | PR_kwDOBm6k_c5YvQSi | 2158 | add brand option to metadata.json. | publicmatt 52261150 | open | 0 | | | 0 | 2023-08-24T22:37:41Z | 2023-08-24T22:37:57Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2158 | This adds a brand link to the top navbar if the 'brand' key is populated in metadata.json. The link will be either '#' or use the contents of 'brand_url' in metadata.json for href. I was able to get this done on my own site by replacing :books: Documentation preview :books:: https://datasette--2158.org.readthedocs.build/en/2158/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1802613340 | PR_kwDOBm6k_c5VZhfw | 2100 | Make primary key view accessible to render_cell hook | meowcat 1563881 | open | 0 | | | 0 | 2023-07-13T09:30:36Z | 2023-08-10T13:15:41Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2100 | :books: Documentation preview :books:: https://datasette--2100.org.readthedocs.build/en/2100/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1827436260 | PR_kwDOD079W85WtVyk | 39 | Missing option in datasette instructions | coldclimate 319473 | open | 0 | | | 0 | 2023-07-29T10:34:48Z | 2023-07-29T10:34:48Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/39 | Gotta tell it where to look | dogsheep-photos 256834907 | pull | | | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/39/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1794604602 | PR_kwDOBm6k_c5U-akg | 2096 | Clarify docs for descriptions in metadata | garthk 15906 | open | 0 | | | 0 | 2023-07-08T01:57:58Z | 2023-07-08T01:58:13Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2096 | G'day! I got confused while debugging, earlier today. That's on me, but it does strike me a little repetition in the metadata documentation might help those flicking around it rather than reading it from top to bottom. No worries if you think otherwise. :books: Documentation preview :books:: https://datasette--2096.org.readthedocs.build/en/2096/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1734786661 | PR_kwDOBm6k_c5R0fcK | 2082 | Catch query interrupted on facet suggest row count | redraw 10843208 | open | 0 | | | 0 | 2023-05-31T18:42:46Z | 2023-05-31T18:45:26Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2082 | Just like facet's I've included :books: Documentation preview :books:: https://datasette--2082.org.readthedocs.build/en/2082/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1715468032 | PR_kwDOBm6k_c5QzEAM | 2076 | Datsette gpt plugin | StudioCordillera 130708713 | open | 0 | | | 0 | 2023-05-18T11:22:30Z | 2023-05-18T11:22:45Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2076 | :books: Documentation preview :books:: https://datasette--2076.org.readthedocs.build/en/2076/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1708981860 | PR_kwDOBm6k_c5QdMea | 2074 | sort files by mtime | abbbi 3919561 | open | 0 | | | 0 | 2023-05-14T15:25:15Z | 2023-05-14T15:25:29Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2074 | serving multiple database files and getting tired by the default sort, changes so the sort order puts the latest changed databases to be on top of the list so don't have to scroll down, lazy as i am ;) :books: Documentation preview :books:: https://datasette--2074.org.readthedocs.build/en/2074/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1674322631 | PR_kwDOBm6k_c5OpEz_ | 2061 | Add "Packaging a plugin using Poetry" section in docs | rclement 1238873 | open | 0 | | | 0 | 2023-04-19T07:23:28Z | 2023-04-19T07:27:18Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2061 | This PR adds a new section about packaging a plugin using :books: Documentation preview :books:: https://datasette--2061.org.readthedocs.build/en/2061/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1650984552 | PR_kwDOJHON9s5NbyYN | 13 | use universal command | amlestin 14314871 | open | 0 | | | 0 | 2023-04-02T15:10:54Z | 2023-04-02T15:37:34Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/apple-notes-to-sqlite/pulls/13 | | apple-notes-to-sqlite 611552758 | pull | | | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1639873822 | PR_kwDOBm6k_c5M29tt | 2044 | Expand labels in row view as well (patch for 0.64.x branch) | tmcl-it 82332573 | open | 0 | | | 0 | 2023-03-24T18:44:44Z | 2023-03-24T18:44:57Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2044 | This is a version of #2031 for the 0.64.x branch. :books: Documentation preview :books:: https://datasette--2044.org.readthedocs.build/en/2044/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1586980089 | PR_kwDOBm6k_c5KF-by | 2026 | Avoid repeating primary key columns if included in _col args | runderwood 8513 | open | 0 | | | 0 | 2023-02-16T04:16:25Z | 2023-02-16T04:16:41Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2026 | ...while maintaining given order. Fixes #1975 (if I'm understanding correctly). :books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1581218043 | PR_kwDOBm6k_c5JyqPy | 2025 | Add database metadata to index.html template context | palewire 9993 | open | 0 | | | 0 | 2023-02-12T11:16:58Z | 2023-02-12T11:17:14Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2025 | Fixes #2016 :books: Documentation preview :books:: https://datasette--2025.org.readthedocs.build/en/2025/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1515717718 | PR_kwDOC8tyDs5Gc-VH | 23 | Include workout statistics | badboy 2129 | open | 0 | | | 0 | 2023-01-01T17:29:57Z | 2023-01-01T17:29:57Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/23 | Not sure when this changed (iOS 16 maybe?), but the Adding it as another column at least allows me to pull these out (using SQLite's JSON support). I'm running with this patch on my own data now. | healthkit-to-sqlite 197882382 | pull | | | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/23/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1513238455 | PR_kwDODEm0Qs5GUoPm | 71 | Archive: Fix "ni devices" typo in importer | sometimes-i-send-pull-requests 26161409 | open | 0 | | | 0 | 2022-12-28T23:33:31Z | 2022-12-28T23:33:31Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/71 | | twitter-to-sqlite 206156866 | pull | | | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/71/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1513238314 | PR_kwDODEm0Qs5GUoN6 | 70 | Archive: Import Twitter Circle data | sometimes-i-send-pull-requests 26161409 | open | 0 | | | 0 | 2022-12-28T23:33:09Z | 2022-12-28T23:33:09Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/70 | | twitter-to-sqlite 206156866 | pull | | | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1513238152 | PR_kwDODEm0Qs5GUoMM | 69 | Archive: Import new tweets table name | sometimes-i-send-pull-requests 26161409 | open | 0 | | | 0 | 2022-12-28T23:32:44Z | 2022-12-28T23:32:44Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/69 | Given the code here, it seems like in the past this file was named "tweet.js". In recent exports, it's named "tweets.js". The archive importer needs to be modified to take this into account. Existing logic is reused for importing this table. (However, the resulting table name will be different, matching the different file name -- archive_tweets, rather than archive_tweet). | twitter-to-sqlite 206156866 | pull | | | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1513237982 | PR_kwDODEm0Qs5GUoKL | 68 | Archive: Import mute table | sometimes-i-send-pull-requests 26161409 | open | 0 | | | 0 | 2022-12-28T23:32:06Z | 2022-12-28T23:32:06Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/68 | | twitter-to-sqlite 206156866 | pull | | | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1513237712 | PR_kwDODEm0Qs5GUoG_ | 67 | Add support for app-only bearer tokens | sometimes-i-send-pull-requests 26161409 | open | 0 | | | 0 | 2022-12-28T23:31:20Z | 2022-12-28T23:31:20Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/67 | Previously, twitter-to-sqlite only supported OAuth1 authentication, and the token must be on behalf of a user. However, Twitter also supports application-only bearer tokens, documented here: https://developer.twitter.com/en/docs/authentication/oauth-2-0/bearer-tokens This PR adds support to twitter-to-sqlite for using application-only bearer tokens. To use, the auth.json file just needs to contain a "bearer_token" key instead of "api_key", "api_secret_key", etc. | twitter-to-sqlite 206156866 | pull | | | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/67/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1393330070 | PR_kwDODD6af84__DNJ | 14 | Photo links | redmanmale 6782721 | open | 0 | | | 0 | 2022-10-01T09:44:15Z | 2022-11-18T17:10:49Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/swarm-to-sqlite/pulls/14 | Fixes #9. | swarm-to-sqlite 205429375 | pull | | | { "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1353418822 | PR_kwDODtX3eM497MOV | 5 | The program fails when the user has no submissions | fernand0 2467 | open | 0 | | | 0 | 2022-08-28T17:25:45Z | 2022-08-28T17:25:45Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/hacker-news-to-sqlite/pulls/5 | Tested with: Result: There is a problem of style with the patch (but not sure what to do) because with the new initialization ( submitted = [] ) the part is not needed. Maybe there is a more adequate way of doing this. | hacker-news-to-sqlite 248903544 | pull | | | { "url": "https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1307359454 | PR_kwDOBm6k_c47iWbd | 1772 | Convert to setup.cfg | kfdm 89725 | open | 0 | | | 0 | 2022-07-18T03:39:53Z | 2022-07-18T03:39:53Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1772 | Recent versions of setuptools can run most things from setup.cfg so one can have a simpler version that does not require executing code on install. The bulk of the changes were automated by running https://pypi.org/project/setup-py-upgrade/ with a few minor edits for the bits that it can not auto convert (the initial | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1293698966 | PR_kwDOD079W84600uh | 37 | Fix former command name in readme | DanLipsitt 578773 | open | 0 | | | 0 | 2022-07-05T02:09:13Z | 2022-07-05T02:09:13Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/37 | Looks like a previous commit missed a | dogsheep-photos 256834907 | pull | | | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/37/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1250287607 | PR_kwDODFE5qs44jvRV | 11 | Update README.md | ashanan 11887 | open | 0 | | | 0 | 2022-05-27T03:13:59Z | 2022-05-27T03:13:59Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/11 | Fix typo | google-takeout-to-sqlite 206649770 | pull | | | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/11/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1244082183 | PR_kwDODEm0Qs44PPLy | 66 | Ageinfo workaround | ashanan 11887 | open | 0 | | | 0 | 2022-05-21T21:08:29Z | 2022-05-21T21:09:16Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/66 | I'm not sure if this is due to a new format or just because my ageinfo file is blank, but trying to import an archive would crash when it got to that file. This PR adds a guard clause in the Let me know if you want any changes! | twitter-to-sqlite 206156866 | pull | | | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/66/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1160327106 | PR_kwDODEm0Qs4z_V3w | 65 | Update Twitter dev link, clarify apps vs projects | rixx 2657547 | open | 0 | | | 0 | 2022-03-05T11:56:08Z | 2022-03-05T11:56:08Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/65 | Twitter pushes you heavily towards v2 projects instead of v1 apps – I know the README mentions v1 API compatibility at the top, but I still nearly got turned around here. | twitter-to-sqlite 206156866 | pull | | | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/65/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1149402080 | PR_kwDODFdgUs4zaUta | 70 | scrape-dependents: enable paging through package menu option if present | stanbiryukov 36061055 | open | 0 | | | 0 | 2022-02-24T15:07:25Z | 2022-02-24T15:07:25Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/70 | Some repos organize network dependents by a Package toggle. This PR adds the ability to page through those options and scrape underlying dependents. | github-to-sqlite 207052882 | pull | | | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1046887492 | PR_kwDODFE5qs4uMsMJ | 9 | Removed space from filename My Activity.json | widadmogral 91880982 | open | 0 | | | 0 | 2021-11-08T00:04:31Z | 2021-11-08T00:04:31Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/9 | File name from google takeout has no space. The code only runs without error if filename is "MyActivity.json" and not "My Activity.json". Is it a new change by Google? | google-takeout-to-sqlite 206649770 | pull | | | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1042759769 | PR_kwDOEhK-wc4uAJb9 | 15 | include note tags in the export | d-rep 436138 | open | 0 | | | 0 | 2021-11-02T20:04:31Z | 2021-11-02T20:04:31Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/evernote-to-sqlite/pulls/15 | When parsing the Evernote Here is an example of how to query the data after the script has run: My .enex source file is 3+ years old so I am assuming the structure hasn't changed. Interestingly, my notebook names show up in the tags list where the tag name is prefixed with | evernote-to-sqlite 303218369 | pull | | | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1013506559 | PR_kwDODFdgUs4skaNS | 68 | Add support for retrieving teams / members | philwills 68329 | open | 0 | | | 0 | 2021-10-01T15:55:02Z | 2021-10-01T15:59:53Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/68 | Adds a method for retrieving all the teams within an organisation and all the members in those teams. The latter is stored as a join table | github-to-sqlite 207052882 | pull | | | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1001104942 | PR_kwDOBm6k_c4r-EVH | 1475 | feat: allow joins using _through in both directions | bram2000 5268174 | open | 0 | | | 0 | 2021-09-20T15:28:20Z | 2021-09-20T15:28:20Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1475 | Currently the This is an admittedly hacky change to implement bidirectional joins using | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
991206402 | MDExOlB1bGxSZXF1ZXN0NzI5NzA0NTM3 | 1465 | add support for -o --get /path | ctb 51016 | open | 0 | | | 0 | 2021-09-08T14:30:42Z | 2021-09-08T14:31:45Z | | CONTRIBUTOR | simonw/datasette/pulls/1465 | Fixes https://github.com/simonw/datasette/issues/1459 Adds support for If TODO items: - [ ] update documentation - [ ] print out error message when note, '@CTB' is used in this PR to flag code that needs revisiting. | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1 | |
987985935 | MDExOlB1bGxSZXF1ZXN0NzI2OTkwNjgw | 35 | Support for Datasette's --base-url setting | brandonrobertz 2670795 | open | 0 | | | 0 | 2021-09-03T17:47:45Z | 2021-09-03T17:47:45Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-beta/pulls/35 | This makes it so you can use Dogsheep if you're using Datasette with the | dogsheep-beta 197431109 | pull | | | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/35/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
981690086 | MDExOlB1bGxSZXF1ZXN0NzIxNjg2NzIx | 67 | Replacing step ID key with step_id | jshcmpbll 16374374 | open | 0 | | | 0 | 2021-08-28T01:26:41Z | 2021-08-28T01:27:00Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/67 | Workflows that have an e.g. Changes: I'm proposing that the key for Special thanks to @sarcasticadmin @egiffen and @ruebenramirez for helping a bit on this 😄 | github-to-sqlite 207052882 | pull | | | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/67/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
925384329 | MDExOlB1bGxSZXF1ZXN0NjczODcyOTc0 | 7 | Add instagram-to-sqlite | gavindsouza 36654812 | open | 0 | | | 0 | 2021-06-19T12:26:16Z | 2021-07-28T07:58:59Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep.github.io/pulls/7 | The tool covers only chat imports at the time of opening this PR but I'm planning to import everything else that I feel inquisitive about | dogsheep.github.io 214746582 | pull | | | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
947596222 | MDExOlB1bGxSZXF1ZXN0NjkyNTU3Mzgx | 1399 | Multiple sort | jgryko5 87192257 | open | 0 | | | 0 | 2021-07-19T12:20:14Z | 2021-07-19T12:20:14Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1399 | Closes #197. I have added support for sorting by multiple parameters as mentioned in the issue above, and together with that, a suggestion on how to implement such sorting in the user interface. | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
892383270 | MDExOlB1bGxSZXF1ZXN0NjQ1MTAwODQ4 | 12 | Recovering of malformed ENEX file | engdan77 8431437 | open | 0 | | | 0 | 2021-05-15T07:49:31Z | 2021-05-15T19:57:50Z | | FIRST_TIMER | dogsheep/evernote-to-sqlite/pulls/12 | Hey .. Awesome work developing this project, that I found very useful to me and saved me some work.. Thanks.. :) Some background to this PR... I've been searching around for a tool allowing me to transforming my personal collection of Evernote notes to a format easier to search and potentially easier import to future services. Now I discovered problem processing my large data ~5GB using the existing source using Pythons builtin xml-parser that unfortunately was unable to succeed without exception breaking the process. My first attempt I tried to adapt to more robust lxml package allowing huge data and with "recover", but even if it worked better it also failed processing the whole data. Even using the memory efficient etree.iterparse() it also unfortunately got into trouble. And with no luck finding any other libraries successfully parsing this enormous file I instead chose to build a "hugexmlparser" module that allows parsing this huge file using yield (on a byte-to-byte-level) and allows you to set a maximum size for <note> to cater for potential malformed or undesirable large attachments to export, should succeed covering potential exceptions. Some cases found where the parses discover malformed XML within <content> so also in those cases try to save as much as possible by escaping (to be dealt at a later stage, better than nothing), and if a missing end </note> before new (malformed?) it would add this after encounter a new start-tag. The code for the recovery process is a bit rough and for certain room for refactoring, but at the moment is seem to achieve what I wanted. Now with the above we pass this a minor changed version of save_note_recovery() assure the existing works. Also adding this as a new recover-enex command to click and kept the original options. A couple of new tests was added as well to check against using this command. Now this currently works to me, but thought I might share a PR in such as you find use for this yourself or found useful to others finding this repository. As a second step .. When the time allows it would have been nice to also be able to easily export from SQLite to formatted HTML/MD and attachments saved... but that might perhaps be better a separate project ... or if you or someone else have something that might shared to save some trouble, I would be interested ;-) | evernote-to-sqlite 303218369 | pull | | | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
836064851 | MDExOlB1bGxSZXF1ZXN0NTk2NjI3Nzgw | 18 | Add datetime parsing | n8henrie 1234956 | open | 0 | | | 0 | 2021-03-19T14:34:22Z | 2021-03-19T14:34:22Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/18 | Parses the datetime columns so they are subsequently properly recognized as datetime. Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/17 | healthkit-to-sqlite 197882382 | pull | | | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/18/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
830901133 | MDExOlB1bGxSZXF1ZXN0NTkyMzY0MjU1 | 16 | Add a fallback ID, print if no ID found | n8henrie 1234956 | open | 0 | | | 0 | 2021-03-13T13:38:29Z | 2021-03-13T14:44:04Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/16 | | healthkit-to-sqlite 197882382 | pull | | | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
816601354 | MDExOlB1bGxSZXF1ZXN0NTgwMjM1NDI3 | 241 | Extract expand - work in progress | simonw 9599 | open | 0 | | | 0 | 2021-02-25T16:36:38Z | 2021-02-25T16:36:38Z | | OWNER | simonw/sqlite-utils/pulls/241 | Refs #239. Still needs documentation and CLI implementation. | sqlite-utils 140912432 | pull | | | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/241/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1 | |
793907673 | MDExOlB1bGxSZXF1ZXN0NTYxNTEyNTAz | 15 | added try / except to write_records | ryancheley 9857779 | open | 0 | | | 0 | 2021-01-26T03:56:21Z | 2021-01-26T03:56:21Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/15 | to keep the data write from failing if it came across an error during processing. In particular when trying to convert my HealthKit zip file (and that of my wife's) it would consistently error out with the following: ``` db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: too many SQL variables db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: too many SQL variables db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered healthkit-to-sqlite 8 <module> sys.exit(cli()) core.py 829 call return self.main(args, *kwargs) core.py 782 main rv = self.invoke(ctx) core.py 1066 invoke return ctx.invoke(self.callback, **ctx.params) core.py 610 invoke return callback(args, *kwargs) cli.py 57 cli convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf) utils.py 42 convert_xml_to_sqlite write_records(records, db) utils.py 143 write_records db[table].insert_all( db.py 1899 insert_all self.insert_chunk( db.py 1720 insert_chunk self.insert_chunk( db.py 1720 insert_chunk self.insert_chunk( db.py 1714 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered ``` Adding the try / except in the | healthkit-to-sqlite 197882382 | pull | | | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
723499985 | MDExOlB1bGxSZXF1ZXN0NTA1MDc2NDE4 | 5 | Add fitbit-to-sqlite | mrphil007 4632208 | open | 0 | | | 0 | 2020-10-16T20:04:05Z | 2020-10-16T20:04:05Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep.github.io/pulls/5 | | dogsheep.github.io 214746582 | pull | | | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
655974395 | MDExOlB1bGxSZXF1ZXN0NDQ4MzU1Njgw | 30 | Handle empty bucket on first upload. Allow specifying the endpoint_url for services other than S3 (like b2 and digitalocean spaces) | scanner 110038 | open | 0 | | | 0 | 2020-07-13T16:15:26Z | 2020-07-13T16:15:26Z | | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/30 | Finally got around to trying dogsheep-photos but I want to use backblaze's b2 service instead of AWS S3. Had to add a way to optionally specify the endpoint_url to connect to. Then with the bucket being empty the initial key retrieval would fail. Probably a better way to see that the bucket is empty than doing a test inside the paginator loop. Also probably a better way to specify the endpoint_url as we get and test for it twice using the same code in two different places but did not want to spend too much time worrying about it. | dogsheep-photos 256834907 | pull | | | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/30/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
440325850 | MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2 | 452 | SQL builder utility classes | russss 45057 | open | 0 | | | 0 | 2019-05-04T13:57:47Z | 2019-05-04T14:03:04Z | | CONTRIBUTOR | simonw/datasette/pulls/452 | This adds a straightforward set of classes to aid in the construction of SQL queries. My plan for this was to allow plugins to manipulate the Datasette-generated SQL in a more structured way. I'm not sure that's going to work, but I feel like this is still a step forward - it reduces the number of intermediate variables in There are a fair number of minor structure changes in here too as I've tried to make the ordering of | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
359075028 | MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx | 364 | Support for other types of databases using external connectors | jsancho-gpl 11912854 | open | 0 | | | 0 | 2018-09-11T14:31:47Z | 2018-09-11T14:31:47Z | | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/364 | This PR is related to #293, but now all commits have been merged. The purpose is to support other file formats that aren't SQLite, like files with PyTables format. I've tried to accomplish that using external connectors published with entry points. The modifications in the original datasette code are minimal and many are in a separated file. | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
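A hedged example of joining this table outward through its foreign keys — the [login] and [full_name] column names on [users] and [repos] are assumptions not shown on this page, as is extracting reaction counts from the JSON text stored in [reactions]:

```sql
-- Hypothetical query: pair each of these pull requests with its
-- author and repository, and pull the "+1" reaction count out of
-- the JSON stored as text in the [reactions] column.
select
  issues.number,
  issues.title,
  users.login,       -- assumed column on [users]
  repos.full_name,   -- assumed column on [repos]
  json_extract(issues.reactions, '$."+1"') as plus_ones
from issues
join users on users.id = issues.[user]
join repos on repos.id = issues.repo
where issues.comments = 0
  and issues.state = 'open'
  and issues.[type] = 'pull'
order by issues.updated_at desc;
```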