
issues


47 rows where comments = 0, state = "open" and type = "pull" sorted by updated_at descending
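
This view corresponds to roughly the following SQL (a sketch of the filter described above, using the bracket quoting style from the schema at the bottom of the page):

```
select *
from [issues]
where [comments] = 0
  and [state] = 'open'
  and [type] = 'pull'
order by [updated_at] desc;
```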


repo (13)

  • datasette 17
  • twitter-to-sqlite 7
  • healthkit-to-sqlite 4
  • google-takeout-to-sqlite 3
  • github-to-sqlite 3
  • dogsheep-photos 3
  • dogsheep.github.io 2
  • evernote-to-sqlite 2
  • apple-notes-to-sqlite 2
  • sqlite-utils 1
  • dogsheep-beta 1
  • swarm-to-sqlite 1
  • hacker-news-to-sqlite 1

type (1)

  • pull · 47

state (1)

  • open · 47
id node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
1884499674 PR_kwDODFE5qs5ZtYMc 13 use poetry for packages, asdf for versioning, and gh actions for ci iloveitaly 150855 open 0     0 2023-09-06T17:59:16Z 2023-09-06T17:59:16Z   FIRST_TIME_CONTRIBUTOR dogsheep/google-takeout-to-sqlite/pulls/13
  • build: use poetry for package management, asdf for python version
  • build: cleanup poetry config, add keywords, ignore dist
  • ci: migrate circleci to gh actions
  • fix: dup method definition
google-takeout-to-sqlite 206649770 pull    
{
    "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/13/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1880968405 PR_kwDOJHON9s5ZhYny 14 fix: fix the problem of Chinese character garbling barretlee 2698003 open 0     0 2023-09-04T23:48:28Z 2023-09-04T23:48:28Z   FIRST_TIME_CONTRIBUTOR dogsheep/apple-notes-to-sqlite/pulls/14
  1. The code uses two different spellings of the encoding, mac_roman and macroman; it is unclear whether one of them is a typo.
  2. When the content contains Chinese characters, the export produces garbled text. Changing the encoding to utf8 fixes the issue.
apple-notes-to-sqlite 611552758 pull    
{
    "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/14/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1866815458 PR_kwDOBm6k_c5YyF-C 2159 Implement Dark Mode colour scheme jamietanna 3315059 open 0     0 2023-08-25T10:46:23Z 2023-08-25T10:46:35Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2159

Closes #2095.


:books: Documentation preview :books:: https://datasette--2159.org.readthedocs.build/en/2159/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2159/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1  
1865983069 PR_kwDOBm6k_c5YvQSi 2158 add brand option to metadata.json. publicmatt 52261150 open 0     0 2023-08-24T22:37:41Z 2023-08-24T22:37:57Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2158

This adds a brand link to the top navbar if the 'brand' key is populated in metadata.json. The link's href will be either '#' or the contents of 'brand_url' in metadata.json.

I was able to get this done on my own site by replacing templates/_crumbs.html with a custom version, but I thought it would be nice to incorporate this in the tool directly.
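
A minimal sketch of the metadata.json this would read, using the two keys named above (the values here are hypothetical):

```
{
    "brand": "My Site",
    "brand_url": "https://example.com/"
}
```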


:books: Documentation preview :books:: https://datasette--2158.org.readthedocs.build/en/2158/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2158/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1802613340 PR_kwDOBm6k_c5VZhfw 2100 Make primary key view accessible to render_cell hook meowcat 1563881 open 0     0 2023-07-13T09:30:36Z 2023-08-10T13:15:41Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2100

:books: Documentation preview :books:: https://datasette--2100.org.readthedocs.build/en/2100/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2100/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1827436260 PR_kwDOD079W85WtVyk 39 Missing option in datasette instructions coldclimate 319473 open 0     0 2023-07-29T10:34:48Z 2023-07-29T10:34:48Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep-photos/pulls/39

Gotta tell it where to look

dogsheep-photos 256834907 pull    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/39/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1794604602 PR_kwDOBm6k_c5U-akg 2096 Clarify docs for descriptions in metadata garthk 15906 open 0     0 2023-07-08T01:57:58Z 2023-07-08T01:58:13Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2096

G'day! I got confused while debugging earlier today. That's on me, but it does strike me that a little repetition in the metadata documentation might help those flicking around it rather than reading it from top to bottom. No worries if you think otherwise.


:books: Documentation preview :books:: https://datasette--2096.org.readthedocs.build/en/2096/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2096/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1734786661 PR_kwDOBm6k_c5R0fcK 2082 Catch query interrupted on facet suggest row count redraw 10843208 open 0     0 2023-05-31T18:42:46Z 2023-05-31T18:45:26Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2082

Just like facet's suggest() traps QueryInterrupted for facet columns, we also need to trap get_row_count(), which can hit the timeout if database tables are big enough.

I've included get_columns() inside the block as that's just another query, although it's a really cheap one and might never raise the exception.


:books: Documentation preview :books:: https://datasette--2082.org.readthedocs.build/en/2082/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2082/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1715468032 PR_kwDOBm6k_c5QzEAM 2076 Datsette gpt plugin StudioCordillera 130708713 open 0     0 2023-05-18T11:22:30Z 2023-05-18T11:22:45Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2076

:books: Documentation preview :books:: https://datasette--2076.org.readthedocs.build/en/2076/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2076/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1708981860 PR_kwDOBm6k_c5QdMea 2074 sort files by mtime abbbi 3919561 open 0     0 2023-05-14T15:25:15Z 2023-05-14T15:25:29Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2074

Serving multiple database files and getting tired of the default sort, this changes the sort order so the most recently changed databases are at the top of the list and I don't have to scroll down. Lazy as I am ;)


:books: Documentation preview :books:: https://datasette--2074.org.readthedocs.build/en/2074/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2074/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1674322631 PR_kwDOBm6k_c5OpEz_ 2061 Add "Packaging a plugin using Poetry" section in docs rclement 1238873 open 0     0 2023-04-19T07:23:28Z 2023-04-19T07:27:18Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2061

This PR adds a new section about packaging a plugin using Poetry to the "Writing plugins" page of the documentation.


:books: Documentation preview :books:: https://datasette--2061.org.readthedocs.build/en/2061/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2061/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1650984552 PR_kwDOJHON9s5NbyYN 13 use universal command amlestin 14314871 open 0     0 2023-04-02T15:10:54Z 2023-04-02T15:37:34Z   FIRST_TIME_CONTRIBUTOR dogsheep/apple-notes-to-sqlite/pulls/13   apple-notes-to-sqlite 611552758 pull    
{
    "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/13/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1639873822 PR_kwDOBm6k_c5M29tt 2044 Expand labels in row view as well (patch for 0.64.x branch) tmcl-it 82332573 open 0     0 2023-03-24T18:44:44Z 2023-03-24T18:44:57Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2044

This is a version of #2031 for the 0.64.x branch.


:books: Documentation preview :books:: https://datasette--2044.org.readthedocs.build/en/2044/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2044/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1586980089 PR_kwDOBm6k_c5KF-by 2026 Avoid repeating primary key columns if included in _col args runderwood 8513 open 0     0 2023-02-16T04:16:25Z 2023-02-16T04:16:41Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2026

...while maintaining given order.

Fixes #1975 (if I'm understanding correctly).
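
For illustration, with Datasette's ?_col= parameter a URL shaped like the following (hypothetical database, table and column names) should now render the primary key column once, in the given position, instead of twice:

    /mydb/mytable?_col=title&_col=id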


:books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2026/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1581218043 PR_kwDOBm6k_c5JyqPy 2025 Add database metadata to index.html template context palewire 9993 open 0     0 2023-02-12T11:16:58Z 2023-02-12T11:17:14Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2025

Fixes #2016


:books: Documentation preview :books:: https://datasette--2025.org.readthedocs.build/en/2025/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2025/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1515717718 PR_kwDOC8tyDs5Gc-VH 23 Include workout statistics badboy 2129 open 0     0 2023-01-01T17:29:57Z 2023-01-01T17:29:57Z   FIRST_TIME_CONTRIBUTOR dogsheep/healthkit-to-sqlite/pulls/23

Not sure when this changed (iOS 16 maybe?), but WorkoutStatistics now carries a whole bunch of information about workouts, e.g. for runs it contains the distance (as a <WorkoutStatistics type="HKQuantityTypeIdentifierDistanceWalkingRunning" ...> element).

Adding it as another column at least allows me to pull these out (using SQLite's JSON support). I'm running with this patch on my own data now.
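
For example, something along these lines can pull the distance back out with SQLite's JSON functions (a sketch: the workouts table comes from healthkit-to-sqlite, but the new column name and the exact JSON path here are guesses):

```
select
    id,
    -- column name and JSON path are hypothetical
    json_extract(workout_statistics, '$.sum') as distance
from workouts;
```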

healthkit-to-sqlite 197882382 pull    
{
    "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/23/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1513238455 PR_kwDODEm0Qs5GUoPm 71 Archive: Fix "ni devices" typo in importer sometimes-i-send-pull-requests 26161409 open 0     0 2022-12-28T23:33:31Z 2022-12-28T23:33:31Z   FIRST_TIME_CONTRIBUTOR dogsheep/twitter-to-sqlite/pulls/71   twitter-to-sqlite 206156866 pull    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/71/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1513238314 PR_kwDODEm0Qs5GUoN6 70 Archive: Import Twitter Circle data sometimes-i-send-pull-requests 26161409 open 0     0 2022-12-28T23:33:09Z 2022-12-28T23:33:09Z   FIRST_TIME_CONTRIBUTOR dogsheep/twitter-to-sqlite/pulls/70   twitter-to-sqlite 206156866 pull    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/70/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1513238152 PR_kwDODEm0Qs5GUoMM 69 Archive: Import new tweets table name sometimes-i-send-pull-requests 26161409 open 0     0 2022-12-28T23:32:44Z 2022-12-28T23:32:44Z   FIRST_TIME_CONTRIBUTOR dogsheep/twitter-to-sqlite/pulls/69

Given the code here, it seems like in the past this file was named "tweet.js". In recent exports, it's named "tweets.js". The archive importer needs to be modified to take this into account. Existing logic is reused for importing this table. (However, the resulting table name will be different, matching the different file name -- archive_tweets, rather than archive_tweet).

twitter-to-sqlite 206156866 pull    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/69/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1513237982 PR_kwDODEm0Qs5GUoKL 68 Archive: Import mute table sometimes-i-send-pull-requests 26161409 open 0     0 2022-12-28T23:32:06Z 2022-12-28T23:32:06Z   FIRST_TIME_CONTRIBUTOR dogsheep/twitter-to-sqlite/pulls/68   twitter-to-sqlite 206156866 pull    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/68/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1513237712 PR_kwDODEm0Qs5GUoG_ 67 Add support for app-only bearer tokens sometimes-i-send-pull-requests 26161409 open 0     0 2022-12-28T23:31:20Z 2022-12-28T23:31:20Z   FIRST_TIME_CONTRIBUTOR dogsheep/twitter-to-sqlite/pulls/67

Previously, twitter-to-sqlite only supported OAuth1 authentication, where the token must be on behalf of a user. However, Twitter also supports application-only bearer tokens, documented here: https://developer.twitter.com/en/docs/authentication/oauth-2-0/bearer-tokens This PR adds support to twitter-to-sqlite for using application-only bearer tokens. To use one, the auth.json file just needs to contain a "bearer_token" key instead of "api_key", "api_secret_key", etc.
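
So an auth.json would contain something like this (the key name comes from the PR description; the token value is a placeholder):

```
{
    "bearer_token": "AAAA...your-app-only-token..."
}
```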

twitter-to-sqlite 206156866 pull    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/67/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1393330070 PR_kwDODD6af84__DNJ 14 Photo links redmanmale 6782721 open 0     0 2022-10-01T09:44:15Z 2022-11-18T17:10:49Z   FIRST_TIME_CONTRIBUTOR dogsheep/swarm-to-sqlite/pulls/14
  • add a new column for calculated photo links to the checkin_details view
  • support multiple links split by newline
  • create the events table even if there are no events in the history, to avoid SQL errors

Fixes #9.

swarm-to-sqlite 205429375 pull    
{
    "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/14/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1353418822 PR_kwDODtX3eM497MOV 5 The program fails when the user has no submissions fernand0 2467 open 0     0 2022-08-28T17:25:45Z 2022-08-28T17:25:45Z   FIRST_TIME_CONTRIBUTOR dogsheep/hacker-news-to-sqlite/pulls/5

Tested with:

 hacker-news-to-sqlite user hacker-news.db fernand0

Result:

    Traceback (most recent call last):
      File "/home/ftricas/.pyenv/versions/3.10.6/bin/hacker-news-to-sqlite", line 8, in <module>
        sys.exit(cli())
      File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
        return self.main(*args, **kwargs)
      File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1055, in main
        rv = self.invoke(ctx)
      File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 760, in invoke
        return __callback(*args, **kwargs)
      File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/hacker_news_to_sqlite/cli.py", line 27, in user
        submitted = user.pop("submitted", None) or []
    AttributeError: 'NoneType' object has no attribute 'pop'

There is a problem of style with the patch (though I'm not sure what to do about it): with the new initialization (submitted = []) the part

 or []

is no longer needed. Maybe there is a more adequate way of doing this.

hacker-news-to-sqlite 248903544 pull    
{
    "url": "https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/5/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1307359454 PR_kwDOBm6k_c47iWbd 1772 Convert to setup.cfg kfdm 89725 open 0     0 2022-07-18T03:39:53Z 2022-07-18T03:39:53Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1772

Recent versions of setuptools can run most things from setup.cfg, so one can have a simpler setup that does not require executing code on install.

The bulk of the changes were automated by running https://pypi.org/project/setup-py-upgrade/ with a few minor edits for the bits it cannot auto-convert (the initial get_long_description() and get_version() cannot be converted automatically).

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1772/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1293698966 PR_kwDOD079W84600uh 37 Fix former command name in readme DanLipsitt 578773 open 0     0 2022-07-05T02:09:13Z 2022-07-05T02:09:13Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep-photos/pulls/37

Looks like a previous commit missed a photo-to-sqlite → dogsheep-photos replacement.

dogsheep-photos 256834907 pull    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/37/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1250287607 PR_kwDODFE5qs44jvRV 11 Update README.md ashanan 11887 open 0     0 2022-05-27T03:13:59Z 2022-05-27T03:13:59Z   FIRST_TIME_CONTRIBUTOR dogsheep/google-takeout-to-sqlite/pulls/11

Fix typo

google-takeout-to-sqlite 206649770 pull    
{
    "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/11/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1244082183 PR_kwDODEm0Qs44PPLy 66 Ageinfo workaround ashanan 11887 open 0     0 2022-05-21T21:08:29Z 2022-05-21T21:09:16Z   FIRST_TIME_CONTRIBUTOR dogsheep/twitter-to-sqlite/pulls/66

I'm not sure if this is due to a new format or just because my ageinfo file is blank, but trying to import an archive would crash when it got to that file. This PR adds a guard clause to the ageinfo transformer and sets a default value that doesn't throw an exception. It seems likely to be the same issue danp mentions in https://github.com/dogsheep/twitter-to-sqlite/issues/54; my ageinfo file looks the same. I added that same ageinfo file to the test archive as well, to help confirm my workaround didn't break anything.

Let me know if you want any changes!

twitter-to-sqlite 206156866 pull    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/66/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1160327106 PR_kwDODEm0Qs4z_V3w 65 Update Twitter dev link, clarify apps vs projects rixx 2657547 open 0     0 2022-03-05T11:56:08Z 2022-03-05T11:56:08Z   FIRST_TIME_CONTRIBUTOR dogsheep/twitter-to-sqlite/pulls/65

Twitter pushes you heavily towards v2 projects instead of v1 apps – I know the README mentions v1 API compatibility at the top, but I still nearly got turned around here.

twitter-to-sqlite 206156866 pull    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/65/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1149402080 PR_kwDODFdgUs4zaUta 70 scrape-dependents: enable paging through package menu option if present stanbiryukov 36061055 open 0     0 2022-02-24T15:07:25Z 2022-02-24T15:07:25Z   FIRST_TIME_CONTRIBUTOR dogsheep/github-to-sqlite/pulls/70

Some repos organize network dependents by a Package toggle. This PR adds the ability to page through those options and scrape underlying dependents.

github-to-sqlite 207052882 pull    
{
    "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/70/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1046887492 PR_kwDODFE5qs4uMsMJ 9 Removed space from filename My Activity.json widadmogral 91880982 open 0     0 2021-11-08T00:04:31Z 2021-11-08T00:04:31Z   FIRST_TIME_CONTRIBUTOR dogsheep/google-takeout-to-sqlite/pulls/9

The file name from Google Takeout has no space. The code only runs without error if the filename is "MyActivity.json" and not "My Activity.json". Is it a new change by Google?

google-takeout-to-sqlite 206649770 pull    
{
    "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/9/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1042759769 PR_kwDOEhK-wc4uAJb9 15 include note tags in the export d-rep 436138 open 0     0 2021-11-02T20:04:31Z 2021-11-02T20:04:31Z   FIRST_TIME_CONTRIBUTOR dogsheep/evernote-to-sqlite/pulls/15

When parsing the Evernote <note> elements, the script will now also parse any nested <tag> elements, writing them out into a separate SQLite table.

Here is an example of how to query the data after the script has run:

    select notes.*,
        (select group_concat(tag) from notes_tags where notes_tags.note_id = notes.id) as tags
    from notes;

My .enex source file is 3+ years old, so I am assuming the structure hasn't changed. Interestingly, my notebook names show up in the tags list with the tag name prefixed with notebook_, so this could maybe help work around the first limitation mentioned in the evernote-to-sqlite blog post.

evernote-to-sqlite 303218369 pull    
{
    "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/15/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1013506559 PR_kwDODFdgUs4skaNS 68 Add support for retrieving teams / members philwills 68329 open 0     0 2021-10-01T15:55:02Z 2021-10-01T15:59:53Z   FIRST_TIME_CONTRIBUTOR dogsheep/github-to-sqlite/pulls/68

Adds a method for retrieving all the teams within an organisation and all the members of those teams. The latter is stored as a join table, team_members, between teams and users.
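
A sketch of querying that join table afterwards (assuming the team_members columns are named team and user and reference the teams and users tables; the exact column names are guesses):

```
select teams.name, users.login
from team_members
    -- join-table column names here are hypothetical
    join teams on teams.id = team_members.team
    join users on users.id = team_members.user;
```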

github-to-sqlite 207052882 pull    
{
    "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/68/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1001104942 PR_kwDOBm6k_c4r-EVH 1475 feat: allow joins using _through in both directions bram2000 5268174 open 0     0 2021-09-20T15:28:20Z 2021-09-20T15:28:20Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1475

Currently the _through clause only works if the FK relationship is defined in a specific direction. I don't think there is any reason for this limitation, as an FK allows joining in both directions.

This is an admittedly hacky change to implement bidirectional joins using _through. It does work for our use case, but I don't know if there are other implications that I haven't thought of. Also, if this change is desirable we probably want to make the code a little nicer.
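
For context, the _through filter takes a JSON-shaped query-string parameter like the following (format per Datasette's documentation; the table and column names are illustrative):

    ?_through={"table":"package_maintainers","column":"maintainer_id","value":"1"}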

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1475/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
991206402 MDExOlB1bGxSZXF1ZXN0NzI5NzA0NTM3 1465 add support for -o --get /path ctb 51016 open 0     0 2021-09-08T14:30:42Z 2021-09-08T14:31:45Z   CONTRIBUTOR simonw/datasette/pulls/1465

Fixes https://github.com/simonw/datasette/issues/1459

Adds support for --open --get /path to be used in combination.

If --open is provided alone, datasette will open a web page to a default URL. If --get <url> is provided alone, datasette will output the result of doing a GET to that URL and then exit. If --open --get <url> are provided together, datasette will open a web page to that URL.

TODO items:

  • update documentation
  • print out error message when --root --open --get <url> is used
  • adjust code to require that <url> start with a / when -o --get <url> is used
  • add test(s)

note, '@CTB' is used in this PR to flag code that needs revisiting.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1465/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1  
987985935 MDExOlB1bGxSZXF1ZXN0NzI2OTkwNjgw 35 Support for Datasette's --base-url setting brandonrobertz 2670795 open 0     0 2021-09-03T17:47:45Z 2021-09-03T17:47:45Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep-beta/pulls/35

This makes it so you can use Dogsheep if you're using Datasette with the --base-url /some-path/ setting.
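
For example, such an instance might be started like this (assuming current Datasette syntax for the setting; the database name and path are arbitrary):

    datasette dogsheep.db --setting base_url /some-path/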

dogsheep-beta 197431109 pull    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/35/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
981690086 MDExOlB1bGxSZXF1ZXN0NzIxNjg2NzIx 67 Replacing step ID key with step_id jshcmpbll 16374374 open 0     0 2021-08-28T01:26:41Z 2021-08-28T01:27:00Z   FIRST_TIME_CONTRIBUTOR dogsheep/github-to-sqlite/pulls/67

Workflows that have an id in any step result in the following error when running workflows:

e.g.

    github-to-sqlite workflows github.db nixos/nixpkgs

```
Traceback (most recent call last):
  File "/usr/local/bin/github-to-sqlite", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/github_to_sqlite/cli.py", line 601, in workflows
    utils.save_workflow(db, repo_id, filename, content)
  File "/usr/local/lib/python3.8/dist-packages/github_to_sqlite/utils.py", line 865, in save_workflow
    db["steps"].insert_all(
  File "/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py", line 2596, in insert_all
    self.insert_chunk(
  File "/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py", line 2378, in insert_chunk
    result = self.db.execute(query, params)
  File "/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py", line 419, in execute
    return self.conn.execute(sql, parameters)
sqlite3.IntegrityError: datatype mismatch
```

  • Information about the ID key in a step for GHA
  • An example workflow from a public repo

Changes

I'm proposing that the id key for a step is replaced with step_id so that it no longer interferes with the table's id column used to track the record.

Special thanks to @sarcasticadmin @egiffen and @ruebenramirez for helping a bit on this 😄

github-to-sqlite 207052882 pull    
{
    "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/67/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
925384329 MDExOlB1bGxSZXF1ZXN0NjczODcyOTc0 7 Add instagram-to-sqlite gavindsouza 36654812 open 0     0 2021-06-19T12:26:16Z 2021-07-28T07:58:59Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep.github.io/pulls/7

The tool covers only chat imports at the time of opening this PR, but I'm planning to import everything else that I feel inquisitive about.

ref: https://github.com/gavindsouza/instagram-to-sqlite

dogsheep.github.io 214746582 pull    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/7/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
947596222 MDExOlB1bGxSZXF1ZXN0NjkyNTU3Mzgx 1399 Multiple sort jgryko5 87192257 open 0     0 2021-07-19T12:20:14Z 2021-07-19T12:20:14Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1399

Closes #197. I have added support for sorting by multiple parameters as mentioned in the issue above, and together with that, a suggestion on how to implement such sorting in the user interface.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1399/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
892383270 MDExOlB1bGxSZXF1ZXN0NjQ1MTAwODQ4 12 Recovering of malformed ENEX file engdan77 8431437 open 0     0 2021-05-15T07:49:31Z 2021-05-15T19:57:50Z   FIRST_TIMER dogsheep/evernote-to-sqlite/pulls/12

Hey .. awesome work developing this project; I found it very useful and it saved me some work. Thanks.. :)

Some background to this PR... I've been searching around for a tool to transform my personal collection of Evernote notes into a format that is easier to search and potentially easier to import into future services.

I then discovered a problem processing my large export (~5GB) with the existing source, which uses Python's built-in XML parser: it was unable to finish without an exception breaking the process.

In my first attempt I tried to adapt it to the more robust lxml package, which handles huge inputs and has a "recover" mode, but even though it worked better it also failed to process the whole file. Even the memory-efficient etree.iterparse() unfortunately got into trouble.

With no luck finding any other library that could successfully parse this enormous file, I instead chose to build a "hugexmlparser" module that parses the file using yield (on a byte-by-byte level) and lets you set a maximum size for <note>, to cater for potentially malformed or undesirably large attachments, while covering potential exceptions. In some cases the parser discovers malformed XML within <content>; in those cases it tries to save as much as possible by escaping it (to be dealt with at a later stage, which is better than nothing), and if an end </note> is missing before a new (malformed?) start tag, it adds one.

The code for the recovery process is a bit rough and there is certainly room for refactoring, but at the moment it seems to achieve what I wanted.

The output is then passed to a slightly changed version of save_note_recovery() to make sure the existing logic still works. I also added this as a new recover-enex command in click, keeping the original options, and added a couple of new tests for it.

This currently works for me, but I thought I might share a PR in case you find use for it yourself or it proves useful to others finding this repository.

As a second step, when time allows, it would be nice to also be able to easily export from SQLite to formatted HTML/MD with attachments saved... but that might be better as a separate project. Or if you or someone else has something to share that might save some trouble, I would be interested ;-)

evernote-to-sqlite 303218369 pull    
{
    "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/12/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
836064851 MDExOlB1bGxSZXF1ZXN0NTk2NjI3Nzgw 18 Add datetime parsing n8henrie 1234956 open 0     0 2021-03-19T14:34:22Z 2021-03-19T14:34:22Z   FIRST_TIME_CONTRIBUTOR dogsheep/healthkit-to-sqlite/pulls/18

Parses the datetime columns so they are subsequently properly recognized as datetime.

Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/17

healthkit-to-sqlite 197882382 pull    
{
    "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/18/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
830901133 MDExOlB1bGxSZXF1ZXN0NTkyMzY0MjU1 16 Add a fallback ID, print if no ID found n8henrie 1234956 open 0     0 2021-03-13T13:38:29Z 2021-03-13T14:44:04Z   FIRST_TIME_CONTRIBUTOR dogsheep/healthkit-to-sqlite/pulls/16

Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/14

healthkit-to-sqlite 197882382 pull    
{
    "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/16/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
816601354 MDExOlB1bGxSZXF1ZXN0NTgwMjM1NDI3 241 Extract expand - work in progress simonw 9599 open 0     0 2021-02-25T16:36:38Z 2021-02-25T16:36:38Z   OWNER simonw/sqlite-utils/pulls/241

Refs #239. Still needs documentation and CLI implementation.

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/241/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1  
793907673 MDExOlB1bGxSZXF1ZXN0NTYxNTEyNTAz 15 added try / except to write_records ryancheley 9857779 open 0     0 2021-01-26T03:56:21Z 2021-01-26T03:56:21Z   FIRST_TIME_CONTRIBUTOR dogsheep/healthkit-to-sqlite/pulls/15

to keep the data write from failing if it comes across an error during processing. In particular, when trying to convert my HealthKit zip file (and my wife's) it would consistently error out with the following:

```
db.py 1709 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: too many SQL variables

db.py 1709 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: too many SQL variables

db.py 1709 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered

healthkit-to-sqlite 8 <module>
    sys.exit(cli())
core.py 829 __call__
    return self.main(*args, **kwargs)
core.py 782 main
    rv = self.invoke(ctx)
core.py 1066 invoke
    return ctx.invoke(self.callback, **ctx.params)
core.py 610 invoke
    return callback(*args, **kwargs)
cli.py 57 cli
    convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf)
utils.py 42 convert_xml_to_sqlite
    write_records(records, db)
utils.py 143 write_records
    db[table].insert_all(
db.py 1899 insert_all
    self.insert_chunk(
db.py 1720 insert_chunk
    self.insert_chunk(
db.py 1720 insert_chunk
    self.insert_chunk(
db.py 1714 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered
```

Adding the try / except in the write_records seems to fix that issue.

healthkit-to-sqlite 197882382 pull    
{
    "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/15/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
723499985 MDExOlB1bGxSZXF1ZXN0NTA1MDc2NDE4 5 Add fitbit-to-sqlite mrphil007 4632208 open 0     0 2020-10-16T20:04:05Z 2020-10-16T20:04:05Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep.github.io/pulls/5
dogsheep.github.io 214746582 pull    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/5/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
655974395 MDExOlB1bGxSZXF1ZXN0NDQ4MzU1Njgw 30 Handle empty bucket on first upload. Allow specifying the endpoint_url for services other than S3 (like b2 and digitalocean spaces) scanner 110038 open 0     0 2020-07-13T16:15:26Z 2020-07-13T16:15:26Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep-photos/pulls/30

Finally got around to trying dogsheep-photos, but I want to use Backblaze's B2 service instead of AWS S3, so I had to add a way to optionally specify the endpoint_url to connect to. Then, with the bucket being empty, the initial key retrieval would fail. There is probably a better way to detect that the bucket is empty than doing a test inside the paginator loop.

There is also probably a better way to specify the endpoint_url, as we get and test for it twice using the same code in two different places, but I did not want to spend too much time worrying about it.

dogsheep-photos 256834907 pull    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/30/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
440325850 MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2 452 SQL builder utility classes russss 45057 open 0     0 2019-05-04T13:57:47Z 2019-05-04T14:03:04Z   CONTRIBUTOR simonw/datasette/pulls/452

This adds a straightforward set of classes to aid in the construction of SQL queries.

My plan for this was to allow plugins to manipulate the Datasette-generated SQL in a more structured way. I'm not sure that's going to work, but I feel like this is still a step forward - it reduces the number of intermediate variables in TableView.data which aids readability, and also factors out a lot of the boring string concatenation.

There are a fair number of minor structure changes in here too as I've tried to make the ordering of TableView.data a bit more logical. As far as I can tell, I haven't broken anything...

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/452/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
359075028 MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx 364 Support for other types of databases using external connectors jsancho-gpl 11912854 open 0     0 2018-09-11T14:31:47Z 2018-09-11T14:31:47Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/364

This PR is related to #293, but now all commits have been merged.

The purpose is to support other file formats that aren't SQLite, like files in PyTables format. I've tried to accomplish that using external connectors published with entry points.

The modifications to the original Datasette code are minimal, and many are in a separate file.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/364/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);
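
Since the reactions column stores the JSON blobs shown above as TEXT, totals can be pulled out with SQLite's JSON functions, e.g.:

```
select [id], [title],
    json_extract([reactions], '$.total_count') as total_reactions
from [issues]
order by total_reactions desc;
```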