github
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622171097 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33 | 622171097 | MDEyOklzc3VlQ29tbWVudDYyMjE3MTA5Nw== | 9599 | 2020-04-30T23:22:45Z | 2020-04-30T23:23:57Z | MEMBER | The `auth.json` mechanism this uses is standard across all of the other Dogsheep tools - it's actually designed so you can have one `auth.json` with a bunch of different credentials for different tools: ```json { "goodreads_personal_token": "...", "goodreads_user_id": "...", "github_personal_token": "...", "pocket_consumer_key": "...", "pocket_username": "...", "pocket_access_token": "..." } ``` But... `github-to-sqlite` does feel like it deserves a special case here, since it's such a good fit for running inside of GitHub Actions - which even provide a `GITHUB_TOKEN` for you to use! So I don't think it will harm the family of tools too much if this has an environment variable alternative to the `-a` file. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 609950090 | |
https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622169728 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33 | 622169728 | MDEyOklzc3VlQ29tbWVudDYyMjE2OTcyOA== | 9599 | 2020-04-30T23:18:51Z | 2020-04-30T23:18:51Z | MEMBER | Sure, that sounds fine to me. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 609950090 | |
https://github.com/dogsheep/github-to-sqlite/issues/34#issuecomment-622135654 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/34 | 622135654 | MDEyOklzc3VlQ29tbWVudDYyMjEzNTY1NA== | 9599 | 2020-04-30T21:53:44Z | 2020-04-30T21:56:06Z | MEMBER | I think this is the neatest scraping pattern: ```python [a["href"].lstrip("/") for a in soup.select("a[data-hovercard-type=repository]")] ``` | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610408908 | |
https://github.com/dogsheep/github-to-sqlite/issues/34#issuecomment-622136585 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/34 | 622136585 | MDEyOklzc3VlQ29tbWVudDYyMjEzNjU4NQ== | 9599 | 2020-04-30T21:55:51Z | 2020-04-30T21:55:51Z | MEMBER | And to find the "Next" pagination link: ```python soup.select(".paginate-container")[0].find("a", text="Next") ``` | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610408908 | |
https://github.com/dogsheep/github-to-sqlite/issues/34#issuecomment-622133775 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/34 | 622133775 | MDEyOklzc3VlQ29tbWVudDYyMjEzMzc3NQ== | 9599 | 2020-04-30T21:49:27Z | 2020-04-30T21:49:27Z | MEMBER | Proposed command: `github-to-sqlite scrape-dependents github.db simonw/datasette` I'll pull full details of the scraped repos from the regular API. I'll also record when they were "first seen" by the command. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610408908 | |
https://github.com/dogsheep/github-to-sqlite/issues/34#issuecomment-622133422 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/34 | 622133422 | MDEyOklzc3VlQ29tbWVudDYyMjEzMzQyMg== | 9599 | 2020-04-30T21:48:39Z | 2020-04-30T21:48:39Z | MEMBER | It looks like the only option is to scrape them. I'll do that and then replace with an API as soon as one becomes available. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610408908 | |
https://github.com/simonw/datasette/issues/747#issuecomment-622043887 | https://api.github.com/repos/simonw/datasette/issues/747 | 622043887 | MDEyOklzc3VlQ29tbWVudDYyMjA0Mzg4Nw== | 9599 | 2020-04-30T19:04:19Z | 2020-04-30T19:04:19Z | OWNER | https://datasette.readthedocs.io/en/latest/config.html#configuration-directory-mode | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610192152 | |
https://github.com/simonw/datasette/issues/747#issuecomment-622036934 | https://api.github.com/repos/simonw/datasette/issues/747 | 622036934 | MDEyOklzc3VlQ29tbWVudDYyMjAzNjkzNA== | 9599 | 2020-04-30T18:51:18Z | 2020-04-30T18:51:18Z | OWNER | Needs docs. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610192152 | |
https://github.com/simonw/datasette/issues/747#issuecomment-622003376 | https://api.github.com/repos/simonw/datasette/issues/747 | 622003376 | MDEyOklzc3VlQ29tbWVudDYyMjAwMzM3Ng== | 9599 | 2020-04-30T17:45:32Z | 2020-04-30T17:45:32Z | OWNER | I can use this function to load the JSON-or-YAML: https://github.com/simonw/datasette/blob/d349d57cdf3d577afb62bdf784af342a4d5be660/datasette/utils/__init__.py#L798-L806 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610192152 | |
https://github.com/simonw/datasette/issues/747#issuecomment-621948898 | https://api.github.com/repos/simonw/datasette/issues/747 | 621948898 | MDEyOklzc3VlQ29tbWVudDYyMTk0ODg5OA== | 9599 | 2020-04-30T16:05:50Z | 2020-04-30T16:05:50Z | OWNER | Relevant code: https://github.com/simonw/datasette/blob/e37f4077c0f1cd09d4102213d4e2a512af471b8d/datasette/app.py#L206-L208 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 610192152 |
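
The two BeautifulSoup one-liners quoted in the issue 34 comments above can be combined into a runnable sketch. The HTML below is a simplified, hypothetical stand-in for GitHub's dependents page (the real markup has many more attributes and may change at any time):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for a github.com/<owner>/<repo>/network/dependents page
html = """
<div>
  <a data-hovercard-type="repository" href="/alice/project-a">project-a</a>
  <a data-hovercard-type="repository" href="/bob/project-b">project-b</a>
  <div class="paginate-container">
    <a href="/simonw/datasette/network/dependents?after=xyz">Next</a>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Repo full names, e.g. "alice/project-a" - the "neatest scraping pattern"
repos = [
    a["href"].lstrip("/")
    for a in soup.select("a[data-hovercard-type=repository]")
]

# The "Next" pagination link; None would mean this is the last page
next_link = soup.select(".paginate-container")[0].find("a", text="Next")
next_url = next_link["href"] if next_link else None
```

In a real scraper these two steps would sit in a loop: fetch a page, collect `repos`, then follow `next_url` until no "Next" link remains.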
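
The issue 33 comment above suggests an environment variable alternative to the `-a auth.json` file. A minimal sketch of that precedence - the helper name and exact fallback order are assumptions for illustration, not github-to-sqlite's actual API:

```python
import json
import os


def load_github_token(auth_path="auth.json"):
    # Hypothetical helper: prefer the GITHUB_TOKEN environment variable
    # (which GitHub Actions provides automatically), falling back to the
    # shared Dogsheep auth.json file, else None.
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        return token
    if os.path.exists(auth_path):
        with open(auth_path) as fp:
            return json.load(fp).get("github_personal_token")
    return None
```

Because only the `github_personal_token` key is read on fallback, the same `auth.json` can keep holding credentials for the other Dogsheep tools.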
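
The issue 34 comment above also proposes recording when each scraped repo was "first seen". One way to sketch that with stdlib `sqlite3` - the table name and schema here are assumptions, not the command's actual output:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema for scraped dependents
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dependents (repo TEXT PRIMARY KEY, first_seen TEXT)"
)


def record_dependent(conn, repo_full_name):
    # Insert the repo with a first_seen timestamp; if a previous run
    # already saw it, keep the original timestamp (requires SQLite 3.24+).
    conn.execute(
        "INSERT INTO dependents (repo, first_seen) VALUES (?, ?) "
        "ON CONFLICT(repo) DO NOTHING",
        (repo_full_name, datetime.now(timezone.utc).isoformat()),
    )


record_dependent(conn, "alice/project-a")
first = conn.execute(
    "SELECT first_seen FROM dependents WHERE repo = ?",
    ("alice/project-a",),
).fetchone()[0]
record_dependent(conn, "alice/project-a")  # second sighting: no change
```

The `ON CONFLICT ... DO NOTHING` clause is what makes repeated scrapes idempotent: re-running the command never overwrites the original `first_seen` value.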