issues
93 rows where state = "open" and type = "pull", sorted by updated_at descending. Columns that are empty for every row shown (assignee, milestone, closed_at, active_lock_reason, performed_via_github_app, state_reason) are omitted from the table below; the full schema appears at the end of the page.
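This listing is equivalent to a simple query against the issues table. A minimal sketch in Python, assuming the github-to-sqlite export lives in a file named github.db (the filename is an assumption):

```python
import sqlite3

# Reproduce this page's view: open pull requests, newest-updated first.
# "github.db" is an assumed filename for the github-to-sqlite export.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, updated_at
    FROM issues
    WHERE state = 'open' AND type = 'pull'
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows), "rows")  # 93 at the time this page was captured
```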
id | node_id | number | title | user | state | locked | comments | created_at | updated_at | author_association | pull_request | body | repo | type | reactions | draft
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1160327106 | PR_kwDODEm0Qs4z_V3w | 65 | Update Twitter dev link, clarify apps vs projects | rixx 2657547 | open | 0 | 0 | 2022-03-05T11:56:08Z | 2022-03-05T11:56:08Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/65 | Twitter pushes you heavily towards v2 projects instead of v1 apps – I know the README mentions v1 API compatibility at the top, but I still nearly got turned around here. | twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/65/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
1149402080 | PR_kwDODFdgUs4zaUta | 70 | scrape-dependents: enable paging through package menu option if present | stanbiryukov 36061055 | open | 0 | 0 | 2022-02-24T15:07:25Z | 2022-02-24T15:07:25Z | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/70 | Some repos organize network dependents by a Package toggle. This PR adds the ability to page through those options and scrape underlying dependents. | github-to-sqlite 207052882 | pull | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
1122451096 | PR_kwDOBm6k_c4x_mXy | 1626 | Try test suite against macOS and Windows | simonw 9599 | open | 0 | 3 | 2022-02-02T22:26:51Z | 2022-02-03T01:22:44Z | OWNER | simonw/datasette/pulls/1626 | Refs #1625 | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
988493790 | MDExOlB1bGxSZXF1ZXN0NzI3MzkwODM1 | 36 | Correct naming of tool in readme | badboy 2129 | open | 0 | 1 | 2021-09-05T12:05:40Z | 2022-01-06T16:04:46Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/36 | | dogsheep-photos 256834907 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
793002853 | MDExOlB1bGxSZXF1ZXN0NTYwNzYwMTQ1 | 1204 | WIP: Plugin includes | simonw 9599 | open | 0 | 3 | 2021-01-25T03:59:06Z | 2021-12-17T07:10:49Z | OWNER | simonw/datasette/pulls/1204 | Refs #1191 Next steps: | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1
1066603133 | PR_kwDOCGYnMM4vKAzW | 347 | Test against pysqlite3 running SQLite 3.37 | simonw 9599 | open | 0 | 9 | 2021-11-29T23:17:57Z | 2021-12-11T01:02:19Z | OWNER | simonw/sqlite-utils/pulls/347 | Refs #346 and #344. | sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
1033678984 | PR_kwDOBm6k_c4tjgJ8 | 1495 | Allow routes to have extra options | fgregg 536941 | open | 0 | 5 | 2021-10-22T15:00:45Z | 2021-11-19T15:36:27Z | CONTRIBUTOR | simonw/datasette/pulls/1495 |
Right now, datasette routes can only be a 2-tuple of
If it was possible for datasette to handle extra options, like standard Django does, it would add flexibility for plugin authors. For example, if extra options were enabled, then it would be easy to make a single table the home page (#1284). This plugin would accomplish it.
```python
from datasette import hookimpl
from datasette.views.table import TableView

@hookimpl
def register_routes(datasette):
    return [
        (r"^/$", TableView.as_view(datasette), {'db_name': 'DB_NAME', 'table': 'TABLE_NAME'})
    ]
```
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
1046887492 | PR_kwDODFE5qs4uMsMJ | 9 | Removed space from filename My Activity.json | widadmogral 91880982 | open | 0 | 0 | 2021-11-08T00:04:31Z | 2021-11-08T00:04:31Z | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/9 | File name from google takeout has no space. The code only runs without error if filename is "MyActivity.json" and not "My Activity.json". Is it a new change by Google? | google-takeout-to-sqlite 206649770 | pull | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
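Given the filename discrepancy described in the PR above, a tolerant lookup is one way to cope. This is a hedged sketch, not the PR's actual code; "takeout.zip" is an assumed archive name:

```python
import zipfile

# Accept either filename variant when reading a Google Takeout archive,
# since exports appear to have used both spellings over time.
def find_activity_file(zf: zipfile.ZipFile):
    candidates = {"My Activity.json", "MyActivity.json"}
    for name in zf.namelist():
        if name.split("/")[-1] in candidates:
            return name
    return None

with zipfile.ZipFile("takeout.zip") as zf:
    print(find_activity_file(zf))
```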
1042759769 | PR_kwDOEhK-wc4uAJb9 | 15 | include note tags in the export | d-rep 436138 | open | 0 | 0 | 2021-11-02T20:04:31Z | 2021-11-02T20:04:31Z | FIRST_TIME_CONTRIBUTOR | dogsheep/evernote-to-sqlite/pulls/15 |
When parsing the Evernote
Here is an example of how to query the data after the script has run:
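(The query itself did not survive the export; the following is a hedged reconstruction, assuming the PR stores tags in notes, tags, and a note_tags join table. All three table names and the column names are guesses.)

```python
import sqlite3

# Assumed schema: note_tags joins notes to tags (names are guesses).
conn = sqlite3.connect("evernote.db")
for title, tag in conn.execute(
    """
    SELECT notes.title, tags.name
    FROM notes
    JOIN note_tags ON note_tags.note_id = notes.id
    JOIN tags ON tags.id = note_tags.tag_id
    ORDER BY notes.title
    """
):
    print(title, "->", tag)
```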
My .enex source file is 3+ years old so I am assuming the structure hasn't changed. Interestingly, my notebook names show up in the tags list where the tag name is prefixed with
evernote-to-sqlite 303218369 | pull | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
1013506559 | PR_kwDODFdgUs4skaNS | 68 | Add support for retrieving teams / members | philwills 68329 | open | 0 | 0 | 2021-10-01T15:55:02Z | 2021-10-01T15:59:53Z | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/68 | Adds a method for retrieving all the teams within an organisation and all the members in those teams. The latter is stored as a join table | github-to-sqlite 207052882 | pull | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
975161924 | MDExOlB1bGxSZXF1ZXN0NzE2MzU3OTgy | 66 | Add --merged-by flag to pull-requests sub command | sarcasticadmin 30531572 | open | 0 | 1 | 2021-08-20T00:57:55Z | 2021-09-28T21:50:31Z | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/66 |
Description: Proposing a solution to the API limitation for
This approach might cause larger repos to hit rate limits called out in https://github.com/dogsheep/github-to-sqlite/issues/51 but seems to work well in the repos I tested and included below.
Old Behavior:
New Behavior:
Testing: Picking some repo that has more than one merger (datasette only has 1 😉)
Without the flag the
Individual PRs passed via
github-to-sqlite 207052882 | pull | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/66/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } | 0
1001104942 | PR_kwDOBm6k_c4r-EVH | 1475 | feat: allow joins using _through in both directions | bram2000 5268174 | open | 0 | 0 | 2021-09-20T15:28:20Z | 2021-09-20T15:28:20Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1475 |
Currently the
This is an admittedly hacky change to implement bidirectional joins using
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
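For context, Datasette's _through table filter takes a JSON object naming the join table, a column on it, and a value to match; this PR makes the same mechanism work from either side of the join. A sketch of building such a URL, with table and column names borrowed from Datasette's documentation examples rather than from this PR:

```python
import json
import urllib.parse

# The _through parameter is a JSON object: join table, column, value.
through = {
    "table": "roadside_attraction_characteristics",  # illustrative join table
    "column": "characteristic_id",
    "value": "1",
}
qs = urllib.parse.urlencode({"_through": json.dumps(through)})
print(f"/roadside_attractions?{qs}")  # names are illustrative, not from this PR
```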
991206402 | MDExOlB1bGxSZXF1ZXN0NzI5NzA0NTM3 | 1465 | add support for -o --get /path | ctb 51016 | open | 0 | 0 | 2021-09-08T14:30:42Z | 2021-09-08T14:31:45Z | CONTRIBUTOR | simonw/datasette/pulls/1465 |
Fixes https://github.com/simonw/datasette/issues/1459. Adds support for
If
TODO items:
- [ ] update documentation
- [ ] print out error message when
Note: '@CTB' is used in this PR to flag code that needs revisiting.
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1
987985935 | MDExOlB1bGxSZXF1ZXN0NzI2OTkwNjgw | 35 | Support for Datasette's --base-url setting | brandonrobertz 2670795 | open | 0 | 0 | 2021-09-03T17:47:45Z | 2021-09-03T17:47:45Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-beta/pulls/35 | This makes it so you can use Dogsheep if you're using Datasette with the | dogsheep-beta 197431109 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/35/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
981690086 | MDExOlB1bGxSZXF1ZXN0NzIxNjg2NzIx | 67 | Replacing step ID key with step_id | jshcmpbll 16374374 | open | 0 | 0 | 2021-08-28T01:26:41Z | 2021-08-28T01:27:00Z | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/67 |
Workflows that have an
e.g.
Changes: I'm proposing that the key for
Special thanks to @sarcasticadmin @egiffen and @ruebenramirez for helping a bit on this 😄
github-to-sqlite 207052882 | pull | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/67/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
743071410 | MDExOlB1bGxSZXF1ZXN0NTIxMDU0NjEy | 13 | SQLite does not have case sensitive columns | tomaskrehlik 1689944 | open | 0 | 1 | 2020-11-14T20:12:32Z | 2021-08-24T13:28:26Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/13 | This solves a weird issue when there is a record with a metadata key that differs only in letter case. See the test for details. | healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
978086284 | MDExOlB1bGxSZXF1ZXN0NzE4NzM0MTkx | 22 | Make sure that case-insensitive column names are unique | FabianHertwig 32016596 | open | 0 | 1 | 2021-08-24T13:13:38Z | 2021-08-24T13:26:20Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/22 |
This closes #21. When there are metadata entries with the same case-insensitive string, an error occurs when trying to create a new column for that metadata entry in the database table, because a column with that case-insensitive name already exists. The code added in this PR checks whether a key already exists in a record and, if so, adds a number at its end. The resulting column names look like the example below. Interestingly, the column names viewed with Datasette are not case insensitive.
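(The example column names did not survive the export; below is a hedged sketch of the de-duplication idea described above, with all names invented for illustration, not taken from the PR.)

```python
# If a new metadata key collides case-insensitively with an existing
# column, append a counter to make it unique.
def unique_key(key: str, existing: set) -> str:
    candidate, n = key, 1
    while candidate.lower() in existing:
        n += 1
        candidate = f"{key}_{n}"
    existing.add(candidate.lower())
    return candidate

seen = set()
for k in ["HKMetadataKey", "HKMetadatakey", "hkmetadatakey"]:
    print(unique_key(k, seen))  # HKMetadataKey, HKMetadatakey_2, hkmetadatakey_3
```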
healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/22/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
970463436 | MDExOlB1bGxSZXF1ZXN0NzEyNDEyODgz | 1434 | Enrich arbitrary query results with foreign key links and column descriptions | simonw 9599 | open | 0 | 1 | 2021-08-13T14:43:01Z | 2021-08-19T21:18:58Z | OWNER | simonw/datasette/pulls/1434 | Refs #1293, follows #942. | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
925384329 | MDExOlB1bGxSZXF1ZXN0NjczODcyOTc0 | 7 | Add instagram-to-sqlite | gavindsouza 36654812 | open | 0 | 0 | 2021-06-19T12:26:16Z | 2021-07-28T07:58:59Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep.github.io/pulls/7 | The tool covers only chat imports at the time of opening this PR but I'm planning to import everything else that I feel inquisitive about. | dogsheep.github.io 214746582 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
813880401 | MDExOlB1bGxSZXF1ZXN0NTc3OTUzNzI3 | 5 | WIP: Add Gmail takeout mbox import | UtahDave 306240 | open | 0 | 25 | 2021-02-22T21:30:40Z | 2021-07-28T07:18:56Z | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/5 |
WIP. This PR adds the ability to import emails from a Gmail mbox export from Google Takeout. This is my first PR to a datasette/dogsheep repo. I've tested this on my personal Google Takeout mbox with ~520,000 emails going back to 2004. This took around 20 minutes to process. To provide some feedback on the progress of the import I added the "rich" python module. I'm happy to remove that if adding a dependency is discouraged; however, I think it makes a nice addition to give feedback on the progress of a long import. Do we want to log emails that have errors when trying to import them? Dealing with encodings in emails is a bit tricky. I'm very open to feedback on how to deal with those better, as well as any other feedback for improvements.
google-takeout-to-sqlite 206649770 | pull | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
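As a rough illustration of the core idea (not the PR's actual code), Python's standard mailbox module can walk a Takeout mbox and hand rows to sqlite-utils. The filename, table name, and column choices here are assumptions; the real PR also handles encodings, attachments, and progress display:

```python
import mailbox
import sqlite_utils

# Walk a Google Takeout mbox and load basic headers into SQLite.
db = sqlite_utils.Database("gmail.db")
mbox = mailbox.mbox("All mail Including Spam and Trash.mbox")

def rows():
    for message in mbox:
        yield {
            "message_id": message["Message-ID"],
            "subject": message["Subject"],
            "from": message["From"],
            "to": message["To"],
            "date": message["Date"],
        }

# alter=True adds columns as new headers appear.
db["mbox_emails"].insert_all(rows(), alter=True)
```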
855446829 | MDExOlB1bGxSZXF1ZXN0NjEzMTc4OTY4 | 1296 | Dockerfile: use Ubuntu 20.10 as base | tmcl-it 82332573 | open | 0 | 4 | 2021-04-12T00:23:32Z | 2021-07-20T08:52:13Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1296 |
This PR changes the main Dockerfile to use ubuntu:20.10 as the base image instead of python:3.9.2-slim-buster (itself based on debian:buster-slim). The Dockerfile is essentially the one from https://github.com/simonw/datasette/issues/1249#issuecomment-803698983 with some additional cleanups to slim it down. This fixes a couple of issues:
1. The SQLite version in Debian Buster (2.6.0) doesn't support generated columns.
2. Installing SpatiaLite from the Debian sid repositories has the side effect of also installing updates to libc and libstdc++ from sid.
As a bonus, the Docker image becomes smaller:
Reproduction of the first issue:
```
$ curl -O https://latest.datasette.io/fixtures.db
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  260k    0  260k    0     0   489k      0 --:--:-- --:--:-- --:--:--  489k
$ docker run -v
```
Here is the SQLite version:
Reproduction of the second issue:
Both libc and libstdc++ are backwards compatible, so the image still works, but it will result in a combination of libraries and Python versions that exists only in the Datasette image, so it's likely untested. In addition, since Debian sid is an always-changing rolling release, the versions of libc, libstdc++, SpatiaLite, and their dependencies change frequently, so the library versions in the Datasette image will depend on the day it was built.
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1296/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
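The first issue above is easy to probe for yourself: generated columns require SQLite 3.31 or newer. A quick check, independent of this PR:

```python
import sqlite3

# Report the linked SQLite version and test generated-column support.
print("SQLite version:", sqlite3.sqlite_version)
conn = sqlite3.connect(":memory:")
try:
    conn.execute("CREATE TABLE t (a INTEGER, b INTEGER AS (a * 2))")
    print("generated columns supported")
except sqlite3.OperationalError as err:
    print("generated columns not supported:", err)
```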
947596222 | MDExOlB1bGxSZXF1ZXN0NjkyNTU3Mzgx | 1399 | Multiple sort | jgryko5 87192257 | open | 0 | 0 | 2021-07-19T12:20:14Z | 2021-07-19T12:20:14Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1399 | Closes #197. I have added support for sorting by multiple parameters as mentioned in the issue above, and together with that, a suggestion on how to implement such sorting in the user interface. | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
646448486 | MDExOlB1bGxSZXF1ZXN0NDQwNzM1ODE0 | 868 | initial windows ci setup | joshmgrant 702729 | open | 0 | 3 | 2020-06-26T18:49:13Z | 2021-07-10T23:41:43Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/868 | Picking up the work done on #557 with a new PR. Seeing if I can get this working. | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
756876238 | MDExOlB1bGxSZXF1ZXN0NTMyMzQ4OTE5 | 1130 | Fix footer not sticking to bottom in short pages | abdusco 3243482 | open | 0 | 4 | 2020-12-04T07:29:01Z | 2021-06-15T13:27:48Z | CONTRIBUTOR | simonw/datasette/pulls/1130 | | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
904598267 | MDExOlB1bGxSZXF1ZXN0NjU1NzQxNDI4 | 1348 | DRAFT: add test and scan for docker images | blairdrummond 10801138 | open | 0 | 2 | 2021-05-28T03:02:12Z | 2021-05-28T03:06:16Z | CONTRIBUTOR | simonw/datasette/pulls/1348 | NOTE: I don't think this PR is ready, since the arm/v6 and arm/v7 images are failing pytest due to missing dependencies (gcc and friends), but it's pretty close. Closes https://github.com/simonw/datasette/issues/1344. Using a build matrix for the platforms and this test, we test all the platforms in parallel. I also threw in container scanning. Switch | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
892383270 | MDExOlB1bGxSZXF1ZXN0NjQ1MTAwODQ4 | 12 | Recovering of malformed ENEX file | engdan77 8431437 | open | 0 | 0 | 2021-05-15T07:49:31Z | 2021-05-15T19:57:50Z | FIRST_TIMER | dogsheep/evernote-to-sqlite/pulls/12 |
Hey, awesome work developing this project; I found it very useful and it saved me some work. Thanks! :) Some background to this PR: I had been searching for a tool that would let me transform my personal collection of Evernote notes into a format that is easier to search and easier to import into future services. The problem I ran into was processing my large export (~5GB): the existing code uses Python's built-in XML parser, which was unable to get through the file without an exception breaking the process. My first attempt was to adapt the code to the more robust lxml package, which handles huge inputs and offers a "recover" mode; it worked better but still failed to process the whole file, even with the memory-efficient etree.iterparse(). With no luck finding any other library that could parse this enormous file, I instead built a "hugexmlparser" module that parses the file using yield (on a byte-to-byte level) and lets you set a maximum size for <note> elements, to cater for potentially malformed or undesirably large attachments, while covering the exceptions that come up. In some cases the parser discovers malformed XML within <content>; there it tries to save as much as possible by escaping it (to be dealt with at a later stage, better than nothing), and if an end </note> tag is missing before a new (malformed?) start tag, it adds one. The code for the recovery process is a bit rough, with certainly room for refactoring, but at the moment it achieves what I wanted. The recovered output is passed to a slightly changed save_note_recovery() to ensure the existing code still works. I also added this as a new recover-enex command in click, keeping the original options, plus a couple of new tests for the command. This currently works for me, but I thought I would share a PR in case you find it useful yourself or for others who find this repository. As a second step, when time allows, it would be nice to also be able to easily export from SQLite to formatted HTML/MD with attachments saved, but that is perhaps better as a separate project... and if you or someone else has something to share that might save some trouble, I would be interested ;-)
evernote-to-sqlite 303218369 | pull | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
837956424 | MDExOlB1bGxSZXF1ZXN0NTk4MjEzNTY1 | 1271 | Use SQLite conn.interrupt() instead of sqlite_timelimit() | simonw 9599 | open | 0 | 3 | 2021-03-22T17:34:20Z | 2021-03-22T21:49:27Z | OWNER | simonw/datasette/pulls/1271 | Refs #1270, #1268, #1249. Before merging this I need to do some more testing (to make sure that expensive queries really are properly cancelled). I also need to delete a bunch of code relating to the old mechanism of cancelling queries. [See comment below: this doesn't actually cancel the query due to a thread-local confusion] | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1
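For reference, sqlite3's Connection.interrupt() is documented as callable from another thread to abort a running query. A minimal sketch of the mechanism (not Datasette's code):

```python
import sqlite3
import threading

# A deliberately endless query, aborted from a second thread via interrupt().
conn = sqlite3.connect(":memory:", check_same_thread=False)
threading.Timer(0.1, conn.interrupt).start()
try:
    conn.execute(
        "WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM c) "
        "SELECT count(*) FROM c"
    ).fetchone()
except sqlite3.OperationalError as err:
    print("query aborted:", err)  # prints: query aborted: interrupted
```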
836064851 | MDExOlB1bGxSZXF1ZXN0NTk2NjI3Nzgw | 18 | Add datetime parsing | n8henrie 1234956 | open | 0 | 0 | 2021-03-19T14:34:22Z | 2021-03-19T14:34:22Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/18 | Parses the datetime columns so they are subsequently properly recognized as datetime. Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/17 | healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/18/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
830901133 | MDExOlB1bGxSZXF1ZXN0NTkyMzY0MjU1 | 16 | Add a fallback ID, print if no ID found | n8henrie 1234956 | open | 0 | 0 | 2021-03-13T13:38:29Z | 2021-03-13T14:44:04Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/16 | | healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
816601354 | MDExOlB1bGxSZXF1ZXN0NTgwMjM1NDI3 | 241 | Extract expand - work in progress | simonw 9599 | open | 0 | 0 | 2021-02-25T16:36:38Z | 2021-02-25T16:36:38Z | OWNER | simonw/sqlite-utils/pulls/241 | Refs #239. Still needs documentation and CLI implementation. | sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/241/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1
793907673 | MDExOlB1bGxSZXF1ZXN0NTYxNTEyNTAz | 15 | added try / except to write_records | ryancheley 9857779 | open | 0 | 0 | 2021-01-26T03:56:21Z | 2021-01-26T03:56:21Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/15 |
to keep the data write from failing if it came across an error during processing. In particular, when trying to convert my HealthKit zip file (and that of my wife's) it would consistently error out with the following:
```
db.py 1709 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: too many SQL variables

db.py 1709 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: too many SQL variables

db.py 1709 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered

healthkit-to-sqlite 8 <module>
    sys.exit(cli())
core.py 829 __call__
    return self.main(*args, **kwargs)
core.py 782 main
    rv = self.invoke(ctx)
core.py 1066 invoke
    return ctx.invoke(self.callback, **ctx.params)
core.py 610 invoke
    return callback(*args, **kwargs)
cli.py 57 cli
    convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf)
utils.py 42 convert_xml_to_sqlite
    write_records(records, db)
utils.py 143 write_records
    db[table].insert_all(
db.py 1899 insert_all
    self.insert_chunk(
db.py 1720 insert_chunk
    self.insert_chunk(
db.py 1720 insert_chunk
    self.insert_chunk(
db.py 1714 insert_chunk
    result = self.db.execute(query, params)
db.py 226 execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered
```
Adding the try / except in the
healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
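A hedged sketch of the workaround this PR describes: wrap the insert call so one failing chunk does not abort the whole import. The filename, table name, and records below are illustrative; as an aside, sqlite-utils' alter=True and batch_size options address the two specific errors shown in the traceback above:

```python
import sqlite_utils

# Keep the import going even if one chunk of records fails to insert.
db = sqlite_utils.Database("healthkit.db")
records = [{"value": 72.5, "metadata_HKWasUserEntered": "1"}]
try:
    # alter=True adds missing columns; batch_size keeps each INSERT under
    # SQLite's variable limit - the two errors seen in the traceback above.
    db["rBodyMass"].insert_all(records, alter=True, batch_size=50)
except Exception as err:
    print("skipping failed chunk:", err)
```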
718395987 | MDExOlB1bGxSZXF1ZXN0NTAwNzk4MDkx | 1008 | Add json_loads and json_dumps jinja2 filters | mhalle 649467 | open | 0 | 1 | 2020-10-09T20:11:34Z | 2020-12-15T02:30:28Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1008 | | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
723982480 | MDExOlB1bGxSZXF1ZXN0NTA1NDUzOTAw | 1030 | Make `package` command deal with a configuration directory argument | frankier 299380 | open | 0 | 1 | 2020-10-18T11:07:02Z | 2020-10-19T08:01:51Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1030 | Currently if we run | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
723499985 | MDExOlB1bGxSZXF1ZXN0NTA1MDc2NDE4 | 5 | Add fitbit-to-sqlite | mrphil007 4632208 | open | 0 | 0 | 2020-10-16T20:04:05Z | 2020-10-16T20:04:05Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep.github.io/pulls/5 | | dogsheep.github.io 214746582 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
655974395 | MDExOlB1bGxSZXF1ZXN0NDQ4MzU1Njgw | 30 | Handle empty bucket on first upload. Allow specifying the endpoint_url for services other than S3 (like b2 and digitalocean spaces) | scanner 110038 | open | 0 | 0 | 2020-07-13T16:15:26Z | 2020-07-13T16:15:26Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/30 | Finally got around to trying dogsheep-photos but I want to use backblaze's b2 service instead of AWS S3. Had to add a way to optionally specify the endpoint_url to connect to. Then with the bucket being empty the initial key retrieval would fail. There is probably a better way to see that the bucket is empty than doing a test inside the paginator loop. Also probably a better way to specify the endpoint_url, as we get and test for it twice using the same code in two different places, but I did not want to spend too much time worrying about it. | dogsheep-photos 256834907 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/30/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
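The endpoint_url option maps directly onto boto3, which accepts a custom endpoint for any S3-compatible service; that is the knob this PR exposes. A sketch with placeholder credentials and an example Backblaze b2 endpoint (all values are assumptions):

```python
import boto3

# Point the S3 client at an S3-compatible service via endpoint_url.
client = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-001.backblazeb2.com",
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_APPLICATION_KEY",
)
# A freshly created, empty bucket returns no "Contents" key at all,
# which is the empty-bucket edge case the PR title mentions.
response = client.list_objects_v2(Bucket="my-photos-bucket")
for item in response.get("Contents", []):
    print(item["Key"])
```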
648749062 | MDExOlB1bGxSZXF1ZXN0NDQyNTA1MDg4 | 883 | Skip counting hidden tables | abdusco 3243482 | open | 0 | 4 | 2020-07-01T07:38:08Z | 2020-07-02T00:25:44Z | CONTRIBUTOR | simonw/datasette/pulls/883 | Potential fix for https://github.com/simonw/datasette/issues/859. Disabling table counts for hidden tables speeds up the database page quite a bit. In my setup it reduced load time by 2/3 (~300ms -> ~90ms). | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
565064079 | MDExOlB1bGxSZXF1ZXN0Mzc1MTgwODMy | 672 | --dirs option for scanning directories for SQLite databases | simonw 9599 | open | 0 | 15 | 2020-02-14T02:25:52Z | 2020-03-27T01:03:53Z | OWNER | simonw/datasette/pulls/672 | Refs #417. | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/672/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
464987783 | MDExOlB1bGxSZXF1ZXN0Mjk1MTI3MjEz | 546 | Facet by delimiter | simonw 9599 | open | 0 | 2 | 2019-07-07T20:06:05Z | 2019-11-18T23:46:01Z | OWNER | simonw/datasette/pulls/546 | Refs #510 | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
501773982 | MDExOlB1bGxSZXF1ZXN0MzIzOTgzNzMy | 579 | New connection pooling | simonw 9599 | open | 0 | 1 | 2019-10-02T23:22:19Z | 2019-11-15T22:57:21Z | OWNER | simonw/datasette/pulls/579 | See #569 | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
440325850 | MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2 | 452 | SQL builder utility classes | russss 45057 | open | 0 | 0 | 2019-05-04T13:57:47Z | 2019-05-04T14:03:04Z | CONTRIBUTOR | simonw/datasette/pulls/452 |
This adds a straightforward set of classes to aid in the construction of SQL queries. My plan for this was to allow plugins to manipulate the Datasette-generated SQL in a more structured way. I'm not sure that's going to work, but I feel like this is still a step forward - it reduces the number of intermediate variables in
There are a fair number of minor structure changes in here too, as I've tried to make the ordering of
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
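A toy sketch of the approach, not the PR's actual classes: represent a query as structured parts that plugins could inspect and modify before it is rendered to SQL. All class, table, and column names here are invented for illustration:

```python
# Minimal structured SQL builder: parts stay inspectable until sql() is called.
class Select:
    def __init__(self, table):
        self.table = table
        self.columns = ["*"]
        self.wheres = []

    def where(self, clause):
        self.wheres.append(clause)
        return self

    def sql(self):
        query = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.wheres:
            query += " WHERE " + " AND ".join(self.wheres)
        return query

q = Select("facetable").where("state = :state").where("city_id = :city_id")
print(q.sql())  # SELECT * FROM facetable WHERE state = :state AND city_id = :city_id
```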
359075028 | MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx | 364 | Support for other types of databases using external connectors | jsancho-gpl 11912854 | open | 0 | 0 | 2018-09-11T14:31:47Z | 2018-09-11T14:31:47Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/364 | This PR is related to #293, but now all commits have been merged. The purpose is to support other file formats that aren't SQLite, like files with PyTables format. I've tried to accomplish that using external connectors published with entry points. The modifications in the original datasette code are minimal and many are in a separated file. | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
355299310 | MDExOlB1bGxSZXF1ZXN0MjExODYwNzA2 | 363 | Search all apps during heroku publish | kevboh 436032 | open | 0 | 1 | 2018-08-29T19:25:10Z | 2018-08-31T14:39:45Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/363 | Adds the | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);