issues
252 rows where comments = 0 and state = "open" sorted by author_association
repo 16
- datasette 165
- sqlite-utils 17
- twitter-to-sqlite 13
- github-to-sqlite 10
- dogsheep-beta 9
- google-takeout-to-sqlite 7
- healthkit-to-sqlite 5
- dogsheep-photos 5
- apple-notes-to-sqlite 5
- dogsheep.github.io 4
- evernote-to-sqlite 3
- swarm-to-sqlite 2
- inaturalist-to-sqlite 2
- pocket-to-sqlite 2
- hacker-news-to-sqlite 2
- genome-to-sqlite 1
state 1
- open 252
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association ▼ | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
314834783 | MDU6SXNzdWUzMTQ4MzQ3ODM= | 219 | Expose units in the JSON API? | russss 45057 | open | 0 | 0 | 2018-04-16T22:04:25Z | 2018-04-16T22:04:25Z | CONTRIBUTOR | From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map. I'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). Perhaps the SQL function is a better approach in general. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
377166793 | MDU6SXNzdWUzNzcxNjY3OTM= | 372 | Docker build tools | psychemedia 82988 | open | 0 | 0 | 2018-11-04T16:02:35Z | 2018-11-04T16:02:35Z | CONTRIBUTOR | In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components. If plugin/extension builders want to include additional packages, then incremental or composable builds that add additional items into a base image would help.
Examples of Dockerfile generators / container builders:
Discussions / threads (via the Binderhub gitter) on:
Relates to things like: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/372/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 } |
||||||||
440325850 | MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2 | 452 | SQL builder utility classes | russss 45057 | open | 0 | 0 | 2019-05-04T13:57:47Z | 2019-05-04T14:03:04Z | CONTRIBUTOR | simonw/datasette/pulls/452 | This adds a straightforward set of classes to aid in the construction of SQL queries (a rough illustration of the idea follows this row). My plan for this was to allow plugins to manipulate the Datasette-generated SQL in a more structured way. I'm not sure that's going to work, but I feel like this is still a step forward - it reduces the number of intermediate variables involved. There are a fair number of minor structural changes in here too, as I've tried to make the ordering of the query-building steps clearer. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
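The diff itself isn't reproduced here, but as a rough, hypothetical sketch of what "SQL builder utility classes" can look like (class and method names are mine, not the PR's):

```python
# Hypothetical sketch of a SQL builder utility class - names are
# illustrative, not taken from the actual PR.
class Query:
    def __init__(self, table):
        self.table = table
        self.columns = ["*"]
        self.wheres = []  # list of (clause, params) pairs

    def select(self, *columns):
        self.columns = list(columns)
        return self

    def where(self, clause, *params):
        self.wheres.append((clause, params))
        return self

    def build(self):
        sql = "select {} from [{}]".format(", ".join(self.columns), self.table)
        params = []
        if self.wheres:
            sql += " where " + " and ".join(clause for clause, _ in self.wheres)
            for _, p in self.wheres:
                params.extend(p)
        return sql, params

sql, params = Query("facetable").where("city_id = ?", 1).build()
print(sql, params)  # select * from [facetable] where city_id = ?  [1]
```

Chaining like this replaces a pile of intermediate sql/params variables with one object that is built up incrementally.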
756875827 | MDU6SXNzdWU3NTY4NzU4Mjc= | 1129 | Fix footer to the bottom of the page | abdusco 3243482 | open | 0 | 0 | 2020-12-04T07:28:07Z | 2020-12-04T16:04:29Z | CONTRIBUTOR | Footer doesn't stick to the bottom if the body content isn't long enough to reach the end of the viewport. This can be fixed using flexbox.
```css
body {
  min-height: 100vh;
  display: flex;
  flex-direction: column;
}
.content {
  flex-grow: 1;
}
```
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
797784080 | MDU6SXNzdWU3OTc3ODQwODA= | 62 | Stargazers and workflows commands always require an auth file when using GITHUB_TOKEN | frosencrantz 631242 | open | 0 | 0 | 2021-01-31T18:56:05Z | 2021-01-31T18:56:05Z | CONTRIBUTOR | Requested fix in https://github.com/dogsheep/github-to-sqlite/pull/59. The stargazers and workflows commands always require an auth file, even when using a `GITHUB_TOKEN` environment variable (a sketch of the fallback follows this row). |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/62/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
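A minimal sketch of the fix being requested - prefer the environment variable, fall back to the auth file. The `github_personal_token` key is an assumption about the auth.json layout that `github-to-sqlite auth` writes:

```python
import json
import os

def load_token(auth_path="auth.json"):
    # Prefer GITHUB_TOKEN so the auth file becomes optional
    # (handy inside GitHub Actions).
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        return token
    # Fall back to the auth file; the key name is an assumption.
    with open(auth_path) as fp:
        return json.load(fp)["github_personal_token"]
```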
843884745 | MDU6SXNzdWU4NDM4ODQ3NDU= | 1283 | advanced #export causes unexpected scrolling | mroswell 192568 | open | 0 | 0 | 2021-03-29T22:46:57Z | 2021-03-29T22:46:57Z | CONTRIBUTOR |
The user remedy seems to be to manually remove the "#export" from the URL. This behavior happens in my project, and in: https://covid-19.datasettes.com/covid/economist_excess_deaths (for instance) but not in this table: https://global-power-plants.datasettes.com/global-power-plants/global-power-plants |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
847700726 | MDU6SXNzdWU4NDc3MDA3MjY= | 1285 | Feature Request or Plugin Request: Numeric Range Facets | mroswell 192568 | open | 0 | 0 | 2021-04-01T01:50:20Z | 2021-04-01T02:28:19Z | CONTRIBUTOR | It would be great to offer facets for numeric data ranges. The ranges could pull from typical GIS methods of creating choropleth maps. https://gisgeography.com/choropleth-maps-data-classification/ Of the following, for mapping, I've always preferred a Jenks Natural Breaks, or a cross between Jenks and Pretty breaks.
Here are some links for Natural Breaks, in case this method is unfamiliar.
Per that last link, there is a Jenks Python module... They also describe it as data-intensive for larger datasets. Maybe this is a good plugin idea (a minimal equal-interval sketch follows this row). An example of Equal Intervals would be:
0 – < 10
10 – < 20
20 – < 30
30 – < 40
It's kind of confusing to have that less-than sign in there. It could also be displayed as:
0 – 10
10 – 20
20 – 30
30 – 40
But then it's not completely clear which category 10 is in, for instance. (Best to right-justify, and use an "en dash" between numbers.) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
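A minimal sketch of the "Equal Intervals" classification described above, assuming plain numeric input (Jenks Natural Breaks would need a library such as jenkspy):

```python
def equal_interval_bins(values, n_bins):
    """Group numeric values into n_bins equal-width ranges."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    edges = [lo + i * width for i in range(n_bins + 1)]
    bins = {}
    for v in values:
        # min() keeps the maximum value inside the last bin
        i = min(int((v - lo) / width), n_bins - 1)
        label = f"{edges[i]:g} – < {edges[i + 1]:g}"
        bins.setdefault(label, []).append(v)
    return bins

print(equal_interval_bins([1, 5, 12, 18, 23, 37], 4))
# {'1 – < 10': [1, 5], '10 – < 19': [12, 18], '19 – < 28': [23], '28 – < 37': [37]}
```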
853672224 | MDU6SXNzdWU4NTM2NzIyMjQ= | 1294 | "You can check out any time you like. But you can never leave!" | mroswell 192568 | open | 0 | 0 | 2021-04-08T17:02:15Z | 2021-04-08T18:35:50Z | CONTRIBUTOR | (Feel free to rename this one.)
UPDATE: It would be helpful to have a "Previous page" link available on all but the first table page. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
855451460 | MDU6SXNzdWU4NTU0NTE0NjA= | 1297 | Documentation: json1, and introspection endpoints | mroswell 192568 | open | 0 | 0 | 2021-04-12T00:38:00Z | 2021-04-12T01:29:33Z | CONTRIBUTOR | https://docs.datasette.io/en/stable/facets.html notes that:
When I check /-/versions I see two sections relevant to json1:
The ENABLE_JSON1 compile option makes me think json1 is likely available, but the documentation doesn't say so explicitly. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
860734722 | MDU6SXNzdWU4NjA3MzQ3MjI= | 1302 | Fix disappearing facets | mroswell 192568 | open | 0 | 0 | 2021-04-18T18:42:33Z | 2021-04-20T07:40:15Z | CONTRIBUTOR |
Since my site is devoted to whether disinfectants are Safer or Toxic, having the suggested facet disappear from the suggested facet list is very confusing* to end-users. This, along with a few other issues, unfortunately proved beyond my own programming ability to address. So I hired a Senior-level developer to address a number of issues, including this disappearing act.
I'm not sure how to do a pull request for this, because the plugin contains other functionality that goes beyond this bug.
- I wanted the facets sorted in a certain order (both in the suggested facet list, and the detail lists) (... the detail lists were hopping around all over the place before...)
- I wanted the duplicate facets removed (leaving only the one where you can facet by individual item in an array.)
- I wanted the arrays to be presented in a prettier fashion (I did that in the template... That could be moved over to the plugin at some point)
I'm thinking it'll be very helpful if applicable parts of my project's plugin (sort_suggested_facets_plugin.py) can be incorporated back into datasette, but I leave that to you to consider. (* The disappearing facet bug was especially confusing because I'm removing the filters and sql from the table page, at the request of the organization. The filters and sql detail created a lot of confusion for end users who try to find disinfectants used by Hospitals, for instance, as an '=' won't find them, since they are part of the Use_site array.) My disappearing-facet confusion was documented in my own issue: https://github.com/mroswell/list-N/issues/57 (addressed by the plugin). Other facet-related issues here: https://github.com/mroswell/list-N/issues/54 (addressed by the plugin); https://github.com/mroswell/list-N/issues/15 (addressed by template); https://github.com/mroswell/list-N/issues/53 (not yet addressed). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
988552851 | MDU6SXNzdWU5ODg1NTI4NTE= | 1456 | conda install results in non-functioning `datasette serve` due to out-of-date asgiref | ctb 51016 | open | 0 | 0 | 2021-09-05T16:59:55Z | 2021-09-05T16:59:55Z | CONTRIBUTOR | Over in https://github.com/ctb/2021-sourmash-datasette, I discovered that the following commands fail:
This appears to be because asgiref 3.3.4 doesn't have WebSocketScope, but later versions do - a simple upgrade of asgiref fixes the problem for me, at least to the point where I can run datasette and poke around as usual. I note that over in the conda-forge recipe, https://github.com/conda-forge/datasette-feedstock/blob/master/recipe/meta.yaml pins asgiref to < 3.4.0, but I'm not sure why - so I'm not sure how to best resolve this issue :). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
989109888 | MDU6SXNzdWU5ODkxMDk4ODg= | 1460 | Override column metadata with metadata from another column | MichaelTiemannOSC 72577720 | open | 0 | 0 | 2021-09-06T12:13:33Z | 2021-09-06T12:13:33Z | CONTRIBUTOR | I have a table from the PUDL project (https://github.com/catalyst-cooperative/pudl) that looks like this:
Note that @catalyst-cooperative |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
991206402 | MDExOlB1bGxSZXF1ZXN0NzI5NzA0NTM3 | 1465 | add support for -o --get /path | ctb 51016 | open | 0 | 0 | 2021-09-08T14:30:42Z | 2021-09-08T14:31:45Z | CONTRIBUTOR | simonw/datasette/pulls/1465 | Fixes https://github.com/simonw/datasette/issues/1459 Adds support for combining `-o` with `--get /path`. TODO items:
- [ ] update documentation
- [ ] print out an error message
Note: '@CTB' is used in this PR to flag code that needs revisiting. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1 | ||||||
1006781949 | I_kwDOBm6k_c48AkX9 | 1478 | Documentation Request: Feature alternative ID instead of default ID | mroswell 192568 | open | 0 | 0 | 2021-09-24T19:56:13Z | 2021-09-25T16:18:54Z | CONTRIBUTOR | My data already has an ID that comes from a federal agency. Would love to have documentation on how to modify the template to:
- remove the generated ID from the table
- link the federal ID to the detail page
- ensure that the JSON file uses that as the ID
I'd be happy to include the database ID in the export, but not as a key. I don't want to remove the ID from the database, though, because my experience with the federal agency is that data often has anomalies. I don't want all hell to break loose if they end up applying the same ID to multiple rows (which they haven't done yet). I just don't want it to display in the table or the data exports. Perhaps this isn't a template issue, maybe more of a db manipulation... Margie |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1324659241 | I_kwDOCGYnMM5O9LIp | 459 | Single quoted transform recipes on Windows do not work as expected | shakeel 19921 | open | 0 | 0 | 2022-08-01T16:14:54Z | 2022-08-01T16:14:54Z | CONTRIBUTOR | Trying to follow the tutorial for sqlite-utils and datasette https://datasette.io/tutorials/clean-data on Windows 11 OS
In the step to transform dates into ISO dates, the single-quoted recipe is passed through literally:
```
sqlite-utils convert manatees.db locations \
  REPDATE created_date last_edited_date \
  'r.parsedatetime(value)' --dry-run
1975/01/31 00:00:00+00
 --- becomes:
r.parsedatetime(value)

Would affect 13568 rows
```
However, if I change the code from single quotes to double quotes, it works as expected.
```
sqlite-utils convert manatees.db locations \
  REPDATE created_date last_edited_date \
  "r.parsedatetime(value)" --dry-run
1975/01/31 00:00:00+00
 --- becomes:
1975-01-31T00:00:00+00:00

Would affect 13568 rows
```
Specifying the transform code recipe should work with single quotes on Windows. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1355193529 | I_kwDOCGYnMM5Qxpy5 | 479 | OperationalError: cannot VACUUM from within a transaction | chapmanjacobd 7908073 | open | 0 | 0 | 2022-08-30T05:34:24Z | 2022-08-30T05:34:24Z | CONTRIBUTOR | Maybe `db.vacuum()` could commit (or check for an open transaction) first? Calling it mid-transaction fails like this (a small reproduction follows this row):
```
     46 db["media"].optimize()  # type: ignore
---> 47 db.vacuum()

File ~/.local/lib/python3.10/site-packages/sqlite_utils/db.py:1047, in Database.vacuum(self)
   1045 def vacuum(self):
   1046     "Run a SQLite VACUUM against the current database."

File ~/.local/lib/python3.10/site-packages/sqlite_utils/db.py:470, in Database.execute(self, sql, parameters)
    468     return self.conn.execute(sql, parameters)
    469 else:
--> 470     return self.conn.execute(sql)

OperationalError: cannot VACUUM from within a transaction
```
It might also be nice to add a sentence or two about how transactions are committed on the docs page. When I was swapping out my sqlite3 code for this library it was nice that everything was pretty much drop-in, but I was/am unsure what to do about the places where I explicitly call commit. Related to https://github.com/simonw/sqlite-utils/issues/121 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
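A small reproduction of the error with the standard sqlite3 module, plus the commit-first workaround hinted at above (whether sqlite-utils should do this automatically inside `vacuum()` is the open design question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table media (id integer primary key)")
conn.execute("insert into media default values")  # opens an implicit transaction

try:
    conn.execute("VACUUM")
except sqlite3.OperationalError as ex:
    print(ex)  # cannot VACUUM from within a transaction

conn.commit()           # end the open transaction first...
conn.execute("VACUUM")  # ...and VACUUM now succeeds
```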
1410548368 | I_kwDODFdgUs5UE0KQ | 77 | Feature: Support GitHub discussions | frosencrantz 631242 | open | 0 | 0 | 2022-10-16T16:53:38Z | 2022-10-16T16:53:38Z | CONTRIBUTOR | Hi @simonw I've been a happy user of this tool. Thank you for writing it and sharing it. I wanted to suggest a feature request to support Discussions. For example the VisiData project has discussions https://github.com/saulpw/visidata/discussions , and it would be useful if there was a way to pull that data into the database. However, I'm not offering a pull request. |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/77/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1469796454 | I_kwDOBm6k_c5Xm1Bm | 1920 | Document Datasette.metadata() method | eyeseast 25778 | open | 0 | 0 | 2022-11-30T15:10:36Z | 2022-11-30T15:10:36Z | CONTRIBUTOR | Code is here: https://github.com/simonw/datasette/blob/main/datasette/app.py#L503 This will be the official way to access metadata from plugins. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
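Until that documentation lands, here is a rough usage sketch; the signature is paraphrased from the linked app.py code, so verify it against your Datasette version:

```python
from datasette.app import Datasette

# Signature paraphrased from datasette/app.py - verify before relying on it.
ds = Datasette([], memory=True, metadata={"title": "My instance"})
print(ds.metadata())         # the full metadata dictionary
print(ds.metadata("title"))  # one key, e.g. "My instance"
```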
1469821027 | I_kwDOBm6k_c5Xm7Bj | 1921 | Document methods to get canned queries | eyeseast 25778 | open | 0 | 0 | 2022-11-30T15:26:33Z | 2022-11-30T23:34:21Z | CONTRIBUTOR | Two methods will get canned queries for a Datasette instance (a usage sketch follows this row):
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
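For reference while the docs are pending, a sketch of what usage of the two methods might look like - the names `get_canned_queries()` and `get_canned_query()` and their signatures are paraphrased from datasette/app.py, so double-check them against your version:

```python
import asyncio
from datasette.app import Datasette

async def main():
    ds = Datasette(
        [], memory=True,
        metadata={"databases": {"_memory": {"queries": {"one": "select 1 + 1"}}}},
    )
    # All canned queries for one database, respecting the actor's permissions
    queries = await ds.get_canned_queries("_memory", actor=None)
    print(queries)
    # A single canned query by name
    query = await ds.get_canned_query("_memory", "one", actor=None)
    print(query["sql"])

asyncio.run(main())
```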
1509783085 | I_kwDOBm6k_c5Z_XYt | 1969 | sql-formatter javascript is now not working with CloudFlare rocketloader | fgregg 536941 | open | 0 | 0 | 2022-12-23T21:14:06Z | 2023-01-10T01:56:33Z | CONTRIBUTOR | This is probably not a bug with datasette, but I thought you might want to know, @simonw. I noticed today that my CloudFlare-proxied datasette instance lost the "Format SQL" option. I'm pretty sure it was there last week. In the CloudFlare settings, if I turn off Rocket Loader, I get the "Format SQL" option back. Rocket Loader works by asynchronously loading the javascript, so maybe there was a recent change that doesn't play well with the async loading? I'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1565179870 | I_kwDOBm6k_c5dSr_e | 2013 | Datasette uses non-standard quoting for identifiers | cldellow 193185 | open | 0 | 0 | 2023-02-01T00:05:39Z | 2023-02-01T00:06:30Z | CONTRIBUTOR | Related to #2001, but where #2001 was about literals, this is about identifiers From https://www.sqlite.org/lang_keywords.html:
Datasette uses this quoting here -- https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/utils/__init__.py#L345-L349 -- in some of the other DB access code, and in some of the test fixtures. Migrating to standard double-quote identifiers would make it easier to get Datasette working with alternative backends (a quick sketch of the two styles follows this row). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
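The two quoting styles side by side, as a quick sketch; standard SQL escapes an identifier by wrapping it in double quotes and doubling any embedded double quotes:

```python
def escape_identifier_standard(name):
    # Standard SQL: double-quote the identifier, doubling embedded quotes
    return '"{}"'.format(name.replace('"', '""'))

def escape_identifier_brackets(name):
    # The non-standard [bracket] style (SQLite supports it for
    # MS Access / SQL Server compatibility)
    return "[{}]".format(name)

print(escape_identifier_standard('weird "column"'))  # "weird ""column"""
print(escape_identifier_brackets("select"))          # [select]
```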
1571711808 | I_kwDOBm6k_c5drmtA | 2018 | `check_visibility` gives confusing (wrong?) results if permission is `None` | cldellow 193185 | open | 0 | 0 | 2023-02-06T01:03:08Z | 2023-02-06T01:03:46Z | CONTRIBUTOR | I'm trying to gate access to an edit UI on the user having the `update-row` permission. I expected datasette.check_visibility to be a good way to do this:
```python
visible, private = await datasette.check_visibility(
    request.actor,
    permissions=[
        ("update-row", (database, table)),
    ],
)
```
But it reports the page as visible even with no explicit grant. Based on the update-row permissions docs, I expected this to be default deny, and so no explicit permission would result in false. I think the root cause is in how a `None` permission result is interpreted (an explicit default-deny sketch follows this row). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
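One sketch of an explicit default-deny gate is to call `permission_allowed()` directly with `default=False` instead of relying on check_visibility; the method and its `default=` argument are taken from Datasette's internals documentation, so verify against your version:

```python
# Sketch: explicit default-deny permission check for an edit UI.
async def can_edit(datasette, request, database, table):
    return await datasette.permission_allowed(
        request.actor,
        "update-row",
        resource=(database, table),
        default=False,  # no explicit grant means no access
    )
```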
1605959201 | I_kwDOBm6k_c5fuP4h | 2032 | datasette errors when foreign key integrity is enabled | cldellow 193185 | open | 0 | 0 | 2023-03-02T01:27:51Z | 2023-03-02T01:31:58Z | CONTRIBUTOR | By default, SQLite does not enforce foreign key constraints. I typically enable these checks by running `PRAGMA foreign_keys = ON` inside of a `prepare_connection` hook. If a plugin causes the schema to change (eg datasette-scraper creating a new table, or datasette-edit-schema changing a column), then https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/utils/internal_db.py#L71-L77 will fail with a `FOREIGN KEY constraint failed` error (a minimal reproduction follows this row). This could be resolved by either:
- deleting from the internal tables in an order that satisfies the foreign key constraints
Let me know if you'd be open to a PR that addresses this -- since foreign key constraints aren't enabled by default, I guess it's questionable whether this is a bug. I think I can work around this by inspecting the database parameter in |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
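A minimal reproduction of the failure mode with the standard sqlite3 module - once the pragma is on, deletes must happen in a child-first order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("create table parent (id integer primary key)")
conn.execute(
    "create table child (id integer primary key,"
    " parent_id integer references parent(id))"
)
conn.execute("insert into parent (id) values (1)")
conn.execute("insert into child (id, parent_id) values (1, 1)")

try:
    conn.execute("delete from parent")  # a child row still references this
except sqlite3.IntegrityError as ex:
    print(ex)  # FOREIGN KEY constraint failed

conn.execute("delete from child")   # delete children first...
conn.execute("delete from parent")  # ...then parents succeed
```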
1620515757 | I_kwDOBm6k_c5glxut | 2039 | Subtle bug with `--load-extension` and `--static` flags with absolute Windows paths with `C:\` | asg017 15178711 | open | 0 | 0 | 2023-03-12T21:18:52Z | 2023-03-12T21:18:52Z | CONTRIBUTOR | From the Datasette discord: A user tried running the following command on Windows:
This is hard because most absolute Windows paths have a colon in them (the `C:\` drive prefix). The "solution" is to use a relative path instead, but that doesn't feel that great. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1783304750 | I_kwDOBm6k_c5qSxIu | 2094 | JS Plugin Hooks for the Code Editor | asg017 15178711 | open | 0 | 0 | 2023-07-01T00:51:57Z | 2023-07-01T00:51:57Z | CONTRIBUTOR | When #2052 merges, I'd like to add support to add extensions/functions to the Datasette code editor. I'd eventually like to build a JS plugin for
I did some hacking to see what this would look like, see here:
There can be a new hook that allows JS plugins to add new extensions to the CodeMirror EditorView. This will need some more planning. For example, the CodeMirror bundle in Datasette has functions that we could re-export for plugins to use (so we don't load two versions of |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2094/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1821108702 | I_kwDOCGYnMM5si-ne | 579 | Special handling for SQLite column of type `JSON` | asg017 15178711 | open | 0 | 0 | 2023-07-25T20:37:23Z | 2023-07-25T20:37:23Z | CONTRIBUTOR |
Automatic nesting
According to "Nested JSON Values" in the docs, sqlite-utils will only expand JSON under certain conditions. Instead, columns whose declared type is `JSON` could be expanded automatically (a detection sketch follows this row).
I'm sure there are other ways |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
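A sketch of the detection half of this idea: read each column's declared type from PRAGMA table_info and JSON-decode only the columns declared as `JSON` (the automatic expansion itself is the proposal, not current sqlite-utils behaviour):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table dogs (id integer primary key, meta JSON)")
conn.execute("""insert into dogs values (1, '{"name": "Cleo"}')""")

# PRAGMA table_info exposes each column's declared type (row[2])
json_columns = {
    row[1]
    for row in conn.execute("PRAGMA table_info(dogs)")
    if (row[2] or "").upper() == "JSON"
}

for id_, meta in conn.execute("select id, meta from dogs"):
    record = {"id": id_, "meta": meta}
    for col in json_columns:
        record[col] = json.loads(record[col])  # expand declared-JSON columns
    print(record)  # {'id': 1, 'meta': {'name': 'Cleo'}}
```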
1822813627 | I_kwDOBm6k_c5spe27 | 2108 | some (many?) SQL syntax errors are not throwing errors with a .csv endpoint | fgregg 536941 | open | 0 | 0 | 2023-07-26T16:57:45Z | 2023-07-26T16:58:07Z | CONTRIBUTOR | here's a CTE query that should always fail with a syntax error: `with foo as (nonsense) select * from foo;`
when we make this query against the default endpoint, we do indeed get a 400 status code and the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B
but, if we use the csv endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B
the same happens with other bad sql. but, datasette catches this bad sql - `slect a from foo;` - at both endpoints (a quick sqlite3 check follows this row):
https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B
https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
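For context, SQLite itself rejects the first query at prepare time, so the 200 from the .csv endpoint is about how the response is produced, not about SQLite accepting the SQL - a quick check with the standard sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    conn.execute("with foo as (nonsense) select * from foo;")
except sqlite3.OperationalError as ex:
    print(ex)  # near "nonsense": syntax error
```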
892383270 | MDExOlB1bGxSZXF1ZXN0NjQ1MTAwODQ4 | 12 | Recovering of malformed ENEX file | engdan77 8431437 | open | 0 | 0 | 2021-05-15T07:49:31Z | 2021-05-15T19:57:50Z | FIRST_TIMER | dogsheep/evernote-to-sqlite/pulls/12 | Hey.. Awesome work developing this project - I found it very useful and it saved me some work. Thanks.. :) Some background to this PR... I've been searching around for a tool allowing me to transform my personal collection of Evernote notes to a format that is easier to search and potentially easier to import into future services. I discovered a problem processing my large ~5GB export with the existing source, which uses Python's builtin xml parser: it was unable to succeed without an exception breaking the process. In my first attempt I adapted the code to the more robust lxml package, allowing huge data and with "recover" enabled, but even though it worked better it also failed to process the whole file. Even the memory-efficient etree.iterparse() unfortunately got into trouble. With no luck finding any other library that could successfully parse this enormous file, I instead chose to build a "hugexmlparser" module that parses the file using yield (on a byte-to-byte level) and allows you to set a maximum size for <note> elements, to cater for potentially malformed or undesirably large attachments and to cover potential exceptions. In some cases the parser discovers malformed XML within <content>, so in those cases it also tries to save as much as possible by escaping (to be dealt with at a later stage - better than nothing), and if an end </note> tag is missing before a new (malformed?) note, it adds one after encountering the new start-tag. The code for the recovery process is a bit rough, with certainly room for refactoring, but at the moment it seems to achieve what I wanted. The result is then passed to a minimally changed version of save_note_recovery() to assure the existing code still works. This is also added as a new recover-enex command in click, keeping the original options. A couple of new tests were added as well to check against using this command. This currently works for me, but I thought I'd share a PR in case you find use for it yourself or it's useful to others finding this repository. As a second step... when time allows it would be nice to also be able to easily export from SQLite to formatted HTML/MD with attachments saved - but that might be better as a separate project... or if you or someone else have something that could be shared to save some trouble, I would be interested ;-) |
evernote-to-sqlite 303218369 | pull | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
359075028 | MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx | 364 | Support for other types of databases using external connectors | jsancho-gpl 11912854 | open | 0 | 0 | 2018-09-11T14:31:47Z | 2018-09-11T14:31:47Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/364 | This PR is related to #293, but now all commits have been merged. The purpose is to support other file formats that aren't SQLite, like files with PyTables format. I've tried to accomplish that using external connectors published with entry points. The modifications in the original datasette code are minimal and many are in a separated file. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
655974395 | MDExOlB1bGxSZXF1ZXN0NDQ4MzU1Njgw | 30 | Handle empty bucket on first upload. Allow specifying the endpoint_url for services other than S3 (like b2 and digitalocean spaces) | scanner 110038 | open | 0 | 0 | 2020-07-13T16:15:26Z | 2020-07-13T16:15:26Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/30 | Finally got around to trying dogsheep-photos, but I want to use Backblaze's B2 service instead of AWS S3, so I had to add a way to optionally specify the endpoint_url to connect to. Then, with the bucket being empty, the initial key retrieval would fail. There is probably a better way to see that the bucket is empty than doing a test inside the paginator loop. There is also probably a better way to specify the endpoint_url, as we get and test for it twice using the same code in two different places, but I did not want to spend too much time worrying about it. |
dogsheep-photos 256834907 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/30/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
723499985 | MDExOlB1bGxSZXF1ZXN0NTA1MDc2NDE4 | 5 | Add fitbit-to-sqlite | mrphil007 4632208 | open | 0 | 0 | 2020-10-16T20:04:05Z | 2020-10-16T20:04:05Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep.github.io/pulls/5 | dogsheep.github.io 214746582 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||||
793907673 | MDExOlB1bGxSZXF1ZXN0NTYxNTEyNTAz | 15 | added try / except to write_records | ryancheley 9857779 | open | 0 | 0 | 2021-01-26T03:56:21Z | 2021-01-26T03:56:21Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/15 | to keep the data write from failing if it came across an error during processing. In particular, when trying to convert my HealthKit zip file (and that of my wife's) it would consistently error out with the following:
```
db.py 1709 insert_chunk
result = self.db.execute(query, params)

db.py 226 execute
return self.conn.execute(sql, parameters)

sqlite3.OperationalError: too many SQL variables

db.py 1709 insert_chunk
result = self.db.execute(query, params)

db.py 226 execute
return self.conn.execute(sql, parameters)

sqlite3.OperationalError: too many SQL variables

db.py 1709 insert_chunk
result = self.db.execute(query, params)

db.py 226 execute
return self.conn.execute(sql, parameters)

sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered

healthkit-to-sqlite 8 <module>
sys.exit(cli())

core.py 829 __call__
return self.main(*args, **kwargs)

core.py 782 main
rv = self.invoke(ctx)

core.py 1066 invoke
return ctx.invoke(self.callback, **ctx.params)

core.py 610 invoke
return callback(*args, **kwargs)

cli.py 57 cli
convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf)

utils.py 42 convert_xml_to_sqlite
write_records(records, db)

utils.py 143 write_records
db[table].insert_all(

db.py 1899 insert_all
self.insert_chunk(

db.py 1720 insert_chunk
self.insert_chunk(

db.py 1720 insert_chunk
self.insert_chunk(

db.py 1714 insert_chunk
result = self.db.execute(query, params)

db.py 226 execute
return self.conn.execute(sql, parameters)

sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered
```
Adding the try / except in the `write_records` function keeps the conversion going past these errors (a sketch of the shape of that change follows this row). |
healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
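The diff isn't shown above; as a sketch of the shape of the change, wrapping the insert so one bad batch doesn't abort the whole conversion (sqlite-utils' `insert_all(..., alter=True)` is another option for the missing-column case specifically):

```python
from sqlite_utils import Database

def safe_insert(db: Database, table: str, records: list):
    # Sketch of the try/except idea: log and continue when one batch
    # fails instead of losing the entire HealthKit conversion.
    try:
        db[table].insert_all(records)
    except Exception as ex:  # e.g. the OperationalErrors quoted above
        print(f"Skipping batch for {table}: {ex}")
```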
830901133 | MDExOlB1bGxSZXF1ZXN0NTkyMzY0MjU1 | 16 | Add a fallback ID, print if no ID found | n8henrie 1234956 | open | 0 | 0 | 2021-03-13T13:38:29Z | 2021-03-13T14:44:04Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/16 | healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||||
836064851 | MDExOlB1bGxSZXF1ZXN0NTk2NjI3Nzgw | 18 | Add datetime parsing | n8henrie 1234956 | open | 0 | 0 | 2021-03-19T14:34:22Z | 2021-03-19T14:34:22Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/18 | Parses the datetime columns so they are subsequently properly recognized as datetime. Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/17 |
healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/18/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
925384329 | MDExOlB1bGxSZXF1ZXN0NjczODcyOTc0 | 7 | Add instagram-to-sqlite | gavindsouza 36654812 | open | 0 | 0 | 2021-06-19T12:26:16Z | 2021-07-28T07:58:59Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep.github.io/pulls/7 | The tool covers only chat imports at the time of opening this PR but I'm planning to import everything else that I feel inquisitive about |
dogsheep.github.io 214746582 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
947596222 | MDExOlB1bGxSZXF1ZXN0NjkyNTU3Mzgx | 1399 | Multiple sort | jgryko5 87192257 | open | 0 | 0 | 2021-07-19T12:20:14Z | 2021-07-19T12:20:14Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1399 | Closes #197. I have added support for sorting by multiple parameters as mentioned in the issue above, and together with that, a suggestion on how to implement such sorting in the user interface. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
981690086 | MDExOlB1bGxSZXF1ZXN0NzIxNjg2NzIx | 67 | Replacing step ID key with step_id | jshcmpbll 16374374 | open | 0 | 0 | 2021-08-28T01:26:41Z | 2021-08-28T01:27:00Z | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/67 | Workflows that have an e.g.
Changes
I'm proposing that the key for a step's ID be stored as `step_id`. Special thanks to @sarcasticadmin @egiffen and @ruebenramirez for helping a bit on this 😄 |
github-to-sqlite 207052882 | pull | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/67/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
987985935 | MDExOlB1bGxSZXF1ZXN0NzI2OTkwNjgw | 35 | Support for Datasette's --base-url setting | brandonrobertz 2670795 | open | 0 | 0 | 2021-09-03T17:47:45Z | 2021-09-03T17:47:45Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-beta/pulls/35 | This makes it so you can use Dogsheep if you're using Datasette with the `--base-url` setting. |
dogsheep-beta 197431109 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/35/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1001104942 | PR_kwDOBm6k_c4r-EVH | 1475 | feat: allow joins using _through in both directions | bram2000 5268174 | open | 0 | 0 | 2021-09-20T15:28:20Z | 2021-09-20T15:28:20Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1475 | Currently the `_through` argument only supports joins in one direction. This is an admittedly hacky change to implement bidirectional joins using `_through`. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1013506559 | PR_kwDODFdgUs4skaNS | 68 | Add support for retrieving teams / members | philwills 68329 | open | 0 | 0 | 2021-10-01T15:55:02Z | 2021-10-01T15:59:53Z | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/68 | Adds a method for retrieving all the teams within an organisation and all the members in those teams. The latter is stored as a join table |
github-to-sqlite 207052882 | pull | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1042759769 | PR_kwDOEhK-wc4uAJb9 | 15 | include note tags in the export | d-rep 436138 | open | 0 | 0 | 2021-11-02T20:04:31Z | 2021-11-02T20:04:31Z | FIRST_TIME_CONTRIBUTOR | dogsheep/evernote-to-sqlite/pulls/15 | When parsing the Evernote export this also picks up each note's tags, so the data can be queried by tag after the script has run.
My .enex source file is 3+ years old so I am assuming the structure hasn't changed. Interestingly, my notebook names show up in the tags list where the tag name is prefixed with |
evernote-to-sqlite 303218369 | pull | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1046887492 | PR_kwDODFE5qs4uMsMJ | 9 | Removed space from filename My Activity.json | widadmogral 91880982 | open | 0 | 0 | 2021-11-08T00:04:31Z | 2021-11-08T00:04:31Z | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/9 | The file name from Google Takeout has no space. The code only runs without error if the file name is "MyActivity.json" and not "My Activity.json". Is this a recent change by Google? |
google-takeout-to-sqlite 206649770 | pull | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1149402080 | PR_kwDODFdgUs4zaUta | 70 | scrape-dependents: enable paging through package menu option if present | stanbiryukov 36061055 | open | 0 | 0 | 2022-02-24T15:07:25Z | 2022-02-24T15:07:25Z | FIRST_TIME_CONTRIBUTOR | dogsheep/github-to-sqlite/pulls/70 | Some repos organize network dependents by a Package toggle. This PR adds the ability to page through those options and scrape underlying dependents. |
github-to-sqlite 207052882 | pull | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1160327106 | PR_kwDODEm0Qs4z_V3w | 65 | Update Twitter dev link, clarify apps vs projects | rixx 2657547 | open | 0 | 0 | 2022-03-05T11:56:08Z | 2022-03-05T11:56:08Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/65 | Twitter pushes you heavily towards v2 projects instead of v1 apps – I know the README mentions v1 API compatibility at the top, but I still nearly got turned around here. |
twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/65/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1244082183 | PR_kwDODEm0Qs44PPLy | 66 | Ageinfo workaround | ashanan 11887 | open | 0 | 0 | 2022-05-21T21:08:29Z | 2022-05-21T21:09:16Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/66 | I'm not sure if this is due to a new format or just because my ageinfo file is blank, but trying to import an archive would crash when it got to that file. This PR adds a guard clause to the ageinfo handling. Let me know if you want any changes! |
twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/66/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1250287607 | PR_kwDODFE5qs44jvRV | 11 | Update README.md | ashanan 11887 | open | 0 | 0 | 2022-05-27T03:13:59Z | 2022-05-27T03:13:59Z | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/11 | Fix typo |
google-takeout-to-sqlite 206649770 | pull | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/11/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1293698966 | PR_kwDOD079W84600uh | 37 | Fix former command name in readme | DanLipsitt 578773 | open | 0 | 0 | 2022-07-05T02:09:13Z | 2022-07-05T02:09:13Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/37 | Looks like a previous commit missed a |
dogsheep-photos 256834907 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/37/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1307359454 | PR_kwDOBm6k_c47iWbd | 1772 | Convert to setup.cfg | kfdm 89725 | open | 0 | 0 | 2022-07-18T03:39:53Z | 2022-07-18T03:39:53Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1772 | Recent versions of setuptools can run most things from setup.cfg so one can have a simpler version that does not require executing code on install. The bulk of the changes were automated by running https://pypi.org/project/setup-py-upgrade/ with a few minor edits for the bits that it can not auto convert (the initial |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1353418822 | PR_kwDODtX3eM497MOV | 5 | The program fails when the user has no submissions | fernand0 2467 | open | 0 | 0 | 2022-08-28T17:25:45Z | 2022-08-28T17:25:45Z | FIRST_TIME_CONTRIBUTOR | dogsheep/hacker-news-to-sqlite/pulls/5 | Tested with a user that has no submissions; the result was a crash. There is a problem of style with the patch (but I'm not sure what to do about it), because with the new initialization (`submitted = []`) the existing check is no longer needed. Maybe there is a more adequate way of doing this (a sketch of the guard follows this row). |
hacker-news-to-sqlite 248903544 | pull | { "url": "https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
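A sketch of that guard, assuming the Hacker News API's user JSON simply omits the `submitted` key for users with no submissions:

```python
import requests

def user_submissions(username):
    url = f"https://hacker-news.firebaseio.com/v0/user/{username}.json"
    user = requests.get(url).json() or {}  # the API returns null for unknown users
    # No submissions means no "submitted" key, so default to an empty list
    return user.get("submitted") or []
```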
1393330070 | PR_kwDODD6af84__DNJ | 14 | Photo links | redmanmale 6782721 | open | 0 | 0 | 2022-10-01T09:44:15Z | 2022-11-18T17:10:49Z | FIRST_TIME_CONTRIBUTOR | dogsheep/swarm-to-sqlite/pulls/14 |
Fixes #9. |
swarm-to-sqlite 205429375 | pull | { "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1513237712 | PR_kwDODEm0Qs5GUoG_ | 67 | Add support for app-only bearer tokens | sometimes-i-send-pull-requests 26161409 | open | 0 | 0 | 2022-12-28T23:31:20Z | 2022-12-28T23:31:20Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/67 | Previously, twitter-to-sqlite only supported OAuth1 authentication, and the token must be on behalf of a user. However, Twitter also supports application-only bearer tokens, documented here: https://developer.twitter.com/en/docs/authentication/oauth-2-0/bearer-tokens This PR adds support to twitter-to-sqlite for using application-only bearer tokens. To use, the auth.json file just needs to contain a "bearer_token" key instead of "api_key", "api_secret_key", etc. |
twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/67/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
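A sketch of what app-only auth looks like, following the PR's description of an auth.json containing a `bearer_token` key (the endpoint here is just an illustration):

```python
import json

import requests

# Per the PR description: auth.json holds {"bearer_token": "..."}
# instead of the four OAuth1 keys.
with open("auth.json") as fp:
    auth = json.load(fp)

session = requests.Session()
session.headers["Authorization"] = "Bearer {}".format(auth["bearer_token"])

response = session.get(
    "https://api.twitter.com/1.1/users/show.json",
    params={"screen_name": "simonw"},
)
print(response.status_code)
```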
1513237982 | PR_kwDODEm0Qs5GUoKL | 68 | Archive: Import mute table | sometimes-i-send-pull-requests 26161409 | open | 0 | 0 | 2022-12-28T23:32:06Z | 2022-12-28T23:32:06Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/68 | twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||||
1513238152 | PR_kwDODEm0Qs5GUoMM | 69 | Archive: Import new tweets table name | sometimes-i-send-pull-requests 26161409 | open | 0 | 0 | 2022-12-28T23:32:44Z | 2022-12-28T23:32:44Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/69 | Given the code here, it seems like in the past this file was named "tweet.js". In recent exports, it's named "tweets.js". The archive importer needs to be modified to take this into account. Existing logic is reused for importing this table. (However, the resulting table name will be different, matching the different file name -- archive_tweets, rather than archive_tweet). |
twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1513238314 | PR_kwDODEm0Qs5GUoN6 | 70 | Archive: Import Twitter Circle data | sometimes-i-send-pull-requests 26161409 | open | 0 | 0 | 2022-12-28T23:33:09Z | 2022-12-28T23:33:09Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/70 | twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||||
1513238455 | PR_kwDODEm0Qs5GUoPm | 71 | Archive: Fix "ni devices" typo in importer | sometimes-i-send-pull-requests 26161409 | open | 0 | 0 | 2022-12-28T23:33:31Z | 2022-12-28T23:33:31Z | FIRST_TIME_CONTRIBUTOR | dogsheep/twitter-to-sqlite/pulls/71 | twitter-to-sqlite 206156866 | pull | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/71/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||||
1515717718 | PR_kwDOC8tyDs5Gc-VH | 23 | Include workout statistics | badboy 2129 | open | 0 | 0 | 2023-01-01T17:29:57Z | 2023-01-01T17:29:57Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/23 | Not sure when this changed (iOS 16 maybe?), but the workout statistics now come through differently in the export. Adding them as another column at least allows me to pull these out (using SQLite's JSON support). I'm running with this patch on my own data now. |
healthkit-to-sqlite 197882382 | pull | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/23/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1581218043 | PR_kwDOBm6k_c5JyqPy | 2025 | Add database metadata to index.html template context | palewire 9993 | open | 0 | 0 | 2023-02-12T11:16:58Z | 2023-02-12T11:17:14Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2025 | Fixes #2016 :books: Documentation preview :books:: https://datasette--2025.org.readthedocs.build/en/2025/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1586980089 | PR_kwDOBm6k_c5KF-by | 2026 | Avoid repeating primary key columns if included in _col args | runderwood 8513 | open | 0 | 0 | 2023-02-16T04:16:25Z | 2023-02-16T04:16:41Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2026 | ...while maintaining given order. Fixes #1975 (if I'm understanding correctly). :books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1639873822 | PR_kwDOBm6k_c5M29tt | 2044 | Expand labels in row view as well (patch for 0.64.x branch) | tmcl-it 82332573 | open | 0 | 0 | 2023-03-24T18:44:44Z | 2023-03-24T18:44:57Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2044 | This is a version of #2031 for the 0.64.x branch. :books: Documentation preview :books:: https://datasette--2044.org.readthedocs.build/en/2044/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1650984552 | PR_kwDOJHON9s5NbyYN | 13 | use universal command | amlestin 14314871 | open | 0 | 0 | 2023-04-02T15:10:54Z | 2023-04-02T15:37:34Z | FIRST_TIME_CONTRIBUTOR | dogsheep/apple-notes-to-sqlite/pulls/13 | apple-notes-to-sqlite 611552758 | pull | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||||
1674322631 | PR_kwDOBm6k_c5OpEz_ | 2061 | Add "Packaging a plugin using Poetry" section in docs | rclement 1238873 | open | 0 | 0 | 2023-04-19T07:23:28Z | 2023-04-19T07:27:18Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2061 | This PR adds a new section about packaging a plugin using Poetry. :books: Documentation preview :books:: https://datasette--2061.org.readthedocs.build/en/2061/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1708981860 | PR_kwDOBm6k_c5QdMea | 2074 | sort files by mtime | abbbi 3919561 | open | 0 | 0 | 2023-05-14T15:25:15Z | 2023-05-14T15:25:29Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2074 | Serving multiple database files, I was getting tired of the default sort; this changes the sort order so the most recently changed databases are at the top of the list, so I don't have to scroll down - lazy as I am ;) :books: Documentation preview :books:: https://datasette--2074.org.readthedocs.build/en/2074/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1715468032 | PR_kwDOBm6k_c5QzEAM | 2076 | Datsette gpt plugin | StudioCordillera 130708713 | open | 0 | 0 | 2023-05-18T11:22:30Z | 2023-05-18T11:22:45Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2076 | :books: Documentation preview :books:: https://datasette--2076.org.readthedocs.build/en/2076/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1734786661 | PR_kwDOBm6k_c5R0fcK | 2082 | Catch query interrupted on facet suggest row count | redraw 10843208 | open | 0 | 0 | 2023-05-31T18:42:46Z | 2023-05-31T18:45:26Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2082 | Just like the facet count queries, the facet-suggest row count query should catch `QueryInterrupted` instead of erroring. I've included a test. :books: Documentation preview :books:: https://datasette--2082.org.readthedocs.build/en/2082/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1794604602 | PR_kwDOBm6k_c5U-akg | 2096 | Clarify docs for descriptions in metadata | garthk 15906 | open | 0 | 0 | 2023-07-08T01:57:58Z | 2023-07-08T01:58:13Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2096 | G'day! I got confused while debugging earlier today. That's on me, but it does strike me that a little repetition in the metadata documentation might help those flicking around it rather than reading it from top to bottom. No worries if you think otherwise. :books: Documentation preview :books:: https://datasette--2096.org.readthedocs.build/en/2096/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1802613340 | PR_kwDOBm6k_c5VZhfw | 2100 | Make primary key view accessible to render_cell hook | meowcat 1563881 | open | 0 | 0 | 2023-07-13T09:30:36Z | 2023-08-10T13:15:41Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2100 | :books: Documentation preview :books:: https://datasette--2100.org.readthedocs.build/en/2100/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1827436260 | PR_kwDOD079W85WtVyk | 39 | Missing option in datasette instructions | coldclimate 319473 | open | 0 | 0 | 2023-07-29T10:34:48Z | 2023-07-29T10:34:48Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/39 | Gotta tell it where to look |
dogsheep-photos 256834907 | pull | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/39/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1865983069 | PR_kwDOBm6k_c5YvQSi | 2158 | add brand option to metadata.json. | publicmatt 52261150 | open | 0 | 0 | 2023-08-24T22:37:41Z | 2023-08-24T22:37:57Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2158 | This adds a brand link to the top navbar if the 'brand' key is populated in metadata.json. The link's href will be either '#' or the contents of 'brand_url' in metadata.json. I was able to get this done on my own site by replacing the template. :books: Documentation preview :books:: https://datasette--2158.org.readthedocs.build/en/2158/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1866815458 | PR_kwDOBm6k_c5YyF-C | 2159 | Implement Dark Mode colour scheme | jamietanna 3315059 | open | 0 | 0 | 2023-08-25T10:46:23Z | 2023-08-25T10:46:35Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2159 | Closes #2095. :books: Documentation preview :books:: https://datasette--2159.org.readthedocs.build/en/2159/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1 | ||||||
1880968405 | PR_kwDOJHON9s5ZhYny | 14 | fix: fix the problem of Chinese character garbling | barretlee 2698003 | open | 0 | 0 | 2023-09-04T23:48:28Z | 2023-09-04T23:48:28Z | FIRST_TIME_CONTRIBUTOR | dogsheep/apple-notes-to-sqlite/pulls/14 |
|
apple-notes-to-sqlite 611552758 | pull | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1884499674 | PR_kwDODFE5qs5ZtYMc | 13 | use poetry for packages, asdf for versioning, and gh actions for ci | iloveitaly 150855 | open | 0 | 0 | 2023-09-06T17:59:16Z | 2023-09-06T17:59:16Z | FIRST_TIME_CONTRIBUTOR | dogsheep/google-takeout-to-sqlite/pulls/13 |
|
google-takeout-to-sqlite 206649770 | pull | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
505673645 | MDU6SXNzdWU1MDU2NzM2NDU= | 16 | Do a better job with archived direct message threads | simonw 9599 | open | 0 | 0 | 2019-10-11T06:55:21Z | 2019-10-11T06:55:27Z | MEMBER | twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||||
599776345 | MDU6SXNzdWU1OTk3NzYzNDU= | 24 | Feature idea: github-to-sqlite everything ... | simonw 9599 | open | 0 | 0 | 2020-04-14T18:34:00Z | 2020-04-14T18:34:00Z | MEMBER | At the moment, if you want to pull all your repos, issues, issue comments etc. you have to do it with a sequence of separate commands. Consider adding a |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/24/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
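
The issue above asks for a single command that pulls everything at once. A minimal sketch of what such a wrapper could do today, assuming the existing `repos`, `issues` and `issue-comments` sub-commands and a placeholder repository name (the filename and repo are placeholders, not part of the issue):

```python
import subprocess

# Hypothetical sketch: run the existing github-to-sqlite sub-commands in
# sequence, which is roughly what an "everything" command could wrap.
# The database filename and repository name below are placeholders.
db_path = "github.db"
repo = "simonw/datasette"
owner = repo.split("/")[0]

for args in (
    ["github-to-sqlite", "repos", db_path, owner],
    ["github-to-sqlite", "issues", db_path, repo],
    ["github-to-sqlite", "issue-comments", db_path, repo],
):
    subprocess.run(args, check=True)
```
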
621486115 | MDU6SXNzdWU2MjE0ODYxMTU= | 27 | photos_with_apple_metadata view should include labels | simonw 9599 | open | 0 | 0 | 2020-05-20T06:06:17Z | 2020-05-20T06:06:17Z | MEMBER | Here's one way to add that:
|
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/27/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
673602857 | MDU6SXNzdWU2NzM2MDI4NTc= | 9 | Define a view that displays photos correctly | simonw 9599 | open | 0 | 0 | 2020-08-05T14:53:39Z | 2020-08-05T14:53:39Z | MEMBER | The

id | createdAt | source | prefix | suffix | width | height | visibility | created ▲ | user
-- | -- | -- | -- | -- | -- | -- | -- | -- | --
5e12c9708506bc000840262a | January 06, 2020 - 05:45:20 UTC | Swarm for iOS 1 | https://fastly.4sqi.net/img/general/ | /15889193_AXxGk4I1nbzUZuyYqObgbXdJNyEHiwj6AUDq0tPZWtw.jpg | 1920 | 1440 | public | 2020-01-06T05:45:20 | 15889193

The photo URL can be derived from those pieces - define a SQL view which does that (using |
swarm-to-sqlite 205429375 | issue | { "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
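
Foursquare/Swarm photo URLs are typically prefix + 'WIDTHxHEIGHT' + suffix. A minimal sketch of a view along those lines, assuming a `photos` table with the columns shown in the row above (the view name and the `sqlite_utils` usage are assumptions, not the issue's actual implementation):

```python
import sqlite_utils

# Hypothetical sketch: expose a view that assembles the full photo URL from
# the prefix/suffix/width/height pieces stored by swarm-to-sqlite.
# The table name "photos" and the view name are assumptions.
db = sqlite_utils.Database("swarm.db")
db.create_view(
    "photos_with_urls",
    """
    select
        id,
        createdAt,
        source,
        prefix || width || 'x' || height || suffix as photo_url,
        width,
        height,
        visibility
    from photos
    """,
)
```
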
689848827 | MDU6SXNzdWU2ODk4NDg4Mjc= | 6 | ISO timestamps | simonw 9599 | open | 0 | 0 | 2020-09-01T06:16:42Z | 2020-09-01T06:16:42Z | MEMBER | The
Should use ISO instead, e.g. |
pocket-to-sqlite 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/6/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
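
A minimal sketch of the conversion, assuming the column holds Unix epoch seconds and is called `time_read` on an `items` table with an `item_id` primary key (all of those names are assumptions, since the specifics are elided above):

```python
from datetime import datetime, timezone

import sqlite_utils

# Hypothetical sketch: rewrite an epoch-seconds column as ISO 8601 strings.
# Table, column and primary-key names are assumptions.
db = sqlite_utils.Database("pocket.db")
for row in list(db["items"].rows):
    if row.get("time_read"):
        iso = datetime.fromtimestamp(int(row["time_read"]), tz=timezone.utc).isoformat()
        db["items"].update(row["item_id"], {"time_read": iso})
```
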
689850810 | MDU6SXNzdWU2ODk4NTA4MTA= | 6 | Set up a demo instance | simonw 9599 | open | 0 | 0 | 2020-09-01T06:20:24Z | 2020-09-01T06:20:24Z | MEMBER | Once I've got the Datasette plugin to a state where it's worth building a demo: #3 I can use data from my public https://github-to-sqlite.dogsheep.net/ demo plus the Pocket data subset I use for the demo in https://github.com/dogsheep/pocket-to-sqlite/issues/5 - I could pull in the https://dogsheep-photos.dogsheep.net/ photos data too. |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/6/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
692202408 | MDU6SXNzdWU2OTIyMDI0MDg= | 12 | Idea: maps and GeoJSON support | simonw 9599 | open | 0 | 0 | 2020-09-03T18:47:10Z | 2020-09-04T01:45:03Z | MEMBER | It would be cool if the Then I could render workout routes on a map, or Swarm checkin points. |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
703216044 | MDU6SXNzdWU3MDMyMTYwNDQ= | 49 | Feature: gists and starred gists | simonw 9599 | open | 0 | 0 | 2020-09-17T02:30:52Z | 2020-09-17T02:30:52Z | MEMBER | github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/49/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||||
703218448 | MDU6SXNzdWU3MDMyMTg0NDg= | 51 | Documentation for twitter-to-sqlite fetch | simonw 9599 | open | 0 | 0 | 2020-09-17T02:38:10Z | 2020-09-17T02:38:10Z | MEMBER | It's mentioned in passing in the README but it deserves its own section:
|
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/51/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
709789634 | MDU6SXNzdWU3MDk3ODk2MzQ= | 27 | Sort order is not persisted by facet filter links | simonw 9599 | open | 0 | 0 | 2020-09-27T18:22:07Z | 2020-09-27T18:22:07Z | MEMBER | A link to |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/27/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
836923194 | MDU6SXNzdWU4MzY5MjMxOTQ= | 32 | JSON API for search results | simonw 9599 | open | 0 | 0 | 2021-03-20T22:21:36Z | 2021-03-20T22:21:36Z | MEMBER | dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/32/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||||
897212458 | MDU6SXNzdWU4OTcyMTI0NTg= | 63 | Ability to fetch commits from branches other than the default | simonw 9599 | open | 0 | 0 | 2021-05-20T17:58:08Z | 2021-05-20T17:58:08Z | MEMBER | This tool is currently almost entirely ignorant of the concept of branches. One example: you can't retrieve commits from any branch other than the default (usually main). |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/63/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1616440856 | I_kwDOJHON9s5gWO4Y | 5 | Configure full text search | simonw 9599 | open | 0 | 0 | 2023-03-09T05:20:46Z | 2023-03-09T05:20:46Z | MEMBER | FTS would be useful. Maybe even extract the plain text from the notes to make that index easier to create, rather than creating it against the HTML. Can use the |
apple-notes-to-sqlite 611552758 | issue | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
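
A minimal sketch of configuring FTS with sqlite-utils, assuming the notes end up in a `notes` table with `title` and `body` columns (those names are assumptions):

```python
import sqlite_utils

# Hypothetical sketch: enable SQLite full-text search on the notes table and
# keep the index updated with triggers. Table and column names are assumptions.
db = sqlite_utils.Database("notes.db")
db["notes"].enable_fts(["title", "body"], create_triggers=True)
```
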
1617938730 | I_kwDOJHON9s5gb8kq | 9 | Default to just storing plaintext, store HTML if `--html` is passed | simonw 9599 | open | 0 | 0 | 2023-03-09T20:19:06Z | 2023-03-09T20:19:06Z | MEMBER | The full I'm tempted to say you don't get HTML unless you pass a |
apple-notes-to-sqlite 611552758 | issue | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1888477283 | I_kwDOC8SPRc5wj-Bj | 38 | Run `rebuild_fts` after building the index | simonw 9599 | open | 0 | 0 | 2023-09-08T23:17:45Z | 2023-09-08T23:17:45Z | MEMBER | In: - https://github.com/simonw/datasette.io/issues/152#issuecomment-1712323347 This turned out to be the fix:
|
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/38/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
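
The actual fix snippet is elided above. As a general illustration, FTS5 tables can be repopulated with the standard 'rebuild' command; a hypothetical sketch, assuming the index lives in a `search_index_fts` table inside a `dogsheep-beta.db` file (both names are assumptions):

```python
import sqlite3

# Hypothetical sketch: repopulate an FTS5 index after rebuilding the table it
# indexes. 'rebuild' is the standard FTS5 special command; the database
# filename and FTS table name are assumptions.
conn = sqlite3.connect("dogsheep-beta.db")
conn.execute("INSERT INTO search_index_fts(search_index_fts) VALUES('rebuild')")
conn.commit()
conn.close()
```
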
504720731 | MDU6SXNzdWU1MDQ3MjA3MzE= | 1 | Add more details on how to request data from google takeout correctly. | dazzag24 1055831 | open | 0 | 0 | 2019-10-09T15:17:34Z | 2019-10-09T15:17:34Z | NONE | The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:
In addition, unless you specify that "My Activity" is downloaded in JSON format, the default is HTML. This then causes the
command to fail, as the download only contains HTML files, not JSON files. Thanks |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
541274681 | MDU6SXNzdWU1NDEyNzQ2ODE= | 2 | Add linkedin-to-sqlite | mnp 881925 | open | 0 | 0 | 2019-12-21T03:13:40Z | 2019-12-21T03:13:40Z | NONE | There is an API available: https://developer.linkedin.com/docs/rest-api# At a minimum, I would think the contact list and messages would be of interest. |
dogsheep.github.io 214746582 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
664793260 | MDU6SXNzdWU2NjQ3OTMyNjA= | 2 | Yak shave | ekg 145425 | open | 0 | 0 | 2020-07-23T22:04:18Z | 2020-07-23T22:04:18Z | NONE | Just a quick note... The 23andme data is not exactly your genome, but a SNP chip of your genome. It's "some of your genotypes." Or about 0.1% of your genome. Nice work in any case! It deserves to be liberated!!!!! |
genome-to-sqlite 209590345 | issue | { "url": "https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
697162939 | MDU6SXNzdWU2OTcxNjI5Mzk= | 20 | Add more tags so people can find your project. | ran88dom99 7902810 | open | 0 | 0 | 2020-09-09T21:14:09Z | 2020-09-09T21:14:09Z | NONE | quantified-self habit-tracking google-fit time-tracking wearables quantifiedself for example |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/20/reactions", "total_count": 1, "+1": 0, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
769397742 | MDU6SXNzdWU3NjkzOTc3NDI= | 3 | sqlite-utils error on takeout import | khimaros 231498 | open | 0 | 0 | 2020-12-17T01:18:48Z | 2020-12-17T01:19:04Z | NONE |
there is no table create in additionally, this package and hackernews-to-sqlite have conflicting |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/3/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
787104850 | MDU6SXNzdWU3ODcxMDQ4NTA= | 1192 | Form Plugin for in-depth Datasette Querying | tomershvueli 1024355 | open | 0 | 0 | 2021-01-15T18:24:50Z | 2021-01-15T18:24:50Z | NONE | I envision a sort of easy-to-build form plugin that would be able to map a user's inputs to different fields/columns in a Datasette database. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
797728929 | MDU6SXNzdWU3OTc3Mjg5Mjk= | 8 | QUESTION: extract full text | darribas 417363 | open | 0 | 0 | 2021-01-31T14:50:10Z | 2021-01-31T14:50:10Z | NONE | This may already be solved or an existing feature, but I couldn't figure it out: is it possible to also extract and store the full text of the saved pages? The same way that Pocket parses the text, it'd be amazing to be able to store (and thus make searchable later) the text. Thank you very much for the project, it's such an amazing idea! |
pocket-to-sqlite 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
808771690 | MDU6SXNzdWU4MDg3NzE2OTA= | 1225 | More flexible formatting of records with CSS grid | mhalle 649467 | open | 0 | 0 | 2021-02-15T19:28:17Z | 2021-02-15T19:28:35Z | NONE | In several applications I've been experimenting with alternate formatting of datasette query results. Lately I've found that CSS grids work very well and seem quite general for formatting rows. In CSS I use grid templates to define the layout of each record and the regions for each field, hiding the fields I don't want. It's pretty flexible and looks good. It's also a great basis for highly responsive layout. I initially thought I'd only use this feature for record detail views, but now I use it for index views as well. However, there are some limitations:
* With the existing table templates, it seems that you can change the It would be helpful to at least have an official example or test that used a grid layout for records to make sure nothing in datasette breaks with it. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
818684978 | MDU6SXNzdWU4MTg2ODQ5Nzg= | 243 | How can I use this utility to deal with FTS on column metadata of tables? | svjack 27874014 | open | 0 | 0 | 2021-03-01T09:45:05Z | 2021-03-01T09:45:05Z | NONE | Thank you for releasing this excellent project. When I use it on a multi-table database, I want to implement convenient search over the column names of the different tables. I want to build a meta table that stores metadata about the columns of the different tables, and search that meta table to find rows in the data tables it describes. Does this project provide a simple function for that? You can think of it as having a knowledge graph about the tables in the db, and I save this knowledge graph into the db with FTS enabled. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
824750134 | MDU6SXNzdWU4MjQ3NTAxMzQ= | 1251 | facet option not appearing when table is big | verajosemanuel 15836677 | open | 0 | 0 | 2021-03-08T16:54:04Z | 2021-03-08T16:54:16Z | NONE | I have a big table with more than 500,000 rows. When trying to facet by one of my columns, the facet options are not available as they are for the other, smaller tables. I have tried to set it in the URL as:
to no avail. Is there any limit? How can I force the "facet" option to appear for big tables? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
830283447 | MDU6SXNzdWU4MzAyODM0NDc= | 34 | bucket name | dsisnero 6213 | open | 0 | 0 | 2021-03-12T16:40:57Z | 2021-03-12T16:40:57Z | NONE | I followed the instructions to set up credentials, but I am getting an "invalid bucket name" error. Can you put a sample auth.json file in the base of the repo that shows the correct format for this? Thanks |
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/34/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
836063389 | MDU6SXNzdWU4MzYwNjMzODk= | 17 | Datetime columns are not properly formatted to be recognizes as datetime | n8henrie 1234956 | open | 0 | 0 | 2021-03-19T14:33:04Z | 2021-03-19T14:33:04Z | NONE | Currently, the datetimes are formatted in a way that is not recognized by datasette-vega for plotting with a For example, if you have datasette running locally with
The plot is blank unless you choose The If instead the format for this column is changed slightly: I have a PR that addresses this issue, will submit shortly. |
healthkit-to-sqlite 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/17/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
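
The exact before/after formats are elided above. As an illustration of the kind of reformatting involved, a hypothetical sketch that turns a space-separated datetime with a numeric offset into an ISO 8601 string (the input format is an assumption about the HealthKit export):

```python
from datetime import datetime

# Hypothetical sketch: reformat "2019-11-08 07:07:33 -0700" style values into
# ISO 8601 so plotting tools recognise the column as a datetime.
def to_iso(value: str) -> str:
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S %z").isoformat()

print(to_iso("2019-11-08 07:07:33 -0700"))  # -> 2019-11-08T07:07:33-07:00
```
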
849512840 | MDU6SXNzdWU4NDk1MTI4NDA= | 1288 | Facets: show counts for null | jungle-boogie 1111743 | open | 0 | 0 | 2021-04-02T22:33:44Z | 2021-04-02T22:33:44Z | NONE | Hi, Thank you for Datasette and for being a fan of SQLite! Not every record will always contain data in every column. So when using a facet on a column where some records have data and others don't, you don't get an accurate count of the results. Please consider also counting and showing null records in facets. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1288/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
870946764 | MDU6SXNzdWU4NzA5NDY3NjQ= | 1312 | how to query many-to-many relationship via json API? | bram2000 5268174 | open | 0 | 0 | 2021-04-29T12:09:49Z | 2021-04-29T12:09:49Z | NONE | Hi, Firstly thanks for Datasette, it's great! I'm trying to use the JSON API to query data from a Datasette instance. I have a simple 3 table many-to-many relationship, like so:
the Now I want to return "all documents within category X" but I cannot see a way to do this without executing two queries; the first to look up the row_id of category X, and the second to join I could easily write this in SQL, but this makes programmatic handling of pagination much more difficult (we'd have to dynamically modify the SQL to select the row_id and include the correct where and limit clauses). Is there a way to achieve this using the JSON API? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
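
One way to get this down to a single request is to send the join as SQL to Datasette's JSON API, with the category supplied as a named parameter. A hypothetical sketch; the host, database name and table/column names are assumptions about the schema described above:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical sketch: ask Datasette's JSON API to run a join in one request.
# The :category named parameter is filled from the querystring.
sql = """
select documents.* from documents
join documents_categories on documents_categories.document_id = documents.id
join categories on categories.id = documents_categories.category_id
where categories.name = :category
"""
params = urllib.parse.urlencode({"sql": sql, "category": "X", "_shape": "array"})
with urllib.request.urlopen("https://example.com/mydb.json?" + params) as response:
    rows = json.load(response)
print(len(rows))
```
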
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);