issues
168 rows where comments = 4 and state = "closed" sorted by author_association
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association ▼ | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
273775212 | MDU6SXNzdWUyNzM3NzUyMTI= | 88 | Add NHS England Hospitals example to wiki | tomdyson 15543 | closed | 0 | 4 | 2017-11-14T12:29:10Z | 2021-03-22T23:46:36Z | 2017-11-14T22:54:06Z | CONTRIBUTOR | https://nhs-england-hospitals.now.sh and an associated map visualisation: http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/ Datasette is wonderful! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/88/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
274343647 | MDExOlB1bGxSZXF1ZXN0MTUyOTE0NDgw | 107 | add support for ?field__isnull=1 | raynae 3433657 | closed | 0 | 4 | 2017-11-15T23:36:36Z | 2017-11-17T15:12:29Z | 2017-11-17T13:29:22Z | CONTRIBUTOR | simonw/datasette/pulls/107 | Is this what you had in mind for this issue? |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
325352370 | MDExOlB1bGxSZXF1ZXN0MTg5NzA3Mzc0 | 279 | Add version number support with Versioneer | rgieseke 198537 | closed | 0 | 4 | 2018-05-22T15:39:45Z | 2018-05-22T19:35:23Z | 2018-05-22T19:35:22Z | CONTRIBUTOR | simonw/datasette/pulls/279 | I think that's all for getting Versioneer support, I've been happily using it in a couple of projects ...
Versioneer Licence: Public Domain (CC0-1.0) Closes #273 |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
336924199 | MDU6SXNzdWUzMzY5MjQxOTk= | 330 | Limit text display in cells containing large amounts of text | psychemedia 82988 | closed | 0 | 4 | 2018-06-29T09:15:22Z | 2018-07-24T04:53:20Z | 2018-07-10T16:20:48Z | CONTRIBUTOR | The default preview of a database shows all columns (is the row count limited?) which is fine in many cases but can take a long time to load / offer a large overhead if the table is a SpatiaLite table containing geometry columns that include large shapefiles. Would it make sense to have a setting that can limit the amount of text displayed in any given cell in the table preview, or (less useful?) suppress (with notification) the display of overlong columns unless enabled by the user? An issue then arises if a user does want to see all the text in a cell: 1) for a particular cell; 2) for every cell in the table; 3) for all cells in a particular column or columns (I haven't checked but what if a column contains e.g. raw image data? Does this display as raw data? Or can this be rendered in a context aware way as an image preview? I guess a custom template would be one way to do that?) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
465728430 | MDExOlB1bGxSZXF1ZXN0Mjk1NzExNTA0 | 554 | Fix static mounts using relative paths and prevent traversal exploits | abdusco 3243482 | closed | 0 | 4 | 2019-07-09T11:32:02Z | 2019-07-11T16:29:26Z | 2019-07-11T16:13:19Z | CONTRIBUTOR | simonw/datasette/pulls/554 | While debugging why my static mounts using a relative path ( The reason is that datasette tries to prevent traversal exploits by checking if the path is relative to its registered directory. This check fails when the mount is a relative directory, because This also has the consequence of returning any requested file, because when I've implemented the mentioned changes and also updated the tests. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
487987958 | MDExOlB1bGxSZXF1ZXN0MzEzMTA1NjM0 | 57 | Add triggers while enabling FTS | amjith 49260 | closed | 0 | 4 | 2019-09-02T04:23:40Z | 2019-09-03T01:03:59Z | 2019-09-02T23:42:29Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/57 | This adds the option for a user to set up triggers in the database to keep their FTS table in sync with the parent table. Ref: https://sqlite.org/fts5.html#external_content_and_contentless_tables I would prefer to make the creation of triggers the default behavior, but that will break existing usage where people have been calling I am happy to make changes to the PR as you see fit. |
sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/57/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
518725064 | MDU6SXNzdWU1MTg3MjUwNjQ= | 29 | `import` command fails on empty files | jacobian 21148 | closed | 0 | 4 | 2019-11-06T20:34:26Z | 2019-11-09T20:33:38Z | 2019-11-09T19:36:36Z | CONTRIBUTOR | If a file in the export is empty (in my case it was
This appears to be because I hacked around this by modifying
I'm happy to work up a real PR if that's the right approach, but I'm not sure it is. |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/29/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
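The failure mode described above (parsing an empty export file) can be guarded against before calling the JSON parser. A minimal sketch only, not twitter-to-sqlite's actual code; the directory layout and helper name are hypothetical:

```python
import json
from pathlib import Path

def load_export_file(path):
    """Parse one JSON file from an export, treating an empty file as zero records.

    Without the size check, json.load() raises JSONDecodeError on empty input.
    """
    path = Path(path)
    if path.stat().st_size == 0:
        return []
    with path.open() as fp:
        return json.load(fp)

# Hypothetical layout: loop over every file in the unpacked export directory.
for path in sorted(Path("export/data").glob("*.json")):
    records = load_export_file(path)
    print(path.name, len(records), "records")
```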
727915394 | MDExOlB1bGxSZXF1ZXN0NTA4NzE5NTY3 | 1043 | Include LICENSE in sdist | bollwyvl 45380 | closed | 0 | 4 | 2020-10-23T05:04:12Z | 2020-10-26T00:14:57Z | 2020-10-23T20:54:35Z | CONTRIBUTOR | simonw/datasette/pulls/1043 | Hi, thanks for This PR adds the I noticed the 0.50.2 sdist doesn't ship Motivation: It might be a bit of a slog, but I'm looking to see about getting |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
729017519 | MDExOlB1bGxSZXF1ZXN0NTA5NTkwMjA1 | 1049 | Add template block prior to extra URL loaders | psychemedia 82988 | closed | 0 | 4 | 2020-10-25T13:08:55Z | 2020-10-29T09:20:52Z | 2020-10-29T09:20:34Z | CONTRIBUTOR | simonw/datasette/pulls/1049 | To handle packages that require Javascript state setting prior to loading a package (eg |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
752966476 | MDU6SXNzdWU3NTI5NjY0NzY= | 1114 | --load-extension=spatialite not working with datasetteproject/datasette docker image | danp 2182 | closed | 0 | 4 | 2020-11-29T17:35:20Z | 2022-01-20T21:29:42Z | 2020-11-29T17:37:45Z | CONTRIBUTOR | https://github.com/simonw/datasette/commit/6aa5886379dd9017215904fb28567b80018902f9 added the https://github.com/simonw/datasette/blob/12877d7a48e2aa28bb5e780f929a218f7265d849/datasette/utils/init.py#L56-L60 However, in the datasetteproject/datasette docker image the file is at This results in the example command here failing:
But it does work when given an explicit path:
Perhaps |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
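The path mismatch can be illustrated outside Datasette with plain sqlite3: try a list of candidate locations and use whichever loads. A sketch under stated assumptions; the candidate list is based on the Debian layout mentioned in this issue and is not necessarily the list Datasette itself checks:

```python
import sqlite3

# Candidate locations: the Debian-based Docker image discussed above ships the
# library under /usr/lib/x86_64-linux-gnu/ rather than /usr/local/lib/.
CANDIDATE_PATHS = [
    "/usr/local/lib/mod_spatialite.so",
    "/usr/lib/x86_64-linux-gnu/mod_spatialite.so",
]

def load_spatialite(conn: sqlite3.Connection) -> str:
    """Try each candidate path and return the one that loaded successfully."""
    # Requires a Python build compiled with SQLite extension loading enabled.
    conn.enable_load_extension(True)
    for path in CANDIDATE_PATHS:
        try:
            conn.load_extension(path)
            return path
        except sqlite3.OperationalError:
            continue
    raise FileNotFoundError("mod_spatialite not found in any candidate path")

conn = sqlite3.connect(":memory:")
print("Loaded SpatiaLite from", load_spatialite(conn))
```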
773913793 | MDExOlB1bGxSZXF1ZXN0NTQ0OTIzNDM3 | 1158 | Modernize code to Python 3.6+ | eumiro 6774676 | closed | 0 | Datasette 0.54 6346396 | 4 | 2020-12-23T16:21:38Z | 2021-01-24T21:20:50Z | 2020-12-23T17:04:32Z | CONTRIBUTOR | simonw/datasette/pulls/1158 |
please feel free to accept/reject any of these independent commits |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||
777677671 | MDU6SXNzdWU3Nzc2Nzc2NzE= | 1169 | Prettier package not actually being cached | benpickles 3637 | closed | 0 | 4 | 2021-01-03T17:04:41Z | 2021-01-04T19:52:34Z | 2021-01-04T19:52:33Z | CONTRIBUTOR | With the current configuration Prettier seems to be installed on every run - which can be seen from the output:
Prettier isn't explicitly being installed (it's surprising that actually installing the dependencies isn't included in the actions/cache docs) but it turns out that
I think there are a couple of approaches to tackling this, you could manually install/cache Prettier within the action, or add a I've tested the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
797651831 | MDU6SXNzdWU3OTc2NTE4MzE= | 1212 | Tests are very slow. | kbaikov 4488943 | closed | 0 | 4 | 2021-01-31T08:06:16Z | 2021-02-19T22:54:13Z | 2021-02-19T22:54:13Z | CONTRIBUTOR | Working on my PR I noticed that the tests are very slow. The plain pytest run took about 37 minutes for me.
However I could shave off about 10 minutes from that if I used pytest-xdist to parallelize execution.
I can create a PR to mention that in your documentation. This would be a simple change: add pytest-xdist to the requirements and change the command used to run pytest in the documentation. Does that make sense to you? After a bit more investigation it looks like pytest-xdist is not the answer: it creates a race condition for tests that try to clean the temp dir before running. Profiling shows that most of the time is spent on conn.executescript(TABLES) in the make_app_client function, which makes sense. Perhaps the better approach would be to look at the app_client fixture, which is already session scoped but not used by all test cases. And/or use conn = sqlite3.connect(":memory:"), which is much faster. And/or truncate tables after each test case instead of deleting the file and re-creating them. I can take a look at which is the best approach if you give the go-ahead. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
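For reference, the in-memory suggestion from this issue looks roughly like the following pytest sketch; `TABLES` here is a tiny stand-in for Datasette's much larger fixtures script, not the real thing:

```python
import sqlite3
import pytest

TABLES = """
CREATE TABLE facetable (id INTEGER PRIMARY KEY, planet TEXT);
INSERT INTO facetable (planet) VALUES ('Earth'), ('Mars');
"""

@pytest.fixture(scope="session")
def shared_conn():
    # An in-memory database avoids repeated disk I/O, and session scope means
    # the schema script only runs once for the whole test run.
    conn = sqlite3.connect(":memory:")
    conn.executescript(TABLES)
    yield conn
    conn.close()

def test_row_count(shared_conn):
    assert shared_conn.execute("SELECT count(*) FROM facetable").fetchone()[0] == 2
```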
831751367 | MDU6SXNzdWU4MzE3NTEzNjc= | 246 | Escaping FTS search strings | DeNeutoy 16001974 | closed | 0 | 4 | 2021-03-15T12:15:09Z | 2021-08-18T18:57:13Z | 2021-08-18T18:43:12Z | CONTRIBUTOR | Thanks for the excellent library, it's very nice to use! I've been building some in memory search functionality for a data annotation tool i'm making, and I got tripped up a little bit with escaping the full text search queries. First I tried using http://search-24ways.herokuapp.com/24ways-f8f455f/articles?_search=acces%2A I got around this by aggressively escaping quotes inside the query string like this: ```python quoted = q.replace('"', '""') quoted = f'"{quoted}"' print(quoted) results = db["data"].search(quoted, columns=["id"]) return [x["id"] for x in results] ``` This works in the sense it doesn't crash, but it also removes access to the search query syntax. Given the well specified definition, it might be possible for sqlite-utils to provide a |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
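The quoting workaround described in the issue can be wrapped in a small helper. A sketch assuming FTS5 with the default tokenizer; it is not necessarily identical to whatever escaping sqlite-utils ended up providing:

```python
import sqlite3

def quote_fts_query(query: str) -> str:
    """Make arbitrary user input safe to pass to MATCH by double-quoting every
    whitespace-separated token (doubling any embedded double quotes), which
    strips the special meaning of AND/OR/NEAR, column filters and stray quotes."""
    return " ".join('"{}"'.format(t.replace('"', '""')) for t in query.split())

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE VIRTUAL TABLE docs USING fts5(body);
INSERT INTO docs (body) VALUES ('granting access to the data');
""")

user_input = 'access "unbalanced'  # passed raw, this raises an fts5 syntax error
safe = quote_fts_query(user_input)
rows = db.execute("SELECT body FROM docs WHERE docs MATCH ?", [safe]).fetchall()
print(safe, rows)  # no crash; matches nothing because "unbalanced" is absent
```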
864979486 | MDExOlB1bGxSZXF1ZXN0NjIxMTE3OTc4 | 1306 | Avoid error sorting by relationships if related tables are not allowed | gfrmin 416374 | closed | 0 | 4 | 2021-04-22T13:53:17Z | 2021-06-02T04:27:00Z | 2021-06-02T04:25:28Z | CONTRIBUTOR | simonw/datasette/pulls/1306 | Refs #1305 |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
884952179 | MDU6SXNzdWU4ODQ5NTIxNzk= | 1320 | Can't use apt-get in Dockerfile when using datasetteproj/datasette as base | brandonrobertz 2670795 | closed | 0 | 4 | 2021-05-10T19:37:27Z | 2021-05-24T18:15:56Z | 2021-05-24T18:07:08Z | CONTRIBUTOR | The datasette base Docker image is super convenient, but there's one problem: if any of the plugins you install require additional system dependencies (e.g., xz, git, curl) then any attempt to use apt in said Dockerfile results in an explosion: ``` $ docker-compose build Building server [+] Building 9.9s (7/9) => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 666B 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 34B 0.0s => [internal] load metadata for docker.io/datasetteproject/datasette:latest 0.6s => [base 1/4] FROM docker.io/datasetteproject/datasette@sha256:2250d0fbe57b1d615a8d6df0c9d43deb9533532e00bac68854773d8ff8dcf00a 0.0s => [internal] load build context 1.8s => => transferring context: 2.44MB 1.8s => CACHED [base 2/4] WORKDIR /datasette 0.0s => ERROR [base 3/4] RUN apt-get update && apt-get install --no-install-recommends -y git ssh curl xz-utils 9.2s
6 0.446 Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]6 0.449 Get:2 http://deb.debian.org/debian buster InRelease [121 kB]6 0.459 Get:3 http://httpredir.debian.org/debian sid InRelease [157 kB]6 0.784 Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]6 0.790 Get:5 http://httpredir.debian.org/debian sid/main amd64 Packages [8626 kB]6 1.003 Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]6 1.180 Get:7 http://security.debian.org/debian-security buster/updates/main amd64 Packages [286 kB]6 7.095 Get:8 http://deb.debian.org/debian buster-updates/main amd64 Packages [10.9 kB]6 8.058 Fetched 17.2 MB in 8s (2243 kB/s)6 8.058 Reading package lists...6 9.166 E: flAbsPath on /var/lib/dpkg/status failed - realpath (2: No such file or directory)6 9.166 E: Could not open file - open (2: No such file or directory)6 9.166 E: Problem opening6 9.166 E: The package lists or status file could not be parsed or opened.``` The problem seems to be from completely wiping out https://github.com/simonw/datasette/blob/1b697539f5b53cec3fe13c0f4ada13ba655c88c7/Dockerfile#L18 I've tested without removing the directory and apt works as expected. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1083246400 | PR_kwDOBm6k_c4wAMK8 | 1562 | Update janus requirement from <0.8,>=0.6.2 to >=0.6.2,<1.1 | dependabot[bot] 49699333 | closed | 0 | 4 | 2021-12-17T13:11:10Z | 2021-12-17T23:08:29Z | 2021-12-17T23:08:28Z | CONTRIBUTOR | simonw/datasette/pulls/1562 | Updates the requirements on janus to permit the latest version. Release notesSourced from janus's releases.
ChangelogSourced from janus's changelog.
... (truncated) Commits
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting Dependabot commands and optionsYou can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1126692066 | I_kwDOCGYnMM5DJ_Ti | 403 | Document how to add a primary key to a rowid table using `sqlite-utils transform --pk` | fgregg 536941 | closed | 0 | 4 | 2022-02-08T01:39:40Z | 2022-02-09T04:22:43Z | 2022-02-08T19:33:59Z | CONTRIBUTOR | Original title: Add option for adding a new, serial, primary key sometimes we have tables that don't have primary keys, but ought to have them. we can use rowid for that, but it would often be nicer to have an explicit primary key. using the current value of rowid would be fine. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
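The table rebuild that such documentation describes (and that `sqlite-utils transform` automates) can be sketched in plain sqlite3, reusing the existing rowid values as the new explicit primary key, as the issue suggests:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT)")  # a rowid-only table
conn.executemany("INSERT INTO events (name) VALUES (?)", [("a",), ("b",)])

# Rebuild the table with an explicit primary key, copying the old rowid values.
conn.executescript("""
CREATE TABLE events_new (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO events_new (id, name) SELECT rowid, name FROM events;
DROP TABLE events;
ALTER TABLE events_new RENAME TO events;
""")
print(conn.execute("SELECT id, name FROM events").fetchall())  # [(1, 'a'), (2, 'b')]
```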
1279863844 | I_kwDOCGYnMM5MSSwk | 449 | Utilities for duplicating tables and creating a table with the results of a query | davidleejy 1690072 | closed | 0 | 4 | 2022-06-22T09:41:43Z | 2022-07-15T21:46:13Z | 2022-07-15T21:21:36Z | CONTRIBUTOR | is there a duplicate table functionality? Otherwise, I'd be happy to submit a PR. In sqlite3 it would look like: ```python import sqlite3 as sl con = sl.connect('prompt-tune.db') def db_duplicate_table(table_name, table_name_new, con=con):
# Duplicates table db_duplicate_table('orig_table', 'new_table') ``` |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
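One way to implement the requested helper is to read the table's CREATE statement from sqlite_master and then bulk-copy the rows, which preserves column types and constraints (unlike CREATE TABLE ... AS SELECT, which drops them). A rough sketch, not sqlite-utils' implementation; the naive name substitution only works when the table name does not appear elsewhere in its own schema SQL:

```python
import sqlite3

def duplicate_table(conn: sqlite3.Connection, src: str, dest: str) -> None:
    """Copy a table's schema and rows under a new name."""
    (create_sql,) = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?", [src]
    ).fetchone()
    conn.execute(create_sql.replace(src, dest, 1))          # naive rename
    conn.execute(f"INSERT INTO [{dest}] SELECT * FROM [{src}]")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orig_table (id INTEGER PRIMARY KEY, prompt TEXT)")
conn.execute("INSERT INTO orig_table (prompt) VALUES ('hello')")
duplicate_table(conn, "orig_table", "new_table")
print(conn.execute("SELECT * FROM new_table").fetchall())   # [(1, 'hello')]
```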
1355433619 | PR_kwDOCGYnMM4-B7Mc | 480 | search_sql add include_rank option | chapmanjacobd 7908073 | closed | 0 | 4 | 2022-08-30T09:10:29Z | 2022-08-31T03:40:35Z | 2022-08-31T03:40:35Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/480 | I haven't tested this yet but wanted to get a heads-up whether this kind of change would be useful or if I should just duplicate the function and tweak it within my code :books: Documentation preview :books:: https://sqlite-utils--480.org.readthedocs.build/en/480/ |
sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1452495049 | I_kwDOBm6k_c5Wk1DJ | 1899 | Clicking within the CodeMirror area below the SQL (i.e. when there's only a single line) doesn't cause the editor to get focused | bgrins 95570 | closed | 0 | 4 | 2022-11-17T00:29:52Z | 2022-11-18T07:28:28Z | 2022-11-18T07:20:53Z | CONTRIBUTOR | After the upgrade to 6 (#1893) I noticed this. I think it's because we're doing overflow:hidden to accomplish the CSS resizer. When there's a single line of SQL there's a gap below that line where clicking doesn't do anything. It should focus at the end of the line. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1473659191 | I_kwDOBm6k_c5X1kE3 | 1929 | Incorrect link from the API explorer to the JSON API documentation | davidbgk 3556 | closed | 0 | 4 | 2022-12-03T02:08:58Z | 2022-12-06T19:36:23Z | 2022-12-06T19:34:20Z | CONTRIBUTOR | I installed When I go to http://127.0.0.1:8001/-/api I have a link: I'm not sure where it has to be fixed: should it link to the stable page https://docs.datasette.io/en/stable/json_api.html , the latest one https://docs.datasette.io/en/latest/json_api.html#the-json-write-api , or would it be more appropriate to deploy documentation for the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1575131737 | I_kwDOCGYnMM5d4ppZ | 525 | Repeated calls to `Table.convert()` fail | mcarpenter 167893 | closed | 0 | 4 | 2023-02-07T22:40:47Z | 2023-05-08T21:59:41Z | 2023-05-08T21:54:02Z | CONTRIBUTOR | SummaryWhen using the API, repeated calls to Example```python from sqlite_utils import Database db = Database(memory=True) table = db['table'] col = 'x' table.insert_all([{col: 1}]) print(table.get(1)) table.convert(col, lambda x: x*2) print(table.get(1)) def zeroize(x): return 0 zeroize = lambda x: 0zeroize.name = 'zeroize'table.convert(col, zeroize) print(table.get(1)) ``` Output:
ExplanationThis is some relevant documentation.
There's a mismatch between the comments and the code: https://github.com/simonw/sqlite-utils/blob/fc221f9b62ed8624b1d2098e564f525c84497969/sqlite_utils/db.py#L404 but actually the existing function is returned/used instead (as the "registering custom sql functions" doc I linked above says too). Seems like this can be rectified to match the comment? Suggested fixI think there are four things:
1. The call to See also
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1620164673 | PR_kwDOCGYnMM5L08O8 | 531 | Add paths for homebrew on Apple silicon | eyeseast 25778 | closed | 0 | 4 | 2023-03-11T22:27:52Z | 2023-04-09T01:49:44Z | 2023-04-09T01:49:43Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/531 | This also passes in the extension path when specified in GIS methods. Wherever we know an extension path, we use :books: Documentation preview :books:: https://sqlite-utils--531.org.readthedocs.build/en/531/ |
sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1816917522 | PR_kwDOCGYnMM5WJ6Jm | 573 | feat: Implement a prepare_connection plugin hook | asg017 15178711 | closed | 0 | 4 | 2023-07-22T22:48:44Z | 2023-07-22T22:59:09Z | 2023-07-22T22:59:09Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/573 | Just like the Datasette prepare_connection hook, this PR adds a similar hook for the The sole argument is I want to do this so I can release An example plugin: https://gist.github.com/asg017/d7cdf0d56e2be87efda28cebee27fa3c ```bash $ sqlite-utils install https://gist.github.com/asg017/d7cdf0d56e2be87efda28cebee27fa3c/archive/5f5ad549a40860787629c69ca120a08c32519e99.zip $ sqlite-utils memory 'select hello("alex") as response' [{"response": "Hello, alex!"}] ``` Refs: - #574 :books: Documentation preview :books:: https://sqlite-utils--573.org.readthedocs.build/en/573/ |
sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1870672704 | PR_kwDOBm6k_c5Y-7Em | 2162 | Add new `--internal internal.db` option, deprecate legacy `_internal` database | asg017 15178711 | closed | 0 | 4 | 2023-08-29T00:05:07Z | 2023-08-29T03:24:23Z | 2023-08-29T03:24:23Z | CONTRIBUTOR | simonw/datasette/pulls/2162 | refs #2157 This PR adds a new This PR also removes and deprecates the previous in-memory A note on the new
|
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1891212159 | PR_kwDOBm6k_c5aD33C | 2183 | `datasette.yaml` plugin support | asg017 15178711 | closed | 0 | 4 | 2023-09-11T20:26:04Z | 2023-09-13T21:06:25Z | 2023-09-13T21:06:25Z | CONTRIBUTOR | simonw/datasette/pulls/2183 | Part of #2093 In #2149 , we ported over From now on, no plugin-related configuration is allowed in An example of what ```yaml plugins: datasette-my-plugin: config_key: value databases: fixtures: plugins: datasette-my-plugin: config_key: fixtures-db-value tables: students: plugins: datasette-my-plugin: config_key: fixtures-students-table-value ``` As an additional benefit, this now works with the new
Marked as a "Draft" right now until I add better documentation. We also should have a plan for the next alpha release to document and publicize this change, especially for plugin authors (since their docs will have to change to say :books: Documentation preview :books:: https://datasette--2183.org.readthedocs.build/en/2183/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1901768721 | PR_kwDOBm6k_c5anSg5 | 2191 | Move `permissions`, `allow` blocks, canned queries and more out of `metadata.yaml` and into `datasette.yaml` | asg017 15178711 | closed | 0 | 4 | 2023-09-18T21:21:16Z | 2023-10-12T16:16:38Z | 2023-10-12T16:16:38Z | CONTRIBUTOR | simonw/datasette/pulls/2191 | The PR moves the following fields from
This is a significant breaking change that users will need to upgrade their One note: I'm still working on the Configuration docs, specifically the "reference" section. Though it's pretty small, the rest is ready to review |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
2001006157 | PR_kwDOCGYnMM5f2OZC | 604 | Add more STRICT table support | tkhattra 16437338 | closed | 0 | 4 | 2023-11-19T19:38:53Z | 2023-12-08T05:17:20Z | 2023-12-08T05:05:27Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/604 | Make :books: Documentation preview :books:: https://sqlite-utils--604.org.readthedocs.build/en/604/ |
sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
487600595 | MDU6SXNzdWU0ODc2MDA1OTU= | 3 | Option to fetch only checkins more recent than the current max checkin | simonw 9599 | closed | 0 | 4 | 2019-08-30T17:46:45Z | 2019-10-16T20:41:23Z | 2019-10-16T20:39:59Z | MEMBER | The Foursquare checkins API supports "return every checkin occurring after this point" - I can pass it the maximum createdAt date currently stored in the database. This will allow for quick incremental fetches via a cron. |
swarm-to-sqlite 205429375 | issue | { "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/3/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
490803176 | MDU6SXNzdWU0OTA4MDMxNzY= | 8 | --sql and --attach options for feeding commands from SQL queries | simonw 9599 | closed | 0 | 4 | 2019-09-08T20:35:49Z | 2020-03-20T23:13:01Z | 2020-03-20T23:13:01Z | MEMBER | Say you want to fetch Twitter profiles for a list of accounts that are stored in another database:
The SQL query you feed in is expected to return a list of screen names suitable for processing further by the command. Should be supported by all three of:
The |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
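The mechanics behind the proposed options are just ATTACH plus a query whose single result column feeds the command. A standalone sketch of that idea with hypothetical file and table names, not the command's actual implementation:

```python
import sqlite3

# Build a tiny "other" database holding the accounts of interest
# (a stand-in for the separate database mentioned above).
other = sqlite3.connect("attending.db")
other.execute("CREATE TABLE IF NOT EXISTS attendees (screen_name TEXT)")
other.execute("INSERT INTO attendees VALUES ('simonw')")
other.commit()
other.close()

# The command's own database, with the other one attached via SQL.
conn = sqlite3.connect("twitter.db")
conn.execute("ATTACH DATABASE 'attending.db' AS attending")

# The fed-in SQL is expected to return a single column of screen names.
screen_names = [
    row[0] for row in conn.execute("SELECT screen_name FROM attending.attendees")
]
for name in screen_names:
    print("would fetch the profile for", name)
```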
503233021 | MDU6SXNzdWU1MDMyMzMwMjE= | 1 | Use better pagination (and implement progress bar) | simonw 9599 | closed | 0 | 4 | 2019-10-07T04:58:11Z | 2020-03-27T22:13:57Z | 2020-03-27T22:13:57Z | MEMBER | Right now we attempt to load everything at once - which caps out at 5,000 items and is really slow. We can do better by implementing pagination using count and offset. |
pocket-to-sqlite 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
505928530 | MDU6SXNzdWU1MDU5Mjg1MzA= | 18 | Command to import home-timeline | simonw 9599 | closed | 0 | 4 | 2019-10-11T15:47:54Z | 2019-10-11T16:51:33Z | 2019-10-11T16:51:12Z | MEMBER | Feature request: https://twitter.com/johankj/status/1182563563136868352
|
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/18/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
514459062 | MDU6SXNzdWU1MTQ0NTkwNjI= | 27 | retweets-of-me command | simonw 9599 | closed | 0 | 4 | 2019-10-30T07:43:01Z | 2019-11-03T01:12:58Z | 2019-11-03T01:12:58Z | MEMBER | twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/27/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||||
519038979 | MDU6SXNzdWU1MTkwMzg5Nzk= | 10 | Failed to import workout points | simonw 9599 | closed | 0 | 4 | 2019-11-07T04:50:22Z | 2019-11-08T01:18:37Z | 2019-11-08T01:18:37Z | MEMBER | I just ran the script and it failed to import any |
healthkit-to-sqlite 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
660355904 | MDU6SXNzdWU2NjAzNTU5MDQ= | 43 | github-to-sqlite tags command for fetching tags | simonw 9599 | closed | 0 | 4 | 2020-07-18T20:14:12Z | 2020-07-18T23:05:56Z | 2020-07-18T21:52:15Z | MEMBER | Fetches paginated data from https://api.github.com/repos/simonw/datasette/tags |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/43/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
771316301 | MDU6SXNzdWU3NzEzMTYzMDE= | 31 | Searching for "github-to-sqlite" throws an error | simonw 9599 | closed | 0 | 4 | 2020-12-19T06:07:20Z | 2020-12-19T06:18:07Z | 2020-12-19T06:18:07Z | MEMBER | https://datasette.io/-/beta?q=github-to-sqlite&sort=relevance&type=blog.db%2Fentries - "no such column: to" |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/31/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
978743426 | MDU6SXNzdWU5Nzg3NDM0MjY= | 13 | xml.etree.ElementTree.ParseError: not well-formed (invalid token) | simonw 9599 | closed | 0 | 4 | 2021-08-25T05:48:21Z | 2021-08-26T18:45:13Z | 2021-08-26T18:45:13Z | MEMBER | Got this error today:
|
evernote-to-sqlite 303218369 | issue | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
277589569 | MDU6SXNzdWUyNzc1ODk1Njk= | 155 | A primary key column that has foreign key restriction associated won't rendering label column | wsxiaoys 388154 | closed | 0 | Custom templates edition 2949431 | 4 | 2017-11-29T00:40:02Z | 2017-12-07T05:39:53Z | 2017-12-07T05:39:53Z | NONE | datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
278814220 | MDU6SXNzdWUyNzg4MTQyMjA= | 161 | Support WITH query | wsxiaoys 388154 | closed | 0 | 4 | 2017-12-03T20:00:40Z | 2017-12-08T06:18:12Z | 2017-12-04T04:52:41Z | NONE | Currently datasette fails with the error message: Statement must begin with SELECT. Example query
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
292011379 | MDU6SXNzdWUyOTIwMTEzNzk= | 184 | 500 from missing table name | carlmjohnson 222245 | closed | 0 | 4 | 2018-01-26T19:46:45Z | 2019-05-21T16:17:29Z | 2018-04-13T18:18:59Z | NONE | https://github.com/simonw/datasette/blob/56623e48da5412b25fb39cc26b9c743b684dd968/datasette/app.py#L517-L519 throws an error if it gets an empty list back. Simplest solution is to write a helper func that just says
and use it anywhere |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
322283067 | MDU6SXNzdWUzMjIyODMwNjc= | 254 | Escaping named parameters in canned queries | philroche 247131 | closed | 0 | 4 | 2018-05-11T12:43:30Z | 2020-05-10T14:54:14Z | 2020-05-10T14:54:13Z | NONE | Thank you very much for this project. I have created some canned queries but some of the filters include a colon eg. "com.ubuntu.cloud:server:18.04:amd64". When saved these colons are parsed as named parameters. Is there a way to escape colons in a canned query? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
340396247 | MDU6SXNzdWUzNDAzOTYyNDc= | 339 | Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way | bsilverm 12617395 | closed | 0 | 4 | 2018-07-11T20:38:06Z | 2022-03-21T22:22:40Z | 2022-03-21T22:22:34Z | NONE | Is it possible to configure the sql_time_limit_ms beyond 60 seconds? It seems queries are still timing out at 60 seconds when sql_time_limit_ms is set to 180000. We have a very large data set and often encounter timeouts when testing new queries from the datasette UI. We are optimizing our database as much as we can, but still may require more than 60 seconds for complex queries. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
341123355 | MDU6SXNzdWUzNDExMjMzNTU= | 342 | Requesting support for query description | bsilverm 12617395 | closed | 0 | 4 | 2018-07-13T18:50:16Z | 2018-07-24T04:53:21Z | 2018-07-16T02:33:54Z | NONE | It would be great if the metadata file allowed you to enter a description for the query. We have a lot of pre-defined queries that can only be so descriptive by their name. It would be nice if an optional description could be included underneath the name within the UI, or on hover where it currently shows the SQL. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
467218270 | MDU6SXNzdWU0NjcyMTgyNzA= | 558 | Support unicode in url | 0x1997 380586 | closed | 0 | 4 | 2019-07-12T04:43:24Z | 2019-07-15T01:29:30Z | 2019-07-14T02:49:33Z | NONE | Hi, I defined some custom queries in my Btw, thanks for the great work! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
472429048 | MDU6SXNzdWU0NzI0MjkwNDg= | 9 | Too many SQL variables | tholo 166463 | closed | 0 | 4 | 2019-07-24T18:24:17Z | 2019-07-26T10:01:05Z | 2019-07-26T10:01:05Z | NONE | Decided to try importing my data, and ran into this:
Added some debug output in sqlite_utils/db.py, which resulted in:
with the attached data:
|
healthkit-to-sqlite 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
544571092 | MDU6SXNzdWU1NDQ1NzEwOTI= | 15 | Assets table with downloads | garethr 2029 | closed | 0 | 1.0 5225818 | 4 | 2020-01-02T13:05:28Z | 2020-03-28T12:17:01Z | 2020-03-23T19:17:32Z | NONE | The |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
546051181 | MDU6SXNzdWU1NDYwNTExODE= | 16 | Exception running first command: IndexError: list index out of range | jayvdb 15092 | closed | 0 | 4 | 2020-01-07T03:01:58Z | 2020-04-14T18:37:21Z | 2020-04-14T18:37:21Z | NONE | Exception running first command without an existing db or auth. ```py
|
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
549287310 | MDU6SXNzdWU1NDkyODczMTA= | 76 | order_by mechanism | metab0t 10501166 | closed | 0 | 4 | 2020-01-14T02:06:03Z | 2020-04-16T06:23:29Z | 2020-04-16T03:13:06Z | NONE | In some cases, I want to iterate rows in a table with |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/76/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
569317377 | MDU6SXNzdWU1NjkzMTczNzc= | 681 | Cache-header missing in http-response | clausjuhl 2181410 | closed | 0 | 4 | 2020-02-22T10:50:45Z | 2020-02-24T20:53:57Z | 2020-02-24T20:53:56Z | NONE | Hi Simon. I need some help with both understanding and adding http-headers. If I call datasette on localhost with --config default_cache_ttl:120 and --cors, I only get the following response-headers: access-control-allow-origin: * content-type: text/html; charset=utf-8 date: Sat, 22 Feb 2020 10:32:15 GMT referrer-policy: no-referrer server: uvicorn transfer-encoding: chunked CORS works, but no caching-header is set? Same thing happens if I use the command in a Dockerfile and run datasette with docker. Second, how can one add headers to uvicorn? I've tried to add uvicorn commands to the Dockerfile, before the final datasette command, but it doesn't work. Is there any way to add headers to the uvicorn.run() command in datasette? In particular, I would like to add some of the missing security-headers: Thank you for a great product! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
593751293 | MDU6SXNzdWU1OTM3NTEyOTM= | 97 | Adding a "recreate" flag to the `Database` constructor | betatim 1448859 | closed | 0 | 4 | 2020-04-04T05:41:10Z | 2020-04-15T14:29:31Z | 2020-04-13T03:52:29Z | NONE | I have a script that imports data into a sqlite DB. When I re-run that script I'd like to remove the existing sqlite DB, instead of adding to it. The pragmatic answer is to add the check and file deletion to my script. However I thought it would be easy and useful for others to add a Does anyone have an idea/suggestion where to start investigating? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/97/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
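Pending (or instead of) a constructor flag, the pragmatic version mentioned in the issue is a few lines in the calling script. A sketch using the sqlite-utils Python API; the wrapper name and file name are made up for illustration:

```python
import os
import sqlite_utils

def get_database(path, recreate=False):
    """Open a sqlite-utils Database, optionally deleting any existing file first."""
    if recreate and os.path.exists(path):
        os.remove(path)
    return sqlite_utils.Database(path)

db = get_database("imported.db", recreate=True)
db["records"].insert({"id": 1, "value": "fresh"})
print(list(db["records"].rows))
```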
609950090 | MDU6SXNzdWU2MDk5NTAwOTA= | 33 | Fall back to authentication via ENV | garethr 2029 | closed | 0 | 4 | 2020-04-30T12:58:14Z | 2020-05-02T18:46:10Z | 2020-05-02T18:45:37Z | NONE | Would you accept a PR that falls back to looking for an environment variable for the GitHub token? Specifically a change here: https://github.com/dogsheep/github-to-sqlite/blob/c34d5a18bfc41fa08755ba3d5cf9fe09ff204238/github_to_sqlite/cli.py#L271 I'd like to use Wanted to check first, I'm happy to submit a PR with tests and updates to the docs. |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
611284481 | MDU6SXNzdWU2MTEyODQ0ODE= | 38 | [Feature Request] Support Repo Name in Search 🥺 | zzeleznick 5779832 | closed | 0 | 4 | 2020-05-02T22:08:51Z | 2020-05-03T02:34:32Z | 2020-05-02T23:15:11Z | NONE | DescriptionPer your v2.2 release tweet I played with the demo, but the output did not match my expectations. Expected BehaviorExpected a search query for "twitter" contained within the Actual Behavior😭 0 rows where repo contains "twitter" sorted by starred_at descending Best ExplanationPer the table schema (see appendix) Desired BehaviorGiven that searching for "206156866" is less intuitive than "twitter", it would be great to support this via extending the search capabilities or by adding an additional column. ✅ 104 rows where repo contains "twitter" ❌ 104 rows where repo contains "206156866" sorted by starred_at descending Appendix
|
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
705108492 | MDU6SXNzdWU3MDUxMDg0OTI= | 970 | request an "-o" option on "datasette server" to open the default browser at the running url | secretGeek 2861690 | closed | 0 | Datasette 0.50 5971510 | 4 | 2020-09-20T13:16:34Z | 2020-10-08T23:54:27Z | 2020-09-22T14:27:04Z | NONE | This is a request for a "convenience" feature, and only a nice to have. It's based on seeing this feature in several little command line hypertext server apps. If you run, for example:
I would like it if default browser is launched, at the URL that is being served. The angular cli does this, for example ng serve <project> --open #see https://angular.io/cli/serve ...as does my usual mini web server of choice when inspecting local static files.... npx http-server -o # see https://www.npmjs.com/package/http-server Just a tiny thing. Love your work! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
718521469 | MDU6SXNzdWU3MTg1MjE0Njk= | 1011 | column name links broken in 0.50.1 | mhalle 649467 | closed | 0 | 4 | 2020-10-10T03:37:51Z | 2020-10-10T04:09:32Z | 2020-10-10T03:52:07Z | NONE | I just upgraded from 0.49 to 0.50.1 and found that the links on column headers are broken. If I inspect the source, they have a leading "//" (without host or port) rather than including base_url like other links on the page do. The links in the "gears" menu for each column do work. I don't have custom templates for my project. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
760312579 | MDU6SXNzdWU3NjAzMTI1Nzk= | 1134 | "_searchmode=raw" throws an index out of range error when combined with "_search_COLUMN" | clausjuhl 2181410 | closed | 0 | 4 | 2020-12-09T13:05:37Z | 2020-12-10T05:57:17Z | 2020-12-09T19:56:55Z | NONE | Hi Simon! Maybe it's just me, but when using _searchmode=raw (trying to enable wildcard-searching) in combination with the "_search_COLUMN"-table argument, I get a list index out of range error. When combining with the simpler "_search"-argument everything works, including wildcard-seaches.. Here's the traceback: ``` Traceback (most recent call last): File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py", line 122, in route_path return await view(new_scope, receive, send) File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py", line 196, in view request, scope["url_route"]["kwargs"] File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py", line 204, in get request, database, hash, correct_hash_provided, kwargs File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py", line 342, in view_get request, database, hash, **kwargs File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/table.py", line 393, in data search_col = key.split("search", 1)[1] IndexError: list index out of range ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
781262510 | MDU6SXNzdWU3ODEyNjI1MTA= | 1181 | Certain database names results in 404: "Database not found: None" | jieter 1470389 | closed | 0 | Datasette 0.54 6346396 | 4 | 2021-01-07T12:01:16Z | 2021-12-21T18:25:15Z | 2021-01-25T05:13:19Z | NONE | I have a file named However, if I click any of the links, datasette replies with: It seems the hash is crucial, as renaming the file to This lines checks for a single dash: https://github.com/simonw/datasette/blob/97fb10c17dd007a275ab743742e93e932335ad67/datasette/views/base.py#L184 ``` $ datasette test-database\ (1).sqlite INFO: Started server process [68314] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) INFO: 127.0.0.1:54043 - "GET /favicon.ico HTTP/1.1" 200 OK INFO: 127.0.0.1:54043 - "GET / HTTP/1.1" 200 OK ... INFO: 127.0.0.1:54044 - "GET /favicon.ico HTTP/1.1" 200 OK INFO: 127.0.0.1:54044 - "GET /test-database (1) HTTP/1.1" 404 Not Found
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
806743116 | MDU6SXNzdWU4MDY3NDMxMTY= | 1220 | Installing datasette via docker: Path 'fixtures.db' does not exist | aborruso 30607 | closed | 0 | 4 | 2021-02-11T21:09:14Z | 2021-02-12T21:35:17Z | 2021-02-12T21:35:17Z | NONE | Hi, If I run
I have
If I run What's my error? Thank you |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
807174161 | MDU6SXNzdWU4MDcxNzQxNjE= | 227 | Error reading csv files with large column data | camallen 295329 | closed | 0 | 4 | 2021-02-12T11:51:47Z | 2021-02-16T11:48:03Z | 2021-02-14T21:17:19Z | NONE | Feel free to close this issue - I mostly added it for reference for future folks that run into this :) I have a CSV file with one column that has very long strings. When I try to import this file via the Traceback (most recent call last):
File "/usr/local/bin/sqlite-utils", line 10, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in call
return self.main(args, kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 774, in insert
default=default,
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 705, in insert_upsert_implementation
docs, pk=pk, batch_size=batch_size, alter=alter, extra_kwargs
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py", line 1852, in insert_all
first_record = next(records)
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 703, in <genexpr>
docs = (decode_base64_values(doc) for doc in docs)
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 681, in <genexpr>
docs = (dict(zip(headers, row)) for row in reader)
_csv.Error: field larger than field limit (131072)
sqlite-utils --version
sqlite-utils, version 3.4.1
datasette --version
datasette, version 0.54
```
It appears this is a known issue reading in csv files in Python and doesn't look to be modifiable through system / env vars (I may be very wrong on this). Noting that using sqlite3 Finally, I'm loving https://datasette.io/ thank you very much for an amazing tool and data ecosystem 🙇‍♀️ |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
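For future reference, the standard-library workaround for `_csv.Error: field larger than field limit (131072)` is to raise Python's per-field cap before parsing. A self-contained illustration with a made-up file, separate from how sqlite-utils itself handles it:

```python
import csv

# Write a CSV with one field much larger than the default 131072-byte cap.
with open("big_columns.csv", "w", newline="") as fp:
    writer = csv.writer(fp)
    writer.writerow(["id", "blob"])
    writer.writerow([1, "x" * 200_000])

# Raise the limit before reading (sys.maxsize can overflow on some platforms,
# so a large explicit value is used here instead).
csv.field_size_limit(10_000_000)

with open("big_columns.csv", newline="") as fp:
    rows = list(csv.DictReader(fp))
print(len(rows), len(rows[0]["blob"]))  # 1 200000
```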
919314806 | MDU6SXNzdWU5MTkzMTQ4MDY= | 270 | Cannot set type JSON | frafra 4068 | closed | 0 | 4 | 2021-06-11T23:53:22Z | 2021-06-16T17:34:49Z | 2021-06-16T15:47:06Z | NONE | It would be great if the column type could be set to JSON. That would not be different from handling a regular string. It would be something like |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
990844088 | MDU6SXNzdWU5OTA4NDQwODg= | 325 | sqlite-utils memory can't deal with multiple files with the same name | karlb 144773 | closed | 0 | 4 | 2021-09-08T08:14:42Z | 2021-09-22T20:52:56Z | 2021-09-22T20:45:45Z | NONE | When I use multiple files with the same name, e.g. in This can be reproduced with
```sh
#!/bin/bash
mkdir foo
mkdir bar
echo -e 'col1,col2\nval1,val2' > foo/bug.csv
echo -e 'col3,col4\nval3,val4' > bar/bug.csv
sqlite-utils memory */bug.csv 'SELECT 1'
```
Ideally, the tables would get unique names by including the next path segment until the names are unique. But just making the numbered t* aliases work would be good enough. This problem can of course be worked around by renaming the files, but it would be nice if this case was handled more gracefully. Thanks a lot for this great tool! |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
995098231 | MDU6SXNzdWU5OTUwOTgyMzE= | 1470 | ?_sort=rowid with _next= returns error | eigenfoo 19851673 | closed | 0 | 4 | 2021-09-13T16:36:15Z | 2021-10-18T19:30:15Z | 2021-10-10T01:15:03Z | NONE | For example:
This is because the search URL includes the The FTS search request should strip any
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1063388037 | I_kwDOCGYnMM4_YgOF | 343 | Provide function to generate hash_id from specified columns | psychemedia 82988 | closed | 0 | 4 | 2021-11-25T10:12:12Z | 2022-03-02T04:25:25Z | 2022-03-02T04:25:25Z | NONE | Hi I note that you define It would be useful to be able to call a complementary function to generate a corresponding Or is there a better pattern for doing that? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1257724585 | I_kwDOCGYnMM5K91qp | 441 | Combining `rows_where()` and `search()` to limit which rows are searched | betatim 1448859 | closed | 0 | 4 | 2022-06-02T06:01:55Z | 2022-06-14T21:57:57Z | 2022-06-14T21:54:38Z | NONE | What is the right way to limit a full text search query to some rows of a table? For example, I have a table that contains the following columns: I tried to combine My two questions:
1. is adding a Right now I am thinking I will make my own version of Bonus question: is this generally useful/something to add to sqlite-utils or too niche? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
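One way to express "search, but only within rows matching a WHERE clause" today is to join the FTS table back to the source table yourself. A sketch using the sqlite-utils Python API (assuming FTS5 is available); this is hand-written SQL, not a built-in combined method:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["articles"].insert_all(
    [
        {"id": 1, "category": "news", "body": "datasette launches new plugin"},
        {"id": 2, "category": "blog", "body": "datasette plugin tutorial"},
    ],
    pk="id",
)
db["articles"].enable_fts(["body"])

# Limit the full-text MATCH to rows that also satisfy an ordinary WHERE clause
# by joining the FTS shadow table back to the source table on rowid.
sql = """
SELECT articles.* FROM articles
JOIN articles_fts ON articles.rowid = articles_fts.rowid
WHERE articles_fts MATCH :q AND articles.category = :category
"""
print(list(db.query(sql, {"q": "plugin", "category": "news"})))
```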
1359557737 | I_kwDOBm6k_c5RCTRp | 1798 | Parts of YAML file do not work when db name is "off" | CharlesNepote 562352 | closed | 0 | 4 | 2022-09-01T22:10:57Z | 2022-09-02T00:02:53Z | 2022-09-01T23:56:33Z | NONE | I guess this issue is not very important and probably rare. To reproduce:
* create and populate a db named YAML file:
```yaml
title: Some title
description_html: |-
  This is an experiment.
databases:
  off:
    tables:
      products_from_owners:
        title: products_from_owners*
        description_html: |-
          Description
```
The result for http://xxxx.xxx/-/metadata gives:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1382457780 | I_kwDOCGYnMM5SZqG0 | 490 | Ability to insert multi-line files | jeqo 6180701 | closed | 0 | 4 | 2022-09-22T13:29:22Z | 2022-09-26T18:24:44Z | 2022-09-23T16:37:58Z | NONE | I was looking into how to parse application log files that contain multiline text (e.g. Java stack traces) into sqlite.
I can see that at the moment I wonder if this functionality would be useful for sqlite-utils. A similar approach to Elastic logstash/filebeat can be adopted: https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html Potential changes:
Or if this is achievable in a different way, please share. Thanks! |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
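As a point of comparison with the Filebeat-style multiline handling linked above, the grouping can also be done in a few lines of Python before handing records to sqlite-utils. A sketch with a made-up log format and pattern:

```python
import re
import sqlite_utils

LOG = """\
2022-09-22 10:00:01 ERROR Something failed
Traceback (most recent call last):
  File "app.py", line 1, in <module>
ValueError: boom
2022-09-22 10:00:05 INFO Recovered
"""

# Lines that start a new record begin with a timestamp; anything else is a
# continuation (e.g. a stack trace) and is appended to the previous record.
new_record = re.compile(r"^\d{4}-\d{2}-\d{2} ")

def parse(text):
    records = []
    for line in text.splitlines():
        if new_record.match(line) or not records:
            records.append({"line": line})
        else:
            records[-1]["line"] += "\n" + line
    return records

db = sqlite_utils.Database(memory=True)
db["log"].insert_all(parse(LOG))
for row in db["log"].rows:
    print(repr(row["line"]))
```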
268469569 | MDU6SXNzdWUyNjg0Njk1Njk= | 39 | Protect against malicious SQL that causes damage even though our DB is immutable | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-10-25T16:44:27Z | 2021-08-17T23:52:07Z | 2017-11-05T02:53:47Z | OWNER | I’m currently operating under the assumption that it’s safe to allow arbitrary SQL statements because we are dealing with an immutable database. But this might not be the case - there are some pretty weird SQLite language extensions (ATTACH, PRAGMA etc) and I’m not certain they cannot be used to break things in a way that would affect future requests to the API. Solution: provide a “safe mode” option which disables the ?sql= mechanism. This still leaves the URL filter lookups, so I need to make sure that those are “safe”. In the future I may also implement a whitelist option where datasets can be configured to only allow specific filters against specific columns. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/39/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
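Purely as an illustration of the kind of "safe mode" check described above (not Datasette's actual implementation), a naive allow-list could look like this:

```python
# Toy illustration of a "safe mode" check, not Datasette's actual code:
# only allow read-only SELECT statements and reject keywords that can have
# side effects even when the database file itself is immutable.
import re

DISALLOWED = re.compile(r"\b(ATTACH|DETACH|PRAGMA|VACUUM|ANALYZE)\b", re.IGNORECASE)

def is_allowed(sql: str) -> bool:
    stripped = sql.strip().rstrip(";").strip()
    if not stripped.lower().startswith("select"):
        return False
    return not DISALLOWED.search(stripped)

assert is_allowed("select * from facetable limit 10")
assert not is_allowed("PRAGMA writable_schema = 1")
assert not is_allowed("ATTACH DATABASE 'other.db' AS other")
```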
268591332 | MDU6SXNzdWUyNjg1OTEzMzI= | 42 | Homepage UI for editing metadata file | simonw 9599 | closed | 0 | 4 | 2017-10-26T00:22:03Z | 2017-12-10T03:02:14Z | 2017-12-10T03:02:14Z | OWNER | Since we are going to have a metadata file which sets the title/description/etc for each database, why not allow you to run the app in —dev mode which makes the homepage into a WYSIWYG editor that can save to that file format. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/42/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
272391665 | MDU6SXNzdWUyNzIzOTE2NjU= | 48 | Switch to ujson | simonw 9599 | closed | 0 | 4 | 2017-11-08T23:50:29Z | 2019-06-24T06:57:54Z | 2019-06-24T06:57:43Z | OWNER | ujson is already a dependency of Sanic, and should be quite a bit faster. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/48/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
272661336 | MDU6SXNzdWUyNzI2NjEzMzY= | 49 | Pick a name | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-11-09T17:56:17Z | 2017-11-10T18:33:22Z | 2017-11-10T18:33:22Z | OWNER | Options so far:
Terms to play with:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/49/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
273157085 | MDU6SXNzdWUyNzMxNTcwODU= | 59 | datasette publish hyper | simonw 9599 | closed | 0 | 4 | 2017-11-11T16:27:26Z | 2019-05-13T19:01:00Z | 2019-05-13T19:00:44Z | OWNER | This is a bit tricky, because unlike Now there doesn't seem to be a way to tell Hyper to "build this Dockerfile and deploy the resulting image". They expect you to build a container and publish it to a registry instead. https://docs.hyper.sh/Reference/CLI/load.html allows you to publish an image directly from a tarball, but that still leaves the challenge of creating that image. The nice thing about the Now integration is that you don't need to have Docker installed on your local machine. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/59/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
273247186 | MDU6SXNzdWUyNzMyNDcxODY= | 68 | Support for title/source/license metadata | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-11-12T17:04:21Z | 2017-12-04T04:55:43Z | 2017-11-13T15:26:11Z | OWNER | I've decided this is important for launch: I want to set a precedent for people citing, licensing and documenting their datasets. Not sure how best to go about supporting this. I'd like to allow for the following data to be optionally attached to any given database:
I'd also like the ability to attach descriptions to individual tables - and maybe even to table columns? The question then becomes: how should this information be stored. A few options:
Whatever the format, it can be made much more usable by offering a web-based editing UI for populating it (a special mode the server can be run in). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
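For reference, a minimal sketch of the kind of metadata file this issue points towards; the values are invented, and the key names (`title`, `description`, `license`, `source` and their `*_url` variants) are the ones Datasette ended up documenting:

```python
# Invented values; only the key names reflect Datasette's documented
# dataset-level metadata.
import json

metadata = {
    "title": "Example dataset",
    "description": "A short description of what this data contains",
    "license": "CC BY 4.0",
    "license_url": "https://creativecommons.org/licenses/by/4.0/",
    "source": "Example source",
    "source_url": "https://example.com/source",
}

with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```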
273248366 | MDU6SXNzdWUyNzMyNDgzNjY= | 69 | Enforce pagination (or at least limits) for arbitrary custom SQL | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-11-12T17:21:33Z | 2017-11-13T20:32:47Z | 2017-11-13T19:35:47Z | OWNER | It's way too easy to accidentally trigger a page that returns 100,000 rows at the moment. I need to use the LIMIT clause on views and custom SQL - I can support pagination "next" links using offset as well. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
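One common way to enforce such a limit, sketched here with plain `sqlite3` rather than Datasette's actual code: wrap the user's SQL in a subselect, fetch one row more than the page size, and use the extra row to detect truncation.

```python
# Sketch only: wrap the user's query in a subselect with LIMIT page_size + 1
# and use the extra row to tell whether results were truncated.
import sqlite3

def run_limited(conn, user_sql, page_size=100):
    wrapped = "select * from ({}) limit {}".format(
        user_sql.strip().rstrip(";"), page_size + 1
    )
    rows = conn.execute(wrapped).fetchall()
    return rows[:page_size], len(rows) > page_size

conn = sqlite3.connect(":memory:")
conn.execute("create table t (n integer)")
conn.executemany("insert into t values (?)", [(i,) for i in range(1000)])
rows, truncated = run_limited(conn, "select n from t")
print(len(rows), truncated)  # 100 True
```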
274314940 | MDU6SXNzdWUyNzQzMTQ5NDA= | 105 | Consider data-package as a format for metadata | simonw 9599 | closed | 0 | 4 | 2017-11-15T21:43:34Z | 2017-11-20T19:50:53Z | 2017-11-20T19:50:53Z | OWNER | datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||||
275087397 | MDU6SXNzdWUyNzUwODczOTc= | 120 | Plugin that adds an authentication layer of some sort | simonw 9599 | closed | 0 | 4 | 2017-11-18T15:39:13Z | 2020-03-16T18:48:06Z | 2020-03-16T18:48:06Z | OWNER | Would allow people who want to host private data to do so. .sh |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/120/reactions", "total_count": 7, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 } |
completed | ||||||
275089535 | MDU6SXNzdWUyNzUwODk1MzU= | 121 | ?_json=foo&_json=bar query string argument | simonw 9599 | closed | 0 | 4 | 2017-11-18T16:09:55Z | 2018-05-31T13:48:12Z | 2018-05-28T18:11:51Z | OWNER | Causes the specified columns in the output to be treated as JSON, and returned deserialized in the .json or .jsono response. This will be particularly powerful when combined with https://sqlite.org/json1.html |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
312620566 | MDU6SXNzdWUzMTI2MjA1NjY= | 199 | Ability to apply sort on mobile in portrait mode | simonw 9599 | closed | 0 | 4 | 2018-04-09T17:35:04Z | 2018-04-10T00:37:53Z | 2018-04-10T00:34:38Z | OWNER | Missed this in #189... on mobile in portrait mode we hide the column headers, which means you can't click them to sort! You can sort in landscape mode at least. Need to come up with an alternative sort UI for portrait on mobile. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
316323336 | MDU6SXNzdWUzMTYzMjMzMzY= | 231 | metadata.json support for plugin configuration options | simonw 9599 | closed | 0 | 4 | 2018-04-20T15:58:47Z | 2019-05-13T18:56:21Z | 2019-05-13T18:56:21Z | OWNER | My datasette-cluster-map plugin currently works by detecting `latitude` and `longitude` columns. One way to do this could be to support optional plugin configuration as part of `metadata.json`.
These settings should be supported at the root level or at the individual database or table level. They could also be exposed in the https://datasette-cluster-map-demo.now.sh/-/plugins debug tool. Refs #14 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
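A sketch of what such per-plugin configuration could look like, written as a Python dict for consistency with the other examples; the nested `plugins` key mirrors the mechanism Datasette ended up with, while the specific datasette-cluster-map options and column names are illustrative:

```python
# Illustrative only: the option values are invented, not taken from the issue.
import json

metadata = {
    "title": "My database",
    "plugins": {
        "datasette-cluster-map": {
            "latitude_column": "lat",
            "longitude_column": "lng",
        }
    },
}
print(json.dumps(metadata, indent=2))
```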
320592643 | MDU6SXNzdWUzMjA1OTI2NDM= | 251 | Explore "distinct values for column" in inspect() | simonw 9599 | closed | 0 | 4 | 2018-05-06T13:27:24Z | 2018-05-14T22:47:55Z | 2018-05-14T22:47:55Z | OWNER | A lot of datasets have columns which have a small number of possible values in them - this one for example: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+distinct+category+from+%5Binconvenient-sequel%2Fratings%5D%3B Detecting these could be interesting as part of The problem is detecting them efficiently. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
324162476 | MDU6SXNzdWUzMjQxNjI0NzY= | 271 | Mechanism for automatically picking up changes when on-disk .db file changes | simonw 9599 | closed | 0 | 4 | 2018-05-17T19:53:15Z | 2019-01-10T21:35:18Z | 2019-01-10T21:35:18Z | OWNER | It would be useful if Datasette could spot when a SQLite database file changes on disk and restart itself (hence re-running .inspect() and picking up the new content hash). Ideally this could happen in an atomic way so no requests get dropped during the switch-over. This may not play well with SQLite opening databases in immutable mode. Research required. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/271/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
328172521 | MDU6SXNzdWUzMjgxNzI1MjE= | 303 | Support table names ending with .json or .csv | simonw 9599 | closed | 0 | 4 | 2018-05-31T14:53:23Z | 2018-06-15T06:55:50Z | 2018-06-15T06:55:50Z | OWNER | This is needed for #266 - if a table name ends with We should be smarter about this. This does mean we will have some URLs that look like this:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
336464733 | MDU6SXNzdWUzMzY0NjQ3MzM= | 328 | Installation instructions, including how to use the docker image | simonw 9599 | closed | 0 | 4 | 2018-06-28T03:59:33Z | 2023-09-05T14:10:39Z | 2018-06-28T04:02:10Z | OWNER | datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||||
338768551 | MDU6SXNzdWUzMzg3Njg1NTE= | 333 | Datasette on Zeit Now returns http URLs for facet and next links | simonw 9599 | closed | 0 | 4 | 2018-07-06T00:40:49Z | 2018-07-24T04:53:20Z | 2018-07-24T01:51:53Z | OWNER | e.g. on https://fivethirtyeight.datasettes.com/fivethirtyeight-ac35616/nba-elo%2Fnbaallelo.json?_facet=lg_id&_size=0
Note that suggested facets doesn't include the full URL at all, which is a consistency bug. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
345821500 | MDU6SXNzdWUzNDU4MjE1MDA= | 352 | render_cell(value) plugin hook | simonw 9599 | closed | 0 | 4 | 2018-07-30T15:56:20Z | 2020-02-10T16:18:58Z | 2018-08-05T00:14:57Z | OWNER | To allow plugins to customize how values matching a specific pattern are displayed in the HTML table view. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
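A minimal sketch of a plugin using the hook described above, assuming the simple one-argument form from the issue; the "values matching a specific pattern" here are JSON arrays stored as text, chosen purely as an example:

```python
# Example plugin sketch, not a real published plugin.
from datasette import hookimpl
import json

@hookimpl
def render_cell(value):
    # Render JSON arrays stored as text as a comma-separated list
    if isinstance(value, str) and value.startswith("["):
        try:
            return ", ".join(str(item) for item in json.loads(value))
        except ValueError:
            pass
    return None  # None tells Datasette to fall back to the default rendering
```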
346028655 | MDU6SXNzdWUzNDYwMjg2NTU= | 356 | Ability to display facet counts for many-to-many relationships | simonw 9599 | closed | 0 | 4 | 2018-07-31T04:14:26Z | 2019-05-29T21:39:12Z | 2019-05-25T16:30:09Z | OWNER | Parent: #354 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
413867537 | MDU6SXNzdWU0MTM4Njc1Mzc= | 16 | add_column() should support REFERENCES {other_table}({other_column}) | simonw 9599 | closed | 0 | 4 | 2019-02-24T21:00:45Z | 2019-05-29T05:17:59Z | 2019-05-29T04:56:18Z | OWNER | Related to #2 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
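A sketch of the resulting API as I understand it (check the sqlite-utils docs for the exact signature in your version); the `authors` and `books` tables are invented:

```python
# Assumed API sketch: add_column() accepting fk= / fk_col= so the new
# column is created with REFERENCES authors(id).
import sqlite_utils

db = sqlite_utils.Database(":memory:")
db["authors"].insert({"id": 1, "name": "Sofia"}, pk="id")
db["books"].insert({"id": 1, "title": "Example"}, pk="id")

db["books"].add_column("author_id", int, fk="authors", fk_col="id")
print(db["books"].schema)
```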
423316403 | MDU6SXNzdWU0MjMzMTY0MDM= | 422 | Figure out what to do about table counts in a mutable world | simonw 9599 | closed | 0 | 4 | 2019-03-20T15:27:15Z | 2019-05-02T05:43:11Z | 2019-05-02T05:43:11Z | OWNER | In moving away from the existing static inspect method (see #420 and #419) the biggest thing lost is full table row counts. These can be expensive against large tables, but currently Datasette runs the We can run those counts with a timelimit, but this means that for larger tables we won't be able to show a count at all, which is disappointing. Is there a way we can find an approximate or lower bound count for a table? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
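One illustrative approach to time-limited counts, not necessarily what Datasette ended up shipping: use SQLite's progress handler to abandon a `count(*)` that runs too long and treat that table's count as unknown.

```python
# Sketch: interrupt a count(*) via SQLite's progress handler once a
# deadline passes; a timed-out count is reported as None ("unknown").
import sqlite3
import time

def count_with_time_limit(conn, table, limit_ms=50):
    deadline = time.monotonic() + limit_ms / 1000
    # Returning a non-zero value from the handler aborts the running query
    conn.set_progress_handler(lambda: 1 if time.monotonic() > deadline else 0, 1000)
    try:
        return conn.execute("select count(*) from [{}]".format(table)).fetchone()[0]
    except sqlite3.OperationalError:
        return None  # timed out - show "unknown" instead of a count
    finally:
        conn.set_progress_handler(None, 1000)

conn = sqlite3.connect(":memory:")
conn.execute("create table big (x)")
conn.executemany("insert into big values (?)", [(i,) for i in range(100000)])
print(count_with_time_limit(conn, "big"))
```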
432893491 | MDExOlB1bGxSZXF1ZXN0MjcwMjUxMDIx | 432 | Refactor facets to a class and new plugin, refs #427 | simonw 9599 | closed | 0 | 4 | 2019-04-13T20:04:45Z | 2019-05-03T00:04:24Z | 2019-05-03T00:04:24Z | OWNER | simonw/datasette/pulls/432 | WIP for #427 |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
435531034 | MDU6SXNzdWU0MzU1MzEwMzQ= | 435 | Tracing support for seeing what SQL queries were executed | simonw 9599 | closed | 0 | 0.28 4305096 | 4 | 2019-04-21T17:37:37Z | 2019-05-11T20:32:21Z | 2019-05-11T19:07:42Z | OWNER | Features like faceting, foreign key expansions and now the inspect-less index view mean Datasette can end up executing a surprisingly large number of SQL queries to render a single page. Past experience with projects like tikbar have shown that being able to see what actually went into rendering a page can be critical for optimizing performance and generally understanding how everything works. Support a tracing mode (probably via a |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
443023308 | MDU6SXNzdWU0NDMwMzg1ODQ= | 462 | Replace most of `.inspect()` (and `datasette inspect`) with table counting | simonw 9599 | closed | 0 | 0.28 4305096 | 4 | 2019-05-11T18:26:06Z | 2019-05-16T14:31:05Z | 2019-05-16T14:31:05Z | OWNER | This is the last part of #419 - with the move to supporting mutable databases by default, the inspect-data mechanism currently in use no longer makes much sense. The one optimization I think is worth keeping for databases opened in immutable mode is the cached table counts. If performing them at run-time has performance issues, I would rather cache those results internally within Datasette after they are first calculated than continue to support them in the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
443038584 | MDU6SXNzdWU0NDMwMzg1ODQ= | 465 | Decide what to do about /-/inspect | simonw 9599 | closed | 0 | 4 | 2019-05-11T21:39:46Z | 2019-06-28T16:34:33Z | 2019-06-28T16:34:33Z | OWNER | It's not clear to me what this endpoint should do now as a result of #419 - it's still useful to be able to introspect databases for tools like datasette-registry, but since we aren't pre-calculating introspection data any more I need to rethink the approach. For one thing, this endpoint may need to be paginated. Or maybe it should be split up into separate endpoints for each connected database? Those should probably be paginated too seeing as fivethirtyeight has 400+ tables. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
449848803 | MDU6SXNzdWU0NDk4NDg4MDM= | 25 | Allow .insert(..., foreign_keys=()) to auto-detect table and primary key | simonw 9599 | closed | 0 | 4 | 2019-05-29T14:39:22Z | 2019-06-13T05:32:32Z | 2019-06-13T05:32:32Z | OWNER | The |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/25/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
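A hedged sketch of the auto-detection this issue asks for: pass `(column, other_table)` pairs and let sqlite-utils work out which primary key to reference; the table names and data are invented.

```python
# Assumed behaviour sketch - check the sqlite-utils docs for your version.
import sqlite_utils

db = sqlite_utils.Database(":memory:")
db["authors"].insert({"id": 1, "name": "Sofia"}, pk="id")
db["books"].insert(
    {"id": 1, "title": "Example", "author_id": 1},
    pk="id",
    foreign_keys=[("author_id", "authors")],  # primary key of authors detected
)
print(db["books"].schema)
```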
449854604 | MDU6SXNzdWU0NDk4NTQ2MDQ= | 492 | Facets not correctly persisted in hidden form fields | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 4 | 2019-05-29T14:49:39Z | 2020-09-15T20:12:29Z | 2020-09-15T20:12:29Z | OWNER | Steps to reproduce: visit https://2a4b892.datasette.io/fixtures/roadside_attractions?_facet_m2m=attraction_characteristic and click "Apply" Result is a 500: The error occurs because of this hidden HTML input:
This should be:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
456568880 | MDU6SXNzdWU0NTY1Njg4ODA= | 509 | Support opening multiple databases with the same stem | simonw 9599 | closed | 0 | simonw 9599 | Datasette 1.0 3268330 | 4 | 2019-06-15T19:32:00Z | 2020-12-22T20:04:35Z | 2020-12-22T20:04:35Z | OWNER | e.g. I should be able to do this:
This currently errors because you can't have two databases taking the Instead, how about in this particular case assigning the second database |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/509/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||
459590021 | MDU6SXNzdWU0NTk1OTAwMjE= | 519 | Decide what goes into Datasette 1.0 | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 4 | 2019-06-23T15:47:41Z | 2021-11-15T23:26:11Z | 2021-11-15T23:26:11Z | OWNER | Datasette ASGI #272 is a big part of it... but 1.0 will generally be an indicator that Datasette is a stable platform for developers to write plugins and custom templates against. So lots to think about. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
466996584 | MDExOlB1bGxSZXF1ZXN0Mjk2NzM1MzIw | 557 | Get tests running on Windows using Travis CI | simonw 9599 | closed | 0 | 4 | 2019-07-11T16:36:57Z | 2021-07-10T23:39:48Z | 2021-07-10T23:39:48Z | OWNER | simonw/datasette/pulls/557 | Refs #511 |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
473083260 | MDU6SXNzdWU0NzMwODMyNjA= | 50 | "Too many SQL variables" on large inserts | simonw 9599 | closed | 0 | 4 | 2019-07-25T21:43:31Z | 2022-11-04T14:38:36Z | 2019-07-28T11:59:33Z | OWNER | Reported here: https://github.com/dogsheep/healthkit-to-sqlite/issues/9 It looks like there's a default limit of 999 variables - we need to be smart about that, maybe dynamically lower the batch size based on the number of columns. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/50/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
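The batching arithmetic implied above, as a tiny sketch: with SQLite's default limit of 999 bound variables, the rows-per-INSERT batch size has to shrink as the column count grows.

```python
# Sketch of the batch-size calculation, not sqlite-utils' actual code.
SQLITE_MAX_VARS = 999

def batch_size_for(num_columns, max_batch=100):
    return min(max_batch, SQLITE_MAX_VARS // num_columns)

print(batch_size_for(2))    # 100 - column count is not the constraint
print(batch_size_for(30))   # 33  - 33 * 30 = 990 variables per statement
print(batch_size_for(500))  # 1   - very wide rows insert one at a time
```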
520715188 | MDU6SXNzdWU1MjA3MTUxODg= | 622 | Datasette should work with Python 3.8 (and drop compatibility with Python 3.5) | simonw 9599 | closed | 0 | 4 | 2019-11-11T03:12:36Z | 2019-11-12T05:52:49Z | 2019-11-12T05:09:13Z | OWNER | See #595, #594, #404. The big thing holding me back from ditching Python 3.5 was glitch.com - but they now offer Python 3.7: https://support.glitch.com/t/can-you-upgrade-python-to-latest-version/7980/25?u=simonw |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/622/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
530653633 | MDU6SXNzdWU1MzA2NTM2MzM= | 645 | Mechanism for register_output_renderer to suggest extension or not | simonw 9599 | closed | 0 | 4 | 2019-12-01T01:26:27Z | 2020-05-28T02:22:18Z | 2020-05-28T02:22:12Z | OWNER | datasette-atom only works if the user constructs a SQL query with specific output columns ( It would be good if the See also #581. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
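A hedged sketch of the `can_render` mechanism this issue led to: `register_output_renderer` supplies a `can_render` callback alongside `render`, so the `.atom` link is only suggested when the right columns are present. The required column names are modelled on datasette-atom but are illustrative, and the callback parameters should be checked against the plugin hook documentation for your Datasette version.

```python
# Hedged sketch of an output renderer with a can_render callback.
from datasette import hookimpl
from datasette.utils.asgi import Response

REQUIRED_COLUMNS = {"atom_id", "atom_title", "atom_updated"}

def can_render(columns):
    return REQUIRED_COLUMNS.issubset(columns)

def render(rows, columns):
    # A real renderer would build an Atom feed; this stub just shows the shape
    return Response(
        "<feed>{} entries</feed>".format(len(rows)),
        content_type="application/xml",
    )

@hookimpl
def register_output_renderer(datasette):
    return {"extension": "atom", "render": render, "can_render": can_render}
```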
558600274 | MDU6SXNzdWU1NTg2MDAyNzQ= | 81 | Remove .detect_column_types() from table, make it a documented API | simonw 9599 | closed | 0 | 4 | 2020-02-01T21:25:54Z | 2020-02-01T21:55:35Z | 2020-02-01T21:55:35Z | OWNER | I used it in It would make more sense for this method to live on the Database rather than the Table - or even to exist as a separate utility method entirely. Then it should be documented. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/81/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);