issues
86 rows where comments = 6 and state = "closed" sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1907655261 | I_kwDOBm6k_c5xtIJd | 2193 | "Test DATASETTE_LOAD_PLUGINS" test shows errors but did not fail the CI run | simonw 9599 | closed | 0 | 6 | 2023-09-21T19:49:34Z | 2023-09-21T21:56:43Z | 2023-09-21T21:56:43Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/2057#issuecomment-1730201226 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1886771493 | I_kwDOCGYnMM5wddkl | 592 | `table.transform()` should preserve `rowid` values | simonw 9599 | closed | 0 | 6 | 2023-09-08T00:42:38Z | 2023-09-10T17:46:41Z | 2023-09-09T00:45:32Z | OWNER | I just spotted a bug when using https://datasette.io/plugins/datasette-configure-fts and https://datasette.io/plugins/datasette-edit-schema at the same time. Steps to reproduce:
I got the wrong search results, which I think is because the Reconfiguring FTS on the table fixed the problem. I think |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
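The bug in #592 above is easiest to see in a few lines of Python. A minimal sketch (in-memory database and `docs` table invented for the example) that checks whether `rowid` values survive a `table.transform()` call:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
# A rowid table with no explicit primary key, like the tables backing FTS in the report
db["docs"].insert_all([{"title": "First"}, {"title": "Second"}, {"title": "Third"}])

before = [r["rowid"] for r in db.query("select rowid, title from docs")]
db["docs"].transform(types={"title": str})  # transform() rebuilds the table behind the scenes
after = [r["rowid"] for r in db.query("select rowid, title from docs")]

# Before the fix the rebuilt table could hand out fresh rowids, breaking
# anything (such as an FTS index) that referenced the old ones
assert before == after
```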
1886791100 | I_kwDOBm6k_c5wdiW8 | 2180 | Plugin hook: `actors_from_ids()` | simonw 9599 | closed | 0 | 6 | 2023-09-08T01:16:41Z | 2023-09-10T17:44:14Z | 2023-09-08T04:28:03Z | OWNER | In building Datasette Cloud we realized that a bunch of the features we are building need a way of resolving an actor ID to the actual actor, in order to display something more interesting than just an integer ID. Social plugins in particular need this - comments by X, CSV uploaded by X, that kind of thing. I think the solution is a new plugin hook: The default implementation can return Pluggy has a |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
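As a rough illustration of the `actors_from_ids()` hook proposed in #2180 above, a plugin implementation might look something like this; the lookup dictionary is invented for the example and a real plugin would consult its own user store:

```python
from datasette import hookimpl

# Hypothetical in-memory stand-in for a real user store
KNOWN_ACTORS = {
    "1": {"id": "1", "name": "Alice"},
    "2": {"id": "2", "name": "Bob"},
}


@hookimpl
def actors_from_ids(datasette, actor_ids):
    # Return a dictionary mapping each requested ID to an actor dictionary;
    # IDs that cannot be resolved are simply omitted
    return {
        actor_id: KNOWN_ACTORS[actor_id]
        for actor_id in actor_ids
        if actor_id in KNOWN_ACTORS
    }
```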
1874327336 | PR_kwDOBm6k_c5ZLMSe | 2165 | DATASETTE_LOAD_PLUGINS environment variable for loading specific plugins | simonw 9599 | closed | 0 | 6 | 2023-08-30T20:33:30Z | 2023-08-30T22:12:25Z | 2023-08-30T22:12:25Z | OWNER | simonw/datasette/pulls/2165 |
TODO:
|
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
685806511 | MDU6SXNzdWU2ODU4MDY1MTE= | 950 | Private/secret databases: database files that are only visible to plugins | simonw 9599 | closed | 0 | 6 | 2020-08-25T20:46:17Z | 2023-08-24T22:26:09Z | 2023-08-24T22:26:08Z | OWNER | In thinking about the best way to implement https://github.com/simonw/datasette-auth-passwords/issues/6 (SQL-backed user accounts for
Idea: allow one or more private database files to be attached to Datasette, something like this:
The So
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1786258502 | I_kwDOCGYnMM5qeCRG | 565 | Table renaming: db.rename_table() and sqlite-utils rename-table | simonw 9599 | closed | 0 | 6 | 2023-07-03T14:07:42Z | 2023-07-22T22:12:40Z | 2023-07-22T22:12:40Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/llm/issues/65#issuecomment-1618375042 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
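The title of #565 above names both a Python method and a CLI command. Assuming the method shipped with that name, the Python side is a one-liner; `content.db` and the table names here are invented:

```python
import sqlite_utils

db = sqlite_utils.Database("content.db")
# Renames the table using ALTER TABLE ... RENAME TO under the hood
db.rename_table("old_table_name", "new_table_name")
```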
1718607907 | I_kwDOCGYnMM5mb-Aj | 551 | Make as many examples in the CLI docs as possible copy-and-pastable | simonw 9599 | closed | 0 | 6 | 2023-05-21T19:04:10Z | 2023-05-21T21:04:04Z | 2023-05-21T20:57:24Z | OWNER | e.g. in this section: https://sqlite-utils.datasette.io/en/stable/cli.html#running-queries-directly-against-csv-or-json The little copy button will also copy the |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1718517882 | I_kwDOCGYnMM5mboB6 | 545 | Try out Trogon for a tui interface | simonw 9599 | closed | 0 | 6 | 2023-05-21T14:08:25Z | 2023-05-21T19:33:13Z | 2023-05-21T18:41:58Z | OWNER | sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/545/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||||
1617769847 | I_kwDOJHON9s5gbTV3 | 7 | Folder support | simonw 9599 | closed | 0 | 6 | 2023-03-09T18:21:33Z | 2023-03-09T20:48:18Z | 2023-03-09T20:48:18Z | MEMBER | Notes can live in folders. These relationships should be exported too. |
apple-notes-to-sqlite 611552758 | issue | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1615862295 | I_kwDOBm6k_c5gUBoX | 2036 | `publish cloudrun` reuses image tags, which can lead to very surprising deploy problems | simonw 9599 | closed | 0 | 6 | 2023-03-08T20:11:44Z | 2023-03-08T20:57:34Z | 2023-03-08T20:57:34Z | OWNER | See this issue: - https://github.com/simonw/datasette.io/issues/141 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1495431932 | I_kwDOBm6k_c5ZInr8 | 1951 | `datasette.create_token(...)` method for creating signed API tokens | simonw 9599 | closed | 0 | Datasette 1.0a2 8711695 | 6 | 2022-12-14T01:25:34Z | 2022-12-14T02:43:45Z | 2022-12-14T02:42:05Z | OWNER | I need this for: - #1947 And I can refactor this to use it too: - #1855 By making this a documented internal API it can be used by other plugins too. It's also going to be really useful for writing tests. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
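A hedged sketch of how a plugin might use the `datasette.create_token()` internal described in #1951 above, exposed through a custom route. The route path, actor fallback and one-hour expiry are invented for the example, and keyword arguments may differ between Datasette versions:

```python
from datasette import hookimpl
from datasette.utils.asgi import Response


@hookimpl
def register_routes():
    async def issue_token(request, datasette):
        actor_id = (request.actor or {}).get("id", "anonymous")
        # Create a signed API token for the current actor, valid for one hour
        token = datasette.create_token(actor_id, expires_after=3600)
        return Response.json({"token": token})

    return [(r"^/-/my-token$", issue_token)]
```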
1486011362 | PR_kwDOBm6k_c5E3XqB | 1940 | register_permissions() plugin hook | simonw 9599 | closed | 0 | Datasette 1.0a2 8711695 | 6 | 2022-12-09T05:09:28Z | 2022-12-13T02:05:55Z | 2022-12-13T02:05:54Z | OWNER | simonw/datasette/pulls/1940 | Refs #1939 From this comment: https://github.com/simonw/datasette/issues/1939#issuecomment-1343872168
:books: Documentation preview :books:: https://datasette--1940.org.readthedocs.build/en/1940/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||
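A sketch of what an implementation of the `register_permissions()` hook from #1940 above can look like. The permission name and description are invented, and the `Permission` constructor fields follow the documented hook as closely as I recall them, so they may vary slightly by version:

```python
from datasette import hookimpl, Permission


@hookimpl
def register_permissions(datasette):
    return [
        Permission(
            name="backup-database",  # invented permission name
            abbr=None,
            description="Download full database backups",
            takes_database=True,   # can be granted for a specific database
            takes_resource=False,
            default=False,         # deny unless explicitly allowed
        )
    ]
```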
1468709531 | I_kwDOBm6k_c5Xirqb | 1915 | Interactive demo of Datasette 1.0 write APIs | simonw 9599 | closed | 0 | 6 | 2022-11-29T21:16:03Z | 2022-11-30T04:05:46Z | 2022-11-30T04:05:46Z | OWNER | I'm going to try to get this working on https://latest.datasette.io/ - it already has a way for people to sign in as root, but none of the databases there are writable. So I'm going to build a plugin which adds a writable named in-memory database. And some kind of mechanism for clearing out that database on a regular basis - maybe tables in that database get deleted automatically an hour after they are created? (Would be neat to display their time-left-until-deleted too) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1363440999 | I_kwDOBm6k_c5RRHVn | 1804 | Ability to set a custom facet_size per table | simonw 9599 | closed | 0 | 6 | 2022-09-06T15:11:40Z | 2022-09-07T00:21:56Z | 2022-09-06T18:06:53Z | OWNER | Suggestion from Discord: https://discord.com/channels/823971286308356157/823971286941302908/1016725586351247430
This is a really good idea, it could be done in |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1804/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1348294436 | PR_kwDOCGYnMM49qP2V | 468 | db[table].create(..., transform=True) and create-table --transform | simonw 9599 | closed | 0 | 3.29 8355157 | 6 | 2022-08-23T17:27:58Z | 2022-08-27T23:17:55Z | 2022-08-27T23:17:55Z | OWNER | simonw/sqlite-utils/pulls/468 | Work in progress. Still needs documentation and tests (and to cover more cases of things that might have changed). Refs: - #467 :books: Documentation preview :books:: https://sqlite-utils--468.org.readthedocs.build/en/468/ |
sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||
1352932038 | I_kwDOCGYnMM5QpBrG | 470 | Upgrade `--load-extension` to accept entrypoints like Datasette | simonw 9599 | closed | 0 | 3.29 8355157 | 6 | 2022-08-27T03:53:20Z | 2022-08-27T05:55:49Z | 2022-08-27T05:55:48Z | OWNER | Imitate:
- https://github.com/simonw/datasette/pull/1789
```
# would load default entrypoint like before
datasette data.db --load-extension ext

# loads the extensions with the "sqlite3_foo_init" entrypoint
datasette data.db --load-extension ext:sqlite3_foo_init

# loads the extensions with the "sqlite3_bar_init" entrypoint
datasette data.db --load-extension ext:sqlite3_bar_init
```
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1084193403 | PR_kwDOBm6k_c4wDKmb | 1574 | introduce new option for datasette package to use a slim base image | fs111 33631 | closed | 0 | 6 | 2021-12-19T21:18:19Z | 2022-08-15T08:49:31Z | 2022-08-15T08:49:31Z | NONE | simonw/datasette/pulls/1574 | The official python images on docker hub come with a slim variant that is significantly smaller than the default. The diff does not change the default, but allows to switch to the
Size comparison:
```
$ datasette package some.db -t fat --install "datasette-basemap datasette-cluster-map"
$ datasette package some.db -t slim --slim-base-image --install "datasette-basemap datasette-cluster-map"
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
fat          latest   807b393ace0d   9 seconds ago   978MB
slim         latest   31bc5e63505c   8 minutes ago   191MB
```
|
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1223699280 | I_kwDOBm6k_c5I8CtQ | 1739 | .db downloads should be served with an ETag | simonw 9599 | closed | 0 | 6 | 2022-05-03T05:11:21Z | 2022-05-04T18:21:18Z | 2022-05-03T14:59:51Z | OWNER | I noticed that my Pyodide Datasette prototype is downloading the same database file every single time rather than browser caching it: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1174423568 | I_kwDOBm6k_c5GAEgQ | 1670 | Ship Datasette 0.61 | simonw 9599 | closed | 0 | 6 | 2022-03-20T02:47:54Z | 2022-03-23T18:32:32Z | 2022-03-23T18:32:03Z | OWNER | Let the alpha bake for a while, since #1668 is a big last-minute change. After shipping, release a new |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1126604194 | I_kwDOBm6k_c5DJp2i | 1632 | datasette one.db one.db opens database twice, as one and one_2 | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 6 | 2022-02-07T23:14:47Z | 2022-03-19T04:04:49Z | 2022-02-07T23:50:01Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette-publish-fly/pull/12#issuecomment-1032029874 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1145882578 | I_kwDOCGYnMM5ETMfS | 408 | `deterministic=True` fails on versions of SQLite prior to 3.8.3 | learning4life 24938923 | closed | 0 | 6 | 2022-02-21T14:36:43Z | 2022-03-13T16:54:09Z | 2022-03-02T00:38:11Z | NONE | Hi, love your work. I am unable to lookup indexes in a database using sqlite-utils:
or
Software: sqlite-utils, version 3.24; sqlite3 --version: 3.36.0

Output:
```
Traceback (most recent call last):
  File "/opt/app-root/bin/sqlite-utils", line 8, in <module>
    sys.exit(cli())
  File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/opt/app-root/lib64/python3.8/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/cli.py", line 2123, in indexes
    ctx.invoke(
  File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/cli.py", line 1624, in query
    db.register_fts4_bm25()
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 403, in register_fts4_bm25
    self.register_function(rank_bm25, deterministic=True)
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 399, in register_function
    register(fn)
  File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 392, in register
    self.conn.create_function(name, arity, fn, **kwargs)
sqlite3.NotSupportedError: deterministic=True requires SQLite 3.8.3 or higher
```
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
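One way to make function registration degrade gracefully on builds that raise the error in #408 above is to retry without the flag. This is a hedged sketch of the general pattern, not necessarily the exact fix that shipped in sqlite-utils:

```python
import sqlite3


def register_function(conn, name, arity, fn):
    try:
        # deterministic=True needs Python 3.8+ and SQLite 3.8.3+
        conn.create_function(name, arity, fn, deterministic=True)
    except (sqlite3.NotSupportedError, TypeError):
        # Older SQLite raises NotSupportedError; older Python raises TypeError
        # because create_function() has no deterministic keyword there
        conn.create_function(name, arity, fn)


conn = sqlite3.connect(":memory:")
register_function(conn, "double_it", 1, lambda x: x * 2)
print(conn.execute("select double_it(21)").fetchone()[0])  # 42
```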
1087913724 | I_kwDOBm6k_c5A2D78 | 1577 | Drop support for Python 3.6 | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 6 | 2021-12-23T18:17:03Z | 2022-01-25T23:30:03Z | 2022-01-20T04:31:41Z | OWNER | Original title: Decide when to drop support for Python 3.6
Originally posted by @simonw in https://github.com/simonw/datasette/issues/1576#issuecomment-999878907 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1083669410 | I_kwDOBm6k_c5Al3ui | 1566 | Release Datasette 0.60 | simonw 9599 | closed | 0 | Datasette 0.60 7571612 | 6 | 2021-12-17T22:58:12Z | 2022-01-14T01:59:55Z | 2022-01-14T01:59:55Z | OWNER | Using this as a tracking issue. I'm hoping to get the bulk of the JSON redesign work from the refactor in #1554 in for this release. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1076388044 | I_kwDOBm6k_c5AKGDM | 1547 | Writable canned queries fail to load custom templates | wragge 127565 | closed | 0 | Datasette 0.60 7571612 | 6 | 2021-12-10T03:31:48Z | 2022-01-13T22:27:59Z | 2021-12-19T21:12:00Z | CONTRIBUTOR | I've created a canned query with
My non-writeable canned queries pick up custom templates as expected, and if I look at their HTML I see the canned query name added to the templates considered (the canned query here is
So it seems like the writeable canned query is behaving differently for some reason. Is it an authentication thing? I'm using the built in Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1067771698 | I_kwDOCGYnMM4_pOcy | 348 | Command for creating an empty database | simonw 9599 | closed | 0 | 3.21 7558727 | 6 | 2021-11-30T23:24:27Z | 2022-01-13T07:06:59Z | 2022-01-09T20:33:20Z | OWNER | I sometimes find the need to create an empty SQLite database file - for example if I want to enable WAL on it before using it with another script. I currently do that like this:
It would be nice if |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
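For the use case in #348 above (create an empty file, then enable WAL before anything else touches it), a minimal Python sketch looks like this; `empty.db` is an invented filename:

```python
import sqlite_utils

# Instantiating Database against a path creates the file if it does not exist yet
db = sqlite_utils.Database("empty.db")
db.enable_wal()  # switch to WAL mode before any other script uses the file
db.vacuum()      # ensure a valid (if empty) database header is written to disk
```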
1097128334 | I_kwDOCGYnMM5BZNmO | 371 | Support mutating row in `--convert` without returning it | simonw 9599 | closed | 0 | 3.21 7558727 | 6 | 2022-01-09T07:38:44Z | 2022-01-10T19:27:30Z | 2022-01-09T20:06:15Z | OWNER | Currently you have to do this:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1082743068 | PR_kwDOBm6k_c4v-izc | 1559 | filters_from_request plugin hook, now used in TableView | simonw 9599 | closed | 0 | 6 | 2021-12-16T23:59:33Z | 2021-12-17T23:09:41Z | 2021-12-17T19:02:15Z | OWNER | simonw/datasette/pulls/1559 | New plugin hook, refs #473 Used it to extract the logic from TableView that handles _search and _through and _where - refs #1518 |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
1052851176 | I_kwDOBm6k_c4-wTvo | 1507 | ReadTheDocs build failed for 0.59.2 release | simonw 9599 | closed | 0 | 6 | 2021-11-14T05:24:34Z | 2021-11-14T05:41:55Z | 2021-11-14T05:41:55Z | OWNER | I had to cancel the 0.59.2 release because ReadTheDocs was failing to build the documentation. https://readthedocs.org/projects/datasette/builds/15268454/
```
/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/bin/python -m sphinx -T -b html -d _build/doctrees -D language=en . _build/html
Running Sphinx v1.8.5
loading translations [en]... done
making output directory...
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 27 source files that are out of date
updating environment: 27 added, 0 changed, 0 removed
reading sources... [ 3%] authentication

Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/cmd/build.py", line 304, in build_main
    app.build(args.force_all, filenames)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/application.py", line 341, in build
    self.builder.build_update()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 347, in build_update
    len(to_build))
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 360, in build
    updated_docnames = set(self.read())
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 468, in read
    self._read_serial(docnames)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 490, in _read_serial
    self.read_doc(docname)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 534, in read_doc
    doctree = read_doc(self.app, self.env, self.env.doc2path(docname))
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/io.py", line 318, in read_doc
    pub.publish()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py", line 219, in publish
    self.apply_transforms()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py", line 200, in apply_transforms
    self.document.transformer.apply_transforms()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py", line 90, in apply_transforms
    Transformer.apply_transforms(self)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/transforms/__init__.py", line 171, in apply_transforms
    transform.apply(**kwargs)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py", line 245, in apply
    apply_source_workaround(n)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py", line 94, in apply_source_workaround
    for classifier in reversed(node.parent.traverse(nodes.classifier)):
TypeError: argument to reversed() must be a sequence

Exception occurred:
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py", line 94, in apply_source_workaround
    for classifier in reversed(node.parent.traverse(nodes.classifier)):
TypeError: argument to reversed() must be a sequence
The full traceback has been saved in /tmp/sphinx-err-vkl0oE.log, if you want to report the issue to the developers. Please also report this if it was a user error, so that a better error message can be provided next time. A bug report can be filed in the tracker at https://github.com/sphinx-doc/sphinx/issues. Thanks!
```
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
707478649 | MDU6SXNzdWU3MDc0Nzg2NDk= | 173 | Progress bar for sqlite-utils insert | simonw 9599 | closed | 0 | 6 | 2020-09-23T15:43:56Z | 2021-11-01T08:42:24Z | 2020-10-27T18:16:04Z | OWNER | It would be nice if |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
907645813 | MDU6SXNzdWU5MDc2NDU4MTM= | 57 | Error: Use either --since or --since_id, not both | rubenv 42904 | closed | 0 | 6 | 2021-05-31T18:11:04Z | 2021-08-20T00:01:31Z | 2021-08-20T00:01:31Z | CONTRIBUTOR | I'm using the following command:
Which gives the following error:
Running without
|
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/57/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
974987856 | MDU6SXNzdWU5NzQ5ODc4NTY= | 1442 | Mechanism to cause specific branches to deploy their own demos | simonw 9599 | closed | 0 | 6 | 2021-08-19T19:41:39Z | 2021-08-19T21:11:45Z | 2021-08-19T21:09:40Z | OWNER | A useful capability would be if it was super-easy to say "any pushes to branch X should be deployed to I'd like to use this for the column query information work in #1434 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
465815372 | MDU6SXNzdWU0NjU4MTUzNzI= | 37 | Experiment with type hints | simonw 9599 | closed | 0 | 6 | 2019-07-09T14:30:34Z | 2021-08-18T21:48:57Z | 2021-08-18T21:48:57Z | OWNER | Since it's designed to be used in Jupyter or for rapid prototyping in an IDE (and it's still pretty small) https://veekaybee.github.io/2019/07/08/python-type-hints/ is good. It suggests the mypy docs for getting started: https://mypy.readthedocs.io/en/latest/existing_code.html plus this tutorial: https://pymbook.readthedocs.io/en/latest/typehinting.html |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/37/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
944326512 | MDU6SXNzdWU5NDQzMjY1MTI= | 296 | `table.search(..., quote=True)` parameter and `sqlite-utils search --quote` option | deafmute1 32427188 | closed | 0 | 6 | 2021-07-14T11:26:47Z | 2021-08-18T20:13:12Z | 2021-08-18T20:10:48Z | NONE | Hi,
Recently got this error:
My solution was to just strip these out of the query using this line
Perhaps this could be included into the |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
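A sketch of the `quote=True` parameter proposed in #296 above, using an invented `articles` table: quoting escapes characters that FTS would otherwise treat as query syntax, so raw user input cannot break the MATCH expression:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["articles"].insert_all([
    {"title": "What's new in 3.0?", "body": "Release notes"},
    {"title": "Another post", "body": "Nothing relevant"},
])
db["articles"].enable_fts(["title", "body"])

# Without quote=True, punctuation-heavy input like this can raise FTS syntax errors
rows = list(db["articles"].search("What's new in 3.0?", quote=True))
print([r["title"] for r in rows])
```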
963897111 | MDU6SXNzdWU5NjM4OTcxMTE= | 309 | sqlite-utils insert errors should show SQL and parameters, if possible | scaleoutsean 16622642 | closed | 0 | 6 | 2021-08-09T11:24:14Z | 2021-08-09T23:40:29Z | 2021-08-09T22:25:58Z | NONE | I've tried several approaches, but this is the current one:
I googled the error and checked SO answers and advice, all good. I changed my JSON file to not use integers so I no longer get this error. Of course, that makes using the database a bit harder, so I also tried to solve the problem by modifying DB structure (while using integers in JSON). If I change all If that is the case, can this error be a bit more specific for easier troubleshooting - maybe tell us which record caused the problem when that error is thrown? My table has 60+ columns, many of which use 64-bit integers (not all records are large or known in advance), so while I can modify JSON to use strings instead of integers, it decreases usability and finding out which records have values for which SQLite integers aren't sufficient requires some work (I'm thinking about parsing all integers with My environment:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
940077168 | MDU6SXNzdWU5NDAwNzcxNjg= | 1389 | "searchmode": "raw" in table metadata | simonw 9599 | closed | 0 | 6 | 2021-07-08T17:32:10Z | 2021-07-10T18:33:13Z | 2021-07-10T18:33:13Z | OWNER |
Originally posted by @Krazybug in https://github.com/simonw/datasette/issues/759#issuecomment-624860451 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
926777310 | MDU6SXNzdWU5MjY3NzczMTA= | 290 | `db.query()` method (renamed `db.execute_returning_dicts()`) | simonw 9599 | closed | 0 | 6 | 2021-06-22T03:03:54Z | 2021-06-24T23:17:38Z | 2021-06-24T22:54:43Z | OWNER | Most of this library deals with lists of Python dictionaries - The There is a clumsily named It needs a better name, and needs to be properly documented. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
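The renamed method from #290 above reads like this in practice; a short sketch with an invented `authors` table and a parameterized query:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["authors"].insert_all([
    {"name": "simonw", "repos": 300},
    {"name": "someone-else", "repos": 12},
])

# db.query() yields plain Python dictionaries, one per row
for row in db.query("select name, repos from authors where repos > :n", {"n": 100}):
    print(row)  # {'name': 'simonw', 'repos': 300}
```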
395236066 | MDU6SXNzdWUzOTUyMzYwNjY= | 393 | CSV export in "Advanced export" pane doesn't respect query | ltrgoddard 1727065 | closed | 0 | 6 | 2019-01-02T12:39:41Z | 2021-06-17T18:14:24Z | 2019-01-03T02:44:10Z | NONE | It looks like there's an inconsistency when exporting to CSV via the the web interface. Say I'm looking at songs released in 1989 in the It may be that this is intended behaviour related to the streaming CSV stuff discussed here, but if that's the case then I think it should be a little clearer. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
906356331 | MDU6SXNzdWU5MDYzNTYzMzE= | 263 | `sqlite-utils indexes` command | simonw 9599 | closed | 0 | 6 | 2021-05-29T04:52:34Z | 2021-06-03T04:34:38Z | 2021-06-03T04:34:38Z | OWNER | While working on #260 I realized there's no command to show indexes in a database, even though there is one for showing tables and one for triggers. I should implement #261 first. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
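Alongside the CLI command tracked in #263 above, the Python API exposes the same information through `table.indexes`; a quick sketch against an invented `dogs` table:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["dogs"].insert({"name": "Cleo", "breed": "mutt"})
db["dogs"].create_index(["breed"])

# table.indexes returns one namedtuple per index on the table
for index in db["dogs"].indexes:
    print(index.name, index.columns)  # e.g. idx_dogs_breed ['breed']
```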
724369025 | MDExOlB1bGxSZXF1ZXN0NTA1NzY5NDYy | 1031 | Fallback to databases in inspect-data.json when no -i options are passed | frankier 299380 | closed | 0 | 6 | 2020-10-19T07:51:06Z | 2021-03-29T01:46:45Z | 2021-03-29T00:23:41Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1031 | Currently |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
826613352 | MDExOlB1bGxSZXF1ZXN0NTg4NjAxNjI3 | 1254 | Update Docker Spatialite version to 5.0.1 + add support for Spatialite topology functions | durkie 3200608 | closed | 0 | 6 | 2021-03-09T20:49:08Z | 2021-03-10T18:27:45Z | 2021-03-09T22:04:23Z | NONE | simonw/datasette/pulls/1254 | This requires adding the RT Topology library (Spatialite changed to RT Topology from LWGEOM between 4.4 and 5.0), as well as upgrading the GEOS version (which is the reason for switching to
|
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
763320133 | MDExOlB1bGxSZXF1ZXN0NTM3NzkxNjc1 | 208 | sqlite-utils analyze-tables command and table.analyze_column() method | simonw 9599 | closed | 0 | 6 | 2020-12-12T05:27:49Z | 2020-12-13T07:20:16Z | 2020-12-13T07:20:12Z | OWNER | simonw/sqlite-utils/pulls/208 | Refs #207
|
sqlite-utils 140912432 | pull | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
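A rough sketch of the `table.analyze_column()` method named in the PR title above; the `pets` table is invented and the exact fields on the returned summary object may differ by version:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["pets"].insert_all([
    {"species": "dog"}, {"species": "dog"}, {"species": "cat"}, {"species": None},
])

details = db["pets"].analyze_column("species")
# Summary statistics for the column: null count, distinct values, most common values
print(details.num_null, details.num_distinct, details.most_common)
```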
753767911 | MDExOlB1bGxSZXF1ZXN0NTI5NzgzMjc1 | 1117 | Support for generated columns | simonw 9599 | closed | 0 | 6 | 2020-11-30T20:10:46Z | 2020-11-30T22:23:19Z | 2020-11-30T21:29:58Z | OWNER | simonw/datasette/pulls/1117 | Refs #1116. My first attempt at this worked on my laptop but broke in CI, so I'm going to iterate on it in a pull request instead. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
714449879 | MDU6SXNzdWU3MTQ0NDk4Nzk= | 992 | Change "--config foo:bar" to "--setting foo bar" | simonw 9599 | closed | 0 | Datasette 0.52 6055094 | 6 | 2020-10-05T01:27:45Z | 2020-11-24T20:01:54Z | 2020-11-24T20:01:54Z | OWNER | I designed the config format before I had a good feel for CLI design using Click. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
737476423 | MDU6SXNzdWU3Mzc0NzY0MjM= | 198 | Support order by relevance against FTS4 | simonw 9599 | closed | 0 | 6 | 2020-11-06T05:36:31Z | 2020-11-06T18:30:44Z | 2020-11-06T18:30:44Z | OWNER | For #192 and #197 I've decided I want to be able to order by relevance in FTS4 as well as FTS5. This means I need to port over my work on bm25() from https://github.com/simonw/sqlite-fts4 (since I don't want to add a full dependency). |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
733796942 | MDU6SXNzdWU3MzM3OTY5NDI= | 1075 | PrefixedUrlString mechanism broke everything | simonw 9599 | closed | 0 | 0.51 6026070 | 6 | 2020-10-31T19:58:05Z | 2020-10-31T20:48:51Z | 2020-10-31T20:48:51Z | OWNER | Added in 7a67bc7a569509d65b3a8661e0ad2c65f0b09166 refs #1026. Lots of tests are failing now. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
732905360 | MDU6SXNzdWU3MzI5MDUzNjA= | 1067 | Table actions menu on view pages, not on query pages | simonw 9599 | closed | 0 | 0.51 6026070 | 6 | 2020-10-30T05:56:39Z | 2020-10-31T17:51:31Z | 2020-10-31T17:40:14Z | OWNER | Follow-on from #1066. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
733303548 | MDExOlB1bGxSZXF1ZXN0NTEzMTA2MDI2 | 1069 | load_template() plugin hook | simonw 9599 | closed | 0 | 0.51 6026070 | 6 | 2020-10-30T15:59:45Z | 2020-10-30T17:47:20Z | 2020-10-30T17:47:19Z | OWNER | simonw/datasette/pulls/1069 | Refs #1042 |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||
729096595 | MDU6SXNzdWU3MjkwOTY1OTU= | 1051 | Better display of binary data on arbitrary query results page | simonw 9599 | closed | 0 | 6 | 2020-10-25T19:38:06Z | 2020-10-29T22:12:16Z | 2020-10-29T22:01:39Z | OWNER | Problem: if these were larger fields that HTML page could have multiple megabytes of Python binary string representations on it. It should behave more like the regular table view does: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
718723543 | MDU6SXNzdWU3MTg3MjM1NDM= | 1014 | Add Link: pagination HTTP headers | simonw 9599 | closed | 0 | 0.51 6026070 | 6 | 2020-10-10T23:42:40Z | 2020-10-23T19:44:05Z | 2020-10-11T00:18:51Z | OWNER | Spun off from #782. These can go on all of the JSON endpoints that support pagination. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
710506708 | MDU6SXNzdWU3MTA1MDY3MDg= | 978 | Rendering glitch with column headings on mobile | simonw 9599 | closed | 0 | Datasette 0.50 5971510 | 6 | 2020-09-28T19:04:45Z | 2020-10-08T23:54:40Z | 2020-09-28T22:43:01Z | OWNER | datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
314506446 | MDU6SXNzdWUzMTQ1MDY0NDY= | 214 | Ability for plugins to define extra JavaScript and CSS | simonw 9599 | closed | 0 | 6 | 2018-04-16T05:29:34Z | 2020-09-30T20:36:11Z | 2018-04-18T03:13:03Z | OWNER | This can hook in to the existing The plugins should be able to bundle their own assets though, so it will also have to integrate with the Refs #14 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
691265198 | MDU6SXNzdWU2OTEyNjUxOTg= | 7 | Mechanism for differentiating between "by me" and "liked by me" | simonw 9599 | closed | 0 | 6 | 2020-09-02T17:44:37Z | 2020-09-02T21:06:28Z | 2020-09-02T21:06:28Z | MEMBER | Some of the content I'm indexing is by me - photos I've taken, tweets I wrote, commits, comments I posted. Some of it is stuff that I've "liked" or "bookmarked" in some way - favourited tweets, Pocket articles, starred GitHub repos. It would be useful to be able to differentiate between the two. |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
679779797 | MDU6SXNzdWU2Nzk3Nzk3OTc= | 939 | extra_ plugin hooks should take the same arguments | simonw 9599 | closed | 0 | 6 | 2020-08-16T16:04:54Z | 2020-08-16T18:25:05Z | 2020-08-16T16:50:29Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/938#issuecomment-674544691 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
677326155 | MDU6SXNzdWU2NzczMjYxNTU= | 930 | Datasette sdist is missing templates (hence broken when installing from Homebrew) | simonw 9599 | closed | 0 | 6 | 2020-08-12T02:20:16Z | 2020-08-12T03:30:59Z | 2020-08-12T03:30:59Z | OWNER | Pretty nasty bug this: I'm getting 500 errors for all pages that try to render a template after installing the newly released Datasette 0.47 - both from |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
644309017 | MDU6SXNzdWU2NDQzMDkwMTc= | 864 | datasette.add_message() doesn't work inside plugins | simonw 9599 | closed | 0 | Datasette 0.45 5533512 | 6 | 2020-06-24T04:30:06Z | 2020-06-29T00:51:01Z | 2020-06-29T00:51:01Z | OWNER | Similar problem to #863 - calling |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
637342551 | MDU6SXNzdWU2MzczNDI1NTE= | 834 | startup() plugin hook | simonw 9599 | closed | 0 | Datasette 0.45 5533512 | 6 | 2020-06-11T21:48:14Z | 2020-06-28T19:38:50Z | 2020-06-13T17:56:12Z | OWNER | It might be useful to have an My initial use-case for this is configuration verification - checking that the I imagine there are plenty of other potential uses for this as well. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
529429214 | MDU6SXNzdWU1Mjk0MjkyMTQ= | 642 | Provide a cookiecutter template for creating new plugins | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 6 | 2019-11-27T15:46:36Z | 2020-06-20T03:20:33Z | 2020-06-20T03:20:25Z | OWNER | See this conversation: https://twitter.com/psychemedia/status/1199707352540368896 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
638241779 | MDU6SXNzdWU2MzgyNDE3Nzk= | 846 | "Too many open files" error running tests | simonw 9599 | closed | 0 | 6 | 2020-06-13T22:11:40Z | 2020-06-14T00:26:31Z | 2020-06-14T00:26:31Z | OWNER | I got this on my laptop:
```
pytest ...
/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.7/site-packages/jinja2/loaders.py:171: in get_source
    f = open_if_exists(filename)
filename = '/Users/simon/Dropbox/Development/datasette/datasette/templates/400.html', mode = 'rb'

/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.7/site-packages/jinja2/utils.py:154: OSError
```
Based on the conversation in https://github.com/pytest-dev/pytest/issues/2970 I'm worried that my tests are opening too many files without closing them. In particular... I call Could this be resulting in my tests eventually opening too many unclosed file handles? How could I confirm this? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
631932926 | MDU6SXNzdWU2MzE5MzI5MjY= | 801 | allow_by_query setting for configuring permissions with a SQL statement | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 6 | 2020-06-05T20:30:19Z | 2020-06-11T18:58:56Z | 2020-06-11T18:58:49Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/698#issuecomment-639787304 See also #800 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
636426530 | MDU6SXNzdWU2MzY0MjY1MzA= | 829 | Ability to set ds_actor cookie such that it expires | simonw 9599 | closed | 0 | Datasette 0.44 5512395 | 6 | 2020-06-10T17:31:40Z | 2020-06-10T19:41:35Z | 2020-06-10T19:40:05Z | OWNER | I need this for |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
632673972 | MDU6SXNzdWU2MzI2NzM5NzI= | 804 | python tests/fixtures.py command has a bug | simonw 9599 | closed | 0 | Datasette 0.44 5512395 | 6 | 2020-06-06T19:17:36Z | 2020-06-09T20:01:30Z | 2020-06-09T19:58:34Z | OWNER | This command is meant to write out |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
635147716 | MDU6SXNzdWU2MzUxNDc3MTY= | 825 | Way to enable a default=False permission for anonymous users | simonw 9599 | closed | 0 | Datasette 0.44 5512395 | 6 | 2020-06-09T06:26:27Z | 2020-06-09T17:19:19Z | 2020-06-09T17:01:10Z | OWNER | I'd like plugins to be able to ship with a default that says "anonymous users cannot do this", but allow site administrators to over-ride that such that anonymous users can use the feature after all. This is tricky because right now the anonymous user doesn't have an actor dictionary at all, so there's no key to match to an allow block. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
634139848 | MDU6SXNzdWU2MzQxMzk4NDg= | 813 | Mechanism for specifying allow_sql permission in metadata.json | simonw 9599 | closed | 0 | Datasette 0.44 5512395 | 6 | 2020-06-08T04:57:19Z | 2020-06-09T00:09:57Z | 2020-06-09T00:07:19Z | OWNER | Split from #811. It would be useful if finely-grained permissions configured in We have a permission check call for this already: https://github.com/simonw/datasette/blob/9397d718345c4b35d2a5c55bfcbd1468876b5ab9/datasette/views/database.py#L159 But there's currently no way to implement this check without writing a plugin. I think a |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
585633142 | MDU6SXNzdWU1ODU2MzMxNDI= | 706 | Documentation for the "request" object | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 6 | 2020-03-22T02:55:50Z | 2020-05-30T13:20:00Z | 2020-05-27T22:31:22Z | OWNER | Since that object is passed to the I could also start passing it to the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
613755043 | MDU6SXNzdWU2MTM3NTUwNDM= | 110 | Support decimal.Decimal type | dvhthomas 134771 | closed | 0 | 6 | 2020-05-07T03:57:19Z | 2020-05-11T01:58:20Z | 2020-05-11T01:50:11Z | NONE | Decimal types in Postgres cause a failure in db.py data type selection
I have a Django app using a MoneyField, which uses a
Looking at From the SQLite docs it looks like DECIMAL in other DBs are considered numeric. I'm not quite sure if it's as simple as adding a data type to that list or if there are repercussions beyond it. Thanks for a great tool! |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
610408908 | MDU6SXNzdWU2MTA0MDg5MDg= | 34 | Command for retrieving dependents for a repo | simonw 9599 | closed | 0 | 6 | 2020-04-30T21:47:51Z | 2020-05-03T15:53:01Z | 2020-05-03T15:53:01Z | MEMBER | I really, really want to start grabbing this data: https://github.com/simonw/datasette/network/dependents |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/34/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
573583971 | MDU6SXNzdWU1NzM1ODM5NzE= | 689 | "Templates considered" comment broken in >=0.35 | chrishas35 35075 | closed | 0 | 6 | 2020-03-01T17:31:21Z | 2020-04-05T19:39:44Z | 2020-04-05T19:39:44Z | NONE | Noticed that the "Templates Considered" comment is missing in 0.37. Believe I traced it back to #664 as you can see it in https://v0-34.datasette.io/ but not https://v0-35.datasette.io/. Looking at the template context debug between the two you can see what is missing from 0.35 vs. 0.34:
```diff
< "datasette_version": "0.34",
< "app_css_hash": "ffa51a",
< "select_templates": [
<     "*index.html"
< ],
< "zip": "<class 'zip'>",
< "body_scripts": [],
< "extra_css_urls": "<generator object BaseView._asset_urls at 0x7f6529ac05f0>",
< "extra_js_urls": "<generator object BaseView._asset_urls at 0x7f6529ac0660>",
< "format_bytes": "<function format_bytes at 0x7f652a1588b0>",
< "database_url": "<bound method BaseView.database_url of <datasette.views.index.IndexView object at 0x7f6529b03e50>>",
< "database_color": "<bound method BaseView.database_color of <datasette.views.index.IndexView object at 0x7f6529b03e50>>"
```
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
592829135 | MDU6SXNzdWU1OTI4MjkxMzU= | 713 | Support YAML in metadata - metadata.yaml | simonw 9599 | closed | 0 | 6 | 2020-04-02T18:10:05Z | 2020-04-02T19:36:17Z | 2020-04-02T19:30:55Z | OWNER | I was originally going to do this with a plugin - see #357 - but the more I work with The best example why is still this one: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/master/russian-ads-metadata.yaml YAML is just SO much better than JSON for multi-line strings - in particular HTML and SQL, both of which are common in |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
545407916 | MDU6SXNzdWU1NDU0MDc5MTY= | 73 | upsert_all() throws issue when upserting to empty table | psychemedia 82988 | closed | 0 | 6 | 2020-01-05T11:58:57Z | 2020-01-31T14:21:09Z | 2020-01-05T17:20:18Z | NONE | If I try to add a list of
```python
import sqlite3
from sqlite_utils import Database
import pandas as pd

conx = sqlite3.connect(':memory')
cx = conx.cursor()
cx.executescript('CREATE TABLE "test" ("Col1" TEXT);')

q="SELECT * FROM test;"
pd.read_sql(q, conx) #shows empty table

db = Database(conx)
db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}])

TypeError                                 Traceback (most recent call last)
<ipython-input-74-8c26d93d7587> in <module>
      1 db = Database(conx)
----> 2 db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}])

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)
   1157             alter=alter,
   1158             extracts=extracts,
-> 1159             upsert=True,
   1160         )
   1161

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)
   1040                 sql = "INSERT OR IGNORE INTO [{table}]({pks}) VALUES({pk_placeholders});".format(
   1041                     table=self.name,
-> 1042                     pks=", ".join(["[{}]".format(p) for p in pks]),
   1043                     pk_placeholders=", ".join(["?" for p in pks]),
   1044                 )

TypeError: 'NoneType' object is not iterable
```
A hacky workaround in use is:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
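For reference, the call in #73 above succeeds once `upsert_all()` is told which column identifies a record; a minimal sketch of the working pattern (column names invented):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
# Letting upsert_all() create the table with an explicit primary key avoids
# the pk=None code path behind the TypeError above
db["test"].upsert_all(
    [{"Col1": "a", "Score": 1}, {"Col1": "b", "Score": 2}],
    pk="Col1",
)
# Running an upsert again updates rather than duplicates the matching row
db["test"].upsert_all([{"Col1": "a", "Score": 10}], pk="Col1")
print(list(db.query("select * from test order by Col1")))
# [{'Col1': 'a', 'Score': 10}, {'Col1': 'b', 'Score': 2}]
```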
513008936 | MDU6SXNzdWU1MTMwMDg5MzY= | 608 | Improve UI of "datasette publish cloudrun" to reduce chances of accidentally over-writing a service | simonw 9599 | closed | 0 | 6 | 2019-10-27T19:21:28Z | 2019-11-08T02:51:36Z | 2019-11-08T02:48:46Z | OWNER | The concept of a "service" in Cloud Run is crucial: if you deploy to the same service, you will over-write what you deployed there last! As such, I'd like to make service a required positional argument for
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
512996469 | MDU6SXNzdWU1MTI5OTY0Njk= | 607 | Ways to improve fuzzy search speed on larger data sets? | zeluspudding 8431341 | closed | 0 | 6 | 2019-10-27T17:31:37Z | 2019-11-07T03:38:10Z | 2019-11-07T03:38:10Z | NONE | I have an sqlite table with 16 million rows in it. Having read @simonw's article "Fast Autocomplete Search for Your Website" I was curious to try datasette to see what kind of query performance I could get out of it. In truth I don't need to do full text search since all I would like to do is give my users a way to search for the names of investors such as "Warren Buffet", or "Tim Cook" (whose names are in a single column). On the first search, Datasette takes over 20 seconds to return all records associated with If I rerun the same search, it then takes almost 9 seconds: That's far too slow to implement an autocomplete feature. I could reduce the latency by making a special table of only unique investor names, thereby reducing the search space to less than a million rows (then I'd need to implement a way to add only new investor names to the table as I received new data.. about 4,000 rows a day). If I did that, I'm still concerned the new table wouldn't be lean enough to look up investor names quickly. Plus, even if I can implement the autocomplete feature, I would still finally have to look up records for that investor which would take between 8 - 20 seconds. Are there any tricks for speeding this up? Here's my hardware: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
488833975 | MDU6SXNzdWU0ODg4MzM5NzU= | 3 | Command for running a search and saving tweets for that search | simonw 9599 | closed | 0 | 6 | 2019-09-03T21:29:56Z | 2019-11-04T05:31:56Z | 2019-11-04T05:31:16Z | MEMBER |
|
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/3/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
459621683 | MDU6SXNzdWU0NTk2MjE2ODM= | 521 | Easier way of creating custom row templates | simonw 9599 | closed | 0 | 6 | 2019-06-23T21:49:27Z | 2019-07-03T03:23:56Z | 2019-07-03T03:23:56Z | OWNER | I was messing around with a custom
```
{% for cell in row %}
  {% if cell.column == "First_Name" %}
    {{ cell.value }}
  {% elif cell.column == "Last_Name" %}
    {{ cell.value }}
  {% elif cell.column == "Short_Description" %}
    {{ cell.column }}: {{ cell.value }}
  {% else %}
    {{ cell.column }}: {{ cell.value }}
  {% endif %}
{% endfor %}
{% endfor %}
```

```
{{ row["First_Name"] }} {{ row["Last_Name"] }}...
```
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
449818897 | MDU6SXNzdWU0NDk4MTg4OTc= | 24 | Additional Column Constraints? | IgnoredAmbience 98555 | closed | 0 | 6 | 2019-05-29T13:47:03Z | 2019-06-13T06:47:17Z | 2019-06-13T06:30:26Z | NONE | I'm looking to import data from XML with a pre-defined schema that maps fairly closely to a relational database. In particular, it has explicit annotations for when fields are required, optional, or when a default value should be inferred. Would there be value in adding the ability to define |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/24/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
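For reference, current sqlite-utils releases expose `not_null=` and `defaults=` parameters on `table.create()`, which covers the required-field and default-value cases the issue above is asking about. A minimal sketch, with invented table and column names:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["records"].create(
    {"id": int, "title": str, "status": str, "score": int},
    pk="id",
    not_null={"title"},                       # required field from the schema
    defaults={"status": "new", "score": 0},   # values to infer when omitted
)

# Shows the generated CREATE TABLE with NOT NULL and DEFAULT clauses.
print(db["records"].schema)
```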
349827640 | MDU6SXNzdWUzNDk4Mjc2NDA= | 359 | Faceted browse against a JSON list of tags | simonw 9599 | closed | 0 | 6 | 2018-08-12T17:01:14Z | 2019-05-29T21:39:12Z | 2019-05-03T00:21:44Z | OWNER | If a table has a
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
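Under the hood, faceting a JSON list column comes down to SQLite's JSON1 `json_each()` table-valued function. A hedged sketch of that query shape: the posts/tags names are invented, and it assumes the bundled SQLite includes JSON1 (true for modern Python builds).

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, tags TEXT)")
conn.executemany(
    "INSERT INTO posts (title, tags) VALUES (?, ?)",
    [
        ("First", json.dumps(["python", "sqlite"])),
        ("Second", json.dumps(["sqlite", "datasette"])),
        ("Third", json.dumps(["datasette"])),
    ],
)

# json_each() expands each JSON list into one row per element, so grouping by
# the element value produces facet counts for individual tags.
for tag, count in conn.execute(
    """
    SELECT j.value AS tag, count(*) AS n
    FROM posts, json_each(posts.tags) AS j
    GROUP BY j.value
    ORDER BY n DESC
    """
):
    print(tag, count)
```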
322787470 | MDU6SXNzdWUzMjI3ODc0NzA= | 259 | inspect() should detect many-to-many relationships | simonw 9599 | closed | 0 | 6 | 2018-05-14T12:03:58Z | 2019-05-23T03:55:37Z | 2019-05-23T03:55:37Z | OWNER | Relates to #255 - in particular supporting facets across M2M relationships. It should be possible for `inspect()` to detect these many-to-many relationships (see the sketch after this row). When rendering a table with an m2m relationship we could display the first X associated records as a comma-separated list of hyperlinks in a new column on the table view, with a column name derived from the table on the other side. Since SQLite doesn't have RANK or an equivalent of https://www.xaprb.com/blog/2006/12/02/how-to-number-rows-in-mysql/ this would be implemented as N+1 queries (one query per cell for which we want to display an m2m summary). This should be OK in SQLite: https://sqlite.org/np1queryprob.html |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
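A rough sketch of the detection heuristic hinted at above (my own illustration, not Datasette's eventual implementation): treat any table whose schema declares exactly two foreign keys pointing at two different tables as a candidate junction/m2m table.

```python
import sqlite3


def candidate_m2m_tables(conn: sqlite3.Connection):
    tables = [
        row[0]
        for row in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    ]
    for table in tables:
        fks = conn.execute(f"PRAGMA foreign_key_list([{table}])").fetchall()
        referenced = {fk[2] for fk in fks}  # column 2 is the referenced table
        if len(fks) == 2 and len(referenced) == 2:
            yield table, sorted(referenced)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE actors (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE films (id INTEGER PRIMARY KEY, title TEXT);
        CREATE TABLE cast_members (
            actor_id INTEGER REFERENCES actors(id),
            film_id INTEGER REFERENCES films(id),
            PRIMARY KEY (actor_id, film_id)
        );
        """
    )
    print(list(candidate_m2m_tables(conn)))  # [('cast_members', ['actors', 'films'])]
```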
310533258 | MDU6SXNzdWUzMTA1MzMyNTg= | 191 | Figure out how to bundle a more up-to-date SQLite | simonw 9599 | closed | 0 | 6 | 2018-04-02T16:33:25Z | 2018-07-10T17:46:13Z | 2018-07-10T17:46:13Z | OWNER | The version of SQLite that ships with Python 3 is a bit limited - it doesn't support row values, for example (https://www.sqlite.org/rowvalue.html). Figure out how to bundle a more recent SQLite engine with Datasette. We need to figure out two cases:
I want it working on Mac OS X too, because I don't want to force Docker as a dependency for anyone who just wants to hack around with Datasette a little and run the test suite. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
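One way to get a newer SQLite into a Python process, sketched here as an illustration rather than the approach the issue settled on, is to prefer the pysqlite3-binary package (which bundles its own recent SQLite) and fall back to the standard library module when it is not installed.

```python
import sqlite3

try:
    # Assumes `pip install pysqlite3-binary`; the module mirrors sqlite3's API.
    import pysqlite3 as sqlite3  # type: ignore
except ImportError:
    pass  # fall back to the sqlite3 that ships with Python

print("SQLite version:", sqlite3.sqlite_version)

# Row values (the feature mentioned above) need SQLite 3.15 or newer:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 2), (3, 4)")
print(conn.execute("SELECT * FROM t WHERE (a, b) = (3, 4)").fetchall())  # [(3, 4)]
```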
333086005 | MDU6SXNzdWUzMzMwODYwMDU= | 313 | Deploy demo of Datasette on every commit that passes tests | simonw 9599 | closed | 0 | 6 | 2018-06-17T19:19:12Z | 2018-06-17T21:52:58Z | 2018-06-17T21:52:58Z | OWNER | We can use Travis CI and Zeit Now to ensure there is always a live demo of current master. We can ship archived demos for releases as well. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
314455877 | MDExOlB1bGxSZXF1ZXN0MTgxNzIzMzAz | 209 | Don't duplicate simple primary keys in the link column | russss 45057 | closed | 0 | 6 | 2018-04-15T21:56:15Z | 2018-04-18T08:40:37Z | 2018-04-18T01:13:04Z | CONTRIBUTOR | simonw/datasette/pulls/209 | When there's a simple (single-column) primary key, it looks weird to duplicate it in the link column. This change removes the second PK column and treats the link column as if it were the PK column from a header/sorting perspective. This might make it a bit more difficult to tell what the link for the row is, I'm not sure yet. I feel like the alternative is to change the link column to just have the text "view" or something, instead of repeating the PK. (I doubt it makes much more sense with compound PKs.) Bonus change in this PR: fix urlencoding of links in the displayed HTML. Before: After: |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
314471743 | MDU6SXNzdWUzMTQ0NzE3NDM= | 211 | Load plugins from a `--plugins-dir=plugins/` directory | simonw 9599 | closed | 0 | 6 | 2018-04-16T01:17:43Z | 2018-04-16T05:22:02Z | 2018-04-16T05:22:02Z | OWNER | In #14 and 33c7c53ff87c2 I've added working support for setuptools entry_points plugins. These can be installed from PyPI using I imagine some projects will benefit from being able to add plugins without first publishing them to PyPI. Datasette already supports loading custom templates like so:
I propose an additional option,
This will also need to be supported by |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
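For context, a one-off plugin of the kind `--plugins-dir=` is meant to load is just a Python file implementing a plugin hook. A minimal sketch using the `prepare_connection` hook; the file name, database name, and SQL function are placeholders.

```python
# plugins/hello_world.py
# Drop this file into plugins/ and run:
#   datasette mydata.db --plugins-dir=plugins/
from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Registers a custom SQL function, usable as: select hello_world()
    conn.create_function("hello_world", 0, lambda: "hello world")
```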
280315352 | MDU6SXNzdWUyODAzMTUzNTI= | 167 | Nasty bug: last column not being correctly displayed | simonw 9599 | closed | 0 | Custom templates edition 2949431 | 6 | 2017-12-07T23:23:46Z | 2017-12-10T01:00:21Z | 2017-12-10T01:00:20Z | OWNER | e.g. https://datasette-bwnojrhmmg.now.sh/dk3-bde9a9a/dk?source__contains=http The JSON output shows that the column is there, but is being displayed incorrectly: https://datasette-bwnojrhmmg.now.sh/dk3-bde9a9a/dk.jsono?source__contains=http |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
274284246 | MDExOlB1bGxSZXF1ZXN0MTUyODcwMDMw | 104 | [WIP] Add publish to heroku support | jacobian 21148 | closed | 0 | 6 | 2017-11-15T19:56:22Z | 2017-11-21T20:55:05Z | 2017-11-21T20:55:05Z | CONTRIBUTOR | simonw/datasette/pulls/104 | Refs #90 |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||
273678673 | MDU6SXNzdWUyNzM2Nzg2NzM= | 85 | Detect foreign keys and use them to link HTML pages together | simonw 9599 | closed | 0 | Foreign key edition 2919870 | 6 | 2017-11-14T06:12:05Z | 2017-11-19T06:08:19Z | 2017-11-19T06:08:19Z | OWNER | https://stackoverflow.com/a/44430157/6083 documents the PRAGMA needed to extract foreign key references for a table. At a minimum we can link column values known to be foreign keys to the corresponding row page. We could try to summarize the linked row in some way too - somehow extracting a sensible link title, maybe based on additional configuration in the metadata.json file. Still todo:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/85/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
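A small illustration of the PRAGMA mentioned above, not Datasette's actual code: extract a column-to-referenced-table map for a table, then turn foreign key cell values into links following the /database/table/pk URL shape. The authors/books schema and the `mydb` database name are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (
        id INTEGER PRIMARY KEY,
        title TEXT,
        author_id INTEGER REFERENCES authors(id)
    );
    INSERT INTO authors VALUES (1, 'Ursula K. Le Guin');
    INSERT INTO books VALUES (1, 'The Dispossessed', 1);
    """
)

# PRAGMA foreign_key_list rows look like:
# (id, seq, referenced_table, from_column, to_column, on_update, on_delete, match)
fk_map = {
    row[3]: (row[2], row[4] or "rowid")
    for row in conn.execute("PRAGMA foreign_key_list(books)")
}
print(fk_map)  # {'author_id': ('authors', 'id')}

# Any value in a foreign key column can now be rendered as a link to the
# corresponding row page in the referenced table.
for _book_id, title, author_id in conn.execute("SELECT id, title, author_id FROM books"):
    other_table, _other_column = fk_map["author_id"]
    print(f"{title} -> /mydb/{other_table}/{author_id}")
```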
267726219 | MDU6SXNzdWUyNjc3MjYyMTk= | 16 | Default HTML/CSS needs to look reasonable and be responsive | simonw 9599 | closed | 0 | Ship first public release 2857392 | 6 | 2017-10-23T16:05:22Z | 2017-11-11T20:19:07Z | 2017-11-11T20:19:07Z | OWNER | Version one should have the following characteristics:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
267788884 | MDU6SXNzdWUyNjc3ODg4ODQ= | 23 | Support Django-style filters in querystring arguments | simonw 9599 | closed | 0 | Ship first public release 2857392 | 6 | 2017-10-23T19:29:42Z | 2017-10-25T04:23:03Z | 2017-10-25T04:23:02Z | OWNER | e.g
Same format as Django: double underscore as the split. If you need to match against a column that happens to contain a double underscore in its official name, do this:
__exact is the default operation if none is supplied. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/23/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
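A simplified sketch of the querystring-to-SQL mapping described above. The operator table here is illustrative and far smaller than what Datasette actually supports; values are always passed as bound parameters rather than interpolated into the SQL.

```python
OPERATORS = {
    "exact": ("[{col}] = ?", lambda v: v),
    "contains": ("[{col}] LIKE ?", lambda v: f"%{v}%"),
    "startswith": ("[{col}] LIKE ?", lambda v: f"{v}%"),
    "gt": ("[{col}] > ?", lambda v: v),
    "lt": ("[{col}] < ?", lambda v: v),
}


def build_where(args):
    """Turn {'name__contains': 'simon', 'age__gt': '25'} into a WHERE clause plus params."""
    clauses, params = [], []
    for key, value in args.items():
        column, _, op = key.rpartition("__")
        if not column:  # no double underscore present: __exact is the default
            column, op = key, "exact"
        template, convert = OPERATORS[op]
        clauses.append(template.format(col=column))
        params.append(convert(value))
    return " AND ".join(clauses), params


print(build_where({"name__contains": "simon", "age__gt": "25"}))
# ('[name] LIKE ? AND [age] > ?', ['%simon%', '25'])
```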
267513424 | MDU6SXNzdWUyNjc1MTM0MjQ= | 1 | Addressable pages for every row in a table | simonw 9599 | closed | 0 | Ship first public release 2857392 | 6 | 2017-10-23T00:44:16Z | 2017-10-24T14:11:04Z | 2017-10-24T14:11:03Z | OWNER |
The tricky part will be figuring out what the primary key is - especially since it could be a compound primary key and it might involve different data types. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed |
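One way to make compound primary keys addressable is to percent-encode each component and join the pieces with commas. This is my own illustration of the idea, not necessarily the escaping scheme Datasette shipped; the `/mydb/sales/` URL is a placeholder.

```python
from urllib.parse import quote, unquote


def pk_to_path(values):
    # Percent-encode each part so commas inside values survive, then join.
    return ",".join(quote(str(v), safe="") for v in values)


def path_to_pk(segment):
    return [unquote(part) for part in segment.split(",")]


compound = ("US,East", 2023)
segment = pk_to_path(compound)
print(f"/mydb/sales/{segment}")  # /mydb/sales/US%2CEast,2023
print(path_to_pk(segment))       # ['US,East', '2023']
```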
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);