github
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1386562662 | I_kwDOCGYnMM5SpURm | 493 | Tiny typographical error in install/uninstall docs | 9599 | open | 0 | 3 | 2022-09-26T19:00:42Z | 2022-10-25T21:31:15Z | OWNER | Added in: - #483 I don't know how to fix this in Sphinx: I'm getting this: https://sqlite-utils.datasette.io/en/latest/cli.html#cli-install > The [insert –convert](https://sqlite-utils.datasette.io/en/latest/cli.html#cli-insert-convert) and [query –functions](https://sqlite-utils.datasette.io/en/latest/cli.html#cli-query-functions) options <img width="849" alt="image" src="https://user-images.githubusercontent.com/9599/192358225-4fae509e-9fa8-4e8d-91d4-48aa1b79225e.png"> But I want it to display `insert --convert` and not `insert –convert` there. Here's the code: https://github.com/simonw/sqlite-utils/blob/85247038f70d7eb2f3e272cfeaa4c44459cafba8/docs/cli.rst#L2125 | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
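The en-dash almost certainly comes from Sphinx's smartquotes transform, which "educates" `--` into `–` in rendered text. A minimal sketch of two possible `conf.py` fixes, assuming that transform is the cause (not confirmed as the fix the project actually shipped):

```python
# docs/conf.py
# The default smartquotes_action is "qDe"; dropping the "D" keeps smart
# quotes and ellipses but stops -- being converted into an en-dash.
smartquotes_action = "qe"

# Heavier-handed alternative: disable the transform entirely.
# smartquotes = False
```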
1197926598 | I_kwDOBm6k_c5HZujG | 1705 | How to upgrade your plugin for 1.0 documentation | 9599 | open | 0 | 8755003 | 1 | 2022-04-08T23:16:47Z | 2022-12-13T05:29:05Z | OWNER | Among other things, needed by: - #1704 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
797097140 | MDU6SXNzdWU3OTcwOTcxNDA= | 60 | Use Data from SQLite in other commands | 22578954 | open | 0 | 3 | 2021-01-29T18:35:52Z | 2021-02-12T18:29:43Z | CONTRIBUTOR | As a total beginner here, how could you access data from the SQLite table to run other commands? What I am thinking is: I want to get all the repos in an organization, then use the repo list to pull all the commit messages for each repo. I love this project by the way! | 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
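A minimal sketch of one way to do this, assuming the `repos` table created by `github-to-sqlite repos` (with its `full_name` column), looping each repo into the `commits` command:

```python
import sqlite3
import subprocess

db = sqlite3.connect("github.db")
for (full_name,) in db.execute("select full_name from repos"):
    # github-to-sqlite commits fetches commits (and their messages)
    # for a single repository into the same database
    subprocess.run(
        ["github-to-sqlite", "commits", "github.db", full_name], check=True
    )
```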
503243784 | MDU6SXNzdWU1MDMyNDM3ODQ= | 3 | Extract images into separate tables | 9599 | open | 0 | 1 | 2019-10-07T05:43:01Z | 2020-09-01T06:17:45Z | MEMBER | As already done with authors. Slightly harder because images do not have a universally unique ID. Also need to figure out what to do about there being columns for both `image` and `images`. <img width="1522" alt="memory__items" src="https://user-images.githubusercontent.com/9599/66287418-9ab20680-e88a-11e9-96bf-6c80d881eff0.png"> | 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/3/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
924203783 | MDU6SXNzdWU5MjQyMDM3ODM= | 1379 | Idea: ?_end=1 option for streaming CSV responses | 9599 | open | 0 | 0 | 2021-06-17T18:11:21Z | 2021-06-17T18:11:30Z | OWNER | As discussed in this thread: https://twitter.com/simonw/status/1405554676993433605 - one of the disadvantages of Datasette's streaming CSV feature is that it's hard to tell if you got the whole file or if the connection ended early - or if an error occurred. Idea: offer an optional `?_end=1` parameter which, if enabled, adds a single row to the end of the CSV file that looks like this: `END,,,,,,,,,` For however many columns the CSV file usually has. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1197925865 | I_kwDOBm6k_c5HZuXp | 1704 | File PRs against incompatible plugins pinning to datasette<1.0 | 9599 | open | 0 | 3268330 | 0 | 2022-04-08T23:15:30Z | 2022-04-08T23:15:30Z | OWNER | As part of the preparation for the 1.0 release, test all existing known plugins against the alpha. For any that break, submit a PR suggesting they pin to a version <1.0 - and include a link to the documentation on how to upgrade the plugin for 1.0. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
944846776 | MDU6SXNzdWU5NDQ4NDY3NzY= | 297 | Option for importing CSV data using the SQLite .import mechanism | 9599 | open | 0 | 23 | 2021-07-14T22:36:41Z | 2023-09-22T20:49:52Z | OWNER | As seen in https://til.simonwillison.net/sqlite/import-csv - `.mode csv` and then `.import school.csv schools` is hugely faster than importing via `sqlite-utils insert` and doing the work in Python - but it can only be implemented by shelling out to the `sqlite3` CLI tool; it's not functionality that is exposed to the Python `sqlite3` module. An option to use this would be useful - maybe something like this: `sqlite-utils insert blah.db blah blah.csv --fast` | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
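A sketch of what a `--fast` option might do internally, assuming it shells out to the `sqlite3` CLI (the dot commands are standard CLI features; the function name here is illustrative):

```python
import subprocess

def fast_csv_import(db_path, table, csv_path):
    # .import only exists in the sqlite3 CLI, not Python's sqlite3 module
    subprocess.run(
        ["sqlite3", db_path, ".mode csv", f".import {csv_path} {table}"],
        check=True,
    )

fast_csv_import("blah.db", "blah", "blah.csv")
```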
695556681 | MDU6SXNzdWU2OTU1NTY2ODE= | 19 | Figure out incremental re-indexing | 9599 | open | 0 | 2 | 2020-09-08T05:23:31Z | 2020-09-08T05:27:07Z | MEMBER | As tables get bigger reindexing everything on a schedule (essentially recreating the entire index from scratch) will start to become a performance bottleneck. | 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/19/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1531991339 | I_kwDOBm6k_c5bUFUr | 1989 | Suggestion: Hiding columns | 116795 | open | 0 | 3 | 2023-01-13T09:33:32Z | 2023-03-31T06:18:05Z | NONE | As there's the possibility of [hiding tables](https://docs.datasette.io/en/stable/metadata.html#hiding-tables) - I've run into the **need to hide specific columns** - data that's either not relevant for the public or can't be shown due to privacy reasons. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1989/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1174708375 | I_kwDOBm6k_c5GBKCX | 1673 | Streaming CSV spends a lot of time in `table_column_details` | 9599 | open | 0 | 1 | 2022-03-20T22:25:28Z | 2022-03-20T22:34:06Z | OWNER | At least I think it does. I tried running `py-spy top -p $PID` against a Datasette process that was trying to do: datasette covid.db --get '/covid/ny_times_us_counties.csv?_size=10&_stream=on' While investigating: - #1355 And spotted this: ``` datasette covid.db --get /covid/ny_times_us_counties.csv?_size=10&_stream=on' (python v3.10.2) Total Samples 5800 GIL: 71.00%, Active: 98.00%, Threads: 4 %Own %Total OwnTime TotalTime Function (filename:line) 8.00% 8.00% 4.32s 4.38s sql_operation_in_thread (datasette/database.py:212) 5.00% 5.00% 3.77s 3.93s table_column_details (datasette/utils/__init__.py:614) 6.00% 6.00% 3.72s 3.72s _worker (concurrent/futures/thread.py:81) 7.00% 7.00% 2.98s 2.98s _read_from_self (asyncio/selector_events.py:120) 5.00% 6.00% 2.35s 2.49s detect_fts (datasette/utils/__init__.py:571) 4.00% 4.00% 1.34s 1.34s _write_to_self (asyncio/selector_events.py:140) ``` Relevant code: https://github.com/simonw/datasette/blob/798f075ef9b98819fdb564f9f79c78975a0f71e8/datasette/utils/__init__.py#L609-L625 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
275159710 | MDU6SXNzdWUyNzUxNTk3MTA= | 128 | Every visualization should have an "embed" button | 9599 | open | 0 | 0 | 2017-11-19T13:38:13Z | 2019-05-13T18:33:51Z | OWNER | At least for the first round of visualizations, any time you construct one using the UI the result should include an "embed this" button that returns source code to copy and paste. These examples should use unpkg.com (or similar) urls with SRI hashes, eg https://www.srihash.org - and should load data from the datasette JSON API. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
2028698018 | I_kwDOBm6k_c5463mi | 2213 | feature request: gzip compression of database downloads | 536941 | open | 0 | 1 | 2023-12-06T14:35:03Z | 2023-12-06T15:05:46Z | CONTRIBUTOR | At the bottom of database pages, Datasette gives users the opportunity to download the underlying SQLite database. It would be great if that could be served gzip compressed. This is similar to #1213, but for me, I don't need Datasette to compress HTML and JSON because my CDN layer does it for me. However, Cloudflare, at least, will not compress a mimetype of "application" (see the list of mimetypes: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/) | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
599776345 | MDU6SXNzdWU1OTk3NzYzNDU= | 24 | Feature idea: github-to-sqlite everything ... | 9599 | open | 0 | 0 | 2020-04-14T18:34:00Z | 2020-04-14T18:34:00Z | MEMBER | At the moment if you want to pull all your repos, issues, issue comments etc you have to do it with a sequence of separate commands. Consider adding an `everything` or `all` command which fetches everything that the tool knows how to fetch, and is designed to be run on a cron in a way that fetches just new stuff each time. | 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/24/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
374953006 | MDU6SXNzdWUzNzQ5NTMwMDY= | 369 | Interface should show same JSON shape options for custom SQL queries | 416374 | open | 0 | 3268330 | 2 | 2018-10-29T10:39:15Z | 2020-05-30T17:24:06Z | CONTRIBUTOR | At the moment the page returning a custom SQL query shows the JSON and CSV APIs, but not the multiple JSON shapes. However, adding the `_shape` parameter to the JSON API URL manually still works, so perhaps there should be consistency in the interface by having the same "Advanced Export" box for custom SQL queries. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1237871948 | I_kwDOBm6k_c5JyG1M | 1743 | `datasette.utils.to_css_class()` should be a documented internal | 9599 | open | 0 | 0 | 2022-05-16T23:57:26Z | 2022-05-16T23:57:26Z | OWNER | Because I'm using it in this plugin: - https://github.com/simonw/datasette-upload-dbs/issues/1 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
462117311 | MDU6SXNzdWU0NjIxMTczMTE= | 531 | /database/-/inspect | 9599 | open | 0 | 1 | 2019-06-28T16:33:41Z | 2019-07-08T15:43:57Z | OWNER | Build `/database/-/inspect` which shows tables, columns, column types and foreign keys. It won't show table counts. Or maybe it will include them optionally but only for `-i` databases, in a special area of the JSON reserved for immutable-only inspect details. _Originally posted by @simonw in https://github.com/simonw/datasette/issues/465#issuecomment-506797086_ | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1504352503 | I_kwDOBm6k_c5Zqpj3 | 1968 | Allow to hide some queries in metadata.yml | 562352 | open | 0 | 0 | 2022-12-20T10:45:41Z | 2022-12-20T10:45:41Z | NONE | By default all queries are displayed. But there are many cases where it would be interesting to hide the queries by default: * the website is targeting non-tech people * the query is veeeeeery long ([eg.](https://mirabelle.openfoodfacts.org/products/energy_calculator)) * reading the query is not important for the users, they only want to see the result Of course, the user still could have the option to see the query. It could be an option in the metadata file: ```yml databases: awesome_db: tables: products: hide_sql: true queries: great_query: hide_sql: true sql: select * from products where code = :barcode ``` The priority could be: * no option in the metadata and nothing in the URL: query displayed * hide_sql in the metadata and nothing in the URL: query displayed as asked in the metadata * hide_sql in the metadata and &_hide_sql= in the URL: query as asked in the URL See also: #1824 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1605959201 | I_kwDOBm6k_c5fuP4h | 2032 | datasette errors when foreign key integrity is enabled | 193185 | open | 0 | 0 | 2023-03-02T01:27:51Z | 2023-03-02T01:31:58Z | CONTRIBUTOR | By default, [SQLite does not enforce foreign key constraints](https://www.sqlite.org/foreignkeys.html#fk_enable). I typically enable these checks by running: ```sql PRAGMA foreign_keys = ON; ``` inside of a `prepare_connection` hook. If a plugin causes the schema to change (eg datasette-scraper creating a new table, or datasette-edit-schema changing a column), then https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/utils/internal_db.py#L71-L77 will fail with: ``` FOREIGN KEY constraint failed ``` This could be resolved by either: - deleting from the `tables` column last - changing the schema so that the foreign keys have [ON DELETE CASCADE](https://www.sqlite.org/foreignkeys.html#fk_actions) Let me know if you'd be open to a PR that addresses this -- since foreign key constraints aren't enabled by default, I guess it's questionable whether this is a bug. I think I can workaround this by inspecting the database parameter in `prepare_connection` and trying not to enable fkey checks on the `_internal` database. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
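A sketch of the workaround the author describes, using the `prepare_connection` plugin hook; treating `_internal` specially mirrors their suggestion rather than a documented pattern:

```python
from datasette import hookimpl

@hookimpl
def prepare_connection(conn, database, datasette):
    # leave foreign key enforcement off for Datasette's internal database,
    # where schema changes would otherwise trip the constraint
    if database == "_internal":
        return
    conn.execute("PRAGMA foreign_keys = ON")
```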
534629631 | MDU6SXNzdWU1MzQ2Mjk2MzE= | 650 | Add a glossary to the documentation | 9599 | open | 0 | 3 | 2019-12-09T00:23:45Z | 2022-01-13T22:04:56Z | OWNER | Call it `glossary.rst` - it can use a definition list something like this: ```rst .. _glossary: Glossary ======== Term A definition of the term. Another term Another definition. ``` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
648435885 | MDU6SXNzdWU2NDg0MzU4ODU= | 878 | New pattern for views that return either JSON or HTML, available for plugins | 9599 | open | 0 | 3268330 | 26 | 2020-06-30T19:26:13Z | 2022-03-19T16:19:30Z | OWNER | Can be part of #870 - refactoring existing views to use `register_routes()`. > I'm going to put the new `check_permissions()` method on `BaseView` as well. If I want that method to be available to plugins I can do so by turning that `BaseView` class into a documented API that plugins are encouraged to use themselves. _Originally posted by @simonw in https://github.com/simonw/datasette/issues/832#issuecomment-651995453_ | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
961008507 | MDU6SXNzdWU5NjEwMDg1MDc= | 308 | Add an interactive tutorial as a Jupyter notebook | 9599 | open | 0 | 2 | 2021-08-04T20:34:22Z | 2021-08-04T21:30:59Z | OWNER | Can show people how to open this up in Binder. | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
734777631 | MDU6SXNzdWU3MzQ3Nzc2MzE= | 1080 | "View all" option for facets, to provide a (paginated) list of ALL of the facet counts plus a link to view them | 9599 | open | 0 | 3268330 | 7 | 2020-11-02T19:55:06Z | 2022-02-04T06:25:18Z | OWNER | Can use `/database/-/...` namespace from #296 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1536851861 | I_kwDOBm6k_c5bmn-V | 1994 | Stuck on loading screen | 10913053 | open | 0 | 1 | 2023-01-17T18:33:49Z | 2023-01-23T08:21:08Z | NONE | Can’t actually open it! Downloaded today from the releases tab Running macOS13.1 ``` bin/python3.9 --version Python 3.9.6 Took 83ms bin/python3.9 --version Python 3.9.6 Took 113ms bin/pip install datasette>=0.59 datasette-app-support>=0.11.6 datasette-vega>=0.6.2 datasette-cluster-map>=0.17.1 datasette-pretty-json>=0.2.1 datasette-edit-schema>=0.4 datasette-configure-fts>=1.1 datasette-leaflet>=0.2.2 --disable-pip-version-check Requirement already satisfied: datasette>=0.59 in lib/python3.9/site-packages (0.63) Requirement already satisfied: datasette-app-support>=0.11.6 in lib/python3.9/site-packages (0.11.6) Requirement already satisfied: datasette-vega>=0.6.2 in lib/python3.9/site-packages (0.6.2) Requirement already satisfied: datasette-cluster-map>=0.17.1 in lib/python3.9/site-packages (0.17.2) Requirement already satisfied: datasette-pretty-json>=0.2.1 in lib/python3.9/site-packages (0.2.2) Requirement already satisfied: datasette-edit-schema>=0.4 in lib/python3.9/site-packages (0.5.1) Requirement already satisfied: datasette-configure-fts>=1.1 in lib/python3.9/site-packages (1.1) Requirement already satisfied: datasette-leaflet>=0.2.2 in lib/python3.9/site-packages (0.2.2) Requirement already satisfied: click>=7.1.1 in lib/python3.9/site-packages (from datasette>=0.59) (8.1.3) Requirement already satisfied: hupper>=1.9 in lib/python3.9/site-packages (from datasette>=0.59) (1.10.3) Requirement already satisfied: pint>=0.9 in lib/python3.9/site-packages (from datasette>=0.59) (0.20.1) Requirement already satisfied: PyYAML>=5.3 in lib/python3.9/site-packages (from datasette>=0.59) (6.0) Requirement already satisfied: httpx>=0.20 in lib/python3.9/site-packages (from datasette>=0.59) (0.23.0) Requirement already satisfied: aiofiles>=0.4 in lib/python3.9/site-packages (from datasette>=0.59) (22.1.0) Requirement already satisfied: asgi-csrf>=0.9 in lib/python3.9/site-packages (from datasette>=0.59) (0.9) Requirement already satisfied: asgiref>=3.2.10 in lib/python3.9/site-packages… | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1994/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1469796454 | I_kwDOBm6k_c5Xm1Bm | 1920 | Document Datasette.metadata() method | 25778 | open | 0 | 0 | 2022-11-30T15:10:36Z | 2022-11-30T15:10:36Z | CONTRIBUTOR | Code is here: https://github.com/simonw/datasette/blob/main/datasette/app.py#L503 This will be the official way to access metadata from plugins. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
727848625 | MDU6SXNzdWU3Mjc4NDg2MjU= | 12 | Some workout columns should be float, not text | 9599 | open | 0 | 4 | 2020-10-23T02:47:02Z | 2022-06-23T04:35:02Z | MEMBER | Columns `duration`, `totalDistance` and `totalEnergyBurned` should be converted to float. https://github.com/dogsheep/healthkit-to-sqlite/blob/71e36e1cf034b96de2a8e6652265d782d3fdf63b/healthkit_to_sqlite/utils.py#L50-L57 | 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
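Pending that fix, a sketch of repairing the types on an existing database with the sqlite-utils Python API, assuming the table is named `workouts`; `transform()` rebuilds the table with the new column types:

```python
from sqlite_utils import Database

db = Database("healthkit.db")
db["workouts"].transform(
    types={
        "duration": float,
        "totalDistance": float,
        "totalEnergyBurned": float,
    }
)
```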
1122427321 | I_kwDOBm6k_c5C5uG5 | 1624 | Index page `/` has no CORS headers | 9599 | open | 0 | 2 | 2022-02-02T21:56:10Z | 2022-09-28T16:54:22Z | OWNER | Compare the following: ``` % curl -I 'https://latest.datasette.io/fixtures' HTTP/1.1 200 OK link: https://latest.datasette.io/fixtures.json; rel="alternate"; type="application/json+datasette" cache-control: max-age=5 referrer-policy: no-referrer access-control-allow-origin: * access-control-allow-headers: Authorization access-control-expose-headers: Link content-type: text/html; charset=utf-8 x-databases: _memory, _internal, fixtures, extra_database Date: Wed, 02 Feb 2022 21:55:49 GMT Server: Google Frontend Transfer-Encoding: chunked % curl -I 'https://latest.datasette.io/' HTTP/1.1 200 OK link: https://latest.datasette.io/.json; rel="alternate"; type="application/json+datasette" content-type: text/html; charset=utf-8 x-databases: _memory, _internal, fixtures, extra_database Date: Wed, 02 Feb 2022 21:55:52 GMT Server: Google Frontend Transfer-Encoding: chunked ``` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1497577017 | I_kwDOBm6k_c5ZQzY5 | 1957 | Reconsider row value truncation on query page | 9599 | open | 0 | 1 | 2022-12-14T23:49:47Z | 2022-12-14T23:50:50Z | OWNER | Consider this example: https://ripgrep.datasette.io/repos?sql=select+json_group_array%28full_name%29+from+repos ```sql select json_group_array(full_name) from repos ``` ![CleanShot 2022-12-14 at 15 48 32@2x](https://user-images.githubusercontent.com/9599/207739709-8177f683-f938-49a1-8225-42791fad88fe.png) My intention here was to get a string of JSON I can copy and paste elsewhere - see: https://til.simonwillison.net/sqlite/compare-before-after-json The truncation isn't helping here. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
903986178 | MDU6SXNzdWU5MDM5ODYxNzg= | 1344 | Test Datasette Docker images built for different architectures | 9599 | open | 0 | 10 | 2021-05-27T16:52:29Z | 2022-09-06T00:07:58Z | OWNER | Continuing on from #1319 - now that we have the ability to build Datasette's Docker image against multiple architectures we should test that it works. We can do this with QEMU emulation, see https://twitter.com/nevali/status/1397958044571602945 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
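A sketch of a smoke test under emulation, assuming the multi-arch image is published as `datasetteproject/datasette` and QEMU/binfmt is already configured on the host:

```python
import subprocess

for platform in ("linux/amd64", "linux/arm64"):
    # --platform forces the non-native variant, exercising QEMU
    subprocess.run(
        [
            "docker", "run", "--rm", "--platform", platform,
            "datasetteproject/datasette", "datasette", "--version",
        ],
        check=True,
    )
```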
628156527 | MDU6SXNzdWU2MjgxNTY1Mjc= | 789 | Mechanism for enabling pluggy tracing | 9599 | open | 0 | 2 | 2020-06-01T05:10:14Z | 2020-06-01T05:11:03Z | OWNER | Could be useful for debugging plugins: https://pluggy.readthedocs.io/en/latest/#call-tracing I tried this out by adding these two lines in `plugins.py`: ```python pm = pluggy.PluginManager("datasette") pm.add_hookspecs(hookspecs) # Added these: pm.trace.root.setwriter(print) pm.enable_tracing() ``` Output looked something like this: ``` INFO: 127.0.0.1:52724 - "GET /-/-/static/app.css HTTP/1.1" 404 Not Found actor_from_request [hook] datasette: <datasette.app.Datasette object at 0x106277ad0> request: <datasette.utils.asgi.Request object at 0x106550a50> finish actor_from_request --> [] [hook] extra_body_script [hook] template: show_json.html database: None table: None view_name: json_data datasette: <datasette.app.Datasette object at 0x106277ad0> finish extra_body_script --> [] [hook] extra_template_vars [hook] template: show_json.html database: None table: None view_name: json_data request: <datasette.utils.asgi.Request object at 0x1065504d0> datasette: <datasette.app.Datasette object at 0x106277ad0> finish extra_template_vars --> [] [hook] extra_css_urls [hook] template: show_json.html database: None table: None datasette: <datasette.app.Datasette object at 0x106277ad0> finish extra_css_urls --> [] [hook] extra_js_urls [hook] template: show_json.html database: None table: None datasette: <datasette.app.Datasette object at 0x106277ad0> finish extra_js_urls --> [] [hook] INFO: 127.0.0.1:52724 - "GET /-/actor HTTP/1.1" 200 OK actor_from_request [hook] datasette: <datasette.app.Datasette object at 0x106277ad0> request: <datasette.utils.asgi.Request object at 0x1065500d0> finish actor_from_request --> [] [hook] ``` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1359604075 | I_kwDOCGYnMM5RCelr | 481 | Idea: `sqlite-utils create-table tablename --sql "select ..."` | 9599 | open | 0 | 0 | 2022-09-02T01:41:24Z | 2022-09-02T01:42:08Z | OWNER | Could offer syntactic sugar for: ```sql create table foo as select * from bar ``` ``` sqlite-utils create-table data.db foo --sql "select * from bar" ``` https://sqlite-utils.datasette.io/en/stable/cli-reference.html#create-table | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
275755475 | MDU6SXNzdWUyNzU3NTU0NzU= | 140 | Heatmap visualization plugin | 9599 | open | 0 | 2 | 2017-11-21T15:34:23Z | 2019-05-13T18:33:51Z | OWNER | Could use https://github.com/scottbedard/svelte-heatmap | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1060631257 | I_kwDOBm6k_c4_N_LZ | 1528 | Add new `"sql_file"` key to Canned Queries in metadata? | 15178711 | open | 0 | 3 | 2021-11-22T21:58:01Z | 2022-06-10T03:23:08Z | CONTRIBUTOR | Currently for canned queries, you have to inline SQL in your `metadata.yaml` like so: ```yaml databases: fixtures: queries: neighborhood_search: sql: |- select neighborhood, facet_cities.name, state from facetable join facet_cities on facetable.city_id = facet_cities.id where neighborhood like '%' || :text || '%' order by neighborhood title: Search neighborhoods ``` This works fine, but for a few reasons, I usually have my canned queries already written in separate `.sql` files. I'd like to instead re-use those instead of re-writing it. So, I'd like to see a new `"sql_file"` key that works like so: `metadata.yaml`: ```yaml databases: fixtures: queries: neighborhood_search: sql_file: neighborhood_search.sql title: Search neighborhoods ``` `neighborhood_search.sql`: ```sql select neighborhood, facet_cities.name, state from facetable join facet_cities on facetable.city_id = facet_cities.id where neighborhood like '%' || :text || '%' order by neighborhood ``` Both of these would work in the exact same way, where Datasette would instead open + include `neighborhood_search.sql` on startup. A few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml: - Keeping SQL in standalone SQL files means syntax highlighting and other text editor integrations in my code - Multiline strings in yaml, while functional, are a tad cumbersome and are hard to edit - Works well with other tools (can pipe `.sql` files into the `sqlite3` CLI, or use with other SQLite clients easier) - Typically my canned queries are quite long compared to everything else in my metadata.yaml, so I'd love to separate it where possible Let me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right! | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1486036269 | I_kwDOBm6k_c5Ykx0t | 1941 | Mechanism for supporting key rotation for DATASETTE_SECRET | 9599 | open | 0 | 1 | 2022-12-09T05:24:53Z | 2022-12-09T05:25:20Z | OWNER | Currently if you change `DATASETTE_SECRET` all existing signed tokens - both cookies and API tokens and potentially other things too - will instantly expire. Adding support for key rotation would allow keys to be rotated on a semi-regular basis without logging everyone out / invalidating every API token instantly. Can model this on how Django does it: https://github.com/django/django/commit/0dcd549bbe36c060f536ec270d34d9e7d4b8e6c7 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
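A sketch of what rotation-aware verification could look like, modeled on the Django approach linked above: sign with the current secret but accept signatures made with any secret in a fallback list. It uses `itsdangerous` (already a Datasette dependency); the function itself is illustrative, not Datasette's actual API:

```python
from itsdangerous import BadSignature, Signer

def unsign_with_rotation(signed_value, secrets):
    # secrets[0] is the current key; the rest are older keys still accepted
    for secret in secrets:
        try:
            return Signer(secret).unsign(signed_value)
        except BadSignature:
            continue
    raise BadSignature("Value was not signed with any known secret")
```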
1823428714 | I_kwDOBm6k_c5sr1Bq | 2120 | Add __all__ to datasette/__init__.py | 9599 | open | 0 | 0 | 2023-07-27T01:07:10Z | 2023-07-27T01:07:10Z | OWNER | Currently looks like this: https://github.com/simonw/datasette/blob/08181823990a71ffa5a1b57b37259198eaa43e06/datasette/__init__.py#L1-L6 Adding `__all__ = ["Permission", "Forbidden"...]` would let me get rid of those `# noqa` comments. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1879214365 | I_kwDOCGYnMM5wAokd | 590 | Ability to tell if a Database is an in-memory one | 9599 | open | 0 | 1 | 2023-09-03T19:50:15Z | 2023-09-03T19:50:36Z | OWNER | Currently the constructor accepts `memory=True` or `memory_name=...` and uses those to create a connection, but does not record what those values were: https://github.com/simonw/sqlite-utils/blob/1260bdc7bfe31c36c272572c6389125f8de6ef71/sqlite_utils/db.py#L307-L349 This makes it hard to tell if a database object is to an in-memory or a file-based database, which is sometimes useful to know. | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
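A sketch of the obvious fix, recording the constructor arguments so callers can introspect them later; the attribute names are illustrative rather than the API sqlite-utils eventually shipped:

```python
class Database:
    def __init__(self, filename_or_conn=None, memory=False, memory_name=None):
        # remember how the connection was requested so callers can ask later
        self.memory = bool(memory or memory_name)
        self.memory_name = memory_name
        # ... existing connection setup continues as before ...
```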
724878151 | MDU6SXNzdWU3MjQ4NzgxNTE= | 1032 | Bring date parsing into Datasette core | 9599 | open | 0 | 8 | 2020-10-19T18:30:45Z | 2020-10-19T19:37:55Z | OWNER | Currently this is mainly handled by a plugin - https://github.com/simonw/datasette-dateutil - but I realise now that this really needs to be core functionality. See also Twitter thread: https://twitter.com/simonw/status/1318234808653213696 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1920416843 | I_kwDOCGYnMM5ydzxL | 597 | sqlite-utils insert-files should be able to convert fields | 1737541 | open | 0 | 0 | 2023-09-30T22:20:47Z | 2023-09-30T22:20:47Z | NONE | Currently using both `insert-files` and `convert` is needed in order to create sqlar files, it would be more convenient if it could be done with just one command. ```shell ~ ❯ cat test.py import os class Example: def __init__(self, arg1, arg2): self.arg1 = arg1 ~ ❯ sqlite-utils insert-files test.sqlar sqlar test.py -c name:name -c data:content -c mode:mode -c mtime:mtime -c sz:size --pk=name [####################################] 100% ~ ❯ sqlite-utils convert test.sqlar sqlar data "zlib.compress(value)" --import=zlib --where "name = 'test.py'" [####################################] 100% ~ ❯ cat test.py | sqlite-utils convert test.sqlar sqlar data "zlib.compress(sys.stdin.buffer.read())" --import=zlib --import=sys --where "name = 'test.py'" # Alternative way [####################################] 100% ~ ❯ sqlite3 test.sqlar "SELECT hex(data) FROM sqlar WHERE name = 'test.py';" | python3 -c "import sys, zlib; sys.stdout.buffer.write(zlib.decompress(bytes.fromhex(sys.stdin.read())))" import os class Example: def __init__(self, arg1, arg2): self.arg1 = arg1 ~ ❯ rm test.py ~ ❯ sqlar -l test.sqlar test.py ~ ❯ sqlar -x test.sqlar ~ ❯ cat test.py import os class Example: def __init__(self, arg1, arg2): self.arg1 = arg1 ``` | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
626211658 | MDU6SXNzdWU2MjYyMTE2NTg= | 778 | Ability to configure keyset pagination for views and queries | 9599 | open | 0 | 1 | 2020-05-28T04:48:56Z | 2020-10-02T02:26:25Z | OWNER | Currently views offer pagination, but it uses offset/limit - e.g. https://latest.datasette.io/fixtures/paginated_view?_next=100 This means pagination will perform poorly on deeper pages. If a view is based on a table that has a primary key it should be possible to configure efficient keyset pagination that works the same way that table pagination works. This may be as simple as configuring a column that can be treated as a "primary key" for the purpose of pagination using `metadata.json` - or with a `?_view_pk=colname` querystring argument. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/778/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
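A sketch of the keyset query for a view, assuming `id` has been configured as the pagination column (for example via `metadata.json` or the suggested `?_view_pk=id`); each page filters on the last value seen instead of using OFFSET:

```python
last_id = 100  # `id` of the final row on the previous page

sql = """
    select * from paginated_view
    where id > :last_id
    order by id
    limit 101
"""
# Fetch 101 rows for a page size of 100: if a 101st row comes back, its
# id becomes the :last_id token in the next page's URL.
```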
1200650491 | I_kwDOBm6k_c5HkHj7 | 1711 | Template context powered entirely by the JSON API format | 9599 | open | 0 | 8755003 | 1 | 2022-04-11T22:59:27Z | 2022-12-13T05:29:06Z | OWNER | Datasette 1.0 will have a stable template context. I'm going to achieve this by refactoring the templates to work only with keys returned by the API (or some of its extras) - then the API documentation will double up as template documentation. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
520667773 | MDU6SXNzdWU1MjA2Njc3NzM= | 620 | Mechanism for indicating foreign key relationships in the table and query page URLs | 9599 | open | 0 | 6 | 2019-11-10T22:26:27Z | 2021-04-05T03:57:22Z | OWNER | Datasette currently only inflates foreign keys (into name hyperlinks) if it detects them as foreign key constraints in the underlying database. It would be useful if you could specify additional "foreign keys" using both `metadata.json` and the querystring - similar to how you can pass `?_fts_table=x` https://datasette.readthedocs.io/en/stable/full_text_search.html#configuring-full-text-search-for-a-table-or-view | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/620/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 } |
1563264257 | I_kwDOBm6k_c5dLYUB | 2010 | Row page should default to card view | 9599 | open | 0 | 3268330 | 1 | 2023-01-30T21:49:37Z | 2023-01-30T21:52:06Z | OWNER | Datasette currently uses the same table layout on the row pages as it does on the table pages: https://datasette.io/content/pypi_packages?_sort=name&name__exact=datasette-column-inspect <img width="1068" alt="image" src="https://user-images.githubusercontent.com/9599/215602793-b15dad48-c3e9-4f34-9875-3e79b7aa586d.png"> https://datasette.io/content/pypi_packages/datasette-column-inspect <img width="1068" alt="image" src="https://user-images.githubusercontent.com/9599/215602883-9d714eec-3908-493e-93d0-c788af47ba21.png"> If you shrink down to mobile width you get this instead, on both of those pages: <img width="613" alt="image" src="https://user-images.githubusercontent.com/9599/215602964-b95aa089-0269-4647-beae-66fffad332f5.png"> I think that view, which I think of as the "card view", is plain better if you're looking at just a single row - and it (or a variant of it) should be the default presentation on the row page. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
895686039 | MDU6SXNzdWU4OTU2ODYwMzk= | 1336 | Document turning on WAL for live served SQLite databases | 9599 | open | 0 | 1 | 2021-05-19T17:08:58Z | 2022-01-13T21:55:59Z | OWNER | Datasette docs don't talk about WAL yet, which allows you to safely serve reads from a database file while it is accepting writes. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
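For the docs, a minimal sketch of turning WAL on; `journal_mode=wal` persists in the database file, so it only needs to run once:

```python
import sqlite3

conn = sqlite3.connect("data.db")
conn.execute("PRAGMA journal_mode=wal;")
conn.close()
```

The `sqlite-utils enable-wal data.db` command does the same thing from the CLI.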
1399933513 | I_kwDOBm6k_c5TcUpJ | 1833 | Ability to submit long queries by POST | 9599 | open | 0 | 0 | 2022-10-06T16:03:26Z | 2022-10-06T16:18:00Z | OWNER | Datasette doesn't limit URL lengths but some common web proxies do - the one in front of Google Cloud Run for example limits to 8KB total for incoming request headers: https://cloud.google.com/load-balancing/docs/quotas#https-lb-header-limits This means longer SQL queries can break! Need an optional mechanism for submitting queries by POST instead. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1833/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1447050738 | I_kwDOBm6k_c5WQD3y | 1886 | Call for birthday presents: if you're using Datasette, let us know how you're using it here | 9599 | open | 0 | 13 | 2022-11-13T19:25:51Z | 2022-12-18T17:34:20Z | OWNER | Datasette is 5 years old today. To celebrate, I'm asking the community for birthday presents: https://simonwillison.net/2022/Nov/13/datasette-birthday/ > To celebrate this open source project’s birthday, I’ve decided to try something new: I’m going to ask for birthday presents. > > An aspect of Datasette’s marketing that I’ve so far neglected is social proof. I think it’s time to change that: I know people are using the software to do cool things, but this often happens behind closed doors. > > For Datasette’s birthday, I’m looking for endorsements and case studies and just general demonstrations that show how people are using it to do cool stuff. > > So: if you’ve used Datasette to solve a problem, and you’re willing to publicize it, please give us the gift of your endorsement! > > [...] > > Add a comment to [this issue thread](https://github.com/simonw/datasette/issues/1886) describing what you’re doing. Just a few sentences is fine—though a screenshot or even a link to a live instance would be even better | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1886/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
316621102 | MDU6SXNzdWUzMTY2MjExMDI= | 235 | Add limit on the size in KB of data returned from a single query | 9599 | open | 0 | 2 | 2018-04-22T23:01:15Z | 2018-04-24T00:30:02Z | OWNER | Datasette limits the number of rows returned to 1,000 and limits the time spent executing a SQL query to 1000ms - and both of these limits can be customized. It does not have a limit on the size of the response returned. It's possible to compose maliciously large SQL responses in a small number of rows using mechanisms like the `group_concat()` aggregate function. It would be good to avoid malicious SQL creating 100MB+ responses and potentially crashing the server. I think the easiest place to implement that is here: https://github.com/simonw/datasette/blob/f3f42957128c1e7ece584d45d9167f2ac003a3b8/datasette/app.py#L175-L190 Currently we use `cursor.fetchmany()` to fetch up to 1,001 rows at once. Instead, we could switch to iterating through `cursor.fetchone()` (or just using `for row in cursor`) and keeping a running tally of the size of the response as we go - maybe just using `rough_response_size += len(str(row))`. If that goes above a certain threshold we can terminate the response with an error, like we do with time limits. The bigger challenge here is understanding how well this approach works and what impact it will have on overall Datasette performance. I think I need #33 for this. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
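A sketch of that running tally; the cap and the exception are placeholders, not actual Datasette internals:

```python
def fetch_rows_capped(cursor, max_bytes=100 * 1024):
    # cursor: an open sqlite3 cursor for the user's query
    rough_response_size = 0
    rows = []
    for row in cursor:
        rows.append(row)
        rough_response_size += len(str(row))
        if rough_response_size > max_bytes:
            # terminate with an error, as is already done for time limits
            raise ValueError("Response would exceed the size limit")
    return rows
```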
323658641 | MDU6SXNzdWUzMjM2NTg2NDE= | 262 | Add ?_extra= mechanism for requesting extra properties in JSON | 9599 | open | 0 | 3268330 | 27 | 2018-05-16T14:55:42Z | 2023-03-29T06:22:22Z | OWNER | Datasette views currently work by creating a set of data that should be returned as JSON, then defining an additional, optional `template_data()` function which is called if the view is being rendered as HTML. This `template_data()` function calculates extra template context variables which are necessary for the HTML view but should not be included in the JSON. Example of how that is used today: https://github.com/simonw/datasette/blob/2b79f2bdeb1efa86e0756e741292d625f91cb93d/datasette/views/table.py#L672-L704 With features like Facets in #255 I'm beginning to want to move more items into the `template_data()` - in the case of facets it's the `suggested_facets` array. This saves that feature from being calculated (involving several SQL queries) for the JSON case where it is unlikely to be used. But... as an API user, I want to still optionally be able to access that information. Solution: Add a `?_extra=suggested_facets&_extra=table_metadata` argument which can be used to optionally request additional blocks to be added to the JSON API. Then redefine as many of the current `template_data()` features as extra arguments instead, and teach Datasette to return certain extras by default when rendering templates. This could allow the JSON representation to be slimmed down further (removing e.g. the `table_definition` and `view_definition` keys) while still making that information available to API users who need it. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
678760988 | MDU6SXNzdWU2Nzg3NjA5ODg= | 932 | End-user documentation | 9599 | open | 0 | 3268330 | 6 | 2020-08-13T22:04:39Z | 2022-03-08T15:20:48Z | OWNER | Datasette's documentation is aimed at people who install and configure it. What about end users of preconfigured and deployed Datasette instances? Something that can be linked to from the Datasette UI would be really useful. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1216436131 | I_kwDOBm6k_c5IgVej | 1721 | Implement plugin hooks: `register_table_extras`, `register_row_extras`, `register_query_extras` | 9599 | open | 0 | 8755003 | 0 | 2022-04-26T20:21:49Z | 2022-12-13T05:29:07Z | OWNER | Designed in: - #1720 Part of: - #262 - #1709 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
531502365 | MDU6SXNzdWU1MzE1MDIzNjU= | 646 | Make database level information from metadata.json available in the index.html template | 18017473 | open | 0 | 3268330 | 3 | 2019-12-02T19:55:10Z | 2022-03-15T20:50:34Z | NONE | Did a search on the issues here and didn't find anything related to what I want. I want to have information that is on the database level of the JSON like title, source and source_url, and use it on the index page. I tried some small tweaks on the python and html files, but failed to get that result. Is there a way? Thanks! | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1054244712 | I_kwDOBm6k_c4-1n9o | 1510 | Datasette 1.0 documented template context (maybe via API docs) | 9599 | open | 0 | 3268330 | 3 | 2021-11-15T23:23:58Z | 2023-06-28T02:05:21Z | OWNER | Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1375792876 | I_kwDOBm6k_c5SAO7s | 1811 | Drop-down menu with "REGEXP" choice | 562352 | open | 0 | 0 | 2022-09-16T11:06:18Z | 2022-09-16T15:30:31Z | NONE | Drop-down menu below could add "REGEXP" choice when REGEXP sqlite extension is installed and used ![image](https://user-images.githubusercontent.com/562352/190675352-810fbdca-0827-4034-8b9f-fd67d5c35afb.png) Not sure. Close the issue if you don't find it relevant. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
456569067 | MDU6SXNzdWU0NTY1NjkwNjc= | 510 | Ability to facet by delimiter (e.g. comma separated fields) | 9599 | open | 0 | 9599 | 1 | 2019-06-15T19:34:41Z | 2019-07-08T15:44:51Z | OWNER | E.g. if a field contains "Tags,With,Commas" be able to facet them in the same way as `_facet_array=` lets you facet `["Tags", "With", "Commas"]` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
530491074 | MDU6SXNzdWU1MzA0OTEwNzQ= | 14 | Command for importing events | 9599 | open | 0 | 3 | 2019-11-29T21:28:58Z | 2020-04-14T19:38:34Z | MEMBER | Eg from https://api.github.com/users/simonw/events Docs here: https://developer.github.com/v3/activity/events/#list-events-performed-by-a-user | 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1524983536 | I_kwDOBm6k_c5a5Wbw | 1981 | Canned query field labels truncated | 9599 | open | 0 | 1 | 2023-01-09T06:04:24Z | 2023-01-09T06:05:44Z | OWNER | Eg here on mobile: https://timezones.datasette.io/timezones/by_point?longitude=-0.1406632&latitude=50.8246776 ![107A1894-D1DA-4158-9EA3-40C840DD10E3](https://user-images.githubusercontent.com/9599/211248895-c922ce61-95d3-47ca-9314-dcff7c86afab.jpeg) | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1493471221 | I_kwDOBm6k_c5ZBI_1 | 1949 | `.json` errors should be returned as JSON | 9599 | open | 0 | 8755003 | 10 | 2022-12-13T06:14:12Z | 2022-12-15T00:46:27Z | OWNER | Eg the error in this issue: - #1945 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
803356942 | MDU6SXNzdWU4MDMzNTY5NDI= | 1218 | /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory | 11855322 | open | 0 | 1 | 2021-02-08T09:07:00Z | 2021-02-23T12:12:17Z | NONE | Error as above, however I do have python3.8 and the readme indicates this is supported. ``` (venv) (base) Robins-MacBook:datasette robin$ ls /usr/local/opt/python3/bin/ .. pip3 python3 python3.8 ``` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1616440856 | I_kwDOJHON9s5gWO4Y | 5 | Configure full text search | 9599 | open | 0 | 0 | 2023-03-09T05:20:46Z | 2023-03-09T05:20:46Z | MEMBER | FTS would be useful. Maybe even extract the plain text from the notes to make that index easier to create, rather than creating it against the HTML. Can use the `plaintext` property for that. | 611552758 | issue | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
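A sketch using the sqlite-utils Python API, assuming the notes land in a `notes` table with the `plaintext` column mentioned above; `create_triggers=True` keeps the index in sync as rows change:

```python
from sqlite_utils import Database

db = Database("notes.db")
db["notes"].enable_fts(["plaintext"], create_triggers=True)
```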
602533300 | MDU6SXNzdWU2MDI1MzMzMDA= | 1 | Import photo metadata from Apple Photos into SQLite | 9599 | open | 0 | 5324096 | 8 | 2020-04-18T19:23:26Z | 2020-05-04T02:41:40Z | MEMBER | Faces, albums, locations, that kind of thing. | 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
447408527 | MDU6SXNzdWU0NDc0MDg1Mjc= | 483 | Option to facet by date using month or year | 9599 | open | 0 | 5 | 2019-05-23T01:25:29Z | 2019-05-29T21:38:27Z | OWNER | Facet by date (from #481) can take datetimes and facet them by the day component. https://latest.datasette.io/fixtures/facetable?_facet_date=created I'd like to also be able to facet by month or year. I'm not sure what the best way to achieve this is. Could be two more Facet classes (YearFacet and MonthFacet) but I think it might be nicer if the existing DateFacet could take an optional argument that changed its behaviour. But... if I do that, do I expose it in the UI somewhere or is it only available to URL-hackers? | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/483/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
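A sketch of the SQL a month variant might run, assuming a `created` datetime column; the only difference from day faceting is how far `strftime` truncates:

```python
sql = """
    select strftime('%Y-%m', created) as facet_value, count(*) as n
    from facetable
    group by facet_value
    order by n desc
"""
```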
712368432 | MDU6SXNzdWU3MTIzNjg0MzI= | 984 | Review accessibility of new column action menus | 9599 | open | 0 | 1 | 2020-09-30T23:56:44Z | 2020-10-01T00:01:36Z | OWNER | Feature added in #981 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1339444565 | I_kwDOBm6k_c5P1k1V | 1783 | Better guidance as to what to do after you've installed Datasette | 9599 | open | 0 | 2 | 2022-08-15T20:11:06Z | 2022-08-15T20:14:01Z | OWNER | Feedback [from Discord](https://discord.com/channels/823971286308356157/823971286941302908/1008822978793984060): > hello, love the project and came for help and to point out a possible gap in the docs. starting with "getting started" and "installation" everything looks great, but then there's a giant leap after you have it installed and running. from the user perspective of "i have a csv or set of csvs that i want to turn into a table(s), what do i do next?" --- so something like maybe a page for creating your first project should go after "installation". - https://docs.datasette.io/en/0.62/getting_started.html - https://docs.datasette.io/en/0.62/installation.html | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1121121305 | I_kwDOBm6k_c5C0vQZ | 1618 | Reconsider policy on blocking queries containing the string "pragma" | 770231 | open | 0 | 6 | 2022-02-01T19:39:46Z | 2022-02-02T19:42:03Z | NONE | First of all, thanks for creating this cool project, and also supporting publishing to various hosting services out of the box. While testing out, I noticed legitimate queries such as ``` select * from books where title like 'Pragmatic%' ``` or ``` select * from books where title = 'The Pragmatic Programmer' ``` are blocked, due to the regular expression check here: https://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L185 Example as seen from a Datasette instance: https://fivethirtyeight.datasettes.com/polls?sql=select+*+from+books+where+title+like+%27Pragmatic%25%27%0D%0A I'd propose a regular expression like ``` re.compile(f"pragma_(?!({'|'.join(allowed_pragmas)}))"), ``` instead of ``` re.compile(f"pragma(?!_({'|'.join(allowed_pragmas)}))"), ``` I can create a pull request with this change, unless the maintainers think it would allow unwanted queries to be executed. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
629473827 | MDU6SXNzdWU2Mjk0NzM4Mjc= | 5 | Set up a demo | 26745575 | open | 0 | 1 | 2020-06-02T19:56:49Z | 2020-09-01T06:18:43Z | NONE | First off, thanks for open sourcing this application! This is a suggestion to increase the number of people that would make use of it: an example in the readme file would help. Currently, users have to clone the app, install it, authorize through pocket, run a command, and then find out if this application does what they hope it does. Another possibility is to add a file `example-output.db`, containing one (mock) Pocket article. Keep up the good work! | 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
612287234 | MDU6SXNzdWU2MTIyODcyMzQ= | 16 | Import machine-learning detected labels (dog, llama etc) from Apple Photos | 9599 | open | 0 | 13 | 2020-05-05T02:45:43Z | 2020-05-05T05:38:16Z | MEMBER | Follow-on from #1. Apple Photos runs some very sophisticated machine learning on-device to figure out if photos are of dogs, llamas and so on. I really want to extract those labels out into my own database. | 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 1, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1058803238 | I_kwDOBm6k_c4_HA4m | 1520 | Pattern for avoiding accidental URL over-rides | 9599 | open | 0 | 1 | 2021-11-19T18:28:05Z | 2021-11-19T18:29:26Z | OWNER | Following #1517 I'm experimenting with a plugin that does this: ```python @hookimpl def register_routes(): return [ (r"/(?P<db_name>[^/]+)/(?P<table_and_format>[^/]+?)$", Table().view), ] ``` This is supposed to replace the default table page with new code... but there's a problem: `/-/versions` on that instance now returns 404 `Database '-' does not exist`! Need to figure out a pattern to avoid that happening. Plugins get to add their routes before Datasette's default routes, which is why this is happening here. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
817989436 | MDU6SXNzdWU4MTc5ODk0MzY= | 242 | Async support | 25778 | open | 0 | 13 | 2021-02-27T18:29:38Z | 2021-10-28T14:37:56Z | CONTRIBUTOR | Following our conversation last week, want to note this here before I forget. I've had a couple situations where I'd like to do a bunch of updates in an async event loop, but I run into SQLite's issues with concurrent writes. This feels like something sqlite-utils could help with. PeeWee ORM has a [SQLite write queue](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq) that might be a good model. It's using threads or gevent, but I _think_ that approach would translate well enough to asyncio. Happy to help with this, too. | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
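A rough sketch of what the write-queue idea could look like on asyncio (illustrative only - sqlite-utils has no such API, and a production version would push the blocking `execute` into a thread, e.g. via `asyncio.to_thread`):

```python
import asyncio
import sqlite3

class WriteQueue:
    # Serialize all writes through one consumer, sidestepping SQLite's
    # single-writer limitation the way PeeWee's sqliteq playhouse does.
    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.queue = asyncio.Queue()

    async def run(self):
        while True:
            sql, params, future = await self.queue.get()
            try:
                with self.conn:  # commit (or roll back) per statement
                    self.conn.execute(sql, params)
                future.set_result(None)
            except Exception as exc:
                future.set_exception(exc)

    async def execute(self, sql, params=()):
        future = asyncio.get_running_loop().create_future()
        await self.queue.put((sql, params, future))
        await future  # propagates any exception from the consumer

async def main():
    wq = WriteQueue("demo.db")
    consumer = asyncio.create_task(wq.run())
    await wq.execute("create table if not exists logs (msg text)")
    # Many coroutines can now "write" concurrently without lock errors
    await asyncio.gather(*(
        wq.execute("insert into logs values (?)", (f"msg {i}",))
        for i in range(100)
    ))
    consumer.cancel()

asyncio.run(main())
```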
803333769 | MDU6SXNzdWU4MDMzMzM3Njk= | 32 | KeyError: 'Contents' on running upload | 11855322 | open | 0 | 3 | 2021-02-08T08:36:37Z | 2021-07-22T06:40:25Z | NONE | Following the readme, on Big Sur, and having entered my auth creds via `dogsheep-photos s3-auth`: ``` (venv) (base) Robins-MacBook:datasette robin$ dogsheep-photos upload photos.db ~/Pictures/Photos\ /Users/robin/Pictures/Library.photoslibrary --dry-run Fetching existing keys from S3... Traceback (most recent call last): File "/Users/robin/datasette/venv/bin/dogsheep-photos", line 8, in <module> sys.exit(cli()) File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/Users/robin/datasette/venv/lib/python3.8/site-packages/dogsheep_photos/cli.py", line 96, in upload key.split(".")[0] for key in get_all_keys(client, creds["photos_s3_bucket"]) File "/Users/robin/datasette/venv/lib/python3.8/site-packages/dogsheep_photos/utils.py", line 46, in get_all_keys for row in page["Contents"]: KeyError: 'Contents' ``` Possibly since the bucket is in `EU (London) eu-west-2` and this info is not requested? | 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
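The traceback points at `page["Contents"]` in `get_all_keys`. S3's `list_objects_v2` omits the `Contents` key entirely when a bucket or page is empty, which would explain the crash regardless of region. A guarded version might look like this (a sketch of the likely fix, not the actual dogsheep-photos patch):

```python
def get_all_keys(client, bucket):
    # boto3 paginator for list_objects_v2; empty pages carry no "Contents" key
    paginator = client.get_paginator("list_objects_v2")
    keys = []
    for page in paginator.paginate(Bucket=bucket):
        for row in page.get("Contents", []):  # was: page["Contents"]
            keys.append(row["Key"])
    return keys
```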
1428630253 | I_kwDOBm6k_c5VJyrt | 1873 | Ensure insert API has good tests for rowid and compound primary key tables | 9599 | open | 0 | 8755003 | 11 | 2022-10-30T06:22:17Z | 2022-12-13T05:29:08Z | OWNER | Following: - #1866 I need to design and implement various edge-cases for primary keys: - Table without an auto-incrementing primary key - Table with compound primary keys - Table with just a `rowid` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
reopened | ||||||
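The three table shapes listed in #1873 are easy to reproduce with plain SQL - for example (schemas are my own, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    -- no explicit primary key at all: rows only have a rowid
    create table logs (message text);

    -- explicit primary key that is not auto-incrementing
    create table countries (code text primary key, name text);

    -- compound primary key
    create table versions (
        package text,
        version text,
        primary key (package, version)
    );
    """
)
```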
1700936245 | I_kwDOCGYnMM5lYjo1 | 542 | Remove `skip_false=True` and `--no-skip-false` in `sqlite-utils` 4.0 | 9599 | open | 0 | 9374594 | 1 | 2023-05-08T21:04:28Z | 2023-05-08T21:07:41Z | OWNER | Following: - #527 The only reason I didn't remove this mis-feature entirely is that it represents a backwards-incompatible change. I'll make that change in 4.0. | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1125576543 | I_kwDOBm6k_c5DFu9f | 1630 | Review datasette.utils and decide which functions should be documented for 1.0 | 9599 | open | 0 | 3268330 | 0 | 2022-02-07T06:39:52Z | 2022-02-07T06:39:52Z | OWNER | Follows: - #1176 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
756875827 | MDU6SXNzdWU3NTY4NzU4Mjc= | 1129 | Fix footer to the bottom of the page | 3243482 | open | 0 | 0 | 2020-12-04T07:28:07Z | 2020-12-04T16:04:29Z | CONTRIBUTOR | The footer doesn't stick to the bottom if the body content isn't long enough to reach the end of the viewport. ![before & after](https://user-images.githubusercontent.com/3243482/101134785-f6595a80-361b-11eb-81ce-b8b5cb9c5bc2.png) This can be fixed using flexbox. ```css body { min-height: 100vh; display: flex; flex-direction: column; } .content { flex-grow: 1; } ``` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
698791218 | MDU6SXNzdWU2OTg3OTEyMTg= | 50 | favorites --stop_after=N stops after min(N, 200) | 370930 | open | 0 | 2 | 2020-09-11T03:38:14Z | 2020-09-13T05:11:14Z | CONTRIBUTOR | For any number greater than 200, `favorites --stop_after` stops after getting 200 tweets, e.g. ``` $ twitter-to-sqlite favorites tweets.db --stop_after=300 Importing favorites [####################################] 199 $ ``` I don't _think_ this is a limitation of the API (if you omit `--stop_after` you get some very large number, possibly all of them), so I _think_ this is a bug. | 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
327365110 | MDU6SXNzdWUzMjczNjUxMTA= | 294 | inspect should record column types | 9599 | open | 0 | 7 | 2018-05-29T15:10:41Z | 2019-06-28T16:45:28Z | OWNER | For each table we want to know the columns, their order and what type they are. I'm going to break with SQLite defaults a little on this one and allow datasette to define additional types - to start with just a `geometry` type for columns that are detected as SpatiaLite geometries. Possible JSON design: "columns": [{ "name": "title", "type": "text" }, ...] Refs #276 | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
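For reference, `PRAGMA table_info` already exposes name, order and declared type, which is presumably the raw material here; detecting SpatiaLite geometries would additionally need a lookup in SpatiaLite's `geometry_columns` table. A sketch (not necessarily how `datasette inspect` implements it):

```python
import sqlite3

def column_info(conn, table):
    # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
    return [
        {"name": name, "type": (col_type or "").lower()}
        for _, name, col_type, *_ in conn.execute(
            "PRAGMA table_info([{}])".format(table)
        )
    ]

conn = sqlite3.connect(":memory:")
conn.execute("create table books (title text, published integer)")
print(column_info(conn, "books"))
# [{'name': 'title', 'type': 'text'}, {'name': 'published', 'type': 'integer'}]
```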
1186696202 | I_kwDOBm6k_c5Gu4wK | 1696 | Show foreign key label when filtering | 9599 | open | 0 | 2 | 2022-03-30T16:18:54Z | 2023-01-29T20:56:20Z | OWNER | For example here: <img width="624" alt="image" src="https://user-images.githubusercontent.com/9599/160882762-a5000ad7-7646-4f60-8d58-23ad9e79b184.png"> 3 corresponds to "Human Related: Other" - it would be neat to display this in this area of the page somehow. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
348043884 | MDU6SXNzdWUzNDgwNDM4ODQ= | 357 | Plugin hook for loading metadata.json | 9599 | open | 0 | 6 | 2018-08-06T19:00:01Z | 2020-06-21T22:19:58Z | OWNER | For https://github.com/simonw/russian-ira-facebook-ads-datasette/tree/af6d956995e14afd585c35a6a06bb01da32043ba I wrote a script to convert YAML to JSON because YAML is a better format for embedding multi-line HTML descriptions and canned SQL statements. Example yaml metadata file: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/af6d956995e14afd585c35a6a06bb01da32043ba/russian-ads-metadata.yaml It would be useful if Datasette could be fed a YAML file directly: datasette -m metadata.yaml Question is... should this be a native feature (hence adding a YAML dependency) or should it be handled by a `datasette-metadata-yaml` plugin, using a new plugin hook for loading metadata? If so, what would other use-cases for that plugin hook be? | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
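The conversion script being worked around is only a few lines, which is part of the argument for a plugin hook - something along these lines (a sketch; the filenames are assumptions, not the script in the linked repo):

```python
import json
import yaml  # PyYAML

# Read the YAML metadata (multi-line descriptions, canned queries, etc.)
with open("russian-ads-metadata.yaml") as f:
    metadata = yaml.safe_load(f)

# Write out the metadata.json that `datasette -m` expects
with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=4)
```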
1179998071 | I_kwDOBm6k_c5GVVd3 | 1684 | Mechanism for disabling faceting on large tables only | 9599 | open | 0 | 1 | 2022-03-24T20:06:11Z | 2022-03-24T20:13:19Z | OWNER | Forest turned off faceting on https://labordata.bunkum.us/ because it was causing performance problems on some of the huge tables - but it would be nice if it could still be an option on smaller tables such as https://labordata.bunkum.us/voluntary_recognitions-4421085/voluntary_recognitions One option: a new setting that automatically disables faceting (and facet suggestion) for tables that have either more than X rows or that are so big that the count could not be completed within the time limit. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1087919372 | I_kwDOBm6k_c5A2FUM | 1578 | Confirm if documented nginx proxy config works for row pages with escaped characters in their primary key | 9599 | open | 0 | 4 | 2021-12-23T18:27:59Z | 2021-12-24T21:33:19Z | OWNER | Found this while working on https://github.com/simonw/datasette-tiddlywiki <img width="1254" alt="image" src="https://user-images.githubusercontent.com/9599/147279097-e02f80f3-cc88-4bdd-a26f-03f924c13b5e.png"> Then clicking on `/tiddlywiki/tiddlers/%24%3A%2FDefaultTiddlers` returns a 404. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
314834783 | MDU6SXNzdWUzMTQ4MzQ3ODM= | 219 | Expose units in the JSON API? | 45057 | open | 0 | 0 | 2018-04-16T22:04:25Z | 2018-04-16T22:04:25Z | CONTRIBUTOR | From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map. I'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). Perhaps the SQL function is a better approach in general. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
703246031 | MDU6SXNzdWU3MDMyNDYwMzE= | 51 | github-to-sqlite should handle rate limits better | 9599 | open | 0 | 4 | 2020-09-17T04:01:50Z | 2022-10-14T16:34:07Z | MEMBER | From #50 - right now it will crash with an error if it hits the rate limit. Since the rate limit information (including reset time) is available in the headers, it could automatically sleep and try again instead. | 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
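A sketch of the sleep-and-retry behaviour suggested in #51, using GitHub's standard rate-limit headers (not the github-to-sqlite source; the helper function is illustrative):

```python
import time
import requests

def get_with_rate_limit(url, headers=None):
    while True:
        response = requests.get(url, headers=headers or {})
        # GitHub signals an exhausted quota with a 403 plus these headers
        if (
            response.status_code == 403
            and response.headers.get("X-RateLimit-Remaining") == "0"
        ):
            reset_at = int(response.headers["X-RateLimit-Reset"])  # epoch seconds
            time.sleep(max(0, reset_at - time.time()) + 1)
            continue
        return response
```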
1615692818 | I_kwDOBm6k_c5gTYQS | 2035 | Potential feature: special support for `?a=1&a=2` on the query page | 9599 | open | 0 | 3268330 | 14 | 2023-03-08T18:05:03Z | 2023-03-31T16:09:08Z | OWNER | From a discussion on Discord: https://discord.com/channels/823971286308356157/996877076982415491/1082789517062320138 The key idea is to make it easier for people to implement `where id in (...)` that's populated from query string arguments. What if you could add `?id=11&id=32&id=62` to the URL and have that made available as a list that can be used in the query? | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
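Whatever syntax ends up on the query page, the underlying expansion is mechanical: repeated arguments become numbered parameters so the query stays safely parameterized. A sketch of the idea (my guess at a design, nothing is decided in the issue):

```python
ids = ["11", "32", "62"]  # e.g. parsed from ?id=11&id=32&id=62

params = {f"id_{i}": value for i, value in enumerate(ids)}
placeholders = ", ".join(f":id_{i}" for i in range(len(ids)))
sql = f"select * from mytable where id in ({placeholders})"

print(sql)     # select * from mytable where id in (:id_0, :id_1, :id_2)
print(params)  # {'id_0': '11', 'id_1': '32', 'id_2': '62'}
```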
1515815014 | I_kwDOBm6k_c5aWYBm | 1973 | render_cell plugin hook's row object is not a sqlite.Row | 193185 | open | 0 | 4 | 2023-01-01T20:27:46Z | 2023-01-29T00:40:31Z | CONTRIBUTOR | From https://docs.datasette.io/en/stable/plugin_hooks.html#render-cell-row-value-column-table-database-datasette: > row - sqlite.Row > The SQLite row object that the value being rendered is part of This appears to actually be a [CustomRow](https://github.com/simonw/datasette/blob/f0fadc28ddb9f82e5cc1ecaa51e8a342eb6dc528/datasette/utils/__init__.py#L773-L789), but I think that's unrelated to my issue. I have a table: ```sql CREATE TABLE IF NOT EXISTS "dss_job_stats"( job_id integer not null references dss_job(id) on delete cascade, host text not null, -- other columns elided as irrelevant primary key (job_id, host) ); ``` On datasette 0.63.2, the `render_cell` hook receives a `row` value that looks like: ``` CustomRow([('job_id', {'value': 2, 'label': '2'}), ('host', 'cldellow.com')]) ``` I expected the `job_id` value to be `2`, but it's actually `{'value': 2, 'label': '2'}`. I can work around this, but was wondering if this was intended behaviour? | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
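The workaround mentioned at the end presumably amounts to unwrapping the dict before use, something like this inside a `render_cell` implementation (my sketch; whether the wrapping is intended is exactly the open question):

```python
from datasette import hookimpl

def plain(cell):
    # Foreign key cells apparently arrive as {'value': 2, 'label': '2'}; unwrap
    if isinstance(cell, dict) and "value" in cell:
        return cell["value"]
    return cell

@hookimpl
def render_cell(row, value, column, table, database, datasette):
    if table == "dss_job_stats" and column == "host":
        job_id = plain(row["job_id"])  # 2 rather than {'value': 2, 'label': '2'}
        ...  # render using job_id
    return None  # fall through to default rendering
```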
778380836 | MDU6SXNzdWU3NzgzODA4MzY= | 4 | Feature Request: Gmail | 203343 | open | 0 | 5 | 2021-01-04T21:31:09Z | 2021-03-04T20:54:44Z | NONE | From takeout, I only exported my Gmail account. Ideally I could parse this into sqlite via this tool. | 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1620515757 | I_kwDOBm6k_c5glxut | 2039 | Subtle bug with `--load-extension` and `--static` flags with absolute Windows paths with `C:\` | 15178711 | open | 0 | 0 | 2023-03-12T21:18:52Z | 2023-03-12T21:18:52Z | CONTRIBUTOR | From the Datasette discord: A user tried running the following command on Windows: ``` datasette --load-extension="C:\spatialite\mod_spatialite-5.0.1-win-x86\mod_spatialite.dll" ``` This failed with `"The specified module could not be found"`, because the entrypoint option introduced in #1789 splits the input differently. Instead of loading the extension found at `"C:\spatialite\mod_spatialite-5.0.1-win-x86\mod_spatialite.dll"`, it instead tried to load the extension at `"C"` with entrypoint `"\spatialite\mod_spatialite-5.0.1-win-x86\mod_spatialite.dll"`. This is hard because most absolute Windows paths have a colon in them, like `C:\foo.txt` or `D:\bar.txt`. I'd imagine the `--static` flag is also vulnerable to this type of bug. The "solution" is to use a relative path instead, but that doesn't feel that great. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
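One way the split could be made drive-letter aware (an assumption about a possible fix, not what Datasette ships): treat an `X:\` or `X:/` prefix as part of the path before looking for an `:entrypoint` suffix.

```python
import re

def split_entrypoint(value):
    # Absolute Windows path, optionally followed by :entrypoint
    match = re.match(r"^([A-Za-z]:[\\/][^:]*)(?::(.+))?$", value)
    if match:
        return match.group(1), match.group(2)
    path, sep, entrypoint = value.partition(":")
    return path, (entrypoint if sep else None)

print(split_entrypoint(r"C:\spatialite\mod_spatialite-5.0.1-win-x86\mod_spatialite.dll"))
# ('C:\\spatialite\\mod_spatialite-5.0.1-win-x86\\mod_spatialite.dll', None)
print(split_entrypoint("/usr/lib/mod_spatialite.so:sqlite3_modspatialite_init"))
# ('/usr/lib/mod_spatialite.so', 'sqlite3_modspatialite_init')
```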
1891614971 | I_kwDOCGYnMM5wv8D7 | 594 | Represent compound foreign keys in table.foreign_keys output | 9599 | open | 0 | 2 | 2023-09-12T03:48:24Z | 2023-09-12T03:51:13Z | OWNER | Given this schema: ```sql CREATE TABLE departments ( campus_name TEXT NOT NULL, dept_code TEXT NOT NULL, dept_name TEXT, PRIMARY KEY (campus_name, dept_code) ); CREATE TABLE courses ( course_code TEXT PRIMARY KEY, course_name TEXT, campus_name TEXT NOT NULL, dept_code TEXT NOT NULL, FOREIGN KEY (campus_name, dept_code) REFERENCES departments(campus_name, dept_code) ); ``` The output of `db["courses"].foreign_keys` right now is: ``` [ForeignKey(table='courses', column='campus_name', other_table='departments', other_column='campus_name'), ForeignKey(table='courses', column='dept_code', other_table='departments', other_column='dept_code')] ``` Which suggests two normal foreign keys, not one compound foreign key. | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
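SQLite itself already reports this as a single compound key: `PRAGMA foreign_key_list` gives both column pairs the same `id`, with `seq` ordering them, so the grouping information a richer representation would need is available:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE departments (
        campus_name TEXT NOT NULL,
        dept_code TEXT NOT NULL,
        dept_name TEXT,
        PRIMARY KEY (campus_name, dept_code)
    );
    CREATE TABLE courses (
        course_code TEXT PRIMARY KEY,
        course_name TEXT,
        campus_name TEXT NOT NULL,
        dept_code TEXT NOT NULL,
        FOREIGN KEY (campus_name, dept_code)
            REFERENCES departments(campus_name, dept_code)
    );
    """
)
# foreign_key_list rows: (id, seq, table, from, to, on_update, on_delete, match)
for fk_id, seq, table, col_from, col_to, *_ in conn.execute(
    "PRAGMA foreign_key_list(courses)"
):
    print(fk_id, seq, table, col_from, col_to)
# 0 0 departments campus_name campus_name
# 0 1 departments dept_code dept_code
```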
763361458 | MDU6SXNzdWU3NjMzNjE0NTg= | 1142 | "Stream all rows" is not at all obvious | 9599 | open | 0 | 9 | 2020-12-12T06:24:57Z | 2021-06-17T18:12:31Z | OWNER | Got a question about how to download all rows - the current option isn't at all clear. <img width="668" alt="loans__ppp_loans__9_511_rows_where_where_search_matches__tech__sorted_by_rowid" src="https://user-images.githubusercontent.com/9599/101977057-ac660b00-3bff-11eb-88f4-c93ffd03d3e0.png"> | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
811458446 | MDU6SXNzdWU4MTE0NTg0NDY= | 1233 | "datasette publish cloudrun" cannot publish files with spaces in their name | 9599 | open | 0 | 1 | 2021-02-18T21:08:31Z | 2021-02-18T21:10:08Z | OWNER | Got this error: ``` Step 6/9 : RUN datasette inspect fixtures.db extra database.db --inspect-file inspect-data.json ---> Running in db9da0068592 Usage: datasette inspect [OPTIONS] [FILES]... Try 'datasette inspect --help' for help. Error: Invalid value for '[FILES]...': Path 'extra' does not exist. The command '/bin/sh -c datasette inspect fixtures.db extra database.db --inspect-file inspect-data.json' returned a non-zero code: 2 ERROR ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 2 ``` While working on the demo for #1232, using this deploy command: ``` GITHUB_SHA=crossdb datasette publish cloudrun fixtures.db 'extra database.db' \ -m fixtures.json \ --plugins-dir=plugins \ --branch=$GITHUB_SHA \ --version-note=$GITHUB_SHA \ --extra-options="--setting template_debug 1 --crossdb" \ --install=pysqlite3-binary \ --service=datasette-latest-crossdb ``` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
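The generated Dockerfile interpolates the file names unquoted into the `RUN datasette inspect ...` line, which is why `extra database.db` arrives as two arguments. A likely shape for the fix (my assumption about where the bug lives, not the actual patch):

```python
import shlex

files = ["fixtures.db", "extra database.db"]
inspect_line = "RUN datasette inspect {} --inspect-file inspect-data.json".format(
    " ".join(shlex.quote(name) for name in files)
)
print(inspect_line)
# RUN datasette inspect fixtures.db 'extra database.db' --inspect-file inspect-data.json
```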
1557507274 | I_kwDOBm6k_c5c1azK | 2005 | `extra_template_vars` should be OK to return `None` | 9599 | open | 0 | 1 | 2023-01-26T01:40:45Z | 2023-01-26T01:41:50Z | OWNER | Got this exception and had to make sure it always returned `{}`: ``` File ".../python3.11/site-packages/datasette/app.py", line 1049, in render_template assert isinstance(extra_vars, dict), "extra_vars is of type {}".format( AssertionError: extra_vars is of type <class 'NoneType'> ``` | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
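The tolerant behaviour being asked for is a one-line normalization wherever the hook results are gathered (a sketch only, simplified from the real `render_template` call site):

```python
def gather_extra_vars(hook_results):
    extra_vars = {}
    for result in hook_results:
        # Treat a None return from extra_template_vars the same as {}
        extra_vars.update(result or {})
    return extra_vars

assert gather_extra_vars([None, {"a": 1}]) == {"a": 1}
```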
1386530156 | I_kwDOCGYnMM5SpMVs | 492 | Idea: ability to pass extra variables to `--convert` scripts | 9599 | open | 0 | 1 | 2022-09-26T18:30:45Z | 2022-09-26T18:33:19Z | OWNER | Got this idea from this example in https://jeqo.github.io/notes/2022-09-24-ingest-logs-sqlite/ ```bash sqlite-utils insert /tmp/kafka-logs.db logs server.log.2022-09-24-21 --text --convert " import re r = re.compile(r'^\[(?P<datetime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\] (?P<level>\w+) (?P<log>(.+(\n(?\!\[).+|)+))', re.MULTILINE) def convert(text): rows = [m.groupdict() for m in r.finditer(text)] for row in rows: row.update({'server': 'localhost'}) row.update({'component': 'broker'}) return rows " ``` And the accompanying note: > The `row.update` allows to label rows as I’m planning to ingest logs from different hosts and potentially different components. This made me think: it might be neat if you could inject additional variable values into that script with extra command-line options, to make this kind of reuse easier. Something like this: ```bash sqlite-utils insert /tmp/kafka-logs.db logs server.log.2022-09-24-21 --text --convert " import re r = re.compile(r'^\[(?P<datetime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\] (?P<level>\w+) (?P<log>(.+(\n(?\!\[).+|)+))', re.MULTILINE) def convert(text): rows = [m.groupdict() for m in r.finditer(text)] for row in rows: row.update({'server': server}) row.update({'component': component}) return rows " --var server "localhost" --var component "broker" ``` | 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1113384383 | I_kwDOBm6k_c5CXOW_ | 1611 | Avoid ever running count(*) against SpatiaLite KNN table | 9599 | open | 0 | 1 | 2022-01-25T03:32:54Z | 2022-02-02T06:45:47Z | OWNER | Got this in a trace: <img width="941" alt="image" src="https://user-images.githubusercontent.com/9599/150906011-5f09dc6d-4def-433a-8546-e4fd94e0edf0.png"> Looks like running `count(*)` against KNN took 83s! It ignored the time limit. And still only returned a count of 0. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1410305897 | I_kwDOBm6k_c5UD49p | 1845 | Reconsider the Datasette first-run experience | 9599 | open | 0 | 3 | 2022-10-15T22:21:31Z | 2022-10-16T08:54:53Z | OWNER | Had a really interesting conversation today about how hard it is to get from "I installed Datasette" to "I've done something useful with it": https://news.ycombinator.com/item?id=33216789#33218590 Spending some time focusing on that first-run experience feels very worthwhile. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
616087149 | MDU6SXNzdWU2MTYwODcxNDk= | 765 | publish heroku should default to currently tagged version | 9599 | open | 0 | 1 | 2020-05-11T18:24:06Z | 2020-05-11T18:25:43Z | OWNER | Had a report that deploying to Heroku was using the previously installed version of Datasette, not the latest. Could be because of this: https://github.com/simonw/datasette/blob/af6c6c5d6f929f951c0e63bfd1c82e37a071b50f/datasette/publish/heroku.py#L172-L179 Heroku documentation recommends pinning to specific versions https://devcenter.heroku.com/articles/python-pip So... we could ensure we default to an install value of `["datasette>=current_tag"]`. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
901009787 | MDU6SXNzdWU5MDEwMDk3ODc= | 1340 | Research: Cell action menu (like column action but for individual cells) | 9599 | open | 0 | 1 | 2021-05-25T15:49:16Z | 2021-05-26T18:59:58Z | OWNER | Had an idea today that it might be useful to select an individual cell and say things like "show me all other rows with the same value" - maybe even a set of other menu options against cells as well. Mocked up a show-on-hover ellipses demo using the CSS inspector: ![idea](https://user-images.githubusercontent.com/9599/119528316-f0744480-bd35-11eb-8eb4-1deea6d60cce.gif) | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
849396758 | MDU6SXNzdWU4NDkzOTY3NTg= | 1287 | Upgrade to Python 3.9.4 | 9599 | open | 0 | 5 | 2021-04-02T18:43:15Z | 2021-04-03T22:38:39Z | OWNER | Has some security fixes https://pythoninsider.blogspot.com/2021/04/python-393-and-389-are-now-available.html | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
803338729 | MDU6SXNzdWU4MDMzMzg3Mjk= | 33 | photo-to-sqlite: command not found | 11855322 | open | 0 | 4 | 2021-02-08T08:42:57Z | 2021-02-12T15:00:44Z | NONE | Having installed it in a venv, I get: ``` (venv) (base) Robins-MacBook:datasette robin$ photo-to-sqlite apple-photos photos.db -bash: photo-to-sqlite: command not found ``` | 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1452572348 | I_kwDOBm6k_c5WlH68 | 1900 | datasette package --spatialite throws error during build | 419145 | open | 0 | 11 | 2022-11-17T02:03:28Z | 2022-11-18T08:00:38Z | NONE | Hello! Attempting to use `datasette package` to bundle up a SpatiaLite DB and I'm getting this error during the `docker build`: ``` sqlite3.OperationalError: /usr/lib/x86_64-linux-gnu/mod_spatialite.so.so: cannot open shared object file: No such file or directory ``` It seems to be thrown when this step is run: ``` ERROR [6/6] RUN datasette inspect results.db --inspect-file inspect-data.json ``` This is with `v0.63.1`. | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
982803408 | MDU6SXNzdWU5ODI4MDM0MDg= | 1454 | Feature Request: Publish to IPFS | 1560788 | open | 0 | 0 | 2021-08-30T13:36:18Z | 2021-08-30T13:36:18Z | NONE | Hello, I am a huge fan of this being used for exploring data. I think it has a lot of flexibility not found in other tools. I'm not sure if what I'm asking for is possible: Can this be extended to publish to IPFS? IPFS is an attractive hosting option for decentralized journalism. Food for thought ~ | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1646068413 | I_kwDOBm6k_c5iHQK9 | 2048 | Test failures encountered while packaging for GNU Guix | 8332263 | open | 0 | 0 | 2023-03-29T15:36:54Z | 2023-03-29T15:36:54Z | NONE | Hello, While reviewing a package submitted to Guix to add `datasette`, the test suite produces the following errors: ``` =================================== FAILURES =================================== _________________________ test_row_strange_table_name __________________________ [gw21] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffef099be0> def test_row_strange_table_name(app_client): response = app_client.get( "/fixtures/table~2Fwith~2Fslashes~2Ecsv/3.json?_shape=objects" ) > assert response.status == 200 E assert 400 == 200 E + where 400 = <datasette.utils.testing.TestResponse object at 0x7fffef236a30>.status /tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:701: AssertionError ----------------------------- Captured stderr call ----------------------------- ERROR: conn=<sqlite3.Connection object at 0x7fffeedfe5d0>, sql = 'select rowid, * from [table%7E2Fwith%7E2Fslashes%7E2Ecsv] where "rowid"=:p0', params = {'p0': '3'}: no such table: table%7E2Fwith%7E2Fslashes%7E2Ecsv _______________ test_database_page_for_database_with_dot_in_name _______________ [gw15] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_with_dot = <datasette.utils.testing.TestClient object at 0x7fffef3416a0> def test_database_page_for_database_with_dot_in_name(app_client_with_dot): response = app_client_with_dot.get("/fixtures~2Edot.json") > assert response.status == 200 E assert 302 == 200 E + where 302 = <datasette.utils.testing.TestResponse object at 0x7fffef089fa0>.status /tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:633: AssertionError ___________________ test_tilde_encoded_database_names[fo%o] ____________________ [gw6] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/b… | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1123393829 | I_kwDODFE5qs5C9aEl | 10 | sqlite3.OperationalError: no such table: main.my_activity | 69208826 | open | 0 | 1 | 2022-02-03T17:59:29Z | 2022-03-20T02:38:07Z | NONE | Hello, When I run the command `google-takeout-to-sqlite my-activity db.db takeout-20220203T174446Z-001.zip`, I get this error: ``` Traceback (most recent call last): File "c:\users\julie\appdata\local\programs\python\python39-32\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "c:\users\julie\appdata\local\programs\python\python39-32\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\julie\AppData\Local\Programs\Python\Python39-32\Scripts\google-takeout-to-sqlite.exe\__main__.py", line 7, in <module> File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\click\core.py", line 1128, in __call__ return self.main(*args, **kwargs) File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\click\core.py", line 1053, in main rv = self.invoke(ctx) File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\click\core.py", line 1659, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\click\core.py", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\click\core.py", line 754, in invoke return __callback(*args, **kwargs) File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\google_takeout_to_sqlite\cli.py", line 31, in my_activity utils.save_my_activity(db, zf) File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\google_takeout_to_sqlite\utils.py", line 19, in save_my_activity db["my_activity"].create_index(["time"]) File "c:\users\julie\appdata\local\programs\python\python39-32\lib\site-packages\sqlite_utils\db.py", line 629, in create_index self.db.conn.execute(sql) sqlite3.OperationalError: no such table: main.my_activity ``` Thank you for your help … | 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1727478903 | I_kwDOBm6k_c5m9zx3 | 2081 | Update Endpoints defined in metadata throws 403 Forbidden after a while | 15085007 | open | 0 | 0 | 2023-05-26T11:52:30Z | 2023-05-26T11:52:30Z | NONE | Hello. I expose an endpoint to update `tasks`: ``` { "title": "My Datasette Instance", "databases": { "tasks": { "queries": { "update_task": { "sql": "UPDATE tasks SET status = :status, result = :result, systemMessage = :systemMessage WHERE queueID = :queueID", "write": true, "on_success_message": "Task updated", "on_success_redirect": "/tasks/tasks.json", "on_error_message": "Task update failed", "on_error_redirect": "/tasks.json", "params": ["queueID", "taskData", "status", "result", "systemMessage"] } } } } } ``` This works really well! But after a while, the Datasette instance responds with **403 Forbidden**. I have to delete the database and recreate it in order for updates to work again. Any help here? (´。_。`) | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
457147936 | MDU6SXNzdWU0NTcxNDc5MzY= | 512 | "about" parameter in metadata does not appear when alone | 7936571 | open | 0 | 3 | 2019-06-17T21:04:20Z | 2019-10-11T15:49:13Z | NONE | Here's an example of metadata I have for one database on datasette. ``` "Records-requests": { "tables": { "Some table": { "about": "This table has data." } } } ``` The text in `about` does not show up when I publish the data. But it shows up after I add a `"source"` parameter in the metadata. Is this intended? | 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
801780625 | MDU6SXNzdWU4MDE3ODA2MjU= | 9 | SSL Error | 12669260 | open | 0 | 2 | 2021-02-05T02:12:56Z | 2021-02-07T18:45:04Z | NONE | Here's the error I get when running `pip install pocket-to-sqlite`: ``` Could not fetch URL https://pypi.python.org/simple/pocket-to-sqlite/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:661) - skipping Could not find a version that satisfies the requirement pocket-to-sqlite (from versions: ) No matching distribution found for pocket-to-sqlite ``` Does this require python 3? | 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |