pull_requests: 1492889894
| column | value |
|---|---|
| id | 1492889894 |
| node_id | PR_kwDOBm6k_c5Y-7Em |
| number | 2162 |
| state | closed |
| locked | 0 |
| title | Add new `--internal internal.db` option, deprecate legacy `_internal` database |
| user | 15178711 |
| created_at | 2023-08-29T00:05:07Z |
| updated_at | 2023-08-29T03:24:23Z |
| closed_at | 2023-08-29T03:24:23Z |
| merged_at | 2023-08-29T03:24:23Z |
| merge_commit_sha | 92b8bf38c02465f624ce3f48dcabb0b100c4645d |
| assignee | |
| milestone | |
| draft | 0 |
| head | 73489cac8ef8e934e601302fa6594e27b75a382d |
| base | 2e2825869fc2655b5fcadc743f6f9dec7a49bc65 |
| author_association | CONTRIBUTOR |
| repo | 107914493 |
| url | https://github.com/simonw/datasette/pull/2162 |
| merged_by | |
| auto_merge | |

body:

refs #2157

This PR adds a new `--internal` option to `datasette serve`. If provided, it is the path to a persistent internal database that Datasette core and Datasette plugins can use to store data, as discussed in the proposal issue.

This PR also removes and deprecates the previous in-memory `_internal` database. Those tables now appear in the `internal` database with `core_` prefixes (e.g. `tables` in `_internal` is now `core_tables` in `internal`).

## A note on the new `core_` tables

One important note about these new `core_` tables: if an `--internal` DB is passed in, those `core_` tables will persist across multiple Datasette instances. This wasn't the case before, since `_internal` was always an in-memory database created from scratch.

I tried to make those `core_` tables `TEMP` tables - after all, there's only one `internal` DB connection at a time, so I figured it would work. But since we use the `Database()` wrapper for the internal DB, it has two separate connections: a default read-only connection and a write connection that is created when a write operation occurs. That meant the `TEMP` tables would be created by the write connection but would not be visible to the read-only connection.

So I had a brilliant idea: attach an in-memory named database with `cache=shared`, and create those tables there!

```sql
ATTACH DATABASE 'file:datasette_internal_core?mode=memory&cache=shared' AS core;
```

We'd run this on both the read-only connection and the write connection. That way, those tables would stay in memory, the two connections would share them through the `cache=shared` feature, and we'd be good to go.

However, I couldn't find an easy way to run an `ATTACH DATABASE` statement on the read-only connection. Using `Database()` as a wrapper for the internal DB is pretty limiting - it's meant for Datasette "data" databases, where we want multiple readers and possibly one write connection at a time. The internal database doesn't really require that kind of support - I think we could get away with a single read/write connection - but it seemed like too big of a rabbit hole to go through now.

----

:books: Documentation preview :books:: https://datasette--2162.org.readthedocs.build/en/2162/
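As a hedged illustration of the persistence the body describes: once the server has been run with a persistent internal database, the renamed `core_`-prefixed tables can be inspected with plain `sqlite3`. This is a minimal sketch, assuming `datasette serve ... --internal internal.db` has been run at least once and the file exists on disk (the path is illustrative):

```python
import sqlite3

# Open the persistent internal database created by
# `datasette serve ... --internal internal.db` (path assumed).
conn = sqlite3.connect("internal.db")

# The former _internal tables now carry a core_ prefix,
# e.g. `tables` is now `core_tables`.
names = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'core_%'"
).fetchall()
print(names)
```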
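The `TEMP`-table visibility problem described above can be reproduced with plain `sqlite3` standing in for the `Database()` wrapper; the file path and table name here are illustrative. `TEMP` tables live in a per-connection `temp` schema, so a second connection to the same file cannot see them:

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same file, standing in for the
# wrapper's write connection and read-only connection.
path = os.path.join(tempfile.mkdtemp(), "internal.db")
write_conn = sqlite3.connect(path)
read_conn = sqlite3.connect(path)

# TEMP tables are scoped to the connection that creates them...
write_conn.execute("CREATE TEMP TABLE core_tables (name TEXT)")
write_conn.execute("INSERT INTO core_tables VALUES ('demo')")
write_conn.commit()

# ...so the read-only connection cannot see them.
try:
    read_conn.execute("SELECT * FROM core_tables")
except sqlite3.OperationalError as err:
    print(err)  # no such table: core_tables
```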
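And a sketch of why the `cache=shared` workaround would have worked at the SQL level, again with plain `sqlite3` connections standing in for the wrapper's read and write connections (the connection setup is illustrative; `uri=True` is needed so the `ATTACH` filename is processed as a URI):

```python
import sqlite3

# Stand-ins for the wrapper's write and read-only connections.
write_conn = sqlite3.connect(":memory:", uri=True)
read_conn = sqlite3.connect(":memory:", uri=True)

# Attach the same named in-memory database on both connections.
ATTACH = (
    "ATTACH DATABASE "
    "'file:datasette_internal_core?mode=memory&cache=shared' AS core"
)
write_conn.execute(ATTACH)
read_conn.execute(ATTACH)

# A table created through one connection...
write_conn.execute("CREATE TABLE core.core_tables (name TEXT)")
write_conn.execute("INSERT INTO core.core_tables VALUES ('demo')")
write_conn.commit()

# ...is visible through the other, because both share the named
# in-memory database via cache=shared.
print(read_conn.execute("SELECT name FROM core.core_tables").fetchall())
# -> [('demo',)]
```

The catch, as the body notes, is not the SQL but finding a place to run the `ATTACH` on the read-only connection inside the `Database()` wrapper.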
Links from other tables
- 0 rows from pull_requests_id in labels_pull_requests