html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app https://github.com/simonw/datasette/issues/1169#issuecomment-753653260,https://api.github.com/repos/simonw/datasette/issues/1169,753653260,MDEyOklzc3VlQ29tbWVudDc1MzY1MzI2MA==,9599,2021-01-03T17:54:40Z,2021-01-03T17:54:40Z,OWNER,And @benpickles yes I would land that pull request straight away as-is. Thanks!,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777677671, https://github.com/simonw/datasette/issues/1169#issuecomment-753653033,https://api.github.com/repos/simonw/datasette/issues/1169,753653033,MDEyOklzc3VlQ29tbWVudDc1MzY1MzAzMw==,9599,2021-01-03T17:52:53Z,2021-01-03T17:52:53Z,OWNER,"Oh that's so frustrating! I was worried about that - I spotted a few runs that seemed faster and hoped that it meant that the package was coming out of the `~/.npm` cache, but evidently that's not the case. You've convinced me that Datasette itself should have a `package.json` - the Dependabot argument is a really good one. But... I'd really love to figure out a general pattern for using `npx` scripts in GitHub Actions workflows in a cache-friendly way. I have plenty of other projects that I'd love to run Prettier or Uglify or `puppeteer-cli` in without adding a `package.json` to them. Any ideas? The best I can think of is for the workflow itself to write out a `package.json` file (using `echo '{ ... }' > package.json`) as part of the run - that way the cache should work (I think) but I don't get a misleading `package.json` file sitting in the repo.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777677671, https://github.com/simonw/datasette/issues/983#issuecomment-753570710,https://api.github.com/repos/simonw/datasette/issues/983,753570710,MDEyOklzc3VlQ29tbWVudDc1MzU3MDcxMA==,9599,2021-01-03T05:29:56Z,2021-01-03T05:29:56Z,OWNER,"I thought about using browser events, but they don't quite match the API that I'm looking to provide. In particular, the great thing about Pluggy is that if you have multiple handlers registered for a specific plugin hook each of those handlers can return a value, and Pluggy will combine those values into a list of replies. This is great for things like plugin hooks that add extra menu items - each plugin can return a menu item (maybe as a label/URL/click-callback object) and the calling code can then add all of those items to the menu. See https://docs.datasette.io/en/stable/plugin_hooks.html#table-actions-datasette-actor-database-table for a Python example. I'm on the fence about relying on JavaScript modules. I need to think about browser compatibility for them - but I'm already commited to requiring support for `() => {}` arrow functions so maybe I'm committed to module support too already?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429, https://github.com/simonw/datasette/issues/1160#issuecomment-753568428,https://api.github.com/repos/simonw/datasette/issues/1160,753568428,MDEyOklzc3VlQ29tbWVudDc1MzU2ODQyOA==,9599,2021-01-03T05:02:32Z,2021-01-03T05:02:32Z,OWNER,"Should this command include a `--fts` option for configuring full-text search on one-or-more columns? 
I thought about doing that for `sqlite-utils insert` in https://github.com/simonw/sqlite-utils/issues/202 and decided not to because of the need to include extra options covering the FTS version, porter stemming options and whether or not to create triggers. But maybe I can set sensible defaults for that with `datasette insert ... -f title -f body`? Worth thinking about a bit more.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",775666296, https://github.com/simonw/sqlite-utils/issues/202#issuecomment-753568264,https://api.github.com/repos/simonw/sqlite-utils/issues/202,753568264,MDEyOklzc3VlQ29tbWVudDc1MzU2ODI2NA==,9599,2021-01-03T05:00:24Z,2021-01-03T05:00:24Z,OWNER,"I'm not going to implement this, because it actually needs several additional options that already exist on `sqlite-utils enable-fts`: ``` --fts4 Use FTS4 --fts5 Use FTS5 --tokenize TEXT Tokenizer to use, e.g. porter --create-triggers Create triggers to update the FTS tables when the parent table changes. ``` I'd rather not add all four of those options to `sqlite-utils insert` just to support this shortcut.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",738514367, https://github.com/simonw/sqlite-utils/issues/202#issuecomment-753567969,https://api.github.com/repos/simonw/sqlite-utils/issues/202,753567969,MDEyOklzc3VlQ29tbWVudDc1MzU2Nzk2OQ==,9599,2021-01-03T04:55:17Z,2021-01-03T04:55:43Z,OWNER,"The long version of this can be `--fts`, same as in `csvs-to-sqlite`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",738514367, https://github.com/simonw/sqlite-utils/pull/203#issuecomment-753567932,https://api.github.com/repos/simonw/sqlite-utils/issues/203,753567932,MDEyOklzc3VlQ29tbWVudDc1MzU2NzkzMg==,9599,2021-01-03T04:54:43Z,2021-01-03T04:54:43Z,OWNER,"Another option: expand the `ForeignKey` object to have `.columns` and `.other_columns` properties in addition to the existing `.column` and `.other_column` properties. These new plural properties would always return a tuple, which would be a one-item tuple for a non-compound-foreign-key. The question then is what should `.column` and `.other_column` return for compound foreign keys? I'd be inclined to say they should return `None` - which would trigger errors in code that encounters a compound foreign key for the first time, but those errors would at least be a strong indicator as to what had gone wrong. We can label `.column` and `.other_column` as deprecated and then remove them in `sqlite-utils 4.0`. Since this would still be a breaking change in some minor edge-cases I'm thinking maybe 4.0 needs to happen in order to land this feature. I'm not opposed to doing that, I was just hoping it might be avoidable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743384829, https://github.com/simonw/sqlite-utils/pull/203#issuecomment-753567744,https://api.github.com/repos/simonw/sqlite-utils/issues/203,753567744,MDEyOklzc3VlQ29tbWVudDc1MzU2Nzc0NA==,9599,2021-01-03T04:51:44Z,2021-01-03T04:51:44Z,OWNER,"One way that this could avoid a breaking change would be to have `fk.column` and `fk.other_column` remain as strings for non-compound-foreign-keys, but turn into tuples for a compound foreign key. 
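Roughly, that idea could look something like this - a hypothetical sketch with made-up names, not the real `sqlite_utils.db.ForeignKey`:
```python
# Hypothetical sketch only - not the actual sqlite-utils ForeignKey namedtuple.
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class ForeignKeySketch:
    table: str
    columns: Tuple[str, ...]
    other_table: str
    other_columns: Tuple[str, ...]

    @property
    def column(self) -> Union[str, Tuple[str, ...]]:
        # Plain string for the common single-column case...
        if len(self.columns) == 1:
            return self.columns[0]
        # ...but a tuple once the foreign key is compound.
        return self.columns

    @property
    def other_column(self) -> Union[str, Tuple[str, ...]]:
        if len(self.other_columns) == 1:
            return self.other_columns[0]
        return self.other_columns

simple = ForeignKeySketch('books', ('author_id',), 'authors', ('id',))
compound = ForeignKeySketch('visits', ('country', 'city'), 'places', ('country', 'city'))
assert simple.column == 'author_id'
assert compound.column == ('country', 'city')
```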
This is a bit of an ugly API design, and it could still break existing code that encounters a compound foreign key for the first time - but it would leave code working for the more common case of a non-compound-foreign-key.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743384829, https://github.com/simonw/sqlite-utils/pull/203#issuecomment-753567508,https://api.github.com/repos/simonw/sqlite-utils/issues/203,753567508,MDEyOklzc3VlQ29tbWVudDc1MzU2NzUwOA==,9599,2021-01-03T04:48:17Z,2021-01-03T04:48:17Z,OWNER,"Sorry for taking so long to review this! This approach looks great to me - being able to optionally pass a tuple anywhere the API currently expects a column is smart, and it's consistent with how the `pk=` parameter works elsewhere. There's just one problem I can see with this: the way it changes the `ForeignKey(...)` interface to always return a tuple for `.column` and `.other_column`, even if that tuple only contains a single item. This represents a breaking change to the existing API - any code that expects `ForeignKey.column` to be a single string (which is any code that has been written against that) will break. As such, I'd have to bump the major version of `sqlite-utils` to `4.0` in order to ship this. Ideally I'd like to make this change in a way that doesn't represent an API compatibility break. I need to think a bit harder about how that might be achieved.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743384829, https://github.com/simonw/sqlite-utils/issues/217#issuecomment-753566184,https://api.github.com/repos/simonw/sqlite-utils/issues/217,753566184,MDEyOklzc3VlQ29tbWVudDc1MzU2NjE4NA==,9599,2021-01-03T04:27:38Z,2021-01-03T04:27:38Z,OWNER,Documented here: https://sqlite-utils.datasette.io/en/latest/python-api.html#quoting-strings-for-use-in-sql,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777543336, https://github.com/simonw/sqlite-utils/issues/216#issuecomment-753566156,https://api.github.com/repos/simonw/sqlite-utils/issues/216,753566156,MDEyOklzc3VlQ29tbWVudDc1MzU2NjE1Ng==,9599,2021-01-03T04:27:14Z,2021-01-03T04:27:14Z,OWNER,Documented here: https://sqlite-utils.datasette.io/en/latest/python-api.html#introspection,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777540352, https://github.com/simonw/sqlite-utils/issues/218#issuecomment-753563757,https://api.github.com/repos/simonw/sqlite-utils/issues/218,753563757,MDEyOklzc3VlQ29tbWVudDc1MzU2Mzc1Nw==,9599,2021-01-03T03:49:51Z,2021-01-03T03:49:51Z,OWNER,Documentation: https://sqlite-utils.datasette.io/en/latest/cli.html#listing-triggers,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777560474, https://github.com/simonw/sqlite-utils/issues/215#issuecomment-753545757,https://api.github.com/repos/simonw/sqlite-utils/issues/215,753545757,MDEyOklzc3VlQ29tbWVudDc1MzU0NTc1Nw==,9599,2021-01-02T23:58:07Z,2021-01-02T23:58:07Z,OWNER,"Thought: maybe there should be a `.reset_counts()` method too, for if the table gets out of date with the triggers. 
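A minimal sketch of what such a method might do, assuming the `_counts` schema from the trigger prototype (`[table] TEXT PRIMARY KEY, [count] INTEGER`) and plain `sqlite3` rather than any real `sqlite-utils` API:
```python
# Sketch only - recalculates every row in the _counts table from scratch.
import sqlite3

def reset_counts(conn: sqlite3.Connection) -> None:
    # Every ordinary table except _counts itself gets its count recalculated.
    tables = [
        row[0]
        for row in conn.execute(
            'select name from sqlite_master where type = :t and name != :skip',
            {'t': 'table', 'skip': '_counts'},
        )
    ]
    with conn:
        # Clears out rows left behind by tables that have since been dropped.
        conn.execute('delete from _counts')
        for name in tables:
            sql = 'insert or replace into _counts values (:name, (select count(*) from [{}]))'.format(name)
            conn.execute(sql, {'name': name})
```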
One way that could happen is if a table is dropped and recreated - the counts in the `_counts` table would likely no longer match the number of rows in that table.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777535402, https://github.com/simonw/sqlite-utils/issues/215#issuecomment-753545381,https://api.github.com/repos/simonw/sqlite-utils/issues/215,753545381,MDEyOklzc3VlQ29tbWVudDc1MzU0NTM4MQ==,9599,2021-01-02T23:52:52Z,2021-01-02T23:52:52Z,OWNER,Idea: a `db.cached_counts()` method that returns a dictionary of data from the `_counts` table. Call it with a list of tables to get back the counts for just those tables.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777535402, https://github.com/simonw/sqlite-utils/issues/217#issuecomment-753544914,https://api.github.com/repos/simonw/sqlite-utils/issues/217,753544914,MDEyOklzc3VlQ29tbWVudDc1MzU0NDkxNA==,9599,2021-01-02T23:47:42Z,2021-01-02T23:47:42Z,OWNER,https://github.com/simonw/sqlite-utils/blob/9a5c92b63e7917c93cc502478493c51c781b2ecc/sqlite_utils/db.py#L231-L239,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777543336, https://github.com/simonw/sqlite-utils/issues/213#issuecomment-753535488,https://api.github.com/repos/simonw/sqlite-utils/issues/213,753535488,MDEyOklzc3VlQ29tbWVudDc1MzUzNTQ4OA==,9599,2021-01-02T22:03:48Z,2021-01-02T22:03:48Z,OWNER,"I got this error while prototyping this: too many levels of trigger recursion It looks like that's because SQLite doesn't like triggers on a table that themselves then update that table - so I'm going to exclude the `_counts` table from this mechanism.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777529979, https://github.com/simonw/sqlite-utils/issues/213#issuecomment-753533775,https://api.github.com/repos/simonw/sqlite-utils/issues/213,753533775,MDEyOklzc3VlQ29tbWVudDc1MzUzMzc3NQ==,9599,2021-01-02T21:47:10Z,2021-01-02T21:47:10Z,OWNER,"I'm going to skip virtual tables, which I can identify using this property: https://github.com/simonw/sqlite-utils/blob/1cad7fad3e7a5b734088f5cc545b69a055e636da/sqlite_utils/db.py#L720-L726","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777529979, https://github.com/simonw/datasette/issues/1168#issuecomment-753524779,https://api.github.com/repos/simonw/datasette/issues/1168,753524779,MDEyOklzc3VlQ29tbWVudDc1MzUyNDc3OQ==,9599,2021-01-02T20:19:26Z,2021-01-02T20:19:26Z,OWNER,Idea: version the metadata scheme. 
If the table is called `_metadata_v1` it gives me a clear path to designing a new scheme in the future.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/sqlite-utils/issues/212#issuecomment-753422324,https://api.github.com/repos/simonw/sqlite-utils/issues/212,753422324,MDEyOklzc3VlQ29tbWVudDc1MzQyMjMyNA==,9599,2021-01-02T03:00:34Z,2021-01-02T03:00:34Z,OWNER,"Here's a prototype: ```python with db.conn: db.conn.executescript("""""" CREATE TABLE IF NOT EXISTS [_counts] ([table] TEXT PRIMARY KEY, [count] INTEGER DEFAULT 0); CREATE TRIGGER IF NOT EXISTS [Street_Tree_List_counts_ai] AFTER INSERT ON [Street_Tree_List] BEGIN INSERT OR REPLACE INTO _counts VALUES ('Street_Tree_List', COALESCE( (SELECT count FROM _counts WHERE [table]='Street_Tree_List'), 0) + 1); END; CREATE TRIGGER IF NOT EXISTS [Street_Tree_List_counts_ad] AFTER DELETE ON [Street_Tree_List] BEGIN INSERT OR REPLACE INTO _counts VALUES ('Street_Tree_List', COALESCE( (SELECT count FROM _counts WHERE [table]='Street_Tree_List'), 0) - 1); END; INSERT OR REPLACE INTO _counts VALUES ('Street_Tree_List', (select count(*) from [Street_Tree_List])); """""") ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777392020, https://github.com/simonw/sqlite-utils/issues/210#issuecomment-753406744,https://api.github.com/repos/simonw/sqlite-utils/issues/210,753406744,MDEyOklzc3VlQ29tbWVudDc1MzQwNjc0NA==,9599,2021-01-02T00:02:39Z,2021-01-02T00:02:39Z,OWNER,"It looks like https://github.com/ofajardo/pyreadr is a good library for this. I won't add this to `sqlite-utils` because it's quite a bulky dependency for a relatively small feature. Normally I'd write a `rdata-to-sqlite` tool similar to https://pypi.org/project/dbf-to-sqlite/ - but I'm actually working on a new plugin hook for Datasette that might be an even better fit for this. The idea is to allow Datasette plugins to define input formats - such as RData - which would then result in being able to import them on the command-line with `datasette insert my.db file.rdata` or by uploading a file through the Datasette web interface. That work is happening over here: https://github.com/simonw/datasette/issues/1160 - I'll close this issue in favour of a sometime-in-the-future `datasette-import-rdata` plugin.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",767685961, https://github.com/simonw/sqlite-utils/issues/209#issuecomment-753405835,https://api.github.com/repos/simonw/sqlite-utils/issues/209,753405835,MDEyOklzc3VlQ29tbWVudDc1MzQwNTgzNQ==,9599,2021-01-01T23:52:06Z,2021-01-01T23:52:06Z,OWNER,I just hit this one too. Such a weird bug!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",766156875, https://github.com/simonw/datasette/issues/1168#issuecomment-753402423,https://api.github.com/repos/simonw/datasette/issues/1168,753402423,MDEyOklzc3VlQ29tbWVudDc1MzQwMjQyMw==,9599,2021-01-01T23:16:05Z,2021-01-01T23:16:05Z,OWNER,"One catch: solving the ""show me all metadata for everything in this Datasette instance"" problem. Ideally there would be a SQLite table that can be queried for this. 
But the need to resolve the potentially complex set of precedence rules means that table would be difficult if not impossible to provide at run-time. Ideally a denormalized table would be available that featured the results of running those precedence rule calculations. But how to handle keeping this up-to-date? It would need to be recalculated any time a `_metadata` table in any of the attached databases had an update. This is a much larger problem - but one potential fix would be to use triggers to maintain a ""version number"" for the `_metadata` table - similar to SQLite's own built-in `schema_version` mechanism. Triggers could increment a counter any time a record in that table was added, deleted or updated. Such a mechanism would have applications outside of just this `_metadata` system. The ability to attach a version number to any table and have it automatically incremented when that table changes (via triggers) could help with all kinds of other Datasette-at-scale problems, including things like cached table counts.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753401001,https://api.github.com/repos/simonw/datasette/issues/1168,753401001,MDEyOklzc3VlQ29tbWVudDc1MzQwMTAwMQ==,9599,2021-01-01T23:01:45Z,2021-01-01T23:01:45Z,OWNER,I need to prototype this. Could I do that as a plugin? I think so - I could try out the algorithm for loading metadata and display it on pages using some custom templates.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753400420,https://api.github.com/repos/simonw/datasette/issues/1168,753400420,MDEyOklzc3VlQ29tbWVudDc1MzQwMDQyMA==,9599,2021-01-01T22:53:58Z,2021-01-01T22:53:58Z,OWNER,"Precedence idea: - First priority is non-_internal metadata from other databases - if those conflict then pick then the alphabetically-ordered-first database name wins - Next priority: `_internal` metadata, which should have been loaded from `metadata.json` - Last priority: the `_metadata` table from that database itself, i.e. 
the default ""baked in"" metadata","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753400306,https://api.github.com/repos/simonw/datasette/issues/1168,753400306,MDEyOklzc3VlQ29tbWVudDc1MzQwMDMwNg==,9599,2021-01-01T22:52:44Z,2021-01-01T22:52:44Z,OWNER,"Also: probably load column metadata as part of the table metadata rather than loading column metadata individually, since it's going to be rare to want the metadata for a single column rather than for an entire table full of columns.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753400265,https://api.github.com/repos/simonw/datasette/issues/1168,753400265,MDEyOklzc3VlQ29tbWVudDc1MzQwMDI2NQ==,9599,2021-01-01T22:52:09Z,2021-01-01T22:52:09Z,OWNER,"From an implementation perspective, I think the way this works is SQL queries read the relevant metadata from ALL available metadata tables, then Python code solves the precedence rules to produce the final, combined metadata for a database/table/column.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753399635,https://api.github.com/repos/simonw/datasette/issues/1168,753399635,MDEyOklzc3VlQ29tbWVudDc1MzM5OTYzNQ==,9599,2021-01-01T22:45:21Z,2021-01-01T22:50:21Z,OWNER,"Would also need to figure out the precedence rules: - What happens if the database has a `_metadata` table with data that conflicts with a remote metadata record from another database? I think the other database should win, because that allows plugins to over-ride the default metadata for something. - Do JSON values get merged together? So if one table provides a description and another provides a title do both values get returned? - If a database has a `license`, does that ""cascade"" down to the tables? What about `source` and `about`? - What if there are two databases (or more) that provide conflicting metadata for a table in some other database? Also, `_internal` may have loaded data from `metadata.json` that conflicts with some other remote table metadata definition.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753399428,https://api.github.com/repos/simonw/datasette/issues/1168,753399428,MDEyOklzc3VlQ29tbWVudDc1MzM5OTQyOA==,9599,2021-01-01T22:43:14Z,2021-01-01T22:43:22Z,OWNER,"Could this use a compound primary key on `database, table, column`? Does that work with null values?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753399366,https://api.github.com/repos/simonw/datasette/issues/1168,753399366,MDEyOklzc3VlQ29tbWVudDc1MzM5OTM2Ng==,9599,2021-01-01T22:42:37Z,2021-01-01T22:42:37Z,OWNER,"So what would the database schema for this look like? I'm leaning towards a single table called `_metadata`, because that's a neater fit for baking the metadata into the database file along with the data that it is describing. 
Alternatively I could have multiple tables sharing that prefix - `_metadata_database` and `_metadata_tables` and `_metadata_columns` perhaps. If it's just a single `_metadata` table, the schema could look like this: | database | table | column | metadata | | --- | --- | --- | --- | | | mytable | | {""title"": ""My Table"" } | | | mytable | mycolumn | {""description"": ""Column description"" } | | otherdb | othertable | | {""description"": ""Table in another DB"" } | If the `database` column is `null` it means ""this is describing a table in the same database file as this `_metadata` table"". The alternative to the `metadata` JSON column would be separate columns for each potential metadata value - `license`, `source`, `about`, `about_url` etc. But that makes it harder for people to create custom metadata fields.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753398542,https://api.github.com/repos/simonw/datasette/issues/1168,753398542,MDEyOklzc3VlQ29tbWVudDc1MzM5ODU0Mg==,9599,2021-01-01T22:37:24Z,2021-01-01T22:37:24Z,OWNER,"The direction I'm leaning in now is the following: - Metadata always lives in SQLite tables - These tables can be co-located with the database they describe (same DB file) - ... or they can be in a different DB file and reference the other database that they are describing - Metadata provided on startup in a `metadata.json` file is loaded into an in-memory metadata table using that same mechanism Plugins that want to provide metadata can do so by populating a table. They could even maintain their own in-memory database for this, or they could write to the `_internal` in-memory database, or they could write to a table in a database on disk.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753392102,https://api.github.com/repos/simonw/datasette/issues/1168,753392102,MDEyOklzc3VlQ29tbWVudDc1MzM5MjEwMg==,9599,2021-01-01T22:06:33Z,2021-01-01T22:06:33Z,OWNER,"Some SQLite databases include SQL comments in the schema definition which tell you what each column means: ```sql CREATE TABLE User -- A table comment ( uid INTEGER, -- A field comment flags INTEGER -- Another field comment ); ``` The problem with these is that they're not exposed to SQLite in any mechanism other than parsing the `CREATE TABLE` statement from the `sqlite_master` table to extract those columns. I had an idea to build a plugin that could return these. That would be easy with a ""get metadata for this column"" plugin hook - in the absence of one a plugin could still run that reads the schemas on startup and uses them to populate a metadata database table somewhere.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753391869,https://api.github.com/repos/simonw/datasette/issues/1168,753391869,MDEyOklzc3VlQ29tbWVudDc1MzM5MTg2OQ==,9599,2021-01-01T22:04:30Z,2021-01-01T22:04:30Z,OWNER,"The sticking point here seems to be the plugin hook. Allowing plugins to over-ride the way the question ""give me the metadata for this database/table/column"" is answered makes the database-backed metadata mechanisms much more complicated to think about. 
What if plugins didn't get to over-ride metadata in this way, but could instead update the metadata in a persistent Datasette-managed storage mechanism? Then maybe Datasette could do the following: - Maintain metadata in `_internal` that has been loaded from `metadata.json` - Know how to check a database for baked-in metadata (maybe in a `_metadata` table) - Know how to fall back on the `_internal` metadata if no baked-in metadata is available If database files were optionally allowed to store metadata about tables that live in another database file this could perhaps solve the plugin needs - since an ""edit metadata"" plugin would be able to edit records in a separate, dedicated `metadata.db` database to store new information about tables in other files.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753390791,https://api.github.com/repos/simonw/datasette/issues/1168,753390791,MDEyOklzc3VlQ29tbWVudDc1MzM5MDc5MQ==,9599,2021-01-01T22:00:42Z,2021-01-01T22:00:42Z,OWNER,"Here are the requirements I'm currently trying to satisfy: - It should be possible to query the metadata for ALL attached tables in one place, potentially with pagination and filtering - Metadata should be able to exist in the current `metadata.json` file - It should also be possible to bundle metadata in a table in the SQLite database files themselves - Plugins should be able to define their own special mechanisms for metadata. This is particularly interesting for providing a UI that allows users to edit the metadata for their existing tables.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753390262,https://api.github.com/repos/simonw/datasette/issues/1168,753390262,MDEyOklzc3VlQ29tbWVudDc1MzM5MDI2Mg==,9599,2021-01-01T21:58:11Z,2021-01-01T21:58:11Z,OWNER,"One possibility: plugins could write directly to that in-memory database table. But how would they know to write again should the server restart? Maybe they would write to it once when called by the `startup` plugin hook, and then update it (and their own backing store) when metadata changes for some reason. Feels a bit messy though. Also: if I want to support metadata optionally living in a `_metadata` table colocated with the data in a SQLite database file itself, how would that affect the `metadata` columns in `_internal`? How often would Datasette denormalize and copy data across from the on-disk `_metadata` tables to the `_internal` in-memory columns?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753389938,https://api.github.com/repos/simonw/datasette/issues/1168,753389938,MDEyOklzc3VlQ29tbWVudDc1MzM4OTkzOA==,9599,2021-01-01T21:54:15Z,2021-01-01T21:54:15Z,OWNER,"So what if the `databases`, `tables` and `columns` tables in `_internal` each grew a new `metadata` text column? These columns could be populated by Datasette on startup through reading the `metadata.json` file. 
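A very rough sketch of that startup step - the `tables` table and its `database_name` / `table_name` / `metadata` columns here are assumptions for illustration, not the actual `_internal` schema:
```python
# Illustrative only - populates an assumed metadata column from metadata.json.
import json
import sqlite3

def load_table_metadata(internal: sqlite3.Connection, metadata_path: str) -> None:
    with open(metadata_path) as fp:
        metadata = json.load(fp)
    with internal:
        for db_name, db_meta in metadata.get('databases', {}).items():
            for table_name, table_meta in db_meta.get('tables', {}).items():
                internal.execute(
                    'update tables set metadata = :meta '
                    'where database_name = :db and table_name = :table',
                    {
                        'meta': json.dumps(table_meta),
                        'db': db_name,
                        'table': table_name,
                    },
                )
```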
But how would plugins interact with them?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753389477,https://api.github.com/repos/simonw/datasette/issues/1168,753389477,MDEyOklzc3VlQ29tbWVudDc1MzM4OTQ3Nw==,9599,2021-01-01T21:49:57Z,2021-01-01T21:49:57Z,OWNER,"What if metadata was stored in a JSON text column in the existing `_internal` tables? This would allow for users to invent additional metadata fields in the future beyond the current `license`, `license_url` etc fields - without needing a schema change. The downside of JSON columns generally is that they're harder to run indexed queries against. For metadata I don't think that matters - even with 10,000 tables each with their own metadata a SQL query asking for e.g. ""everything that has Apache 2 as the license"" would return in just a few ms.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753388809,https://api.github.com/repos/simonw/datasette/issues/1168,753388809,MDEyOklzc3VlQ29tbWVudDc1MzM4ODgwOQ==,9599,2021-01-01T21:47:51Z,2021-01-01T21:47:51Z,OWNER,"A database that exposes metadata will have the same restriction as the new `_internal` database that exposes columns and tables, in that it needs to take permissions into account. A user should not be able to view metadata for tables that they are not able to see. As such, I'd rather bundle any metadata tables into the existing `_internal` database so I don't have to solve that permissions problem in two places.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1168#issuecomment-753366024,https://api.github.com/repos/simonw/datasette/issues/1168,753366024,MDEyOklzc3VlQ29tbWVudDc1MzM2NjAyNA==,9599,2021-01-01T18:48:34Z,2021-01-01T18:48:34Z,OWNER,Also: in #188 I proposed bundling metadata in the SQLite database itself alongside the data. This is a great way of ensuring metadata travels with the data when it is downloaded as a SQLite `.db` file. But how would that play with the idea of an in-memory `_metadata` table? 
Could that table perhaps offer views that join data across multiple attached physical databases?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1166#issuecomment-753224351,https://api.github.com/repos/simonw/datasette/issues/1166,753224351,MDEyOklzc3VlQ29tbWVudDc1MzIyNDM1MQ==,9599,2020-12-31T23:23:29Z,2020-12-31T23:23:29Z,OWNER,I should configure the action to only run if changes have been made within the `datasette/static` directory.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777140799, https://github.com/simonw/datasette/issues/983#issuecomment-753221646,https://api.github.com/repos/simonw/datasette/issues/983,753221646,MDEyOklzc3VlQ29tbWVudDc1MzIyMTY0Ng==,9599,2020-12-31T22:58:47Z,2020-12-31T22:58:47Z,OWNER,"https://github.com/mishoo/UglifyJS/issues/1905#issuecomment-300485490 says: > `sourceMappingURL` aren't added by default in `3.x` due to one of the feature requests not to - some users are putting them within HTTP response headers instead. > > So the command line for that would be: > > ```js > $ uglifyjs main.js -cmo main.min.js --source-map url=main.min.js.map > ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429, https://github.com/simonw/datasette/issues/1164#issuecomment-753221362,https://api.github.com/repos/simonw/datasette/issues/1164,753221362,MDEyOklzc3VlQ29tbWVudDc1MzIyMTM2Mg==,9599,2020-12-31T22:55:57Z,2020-12-31T22:55:57Z,OWNER,"I had to add this as the first line in `table.min.js` for the source mapping to work: ``` //# sourceMappingURL=/-/static/table.min.js.map ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",776634318, https://github.com/simonw/datasette/issues/1164#issuecomment-753220665,https://api.github.com/repos/simonw/datasette/issues/1164,753220665,MDEyOklzc3VlQ29tbWVudDc1MzIyMDY2NQ==,9599,2020-12-31T22:49:36Z,2020-12-31T22:49:36Z,OWNER,"I started with a 7K `table.js` file. `npx uglifyjs table.js --source-map -o table.min.js` gave me a 5.6K `table.min.js` file. `npx uglifyjs table.js --source-map -o table.min.js --compress --mangle` gave me 4.5K.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",776634318, https://github.com/simonw/datasette/issues/1164#issuecomment-753220412,https://api.github.com/repos/simonw/datasette/issues/1164,753220412,MDEyOklzc3VlQ29tbWVudDc1MzIyMDQxMg==,9599,2020-12-31T22:47:36Z,2020-12-31T22:47:36Z,OWNER,"I'm trying to minify `table.js` and I ran into a problem: Uglification failed. Unexpected character '`' It turns out `uglify-js` doesn't support ES6 syntax! But `uglify-es` does: npm install uglify-es Annoyingly it looks like `uglify-es` uses the same CLI command, `uglifyjs`. 
So after installing it this seemed to work: npx uglifyjs table.js --source-map -o table.min.js I really don't like how `npx uglifyjs` could mean different things depending on which package was installed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",776634318, https://github.com/simonw/datasette/issues/983#issuecomment-753219521,https://api.github.com/repos/simonw/datasette/issues/983,753219521,MDEyOklzc3VlQ29tbWVudDc1MzIxOTUyMQ==,9599,2020-12-31T22:39:52Z,2020-12-31T22:39:52Z,OWNER,For inlining the `plugins.min.js` file into the Jinja templates I could use the trick described here: https://stackoverflow.com/a/41404611 - which adds a `{{ include_file('file.txt') }}` function to Jinja.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",712260429, https://github.com/simonw/datasette/issues/983#issuecomment-753219407,https://api.github.com/repos/simonw/datasette/issues/983,753219407,MDEyOklzc3VlQ29tbWVudDc1MzIxOTQwNw==,9599,2020-12-31T22:38:45Z,2020-12-31T22:39:10Z,OWNER,"You'll be able to add JavaScript plugins using a bunch of different mechanisms: - In a custom template, dropping the code in to a `