{"html_url": "https://github.com/simonw/datasette/issues/164#issuecomment-804541064", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/164", "id": 804541064, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDU0MTA2NA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:45:12Z", "updated_at": "2021-03-23T02:45:12Z", "author_association": "CONTRIBUTOR", "body": "\"datasette skeleton\" feature removed #476", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 280013907, "label": "datasette skeleton command for kick-starting database and table metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1510423051", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1510423051, "node_id": "IC_kwDOBm6k_c5aBzoL", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2023-04-16T16:12:14Z", "updated_at": "2023-04-20T05:14:39Z", "author_association": "CONTRIBUTOR", "body": "# Javascript Plugin Docs (alpha)\r\n\r\n## Motivation\r\n\r\nThe Datasette JS Plugin API allows developers to add interactive features to the UI, without having to modify the Python source code. \r\n\r\n## Setup\r\n\r\nNo external/NPM dependencies are needed.\r\n\r\nPlugin behavior is coordinated by the Datasette `manager`. Every page has 1 `manager`.\r\n\r\nThere are 2 ways to add your plugin to the `manager`.\r\n\r\n1. Read `window.__DATASETTE__` if the manager was already loaded.\r\n\r\n```js\r\nconst manager = window.__DATASETTE__;\r\n```\r\n\r\n2. Wait for the `datasette_init` event to fire if your code was loaded before the manager is ready. \r\n\r\n```js\r\ndocument.addEventListener(\"datasette_init\", function (evt) {\r\n const { detail: manager } = evt;\r\n \r\n // register plugin here\r\n});\r\n```\r\n\r\n3. Add plugin to the manager by calling `manager.registerPlugin` in a JS file. Each plugin will supply 1 or more hooks with\r\n\r\n- unique name (`YOUR_PLUGIN_NAME`)\r\n- a numeric version (starting at `0.1`), \r\n- configuration value, the details vary by hook. (In this example, `getColumnActions` takes a function)\r\n\r\n```js\r\nmanager.registerPlugin(\"YOUR_PLUGIN_NAME\", {\r\n version: 0.1,\r\n makeColumnActions: (columnMeta) => {\r\n return [\r\n {\r\n label: \"Copy name to clipboard\",\r\n // evt = native click event\r\n onClick: (evt) => copyToClipboard(columnMeta.column),\r\n }\r\n ];\r\n },\r\n });\r\n```\r\n\r\nThere are 2 plugin hooks available to `manager.registerPlugin`:\r\n\r\n- `makeColumnActions` - Add items to the cog menu for headers on datasette table pages\r\n- `makeAboveTablePanelConfigs` - Add items to \"tabbed\" panel above the `` on pages that use the Datasette table template.\r\n\r\nWhile there are additional properties on the `manager`, but it's not advised to depend on them directly as the shape is subject to change.\r\n\r\n4. To make your JS file available as a Datasette plugin from the Python side, you can add a python file resembling [this](https://github.com/simonw/datasette/pull/2052/files#diff-c5ecf3d22075a60d04a4e95da2e15c612cf1bc84e38d777b67ba60dbd156e293) to your plugins directory. 
Note that you could host your JS file anywhere; it doesn't have to be served from the Datasette statics folder.\r\n\r\nI welcome ideas for more hooks, or feedback on the current design!\r\n\r\n## Examples\r\n\r\nSee the [example plugins file](https://github.com/simonw/datasette/blob/2d92b9328022d86505261bcdac419b6ed9cb2236/datasette/static/table-example-plugins.js) for additional examples.\r\n\r\n## Hooks API Guide\r\n\r\n### `makeAboveTablePanelConfigs`\r\n\r\nProvide a function that returns a list of panel objects. Each panel object should contain\r\n\r\n1. A unique string `id`\r\n2. A string `label` for the tab\r\n3. A `render` function. The first argument is a reference to an HTML [Element](https://developer.mozilla.org/en-US/docs/Web/API/Element). \r\n\r\nExample:\r\n\r\n```js\r\n manager.registerPlugin(\"panel-plugin-graphs\", {\r\n version: 0.1,\r\n makeAboveTablePanelConfigs: () => {\r\n return [\r\n {\r\n id: 'first-panel',\r\n label: \"My new panel\",\r\n render: node => {\r\n const description = document.createElement('p');\r\n description.innerText = 'Hello world';\r\n node.appendChild(description);\r\n }\r\n }\r\n ];\r\n },\r\n });\r\n```\r\n\r\n### `makeColumnActions`\r\n\r\nProvide a function that returns a list of action objects. Each action object has\r\n\r\n1. A string `label` for the menu dropdown label\r\n2. An `onClick` handler function.\r\n\r\nExample:\r\n\r\n```js\r\n manager.registerPlugin(\"column-name-plugin\", {\r\n version: 0.1,\r\n makeColumnActions: (columnMeta) => {\r\n \r\n // Info about selected column. \r\n const { columnName, columnNotNull, columnType, isPk } = columnMeta;\r\n\r\n return [\r\n {\r\n label: \"Copy name to clipboard\",\r\n onClick: (evt) => copyToClipboard(columnName),\r\n }\r\n ];\r\n },\r\n });\r\n```\r\n\r\nThe `makeColumnActions` callback has access to an object with metadata about the clicked column. 
These fields include:\r\n\r\n- columnName: string (name of the column)\r\n- columnNotNull: boolean\r\n- columnType: sqlite datatype enum (text, number, etc)\r\n- isPk: boolean (whether this is the primary key)\r\n\r\nYou can use this column metadata to customize the action config objects (for example, handling different summaries for text vs number columns).\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1651082214, "label": "feat: Javascript Plugin API (Custom panels, column menu items with JS actions)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1510423215", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1510423215, "node_id": "IC_kwDOBm6k_c5aBzqv", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2023-04-16T16:12:59Z", "updated_at": "2023-04-16T16:12:59Z", "author_association": "CONTRIBUTOR", "body": "## Research notes\r\n\r\n- I stuck to the \"minimal dependencies\" ethos of datasette (no React, Typescript, JS linting, etc).\r\n- Main threads on JS plugin development\r\n - Main: sketch of pluggy-inspired system: https://github.com/simonw/datasette/issues/983\r\n - Main: provide locations in Datasette HTML that are designed for multiple plugins to safely cooperate with each other (starting with the panel, but eventually could extend to \"search boxes\"): https://github.com/simonw/datasette/issues/1191\r\n - Main: HTML hooks for JS plugin authors: https://github.com/simonw/datasette/issues/987\r\n- Prior threads on JS plugins in Datasette for future design directions\r\n - Idea: pass useful strings to JS plugins: https://github.com/simonw/datasette/issues/1565\r\n - Idea: help with plugin dependency loading: https://github.com/simonw/datasette/issues/1542 . (IMO - the plugin providing the dependency can emit an event once it's done. Other plugins can listen for it, or ask the manager to inform them when the dependency is available). 
\r\n - Idea: help plugins to manage state in shareable URLs (plugins shouldn't have to interact with the URL directly, should have some basic insulation from clobbering each others' keys): https://github.com/simonw/datasette/issues/1144\r\n- Articles on plugins reviewed\r\n - https://css-tricks.com/designing-a-javascript-plugin-system/\r\n- Plugin/Extension systems reviewed (mostly JS).\r\n - Yarn: https://yarnpkg.com/advanced/plugin-tutorial\r\n - Tapable https://github.com/webpack/tapable (used by Auto, webpack)\r\n - Pluggy: https://pluggy.readthedocs.io/en/stable/\r\n - VSCode: https://code.visualstudio.com/api/get-started/your-first-extension\r\n - Chrome: https://developer.chrome.com/docs/extensions/reference/\r\n - Figma/Figjam Widget: https://www.figma.com/widget-docs/\r\n - Datadog Apps: [Programming Model](https://github.com/DataDog/apps/blob/master/docs/en/programming-model.md)\r\n - Storybook: https://storybook.js.org/docs/react/addons/addons-api", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1651082214, "label": "feat: Javascript Plugin API (Custom panels, column menu items with JS actions)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1112#issuecomment-735279355", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1112", "id": 735279355, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTI3OTM1NQ==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-11-28T19:21:09Z", "updated_at": "2020-11-28T19:21:09Z", "author_association": "CONTRIBUTOR", "body": "(Even more annoying is that I see my editor leaked an extra delete space at the end of the line. I'm happy to rebuild this to be less annoying, but you probably don't want the changelog update either way)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 752749485, "label": "Fix --metadata doc usage"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1356#issuecomment-853895159", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1356", "id": 853895159, "node_id": "MDEyOklzc3VlQ29tbWVudDg1Mzg5NTE1OQ==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-06-03T14:03:59Z", "updated_at": "2021-06-03T14:03:59Z", "author_association": "CONTRIBUTOR", "body": "(Putting thoughts here to keep the conversation in one place.)\r\n\r\nI think using datasette for this use-case is the right approach. I usually have both datasette and sqlite-utils installed in the same project, and that's where I'm trying out queries, so it probably makes the most sense to have datasette also manage the output (and maybe the input, too).\r\n\r\nIt seems like both `--get` and `--query` could work better as subcommands, rather than options, if you're looking at building out a full CLI experience in datasette. It would give a cleaner separation in what you're trying to do and let each have its own dedicated options. 
So something like this:\r\n\r\n```sh\r\n# run an arbitrary query\r\ndatasette query covid.db \"select * from ny_times_us_counties limit 1\" --format yaml\r\n\r\n# run a canned query\r\ndatasette get covid.db some-canned-query --format yaml\r\n```\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 910092577, "label": "Research: syntactic sugar for using --get with SQL queries, maybe \"datasette query\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1606352600", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1606352600, "node_id": "IC_kwDOBm6k_c5fvv7Y", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2023-06-26T00:17:04Z", "updated_at": "2023-06-26T00:17:04Z", "author_association": "CONTRIBUTOR", "body": ":wave: would love to see this get merged soon! I want to make a javascript plugin on top of the code-mirror editor to make a few things nicer (function auto-complete, table/column descriptions, etc.), and this would help out a bunch", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1651082214, "label": "feat: Javascript Plugin API (Custom panels, column menu items with JS actions)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1316320521", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1316320521, "node_id": "IC_kwDOBm6k_c5OdXUJ", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T04:29:23Z", "updated_at": "2022-11-16T04:29:23Z", "author_association": "CONTRIBUTOR", "body": "\"Screenshot\r\n\r\nUI issue I see on the autocomplete popup with overlapping icon & text. 
Screenshot's from Firefox, it seems even a little more pronounced on Safari", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647936117", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647936117, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:25:17Z", "updated_at": "2020-06-23T06:25:17Z", "author_association": "CONTRIBUTOR", "body": "> \r\n> \r\n> ```\r\n> sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50\r\n> ```\r\n> \r\n> Looks like that will take 35 minutes to run (it's not a particularly fast tool).\r\n\r\nTry chunking write operations into batches every 1000 records or so.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1317329157", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1317329157, "node_id": "IC_kwDOBm6k_c5OhNkF", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T16:46:52Z", "updated_at": "2022-11-16T16:46:52Z", "author_association": "CONTRIBUTOR", "body": "> \"Screenshot\r\n> \r\n> UI issue I see on the autocomplete popup with overlapping icon & text. Screenshot's from Firefox, it seems even a little more pronounced on Safari\r\n\r\nI checked and if I empty out app.css the bug goes away, so there's some kind of inheritance issue there. It's hard to debug bc the autocomplete popup goes away on blur (i.e. when trying to inspect it in devtools), but at least it's narrowed down a bit.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074182", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074182, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDE4Mg==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:37:42Z", "updated_at": "2021-06-26T23:37:42Z", "author_association": "CONTRIBUTOR", "body": "> > Hmmm... 
that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries.\r\n> > @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try?\r\n> \r\n> Answering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160\r\n> \r\n> That's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it.\r\n> \r\n> This is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem.\r\n> \r\n> But... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead.\r\n\r\n_Ideally_ this hook would be asynchronous, but when I started down that path I quickly realized how large of a change this would be, since metadata gets used synchronously across the entire Datasette codebase. (And calling async code from sync is non-trivial.)\r\n\r\nIn my live-configuration implementation I use synchronous reads using a persistent sqlite connection. This works pretty well in practice, but I agree it's limiting. My thinking around this was to go with the path of least change as `Datasette.metadata()` is a critical core function.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504684831", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504684831, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY4NDgzMQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-06-22T17:38:23Z", "updated_at": "2019-06-22T17:38:23Z", "author_association": "CONTRIBUTOR", "body": "> > WorkingDirectory=/path/to/data\r\n> \r\n> @russss, Which directory does this represent?\r\n\r\nIt's the working directory (cwd) of the spawned process. 
In this case if you set it to the directory your data is in, you can use relative paths to the db (and metadata/templates/etc) in the `ExecStart` command.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-401312981", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 401312981, "node_id": "MDEyOklzc3VlQ29tbWVudDQwMTMxMjk4MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-06-29T10:14:54Z", "updated_at": "2018-06-29T10:14:54Z", "author_association": "CONTRIBUTOR", "body": "> @RusSs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg kartena/Proj4Leaflet) although the leaflet side would need to detect or be informed of the original projection?\r\n\r\nWell, as @simonw mentioned, GeoJSON only supports WGS84, and GeoJSON (and/or TopoJSON) is the standard we probably want to aim for. On-the-fly reprojection in spatialite is not an issue anyway, and in general I think you want to be serving stuff to web maps in WGS84 or Web Mercator.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/578#issuecomment-1648339661", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/578", "id": 1648339661, "node_id": "IC_kwDOCGYnMM5iP6rN", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2023-07-24T17:44:30Z", "updated_at": "2023-07-24T17:44:30Z", "author_association": "CONTRIBUTOR", "body": "> A related feature would be support for plugins to add new ways of ingesting data - currently sqlite-utils insert works against JSON, newline-JSON, CSV and TSV.\r\n\r\nThis is my goal, to have one plugin that handles input and output symmetrically. I'd like to be able to do something like this:\r\n\r\n```sh\r\nsqlite-utils insert data.db table file.geojson --format geojson\r\n# ... explore and manipulate in Datasette\r\nsqlite-utils query data.db ... --format geojson > output.geojson\r\n```\r\n\r\nThis would work especially well with [datasette-query-files](https://github.com/eyeseast/datasette-query-files), since I already have the queries I need saved in standalone SQL files.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1818838294, "label": "Plugin hook for adding new output formats"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066222323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066222323, "node_id": "IC_kwDOBm6k_c4_jULz", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-14T00:36:42Z", "updated_at": "2022-03-14T00:36:42Z", "author_association": "CONTRIBUTOR", "body": "> Ah, sorry, I didn't get what you were saying the first time. 
Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :)\r\n\r\nAll good. Report back any issues you find with this stuff. Metadata/dynamic config hasn't been tested widely outside of what I've done AFAIK. If you find a strong use case for async meta, it's going to be better to know sooner rather than later!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1870#issuecomment-1295667649", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1870", "id": 1295667649, "node_id": "IC_kwDOBm6k_c5NOlHB", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-29T00:52:43Z", "updated_at": "2022-10-29T00:53:43Z", "author_association": "CONTRIBUTOR", "body": "> Are you saying that I can build a container, but then when I run it and it does `datasette serve -i data.db ...` it will somehow modify the image, or create a new modified filesystem layer in the runtime environment, as a result of running that `serve` command?\r\n\r\nSomehow, `datasette serve -i data.db` will lead to the `data.db` being modified, which will trigger a [copy-on-write](https://docs.docker.com/storage/storagedriver/#the-copy-on-write-cow-strategy) of `data.db` into the read-write layer of the container.\r\n\r\nI don't understand **how** that happens.\r\n\r\nit kind of feels like a bug in sqlite, but i can't quite follow the sqlite code.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1426379903, "label": "don't use immutable=1, only mode=ro"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/432#issuecomment-488595724", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/432", "id": 488595724, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4ODU5NTcyNA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-02T08:50:53Z", "updated_at": "2019-05-02T08:50:53Z", "author_association": "CONTRIBUTOR", "body": "> Can I pull those needs out of the Facet class somehow?\r\n\r\nI was thinking that it might be handy for datasette to have a request object which wraps the Sanic Request. 
This could include the datasette-specific querystring decoding and the `special_args` parsing from TableView.data.\r\n\r\nThis would mean that we could expose the request object to plugin hooks without coupling them to Sanic.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432893491, "label": "Refactor facets to a class and new plugin, refs #427"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066169718", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066169718, "node_id": "IC_kwDOBm6k_c4_jHV2", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-13T19:48:49Z", "updated_at": "2022-03-13T19:48:49Z", "author_association": "CONTRIBUTOR", "body": "> For my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests?\r\n\r\nYou shouldn't need to do this, as I mentioned previously. The code inside `render_cell` hook already has access to the most recently sync'd metadata via `datasette._metadata_local`. Refreshing the metadata for every cell seems ... excessive.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1316339035", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1316339035, "node_id": "IC_kwDOBm6k_c5Odb1b", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T04:47:11Z", "updated_at": "2022-11-16T04:47:11Z", "author_association": "CONTRIBUTOR", "body": "> Have you ever seen CodeMirror correctly auto-completing columns? I'm not entirely sure I believe that the feature works anywhere else.\r\n\r\nI was thinking of the BigQuery console, like \r\n\r\n\"Screenshot\r\n\r\nBut they must be doing something pretty custom & appears to be using Monaco anyway. I suspect some kind of lower level autocomplete integration could make this work, but if the table completion is a good-enough starting point I think it's not too hard. 
The main issue is that we don't pass the relevant table data down to QueryView.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074701", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074701, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDcwMQ==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:45:18Z", "updated_at": "2021-06-26T23:45:37Z", "author_association": "CONTRIBUTOR", "body": "> Here's where the plugin hook is called, demonstrating the `fallback=` argument:\r\n> \r\n> https://github.com/simonw/datasette/blob/05a312caf3debb51aa1069939923a49e21cd2bd1/datasette/app.py#L426-L472\r\n> \r\n> I'm not convinced of the use-case for passing `fallback=` to the hook here - is there a reason a plugin might care whether fallback is `True` or `False`, seeing as the `metadata()` method already respects that fallback logic on line 459?\r\n\r\nI think you're right. I can't think of a reason why the plugin would care about the `fallback` parameter since plugins are currently mandated to return a full, global metadata dict.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/279#issuecomment-391073009", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/279", "id": 391073009, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA3MzAwOQ==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2018-05-22T17:23:26Z", "updated_at": "2018-05-22T17:23:26Z", "author_association": "CONTRIBUTOR", "body": "> I think I prefer the aesthetics of just \"0.22\" for the version string if it's a tagged release with no additional changes - does that work?\r\n\r\nYes! That's the default versioneer behaviour.\r\n\r\n> I'd like to continue to provide a tuple that can be imported from the version.py module as well, as seen here:\r\n\r\nShould work now - it can be a two- (for a tagged version), three-, or four-item tuple.\r\n\r\n```\r\nIn [2]: datasette.__version__\r\nOut[2]: '0.12+292.ga70c2a8.dirty'\r\n\r\nIn [3]: datasette.__version_info__\r\nOut[3]: ('0', '12+292', 'ga70c2a8', 'dirty')\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325352370, "label": "Add version number support with Versioneer"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/200#issuecomment-380608372", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/200", "id": 380608372, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MDYwODM3Mg==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-11T21:55:46Z", "updated_at": "2018-04-11T21:55:46Z", "author_association": "CONTRIBUTOR", "body": "> I think the most reliable way to detect spatialite is to run `SELECT AddGeometryColumn(1, 2, 3, 4, 5);` against a `:memory:` database and see if it throws an exception\r\n\r\nOr just see if there's a `geometry_columns` table? 
I think that's quite unlikely to be added by accident (and it's an OGC standard). It also tells you if Spatialite is installed in the database rather than just loaded.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313494458, "label": "Hide Spatialite system tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682182178", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/139", "id": 682182178, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MjE4MjE3OA==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-27T20:46:18Z", "updated_at": "2020-08-27T20:46:18Z", "author_association": "CONTRIBUTOR", "body": "> I tried changing the batch_size argument to the total number of records, but it seems only to affect the number of rows that are committed at a time, and has no influence on this problem.\r\n\r\nSo the reason for this is that the `batch_size` for import is limited (of necessity) here: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L1048\r\n\r\nWith regard to the issue of ignoring columns, however, I made a fork and hacked a temporary fix that looks like this:\r\nhttps://github.com/simonwiles/sqlite-utils/commit/3901f43c6a712a1a3efc340b5b8d8fd0cbe8ee63\r\n\r\nIt doesn't seem to affect performance enormously (but I've not tested it thoroughly), and it now does what I need (and would expect, tbh), but it now fails the test here:\r\nhttps://github.com/simonw/sqlite-utils/blob/main/tests/test_create.py#L710-L716\r\n\r\nThe existence of this test suggests that `insert_all()` is behaving as intended, of course. It seems odd to me that this would be a desirable default behaviour (let alone the only behaviour), and it's not very prominently flagged up, either.\r\n\r\n@simonw is this something you'd be willing to look at a PR for? I assume you wouldn't want to change the default behaviour at this point, but perhaps an option could be provided, or at least a bit more of a warning in the docs. Are there oversights in the implementation that I've made?\r\n\r\nWould be grateful for your thoughts! Thanks!\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 686978131, "label": "insert_all(..., alter=True) should work for new columns introduced after the first 100 records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030741289", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/399", "id": 1030741289, "node_id": "IC_kwDOCGYnMM49b90p", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-02-06T03:03:43Z", "updated_at": "2022-02-06T03:03:43Z", "author_association": "CONTRIBUTOR", "body": "> I wonder if there are any interesting non-geospatial canned conversions that it would be worth including?\r\n\r\nOff the top of my head:\r\n\r\n- Un-nesting JSON objects into columns\r\n- Splitting arrays\r\n- Normalizing dates and times\r\n- URL munging with `urlparse`\r\n- Converting strings to numbers\r\n\r\nSome of this is easy enough with SQL functions, some is easier in Python. 
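\r\n\r\nFor example, the URL munging case is a one-liner with the `sqlite-utils` Python API (a quick sketch - the `links` table and `url` column here are made up):\r\n\r\n```python\r\nimport sqlite_utils\r\nfrom urllib.parse import urlparse\r\n\r\ndb = sqlite_utils.Database(\"data.db\")\r\n# Extract the hostname of each URL into a new \"domain\" column\r\ndb[\"links\"].convert(\"url\", lambda url: urlparse(url).netloc, output=\"domain\")\r\n```\r\n\r\n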
Maybe that's where having pre-built classes gets really handy, because it saves you from thinking about which way it's implemented.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1124731464, "label": "Make it easier to insert geometries, with documentation and maybe code"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/203#issuecomment-381315675", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/203", "id": 381315675, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTMxNTY3NQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-14T09:14:45Z", "updated_at": "2018-04-14T09:27:30Z", "author_association": "CONTRIBUTOR", "body": "> I'd like to figure out a sensible opt-in way to expose this in the JSON output as well. Maybe with a &_units=true parameter?\r\n\r\nFrom a machine-readable perspective I'm not sure why it would be useful to decorate the values with units. Edit: Should have had some coffee first. It's clearly useful for stuff like map rendering!\r\n\r\nI agree that the unit metadata should definitely be exposed in the JSON.\r\n\r\n> In #204 you said \"I'd like to add support for using units when querying but this PR is pretty usable as-is.\" - I'm fascinated to hear more about how this could work.\r\n\r\nI'm thinking about a couple of approaches here. I think the simplest one is: if the column has a unit attached, optionally accept units in query fields:\r\n\r\n```python\r\ncolumn_units = ureg(\"Hz\") # Create a unit object for the column's unit\r\nquery_variable = ureg(\"4 GHz\") # Supplied query variable\r\n\r\n# Now we can convert the query units into column units before querying\r\nquery_variable.to(column_units).magnitude\r\n> 4000000000.0\r\n\r\n# If the user doesn't supply units, pint just returns the plain\r\n# number and we can query as usual assuming it's the base unit\r\nquery_variable = ureg(\"50\")\r\nquery_variable\r\n> 50\r\n\r\nisinstance(query_variable, numbers.Number)\r\n> True\r\n```\r\n\r\nThis also lets us do some nice unit conversion on querying:\r\n\r\n```python\r\ncolumn_units = ureg(\"m\")\r\nquery_variable = ureg(\"50 ft\")\r\n\r\nquery_variable.to(column_units)\r\n> <Quantity(15.24, 'meter')>\r\n```\r\n\r\nThe alternative would be to provide a dropdown of units next to the query field (so a \"Hz\" field would give you \"kHz\", \"MHz\", \"GHz\"). 
Although this would be clearer to the user, it isn't so easy - we'd need to know more about the context of the field to give you sensible SI prefixes (I'm not so interested in nanoHertz, for example).\r\n\r\nYou also lose the bonus of being able to convert - although pint will happily show you all the compatible units, it again suffers from a lack of context:\r\n\r\n```python\r\nureg(\"m\").compatible_units()\r\n> frozenset({<Unit('angstrom')>,\r\n <Unit('foot')>,\r\n <Unit('inch')>,\r\n <Unit('light_year')>,\r\n <Unit('meter')>,\r\n <Unit('mile')>,\r\n <Unit('parsec')>,\r\n <Unit('yard')>,\r\n ...})\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313837303, "label": "Support for units"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/88#issuecomment-344430689", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/88", "id": 344430689, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDQzMDY4OQ==", "user": {"value": 15543, "label": "tomdyson"}, "created_at": "2017-11-14T23:08:22Z", "updated_at": "2017-11-14T23:08:22Z", "author_association": "CONTRIBUTOR", "body": "> I'm getting an internal server error on http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/ at the moment\r\n\r\nSorry about that - here's a working version on Netlify:\r\n\r\nhttps://nhs-england-map.netlify.com", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273775212, "label": "Add NHS England Hospitals example to wiki"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-391505930", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 391505930, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTUwNTkzMA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-05-23T21:41:37Z", "updated_at": "2018-05-23T21:41:37Z", "author_association": "CONTRIBUTOR", "body": "> I'm not keen on anything that modifies the SQLite file itself on startup\r\n\r\nAh I didn't mean that - I meant altering the SELECT query to fetch the data so that it ran a spatialite function to transform that specific column.\r\n\r\nI think that's less useful as a general-purpose plugin hook though, and it's not that hard to parse the WKB in Python (my default approach would be to use [shapely](https://github.com/Toblerity/Shapely), which is great, but geomet looks like an interesting pure-python alternative).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066006292", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066006292, "node_id": "IC_kwDOBm6k_c4_ifcU", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-13T02:09:44Z", "updated_at": "2022-03-13T02:09:44Z", "author_association": "CONTRIBUTOR", "body": "> If I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metadata` is called, right? 
Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config?\r\n\r\nReading from sqlite DBs is pretty quick and I didn't notice significant performance issues when I was benchmarking. I tested on very large Datasette deployments (hundreds of DBs, millions of rows). See [\"Many small queries are efficient in sqlite\"](https://sqlite.org/np1queryprob.html) for more information on the rationale here. Also note that in the [datasette-live-config](https://github.com/next-LI/datasette-live-config) reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had.\r\n\r\nIf you need to ensure fresh metadata is being read inside of a `render_cell` hook specifically, you don't need to do anything further! `get_metadata` gets called before `render_cell` every request, so it already has access to the synced meta. There shouldn't be a need to call `get_metadata(...)` or `metadata(...)` inside `render_cell`, you can just use `datasette._metadata_local` if you're really worried about performance.\r\n\r\n> The plugin is close, but looks like it only grabs remote metadata, is that right? Instead what I'm wanting is to grab metadata embedded in the attached databases.\r\n\r\nYes correct, the datasette-remote-metadata plugin doesn't do that. But the datasette-live-config plugin does. [It supports a `__metadata` table](https://github.com/next-LI/datasette-live-config/blob/main/datasette_live_config/__init__.py#L107-L138) that, when it exists on an attached DB, gets pulled into the Datasette internal `_metadata` and is also accessible via `get_metadata`. Updating is instantaneous so there are no gotchas for users or security issues for users relying on the metadata-based permissions. 
Simon talked about eventually making something like this a standard feature of Datasette, but I'm not sure what the status is on that!\r\n\r\nGood luck!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1316256386", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1316256386, "node_id": "IC_kwDOBm6k_c5OdHqC", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T03:18:06Z", "updated_at": "2022-11-16T03:18:06Z", "author_association": "CONTRIBUTOR", "body": "> If you can get a version of this working with table and column autocompletion just using a static JavaScript object in the source code with the right tables and columns, I'm happy to take on the work of turning that static object into something that Datasette includes in the page itself with all of the correct values.\r\n\r\nThis version \"sort of\" works on the main database page, where the template passes the relevant data (https://github.com/bgrins/datasette/commit/8431c98850c7a552dbcde2a4dd0c3dc942a97d25), by doing this and passing that into the `schema` object:\r\n\r\n```\r\n let TABLES_DATA = [];\r\n {% if tables is defined %} \r\n TABLES_DATA = {{ tables | tojson(indent=2) }};\r\n {% endif %}\r\n\r\n // Turn into an object, shaped like https://github.com/codemirror/lang-sql/blob/ebf115fffdbe07f91465ccbd82868c587f8182bc/test/test-complete.ts#L27.\r\n const TABLES_SCHEMA = Object.fromEntries(\r\n new Map(\r\n TABLES_DATA.map((table) => {\r\n return [table.name, table.columns];\r\n })\r\n ).entries()\r\n );\r\n```\r\n\r\nBut there are a number of papercuts with it - it's not escaping table names with spaces (likely fixable from the data being passed into the view) but mainly it doesn't seem to autocomplete columns. I think it might only want to do it when you first type the table name, from my read of https://github.com/codemirror/lang-sql/blob/ebf115fffdbe07f91465ccbd82868c587f8182bc/test/test-complete.ts#L37. 
It's possible I'm just passing something wrong, but it may end up being something that needs feature work upstream.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1688#issuecomment-1079550754", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1688", "id": 1079550754, "node_id": "IC_kwDOBm6k_c5AWKMi", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2022-03-26T01:27:27Z", "updated_at": "2022-03-26T03:16:29Z", "author_association": "CONTRIBUTOR", "body": "> Is there a way to serve a static assets when using the plugins/ directory method instead of installing plugins as a new python package?\r\n\r\nAs a workaround, I found I can serve my statics from a non-plugin specific folder using the [--static](https://docs.datasette.io/en/stable/custom_templates.html#serving-static-files) CLI flag.\r\n\r\n```bash\r\ndatasette ~/Library/Safari/History.db \\\r\n --plugins-dir=plugins/ \\\r\n --static assets:dist/\r\n```\r\n\r\nIt's not ideal because it means I'll change the cache pattern path depending on how the plugin is running (via pip install or as a one off script), but it's usable as a workaround.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1181432624, "label": "[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1616095810", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1616095810, "node_id": "IC_kwDOBm6k_c5gU6pC", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2023-07-01T20:31:31Z", "updated_at": "2023-07-01T20:31:31Z", "author_association": "CONTRIBUTOR", "body": "> Just curious, is there a query that can be used to compile this programmatically, or did you identify these through memory?\r\n\r\nI just did a github search for `user:simonw \"def extra_js_urls(\"` ! Though I'm sure other plugins made by people other than Simon also exist out there https://github.com/search?q=user%3Asimonw+%22def+extra_js_urls%28%22&type=code", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1651082214, "label": "feat: Javascript Plugin API (Custom panels, column menu items with JS actions)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1699#issuecomment-1092357672", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1699", "id": 1092357672, "node_id": "IC_kwDOBm6k_c5BHA4o", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-08T01:39:40Z", "updated_at": "2022-04-08T01:39:40Z", "author_association": "CONTRIBUTOR", "body": "> My best thought on how to differentiate them so far is plugins: if Datasette plugins that provide alternative outputs - like .geojson and .yml and suchlike - also work for the datasette query command that would make a lot of sense to me.\r\n\r\nThat's my thinking, too. 
It's really the thing I've been wanting since writing `datasette-geojson`, since I'm always exporting with `datasette --get`. The workflow I'm always looking for is something like this:\r\n\r\n```sh\r\ncd alltheplaces-datasette\r\ndatasette query dunkin_in_suffolk -f geojson -o dunkin_in_suffolk.geojson\r\n```\r\n\r\nI think this probably needs either a new plugin hook separate from `register_output_renderer` or a way to use that without going through the HTTP stack. Or maybe a render mode that writes to a stream instead of a response. Maybe there's a new key in the dictionary that `register_output_renderer` returns that handles CLI exports.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1193090967, "label": "Proposal: datasette query"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1099#issuecomment-1402900354", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1099", "id": 1402900354, "node_id": "IC_kwDOBm6k_c5Tno-C", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-01-25T00:58:26Z", "updated_at": "2023-01-25T00:58:26Z", "author_association": "CONTRIBUTOR", "body": "> My original idea for compound foreign keys was to turn both of those columns into links, but that doesn't fit here because `database_name` is already part of a different foreign key.\r\n\r\nit's pretty hard to know what the right thing to do is if a field is part of multiple foreign keys. \r\n\r\nbut, if that's not the case, what about making each of the columns a link. seems like an improvement over the status quo.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 743371103, "label": "Support linking to compound foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1615997736", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1615997736, "node_id": "IC_kwDOBm6k_c5gUiso", "user": {"value": 9020979, "label": "hydrosquall"}, "created_at": "2023-07-01T16:55:24Z", "updated_at": "2023-07-01T16:55:24Z", "author_association": "CONTRIBUTOR", "body": "> Ok @hydrosquall a couple things before this PR should be good to go:\r\n\r\nThank you @asg017 ! I've pushed both suggested changes onto this branch.\r\n\r\n> Not sure how difficult it'll be to inject it server-side\r\n\r\nIf we are OK with having a build system, it would free me up to do many things! We could make datasette-manager.js a server-side rendered file as a \"template\" instead of having it as a static JS file, but I'm not sure it's worth the extra jump in complexity / loss of syntax highlighting in the JS file.\r\n\r\nIn the short-term, I could see an intermediary solution where a unit test in the preferred language was able to read both `version.py` and `datasette-manager.js`, and make sure that the version strings are in sync. (This assumes that we want the manager and datasette's versions to be synced, and not decoupled). Since the version is not changing very often, a \"manual sync\" might be good enough. \r\n\r\n> In terms of how to integrate this into Datasette, a few options I can see working:\r\n\r\nThis sounds good to me. 
I'm not sure how to add a settings flag, but will be interested to see the PR that adds support for it.\r\n\r\n> I'm also curious to see how \"plugins for a plugin' would work\r\n\r\nI'm comfortable to wait until we have a realistic usecase for this. In the short term, I think we could give plugins a way to grant access to a \"public API of other plugins\", and also ask to be notified when plugins with other names have loaded, but don't picture the datasette manager getting more involved than that. \r\n\r\n> here's a list of Simon's Datasette plugins that use \"extra_js_urls()\"\r\n\r\nNeat, thanks for compiling this list! Just curious, is there a query that can be used to compile this programmatically, or did you identify these through memory?\r\n\r\n> I want to make a javascript plugin on top of the code-mirror editor to make a few things nicer (function auto-complete, table/column descriptions, etc.)\r\n\r\nI look forward to trying this out \ud83d\udc4d \r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1651082214, "label": "feat: Javascript Plugin API (Custom panels, column menu items with JS actions)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1871#issuecomment-1309650806", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1871", "id": 1309650806, "node_id": "IC_kwDOBm6k_c5OD692", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2022-11-10T01:38:58Z", "updated_at": "2022-11-10T01:38:58Z", "author_association": "CONTRIBUTOR", "body": "> Realized the API explorer doesn't need the API key piece at all - it can work with standard cookie-based auth.\r\n> \r\n> This also reflects how most plugins are likely to use this API, where they'll be adding JavaScript that uses `fetch()` to call the write API directly.\r\n\r\nI agree (that's what I did with the previous insert plugin), maybe a complete example using `fetch()` in the documentation would be valuable as a \u201cGetting started with the API\u201d or similar?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1427293909, "label": "API explorer tool"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1168#issuecomment-869076254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1168", "id": 869076254, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NjI1NA==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-27T00:03:16Z", "updated_at": "2021-06-27T00:05:51Z", "author_association": "CONTRIBUTOR", "body": "> Related: Here's an implementation of a `get_metadata()` plugin hook by @brandonrobertz [next-LI@3fd8ce9](https://github.com/next-LI/datasette/commit/3fd8ce91f3108c82227bf65ff033929426c60437)\r\n\r\nHere's a plugin that implements metadata-within-DBs: [next-LI/datasette-live-config](https://github.com/next-LI/datasette-live-config)\r\n\r\nHow it works: If a database has a `__metadata` table, then it gets parsed and included in the global metadata. 
It also implements a database-action hook with a UI for managing config.\r\n\r\nMore context: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L109-L140", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777333388, "label": "Mechanism for storing metadata in _metadata tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1296#issuecomment-819467759", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1296", "id": 819467759, "node_id": "MDEyOklzc3VlQ29tbWVudDgxOTQ2Nzc1OQ==", "user": {"value": 295329, "label": "camallen"}, "created_at": "2021-04-14T12:07:37Z", "updated_at": "2021-04-14T12:11:36Z", "author_association": "CONTRIBUTOR", "body": "> Removing /var/lib/apt and /var/lib/dpkg makes apt and dpkg unusable in\r\nimages based on this one. Running `apt-get clean` and removing\r\n/var/lib/apt/lists achieves similar size savings.\r\n\r\nthis PR helps me as removing the /var/lib/apt and /var/lib/dpkg directories breaks my ability to add packages when using `datasetteproject/datasette:0.56` as a base image.\r\n\r\n\r\n---- \r\nShort-term workaround for me was to use this in my Dockerfile\r\n```\r\nFROM datasetteproject/datasette:0.56\r\n\r\nRUN mkdir -p /var/lib/apt\r\nRUN mkdir -p /var/lib/dpkg\r\nRUN mkdir -p /var/lib/dpkg/updates\r\nRUN mkdir -p /var/lib/dpkg/info\r\nRUN touch /var/lib/dpkg/status\r\n\r\nRUN apt-get update # and install your packages etc\r\n```\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855446829, "label": "Dockerfile: use Ubuntu 20.10 as base"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/467#issuecomment-1224382336", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/467", "id": 1224382336, "node_id": "IC_kwDOCGYnMM5I-peA", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2022-08-23T17:16:13Z", "updated_at": "2022-08-23T17:16:13Z", "author_association": "CONTRIBUTOR", "body": "> Should passing `alter=True` also drop any columns that aren't included in the new table structure?\r\n> \r\n> It could even spot column types that aren't correct and fix those.\r\n> \r\n> Is that consistent with the expectations set by how `alter=True` works elsewhere?\r\n\r\nI would lean towards not dropping them (or making a `drop=True` or `drop_columns=True` or `drop_missing_columns=True`) to make working with existing tables easier. \r\n\r\nI do like that sqlite-utils mostly just works with existing tables but it's also nice to add to existing fields in a few cases. 
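\r\n\r\nA quick sketch of the existing `alter=True` behavior I'd want to keep (the `cats` table and columns here are made up):\r\n\r\n```python\r\nimport sqlite_utils\r\n\r\ndb = sqlite_utils.Database(\"data.db\")\r\ndb[\"cats\"].insert({\"name\": \"Tom\"})\r\n\r\n# alter=True adds the missing \"age\" column instead of raising an error...\r\ndb[\"cats\"].insert({\"name\": \"Jerry\", \"age\": 3}, alter=True)\r\n\r\n# ...but existing columns are never dropped\r\nprint(db[\"cats\"].columns_dict)  # {'name': <class 'str'>, 'age': <class 'int'>}\r\n```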
\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1348169997, "label": "Mechanism for ensuring a table has all the columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2143#issuecomment-1684496274", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2143", "id": 1684496274, "node_id": "IC_kwDOBm6k_c5kZ1-S", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2023-08-18T22:30:45Z", "updated_at": "2023-08-18T22:30:45Z", "author_association": "CONTRIBUTOR", "body": "> That said, I do really like a bias towards settings that can be changed at runtime\r\n\r\nDoes this include things like `--settings` values or plugin config? I can totally see being able to update metadata without restarting, but not sure if that would work well with `--setting`, plugin config, or auth/permissions stuff. \r\n\r\nWell it could work with `--setting` and auth/permissions, with a lot of core changes. But changing plugin config on the fly could be challenging, for plugin authors. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1855885427, "label": "De-tangling Metadata before Datasette 1.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655643078", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655643078, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTY0MzA3OA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-08T17:05:59Z", "updated_at": "2020-07-08T17:05:59Z", "author_association": "CONTRIBUTOR", "body": "> The only thing missing from this PR is updates to the documentation.\r\n\r\nAh, yes, thanks for this reminder! I've repushed with doc bits added.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1316318961", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1316318961, "node_id": "IC_kwDOBm6k_c5OdW7x", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T04:27:51Z", "updated_at": "2022-11-16T04:27:51Z", "author_association": "CONTRIBUTOR", "body": "> The resize handle doesn't appear on Mobile Safari on iPhone - I don't think that particularly matters though.\r\n> \r\n> The textarea does get a weird border around it when focused on iPhone though.\r\n\r\nThe default focus styles appear to be\r\n\r\n```\r\n.c1.cm-editor.cm-focused {\r\n outline: 1px dotted #212121;\r\n}\r\n```\r\n\r\nWhich I also see on desktop. Would be nice to changed to whatever the default UA textarea styles are to blend in better but I wouldn't recommend removing it entirely - just to keep the visual indication that the element is focused. 
Maybe follow-up material for a theming pass.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/434#issuecomment-489163939", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/434", "id": 489163939, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTE2MzkzOQ==", "user": {"value": 10352819, "label": "rprimet"}, "created_at": "2019-05-03T16:49:45Z", "updated_at": "2019-05-03T16:50:03Z", "author_association": "CONTRIBUTOR", "body": "> The second time I ran the command I got an error:\r\n\r\n> \r\n> ERROR: (gcloud.beta.run.deploy) Deployment endpoint was not found. Perhaps the\r\n> provided region was invalid. Set the `run/region` property to a valid region and\r\n> retry. Ex: `gcloud config set run/region us-central1`\r\n> \r\n\r\nYes, I was able to reproduce this; I used to get prompted for a run region interactively by the `gcloud` tool before, but maybe this is changing? (The [documentation](https://cloud.google.com/run/docs/deploying) now assumes `run/region` is set.)\r\n\r\nNot sure which course of action is best: making `datasette` ensure that `run/region` is set beforehand, or waiting a bit until the gcloud CLI stabilizes?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 434321685, "label": "\"datasette publish cloudrun\" command to publish to Google Cloud Run"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1851#issuecomment-1292519956", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1851", "id": 1292519956, "node_id": "IC_kwDOBm6k_c5NCkoU", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2022-10-26T19:20:33Z", "updated_at": "2022-10-26T19:20:33Z", "author_association": "CONTRIBUTOR", "body": "> This could use a new plugin hook, too. I don't want to complicate your life too much, but for things like GIS, I'd want a way to turn regular JSON into SpatiaLite geometries or combine X/Y coordinates into point geometries and such. Happy to help however I can.\r\n\r\n@eyeseast Maybe you could do this with triggers? Like you can insert JSON-friendly data into a \"raw\" table, and create a trigger that transforms that inserted data into the proper table.\r\n\r\nHere's an example:\r\n\r\n```sql\r\n-- meant to be updated from a Datasette insert\r\ncreate table points_raw(longitude int, latitude int);\r\n\r\n-- the target table with proper spatialite geometries\r\ncreate table points(point geometry);\r\n\r\nCREATE TRIGGER insert_points_raw AFTER INSERT ON points_raw\r\nBEGIN\r\n    insert into points(point) values (makepoint(new.longitude, new.latitude));\r\nEND;\r\n```\r\n\r\nYou could then POST a new row to `points_raw` like this:\r\n```\r\nPOST /db/points_raw\r\nAuthorization: Bearer xxx\r\nContent-Type: application/json\r\n{\r\n    \"row\": {\r\n        \"longitude\": 27.64356,\r\n        \"latitude\": -47.29384\r\n    }\r\n}\r\n```\r\n\r\nThen SQLite will run the trigger and insert a new row in `points` with the correct geometry point. 
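For completeness, the same request as a curl invocation (the URL, port, and token are placeholders, and the endpoint shape is the proposed API from this issue rather than anything shipped):

```
curl -X POST http://localhost:8001/db/points_raw \
  -H 'Authorization: Bearer xxx' \
  -H 'Content-Type: application/json' \
  -d '{"row": {"longitude": 27.64356, "latitude": -47.29384}}'
```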
The downside is you'd have duplicated data in `points_raw`, but maybe it could be a `TEMP` table (or have a cron job that deletes all rows from that table every so often?).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1421544654, "label": "API to insert a single record into an existing table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/514#issuecomment-504663766", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/514", "id": 504663766, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDY2Mzc2Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-06-22T12:57:59Z", "updated_at": "2019-06-22T12:57:59Z", "author_association": "CONTRIBUTOR", "body": "> This example is useful to - I like how it has a Makefile that knows how to set up systemd: https://github.com/pikesley/Queube\r\n\r\nI wasn't even aware it was possible to add a systemd service at an arbitrary path, but it seems a little messy to me.\r\n\r\nMaybe worth noting that systemd does support [per-user services](https://wiki.archlinux.org/index.php/Systemd/User) which don't require root access. Cool, but probably overkill for most people (especially when you're going to need root to listen on port 80 anyway, directly or via a reverse proxy).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459397625, "label": "Documentation with recommendations on running Datasette in production without using Docker"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1388#issuecomment-876213177", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1388", "id": 876213177, "node_id": "MDEyOklzc3VlQ29tbWVudDg3NjIxMzE3Nw==", "user": {"value": 80737, "label": "aslakr"}, "created_at": "2021-07-08T07:47:17Z", "updated_at": "2021-07-08T07:47:17Z", "author_association": "CONTRIBUTOR", "body": "> This sounds like a valuable feature for people running Datasette behind a proxy.\r\n\r\nYes, in some cases it is easier to use e.g. 
Apache's [ProxyPass Directive](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypass) with a Unix domain socket like `unix:/home/www.socket|http://localhost/whatever/`.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 939051549, "label": "Serve using UNIX domain socket"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1548617257", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1548617257, "node_id": "IC_kwDOBm6k_c5cTgYp", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-05-15T21:32:20Z", "updated_at": "2023-05-15T21:32:20Z", "author_association": "CONTRIBUTOR", "body": "> Were you picturing that the whole plugin config object could be returned as a promise, or that the individual hooks (like makeColumnActions or makeAboveTablePanelConfigs supported returning a promise of arrays instead only returning plain arrays?\r\n\r\nThe latter - that you could return a promise of arrays, so it parallels the [\"await me maybe\" pattern in Datasette](https://simonwillison.net/2020/Sep/2/await-me-maybe/), where you can return either a value, a callable or an awaitable.\r\n\r\n> I have a hunch that what you're describing might be achievable without adding Promises to the API with something\r\n\r\nOops, I did a poor job explaining. Yes, this would work - but it requires me to continue to communicate the column names out of band (in order to fetch the facet data per-column before registering my plugin), vs being able to re-use them from the plugin implementation.\r\n\r\nThis isn't that big of a deal - it'd be a nice ergonomic improvement, but nowhere near as big of an improvement as having an officially sanctioned way to add stuff to the column menus in the first place.\r\n\r\nThis could also be layered on in a future commit without breaking v1 users, so it's not at all urgent.\r\n\r\n> especially if those lines are encapsulated by a function we provide (maybe something that's available on the window provided by Datasette as an inline script tag\r\n\r\nAh, this is maybe the key point. Since it's all hosted inside Datasette, Datasette can provide some arbitrary sugar to make it easier to work with.\r\n\r\nMy experience with async scripts in JS is that people sometimes don't understand the race conditions inherent to them. If they copy/paste from a tutorial, it does just work. 
But then they'll delete half the code, and by chance it still works on their machine/Datasette templates, and now someone's headed for an annoying debugging session -- maybe them, maybe someone else who tries to re-use their plugin.\r\n\r\nAgain, a fairly minor thing, though.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1651082214, "label": "feat: Javascript Plugin API (Custom panels, column menu items with JS actions)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647935300", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647935300, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNTMwMA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:23:01Z", "updated_at": "2020-06-23T06:23:01Z", "author_association": "CONTRIBUTOR", "body": "> You said \"200k+, 50+ rows in a couple of tables\" - does that mean 50+ columns? I'll try with larger numbers of columns and see what difference that makes.\r\n\r\nAh that was a typo, I meant 50k.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029326568", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/385", "id": 1029326568, "node_id": "IC_kwDOCGYnMM49Wkbo", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-02-03T19:28:26Z", "updated_at": "2022-02-03T19:28:26Z", "author_association": "CONTRIBUTOR", "body": "> `from sqlite_utils.utils import find_spatialite` is part of the documented API already:\r\n> \r\n> https://sqlite-utils.datasette.io/en/3.22.1/python-api.html#finding-spatialite\r\n> \r\n> To avoid needing to bump the major version number to 4 to indicate a backwards incompatible change, we should keep a `from .gis import find_spatialite` line at the top of `utils.py` such that any existing code with that documented import continues to work.\r\n\r\nThis is fixed now. I had to take out the type annotations for `Database` and `Table` to avoid a circular import, but that's fine and may be moot if these become class methods.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1102899312, "label": "Add new spatialite helper methods"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-918621705", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 918621705, "node_id": "IC_kwDOBm6k_c42wQ4J", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-09-13T22:17:17Z", "updated_at": "2021-09-13T22:17:17Z", "author_association": "CONTRIBUTOR", "body": "> haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? 
Is it documented there?\r\n\r\nNot certain, but I think tests in CI run on Ubuntu and don't appear to install any additional SQLite-related dependencies, and so my guess is the version of SQLite installed by default on Ubuntu has the `SQLITE_ENABLE_FTS3_PARENTHESIS` option enabled and so doesn't run into this issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1264218914", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/491", "id": 1264218914, "node_id": "IC_kwDOCGYnMM5LWnMi", "user": {"value": 7908073, "label": "chapmanjacobd"}, "created_at": "2022-10-01T03:18:36Z", "updated_at": "2023-06-14T22:14:24Z", "author_association": "CONTRIBUTOR", "body": "> some good concrete use-cases in mind\r\n\r\nI actually found myself wanting something like this the past couple of days. The use-case was databases with slightly different schemas but the same table names.\r\n\r\nHere is a full script:\r\n\r\n```python\r\nimport argparse\r\nfrom pathlib import Path\r\n\r\nfrom sqlite_utils import Database\r\n\r\n\r\ndef connect(args, conn=None, **kwargs) -> Database:\r\n    db = Database(conn or args.database, **kwargs)\r\n    with db.conn:\r\n        db.conn.execute(\"PRAGMA main.cache_size = 8000\")\r\n    return db\r\n\r\n\r\ndef parse_args() -> argparse.Namespace:\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument(\"database\")\r\n    parser.add_argument(\"dbs_folder\")\r\n    parser.add_argument(\"--db\", \"-db\", help=argparse.SUPPRESS)\r\n    parser.add_argument(\"--verbose\", \"-v\", action=\"count\", default=0)\r\n    args = parser.parse_args()\r\n\r\n    if args.db:\r\n        args.database = args.db\r\n    Path(args.database).touch()\r\n    args.db = connect(args)\r\n\r\n    return args\r\n\r\n\r\ndef merge_db(args, source_db):\r\n    source_db = str(Path(source_db).resolve())\r\n\r\n    s_db = connect(argparse.Namespace(database=source_db, verbose=args.verbose))\r\n    for table in s_db.table_names():\r\n        data = s_db[table].rows\r\n        args.db[table].insert_all(data, alter=True, replace=True)\r\n\r\n    args.db.conn.commit()\r\n\r\n\r\ndef merge_directory():\r\n    args = parse_args()\r\n    source_dbs = list(Path(args.dbs_folder).glob('*.db'))\r\n    for s_db in source_dbs:\r\n        merge_db(args, s_db)\r\n\r\n\r\nif __name__ == '__main__':\r\n    merge_directory()\r\n```\r\n\r\nEdit: I've made some improvements to this and put it on PyPI:\r\n\r\n```\r\n$ pip install xklb\r\n$ lb merge-db -h\r\nusage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...]\r\n\r\n    Merge-DBs will insert new rows from source dbs to target db, table by table. 
If primary key(s) are provided,\r\n and there is an existing row with the same PK, the default action is to delete the existing row and insert the new row\r\n replacing all existing fields.\r\n\r\n Upsert mode will update matching PK rows such that if a source row has a NULL field and\r\n the destination row has a value then the value will be preserved instead of changed to the source row's NULL value.\r\n\r\n Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db\r\n\r\n Test first by using temp databases as the destination db.\r\n Try out different modes / flags until you are satisfied with the behavior of the program\r\n\r\n library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db\r\n\r\n Merge database data and tables\r\n\r\n library merge-dbs --upsert --pk path video.db tv.db movies.db\r\n library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path audio-fts.db audio.db\r\n\r\n library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db\r\n library merge-dbs --only-new-rows --pk subreddit,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v\r\n\r\npositional arguments:\r\n database\r\n source_dbs\r\n```\r\n\r\nAlso if you want to dedupe a table based on a \"business key\" which isn't explicitly your primary key(s) you can run this:\r\n\r\n```\r\n$ lb dedupe-db -h\r\nusage: library dedupe-dbs DATABASE TABLE --bk BUSINESS_KEYS [--pk PRIMARY_KEYS] [--only-columns COLUMNS]\r\n\r\n Dedupe your database (not to be confused with the dedupe subcommand)\r\n\r\n It should not need to be said but *backup* your database before trying this tool!\r\n\r\n Dedupe-DB will help remove duplicate rows based on non-primary-key business keys\r\n\r\n library dedupe-db ./video.db media --bk path\r\n\r\n If --primary-keys is not provided table metadata primary keys will be used\r\n If --only-columns is not provided all non-primary and non-business key columns will be upserted\r\n\r\npositional arguments:\r\n database\r\n table\r\n\r\noptions:\r\n -h, --help show this help message and exit\r\n --skip-0\r\n --only-columns ONLY_COLUMNS\r\n Comma separated column names to upsert\r\n --primary-keys PRIMARY_KEYS, --pk PRIMARY_KEYS\r\n Comma separated primary keys\r\n --business-keys BUSINESS_KEYS, --bk BUSINESS_KEYS\r\n Comma separated business keys\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1383646615, "label": "Ability to merge databases and tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1271100651", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1271100651, "node_id": "IC_kwDOBm6k_c5Lw3Tr", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T04:38:14Z", "updated_at": "2022-10-07T04:38:14Z", "author_association": "CONTRIBUTOR", "body": "> yes, and i also think that this is causing the apparent memory problems in #1480. when the container starts up, it will make some operation on the database in `immutable` mode which apparently makes some small change to the db file. 
if that's so, then the db files will be copied to the read/write layer which counts against cloudrun's memory allocation!\r\n> \r\n> running a test of that now.\r\n\r\nthis completely addressed #1480 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1298#issuecomment-823093669", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1298", "id": 823093669, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMzA5MzY2OQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-20T08:38:10Z", "updated_at": "2021-04-20T08:40:22Z", "author_association": "CONTRIBUTOR", "body": "@dracos I appreciate your ideas!\r\n\r\n1. Ooh, I like this: https://codepen.io/astro87/pen/LYRQNbd?editors=1100 (That's the codepen from your linked stackoverflow.)\r\n2. I worry that a max height will be a problem when my facets are open. (I've got 35 active ingredients, and so I've set the default_facet_size to 35.)\r\n3. I don't understand this one. I'm observing the screenshot... very helpful! (Ah, okay, TR = Top Right and BR = Bottom Right. Absolute grid refers to position style.) All the scroll bars look a little wonky to me. I've also got a lot of facets, and prefer the extra horizontal space so that not as many facets disappear below the fold. My site also has end users... some will be on mobile... not sure what the absolute grid would do there... \r\n4. (I still think a hover-arrow that scrolls upon click would help, too...)\r\n\r\nBut meanwhile, I'm going to go ahead and see if I can apply that shadow. (Never would've thought of that.) Hmmm... I'm not an SCSS person. This looks helpful! https://jsonformatter.org/scss-to-css", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855476501, "label": "improve table horizontal scroll experience"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748562288", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748562288, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODU2MjI4OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-20T04:44:22Z", "updated_at": "2020-12-20T04:44:22Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz @simonw I opened a [PR](https://github.com/dogsheep/dogsheep-photos/pull/31) that replaces the SQL for `ZCOMPUTEDASSETATTRIBUTES` to use osxphotos which now exposes all this data and has been updated for Big Sur. I did regression tests to confirm the extracted data is identical, with one exception which should not affect operation: the old code pulled data from `ZCOMPUTEDASSETATTRIBUTES` for missing photos while the main loop ignores missing photos and does not add them to `apple_photos`. 
The new code does not add rows to the `apple_photos_scores` table for missing photos.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436779", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748436779, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODQzNjc3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-19T07:49:00Z", "updated_at": "2020-12-19T07:49:00Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz ZGENERICASSET changed to ZASSET in Big Sur. Here's a list of other changes to the schema in Big Sur: https://github.com/RhetTbull/osxphotos/wiki/Changes-in-Photos-6---Big-Sur", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-401310732", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 401310732, "node_id": "MDEyOklzc3VlQ29tbWVudDQwMTMxMDczMg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-06-29T10:05:04Z", "updated_at": "2018-06-29T10:07:25Z", "author_association": "CONTRIBUTOR", "body": "@russs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg [kartena/Proj4Leaflet](https://kartena.github.io/Proj4Leaflet/)), although the leaflet side would need to detect or be informed of the original projection?\r\n\r\nAnother possibility would be to provide an easy way/guidance for users to create an FK'd table containing the WGS84 projection of a non-WGS84 geometry in the original/principal table? This could then act as a proxy for serving GeoJSON to the leaflet map?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688479163", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/146", "id": 688479163, "node_id": "MDEyOklzc3VlQ29tbWVudDY4ODQ3OTE2Mw==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-09-07T19:10:33Z", "updated_at": "2020-09-07T19:11:57Z", "author_association": "CONTRIBUTOR", "body": "@simonw -- I've gone ahead and updated the documentation to reflect the changes introduced in this PR. IMO it's ready to merge now.\r\n\r\nIn writing the documentation changes, I began to wonder about the value and role of `batch_size` at all, tbh. May I assume it was originally intended to prevent using the entire row set to determine columns and column types, and that this was a performance consideration? If so, this PR entirely undermines its purpose. 
I've been passing in excess of 500,000 rows at a time to `insert_all()` with these changes, and although I'm sure the performance difference is measurable, it's not really noticeable; given #145, I don't know that any performance advantage outweighs the problems that this approach removes. What do you think about just dropping the argument and defaulting to the maximum `batch_size` permissible given `SQLITE_MAX_VARS`? Are there other reasons one might want to restrict `batch_size` that I've overlooked? I could open a new issue to discuss/implement this.\r\n\r\nOf course the documentation will need to change again too if/when something is done about #147.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688668680, "label": "Handle case where subsequent records (after first batch) include extra columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 626667235, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T12:20:34Z", "updated_at": "2020-05-11T12:20:34Z", "author_association": "CONTRIBUTOR", "body": "@simonw FYI, osxphotos includes a built-in ExifTool class that uses [exiftool](https://exiftool.org/) to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osxphotos command line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example:\r\n\r\n```python\r\nimport osxphotos\r\nphotos = osxphotos.PhotosDB().photos()\r\nexiftool = osxphotos.exiftool.ExifTool(photos[0].path)\r\nexifdata = exiftool.as_dict()\r\ntags = exifdata[\"IPTC:Keywords\"]\r\n```\r\n\r\nNot as elegant perhaps as a Python-only implementation, because ExifTool has to make subprocess calls to an external tool, but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC.\r\n\r\nAs for implementation, ExifTool uses a singleton pattern, so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files). ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1013698557", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/79", "id": 1013698557, "node_id": "IC_kwDOCGYnMM48a8_9", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-01-15T15:15:22Z", "updated_at": "2022-01-15T15:15:22Z", "author_association": "CONTRIBUTOR", "body": "@simonw I have a PR here https://github.com/simonw/sqlite-utils/pull/385 that adds Spatialite helpers on the Python side. 
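As a quick illustration of the already-documented helper referenced earlier in this thread (the database filename is just an example):

```python
from sqlite_utils import Database
from sqlite_utils.utils import find_spatialite

db = Database("spatial.db")
db.conn.enable_load_extension(True)
# find_spatialite() returns the path to the mod_spatialite module if one
# can be found in the usual locations, or None otherwise
db.conn.load_extension(find_spatialite())
```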
Please let me know how it looks.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 557842245, "label": "Helper methods for working with SpatiaLite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/407#issuecomment-1040580250", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/407", "id": 1040580250, "node_id": "IC_kwDOCGYnMM4-Bf6a", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-02-15T17:40:00Z", "updated_at": "2022-02-15T17:40:00Z", "author_association": "CONTRIBUTOR", "body": "@simonw I think this is ready for a look.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1138948786, "label": "Add SpatiaLite helpers to CLI"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029175907", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/385", "id": 1029175907, "node_id": "IC_kwDOCGYnMM49V_pj", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-02-03T16:36:54Z", "updated_at": "2022-02-03T16:36:54Z", "author_association": "CONTRIBUTOR", "body": "@simonw Not sure if you've seen this, but any chance you can run the tests?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1102899312, "label": "Add new spatialite helper methods"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/46#issuecomment-344810525", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/46", "id": 344810525, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDgxMDUyNQ==", "user": {"value": 54999, "label": "ingenieroariel"}, "created_at": "2017-11-16T04:11:25Z", "updated_at": "2017-11-16T04:11:25Z", "author_association": "CONTRIBUTOR", "body": "@simonw On the spatialite support, here is some info to make it work and a screenshot:\r\n\r\n[screenshot: SpatiaLite query results rendered in Datasette]\r\n\r\nI used the following Dockerfile:\r\n```\r\nFROM prolocutor/python3-sqlite-ext:3.5.1-spatialite as build\r\n\r\nRUN mkdir /code\r\nADD . 
/code/\r\n\r\nRUN pip install /code/\r\n\r\nEXPOSE 8001\r\nCMD [\"datasette\", \"serve\", \"/code/ne.sqlite\", \"--host\", \"0.0.0.0\"]\r\n```\r\n\r\nand added this to `prepare_connection`:\r\n```\r\n conn.enable_load_extension(True)\r\n conn.execute(\"SELECT load_extension('/usr/local/lib/mod_spatialite.so')\")\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 271301468, "label": "Dockerfile should build more recent SQLite with FTS5 and spatialite support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1208#issuecomment-774286962", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1208", "id": 774286962, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NDI4Njk2Mg==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-05T21:02:39Z", "updated_at": "2021-02-05T21:02:39Z", "author_association": "CONTRIBUTOR", "body": "@simonw could you please take a look at the PR 1211 that fixes this issue?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 794554881, "label": "A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2183#issuecomment-1716801971", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2183", "id": 1716801971, "node_id": "IC_kwDOBm6k_c5mVFGz", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2023-09-13T01:34:01Z", "updated_at": "2023-09-13T01:34:01Z", "author_association": "CONTRIBUTOR", "body": "@simonw docs are finished, this is ready for review!\r\n\r\nOne thing: I added \"Configuration\" as a top-level item in the documentation site, at the very bottom. Not sure if this is the best, maybe it can be named \"datasette.yaml Configuration\" or something similar?\r\n\r\nMostly because \"Configuration\" by itself can mean many things, but adding \"datasette.yaml\" would make it pretty clear it's about that specific file, and is easier to scan. I'd also be fine with using \"datasette.yaml\" instead of \"datasette.json\", since writing in YAML is much more forgiving (and advanced users will know JSON is also supported)\r\n\r\nAlso, maybe this is a chance to consolidate the docs a bit? I think \"Settings\", \"Configuration\", \"Metadata\", and \"Authentication and permissions\" should possibly be under the same section. Maybe even consolidate the different Plugin pages that exist?\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1891212159, "label": "`datasette.yaml` plugin support"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395507", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395507, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTUwNw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:54:45Z", "updated_at": "2020-05-10T21:54:45Z", "author_association": "CONTRIBUTOR", "body": "@simonw does Photos show valid reverse geolocation info? Are you sure you're using [bpylist2](https://github.com/xa4a/bpylist2) and not bpylist? 
They're both unfortunately imported as \"bpylist\", so if you somehow got the wrong (original bpylist) version installed, it could be the issue. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1796#issuecomment-1364345071", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1796", "id": 1364345071, "node_id": "IC_kwDOBm6k_c5RUkDv", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-12-23T21:27:02Z", "updated_at": "2022-12-23T21:27:02Z", "author_association": "CONTRIBUTOR", "body": "@simonw is this issue closed by #1893?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1355148385, "label": "Research an upgrade to CodeMirror 6"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-346116745", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 346116745, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjExNjc0NQ==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-21T18:23:25Z", "updated_at": "2017-11-21T18:23:25Z", "author_association": "CONTRIBUTOR", "body": "@simonw ready for a review and merge if you want.\r\n\r\nThere's still some nasty duplicated code in cli.py and utils.py, which is just going to get worse if/when we start adding any other deploy targets (and I want to do one for cloud.gov, at least). I think there's an opportunity for some refactoring here. I'm happy to do that now as part of this PR, or if you merge this first I'll do it in a different one.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1552#issuecomment-995296725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1552", "id": 995296725, "node_id": "IC_kwDOBm6k_c47UwXV", "user": {"value": 3556, "label": "davidbgk"}, "created_at": "2021-12-15T23:29:32Z", "updated_at": "2021-12-15T23:29:32Z", "author_association": "CONTRIBUTOR", "body": "@simonw thank you for your fast answer and your guidance!\r\n\r\nWhile digging into the code, I found an undocumented way of doing it:\r\n\r\n```yaml\r\nfacets: [\"Facet for a column\", {\"array\": \"Facet for an array\"}]\r\n```\r\n\r\nThe only remaining problem with that solution is here: https://github.com/simonw/datasette/blob/250db8192cb8aba5eb8cd301ccc2a49525bc3d24/datasette/facets.py#L33\r\n\r\nWe have:\r\n\r\n```python\r\ntype, metadata_config = metadata_config.items()[0]\r\n```\r\n\r\nBut it requires casting the `dict_items` as a list prior to accessing the first element:\r\n\r\n```python\r\ntype, metadata_config = list(metadata_config.items())[0]\r\n```\r\n\r\nI guess it's an unspotted bug? 
(I mean, independently of the facets-with-arrays issue.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1078702875, "label": "Allow to set `facets_array` in metadata (like current `facets`)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1789#issuecomment-1223347322", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1789", "id": 1223347322, "node_id": "IC_kwDOBm6k_c5I6sx6", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2022-08-23T00:03:20Z", "updated_at": "2022-08-23T00:03:20Z", "author_association": "CONTRIBUTOR", "body": "@simonw to build the extension on Ubuntu, you can run:\r\n\r\n```\r\napt-get update && apt-get install libsqlite3-dev gcc\r\ngcc ext.c -fPIC -shared -o ext.so\r\n```\r\n\r\nI'm not the best with Actions, but if you set the cache key to `ext.c`, run those two commands to download dependencies + compile to `ext.so`, then the unit test should pick it up and run it correctly. Let me know if you want me to update the PR with that added.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1344823170, "label": "Add new entrypoint option to `--load-extension`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2157#issuecomment-1700291967", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2157", "id": 1700291967, "node_id": "IC_kwDOBm6k_c5lWGV_", "user": {"value": 15178711, "label": "asg017"}, "created_at": "2023-08-31T02:45:56Z", "updated_at": "2023-08-31T02:45:56Z", "author_association": "CONTRIBUTOR", "body": "@simonw what do you think about adding a `DATASETTE_INTERNAL_DB_PATH` env variable which, when defined, is the default location of the internal DB? This means when the `--internal` flag is NOT provided, Datasette would check to see if `DATASETTE_INTERNAL_DB_PATH` exists, and if so, use that as the internal database (falling back to an ephemeral in-memory database otherwise).\r\n\r\nMy rationale: some plugins may require, or strongly encourage, a persistent internal database (`datasette-comments`, `datasette-bookmarks`, `datasette-link-shortener`, etc.). However, for users that have a global installation of Datasette (say from `brew install` or a global `pip install`), it would be annoying having to specify `--internal` every time. 
So instead, they can just add `export DATASETTE_INTERNAL_DB_PATH=\"/path/to/internal.db\"` to their bashrc/zshrc/wherever to not have to worry about `--internal`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1865869205, "label": "Proposal: Make the `_internal` database persistent, customizable, and hidden"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/511#issuecomment-510730200", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/511", "id": 510730200, "node_id": "MDEyOklzc3VlQ29tbWVudDUxMDczMDIwMA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2019-07-12T03:23:22Z", "updated_at": "2019-07-12T03:23:22Z", "author_association": "CONTRIBUTOR", "body": "@simonw yes, it works fine on Windows, but the test suite doesn't run properly; for that I had to use WSL", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 456578474, "label": "Get Datasette tests passing on Windows in GitHub Actions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2003#issuecomment-1402898033", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2003", "id": 1402898033, "node_id": "IC_kwDOBm6k_c5TnoZx", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-01-25T00:54:41Z", "updated_at": "2023-01-25T00:54:41Z", "author_association": "CONTRIBUTOR", "body": "@simonw, let me know what you think about this approach!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1555701851, "label": "Show referring tables and rows when the referring foreign key is compound"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1836#issuecomment-1271103097", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1836", "id": 1271103097, "node_id": "IC_kwDOBm6k_c5Lw355", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T04:43:41Z", "updated_at": "2022-10-07T04:43:41Z", "author_association": "CONTRIBUTOR", "body": "@simonw, should I open up a new issue for investigating the differences between \"immutable=1\" and \"mode=ro\" and possibly switching to \"mode=ro\"? Or would you like to keep that conversation in this issue?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1400374908, "label": "docker image is duplicating db files somehow"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/61#issuecomment-533818697", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/61", "id": 533818697, "node_id": "MDEyOklzc3VlQ29tbWVudDUzMzgxODY5Nw==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2019-09-21T18:09:01Z", "updated_at": "2019-09-21T18:09:28Z", "author_association": "CONTRIBUTOR", "body": "@witeshadow The library version doesn't have helpers around CSV (at least not from what I can see in the code). \r\n\r\nBut here's a snippet that makes it easy to insert from CSV using the library. 
\r\n\r\n```python\r\nimport csv\r\nfrom sqlite_utils import Database\r\n\r\n# CSV Reader\r\n\r\ncsv_file = open(\"filename.csv\") # open the csv file.\r\nreader = csv.reader(csv_file) # Create a CSV reader\r\nheaders = next(reader) # First line is the header\r\ndocs = (dict(zip(headers, row)) for row in reader)\r\n\r\n# Now you can use the `sqlite_utils` library. \r\n\r\ndb = Database(\"my_database.db\")\r\ndb[\"table_name\"].insert_all(docs)\r\n```\r\n\r\nThis snippet is adapted from reading the CLI source code on how it implements the csv option.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 491219910, "label": "importing CSV to SQLite as library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1655#issuecomment-1767219901", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1655", "id": 1767219901, "node_id": "IC_kwDOBm6k_c5pVaK9", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-10-17T21:29:03Z", "updated_at": "2023-10-17T21:29:03Z", "author_association": "CONTRIBUTOR", "body": "@yejiyang why don\u2019t you move this discussion to my fork to spare simon\u2019s notifications ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1163369515, "label": "query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1973#issuecomment-1407523547", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1973", "id": 1407523547, "node_id": "IC_kwDOBm6k_c5T5Rrb", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-29T00:40:31Z", "updated_at": "2023-01-29T00:40:31Z", "author_association": "CONTRIBUTOR", "body": "A +1 for switching to `CustomRow`: I think you currently only get a `CustomRow` if the result set had a column that was an fkey ([this code](https://github.com/simonw/datasette/blob/3c352b7132ef09b829abb69a0da0ad00be5edef9/datasette/views/table.py#L667-L682))\r\n\r\nOtherwise you get vanilla `sqlite3.Row`s, which will fail if you try to access `.columns` or look up the cell by name, which surprised me recently.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1515815014, "label": "render_cell plugin hook's row object is not a sqlite.Row"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-558687342", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 558687342, "node_id": "MDEyOklzc3VlQ29tbWVudDU1ODY4NzM0Mg==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2019-11-26T15:40:00Z", "updated_at": "2019-11-26T15:40:00Z", "author_association": "CONTRIBUTOR", "body": "A bit of background: the reason `heroku git:clone` brings down an empty directory is because `datasette publish heroku` uses the [builds API](https://devcenter.heroku.com/articles/build-and-release-using-the-api), rather than a `git push`, to release the app. I originally did this because it seemed like a lower bar than having a working `git`, but the downside is, as you found out, that tweaking the created app is hard. 
\r\n\r\nSo there's one option -- change `datasette publish heroku` to use `git push` instead of `heroku builds:create`.\r\n\r\n@pkoppstein - what you suggested seems like it ought to work (you don't need maintenance mode, though). I'm not sure why it doesn't.\r\n\r\nYou could also look into using the [slugs API](https://devcenter.heroku.com/articles/platform-api-deploying-slugs) to download the slug, change `metadata.json`, re-pack and re-upload the slug.\r\n\r\nUltimately, though, I think @simonw's idea of reading `metadata.json` from an external source might be better (#357). Reading from an alternate URL would be fine, or you could also just stuff the whole `metadata.json` into a Heroku config var, and write a plugin to read it from there. \r\n\r\nHope this helps a bit!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1810#issuecomment-1248204219", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1810", "id": 1248204219, "node_id": "IC_kwDOBm6k_c5KZhW7", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-09-15T14:44:47Z", "updated_at": "2022-09-15T14:46:26Z", "author_association": "CONTRIBUTOR", "body": "A couple+ of possible use case examples:\r\n\r\n- someone has a collection of articles indexed with FTS; they want to publish a simple search tool over the results;\r\n- someone has an image collection and they want to be able to search over description text to return images;\r\n- someone has a set of locations with descriptions, and wants to run a query over places and descriptions and get results as a listing or on a map;\r\n- someone has a set of audio or video files with titles, descriptions and/or transcripts, and wants to be able to search over them and return playable versions of returned items.\r\n\r\nIn many cases, I suspect the raw content will be in one table, but the search table will be a second (eg FTS) table. 
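A sketch of that common two-table shape (all table and column names here are illustrative):

```sql
-- content table, plus a separate FTS5 index over its text columns
create table articles (id integer primary key, title text, body text);
create virtual table articles_fts using fts5(
    title, body,
    content='articles', content_rowid='id'
);

-- search the FTS table, then join back to the content table for display
select articles.id, articles.title
from articles
join articles_fts on articles_fts.rowid = articles.id
where articles_fts match :search
order by rank;
```

(External-content FTS tables like this also need triggers or a periodic rebuild to stay in sync with the content table.)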
Generally, the search may be over one or more joined tables, and the results constructed from one or more tables (which may or may not be distinct from the search tables).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1374626873, "label": "Featured table(s) on the homepage"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1238#issuecomment-789186458", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1238", "id": 789186458, "node_id": "MDEyOklzc3VlQ29tbWVudDc4OTE4NjQ1OA==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2021-03-02T20:19:30Z", "updated_at": "2021-03-02T20:19:30Z", "author_association": "CONTRIBUTOR", "body": "A custom `templates/index.html` seems to work, and custom `pages` work as a workaround if you move them to `pages/base_url_dir`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813899472, "label": "Custom pages don't work with base_url setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-344710204", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 344710204, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDcxMDIwNA==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-15T19:57:50Z", "updated_at": "2017-11-15T19:57:50Z", "author_association": "CONTRIBUTOR", "body": "A first basic stab at making this work, just to prove the approach. Right now this requires [a Heroku CLI plugin](https://github.com/heroku/heroku-builds), which seems pretty unreasonable. I think this can be replaced with direct API calls, which could clean up a lot of things. 
But I wanted to prove it worked first, and it does.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1495#issuecomment-974108455", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1495", "id": 974108455, "node_id": "IC_kwDOBm6k_c46D7cn", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-11-19T14:14:35Z", "updated_at": "2021-11-19T14:14:35Z", "author_association": "CONTRIBUTOR", "body": "A nudge on this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1033678984, "label": "Allow routes to have extra options"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-777927946", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 777927946, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NzkyNzk0Ng==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-02-12T02:29:54Z", "updated_at": "2021-02-12T02:29:54Z", "author_association": "CONTRIBUTOR", "body": "According to https://github.com/simonw/datasette/blob/master/docs/installation.rst#using-docker it should be\r\n\r\n```\r\ndocker run -p 8001:8001 -v `pwd`:/mnt \\\r\n    datasetteproject/datasette \\\r\n    datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db\r\n```\r\n\r\nThis uses `/mnt/fixtures.db` whereas you're using `fixtures.db` - did you try using this path instead?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2008#issuecomment-1407558284", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2008", "id": 1407558284, "node_id": "IC_kwDOBm6k_c5T5aKM", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-29T04:23:58Z", "updated_at": "2023-01-29T04:24:27Z", "author_association": "CONTRIBUTOR", "body": "Ack, this PR is broken. I see now that the `inner.*` is necessary for ensuring the correct count in the face of rows having duplicate values in views.\r\n\r\nThat fixes the overcounting, but I think it can undercount when the rows have the same data, eg a view like:\r\n\r\n```sql\r\nSELECT '[\"bar\"]' tags UNION ALL SELECT '[\"bar\"]'\r\n```\r\n\r\nwill produce a count of `{\"bar\": 1}`, when it should be `{\"bar\": 2}`. In fact, this could apply in tables without primary keys, too.\r\n\r\nIf `inner` came from a base table that had a primary key or a rowid, we could use those column(s) to solve that case.\r\n\r\nI guess a general solution would be to compute a window function so we have a distinct ID for each row. 
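A sketch of what that could look like (illustrative only, not the eventual patch): number the rows first so that identical rows still get distinct IDs, then count per tag.

```sql
with numbered as (
  select row_number() over () as _id, tags
  from (select '["bar"]' as tags union all select '["bar"]')
)
select j.value as tag, count(distinct _id) as n
from numbered, json_each(numbered.tags) as j
group by j.value;
-- returns tag = 'bar', n = 2 instead of undercounting to 1
```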
Will fiddle to see if I can get that working.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1560982210, "label": "array facet: don't materialize unnecessary columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/439#issuecomment-487542486", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/439", "id": 487542486, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzU0MjQ4Ng==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T11:20:30Z", "updated_at": "2019-04-29T11:20:30Z", "author_association": "CONTRIBUTOR", "body": "Actually I think this is not the whole story because of the rowid issue. I'm going to think about this one a bit more.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438240541, "label": "[WIP] Add primary key to the extra_body_script hook arguments"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/104#issuecomment-346124073", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/104", "id": 346124073, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjEyNDA3Mw==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-21T18:49:55Z", "updated_at": "2017-11-21T18:49:55Z", "author_association": "CONTRIBUTOR", "body": "Actually hang on, don't merge - there are some bugs that #141 masked when I tested this out elsewhere.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274284246, "label": "[WIP] Add publish to heroku support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/424#issuecomment-487692377", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/424", "id": 487692377, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4NzY5MjM3Nw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T18:30:46Z", "updated_at": "2019-04-29T18:30:46Z", "author_association": "CONTRIBUTOR", "body": "Actually no, I ended up not using the inspected column types in my plugin, and the binary column issue can be solved a lot more simply, so I'll close this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 427429265, "label": "Column types in inspected metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2000#issuecomment-1399847946", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2000", "id": 1399847946, "node_id": "IC_kwDOBm6k_c5Tb_wK", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-23T06:08:00Z", "updated_at": "2023-01-23T06:08:00Z", "author_association": "CONTRIBUTOR", "body": "Actually, I discovered [your post](https://til.simonwillison.net/datasette/register-new-plugin-hooks) showing how a plugin can add a Datasette hook. That's wild! 
I've released `datasette-rewrite-sql` that adds this ability, albeit via monkey patching.\r\n\r\nI had hoped to be able to expose `request` to the hook (or, even better `actor`) when the SQL was being run as a result of a user's HTTP request.\r\n\r\nBut some spelunking in the code makes me suspect that would actually require co-operation from Datasette itself. I'd be happy to be wrong and pointed in the right direction, though!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1552368054, "label": "rewrite_sql hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-753531657", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 753531657, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzUzMTY1Nw==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2021-01-02T21:25:36Z", "updated_at": "2021-01-02T21:25:36Z", "author_association": "CONTRIBUTOR", "body": "Actually, on more research, I found out this is handled by the [trove-classifiers package](https://github.com/pypa/trove-classifiers/blob/master/src/trove_classifiers/__init__.py#L2) now, so it's just a one-liner pr instead of fire-up-a-docker-container-and-do-some-migrations", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/590#issuecomment-541587823", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/590", "id": 541587823, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MTU4NzgyMw==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-14T09:58:23Z", "updated_at": "2019-10-14T09:58:23Z", "author_association": "CONTRIBUTOR", "body": "Added tests.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 505818256, "label": "Handle spaces in DB names"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/203#issuecomment-381763651", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/203", "id": 381763651, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTc2MzY1MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-16T21:59:17Z", "updated_at": "2018-04-16T21:59:17Z", "author_association": "CONTRIBUTOR", "body": "Ah, I had no idea you could bind python functions into sqlite!\r\n\r\nI think the primary purpose of this issue has been served now - I'm going to close this and create a new issue for the only bit of this that hasn't been touched yet, which is (optionally) exposing units in the JSON API.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 313837303, "label": "Support for units"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655052451", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655052451, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTA1MjQ1MQ==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-07T18:45:23Z", "updated_at": 
"2020-07-07T18:45:23Z", "author_association": "CONTRIBUTOR", "body": "Ah, I see the problem. The truncate is inside a loop I didn't realize was there.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/523#issuecomment-504809397", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/523", "id": 504809397, "node_id": "MDEyOklzc3VlQ29tbWVudDUwNDgwOTM5Nw==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-06-24T01:38:14Z", "updated_at": "2019-06-24T01:38:14Z", "author_association": "CONTRIBUTOR", "body": "Ah, apologies \u2013 I had found and read those issues, but I was under the impression that they refered only to the filtered row count, not the unfiltered total row count.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459627549, "label": "Show total/unfiltered row count when filtering"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/590#issuecomment-541562581", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/590", "id": 541562581, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MTU2MjU4MQ==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2019-10-14T08:57:46Z", "updated_at": "2019-10-14T08:57:46Z", "author_association": "CONTRIBUTOR", "body": "Ah, thank you \u2013 I saw the need for unit tests but wasn't sure what the best way to add one would be.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 505818256, "label": "Handle spaces in DB names"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111705323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111705323, "node_id": "IC_kwDOBm6k_c5CQ0br", "user": {"value": 127565, "label": "wragge"}, "created_at": "2022-04-28T03:32:06Z", "updated_at": "2022-04-28T03:32:06Z", "author_association": "CONTRIBUTOR", "body": "Ah, that would be it! I have a core set of data which doesn't change to which I want authorised users to be able to submit corrections. I was going to deal with the persistence issue by just grabbing the user corrections at regular intervals and saving to GitHub. I might need to rethink. 
Thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/2001#issuecomment-1403084856", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2001", "id": 1403084856, "node_id": "IC_kwDOBm6k_c5ToWA4", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-01-25T04:31:02Z", "updated_at": "2023-01-25T04:31:02Z", "author_association": "CONTRIBUTOR", "body": "Aha, it's user error on my part.\r\n\r\nAdding\r\n\r\n```\r\nsqlite3_db_config.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_int, ctypes.c_int]\r\n```\r\n\r\nmakes it work reliably both on the CLI and from datasette, and now I can reproduce the errors you mentioned in the issue description.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1553615704, "label": "Datasette is not compatible with SQLite's strict quoting compilation option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030740826", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/399", "id": 1030740826, "node_id": "IC_kwDOCGYnMM49b9ta", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-02-06T02:59:10Z", "updated_at": "2022-02-06T02:59:10Z", "author_association": "CONTRIBUTOR", "body": "All this said, I don't think it's unreasonable to point people to dedicated tools like `geojson-to-sqlite`. If I'm dealing with a bunch of GeoJSON or Shapefiles, I need to something to read those anyway (or I need to figure out virtual tables). But something like this might make it easier to build those libraries, or standardize the underlying parts.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1124731464, "label": "Make it easier to insert geometries, with documentation and maybe code"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1317805482", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1317805482, "node_id": "IC_kwDOBm6k_c5OjB2q", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T23:18:17Z", "updated_at": "2022-11-16T23:18:17Z", "author_association": "CONTRIBUTOR", "body": "Alright with https://github.com/simonw/datasette/pull/1893/commits/f254be4b38936e95e7a7f25866e7c6b0520db96f we should be getting autocomplete on fixture data. 
Give that a test and see what you think", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1893#issuecomment-1317681193", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1893", "id": 1317681193, "node_id": "IC_kwDOBm6k_c5Oijgp", "user": {"value": 95570, "label": "bgrins"}, "created_at": "2022-11-16T21:19:13Z", "updated_at": "2022-11-16T21:19:13Z", "author_association": "CONTRIBUTOR", "body": "Alright, added Cmd+Enter to submit (Ctrl+Enter on Windows as well, because of using Meta-Enter in CodeMirror). We can make that macOS-only by changing the combo to Cmd+Enter specifically, but I think it's probably fine to have both.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1450363982, "label": "Upgrade to CodeMirror 6, add SQL autocomplete"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/279#issuecomment-391077700", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/279", "id": 391077700, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MTA3NzcwMA==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2018-05-22T17:38:17Z", "updated_at": "2018-05-22T17:38:17Z", "author_association": "CONTRIBUTOR", "body": "Alright, that should work now -- let me know if you would prefer any different behaviour.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325352370, "label": "Add version number support with Versioneer"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/446#issuecomment-489222223", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/446", "id": 489222223, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTIyMjIyMw==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-05-03T20:01:19Z", "updated_at": "2019-05-03T20:01:29Z", "author_association": "CONTRIBUTOR", "body": "Also I have a slight preference against (ab)using `__slots__` to enforce fields, although I have done it myself in the past. 
It would be possible to do this with `__setattr__` instead, although that's an implementation detail and I'm not too fussed about it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 440134714, "label": "Define mechanism for plugins to return structured data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/441#issuecomment-487748271", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/441", "id": 487748271, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4Nzc0ODI3MQ==", "user": {"value": 45057, "label": "russss"}, "created_at": "2019-04-29T21:20:17Z", "updated_at": "2019-04-29T21:20:17Z", "author_association": "CONTRIBUTOR", "body": "Also I just pushed a change to add registered output renderers to the templates:\r\n![image](https://user-images.githubusercontent.com/45057/56927799-f18e0580-6acc-11e9-8ea9-a0ee961323ec.png)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 438437973, "label": "Add register_output_renderer hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/2052#issuecomment-1530822437", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/2052", "id": 1530822437, "node_id": "IC_kwDOBm6k_c5bPn8l", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2023-05-02T03:35:30Z", "updated_at": "2023-05-02T16:02:38Z", "author_association": "CONTRIBUTOR", "body": "Also, just checking - is this how I'd write bulletproof plugin registration code that is robust against the order in which the script tags load (eg if both my code and the Datasette code are loaded via a `
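(Editor's note: a minimal sketch of the two field-enforcement approaches discussed in the `__slots__` comment above. Class and field names here are illustrative only, not Datasette's actual plugin-result API.)

```python
class SlotsResult:
    # __slots__ means instances have no __dict__, so assigning any
    # attribute not listed here raises AttributeError.
    __slots__ = ("body", "content_type", "status")

    def __init__(self, body, content_type="text/plain", status=200):
        self.body = body
        self.content_type = content_type
        self.status = status


class SetattrResult:
    # The same enforcement via __setattr__, without giving up __dict__.
    _allowed = {"body", "content_type", "status"}

    def __setattr__(self, name, value):
        if name not in self._allowed:
            raise AttributeError(f"unknown field: {name}")
        super().__setattr__(name, value)


r = SlotsResult("hello")
r.status = 404          # fine
try:
    r.staus = 500       # typo is caught at assignment time
except AttributeError as e:
    print(e)
```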