{"html_url": "https://github.com/simonw/sqlite-utils/issues/240#issuecomment-786036355", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240", "id": 786036355, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjAzNjM1NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T16:38:07Z", "updated_at": "2021-02-25T16:38:07Z", "author_association": "OWNER", "body": "Documentation: https://sqlite-utils.datasette.io/en/latest/python-api.html#listing-rows-with-their-primary-keys", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816560819, "label": "table.pks_and_rows_where() method returning primary keys along with the rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-786035142", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 786035142, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjAzNTE0Mg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T16:36:17Z", "updated_at": "2021-02-25T16:36:17Z", "author_association": "OWNER", "body": "WIP in a pull request.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/240#issuecomment-786016380", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240", "id": 786016380, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjAxNjM4MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T16:10:01Z", "updated_at": "2021-02-25T16:10:01Z", "author_association": "OWNER", "body": "I prototyped this and I like it:\r\n```\r\nIn [1]: import sqlite_utils\r\nIn [2]: db = 
sqlite_utils.Database(\"/Users/simon/Dropbox/Development/datasette/fixtures.db\")\r\nIn [3]: list(db[\"compound_primary_key\"].pks_and_rows_where())\r\nOut[3]: [(('a', 'b'), {'pk1': 'a', 'pk2': 'b', 'content': 'c'})]\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816560819, "label": "table.pks_and_rows_where() method returning primary keys along with the rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/240#issuecomment-786007209", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240", "id": 786007209, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjAwNzIwOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:57:50Z", "updated_at": "2021-02-25T15:57:50Z", "author_association": "OWNER", "body": "`table.pks_and_rows_where(...)` is explicit and I think less ambiguous than the other options.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816560819, "label": "table.pks_and_rows_where() method returning primary keys along with the rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/240#issuecomment-786006794", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240", "id": 786006794, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjAwNjc5NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:57:17Z", "updated_at": "2021-02-25T15:57:28Z", "author_association": "OWNER", "body": "I quite like `pks_with_rows_where(...)` - but grammatically it suggests it will return the primary keys that exist where their rows match the criteria - \"pks with rows\" can be interpreted as \"pks for the rows that...\" as opposed to \"pks accompanied by rows\"", "reactions": 
"{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816560819, "label": "table.pks_and_rows_where() method returning primary keys along with the rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/240#issuecomment-786005078", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240", "id": 786005078, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjAwNTA3OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:54:59Z", "updated_at": "2021-02-25T15:56:16Z", "author_association": "OWNER", "body": "Is `pk_rows_where()` a good name? It sounds like it returns \"primary key rows\" which isn't a thing. It actually returns rows along with their primary key.\r\n\r\nOther options:\r\n\r\n- `table.rows_with_pk_where(...)` - should this return `(row, pk)` rather than `(pk, row)`?\r\n- `table.rows_where_pk(...)`\r\n- `table.pk_and_rows_where(...)`\r\n- `table.pk_with_rows_where(...)`\r\n- `table.pks_with_rows_where(...)` - because rows is pluralized, so pks should be pluralized too?\r\n- `table.pks_rows_where(...)`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816560819, "label": "table.pks_and_rows_where() method returning primary keys along with the rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/240#issuecomment-786001768", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240", "id": 786001768, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NjAwMTc2OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:50:28Z", "updated_at": "2021-02-25T15:52:12Z", "author_association": "OWNER", "body": "One option: `.rows_where()` could grow a `ensure_pk=True` option which checks to see if the table is a 
`rowid` table and, if it is, includes that in the `select`.\r\n\r\nOr... how about you can call `.rows_where(..., pks=True)` and it will yield `(pk, rowdict)` tuple pairs instead of just returning the sequence of dictionaries?\r\n\r\nI'm always a little bit nervous of methods that vary their return type based on their arguments. Maybe this would be a separate method instead?\r\n```python\r\n for pk, row in table.pk_rows_where(...):\r\n # ...\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816560819, "label": "table.pks_and_rows_where() method returning primary keys along with the rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785992158", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785992158, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk5MjE1OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:37:04Z", "updated_at": "2021-02-25T15:37:04Z", "author_association": "OWNER", "body": "Here's the current implementation of `.extract()`: https://github.com/simonw/sqlite-utils/blob/806c21044ac8d31da35f4c90600e98115aade7c6/sqlite_utils/db.py#L1049-L1074\r\n\r\nTricky detail here: I create the lookup table first, based on the types of the columns that are being extracted.\r\n\r\nI need to do this because extraction currently uses unique tuples of values, so the table has to be created in advance.\r\n\r\nBut if I'm using these new expand functions to figure out what's going to be extracted, I don't know the names of the columns and their types in advance. I'm only going to find those out during the transformation.\r\n\r\nThis may turn out to be incompatible with how `.extract()` works at the moment. I may need a new method, `.extract_expand()` perhaps? 
It could be simpler - work only against a single column for example.\r\n\r\nI can still use the existing `sqlite-utils extract` CLI command though, with a `--json` flag and a rule that you can't run it against multiple columns.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785983837", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785983837, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk4MzgzNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:25:21Z", "updated_at": "2021-02-25T15:28:57Z", "author_association": "OWNER", "body": "Problem with calling this argument `transform=` is that the term \"transform\" already means something else in this library.\r\n\r\nI could use `convert=` instead.\r\n\r\n... but that doesn't instantly make me think of turning a value into multiple columns.\r\n\r\nHow about `expand=`? I've not used that term anywhere yet.\r\n\r\n db[\"Reports\"].extract([\"Reported by\"], expand={\"Reported by\": json.loads})\r\n\r\nI think that works. 
You're expanding a single value into several columns of information.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785983070", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785983070, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk4MzA3MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:24:17Z", "updated_at": "2021-02-25T15:24:17Z", "author_association": "OWNER", "body": "I'm going to go with last-wins - so if multiple transform functions return the same key the last one will over-write the others.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785980813", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785980813, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk4MDgxMw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:21:02Z", "updated_at": "2021-02-25T15:23:47Z", "author_association": "OWNER", "body": "Maybe the Python version takes an optional dictionary mapping column names to transformation functions? 
It could then merge all of those results together - and maybe throw an error if the same key is produced by more than one column.\r\n\r\n```python\r\n db[\"Reports\"].extract([\"Reported by\"], transform={\"Reported by\": json.loads})\r\n```\r\nOr it could have an option for different strategies if keys collide: first wins, last wins, throw exception, add a prefix to the new column name. That feels a bit too complex for an edge-case though.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785980083", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785980083, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk4MDA4Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:20:02Z", "updated_at": "2021-02-25T15:20:02Z", "author_association": "OWNER", "body": "It would be OK if the CLI version only allows you to specify a single column if you are using the `--json` option.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785979769", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785979769, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk3OTc2OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:19:37Z", "updated_at": "2021-02-25T15:19:37Z", "author_association": "OWNER", "body": "For the Python version I'd like to be able to provide a transformation callback function - which can be 
`json.loads` but could also be anything else which accepts the value of the current column and returns a Python dictionary of columns and their values to use in the new table.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785979192", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785979192, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk3OTE5Mg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:18:46Z", "updated_at": "2021-02-25T15:18:46Z", "author_association": "OWNER", "body": "Likewise the `sqlite-utils extract` command takes one or more columns:\r\n```\r\nUsage: sqlite-utils extract [OPTIONS] PATH TABLE COLUMNS...\r\n\r\n Extract one or more columns into a separate table\r\n\r\nOptions:\r\n --table TEXT Name of the other table to extract columns to\r\n --fk-column TEXT Name of the foreign key column to add to the table\r\n --rename ... 
Rename this column in extracted table\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/239#issuecomment-785978689", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/239", "id": 785978689, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk3ODY4OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:18:03Z", "updated_at": "2021-02-25T15:18:03Z", "author_association": "OWNER", "body": "The Python `.extract()` method currently starts like this:\r\n```python\r\ndef extract(self, columns, table=None, fk_column=None, rename=None):\r\n rename = rename or {}\r\n if isinstance(columns, str):\r\n columns = [columns]\r\n if not set(columns).issubset(self.columns_dict.keys()):\r\n raise InvalidColumns(\r\n \"Invalid columns {} for table with columns {}\".format(\r\n columns, list(self.columns_dict.keys())\r\n )\r\n )\r\n ...\r\n```\r\nNote that it takes a list of columns (and treats a string as a single item list). 
That's because it can be called with a list of columns and it will use them to populate another table of unique tuples of those column values.\r\n\r\nSo a new mechanism that can instead read JSON values from a single column needs to be compatible with that existing design.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816526538, "label": "sqlite-utils extract could handle nested objects"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/238#issuecomment-785972074", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/238", "id": 785972074, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTk3MjA3NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-25T15:08:36Z", "updated_at": "2021-02-25T15:08:36Z", "author_association": "OWNER", "body": "I bet the bug is in here: https://github.com/simonw/sqlite-utils/blob/806c21044ac8d31da35f4c90600e98115aade7c6/sqlite_utils/db.py#L593-L602", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 816523763, "label": ".add_foreign_key() corrupts database if column contains a space"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1243#issuecomment-785485597", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1243", "id": 785485597, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NTQ4NTU5Nw==", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2021-02-25T00:28:30Z", "updated_at": "2021-02-25T00:28:30Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/1243?src=pr&el=h1) Report\n> Merging [#1243](https://codecov.io/gh/simonw/datasette/pull/1243?src=pr&el=desc) (887bfd2) into 
[main](https://codecov.io/gh/simonw/datasette/commit/726f781c50e88f557437f6490b8479c3d6fabfc2?el=desc) (726f781) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/1243/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1)](https://codecov.io/gh/simonw/datasette/pull/1243?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## main #1243 +/- ##\n=======================================\n Coverage 91.56% 91.56% \n=======================================\n Files 34 34 \n Lines 4242 4242 \n=======================================\n Hits 3884 3884 \n Misses 358 358 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/1243?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/1243?src=pr&el=footer). Last update [726f781...32652d9](https://codecov.io/gh/simonw/datasette/pull/1243?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 815955014, "label": "fix small typo"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-784638394", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 784638394, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDYzODM5NA==", "user": {"value": 306240, "label": "UtahDave"}, "created_at": "2021-02-24T00:36:18Z", "updated_at": "2021-02-24T00:36:18Z", "author_association": "NONE", "body": "I noticed that @simonw is using black for formatting. 
I ran black on my additions in this PR.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1241#issuecomment-784567547", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1241", "id": 784567547, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDU2NzU0Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-23T22:45:56Z", "updated_at": "2021-02-23T22:46:12Z", "author_association": "OWNER", "body": "I really like the way the Share feature on Stack Overflow works: https://stackoverflow.com/questions/18934149/how-can-i-use-postgresqls-text-column-type-in-django\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 814595021, "label": "Share button for copying current URL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1241#issuecomment-784347646", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1241", "id": 784347646, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDM0NzY0Ng==", "user": {"value": 7107523, "label": "Kabouik"}, "created_at": "2021-02-23T16:55:26Z", "updated_at": "2021-02-23T16:57:39Z", "author_association": "NONE", "body": "> I think it's possible that many users these days no longer assume they can paste a URL from the browser address bar (if they ever understood that at all) because too many apps are SPAs with broken URLs.\r\n\r\nAbsolutely, that's why I thought my corner case with `iframe` preventing access to the datasette URL could actually be relevant in more general situations.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, 
\"rocket\": 0, \"eyes\": 0}", "issue": {"value": 814595021, "label": "Share button for copying current URL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1241#issuecomment-784334931", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1241", "id": 784334931, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDMzNDkzMQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-23T16:37:26Z", "updated_at": "2021-02-23T16:37:26Z", "author_association": "OWNER", "body": "A \"Share link\" button would only be needed on the table page and the arbitrary query page I think - and maybe on the row page, especially as that page starts to grow more features in the future.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 814595021, "label": "Share button for copying current URL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1241#issuecomment-784333768", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1241", "id": 784333768, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDMzMzc2OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-23T16:35:51Z", "updated_at": "2021-02-23T16:35:51Z", "author_association": "OWNER", "body": "This can definitely be done with a plugin.\r\n\r\nAdding to Datasette itself is an interesting idea. 
I think it's possible that many users these days no longer assume they can paste a URL from the browser address bar (if they ever understood that at all) because too many apps are SPAs with broken URLs.\r\n\r\nThe shareable URLs are actually a key feature of Datasette - so maybe they should be highlighted in the default UI?\r\n\r\nI built a \"copy to clipboard\" feature for `datasette-copyable` and wrote up how that works here: https://til.simonwillison.net/javascript/copy-button", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 814595021, "label": "Share button for copying current URL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1240#issuecomment-784312460", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1240", "id": 784312460, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDMxMjQ2MA==", "user": {"value": 7107523, "label": "Kabouik"}, "created_at": "2021-02-23T16:07:10Z", "updated_at": "2021-02-23T16:08:28Z", "author_association": "NONE", "body": "Likewise, while answering to another issue regarding the Vega plugin, I realized that there is no such way of linking rows after a custom query, I only get this \"Link\" column with individual URLs for the default SQL view:\r\n\r\n![ss-2021-02-23_170559](https://user-images.githubusercontent.com/7107523/108871491-1e3fd500-75f1-11eb-8f76-5d5a82cc14d7.png)\r\n\r\nOr is it there and I am just missing the option in my custom queries?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 814591962, "label": "Allow facetting on custom queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1218#issuecomment-784157345", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1218", "id": 
784157345, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NDE1NzM0NQ==", "user": {"value": 1244799, "label": "soobrosa"}, "created_at": "2021-02-23T12:12:17Z", "updated_at": "2021-02-23T12:12:17Z", "author_association": "NONE", "body": "Topline this fixed the same problem for me.\r\n```\r\nbrew install python@3.7\r\nln -s /usr/local/opt/python@3.7/bin/python3.7 /usr/local/opt/python/bin/python3.7\r\npip3 uninstall -y numpy\r\npip3 uninstall -y setuptools\r\npip3 install setuptools\r\npip3 install numpy\r\npip3 install datasette-publish-fly\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 803356942, "label": " /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-783794520", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5", "id": 783794520, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mzc5NDUyMA==", "user": {"value": 306240, "label": "UtahDave"}, "created_at": "2021-02-23T01:13:54Z", "updated_at": "2021-02-23T01:13:54Z", "author_association": "NONE", "body": "Also, @simonw I created a test based off the existing tests. 
I think it's working correctly", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813880401, "label": "WIP: Add Gmail takeout mbox import"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1239#issuecomment-783774084", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1239", "id": 783774084, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mzc3NDA4NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-23T00:18:56Z", "updated_at": "2021-02-23T00:19:18Z", "author_association": "OWNER", "body": "Bug is here: https://github.com/simonw/datasette/blob/42caabf7e9e6e4d69ef6dd7de16f2cd96bc79d5b/datasette/filters.py#L149-L165\r\n\r\nThose `json_each` lines should be:\r\n\r\n select {t}.rowid from {t}, json_each([{t}].[{c}]) j", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813978858, "label": "JSON filter fails if column contains spaces"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-783688547", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4", "id": 783688547, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzY4ODU0Nw==", "user": {"value": 306240, "label": "UtahDave"}, "created_at": "2021-02-22T21:31:28Z", "updated_at": "2021-02-22T21:31:28Z", "author_association": "NONE", "body": "@Btibert3 I've opened a PR with my initial attempt at this. 
Would you be willing to give this a try?\r\n\r\nhttps://github.com/dogsheep/google-takeout-to-sqlite/pull/5", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 778380836, "label": "Feature Request: Gmail"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1237#issuecomment-783676548", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1237", "id": 783676548, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzY3NjU0OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-22T21:10:19Z", "updated_at": "2021-02-22T21:10:25Z", "author_association": "OWNER", "body": "This is another change which is a little bit hard to figure out because I haven't solved #878 yet.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812704869, "label": "?_pretty=1 option for pretty-printing JSON output"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1234#issuecomment-783674659", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1234", "id": 783674659, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzY3NDY1OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-22T21:06:28Z", "updated_at": "2021-02-22T21:06:28Z", "author_association": "OWNER", "body": "I'm not going to work on this for a while, but if anyone has needs or ideas around that they can add them to this issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811505638, "label": "Runtime support for ATTACHing multiple databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-783674038", 
"issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 783674038, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzY3NDAzOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-22T21:05:21Z", "updated_at": "2021-02-22T21:05:21Z", "author_association": "OWNER", "body": "It's good on mobile - iOS at least. Going to close this - open new issues if anyone reports bugs.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 1, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/220#issuecomment-783662968", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/220", "id": 783662968, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzY2Mjk2OA==", "user": {"value": 649467, "label": "mhalle"}, "created_at": "2021-02-22T20:44:51Z", "updated_at": "2021-02-22T20:44:51Z", "author_association": "NONE", "body": "Actually, coming back to this, I have a clearer use case for enabling fts generation for views: making it easier to bring in text from lookup tables and other joins. \r\n\r\nThe datasette documentation describes populating an fts table like so:\r\n```\r\nINSERT INTO \"items_fts\" (rowid, name, description, category_name)\r\n SELECT items.rowid,\r\n items.name,\r\n items.description,\r\n categories.name\r\n FROM items JOIN categories ON items.category_id=categories.id;\r\n```\r\nAlternatively if you have fts support in sqlite_utils for views (which sqlite and fts5 support), you can do the same thing just by creating a view that captures the above joins as columns, then creating an fts table from that view. Such an fts table can be created using sqlite_utils, whereas one created with your method can't. 
\r\n\r\nThe resulting fts table can then be used by a whole family of related tables and views in the manner you described earlier in this issue. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 783778672, "label": "Better error message for *_fts methods against views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1166#issuecomment-783560017", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1166", "id": 783560017, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzU2MDAxNw==", "user": {"value": 94334, "label": "thorn0"}, "created_at": "2021-02-22T18:00:57Z", "updated_at": "2021-02-22T18:13:11Z", "author_association": "NONE", "body": "Hi! I don't think Prettier supports this syntax for globs: `datasette/static/*[!.min].js` Are you sure that works?\r\nPrettier uses https://github.com/mrmlnc/fast-glob, which in turn uses https://github.com/micromatch/micromatch, and the docs for these packages don't mention this syntax. As per the docs, square brackets should work as in regexes (`foo-[1-5].js`).\r\n\r\nTested it. Apparently, it works as a negated character class in regexes (like `[^.min]`). I wonder where this syntax comes from. 
Micromatch doesn't support that:\r\n\r\n```js\r\nmicromatch(['static/table.js', 'static/n.js'], ['static/*[!.min].js']);\r\n// result: [\"static/n.js\"] -- brackets are treated like [!.min] in regexes, without negation\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777140799, "label": "Adopt Prettier for JavaScript code formatting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-783265830", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 783265830, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MzI2NTgzMA==", "user": {"value": 30665, "label": "frankieroberto"}, "created_at": "2021-02-22T10:21:14Z", "updated_at": "2021-02-22T10:21:14Z", "author_association": "NONE", "body": "@simonw:\r\n\r\n> The problem there is that ?_size=x isn't actually doing the same thing as the SQL limit keyword.\r\n\r\nInteresting! Although I don't think it matters too much what the underlying implementation is - I more meant that `limit` is familiar to developers conceptually as \"up to and including this number, if they exist\", whereas \"size\" is potentially more ambiguous. 
However, it's probably no big deal either way.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782789598", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782789598, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc4OTU5OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-21T03:30:02Z", "updated_at": "2021-02-21T03:30:02Z", "author_association": "OWNER", "body": "Another benefit to default:object - I could include a key that shows a list of available extras. I could then use that to power an interactive API explorer.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782765665", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782765665, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc2NTY2NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T23:34:41Z", "updated_at": "2021-02-20T23:34:41Z", "author_association": "OWNER", "body": "OK, I'm back to the \"top level object as the default\" side of things now - it's pretty much unanimous at this point, and it's certainly true that it's not a decision you'll ever regret.", "reactions": "{\"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/782#issuecomment-782756398", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782756398, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc1NjM5OA==", "user": {"value": 601316, "label": "simonrjones"}, "created_at": "2021-02-20T22:05:48Z", "updated_at": "2021-02-20T22:05:48Z", "author_association": "NONE", "body": "> I think it\u2019s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default.\n\nI agree it is more predictable if the top level item is an object with a rows or data object that contains an array of data, which then allows for other top-level meta data. \n\nI can see the argument for removing this and just using an array for convenience - but I think that's OK as an option (as you have now).\n\nRather than have lots of top-level keys you could have a \"meta\" object to contain non-data stuff. You could use something like \"links\" for API endpoint URLs (or use a standard like HAL). Which would then leave the top level a bit cleaner - if that's what you want. \n\nHave you had much feedback from users who use the Datasette API a lot?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782748501", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782748501, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0ODUwMQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:58:18Z", "updated_at": "2021-02-20T20:58:18Z", "author_association": "OWNER", "body": "Yet another option: support a `?_path=x` option which returns a nested path from the result. 
So you could do this:\r\n\r\n`/github/commits.json?_path=rows` - to get back a top-level array pulled from the `\"rows\"` key.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782748093", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782748093, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0ODA5Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:54:52Z", "updated_at": "2021-02-20T20:54:52Z", "author_association": "OWNER", "body": "> Have you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don\u2019t parse the JSON. Could be default (can\u2019t be much bigger with gzip?) or opt-in.\r\n\r\nAdding a `?_pretty=1` option that does that is a great idea, I'm filing a ticket for it: #1237", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782747878", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782747878, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0Nzg3OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:53:11Z", "updated_at": "2021-02-20T20:53:11Z", "author_association": "OWNER", "body": "... 
though thinking about this further, I could re-implement the `select * from commits` (but only return a max of 10 results) feature using a nested `select * from (select * from commits) limit 10` query.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782747743", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782747743, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0Nzc0Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:52:10Z", "updated_at": "2021-02-20T20:52:10Z", "author_association": "OWNER", "body": "> Minor suggestion: rename `size` query param to `limit`, to better reflect that it\u2019s a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword?\r\n\r\nThe problem there is that `?_size=x` isn't actually doing the same thing as the SQL `limit` keyword. Consider this query:\r\n\r\nhttps://latest-with-plugins.datasette.io/github?sql=select+*+from+commits - `select * from commits`\r\n\r\nDatasette returns 1,000 results, and shows a \"Custom SQL query returning more than 1,000 rows\" message at the top. That's the `size` kicking in - I only fetch the first 1,000 results from the cursor to avoid exhausting resources. 
In the JSON version of that at https://latest-with-plugins.datasette.io/github.json?sql=select+*+from+commits there's a `\"truncated\": true` key to let you know what happened.\r\n\r\nI find myself using `?_size=2` against Datasette occasionally if I know the rows being returned are really big and I don't want to load 10+MB of HTML.\r\n\r\nThis is only really a concern for arbitrary SQL queries though - for table pages such as https://latest-with-plugins.datasette.io/github/commits?_size=10 adding `?_size=10` actually puts a `limit 10` on the underlying SQL query.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782747164", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782747164, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0NzE2NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:47:16Z", "updated_at": "2021-02-20T20:47:16Z", "author_association": "OWNER", "body": "(I started a thread on Twitter about this: https://twitter.com/simonw/status/1363220355318358016)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782746755", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782746755, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0Njc1NQ==", "user": {"value": 30665, "label": "frankieroberto"}, "created_at": "2021-02-20T20:44:05Z", "updated_at": "2021-02-20T20:44:05Z", "author_association": "NONE", "body": "Minor suggestion: rename `size` 
query param to `limit`, to better reflect that it\u2019s a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword?\r\n\r\nI like the idea of specifying a limit of 0 if you don\u2019t want any rows data - and returning an empty array under the `rows` key seems fine.\r\n\r\nHave you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don\u2019t parse the JSON. Could be default (can\u2019t be much bigger with gzip?) or opt-in.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782746633", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782746633, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0NjYzMw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:43:07Z", "updated_at": "2021-02-20T20:43:07Z", "author_association": "OWNER", "body": "Another option: `.json` always returns an object with a list of keys that gets increased through adding `?_extra=` parameters.\r\n\r\n`.jsona` always returns a JSON array of objects\r\n\r\nI had something similar to this in Datasette a few years ago - a `.jsono` extension, which still redirects to the `shape=array` version.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782745199", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", 
"id": 782745199, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0NTE5OQ==", "user": {"value": 30665, "label": "frankieroberto"}, "created_at": "2021-02-20T20:32:03Z", "updated_at": "2021-02-20T20:32:03Z", "author_association": "NONE", "body": "I think it\u2019s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default. Mainly because it allows you to add extra keys in a backwards-compatible way. Also just seems more expected somehow.\r\n\r\nThe API design guidance for the UK government also recommends this: https://www.gov.uk/guidance/gds-api-technical-and-data-standards#use-json\r\n\r\nI also strongly dislike having versioned APIs (eg with a `/v1/` path prefix), as it invariably means that old versions stop working at some point, even though the bit of the API you\u2019re using might not have changed at all.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 1}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782742233", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782742233, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0MjIzMw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:09:16Z", "updated_at": "2021-02-20T20:09:16Z", "author_association": "OWNER", "body": "I just noticed that https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=total&_size=0&_trace=1 executes 35 SQL queries at the moment! 
A great reminder that a big improvement from this change will be a reduction in queries through not calculating things like suggested facets unless they are explicitly requested.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782741719", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782741719, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0MTcxOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:05:04Z", "updated_at": "2021-02-20T20:05:04Z", "author_association": "OWNER", "body": "> The only advantage of headers is that you don\u2019t need to do .rows, but that\u2019s actually good as a data validation step anyway\u2014if .rows is missing assume there\u2019s an error and do your error handling path instead of parsing the rest.\r\n\r\nThis is something I've not thought very hard about. If there's an error, I need to return a top-level object, not a top-level array, so I can provide details of the error.\r\n\r\nBut this means that client code will have to handle this difference - it will have to know that the returned data can be array-shaped if nothing went wrong, and object-shaped if there's an error.\r\n\r\nThe HTTP status code helps here - calling client code can know that a 200 status code means there will be an array, but an error status code means an object.\r\n\r\nIf developers really hate that the shape could be different, they can always use `?_extra=next` to ensure that the top level item is an object whether or not an error occurred. 
So I think this is OK.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782741107", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782741107, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0MTEwNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T20:00:22Z", "updated_at": "2021-02-20T20:00:22Z", "author_association": "OWNER", "body": "A really exciting opportunity this opens up is for parallel execution - the `facets()` and `suggested_facets()` and `total()` async functions could be called in parallel, which could speed things up if I'm confident the SQLite thread pool can execute on multiple CPU cores (it should be able to because the Python `sqlite3` module releases the GIL while it's executing C code).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782740985", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782740985, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0MDk4NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T19:59:21Z", "updated_at": "2021-02-20T19:59:21Z", "author_association": "OWNER", "body": "This design should be influenced by how it's implemented.\r\n\r\nOne implementation that could be nice is that each of the keys that can be requested - `next_url`, `total` etc - maps to an `async def` function which can do the work. 
So that expensive `count(*)` will only be executed by the `async def total` function if it is requested.\r\n\r\nThis raises more questions: Both `next` and `next_url` work off the same underlying data, so if they are both requested can we re-use the work that `next` does somehow? Maybe by letting these functions depend on each other (so `next_url()` knows to first call `next()`, but only if it hasn't been called already).\r\n\r\nI think I need to flesh out the full default collection of `?_extra=` parameters in order to design how they will work under the hood.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782740604", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782740604, "node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0MDYwNA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T19:56:21Z", "updated_at": "2021-02-20T19:56:33Z", "author_association": "OWNER", "body": "I think I want to support `?_extra=next_url,total` in addition to `?_extra=next_url&_extra=total` - partly because it's fewer characters to type, and also because I know there exist URL handling libraries that don't know how to handle the same parameter multiple times (though they're going to break against Datasette already, so it's not a big deal).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782740488", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782740488, 
"node_id": "MDEyOklzc3VlQ29tbWVudDc4Mjc0MDQ4OA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T19:55:23Z", "updated_at": "2021-02-20T19:55:23Z", "author_association": "OWNER", "body": "Am I saying you won't get back a key in the response unless you explicitly request it, either by name or by specifying a bundle of extras (e.g. `all` or `paginated`)?\r\n\r\nThe `\"truncated\": true` key that tells you that your arbitrary query returned more than X results but was truncated is pretty important, do I really want people to have to opt-in to that one?\r\n\r\nAlso: having bundles like `all` or `paginated` live in the same namespace as single keys like `next_url` or `total` is a little odd - you can't tell by looking at them if they'll add a key called `all` or if they'll add a bunch of other stuff.\r\n\r\nMaybe bundles could be prefixed with something, perhaps an underscore? `?_extra=_all` and `?_extra=_paginated` for example.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782739926", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782739926, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjczOTkyNg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T19:51:30Z", "updated_at": "2021-02-20T19:52:19Z", "author_association": "OWNER", "body": "Demos:\r\n\r\n- https://latest-with-plugins.datasette.io/github/commits.json-preview\r\n- https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=next_url\r\n- https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=total\r\n- https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=next_url&_extra=total\r\n- 
https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=total&_size=0", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782709425", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782709425, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjcwOTQyNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T16:24:54Z", "updated_at": "2021-02-20T16:24:54Z", "author_association": "OWNER", "body": "Having shortcuts means I could support `?_extra=all` for returning ALL possible keys.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782709270", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782709270, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjcwOTI3MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T16:23:51Z", "updated_at": "2021-02-20T16:24:11Z", "author_association": "OWNER", "body": "Also how would you opt out of returning the `\"rows\"` key? I sometimes want to do this - if I want to get back just the count or just the facets for example.\r\n\r\nSome options:\r\n\r\n* `/fixtures/roadside_attractions.json?_extra=total&_extra=-rows`\r\n* `/fixtures/roadside_attractions.json?_extra=total&_skip=rows`\r\n* `/fixtures/roadside_attractions.json?_extra=total&_size=0`\r\n\r\nI quite like that last one with `?_size=0`. 
I think it would still return `\"rows\": []` but that's OK.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/782#issuecomment-782708938", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/782", "id": 782708938, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjcwODkzOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-20T16:22:14Z", "updated_at": "2021-02-20T16:22:14Z", "author_association": "OWNER", "body": "I'm leaning back in the direction of a flat JSON array of objects as the default - this:\r\n\r\n`/fixtures/roadside_attractions.json`\r\n\r\nWould return:\r\n\r\n```json\r\n[\r\n {\r\n \"pk\": 1,\r\n \"name\": \"The Mystery Spot\",\r\n \"address\": \"465 Mystery Spot Road, Santa Cruz, CA 95065\",\r\n \"latitude\": 37.0167,\r\n \"longitude\": -122.0024\r\n },\r\n {\r\n \"pk\": 2,\r\n \"name\": \"Winchester Mystery House\",\r\n \"address\": \"525 South Winchester Boulevard, San Jose, CA 95128\",\r\n \"latitude\": 37.3184,\r\n \"longitude\": -121.9511\r\n },\r\n {\r\n \"pk\": 3,\r\n \"name\": \"Burlingame Museum of PEZ Memorabilia\",\r\n \"address\": \"214 California Drive, Burlingame, CA 94010\",\r\n \"latitude\": 37.5793,\r\n \"longitude\": -122.3442\r\n },\r\n {\r\n \"pk\": 4,\r\n \"name\": \"Bigfoot Discovery Museum\",\r\n \"address\": \"5497 Highway 9, Felton, CA 95018\",\r\n \"latitude\": 37.0414,\r\n \"longitude\": -122.0725\r\n }\r\n]\r\n```\r\nTo get the version that includes pagination information you would use the `?_extra=` parameter. 
For example:\r\n\r\n`/fixtures/roadside_attractions.json?_extra=total&_extra=next_url`\r\n\r\n```json\r\n{\r\n \"rows\": [\r\n {\r\n \"pk\": 1,\r\n \"name\": \"The Mystery Spot\",\r\n \"address\": \"465 Mystery Spot Road, Santa Cruz, CA 95065\",\r\n \"latitude\": 37.0167,\r\n \"longitude\": -122.0024\r\n },\r\n {\r\n \"pk\": 2,\r\n \"name\": \"Winchester Mystery House\",\r\n \"address\": \"525 South Winchester Boulevard, San Jose, CA 95128\",\r\n \"latitude\": 37.3184,\r\n \"longitude\": -121.9511\r\n },\r\n {\r\n \"pk\": 3,\r\n \"name\": \"Burlingame Museum of PEZ Memorabilia\",\r\n \"address\": \"214 California Drive, Burlingame, CA 94010\",\r\n \"latitude\": 37.5793,\r\n \"longitude\": -122.3442\r\n },\r\n {\r\n \"pk\": 4,\r\n \"name\": \"Bigfoot Discovery Museum\",\r\n \"address\": \"5497 Highway 9, Felton, CA 95018\",\r\n \"latitude\": 37.0414,\r\n \"longitude\": -122.0725\r\n }\r\n ],\r\n \"total\": 4,\r\n \"next_url\": null\r\n}\r\n```\r\nANY usage of the `?_extra=` parameter would turn the list into an object with a `\"rows\"` key.\r\n\r\nOpting in to the `total` is nice because it's actually expensive to run a count, so only doing a count if the user requests it feels good.\r\n\r\nBut... having to add `?_extra=total&_extra=next_url` for the common case of wanting both the total count and the URL to get the next page of results is a bit verbose. 
So maybe support aliases, like `?_extra=paginated` which is a shortcut for `?_extra=total&_extra=next_url`?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 627794879, "label": "Redesign default .json format"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-782464306", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 782464306, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQ2NDMwNg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T23:57:32Z", "updated_at": "2021-02-19T23:57:32Z", "author_association": "OWNER", "body": "Need to test this on mobile.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-782464215", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 782464215, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQ2NDIxNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T23:57:13Z", "updated_at": "2021-02-19T23:57:13Z", "author_association": "OWNER", "body": "Now live on https://latest.datasette.io/_memory", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-782462049", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 782462049, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQ2MjA0OQ==", 
"user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T23:51:12Z", "updated_at": "2021-02-19T23:51:12Z", "author_association": "OWNER", "body": "![resize-demo](https://user-images.githubusercontent.com/9599/108573758-4914eb00-72ca-11eb-989c-e642eee68021.gif)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-782459550", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 782459550, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQ1OTU1MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T23:45:30Z", "updated_at": "2021-02-19T23:45:30Z", "author_association": "OWNER", "body": "Encoded using https://meyerweb.com/eric/tools/dencoder/\r\n\r\n`%3Csvg%20aria-labelledby%3D%22cm-drag-to-resize%22%20role%3D%22img%22%20fill%3D%22%23ccc%22%20stroke%3D%22%23ccc%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20viewBox%3D%220%200%2016%2016%22%20width%3D%2216%22%20height%3D%2216%22%3E%0A%20%20%3Ctitle%20id%3D%22cm-drag-to-resize%22%3EDrag%20to%20resize%3C%2Ftitle%3E%0A%20%20%3Cpath%20fill-rule%3D%22evenodd%22%20d%3D%22M1%202.75A.75.75%200%20011.75%202h12.5a.75.75%200%20110%201.5H1.75A.75.75%200%20011%202.75zm0%205A.75.75%200%20011.75%207h12.5a.75.75%200%20110%201.5H1.75A.75.75%200%20011%207.75zM1.75%2012a.75.75%200%20100%201.5h12.5a.75.75%200%20100-1.5H1.75z%22%3E%3C%2Fpath%3E%0A%3C%2Fsvg%3E`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/1236#issuecomment-782459405", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 782459405, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQ1OTQwNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T23:45:02Z", "updated_at": "2021-02-19T23:45:02Z", "author_association": "OWNER", "body": "I'm going to use a variant of the Datasette menu icon. Here it is in `#ccc` with an ARIA label:\r\n\r\n```svg\r\n<svg aria-labelledby=\"cm-drag-to-resize\" role=\"img\" fill=\"#ccc\" stroke=\"#ccc\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 16 16\" width=\"16\" height=\"16\">\r\n  <title id=\"cm-drag-to-resize\">Drag to resize</title>\r\n  <path fill-rule=\"evenodd\" d=\"M1 2.75A.75.75 0 011.75 2h12.5a.75.75 0 110 1.5H1.75A.75.75 0 011 2.75zm0 5A.75.75 0 011.75 7h12.5a.75.75 0 110 1.5H1.75A.75.75 0 011 7.75zM1.75 12a.75.75 0 100 1.5h12.5a.75.75 0 100-1.5H1.75z\"></path>\r\n</svg>\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-782458983", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 782458983, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQ1ODk4Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T23:43:34Z", "updated_at": "2021-02-19T23:43:34Z", "author_association": "OWNER", "body": "I only want it to resize up and down, not left to right - so I'm not keen on the default resize handle:\r\n\r\n\"cm-resize_demo\"\r\n\r\nhttps://rawgit.com/Sphinxxxx/cm-resize/master/demo/index.html", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-782458744", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 782458744, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQ1ODc0NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T23:42:42Z", "updated_at": 
"2021-02-19T23:42:42Z", "author_association": "OWNER", "body": "I can use https://github.com/Sphinxxxx/cm-resize for this", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1212#issuecomment-782430028", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1212", "id": 782430028, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQzMDAyOA==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-19T22:54:13Z", "updated_at": "2021-02-19T22:54:13Z", "author_association": "CONTRIBUTOR", "body": "I will close this issue since it appears only in my particular setup.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797651831, "label": "Tests are very slow. 
"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/619#issuecomment-782246111", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/619", "id": 782246111, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjI0NjExMQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T18:11:22Z", "updated_at": "2021-02-19T18:11:22Z", "author_association": "OWNER", "body": "Big usability improvement, see also #1236", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 520655983, "label": "\"Invalid SQL\" page should let you edit the SQL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1229#issuecomment-782053455", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1229", "id": 782053455, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjA1MzQ1NQ==", "user": {"value": 295329, "label": "camallen"}, "created_at": "2021-02-19T12:47:19Z", "updated_at": "2021-02-19T12:47:19Z", "author_association": "CONTRIBUTOR", "body": "I believe this pr and #1031 are related and fix the same issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 810507413, "label": "ensure immutable databses when starting in configuration directory mode with"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/236#issuecomment-781825726", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/236", "id": 781825726, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTgyNTcyNg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T05:10:41Z", "updated_at": "2021-02-19T05:10:41Z", "author_association": "OWNER", "body": "Documentation: https://sqlite-utils.datasette.io/en/latest/cli.html#attaching-additional-databases", 
"reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811680502, "label": "--attach command line option for attaching extra databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/113#issuecomment-781825187", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/113", "id": 781825187, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTgyNTE4Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T05:09:12Z", "updated_at": "2021-02-19T05:09:12Z", "author_association": "OWNER", "body": "Documentation: https://sqlite-utils.datasette.io/en/latest/python-api.html#attaching-additional-databases", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 621286870, "label": "Syntactic sugar for ATTACH DATABASE"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-781764561", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781764561, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTc2NDU2MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T02:10:21Z", "updated_at": "2021-02-19T02:10:21Z", "author_association": "OWNER", "body": "This feature is now released! 
https://docs.datasette.io/en/stable/changelog.html#v0-55", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 1, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1235#issuecomment-781736855", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1235", "id": 781736855, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTczNjg1NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T00:52:47Z", "updated_at": "2021-02-19T01:47:53Z", "author_association": "OWNER", "body": "I bumped the two lines in the `Dockerfile` to `FROM python:3.7.10-slim-stretch as build` and ran this to build it:\r\n\r\n docker build -f Dockerfile -t datasetteproject/datasette:python-3-7-10 .\r\n\r\nThen I ran it with:\r\n\r\n docker run -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette:python-3-7-10 datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db\r\n\r\nhttp://0.0.0.0:8001/-/versions confirmed that it was now running Python 3.7.10", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811589344, "label": "Upgrade Python version used by official Datasette Docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1235#issuecomment-781735887", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1235", "id": 781735887, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTczNTg4Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-19T00:50:21Z", "updated_at": "2021-02-19T00:50:55Z", "author_association": "OWNER", "body": "I'll bump to `3.7.10` for the moment - the fix for 3.8 isn't out until March 1st according to 
https://news.ycombinator.com/item?id=26186434\r\n\r\nhttps://www.python.org/downloads/release/python-3710/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811589344, "label": "Upgrade Python version used by official Datasette Docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-781670827", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781670827, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTY3MDgyNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T22:16:46Z", "updated_at": "2021-02-18T22:16:46Z", "author_association": "OWNER", "body": "Demo is now live here: https://latest.datasette.io/_memory\r\n\r\nThe documentation is at https://docs.datasette.io/en/latest/sql_queries.html#cross-database-queries - it links to this example query: https://latest.datasette.io/_memory?sql=select%0D%0A++%27fixtures%27+as+database%2C+*%0D%0Afrom%0D%0A++%5Bfixtures%5D.sqlite_master%0D%0Aunion%0D%0Aselect%0D%0A++%27extra_database%27+as+database%2C+*%0D%0Afrom%0D%0A++%5Bextra_database%5D.sqlite_master", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781599929", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781599929, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU5OTkyOQ==", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2021-02-18T19:59:54Z", "updated_at": "2021-02-18T22:06:42Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/1232?src=pr&el=h1) Report\n> Merging 
[#1232](https://codecov.io/gh/simonw/datasette/pull/1232?src=pr&el=desc) (8876499) into [main](https://codecov.io/gh/simonw/datasette/commit/4df548e7668b5b21d64a267964951e67894f4712?el=desc) (4df548e) will **increase** coverage by `0.03%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/1232/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1)](https://codecov.io/gh/simonw/datasette/pull/1232?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## main #1232 +/- ##\n==========================================\n+ Coverage 91.42% 91.46% +0.03% \n==========================================\n Files 32 32 \n Lines 3955 3970 +15 \n==========================================\n+ Hits 3616 3631 +15 \n Misses 339 339 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/1232?src=pr&el=tree) | Coverage \u0394 | |\n|---|---|---|\n| [datasette/app.py](https://codecov.io/gh/simonw/datasette/pull/1232/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2FwcC5weQ==) | `95.68% <100.00%> (+0.06%)` | :arrow_up: |\n| [datasette/cli.py](https://codecov.io/gh/simonw/datasette/pull/1232/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2NsaS5weQ==) | `76.62% <100.00%> (+0.36%)` | :arrow_up: |\n| [datasette/views/database.py](https://codecov.io/gh/simonw/datasette/pull/1232/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3ZpZXdzL2RhdGFiYXNlLnB5) | `97.19% <100.00%> (+0.01%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/1232?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/1232?src=pr&el=footer). Last update [4df548e...8876499](https://codecov.io/gh/simonw/datasette/pull/1232?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-781665560", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781665560, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTY2NTU2MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T22:06:14Z", "updated_at": "2021-02-18T22:06:14Z", "author_association": "OWNER", "body": "The implementation in #1232 is ready to land. It's the simplest-thing-that-could-possibly-work: you can run `datasette one.db two.db three.db --crossdb` and then use the `/_memory` page to run joins across tables from multiple databases.\r\n\r\nIt only works on the first 10 databases that were passed to the command-line. This means that if you have a Datasette instance with hundreds of attached databases (see [Datasette Library](https://github.com/simonw/datasette/issues/417)) this won't be particularly useful for you.\r\n\r\nSo... 
a better, future version of this feature would be one that lets you join across databases on command - maybe by hitting `/_memory?attach=db1&attach=db2` to get a special connection.\r\n\r\nAlso worth noting: plugins that implement the [prepare_connection()](https://docs.datasette.io/en/stable/plugin_hooks.html#prepare-connection-conn-database-datasette) hook can attach additional databases - so if you need better, customized support for this, one way to handle that would be with a custom plugin.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781651283", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781651283, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTY1MTI4Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T21:37:55Z", "updated_at": "2021-02-18T21:37:55Z", "author_association": "OWNER", "body": "UI listing the attached tables:\r\n\r\n\"_memory\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781641728", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781641728, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTY0MTcyOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T21:19:34Z", "updated_at": "2021-02-18T21:19:34Z", "author_association": "OWNER", "body": "I tested the demo deployment like this:\r\n```\r\ndatasette publish cloudrun fixtures.db extra_database.db \\\r\n -m fixtures.json \\\r\n 
--plugins-dir=plugins \\\r\n --branch=crossdb \\\r\n --extra-options=\"--setting template_debug 1 --crossdb\" \\\r\n --install=pysqlite3-binary \\\r\n --service=datasette-latest-crossdb\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781637292", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781637292, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTYzNzI5Mg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T21:11:31Z", "updated_at": "2021-02-18T21:11:31Z", "author_association": "OWNER", "body": "Due to bug #1233 I'm going to publish the additional database as `extra_database.db` rather than `extra database.db` as it is used in the tests.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1233#issuecomment-781636590", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1233", "id": 781636590, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTYzNjU5MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T21:10:08Z", "updated_at": "2021-02-18T21:10:08Z", "author_association": "OWNER", "body": "I think the bug is here: https://github.com/simonw/datasette/blob/640ac7071b73111ba4423812cd683756e0e1936b/datasette/utils/__init__.py#L349-L373", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811458446, "label": 
"\"datasette publish cloudrun\" cannot publish files with spaces in their name"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781634819", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781634819, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTYzNDgxOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T21:06:43Z", "updated_at": "2021-02-18T21:06:43Z", "author_association": "OWNER", "body": "I'll document this option on https://docs.datasette.io/en/stable/sql_queries.html", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781629841", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781629841, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTYyOTg0MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T20:57:23Z", "updated_at": "2021-02-18T20:57:23Z", "author_association": "OWNER", "body": "The new warning looks like this:\r\n\r\n\"datasette_\u2014_pipenv_shell_\u25b8_Python_\u2014_182\u00d766\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781598585", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781598585, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU5ODU4NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T19:57:30Z", "updated_at": "2021-02-18T19:57:30Z", "author_association": 
"OWNER", "body": "It would also be neat if https://latest.datasette.io/ had multiple databases attached in order to demonstrate this feature.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1232#issuecomment-781594632", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1232", "id": 781594632, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU5NDYzMg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T19:50:21Z", "updated_at": "2021-02-18T19:50:21Z", "author_association": "OWNER", "body": "It would be neat if the `/_memory` page showed a list of attached databases, to indicate that the `--crossdb` option is working and give people links to click to start running queries.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811407131, "label": "--crossdb option for joining across databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-781593169", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781593169, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU5MzE2OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T19:47:34Z", "updated_at": "2021-02-18T19:47:34Z", "author_association": "OWNER", "body": "I have a working version now, moving development to a pull request.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/283#issuecomment-781591015", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781591015, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU5MTAxNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T19:44:02Z", "updated_at": "2021-02-18T19:44:02Z", "author_association": "OWNER", "body": "For the moment I'm going to hard-code a `SQLITE_LIMIT_ATTACHED=10` constant and only attach the first 10 databases to the `_memory` connection.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-781574786", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781574786, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU3NDc4Ng==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T19:15:37Z", "updated_at": "2021-02-18T19:15:37Z", "author_association": "OWNER", "body": "`select * from pragma_database_list();` is useful - shows all attached databases for the current connection.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-781573676", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781573676, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU3MzY3Ng==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T19:13:30Z", "updated_at": "2021-02-18T19:13:30Z", "author_association": "OWNER", "body": "It turns out SQLite defaults to a maximum of 10 attached databases. 
This can be increased using a compile-time constant, but even with that it cannot be more than 62: https://stackoverflow.com/questions/9845448/attach-limit-10", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1231#issuecomment-781560989", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1231", "id": 781560989, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU2MDk4OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T18:50:53Z", "updated_at": "2021-02-18T18:50:53Z", "author_association": "OWNER", "body": "Ideally I'd figure out a way to replicate this error in a concurrent unit test.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811367257, "label": "Race condition errors in new refresh_schemas() mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1231#issuecomment-781560865", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1231", "id": 781560865, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU2MDg2NQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T18:50:38Z", "updated_at": "2021-02-18T18:50:38Z", "author_association": "OWNER", "body": "I started trying to use locks to resolve this but I've not figured out the right way to do that yet - here's my first experiment:\r\n```diff\r\ndiff --git a/datasette/app.py b/datasette/app.py\r\nindex 9e15a16..1681c9d 100644\r\n--- a/datasette/app.py\r\n+++ b/datasette/app.py\r\n@@ -217,6 +217,7 @@ class Datasette:\r\n         self.inspect_data = inspect_data\r\n         self.immutables = set(immutables or [])\r\n         self.databases = collections.OrderedDict()\r\n+        self._refresh_schemas_lock = threading.Lock()\r\n         if memory or not self.files:\r\n             self.add_database(Database(self, is_memory=True), name=\"_memory\")\r\n             # memory_name is a random string so that each Datasette instance gets its own\r\n@@ -324,6 +325,13 @@ class Datasette:\r\n         self.client = DatasetteClient(self)\r\n \r\n     async def refresh_schemas(self):\r\n+        return\r\n+        if self._refresh_schemas_lock.locked():\r\n+            return\r\n+        with self._refresh_schemas_lock:\r\n+            await self._refresh_schemas()\r\n+\r\n+    async def _refresh_schemas(self):\r\n         internal_db = self.databases[\"_internal\"]\r\n         if not self.internal_db_created:\r\n             await init_internal_db(internal_db)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811367257, "label": "Race condition errors in new refresh_schemas() mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1226#issuecomment-781546512", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1226", "id": 781546512, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTU0NjUxMg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T18:26:19Z", "updated_at": "2021-02-18T18:26:19Z", "author_association": "OWNER", "body": "This broke CI: https://github.com/simonw/datasette/runs/1929355965?check_suite_focus=true", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 808843401, "label": "--port option should validate port is between 0 and 65535"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1226#issuecomment-781530157", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1226", "id": 781530157, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTUzMDE1Nw==", "user": {"value": 
9599, "label": "simonw"}, "created_at": "2021-02-18T18:00:15Z", "updated_at": "2021-02-18T18:00:15Z", "author_association": "OWNER", "body": "I can use `click.IntRange(min=None, max=None)` for this. https://click.palletsprojects.com/en/7.x/options/#ranges - inclusive on both edges.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 808843401, "label": "--port option should validate port is between 0 and 65535"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-781451701", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4", "id": 781451701, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTQ1MTcwMQ==", "user": {"value": 203343, "label": "Btibert3"}, "created_at": "2021-02-18T16:06:21Z", "updated_at": "2021-02-18T16:06:21Z", "author_association": "NONE", "body": "Awesome!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 778380836, "label": "Feature Request: Gmail"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1230#issuecomment-781330466", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1230", "id": 781330466, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTMzMDQ2Ng==", "user": {"value": 7107523, "label": "Kabouik"}, "created_at": "2021-02-18T13:06:22Z", "updated_at": "2021-02-18T15:22:15Z", "author_association": "NONE", "body": "[Edit] Oh, I just saw the \"Load all\" button under the cluster map as well as the [setting to alter the max number of results](https://docs.datasette.io/en/stable/settings.html#max-returned-rows). So I guess this issue is only about the Vega charts.\r\n\r\n
\r\nNote that datasette-cluster-map also seems to be limited to 998 displayed points: \r\n\r\n![ss-2021-02-18_140548](https://user-images.githubusercontent.com/7107523/108361225-15fb2a80-71ea-11eb-9a19-d885e8513f55.png)\r\n
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 811054000, "label": "Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-781077127", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 781077127, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MTA3NzEyNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-18T05:56:30Z", "updated_at": "2021-02-18T05:57:34Z", "author_association": "OWNER", "body": "I'm going to try prototyping the `--crossdb` option that causes `/_memory` to connect to all databases as a starting point and see how well that works.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 1, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-780991910", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 780991910, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MDk5MTkxMA==", "user": {"value": 9308268, "label": "rayvoelker"}, "created_at": "2021-02-18T02:13:56Z", "updated_at": "2021-02-18T02:13:56Z", "author_association": "NONE", "body": "I was going to ask you about this issue when we talk during your office-hours schedule this Friday, but was there any support ever added for doing this cross-database joining?\r\n\r\nI have a use-case where it could be pretty neat to do analysis using this tool on time-specific databases from 
snapshots\r\n\r\nhttps://ilsweb.cincinnatilibrary.org/collection-analysis/\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/108294883-ba3a8e00-7164-11eb-9206-fcd5a8cdd883.png)\r\n\r\nand thanks again for such an amazing tool!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-780817596", "issue_url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4", "id": 780817596, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MDgxNzU5Ng==", "user": {"value": 306240, "label": "UtahDave"}, "created_at": "2021-02-17T20:01:35Z", "updated_at": "2021-02-17T20:01:35Z", "author_association": "NONE", "body": "I've got this almost working. Just needs some polish", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 778380836, "label": "Feature Request: Gmail"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/227#issuecomment-779785638", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/227", "id": 779785638, "node_id": "MDEyOklzc3VlQ29tbWVudDc3OTc4NTYzOA==", "user": {"value": 295329, "label": "camallen"}, "created_at": "2021-02-16T11:48:03Z", "updated_at": "2021-02-16T11:48:03Z", "author_association": "NONE", "body": "Thank you @simonw ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 807174161, "label": "Error reading csv files with large column data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1226#issuecomment-779467451", 
"issue_url": "https://api.github.com/repos/simonw/datasette/issues/1226", "id": 779467451, "node_id": "MDEyOklzc3VlQ29tbWVudDc3OTQ2NzQ1MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-15T22:02:46Z", "updated_at": "2021-02-15T22:02:46Z", "author_association": "OWNER", "body": "I'm OK with the current error message shown if you try to use too low a port:\r\n```\r\ndatasette fivethirtyeight.db -p 800 \r\nINFO: Started server process [45511]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nERROR: [Errno 13] error while attempting to bind on address ('127.0.0.1', 800): permission denied\r\nINFO: Waiting for application shutdown.\r\nINFO: Application shutdown complete.\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 808843401, "label": "--port option should validate port is between 0 and 65535"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1226#issuecomment-779467160", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1226", "id": 779467160, "node_id": "MDEyOklzc3VlQ29tbWVudDc3OTQ2NzE2MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-15T22:01:53Z", "updated_at": "2021-02-15T22:01:53Z", "author_association": "OWNER", "body": "This check needs to happen in two places:\r\n\r\nhttps://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/cli.py#L222-L227\r\n\r\nhttps://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/cli.py#L328-L333", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 808843401, "label": "--port option should validate port is between 0 and 65535"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779416619", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/147", "id": 779416619, "node_id": "MDEyOklzc3VlQ29tbWVudDc3OTQxNjYxOQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-15T19:40:57Z", "updated_at": "2021-02-15T21:27:55Z", "author_association": "OWNER", "body": "Tried this experiment (not proper binary search, it only searches downwards):\r\n```python\r\nimport sqlite3\r\n\r\ndb = sqlite3.connect(\":memory:\")\r\n\r\ndef tryit(n):\r\n sql = \"select 1 where 1 in ({})\".format(\", \".join(\"?\" for i in range(n)))\r\n db.execute(sql, [0 for i in range(n)])\r\n\r\n\r\ndef find_limit(min=0, max=5_000_000):\r\n value = max\r\n while True:\r\n print('Trying', value)\r\n try:\r\n tryit(value)\r\n return value\r\n except:\r\n value = value // 2\r\n```\r\nRunning `find_limit()` with those default parameters takes about 1.47s on my laptop:\r\n```\r\nIn [9]: %timeit find_limit()\r\nTrying 5000000\r\nTrying 2500000...\r\n1.47 s \u00b1 28 ms per loop (mean \u00b1 std. dev. 
of 7 runs, 1 loop each)\r\n```\r\nInterestingly the value it suggested was 156250 - suggesting that the macOS `sqlite3` binary with a 500,000 limit isn't the same as whatever my Python is using here.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688670158, "label": "SQLITE_MAX_VARS maybe hard-coded too low"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779448912", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/147", "id": 779448912, "node_id": "MDEyOklzc3VlQ29tbWVudDc3OTQ0ODkxMg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-15T21:09:50Z", "updated_at": "2021-02-15T21:09:50Z", "author_association": "OWNER", "body": "I fiddled around and replaced that line with `batch_size = SQLITE_MAX_VARS // num_columns` - which evaluated to `10416` for this particular file. That got me this:\r\n\r\n 40.71s user 1.81s system 98% cpu 43.081 total\r\n\r\n43s is definitely better than 56s, but it's still not as big as the ~26.5s to ~3.5s improvement described by @simonwiles at the top of this issue. I wonder what I'm missing here.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688670158, "label": "SQLITE_MAX_VARS maybe hard-coded too low"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779446652", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/147", "id": 779446652, "node_id": "MDEyOklzc3VlQ29tbWVudDc3OTQ0NjY1Mg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-15T21:04:19Z", "updated_at": "2021-02-15T21:04:19Z", "author_association": "OWNER", "body": "... 
but it looks like `batch_size` is hard-coded to 100, rather than `None` - which means it's not being calculated using that value:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1f49f32814a942fa076cfe5f504d1621188097ed/sqlite_utils/db.py#L704\r\n\r\nAnd\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1f49f32814a942fa076cfe5f504d1621188097ed/sqlite_utils/db.py#L1877", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688670158, "label": "SQLITE_MAX_VARS maybe hard-coded too low"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779445423", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/147", "id": 779445423, "node_id": "MDEyOklzc3VlQ29tbWVudDc3OTQ0NTQyMw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-02-15T21:00:44Z", "updated_at": "2021-02-15T21:01:09Z", "author_association": "OWNER", "body": "I tried changing the hard-coded value from 999 to 156_250 and running `sqlite-utils insert` against a 500MB CSV file, with these results:\r\n```\r\n(sqlite-utils) sqlite-utils % time sqlite-utils insert slow-ethos.db ethos ../ethos-datasette/ethos.csv --no-headers\r\n [###################################-] 99% 00:00:00sqlite-utils insert slow-ethos.db ethos ../ethos-datasette/ethos.csv\r\n44.74s user 7.61s system 92% cpu 56.601 total\r\n# Increased the setting here\r\n(sqlite-utils) sqlite-utils % time sqlite-utils insert fast-ethos.db ethos ../ethos-datasette/ethos.csv --no-headers\r\n [###################################-] 99% 00:00:00sqlite-utils insert fast-ethos.db ethos ../ethos-datasette/ethos.csv\r\n39.40s user 5.15s system 96% cpu 46.320 total\r\n```\r\nNot as big a difference as I was expecting.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", 
"issue": {"value": 688670158, "label": "SQLITE_MAX_VARS maybe hard-coded too low"}, "performed_via_github_app": null}