{"html_url": "https://github.com/simonw/datasette/issues/1153#issuecomment-804640440", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1153", "id": 804640440, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDY0MDQ0MA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T05:58:20Z", "updated_at": "2021-03-23T05:58:20Z", "author_association": "CONTRIBUTOR", "body": "Could there be a little widget that offers conversion from one to the other? ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771202454, "label": "Use YAML examples in documentation by default, not JSON"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1159#issuecomment-804639427", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1159", "id": 804639427, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDYzOTQyNw==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T05:56:02Z", "updated_at": "2021-03-23T05:56:02Z", "author_association": "CONTRIBUTOR", "body": "With just three facets, I like it, but it does take more horizontal space. Would be nice to have a switch somewhere, enabling either original compact option or this proposed more-readable option. Also some control over word wrap (width setting) and facet spacing. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 774332247, "label": "Improve the display of facets information"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/164#issuecomment-804541064", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/164", "id": 804541064, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDU0MTA2NA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:45:12Z", "updated_at": "2021-03-23T02:45:12Z", "author_association": "CONTRIBUTOR", "body": "\"datasette skeleton\" feature removed #476", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 280013907, "label": "datasette skeleton command for kick-starting database and table metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/163#issuecomment-804539729", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/163", "id": 804539729, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDUzOTcyOQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T02:41:14Z", "updated_at": "2021-03-23T02:41:14Z", "author_association": "CONTRIBUTOR", "body": "I'm visiting old issues for context while learning datasette. 
Let me know if okay to make the occasional comment like this one.\r\nquerystring argument now located at:\r\nhttps://docs.datasette.io/en/latest/settings.html#sql-time-limit-ms", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 279547886, "label": "Document the querystring argument for setting a different time limit"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/88#issuecomment-804471733", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/88", "id": 804471733, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDQ3MTczMw==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-22T23:46:36Z", "updated_at": "2021-03-22T23:46:36Z", "author_association": "CONTRIBUTOR", "body": "Google Map API limits seem to prevent https://nhs-england-map.netlify.com from being a working demo.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273775212, "label": "Add NHS England Hospitals example to wiki"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1149#issuecomment-804415619", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1149", "id": 804415619, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDQxNTYxOQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-22T21:43:16Z", "updated_at": "2021-03-22T21:43:16Z", "author_association": "CONTRIBUTOR", "body": "Sounds like a good idea.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 769520939, "label": "Make it easier to theme Datasette with CSS"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/942#issuecomment-803631102", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/942", "id": 803631102, "node_id": "MDEyOklzc3VlQ29tbWVudDgwMzYzMTEwMg==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-21T17:48:42Z", "updated_at": "2021-03-21T17:48:42Z", "author_association": "CONTRIBUTOR", "body": "I like this idea. 
Though it might be nice to have some kind of automated system from database to file, so that developers could easily track diffs.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 681334912, "label": "Support column descriptions in metadata.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1265#issuecomment-802923254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1265", "id": 802923254, "node_id": "MDEyOklzc3VlQ29tbWVudDgwMjkyMzI1NA==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-03-19T15:39:15Z", "updated_at": "2021-03-19T15:39:15Z", "author_association": "CONTRIBUTOR", "body": "It doesn't use basic auth, but you can put a whole datasette instance, or parts of this, behind a username/password prompt using https://github.com/simonw/datasette-auth-passwords", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 836123030, "label": "Support for HTTP Basic Authentication"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1262#issuecomment-802095132", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1262", "id": 802095132, "node_id": "MDEyOklzc3VlQ29tbWVudDgwMjA5NTEzMg==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-03-18T16:37:45Z", "updated_at": "2021-03-18T16:37:45Z", "author_association": "CONTRIBUTOR", "body": "This sounds like a good use case for a plugin, since this will only be useful for a subset of Datasette users. It shouldn't be too difficult to add a button to do this with the available plugin hooks - have you taken a look at https://docs.datasette.io/en/latest/writing_plugins.html?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 834602299, "label": "Plugin hook that could support 'order by random()' for table view"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-799003172", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 799003172, "node_id": "MDEyOklzc3VlQ29tbWVudDc5OTAwMzE3Mg==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-03-14T23:42:57Z", "updated_at": "2021-03-14T23:42:57Z", "author_association": "CONTRIBUTOR", "body": "Oh, and the container image can be up to 10GB, so the EFS step might not be needed except for pretty big stuff.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-799002993", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 799002993, "node_id": "MDEyOklzc3VlQ29tbWVudDc5OTAwMjk5Mw==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-03-14T23:41:51Z", "updated_at": "2021-03-14T23:41:51Z", "author_association": "CONTRIBUTOR", "body": "Now that [Lambda supports Docker](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/), this probably is a bit easier and may be able to 
build on top of the existing package command.\r\n\r\nThere are weirdnesses in how the command actually gets invoked; the [aws-lambda-python image](https://hub.docker.com/r/amazon/aws-lambda-python) shows a bit of that. So Datasette would probably need some sort of Lambda-specific entry point to make this work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1256#issuecomment-795112935", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1256", "id": 795112935, "node_id": "MDEyOklzc3VlQ29tbWVudDc5NTExMjkzNQ==", "user": {"value": 6371750, "label": "JBPressac"}, "created_at": "2021-03-10T08:59:45Z", "updated_at": "2021-03-10T08:59:45Z", "author_association": "CONTRIBUTOR", "body": "Sorry, I meant \"minor typo\" not \"minor type\".", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 827341657, "label": "Minor type in IP adress"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/766#issuecomment-791509910", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/766", "id": 791509910, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MTUwOTkxMA==", "user": {"value": 6371750, "label": "JBPressac"}, "created_at": "2021-03-05T15:57:35Z", "updated_at": "2021-03-05T16:35:21Z", "author_association": "CONTRIBUTOR", "body": "Hello, \r\nI have the same wildcards search problems with an instance of Datasette. http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwerz&_sort=rowid is OK but http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwe* is not (FTS is activated on \"Reference\" \"IntituleAnalyse\" \"NomDuProducteur\" \"PresentationDuContenu\" \"Notes\"). 
\r\n\r\nNotice that a SQL query as below launched directly from SQLite in the server's shell, retrieves results.\r\n\r\n`select * from fonds_auguste_dupouy_1872_1967_fts where IntituleAnalyse MATCH \"gwe*\";`\r\n\r\nThanks,", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 617323873, "label": "Enable wildcard-searches by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1238#issuecomment-789186458", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1238", "id": 789186458, "node_id": "MDEyOklzc3VlQ29tbWVudDc4OTE4NjQ1OA==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2021-03-02T20:19:30Z", "updated_at": "2021-03-02T20:19:30Z", "author_association": "CONTRIBUTOR", "body": "A custom `templates/index.html` seems to work and custom `pages` as a workaround with moving them to `pages/base_url_dir`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813899472, "label": "Custom pages don't work with base_url setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/242#issuecomment-787121933", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/242", "id": 787121933, "node_id": "MDEyOklzc3VlQ29tbWVudDc4NzEyMTkzMw==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-02-27T19:18:57Z", "updated_at": "2021-02-27T19:18:57Z", "author_association": "CONTRIBUTOR", "body": "I think HTTPX gets it exactly right, with a clear separation between sync and async clients, each with a basically identical API. (I'm about to switch [feed-to-sqlite](https://github.com/eyeseast/feed-to-sqlite) over to it, from Requests, to eventually make way for async support.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 817989436, "label": "Async support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1212#issuecomment-782430028", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1212", "id": 782430028, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQzMDAyOA==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-19T22:54:13Z", "updated_at": "2021-02-19T22:54:13Z", "author_association": "CONTRIBUTOR", "body": "I will close this issue since it appears only in my particular setup.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797651831, "label": "Tests are very slow. 
"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1229#issuecomment-782053455", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1229", "id": 782053455, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjA1MzQ1NQ==", "user": {"value": 295329, "label": "camallen"}, "created_at": "2021-02-19T12:47:19Z", "updated_at": "2021-02-19T12:47:19Z", "author_association": "CONTRIBUTOR", "body": "I believe this pr and #1031 are related and fix the same issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 810507413, "label": "ensure immutable databses when starting in configuration directory mode with"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-778439617", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 778439617, "node_id": "MDEyOklzc3VlQ29tbWVudDc3ODQzOTYxNw==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-02-12T20:33:27Z", "updated_at": "2021-02-12T20:33:27Z", "author_association": "CONTRIBUTOR", "body": "That Docker command will mount your current directory inside the Docker container at `/mnt` - so you shouldn't need to change anything locally, just run\r\n\r\n```\r\ndocker run -p 8001:8001 -v `pwd`:/mnt \\\r\n datasetteproject/datasette \\\r\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db\r\n```\r\n\r\nand it will use the `fixtures.db` file within your current directory", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60", "id": 770069864, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA==", "user": {"value": 22578954, "label": "daniel-butler"}, "created_at": "2021-01-29T21:52:05Z", "updated_at": "2021-02-12T18:29:43Z", "author_association": "CONTRIBUTOR", "body": "For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called `gh-organization`. The github's owner id of gh-orgnization is `123456789`.\r\n\r\n```bash\r\ngithub-to-sqlite repos github.db gh-organization\r\n```\r\n\r\nI'm on a windows computer running git bash to be able to use the `|` command. 
This works for me\r\n```bash\r\nsqlite3 github.db \"SELECT full_name FROM repos WHERE owner = '123456789';\" | tr '\\n\\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }\r\n```\r\n\r\nOn a pure linux system I think this would work because the new line character is normally `\\n`\r\n```bash\r\nsqlite3 github.db \"SELECT full_name FROM repos WHERE owner = '123456789';\" | tr '\\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }`\r\n```\r\n\r\nAs expected I ran into rate limit issues #51 \r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797097140, "label": "Use Data from SQLite in other commands"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778246347", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33", "id": 778246347, "node_id": "MDEyOklzc3VlQ29tbWVudDc3ODI0NjM0Nw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2021-02-12T15:00:43Z", "updated_at": "2021-02-12T15:00:43Z", "author_association": "CONTRIBUTOR", "body": "Yes, Big Sur Photos database doesn't have `ZGENERICASSET` table. PR #31 will fix this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 803338729, "label": "photo-to-sqlite: command not found"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-777927946", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 777927946, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NzkyNzk0Ng==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-02-12T02:29:54Z", "updated_at": "2021-02-12T02:29:54Z", "author_association": "CONTRIBUTOR", "body": "According to https://github.com/simonw/datasette/blob/master/docs/installation.rst#using-docker it should be\r\n\r\n```\r\ndocker run -p 8001:8001 -v `pwd`:/mnt \\\r\n datasetteproject/datasette \\\r\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db\r\n```\r\n\r\nThis uses `/mnt/fixtures.db` whereas you're using `fixtures.db` - did you try using this path instead?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1200#issuecomment-777132761", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1200", "id": 777132761, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NzEzMjc2MQ==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-02-11T00:29:52Z", "updated_at": "2021-02-11T00:29:52Z", "author_association": "CONTRIBUTOR", "body": "I'm probably missing something but what's the use case here - what would this offer over adding `limit 10` to the query?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 792890765, "label": "?_size=10 option for the arbitrary query page would be useful"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1208#issuecomment-774286962", "issue_url": 
"https://api.github.com/repos/simonw/datasette/issues/1208", "id": 774286962, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NDI4Njk2Mg==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-05T21:02:39Z", "updated_at": "2021-02-05T21:02:39Z", "author_association": "CONTRIBUTOR", "body": "@simonw could you please take a look at the PR 1211 that fixes this issue?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 794554881, "label": "A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1212#issuecomment-772007663", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1212", "id": 772007663, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MjAwNzY2Mw==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-02T21:36:56Z", "updated_at": "2021-02-02T21:36:56Z", "author_association": "CONTRIBUTOR", "body": "How do you get 4-5 minutes?\r\nI run my tests in WSL 2, so may be i need to try a real linux VM.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797651831, "label": "Tests are very slow. "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1211#issuecomment-771127458", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1211", "id": 771127458, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MTEyNzQ1OA==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-01T20:13:39Z", "updated_at": "2021-02-01T20:13:39Z", "author_association": "CONTRIBUTOR", "body": "Ping @simonw ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797649915, "label": "Use context manager instead of plain open"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-770150526", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51", "id": 770150526, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MDE1MDUyNg==", "user": {"value": 22578954, "label": "daniel-butler"}, "created_at": "2021-01-30T03:44:19Z", "updated_at": "2021-01-30T03:47:24Z", "author_association": "CONTRIBUTOR", "body": "I don't have much experience with github's rate limiting. In my day job we use the [tenacity library](https://github.com/jd/tenacity) to handle http errors we get.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 703246031, "label": "github-to-sqlite should handle rate limits better"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770112248", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60", "id": 770112248, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MDExMjI0OA==", "user": {"value": 22578954, "label": "daniel-butler"}, "created_at": "2021-01-30T00:01:03Z", "updated_at": "2021-01-30T01:14:42Z", "author_association": "CONTRIBUTOR", "body": "Yes that would be cool! I wouldn't mind helping. Is this the meat of it? 
https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/utils.py#L512\r\n\r\nIt looks like the cli option is added with this decorator : https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/cli.py#L14\r\n\r\nI looked a bit at utils.py in the GitHub repository. I was surprised at the amount of manual mapping of the API response you had to do to get this to work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797097140, "label": "Use Data from SQLite in other commands"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-760950128", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55", "id": 760950128, "node_id": "MDEyOklzc3VlQ29tbWVudDc2MDk1MDEyOA==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-15T13:44:52Z", "updated_at": "2021-01-15T13:44:52Z", "author_association": "CONTRIBUTOR", "body": "I found and fixed another bug, this one around importing the tweets table. @simonw let me know if you'd prefer this broken out into multiple PRs, happy to do that if it makes review/merging easier.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779211940, "label": "Fix archive imports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754729035", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54", "id": 754729035, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyOTAzNQ==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-05T16:03:29Z", "updated_at": "2021-01-05T16:03:29Z", "author_association": "CONTRIBUTOR", "body": "I was able to fix this, at least enough to get _my_ archive to import. Not sure if there's more work to be done here or not.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779088071, "label": "Archive import appears to be broken on recent exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-754728696", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55", "id": 754728696, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyODY5Ng==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-05T16:02:55Z", "updated_at": "2021-01-05T16:02:55Z", "author_association": "CONTRIBUTOR", "body": "This now works for me, though I'm entirely ensure if it's a just-my-export thing or a wider issue. Also, this doesn't contain any tests. 
So I'm not sure if there's more work to be done here, or if this is good enough.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779211940, "label": "Fix archive imports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54", "id": 754721153, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-05T15:51:09Z", "updated_at": "2021-01-05T15:51:09Z", "author_association": "CONTRIBUTOR", "body": "Correction: the failure is on `lists-member.js` (I was thrown by the `block` variable name, but that's just a coincidence)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779088071, "label": "Archive import appears to be broken on recent exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1167#issuecomment-754619930", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1167", "id": 754619930, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDYxOTkzMA==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-05T12:57:57Z", "updated_at": "2021-01-05T12:57:57Z", "author_association": "CONTRIBUTOR", "body": "Not sure where exactly to put the actual docs (presumably somewhere in [docs/contributing.rst](https://github.com/simonw/datasette/blob/main/docs/contributing.rst)) but I've made a slight change to make it easier to run locally (copying [the approach in excalidraw](https://github.com/excalidraw/excalidraw/blob/ade2565f497243a5e428f4906d8ed80c872fd981/package.json#L90-L94)): https://github.com/simonw/datasette/compare/main...benpickles:prettier-docs\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777145954, "label": "Add Prettier to contributing documentation"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647922203", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647922203, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyMjIwMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:44:58Z", "updated_at": "2021-01-05T08:22:43Z", "author_association": "CONTRIBUTOR", "body": "I'm seeing the problem on database page. Index page and table page runs quite fast.\r\n\r\n- Tables have <10 columns (`id`, `url`, `title`, `body_html`, `date`, `author`, `meta` (for keeping unstructured json)). I've added index on `date` columns (using `sqlite-utils`) in addition to the index present on `id` columns. 
\r\n- All tables have FTS enabled on `text` and `varchar` columns (`title`, `body_html` etc) to speed up searching.\r\n- There are couple of tables related with foreign keys (think a thread in a forum and posts in that thread, related with `thread_id`)\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1169#issuecomment-754007242", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1169", "id": 754007242, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDAwNzI0Mg==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-04T14:29:57Z", "updated_at": "2021-01-04T14:29:57Z", "author_association": "CONTRIBUTOR", "body": "I somewhat share your reluctance to add a package.json to seemingly every project out there but ultimately if they're project dependencies it's important they're managed within the codebase.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777677671, "label": "Prettier package not actually being cached"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1170#issuecomment-754004715", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1170", "id": 754004715, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDAwNDcxNQ==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-04T14:25:44Z", "updated_at": "2021-01-04T14:25:44Z", "author_association": "CONTRIBUTOR", "body": "I was going to re-add the filter to only run Prettier when there have been changes in `datasette/static` but that would mean it wouldn't run when the package is updated. 
That plus the fact that [the last run of the job took only 8 seconds](https://github.com/benpickles/datasette/runs/1640121514) is why I decided not to re-add the filter.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 778126516, "label": "Install Prettier via package.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-753531657", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 753531657, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzUzMTY1Nw==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2021-01-02T21:25:36Z", "updated_at": "2021-01-02T21:25:36Z", "author_association": "CONTRIBUTOR", "body": "Actually, on more research, I found out this is handled by the [trove-classifiers package](https://github.com/pypa/trove-classifiers/blob/master/src/trove_classifiers/__init__.py#L2) now, so it's just a one-liner pr instead of fire-up-a-docker-container-and-do-some-migrations", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-752098906", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 752098906, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-12-29T14:34:30Z", "updated_at": "2020-12-29T14:34:50Z", "author_association": "CONTRIBUTOR", "body": "FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py\r\n\r\nNot a production thing, just an experiment trying to explore what might be possible...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59", "id": 751375487, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw==", "user": {"value": 631242, "label": "frosencrantz"}, "created_at": "2020-12-26T17:08:44Z", "updated_at": "2020-12-26T17:08:44Z", "author_association": "CONTRIBUTOR", "body": "Hi @simonw, do I need to do anything else for this PR to be considered to be included? 
I've tried using this project and it is quite nice to be able to explore a repository, but noticed that a couple commands don't allow you to use authorization from the environment variable.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771872303, "label": "Remove unneeded exists=True for -a/--auth flag."}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1158#issuecomment-750389683", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1158", "id": 750389683, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MDM4OTY4Mw==", "user": {"value": 6774676, "label": "eumiro"}, "created_at": "2020-12-23T17:02:50Z", "updated_at": "2020-12-23T17:02:50Z", "author_association": "CONTRIBUTOR", "body": "The dict/set suggestion comes from `pyupgrade --py36-plus`, but then had to `black` the change.\r\n\r\nThe rest comes from PyCharm's Inspect code function. I reviewed all the suggestions and fixed a thing or two, such as leading/trailing spaces in the docstrings or turned around the chained conditions.\r\n\r\nThen I tried to convert all `os.path/glob/open` to `Path`, but there were some local test issues, so I'll have to start over in smaller chunks if you want to have that too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 773913793, "label": "Modernize code to Python 3.6+"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-748562330", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 748562330, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODU2MjMzMA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-20T04:45:08Z", "updated_at": "2020-12-20T04:45:08Z", "author_association": "CONTRIBUTOR", "body": "Fixes the issue mentioned here: https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748562288", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748562288, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODU2MjI4OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-20T04:44:22Z", "updated_at": "2020-12-20T04:44:22Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz @simonw I opened a [PR](https://github.com/dogsheep/dogsheep-photos/pull/31) that replaces the SQL for `ZCOMPUTEDASSETATTRIBUTES` to use osxphotos which now exposes all this data and has been updated for Big Sur. I did regression tests to confirm the extracted data is identical, with one exception which should not affect operation: the old code pulled data from `ZCOMPUTEDASSETATTRIBUTES` for missing photos while the main loop ignores missing photos and does not add them to `apple_photos`. 
The new code does not add rows to the `apple_photos_scores` table for missing photos.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436779", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748436779, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODQzNjc3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-19T07:49:00Z", "updated_at": "2020-12-19T07:49:00Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz ZGENERICASSET changed to ZASSET in Big Sur. Here's a list of other changes to the schema in Big Sur: https://github.com/RhetTbull/osxphotos/wiki/Changes-in-Photos-6---Big-Sur", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/493#issuecomment-748305976", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/493", "id": 748305976, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODMwNTk3Ng==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-12-18T20:34:39Z", "updated_at": "2020-12-18T20:34:39Z", "author_association": "CONTRIBUTOR", "body": "I can't keep up with the renaming contexts, but I like having the ability to run datasette+ datasette-ripgrep against different configs: \r\n\r\n```shell\r\ndatasette serve --metadata=./metadata.json\r\n```\r\n\r\nI have one for all of my code and one per client who has lots of code. So as long as I can point to datasette to something, it's easy to work with. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 449886319, "label": "Rename metadata.json to config.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/998#issuecomment-743080047", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/998", "id": 743080047, "node_id": "MDEyOklzc3VlQ29tbWVudDc0MzA4MDA0Nw==", "user": {"value": 6371750, "label": "JBPressac"}, "created_at": "2020-12-11T09:25:09Z", "updated_at": "2020-12-11T09:25:09Z", "author_association": "CONTRIBUTOR", "body": "Hello Simon,\r\nI have a similar problem with horizontal scrollbar display with Datasette version 0.51 and superior for a table with more than 30 rows. 
With Datasette 0.50, the horizontal scrollbar is displayed, if I upgrade Datasette to 0.51 and superior, the horizontal scrollbar disappears.\r\n\r\nDatasette 0.50: horizontal scrollbar\r\n\r\n![2020-12-11 10_23_28-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885620-a5f17800-3b9a-11eb-8870-654e7d4372ca.png)\r\n\r\nDatasette 0.51 and superior: no horizontal scrollbar\r\n\r\n![2020-12-11 10_24_55-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885782-dfc27e80-3b9a-11eb-9d55-6c9a56227bf2.png)\r\n\r\nThanks,", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 717699884, "label": "Wide tables should scroll horizontally within the page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1130#issuecomment-738907852", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1130", "id": 738907852, "node_id": "MDEyOklzc3VlQ29tbWVudDczODkwNzg1Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-12-04T17:22:29Z", "updated_at": "2020-12-04T17:31:25Z", "author_association": "CONTRIBUTOR", "body": "EDIT: I misunderstood the problem. This seems like a fix better suited for Safari. But I don't have any Apple device to test it.\r\n\r\n```css\r\nbody {\r\n min-height: 100vh;\r\n min-height: -webkit-fill-available;\r\n}\r\nhtml {\r\n height: -webkit-fill-available;\r\n}\r\n```\r\nhttps://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/\r\n\r\n---\r\n\r\nIt's actually not that difficult to fix.\r\nWell, this is actually a workaround to keep viewport in place.\r\n\r\nI usually put a transition (forgot to do it here) that keeps page from resizing.\r\n\r\n```css\r\n.container {\r\n min-height: 100vh;\r\n transition: height 10000s steps(0);\r\n}\r\n```\r\n\r\n`steps()` function prevents excessive layout calculations, and lets the page snap back into place (10000s ~= 3h later) in a single step.\r\nThis fix also prevents page from jumping around when the keyboard pops up and down.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 756876238, "label": "Fix footer not sticking to bottom in short pages"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1111#issuecomment-736322290", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1111", "id": 736322290, "node_id": "MDEyOklzc3VlQ29tbWVudDczNjMyMjI5MA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-12-01T08:54:47Z", "updated_at": "2020-12-01T08:54:47Z", "author_association": "CONTRIBUTOR", "body": "Somewhat related: https://github.com/simonw/datasette/issues/859\r\nI fixed the issue with forking and disabling the counts for hidden tables.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 751195017, "label": "Accessing a database's `.json` is slow for very large SQLite files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1114#issuecomment-735436014", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1114", "id": 735436014, "node_id": 
"MDEyOklzc3VlQ29tbWVudDczNTQzNjAxNA==", "user": {"value": 2182, "label": "danp"}, "created_at": "2020-11-29T18:33:30Z", "updated_at": "2020-11-29T18:33:30Z", "author_association": "CONTRIBUTOR", "body": "Thank you!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 752966476, "label": "--load-extension=spatialite not working with datasetteproject/datasette docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/493#issuecomment-735281577", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/493", "id": 735281577, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTI4MTU3Nw==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-11-28T19:39:53Z", "updated_at": "2020-11-28T19:39:53Z", "author_association": "CONTRIBUTOR", "body": "I was confused by `--config` and I tried passing the json from datasette-ripgrep into `config.json` just as a wild guess. \r\n\r\nA short term solution might be pointing out in plugins that their snippet json can go in `metadata.json` at least makes it easier to search for config options or to know where to start if someone is new. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 449886319, "label": "Rename metadata.json to config.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1112#issuecomment-735279355", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1112", "id": 735279355, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTI3OTM1NQ==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-11-28T19:21:09Z", "updated_at": "2020-11-28T19:21:09Z", "author_association": "CONTRIBUTOR", "body": "(Even more annoying is that I see my editor leaked an extra delete space at the end of the line. 
I'm happy to rebuild this to be less annoying, but you probably don't want the changelog update either way)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 752749485, "label": "Fix --metadata doc usage"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-720354227", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 720354227, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMDM1NDIyNw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-11-02T09:33:58Z", "updated_at": "2020-11-02T09:33:58Z", "author_association": "CONTRIBUTOR", "body": "Thanks; just a note that the `datasette.urls.static(path)` and `datasette.urls.static_plugins(plugin_name, path)` items both seem to be repeated and appear in the docs twice?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1049#issuecomment-718528252", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1049", "id": 718528252, "node_id": "MDEyOklzc3VlQ29tbWVudDcxODUyODI1Mg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-29T09:20:34Z", "updated_at": "2020-10-29T09:20:34Z", "author_association": "CONTRIBUTOR", "body": "That workaround is probably fine. I was trying to work out whether there might be other situations where a pre-external package load might be useful but couldn't offhand bring any other examples to mind. The static plugins option also looks interesting.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 729017519, "label": "Add template block prior to extra URL loaders"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/189#issuecomment-717359145", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/189", "id": 717359145, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNzM1OTE0NQ==", "user": {"value": 35681, "label": "adamwolf"}, "created_at": "2020-10-27T16:20:32Z", "updated_at": "2020-10-27T16:20:32Z", "author_association": "CONTRIBUTOR", "body": "No problem. I added a test. Let me know if it looks sufficient or if you want me to to tweak something!\r\n\r\nIf you don't mind, would you tag this PR as \"hacktoberfest-accepted\"? If you do mind, no problem and I'm sorry for asking :) My kiddos like the shirts.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 729818242, "label": "Allow iterables other than Lists in m2m records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1043#issuecomment-716237524", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1043", "id": 716237524, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjIzNzUyNA==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2020-10-26T00:14:57Z", "updated_at": "2020-10-26T00:14:57Z", "author_association": "CONTRIBUTOR", "body": "Sorry, I was out of the loop this weekend. 
The missing sdists were in some the `datasette-*` plugins... i'll capture my findings more concretely in one spot when i have a chance...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 727915394, "label": "Include LICENSE in sdist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-716123598", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 716123598, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjEyMzU5OA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-25T10:20:12Z", "updated_at": "2020-10-25T10:53:24Z", "author_association": "CONTRIBUTOR", "body": "I'm trying to [run something behind a MyBinder proxy](https://github.com/ouseful-testing/nbsearch), but seem to have something set up incorrectly and not sure what the fix is?\r\n\r\nI'm starting datasette with jupyter-server-proxy setup:\r\n\r\n```\r\n# __init__.py\r\ndef setup_nbsearch():\r\n\r\n return {\r\n \"command\": [\r\n \"datasette\",\r\n \"serve\",\r\n f\"{_NBSEARCH_DB_PATH}\",\r\n \"-p\",\r\n \"{port}\",\r\n \"--config\",\r\n \"base_url:{base_url}nbsearch/\"\r\n ],\r\n \"absolute_url\": True,\r\n # The following needs a the labextension installing.\r\n # eg in postBuild: jupyter labextension install jupyterlab-server-proxy\r\n \"launcher_entry\": {\r\n \"enabled\": True,\r\n \"title\": \"nbsearch\",\r\n },\r\n }\r\n```\r\n\r\nwhere the `base_url` gets automatically populated by the server-proxy. I define the loaders as:\r\n\r\n```\r\n# __init__.py\r\nfrom datasette import hookimpl\r\n\r\n@hookimpl\r\ndef extra_css_urls(database, table, columns, view_name, datasette):\r\n return [\r\n \"/-/static-plugins/nbsearch/prism.css\",\r\n \"/-/static-plugins/nbsearch/nbsearch.css\",\r\n ]\r\n```\r\nbut these seem to also need a base_url prefix set somehow?\r\n\r\nCurrently, the generated HTML loads properly but internal links are incorrect; eg they take the form `` which resolves to eg `https://notebooks.gesis.org/hub/-/static-plugins/nbsearch/prism.css` rather than required URL of form `https://notebooks.gesis.org/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static-plugins/nbsearch/prism.css`.\r\n\r\nThe main css is loaded correctly: ``", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-716066000", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 716066000, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-24T22:58:33Z", "updated_at": "2020-10-24T22:58:33Z", "author_association": "CONTRIBUTOR", "body": "From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note:\r\n\r\n```\r\ndatasette.urls.instance()\r\nReturns the URL to the Datasette instance root page. This is usually \"/\"\r\n```\r\n\r\nWhat about the proxy case? 
Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to?\r\n\r\n- [ ] `https://example.com:PORT/weirdpath/datasette`\r\n- [ ] `https://example.com:PORT/weirdpath/`\r\n- [ ] `https://example.com:PORT/`\r\n- [ ] `https://example.com`\r\n- [ ] something else?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-714908859", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 714908859, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNDkwODg1OQ==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2020-10-23T04:49:20Z", "updated_at": "2020-10-23T04:49:20Z", "author_association": "CONTRIBUTOR", "body": "Good luck on 1.0! It may also be worth lobbying for a `Framework::Datasette::1.0` classifier. This would be a nice way to allow the ecosystem to self-document a bit more [discoverably](https://pypi.org/search/?q=&o=&c=Framework+%3A%3A+Datasette%3A%3A+1.0). \r\n\r\nI was surprised to see the [PR for `Framework::Jupyter`](https://github.com/pypa/warehouse/pull/1905/files) is a... database migration! Of course, there may be more workflow to it!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-714657366", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 714657366, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-22T17:51:29Z", "updated_at": "2020-10-22T17:51:29Z", "author_association": "CONTRIBUTOR", "body": "How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static option` and metadata formulations such as `\"extra_js_urls\": [ \"/static/app.js\"]` (I've not managed to get this to work in a Jupyter server proxied set up; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1019#issuecomment-708520800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1019", "id": 708520800, "node_id": "MDEyOklzc3VlQ29tbWVudDcwODUyMDgwMA==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2020-10-14T16:37:19Z", "updated_at": "2020-10-14T16:37:19Z", "author_association": "CONTRIBUTOR", "body": "\ud83c\udf89 Thanks so much @simonw ! 
\ud83c\udf89 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 721050815, "label": "\"Edit SQL\" button on canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/swarm-to-sqlite/pull/10#issuecomment-707326192", "issue_url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/10", "id": 707326192, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNzMyNjE5Mg==", "user": {"value": 29426418, "label": "mattiaborsoi"}, "created_at": "2020-10-12T20:20:02Z", "updated_at": "2020-10-12T20:20:02Z", "author_association": "CONTRIBUTOR", "body": "This closes issue #8 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 719637258, "label": "Update utils.py to fix sqlite3.OperationalError"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/48#issuecomment-704503719", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/48", "id": 704503719, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNDUwMzcxOQ==", "user": {"value": 755825, "label": "adamjonas"}, "created_at": "2020-10-06T19:26:59Z", "updated_at": "2020-10-06T19:26:59Z", "author_association": "CONTRIBUTOR", "body": "ref #46 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 681228542, "label": "Add pull requests"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-690860653", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50", "id": 690860653, "node_id": "MDEyOklzc3VlQ29tbWVudDY5MDg2MDY1Mw==", "user": {"value": 370930, "label": "mikepqr"}, "created_at": "2020-09-11T04:04:08Z", "updated_at": "2020-09-11T04:04:08Z", "author_association": "CONTRIBUTOR", "body": "There's probably a nicer way of doing (hence this is a comment rather than a PR), but this appears to fix it:\r\n```diff\r\n--- a/twitter_to_sqlite/utils.py\r\n+++ b/twitter_to_sqlite/utils.py\r\n@@ -181,6 +181,7 @@ def fetch_timeline(\r\n args[\"tweet_mode\"] = \"extended\"\r\n min_seen_id = None\r\n num_rate_limit_errors = 0\r\n+ seen_count = 0\r\n while True:\r\n if min_seen_id is not None:\r\n args[\"max_id\"] = min_seen_id - 1\r\n@@ -208,6 +209,7 @@ def fetch_timeline(\r\n yield tweet\r\n min_seen_id = min(t[\"id\"] for t in tweets)\r\n max_seen_id = max(t[\"id\"] for t in tweets)\r\n+ seen_count += len(tweets)\r\n if last_since_id is not None:\r\n max_seen_id = max((last_since_id, max_seen_id))\r\n last_since_id = max_seen_id\r\n@@ -217,7 +219,9 @@ def fetch_timeline(\r\n replace=True,\r\n )\r\n if stop_after is not None:\r\n- break\r\n+ if seen_count >= stop_after:\r\n+ break\r\n+ args[\"count\"] = min(args[\"count\"], stop_after - seen_count)\r\n time.sleep(sleep)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 698791218, "label": "favorites --stop_after=N stops after min(N, 200)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688573964", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/146", "id": 688573964, "node_id": 
"MDEyOklzc3VlQ29tbWVudDY4ODU3Mzk2NA==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-09-08T01:55:07Z", "updated_at": "2020-09-08T01:55:07Z", "author_association": "CONTRIBUTOR", "body": "Okay, I've rewritten this PR to preserve the batching behaviour but still fix #145, and rebased the branch to account for the `db.execute()` api change. It's not terribly sophisticated -- if it attempts to insert a batch which has too many variables, the exception is caught, the batch is split in two and each half is inserted separately, and then it carries on as before with the same `batch_size`. In the edge case where this gets triggered, subsequent batches will all be inserted in two groups too if they continue to have the same number of columns (which is presumably reasonably likely). Do you reckon this is acceptable when set against the awkwardness of recalculating the `batch_size` on the fly?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688668680, "label": "Handle case where subsequent records (after first batch) include extra columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688481317", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/146", "id": 688481317, "node_id": "MDEyOklzc3VlQ29tbWVudDY4ODQ4MTMxNw==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-09-07T19:18:55Z", "updated_at": "2020-09-07T19:18:55Z", "author_association": "CONTRIBUTOR", "body": "Just force-pushed to update d042f9c with more formatting changes to satisfy `black==20.8b1` and pass the GitHub Actions \"Test\" workflow.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688668680, "label": "Handle case where subsequent records (after first batch) include extra columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688479163", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/146", "id": 688479163, "node_id": "MDEyOklzc3VlQ29tbWVudDY4ODQ3OTE2Mw==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-09-07T19:10:33Z", "updated_at": "2020-09-07T19:11:57Z", "author_association": "CONTRIBUTOR", "body": "@simonw -- I've gone ahead updated the documentation to reflect the changes introduced in this PR. IMO it's ready to merge now.\r\n\r\nIn writing the documentation changes, I begin to wonder about the value and role of `batch_size` at all, tbh. May I assume it was originally intended to prevent using the entire row set to determine columns and column types, and that this was a performance consideration? If so, this PR entirely undermines its purpose. I've been passing in excess of 500,000 rows at a time to `insert_all()` with these changes and although I'm sure the performance difference is measurable it's not really noticeable; given #145, I don't know that any performance advantages outweigh the problems doing it this way removes. What do you think about just dropping the argument and defaulting to the maximum `batch_size` permissible given `SQLITE_MAX_VARS`? Are there other reasons one might want to restrict `batch_size` that I've overlooked? 
I could open a new issue to discuss/implement this.\r\n\r\nOf course the documentation will need to change again too if/when something is done about #147.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688668680, "label": "Handle case where subsequent records (after first batch) include extra columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/952#issuecomment-686061028", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/952", "id": 686061028, "node_id": "MDEyOklzc3VlQ29tbWVudDY4NjA2MTAyOA==", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "created_at": "2020-09-02T22:26:14Z", "updated_at": "2020-09-02T22:26:14Z", "author_association": "CONTRIBUTOR", "body": "Looks like black is up-to-date now, so this is no longer needed.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 687245650, "label": "Update black requirement from ~=19.10b0 to >=19.10,<21.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/145#issuecomment-683382252", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/145", "id": 683382252, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MzM4MjI1Mg==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-30T06:27:25Z", "updated_at": "2020-08-30T06:27:52Z", "author_association": "CONTRIBUTOR", "body": "Note: had to adjust the test above because trying to exhaust a `SQLITE_MAX_VARIABLE_NUMBER` of 250000 in 99 records requires 2526 columns, and trips the ` \"Rows can have a maximum of {} columns\".format(SQLITE_MAX_VARS)` check even before it trips the default `SQLITE_MAX_COLUMN` value (2000).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688659182, "label": "Bug when first record contains fewer columns than subsequent records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682815377", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/139", "id": 682815377, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MjgxNTM3Nw==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-28T16:14:58Z", "updated_at": "2020-08-28T16:14:58Z", "author_association": "CONTRIBUTOR", "body": "Thanks! 
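(A quick check of the column arithmetic in the `SQLITE_MAX_VARIABLE_NUMBER` note above; a worked example added for illustration, using the values quoted in that comment:)

```python
import math

SQLITE_MAX_VARIABLE_NUMBER = 250000  # the raised limit used in the adjusted test
records = 99
# One bound variable per column per record, so the columns needed to
# exhaust the limit within 99 records:
print(math.ceil(SQLITE_MAX_VARIABLE_NUMBER / records))  # 2526
# ...which already exceeds SQLite's default SQLITE_MAX_COLUMN of 2000.
```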
And yeah, I had updating the docs on my list too :) Will try to get to it this afternoon (budgeting time is fraught with uncertainty at the moment!).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 686978131, "label": "insert_all(..., alter=True) should work for new columns introduced after the first 100 records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682182178", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/139", "id": 682182178, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MjE4MjE3OA==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-27T20:46:18Z", "updated_at": "2020-08-27T20:46:18Z", "author_association": "CONTRIBUTOR", "body": "> I tried changing the batch_size argument to the total number of records, but it seems only to effect the number of rows that are committed at a time, and has no influence on this problem.\r\n\r\nSo the reason for this is that the `batch_size` for import is limited (of necessity) here: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L1048\r\n\r\nWith regard to the issue of ignoring columns, however, I made a fork and hacked a temporary fix that looks like this:\r\nhttps://github.com/simonwiles/sqlite-utils/commit/3901f43c6a712a1a3efc340b5b8d8fd0cbe8ee63\r\n\r\nIt doesn't seem to affect performance enormously (but I've not tested it thoroughly), and it now does what I need (and would expect, tbh), but it now fails the test here:\r\nhttps://github.com/simonw/sqlite-utils/blob/main/tests/test_create.py#L710-L716\r\n\r\nThe existence of this test suggests that `insert_all()` is behaving as intended, of course. It seems odd to me that this would be a desirable default behaviour (let alone the only behaviour), and it's not very prominently flagged up, either.\r\n\r\n@simonw is this something you'd be willing to look at a PR for? I assume you wouldn't want to change the default behaviour at this point, but perhaps an option could be provided, or at least a bit more of a warning in the docs. Are there oversights I've made in the implementation?\r\n\r\nWould be grateful for your thoughts! 
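(For illustration, the opt-in behaviour being asked about could be shaped like this sketch: detect keys missing from the table and issue `ALTER TABLE` before inserting. A hypothetical helper, not the actual sqlite-utils internals:)

```python
import sqlite3

def insert_adding_columns(conn, table, rows):
    # Columns the table currently has; PRAGMA table_info rows are
    # (cid, name, type, notnull, dflt_value, pk)
    existing = {r[1] for r in conn.execute("PRAGMA table_info([{}])".format(table))}
    for row in rows:
        for col in row.keys() - existing:
            conn.execute("ALTER TABLE [{}] ADD COLUMN [{}]".format(table, col))
            existing.add(col)
        cols = list(row)
        conn.execute(
            "INSERT INTO [{}] ({}) VALUES ({})".format(
                table,
                ", ".join("[{}]".format(c) for c in cols),
                ", ".join("?" for _ in cols),
            ),
            list(row.values()),
        )
```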
Thanks!\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 686978131, "label": "insert_all(..., alter=True) should work for new columns introduced after the first 100 records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/456#issuecomment-661524006", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/456", "id": 661524006, "node_id": "MDEyOklzc3VlQ29tbWVudDY2MTUyNDAwNg==", "user": {"value": 32467826, "label": "abeyerpath"}, "created_at": "2020-07-21T01:15:07Z", "updated_at": "2020-07-21T01:15:07Z", "author_association": "CONTRIBUTOR", "body": "Bumping this, as the previous fix is passing the wrong type, and not actually addressing the issue...\r\n\r\nThe `exclude` argument needs an iterable of packages instead of a single string (but since `str` is iterable, it's currently excluding packages `t`, `e`, and `s`.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 442327592, "label": "Installing installs the tests package"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655898722", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/121", "id": 655898722, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTg5ODcyMg==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-09T04:53:08Z", "updated_at": "2020-07-09T04:53:08Z", "author_association": "CONTRIBUTOR", "body": "Yep, I agree that makes more sense for backwards compat and more casual use cases. I think it should be possible for the Database/Queryable methods to DTRT based on seeing if it's within a context-manager-managed transaction.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 652961907, "label": "Improved (and better documented) support for transactions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655652679", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/121", "id": 655652679, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTY1MjY3OQ==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-08T17:24:46Z", "updated_at": "2020-07-08T17:24:46Z", "author_association": "CONTRIBUTOR", "body": "Better transaction handling would be really great. Some of my thoughts on implementing better transaction discipline are in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728.\r\n\r\nMy preferences:\r\n\r\n- Each CLI command should operate in a single transaction so that either the whole thing succeeds or the whole thing is rolled back. This avoids partially completed operations when an error occurs part way through processing. Partially completed operations are typically much harder to recovery from gracefully and may cause inconsistent data states.\r\n\r\n- The Python API should be transaction-agnostic and rely on the caller to coordinate transactions. Only the caller knows how individual insert, create, update, etc operations/methods should be bundled conceptually into transactions. When the caller is the CLI, for example, that bundling would be at the CLI command-level. 
Other callers might want to break up operations into multiple transactions. Transactions are usually most useful when controlled at the application level (like logging configuration) instead of the library level. The library needs to provide an API that's conducive to transaction use, though.\r\n\r\n- The Python API should provide a context manager offering consistent transaction handling with more useful defaults than Python's `sqlite3` module. The latter issues implicit `BEGIN` statements by default for most DML (`INSERT`, `UPDATE`, `DELETE`, \u2026 but not `SELECT`, I believe), but **not** DDL (`CREATE TABLE`, `DROP TABLE`, `CREATE VIEW`, \u2026). Notably, the `sqlite3` module doesn't issue the implicit `BEGIN` until the first DML statement. It _does not_ issue it when entering the `with conn` block, like other DBAPI2-compatible modules do. The `with conn` block for `sqlite3` only arranges to commit or roll back an existing transaction when exiting. Including DDL and `SELECT`s in transactions is important for operation consistency, though. There are several existing bugs.python.org tickets about this and future changes are in the works, but sqlite-utils can provide its own API sooner. sqlite-utils's `Database` class could itself be a context manager (built on the `sqlite3` connection context manager) which additionally issues an explicit `BEGIN` when entering. This would then let Python API callers do something like:\r\n\r\n```python\r\ndb = sqlite_utils.Database(path)\r\n\r\nwith db: # \u2190 BEGIN issued here by Database.__enter__\r\n db.insert(\u2026)\r\n db.create_view(\u2026)\r\n# \u2190 COMMIT/ROLLBACK issued here by sqlite3.connection.__exit__\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 652961907, "label": "Improved (and better documented) support for transactions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655643078", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655643078, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTY0MzA3OA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-08T17:05:59Z", "updated_at": "2020-07-08T17:05:59Z", "author_association": "CONTRIBUTOR", "body": "> The only thing missing from this PR is updates to the documentation.\r\n\r\nAh, yes, thanks for this reminder! I've repushed with doc bits added.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655239728, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTIzOTcyOA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-08T02:16:42Z", "updated_at": "2020-07-08T02:16:42Z", "author_association": "CONTRIBUTOR", "body": "I fixed my original oops by moving the `DELETE FROM $table` out of the chunking loop and repushed. I think this change can be considered in isolation from issues around transactions, which I discuss next.\r\n\r\nI wanted to make the DELETE + INSERT happen all in the same transaction so it was robust, but that was more complicated than I expected. 
The transaction handling in the Database/Table classes isn't systematic, and this poses big hurdles to making `Table.insert_all` (or other operations) consistent and robust in the face of errors.\r\n\r\nFor example, I wanted to do this (whitespace ignored in diff, so indentation change not highlighted):\r\n\r\n```diff\r\ndiff --git a/sqlite_utils/db.py b/sqlite_utils/db.py\r\nindex d6b9ecf..4107ceb 100644\r\n--- a/sqlite_utils/db.py\r\n+++ b/sqlite_utils/db.py\r\n@@ -1028,6 +1028,11 @@ class Table(Queryable):\r\n batch_size = max(1, min(batch_size, SQLITE_MAX_VARS // num_columns))\r\n self.last_rowid = None\r\n self.last_pk = None\r\n+ with self.db.conn:\r\n+ # Explicit BEGIN is necessary because Python's sqlite3 doesn't\r\n+ # issue implicit BEGINs for DDL, only DML. We mix DDL and DML\r\n+ # below and might execute DDL first, e.g. for table creation.\r\n+ self.db.conn.execute(\"BEGIN\")\r\n if truncate and self.exists():\r\n self.db.conn.execute(\"DELETE FROM [{}];\".format(self.name))\r\n for chunk in chunks(itertools.chain([first_record], records), batch_size):\r\n@@ -1038,7 +1043,11 @@ class Table(Queryable):\r\n # Use the first batch to derive the table names\r\n column_types = suggest_column_types(chunk)\r\n column_types.update(columns or {})\r\n- self.create(\r\n+ # Not self.create() because that is wrapped in its own\r\n+ # transaction and Python's sqlite3 doesn't support\r\n+ # nested transactions.\r\n+ self.db.create_table(\r\n+ self.name,\r\n column_types,\r\n pk,\r\n foreign_keys,\r\n@@ -1139,7 +1148,6 @@ class Table(Queryable):\r\n flat_values = list(itertools.chain(*values))\r\n queries_and_params = [(sql, flat_values)]\r\n \r\n- with self.db.conn:\r\n for query, params in queries_and_params:\r\n try:\r\n result = self.db.conn.execute(query, params)\r\n```\r\n\r\nbut that fails in tests because other methods call `insert/upsert/insert_all/upsert_all` in the middle of their transactions, so the BEGIN statement throws an error (no nested transactions allowed).\r\n\r\nStepping back, it would be nice to make the transaction handling systematic and predictable. One way to do this is to make the `sqlite_utils/db.py` code generally not begin or commit any transactions, and require the caller to do that instead. This lets the caller mix and match the Python API calls into transactions as appropriate (which is impossible for the API methods themselves to fully determine). Then, make `sqlite_utils/cli.py` begin and commit a transaction in each `@cli.command` function, making each command robust and consistent in the face of errors. The big change here, and why I didn't just submit a patch, is that it dramatically changes the Python API to _require_ callers to begin a transaction rather than just immediately calling methods.\r\n\r\nThere is also the caveat that for each transaction, an explicit `BEGIN` is also necessary so that DDL as well as DML (as well as `SELECT`s) are consistent and rolled back on error. There are several bugs.python.org discussions around this particular problem of DDL and some plans to make it better and consistent with DBAPI2, eventually. In the meantime, the sqlite-utils Database class could be a context manager which supports the incantations necessary to do proper transactions. 
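(A minimal sketch of that incantation, assuming Python's `sqlite3` in autocommit mode; an illustration of the idea, not the actual sqlite-utils implementation:)

```python
import sqlite3

class Database:
    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        # Autocommit mode: we issue BEGIN/COMMIT/ROLLBACK ourselves
        self.conn.isolation_level = None

    def __enter__(self):
        # Explicit BEGIN so that DDL and SELECTs join the transaction too
        self.conn.execute("BEGIN")
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.conn.execute("COMMIT")
        else:
            self.conn.execute("ROLLBACK")
```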
This would still be a Python API change for callers but wouldn't expose them to the weirdness of sqlite3's default transaction handling.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655052451", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655052451, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTA1MjQ1MQ==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-07T18:45:23Z", "updated_at": "2020-07-07T18:45:23Z", "author_association": "CONTRIBUTOR", "body": "Ah, I see the problem. The truncate is inside a loop I didn't realize was there.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655018966", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655018966, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTAxODk2Ng==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-07T17:41:06Z", "updated_at": "2020-07-07T17:41:06Z", "author_association": "CONTRIBUTOR", "body": "Hmm, while tests pass, this may not work as intended on larger datasets. Looking into it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/889#issuecomment-653002499", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/889", "id": 653002499, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MzAwMjQ5OQ==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-07-02T13:22:13Z", "updated_at": "2020-07-02T13:22:13Z", "author_association": "CONTRIBUTOR", "body": "I was able to narrow this down to the fact that the lifespan protocol is turned on. \r\n\r\nI see the workaround you've used here: https://github.com/simonw/datasette-debug-asgi/commit/72d568d32a3159c763ce908c0b269736935c6987\r\n\r\nIf so, maybe it's time to update some of the asgi_wrapper [plugins](https://datasette.readthedocs.io/en/stable/plugin_hooks.html#asgi-wrapper-datasette). 
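(Such a wrapper might look like this sketch, using the documented `asgi_wrapper` plugin hook; the pass-through of non-HTTP scopes is the idea behind the workaround, not code from that commit:)

```python
from datasette import hookimpl

@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def wrapped_app(scope, receive, send):
            if scope["type"] != "http":
                # Pass lifespan (and any other non-HTTP) events straight
                # through instead of assuming every scope is a request
                await app(scope, receive, send)
                return
            # ...per-request instrumentation would go here...
            await app(scope, receive, send)
        return wrapped_app
    return wrap
```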
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 649907676, "label": "asgi_wrapper plugin hook is crashing at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/889#issuecomment-652990131", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/889", "id": 652990131, "node_id": "MDEyOklzc3VlQ29tbWVudDY1Mjk5MDEzMQ==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-07-02T12:58:11Z", "updated_at": "2020-07-02T13:00:18Z", "author_association": "CONTRIBUTOR", "body": "FWIW, this error does NOT happen in datasette 0.45a4.\r\n\r\nIt only started on 0.45a5", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 649907676, "label": "asgi_wrapper plugin hook is crashing at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/883#issuecomment-652394742", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/883", "id": 652394742, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjM5NDc0Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T12:41:13Z", "updated_at": "2020-07-01T12:41:13Z", "author_association": "CONTRIBUTOR", "body": "Well tests need to be updated.\r\n \r\nI need to get tests working on Windows.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648749062, "label": "Skip counting hidden tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/883#issuecomment-652297139", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/883", "id": 652297139, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI5NzEzOQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T09:11:29Z", "updated_at": "2020-07-01T09:11:29Z", "author_association": "CONTRIBUTOR", "body": "Turns out we should include hidden tables in the result dict, or we're breaking tests. 
I've committed a refactor https://github.com/simonw/datasette/pull/883/commits/4f06e1bf6fbe4b73be770b87f610bf7c0e6e3ea7", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648749062, "label": "Skip counting hidden tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652255960", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652255960, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI1NTk2MA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T07:52:25Z", "updated_at": "2020-07-01T08:10:00Z", "author_association": "CONTRIBUTOR", "body": "I am calling the API from another origin, so injecting a CSRF token into templates wouldn't work.\r\n\r\nEDIT:\r\n\r\nI'll try the new version; it sounds promising", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652261382", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652261382, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI2MTM4Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T08:03:17Z", "updated_at": "2020-07-01T08:03:23Z", "author_association": "CONTRIBUTOR", "body": "Bearer tokens sound interesting. Where do tokens come from? An auth provider of my choosing? How do they get verified?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652166115", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652166115, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjE2NjExNQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T03:28:07Z", "updated_at": "2020-07-01T03:28:07Z", "author_association": "CONTRIBUTOR", "body": "Does this mean custom routes get to expose endpoints accepting POST requests? I tried earlier to add some POST endpoints, but requests were being rejected by Datasette due to CSRF", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-652160909", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 652160909, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjE2MDkwOQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T03:09:32Z", "updated_at": "2020-07-01T03:10:21Z", "author_association": "CONTRIBUTOR", "body": "I've just realized Datasette tries to count hidden tables too. There are 5 visible tables and 25 hidden tables, which I hadn't realized earlier, so I hadn't considered their effect. 
I've turned off counting for hidden tables to see if it has any effect.\r\n\r\nWhat's the point of counting FTS tables?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-648669523", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 648669523, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODY2OTUyMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-24T08:13:23Z", "updated_at": "2020-06-24T10:30:36Z", "author_association": "CONTRIBUTOR", "body": "I tried setting `cache_size_kb=0` then `cache_size_kb=100000`, still getting this behavior. I even changed `Database::table_counts` and lowered the time limit to 1\r\n\r\n```py\r\ntable_count = (\r\n await self.execute(\r\n \"select count(*) from [{}]\".format(table),\r\n custom_time_limit=1,\r\n )\r\n).rows[0][0]\r\ncounts[table] = table_count\r\n```\r\n\r\nI feel like 10 seconds is a magic number, like a processing timeout after which datasette gives up and returns the page. \r\nThe index page loads instantly, and table and query pages do as well. But when I return to the database page after some time, it loads in 10s.\r\n\r\nEDIT:\r\n\r\nIt's always like 10 + 0.3s: a 10s wait-and-timeout, then 300ms to render the page", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-648232645", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 648232645, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODIzMjY0NQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T15:19:53Z", "updated_at": "2020-06-23T15:19:53Z", "author_association": "CONTRIBUTOR", "body": "The issue seems to appear sporadically, like when I return to the database page after a while, during which some records have been added to the database.\r\n\r\nI've just visited the database page; the first visit took ~10s, consecutive visits took 0.3s.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647925594", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647925594, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyNTU5NA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:55:21Z", "updated_at": "2020-06-23T06:28:29Z", "author_association": "CONTRIBUTOR", "body": "Hmm, not seeing the problem now. \r\nI've removed the commented-out sections in `database.py` and restarted the process. The database page now loads in <250ms.\r\n\r\nI have a couple of workers that check some pages regularly, scrape new content and save it to the DB. Could it be that datasette tries to recount tables every time the database size changes? 
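(For illustration, one way to avoid recounting when nothing has changed is to key a cache on SQLite's `data_version` pragma, which changes when another connection modifies the database file. A sketch only, not what Datasette actually does:)

```python
import sqlite3

_cache = {"version": None, "counts": None}

def cached_table_counts(conn):
    # PRAGMA data_version changes when *another* connection writes to the file,
    # so it can key a cache that is reused only while the database is unchanged
    version = conn.execute("PRAGMA data_version").fetchone()[0]
    if _cache["counts"] is None or _cache["version"] != version:
        tables = [r[0] for r in conn.execute(
            "select name from sqlite_master where type = 'table'")]
        _cache["counts"] = {
            t: conn.execute("select count(*) from [{}]".format(t)).fetchone()[0]
            for t in tables
        }
        _cache["version"] = version
    return _cache["counts"]
```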
Normally it keeps a count cache, but as the DB gets updated so often (new content every 5 min or so) it's practically recounting every time I go to the database page?\r\n\r\nEDIT: \r\nIt turns out it doesn't keep the cache for mutable databases.\r\n\r\nI'll update the issue with more findings and a better way to reproduce the problem if I encounter it again.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647936117", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647936117, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:25:17Z", "updated_at": "2020-06-23T06:25:17Z", "author_association": "CONTRIBUTOR", "body": "> \r\n> \r\n> ```\r\n> sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50\r\n> ```\r\n> \r\n> Looks like that will take 35 minutes to run (it's not a particularly fast tool).\r\n\r\nTry chunking write operations into batches every 1000 records or so.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647935300", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647935300, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNTMwMA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:23:01Z", "updated_at": "2020-06-23T06:23:01Z", "author_association": "CONTRIBUTOR", "body": "> You said \"200k+, 50+ rows in a couple of tables\" - does that mean 50+ columns? 
I'll try with larger numbers of columns and see what difference that makes.\r\n\r\nAh, that was a typo; I meant 50k.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647923666", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647923666, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyMzY2Ng==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:49:31Z", "updated_at": "2020-06-23T05:49:31Z", "author_association": "CONTRIBUTOR", "body": "I think I should mention that having FTS on all tables means I have 5 visible and 25 hidden (FTS) tables displayed on the database page.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647194131", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647194131, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzE5NDEzMQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-21T23:15:54Z", "updated_at": "2020-06-21T23:26:09Z", "author_association": "CONTRIBUTOR", "body": "I'm not sure if table counts are to blame. There shouldn't be a ~3 orders of magnitude difference.\r\n\r\n```fish\r\nuser@klein /a/w/scrapyard (master)> set sql \"select count(*) from table_1; select count(*) from table_2; select count(*) from table_3;\"\r\nuser@klein /a/w/scrapyard (master)> time sqlite3 scrapyard.db \"$sql\"\r\n187489\r\n46492\r\n2229\r\n\r\n________________________________________________________\r\nExecuted in 25.57 millis fish external\r\n usr time 3.55 millis 0.00 micros 3.55 millis\r\n sys time 22.42 millis 1123.00 micros 21.30 millis\r\n```\r\n\r\nbut not letting datasette count the tables definitely helps.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647135713", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647135713, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzEzNTcxMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-21T14:30:02Z", "updated_at": "2020-06-21T14:30:02Z", "author_association": "CONTRIBUTOR", "body": "Oops, the same method is called from both the index and database pages. 
But removing the select count queries speeds up the page load quite a bit.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/851#issuecomment-645293374", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/851", "id": 645293374, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NTI5MzM3NA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-17T10:32:02Z", "updated_at": "2020-06-17T10:32:28Z", "author_association": "CONTRIBUTOR", "body": "Welp, I'm an idiot.\r\n\r\nTurns out I had a sneaky comma `,` after the `sql` key:\r\n```\r\n... (:name, :url),\r\n```\r\nwhich tells sqlite to expect another `values(...)` list.\r\n\r\nCorrecting the SQL solved the issue. \r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 640330278, "label": "Having trouble getting writable canned queries to work"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/691#issuecomment-643709037", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/691", "id": 643709037, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MzcwOTAzNw==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-06-14T02:35:16Z", "updated_at": "2020-06-14T02:35:16Z", "author_association": "CONTRIBUTOR", "body": "The server should reload in the `config_dir` mode. \r\n\r\nRef: #848", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 574021194, "label": "--reload sould reload server if code in --plugins-dir changes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-641908346", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 641908346, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-06-10T10:22:54Z", "updated_at": "2020-06-10T10:22:54Z", "author_association": "CONTRIBUTOR", "body": "There's a working demo here: https://github.com/wragge/datasette-test\r\n\r\nAnd if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/767#issuecomment-632555800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/767", "id": 632555800, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjU1NTgwMA==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2020-05-22T08:00:23Z", "updated_at": "2020-05-22T08:00:23Z", "author_association": "CONTRIBUTOR", "body": "That would be perfect!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 
0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 620969465, "label": "Allow to specify a URL fragment for canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-628405453", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 628405453, "node_id": "MDEyOklzc3VlQ29tbWVudDYyODQwNTQ1Mw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-14T05:59:53Z", "updated_at": "2020-05-14T05:59:53Z", "author_association": "CONTRIBUTOR", "body": "I've added support for the above exif data to [v0.28.17](https://github.com/RhetTbull/osxphotos/releases/tag/v0.28.17) of osxphotos. `PhotoInfo.exif_info` will return an `ExifInfo` [dataclass](https://docs.python.org/3/library/dataclasses.html) object with the following properties:\r\n\r\n```python\r\n flash_fired: bool\r\n iso: int\r\n metering_mode: int\r\n sample_rate: int\r\n track_format: int\r\n white_balance: int\r\n aperture: float\r\n bit_rate: float\r\n duration: float\r\n exposure_bias: float\r\n focal_length: float\r\n fps: float\r\n latitude: float\r\n longitude: float\r\n shutter_speed: float\r\n camera_make: str\r\n camera_model: str\r\n codec: str\r\n lens_model: str\r\n```\r\n\r\nIt's not all the EXIF data available in most files but is the data Photos deems important to save. Of course, you can get all the exif_data\r\n\r\nNote: this only works in Photos 5. As best as I can tell, EXIF data is not stored in the database for earlier versions. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-627007458", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 627007458, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNzAwNzQ1OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T22:51:52Z", "updated_at": "2020-05-11T22:52:26Z", "author_association": "CONTRIBUTOR", "body": "I'm not familiar with `ExifReader`. I wrote my own wrapper around `exiftool` because I wanted a simple way to write EXIF data when exporting photos (e.g. writing out to PersonInImage and keywords to IPTC:Keywords) and the existing python packages like [pyexiftool](https://github.com/smarnach/pyexiftool) didn't do quite what I wanted. If all you're after is the camera and shot info, that's available in `ZEXTENDEDATTRIBUTES` table. I've got an open issue [#11](https://github.com/RhetTbull/osxphotos/issues/11) to add this to osxphotos but it hasn't bubbled to the top of my backlog yet. \r\n\r\nosxphotos will give you the location info: `PhotoInfo.location` returns a tuple of (lat, lon) though this info is in ZEXTENDEDATTRIBUTES too (though it might not be correct as I believe Photos creates this table at import and the user might have changed the location of a photo, e.g. 
 if the camera didn't have GPS).\r\n\r\n```sql\r\nCREATE TABLE ZEXTENDEDATTRIBUTES (\r\n Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, \r\n Z_OPT INTEGER, ZFLASHFIRED INTEGER, \r\n ZISO INTEGER, ZMETERINGMODE INTEGER, \r\n ZSAMPLERATE INTEGER, ZTRACKFORMAT INTEGER, \r\n ZWHITEBALANCE INTEGER, ZASSET INTEGER, \r\n ZAPERTURE FLOAT, ZBITRATE FLOAT, ZDURATION FLOAT, \r\n ZEXPOSUREBIAS FLOAT, ZFOCALLENGTH FLOAT, \r\n ZFPS FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT, \r\n ZSHUTTERSPEED FLOAT, ZCAMERAMAKE VARCHAR, \r\n ZCAMERAMODEL VARCHAR, ZCODEC VARCHAR, \r\n ZLENSMODEL VARCHAR\r\n);\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 626667235, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T12:20:34Z", "updated_at": "2020-05-11T12:20:34Z", "author_association": "CONTRIBUTOR", "body": "@simonw FYI, osxphotos includes a built-in ExifTool class that uses [exiftool](https://exiftool.org/) to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osxphotos command-line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example:\r\n\r\n```python\r\nimport osxphotos\r\nphotos = osxphotos.PhotosDB().photos()\r\nexiftool = osxphotos.exiftool.ExifTool(photos[0].path)\r\nexifdata = exiftool.as_dict()\r\ntags = exifdata[\"IPTC:Keywords\"]\r\n```\r\n\r\nNot as elegant perhaps as a Python-only implementation because ExifTool has to make subprocess calls to an external tool but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC.\r\n\r\nAs for implementation, ExifTool uses a singleton pattern so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files). ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626396379, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T22:01:48Z", "updated_at": "2020-05-10T22:01:48Z", "author_association": "CONTRIBUTOR", "body": "Frustrates me when package authors create a \"drop-in\" replacement with the same import name...this kind of thing has bitten me more than once! 
Would've been nicer, I think, for bpylist2 to do \"import bpylist2 as bpylist\".", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395641, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:55:54Z", "updated_at": "2020-05-10T21:55:54Z", "author_association": "CONTRIBUTOR", "body": "Did removing the old bpylist solve the original problem, or do you still have a photo that throws a circular reference?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null}