{"html_url": "https://github.com/simonw/datasette/pull/890#issuecomment-653309545", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/890", "id": 653309545, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MzMwOTU0NQ==", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2020-07-03T02:52:25Z", "updated_at": "2020-07-03T03:03:00Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/890?src=pr&el=h1) Report\n> Merging [#890](https://codecov.io/gh/simonw/datasette/pull/890?src=pr&el=desc) into [master](https://codecov.io/gh/simonw/datasette/commit/57879dc8b346a435804a9e45ffaacbf2a0228bc6&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `80.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/890/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1)](https://codecov.io/gh/simonw/datasette/pull/890?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #890 +/- ##\n==========================================\n- Coverage 83.42% 83.40% -0.02% \n==========================================\n Files 27 27 \n Lines 3632 3634 +2 \n==========================================\n+ Hits 3030 3031 +1 \n- Misses 602 603 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/890?src=pr&el=tree) | Coverage \u0394 | |\n|---|---|---|\n| [datasette/app.py](https://codecov.io/gh/simonw/datasette/pull/890/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2FwcC5weQ==) | `95.99% <80.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/890?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/890?src=pr&el=footer). Last update [57879dc...745af3b](https://codecov.io/gh/simonw/datasette/pull/890?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 650305298, "label": "Load only python files from plugins-dir."}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/848#issuecomment-643711117", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/848", "id": 643711117, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MzcxMTExNw==", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2020-06-14T03:05:55Z", "updated_at": "2020-07-03T02:44:09Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/848?src=pr&el=h1) Report\n> Merging [#848](https://codecov.io/gh/simonw/datasette/pull/848?src=pr&el=desc) into [master](https://codecov.io/gh/simonw/datasette/commit/57879dc8b346a435804a9e45ffaacbf2a0228bc6&el=desc) will **decrease** coverage by `0.60%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/848/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1)](https://codecov.io/gh/simonw/datasette/pull/848?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #848 +/- ##\n==========================================\n- Coverage 83.42% 82.82% -0.61% \n==========================================\n Files 27 26 -1 \n Lines 3632 3540 -92 \n==========================================\n- Hits 3030 2932 -98 \n- Misses 602 608 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/848?src=pr&el=tree) | Coverage \u0394 | |\n|---|---|---|\n| [datasette/cli.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2NsaS5weQ==) | `71.34% <0.00%> (-0.89%)` | :arrow_down: |\n| [datasette/views/special.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3ZpZXdzL3NwZWNpYWwucHk=) | `77.77% <0.00%> (-3.40%)` | :arrow_down: |\n| [datasette/app.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2FwcC5weQ==) | `94.58% <0.00%> (-1.58%)` | :arrow_down: |\n| [datasette/utils/asgi.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3V0aWxzL2FzZ2kucHk=) | `90.90% <0.00%> (-0.42%)` | :arrow_down: |\n| [datasette/utils/\\_\\_init\\_\\_.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3V0aWxzL19faW5pdF9fLnB5) | `93.84% <0.00%> (-0.09%)` | :arrow_down: |\n| [datasette/plugins.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3BsdWdpbnMucHk=) | `82.35% <0.00%> (\u00f8)` | |\n| [datasette/hookspecs.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2hvb2tzcGVjcy5weQ==) | `100.00% <0.00%> (\u00f8)` | |\n| [datasette/default\\_permissions.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2RlZmF1bHRfcGVybWlzc2lvbnMucHk=) | `100.00% <0.00%> (\u00f8)` | |\n| [datasette/default\\_magic\\_parameters.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2RlZmF1bHRfbWFnaWNfcGFyYW1ldGVycy5weQ==) | | |\n| [datasette/views/base.py](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3ZpZXdzL2Jhc2UucHk=) | `93.40% <0.00%> (+<0.01%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/simonw/datasette/pull/848/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/848?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/848?src=pr&el=footer). Last update [57879dc...0d100d1](https://codecov.io/gh/simonw/datasette/pull/848?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 638270441, "label": "Reload support for config_dir mode."}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/883#issuecomment-652311990", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/883", "id": 652311990, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjMxMTk5MA==", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2020-07-01T09:40:40Z", "updated_at": "2020-07-01T09:40:40Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/883?src=pr&el=h1) Report\n> Merging [#883](https://codecov.io/gh/simonw/datasette/pull/883?src=pr&el=desc) into [master](https://codecov.io/gh/simonw/datasette/commit/676bb64c877d73f8ff496cef4632f5a8a5a9283c&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/883/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1)](https://codecov.io/gh/simonw/datasette/pull/883?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #883 +/- ##\n=======================================\n Coverage 83.42% 83.42% \n=======================================\n Files 27 27 \n Lines 3632 3632 \n=======================================\n Hits 3030 3030 \n Misses 602 602 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/883?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/883?src=pr&el=footer). Last update [676bb64...251884f](https://codecov.io/gh/simonw/datasette/pull/883?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648749062, "label": "Skip counting hidden tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/869#issuecomment-650600176", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/869", "id": 650600176, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MDYwMDE3Ng==", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2020-06-27T18:41:31Z", "updated_at": "2020-06-28T02:54:21Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/869?src=pr&el=h1) Report\n> Merging [#869](https://codecov.io/gh/simonw/datasette/pull/869?src=pr&el=desc) into [master](https://codecov.io/gh/simonw/datasette/commit/1bb33dab49fd25f77b9f8e7ab7ee23b3d64c123c&el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `90.62%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/869/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1)](https://codecov.io/gh/simonw/datasette/pull/869?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #869 +/- ##\n==========================================\n+ Coverage 82.99% 83.23% +0.23% \n==========================================\n Files 26 27 +1 \n Lines 3547 3609 +62 \n==========================================\n+ Hits 2944 3004 +60 \n- Misses 603 605 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/869?src=pr&el=tree) | Coverage \u0394 | |\n|---|---|---|\n| [datasette/plugins.py](https://codecov.io/gh/simonw/datasette/pull/869/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3BsdWdpbnMucHk=) | `82.35% <\u00f8> (\u00f8)` | |\n| [datasette/views/database.py](https://codecov.io/gh/simonw/datasette/pull/869/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3ZpZXdzL2RhdGFiYXNlLnB5) | `96.45% <86.36%> (-1.88%)` | :arrow_down: |\n| [datasette/default\\_magic\\_parameters.py](https://codecov.io/gh/simonw/datasette/pull/869/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2RlZmF1bHRfbWFnaWNfcGFyYW1ldGVycy5weQ==) | `91.17% <91.17%> (\u00f8)` | |\n| [datasette/app.py](https://codecov.io/gh/simonw/datasette/pull/869/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2FwcC5weQ==) | `96.07% <100.00%> (+0.81%)` | :arrow_up: |\n| [datasette/hookspecs.py](https://codecov.io/gh/simonw/datasette/pull/869/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL2hvb2tzcGVjcy5weQ==) | `100.00% <100.00%> (\u00f8)` | |\n| [datasette/utils/\\_\\_init\\_\\_.py](https://codecov.io/gh/simonw/datasette/pull/869/diff?src=pr&el=tree#diff-ZGF0YXNldHRlL3V0aWxzL19faW5pdF9fLnB5) | `93.87% <100.00%> (+0.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/869?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/869?src=pr&el=footer). Last update [1bb33da...9e693a7](https://codecov.io/gh/simonw/datasette/pull/869?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 646734280, "label": "Magic parameters for canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/866#issuecomment-648818707", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/866", "id": 648818707, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODgxODcwNw==", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2020-06-24T13:26:14Z", "updated_at": "2020-06-24T13:26:14Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/866?src=pr&el=h1) Report\n> Merging [#866](https://codecov.io/gh/simonw/datasette/pull/866?src=pr&el=desc) into [master](https://codecov.io/gh/simonw/datasette/commit/1a5b7d318fa923edfcefd3df8f64dae2e9c49d3f&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/866/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1)](https://codecov.io/gh/simonw/datasette/pull/866?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #866 +/- ##\n=======================================\n Coverage 82.99% 82.99% \n=======================================\n Files 26 26 \n Lines 3547 3547 \n=======================================\n Hits 2944 2944 \n Misses 603 603 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/866?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/866?src=pr&el=footer). Last update [1a5b7d3...fb64dda](https://codecov.io/gh/simonw/datasette/pull/866?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 644610729, "label": "Update pytest-asyncio requirement from <0.13,>=0.10 to >=0.10,<0.15"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-648800356", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 648800356, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODgwMDM1Ng==", "user": {"value": 6739646, "label": "tballison"}, "created_at": "2020-06-24T12:51:48Z", "updated_at": "2020-06-24T12:51:48Z", "author_association": "NONE", "body": ">But also want to say thanks for a great tool\r\n\r\n+1!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/865#issuecomment-648799963", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/865", "id": 648799963, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODc5OTk2Mw==", "user": {"value": 6739646, "label": "tballison"}, "created_at": "2020-06-24T12:51:01Z", "updated_at": "2020-06-24T12:51:01Z", "author_association": "NONE", "body": "This seems to be a duplicate of: https://github.com/simonw/datasette/issues/838", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 644582921, "label": "base_url doesn't seem to work when adding criteria and clicking \"apply\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/694#issuecomment-648296323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/694", "id": 648296323, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODI5NjMyMw==", "user": {"value": 3903726, "label": "kwladyka"}, "created_at": "2020-06-23T17:10:51Z", "updated_at": "2020-06-23T17:10:51Z", "author_association": "NONE", "body": "@simonw \r\n\r\nDid you find the reason? I had a similar situation and I checked this in a million ways. I am sure the app doesn't consume that much memory.\r\n\r\nI was trying the app with:\r\n`docker run --rm -it -p 80:80 -m 128M foo`\r\n\r\nI was watching the app with `docker stats`, and even limited its memory with `CMD [\"java\", \"-Xms60M\", \"-Xmx60M\", \"-jar\", \"api.jar\"]`.\r\nI checked the app's memory usage in code and printed it from bash commands. The app definitely doesn't use this memory. It also doesn't write files.\r\n\r\nThe only solution is to change the memory to 512M.\r\n\r\nThere is definitely something wrong with `cloud run`.\r\n\r\nI even built a special app for testing this. It looks like when I cross some very small threshold of code / memory / app size, seemingly at random, the memory needed grows by hundreds. Nothing makes sense here. 
Especially, it works everywhere except Cloud Run.\r\n\r\nPlease let me know if you discovered something more.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 576582604, "label": "datasette publish cloudrun --memory option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-647803394", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 647803394, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzgwMzM5NA==", "user": {"value": 6289012, "label": "ChristopherWilks"}, "created_at": "2020-06-22T22:36:34Z", "updated_at": "2020-06-22T22:36:34Z", "author_association": "NONE", "body": "I also am seeing the same issue with an Apache setup (same even w/o `ProxyPassReverse`, though I typically use it as @tsibley stated).\r\n\r\nBut also want to say thanks for a great tool (this issue notwithstanding)!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645515103", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47", "id": 645515103, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NTUxNTEwMw==", "user": {"value": 73579, "label": "hpk42"}, "created_at": "2020-06-17T17:30:01Z", "updated_at": "2020-06-17T17:30:01Z", "author_association": "NONE", "body": "It's the one with python3.7::\n\n    >>> sqlite3.sqlite_version\n    '3.11.0'\n\n \nOn Wed, Jun 17, 2020 at 10:24 -0700, Simon Willison wrote:\n\n> That means your version of SQLite is old enough that it doesn't support the FTS5 extension.\n> \n> Could you share what operating system you're running, and what the output is that you get from running this?\n> \n> python -c 'import sqlite3; print(sqlite3.connect(\":memory:\").execute(\"select sqlite_version()\").fetchone()[0])'\n> \n> I can teach this tool to fall back on FTS4 if FTS5 isn't available.\n> \n> -- \n> You are receiving this because you authored the thread.\n> Reply to this email directly or view it on GitHub:\n> https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645512127\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 639542974, "label": "Fall back to FTS4 if FTS5 is not available"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-643083451", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 643083451, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MzA4MzQ1MQ==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-06-12T06:04:14Z", "updated_at": "2020-06-12T06:04:14Z", "author_association": "NONE", "body": "Hmm, I haven't tried removing `ProxyPassReverse`, but it doesn't touch the HTML, which is the issue I'm seeing. You can read the [documentation here](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassreverse). `ProxyPassReverse` is a standard directive when proxying with Apache. 
I've used it dozens of times with other applications.\r\n\r\nLooking a little more at the code, I think the issue here is that the behaviour of `base_url` makes sense when Datasette is _mounted_ at a path within a larger application, but not when HTTP requests are being _proxied_ to it.\r\n\r\nIn a _mount_ situation, it is perfectly fine to construct URLs reusing the domain and path from the request. In a _proxy_ situation, it never is, as the domain and path in the request are not the domain and path that the non-proxy client actually needs to use. That is, links which include the Apache \u2192 Datasette request origin, `localhost:8001`, instead of the browser \u2192 Apache request origin, `example.com`, will be broken.\r\n\r\nThe tests you pointed to also reflect this in two ways:\r\n\r\n1. They strip a leading `http://localhost`, allowing such URLs in the facet links to pass, but inclusion of that in a proxy situation would mean the URL is broken.\r\n\r\n2. The test client emits direct ASGI events instead of actual proxied HTTP requests. The headers of these ASGI events don't reflect the way an HTTP proxy works; instead they pass through the original request path which contains `base_url`. This works because Datasette responds to requests equivalently at either `/\u2026` or `/{base_url}/\u2026`, which makes some sense in a _mount_ situation but is unconventional (albeit workable) for a proxied app.\r\n\r\nApps that support being proxied automatically support being mounted, but apps that only support being mounted don't automatically support being proxied.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-642522285", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 642522285, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MjUyMjI4NQ==", "user": {"value": 58298410, "label": "LVerneyPEReN"}, "created_at": "2020-06-11T09:15:19Z", "updated_at": "2020-06-11T09:15:19Z", "author_association": "NONE", "body": "Hi @wragge,\r\n\r\nThis looks great, thanks for the share! I refactored it into a self-contained function, binding on a random available TCP port (multi-user context). 
I am using the subprocess API directly since the `%run` magic was leaving defunct processes behind :/\r\n\r\n![image](https://user-images.githubusercontent.com/58298410/84367566-b5d0d500-abd4-11ea-96e2-f5c05a28e506.png)\r\n\r\n```python\r\nimport socket\r\n\r\nfrom signal import SIGINT\r\nfrom subprocess import Popen, PIPE\r\n\r\nfrom IPython.display import display, HTML\r\nfrom notebook.notebookapp import list_running_servers\r\n\r\n\r\ndef get_free_tcp_port():\r\n \"\"\"\r\n Get a free TCP port.\r\n \"\"\"\r\n tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n tcp.bind(('', 0))\r\n _, port = tcp.getsockname()\r\n tcp.close()\r\n return port\r\n\r\n\r\ndef datasette(database):\r\n \"\"\"\r\n Run datasette on an SQLite database.\r\n \"\"\"\r\n # Get current running servers\r\n servers = list_running_servers()\r\n\r\n # Get the current base url\r\n base_url = next(servers)['base_url']\r\n\r\n # Get a free port\r\n port = get_free_tcp_port()\r\n\r\n # Create a base url for Datasette using the proxy path\r\n proxy_url = f'{base_url}proxy/absolute/{port}/'\r\n\r\n # Display a link to Datasette\r\n display(HTML(f'
<a href=\"{proxy_url}\">View Datasette</a> (Click on the stop button to close the Datasette server)
'))\r\n\r\n # Launch Datasette\r\n with Popen(\r\n [\r\n 'python', '-m', 'datasette', '--',\r\n database,\r\n '--port', str(port),\r\n '--config', f'base_url:{proxy_url}'\r\n ],\r\n stdout=PIPE,\r\n stderr=PIPE,\r\n bufsize=1,\r\n universal_newlines=True\r\n ) as p:\r\n print(p.stdout.readline(), end='')\r\n while True:\r\n try:\r\n line = p.stderr.readline()\r\n if not line:\r\n break\r\n print(line, end='')\r\n exit_code = p.poll()\r\n except KeyboardInterrupt:\r\n p.send_signal(SIGINT)\r\n```\r\n\r\nIdeally, I'd like some extra magic to notify users when they are closing the notebook tab and make them terminate the running datasette processes. I'll be looking for it.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-641889565", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 641889565, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MTg4OTU2NQ==", "user": {"value": 58298410, "label": "LVerneyPEReN"}, "created_at": "2020-06-10T09:49:34Z", "updated_at": "2020-06-10T09:49:34Z", "author_association": "NONE", "body": "Hi,\r\n\r\nI came across this issue while looking for a way to spawn Datasette as a SQLite files viewer in JupyterLab. I found https://github.com/simonw/jupyterserverproxy-datasette-demo which seems to be the most up to date proof of concept, but it seems to be failing to list the available db (at least in the Binder demo, https://hub.gke.mybinder.org/user/simonw-jupyters--datasette-demo-uw4dmlnn/datasette/, I only have `:memory`).\r\n\r\nHas anyone tried to improve on this proof of concept to have a Datasette visualization for SQLite files?\r\n\r\nThanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/777#issuecomment-635513983", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/777", "id": 635513983, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNTUxMzk4Mw==", "user": {"value": 63653929, "label": "thisismyfuckingusername"}, "created_at": "2020-05-28T18:16:49Z", "updated_at": "2020-05-28T18:16:49Z", "author_association": "NONE", "body": "I think it's because the given URL of the CSS file doesn't have any complete parameters after the query.\r\nTry to complete the parameter \r\n``", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 626171242, "label": "Error pages not correctly loading CSS"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-635386935", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 635386935, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNTM4NjkzNQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-28T14:32:53Z", "updated_at": "2020-05-28T14:32:53Z", "author_association": "NONE", "body": "Wow, I'm in some way very proud!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": 
{"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/758#issuecomment-635195322", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/758", "id": 635195322, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNTE5NTMyMg==", "user": {"value": 2181410, "label": "clausjuhl"}, "created_at": "2020-05-28T08:23:27Z", "updated_at": "2020-05-28T08:23:27Z", "author_association": "NONE", "body": "@simonw I would prefer just the 7 character hash. No need to make the urls any longer than they need to be :)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612382643, "label": "Question: Access to immutable database-path"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-634446887", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 634446887, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNDQ0Njg4Nw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-27T06:01:28Z", "updated_at": "2020-05-27T06:01:28Z", "author_association": "NONE", "body": "Dear @simonw thank you for your time, now IT WORKS!!!\r\n\r\nI hope that this edit to datasette code is not for an exceptional case (my PC configuration) and that it will be useful to other users. \r\n\r\nThank you again!!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-634283355", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 634283355, "node_id": "MDEyOklzc3VlQ29tbWVudDYzNDI4MzM1NQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-26T21:15:34Z", "updated_at": "2020-05-26T21:15:34Z", "author_association": "NONE", "body": "> Oh no! It looks like `dirs_exist_ok` is Python 3.8 only. This is a bad fix, it needs to work on older Python's too. 
Re-opening.\r\n\r\nThank you very much", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/20#issuecomment-633234781", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/20", "id": 633234781, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMzIzNDc4MQ==", "user": {"value": 41439, "label": "dmd"}, "created_at": "2020-05-24T13:56:13Z", "updated_at": "2020-05-24T13:56:13Z", "author_association": "NONE", "body": "As that seems to be closed, can you give a hint on how to make this work?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 613006393, "label": "Ability to serve thumbnailed Apple Photo from its place on disk"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-632305868", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 632305868, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjMwNTg2OA==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-21T19:43:23Z", "updated_at": "2020-05-21T19:43:23Z", "author_association": "NONE", "body": "@simonw now I have\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/aborruso/.local/bin/datasette\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 103, in heroku\r\n extra_metadata,\r\n File \"/usr/lib/python3.7/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 191, in temporary_heroku_directory\r\n os.path.join(tmp.name, \"templates\"),\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py\", line 605, in link_or_copy_directory\r\n shutil.copytree(src, dst, copy_function=os.link, dirs_exist_ok=True)\r\nTypeError: copytree() got an unexpected keyword argument 'dirs_exist_ok'\r\n```\r\n\r\nDo I need to open a new issue?\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/744#issuecomment-632255088", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 632255088, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjI1NTA4OA==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-21T17:58:51Z", "updated_at": "2020-05-21T17:58:51Z", "author_association": "NONE", "body": "Thank you very much!!\r\n\r\nI will try and I write you here", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-632249565", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 632249565, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjI0OTU2NQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-21T17:47:40Z", "updated_at": "2020-05-21T17:47:40Z", "author_association": "NONE", "body": "@simonw can I test it know? What I must do to update it?\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/699#issuecomment-626991001", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/699", "id": 626991001, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjk5MTAwMQ==", "user": {"value": 8431341, "label": "zeluspudding"}, "created_at": "2020-05-11T22:06:34Z", "updated_at": "2020-05-11T22:06:34Z", "author_association": "NONE", "body": "Very nice! Thank you for sharing that :+1: :) Will try it out!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 582526961, "label": "Authentication (and permissions) as a core concept"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/699#issuecomment-626807487", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/699", "id": 626807487, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjgwNzQ4Nw==", "user": {"value": 8431341, "label": "zeluspudding"}, "created_at": "2020-05-11T16:23:57Z", "updated_at": "2020-05-11T16:24:59Z", "author_association": "NONE", "body": "`Authorization: bearer xxx` auth for API keys is a plus plus for me. Looked into just adding this into your `Flask` logic but learned this project doesn't use flask. Interesting \ud83e\udd14", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 582526961, "label": "Authentication (and permissions) as a core concept"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/254#issuecomment-626340387", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/254", "id": 626340387, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM0MDM4Nw==", "user": {"value": 247131, "label": "philroche"}, "created_at": "2020-05-10T14:54:13Z", "updated_at": "2020-05-10T14:54:13Z", "author_association": "NONE", "body": "This has now been resolved and is not present in current version of datasette. 
\r\n\r\nThe sample query @simonw mentioned now returns as expected. \r\n\r\nhttps://aggreg8streams.tinyviking.ie/simplestreams?sql=select+*+from+cloudimage+where+%22content_id%22+%3D+%22com.ubuntu.cloud%3Areleased%3Adownload%22+order+by+id+limit+10", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 322283067, "label": "Escaping named parameters in canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/619#issuecomment-626006493", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/619", "id": 626006493, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjAwNjQ5Mw==", "user": {"value": 412005, "label": "davidszotten"}, "created_at": "2020-05-08T20:29:12Z", "updated_at": "2020-05-08T20:29:12Z", "author_association": "NONE", "body": "just trying out datasette and quite like it, thanks! i found this issue annoying enough to have a go at a fix. have you any thoughts on a good approach? (i'm happy to dig in myself if you haven't thought about it yet, but wanted to check if you had an idea for how to fix when you raised the issue)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 520655983, "label": "\"Invalid SQL\" page should let you edit the SQL"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/648#issuecomment-625321121", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/648", "id": 625321121, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTMyMTEyMQ==", "user": {"value": 28694175, "label": "chekos"}, "created_at": "2020-05-07T15:21:19Z", "updated_at": "2020-05-07T15:21:19Z", "author_association": "NONE", "body": "It seems that heroku wasn't updating to 0.41 on deployment. \r\n\r\nHad to add `--branch 0.41` and that solved it! Heroku caches dependencies\r\n\r\nand (i think) because the `requirements.txt` doesn't specify the datasette version, it didn't update from 0.40 to 0.41 on heroku even though it was specified on my local requirements file as `datasette >= 0.41`\r\n\r\nThese are the lines that gave me an idea on how to solve it:\r\nhttps://github.com/simonw/datasette/blob/182e5c8745c94576718315f7596ccc81e5e2417b/datasette/publish/heroku.py#L164-L186", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 534492501, "label": "Mechanism for adding arbitrary pages like /about"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/648#issuecomment-625286519", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/648", "id": 625286519, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTI4NjUxOQ==", "user": {"value": 28694175, "label": "chekos"}, "created_at": "2020-05-07T14:23:22Z", "updated_at": "2020-05-07T14:28:33Z", "author_association": "NONE", "body": "Hi! I'm using datasette on this repository: https://github.com/chekos/RIPA-2018-datasette\r\n\r\nand on my local machine i can see an /about page i created but when i deploy to heroku i get a 404 (http://ripa-2018-db.herokuapp.com)\r\n\r\nI bumped datasette in my requirements file to 0.41 so I'm not 100% sure what the issue is \ud83e\udd14 \r\n\r\nDo you have any idea what could be the problem? 
\ud83d\udc40 \r\n\r\nEDIT: for context, I have a templates directory with a pages/about.html file in https://github.com/chekos/RIPA-2018-datasette/tree/master/datasette/templates", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 534492501, "label": "Mechanism for adding arbitrary pages like /about"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625091976", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625091976, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTA5MTk3Ng==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-07T07:51:25Z", "updated_at": "2020-05-07T07:51:25Z", "author_association": "NONE", "body": "I have installed `heroku plugins:install heroku-builds`, but I have the same error.\r\n\r\nThen I have removed from `datasette\\publish\\heroku.py`\r\n\r\n```python\r\n # Check for heroku-builds plugin\r\n plugins = [\r\n line.split()[0] for line in check_output([\"heroku\", \"plugins\"]).splitlines()\r\n ]\r\n if b\"heroku-builds\" not in plugins:\r\n click.echo(\r\n \"Publishing to Heroku requires the heroku-builds plugin to be installed.\"\r\n )\r\n click.confirm(\r\n \"Install it? (this will run `heroku plugins:install heroku-builds`)\",\r\n abort=True,\r\n )\r\n call([\"heroku\", \"plugins:install\", \"heroku-builds\"])\r\n```\r\n\r\nAnd now I have\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 210, in temporary_heroku_directory\r\n yield\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 96, in heroku\r\n list_output = check_output([\"heroku\", \"apps:list\", \"--json\"]).decode(\r\n File \"c:\\python37\\lib\\subprocess.py\", line 395, in check_output\r\n **kwargs).stdout\r\n File \"c:\\python37\\lib\\subprocess.py\", line 472, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"c:\\python37\\lib\\subprocess.py\", line 775, in __init__\r\n restore_signals, start_new_session)\r\n File \"c:\\python37\\lib\\subprocess.py\", line 1178, in _execute_child\r\n startupinfo)\r\nFileNotFoundError: [WinError 2] The specified file could not be found\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\python37\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\python37\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\Scripts\\datasette.exe\\__main__.py\", line 9, in \r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File 
\"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 120, in heroku\r\n call([\"heroku\", \"builds:create\", \"-a\", app_name, \"--include-vcs-ignore\"])\r\n File \"c:\\python37\\lib\\contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 213, in temporary_heroku_directory\r\n tmp.cleanup()\r\n File \"c:\\python37\\lib\\tempfile.py\", line 809, in cleanup\r\n _shutil.rmtree(self.name)\r\n File \"c:\\python37\\lib\\shutil.py\", line 513, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\python37\\lib\\shutil.py\", line 401, in _rmtree_unsafe\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"c:\\python37\\lib\\shutil.py\", line 399, in _rmtree_unsafe\r\n os.rmdir(path)\r\nPermissionError: [WinError 32] Unable to access file. The file is being used by another process: 'C:\\\\Users\\\\aborr\\\\AppData\\\\Local\\\\Temp\\\\tmpkcxy8i_q'\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625083715", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625083715, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTA4MzcxNQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-07T07:34:18Z", "updated_at": "2020-05-07T07:34:18Z", "author_association": "NONE", "body": "In Windows I'm not very strong. I use debian (inside WSL).\r\n\r\nHowever these are the possible steps:\r\n\r\n- I have installed Python 3 for win (I have 3.7.3);\r\n- I have installed heroku cli for win64 and logged in;\r\n- I have installed datasette running `python -m pip install --upgrade --user datasette`.\r\n\r\nIt's a very basic Python env that I do not use. 
This time only to reach my goal: try to publish using custom template", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625066073", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625066073, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNTA2NjA3Mw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-07T06:53:09Z", "updated_at": "2020-05-07T06:53:09Z", "author_association": "NONE", "body": "@simonw another error starting from Windows.\r\n\r\nI run\r\n\r\n```\r\ndatasette publish heroku -n comunepa --template-dir template commissioniComunePalermo.db\r\n```\r\n\r\nAnd I have\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"c:\\python37\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\python37\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\Scripts\\datasette.exe\\__main__.py\", line 9, in \r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\click\\core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"C:\\Users\\aborr\\AppData\\Roaming\\Python\\Python37\\site-packages\\datasette\\publish\\heroku.py\", line 53, in heroku\r\n line.split()[0] for line in check_output([\"heroku\", \"plugins\"]).splitlines()\r\n File \"c:\\python37\\lib\\subprocess.py\", line 395, in check_output\r\n **kwargs).stdout\r\n File \"c:\\python37\\lib\\subprocess.py\", line 472, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"c:\\python37\\lib\\subprocess.py\", line 775, in __init__\r\n restore_signals, start_new_session)\r\n File \"c:\\python37\\lib\\subprocess.py\", line 1178, in _execute_child\r\n startupinfo)\r\nFileNotFoundError: [WinError 2] The specified file could not be found\r\n```\r\n\r\n\r\n[files.zip](https://github.com/simonw/datasette/files/4591173/files.zip)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-625060561", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 625060561, "node_id": 
"MDEyOklzc3VlQ29tbWVudDYyNTA2MDU2MQ==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-05-07T06:38:24Z", "updated_at": "2020-05-07T06:38:24Z", "author_association": "NONE", "body": "Hi @simonw probably I could try to do it in Python for windows. I do not like to do these things in win enviroment.\r\n\r\nBecause probably WSL Linux env (in which I do a lot of great things) is not an environment that will be tested for datasette.\r\n\r\nIn win I shouldn't have any problems. Am I right?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/759#issuecomment-624860451", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/759", "id": 624860451, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNDg2MDQ1MQ==", "user": {"value": 133845, "label": "Krazybug"}, "created_at": "2020-05-06T20:03:01Z", "updated_at": "2020-05-06T20:04:42Z", "author_association": "NONE", "body": "Thank you. Now it's ok with the url\r\n\r\nhttp://localhost:8001/index/summary?_search=language%3Aeng&_sort=title&_searchmode=raw\r\n\r\nBut I'm not able to manage it in the metadata file. Here is mine (note that the sort column is taken into account)\r\nHere it is:\r\n\r\n```\r\n{\r\n \"databases\": {\r\n \"index\": {\r\n \"tables\": {\r\n \"summary\": {\r\n \"sort\": \"title\",\r\n \"searchmode\": \"raw\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n\r\n```\r\nAny idea ?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612673948, "label": "fts search on a column doesn't work anymore due to escape_fts"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/725#issuecomment-623623696", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/725", "id": 623623696, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzYyMzY5Ng==", "user": {"value": 4312421, "label": "stonebig"}, "created_at": "2020-05-04T18:16:54Z", "updated_at": "2020-05-04T18:16:54Z", "author_association": "NONE", "body": "thanks a lot, Simon ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 598891570, "label": "Update aiofiles requirement from ~=0.4.0 to >=0.4,<0.6"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/38#issuecomment-623044643", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38", "id": 623044643, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzA0NDY0Mw==", "user": {"value": 5779832, "label": "zzeleznick"}, "created_at": "2020-05-03T02:34:32Z", "updated_at": "2020-05-03T02:34:32Z", "author_association": "NONE", "body": "1. More than glad to share feedback from the sidelines as a [starrer](https://github-to-sqlite.dogsheep.net/github?sql=select%0D%0A++starred_at%2C%0D%0A++starred_by%2C%0D%0A++full_name+as+repo_name%0D%0Afrom%0D%0A++repos_starred%0D%0Awhere%0D%0A++starred_by+%3D+%22zzeleznick%22%0D%0Aorder+by%0D%0A++starred_at+desc). \r\n\r\n```\r\n-- Motivation:\r\n-- Datasette is a data hammer and I'm looking for nails\r\n-- e.g. 
Find which repos a user has starred => trigger a TBD downstream action\r\nselect\r\n starred_at,\r\n starred_by,\r\n full_name as repo_name\r\nfrom\r\n repos_starred\r\nwhere\r\n starred_by = \"zzeleznick\"\r\norder by\r\n starred_at desc\r\n``` \r\n\r\n| starred_at | starred_by | repo_name |\r\n| --- | --- | --- |\r\n| 2020-02-11T01:08:59Z | zzeleznick | dogsheep/twitter-to-sqlite |\r\n| 2020-01-11T21:57:34Z | zzeleznick | simonw/datasette |\r\n\r\n2. In my day job, I use [airflow](https://github.com/apache/airflow), and that's the mental model I'm bringing to [datasette](https://github.com/simonw/datasette). \r\n\r\n3. I see your projects, like [twitter-to-sqlite](https://github.com/dogsheep/twitter-to-sqlite), as akin to [Operators](https://airflow.apache.org/docs/stable/_api/index.html#pythonapi-operators) in the Airflow world.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 611284481, "label": "[Feature Request] Support Repo Name in Search \ud83e\udd7a"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/38#issuecomment-623038148", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38", "id": 623038148, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzAzODE0OA==", "user": {"value": 5779832, "label": "zzeleznick"}, "created_at": "2020-05-03T01:18:57Z", "updated_at": "2020-05-03T01:18:57Z", "author_association": "NONE", "body": "Thanks, @simonw! \r\n\r\nI feel a little foolish in hindsight, but I'm on the same page now and am glad to have discovered first-hand a motivation for this `repos_starred` use case.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 611284481, "label": "[Feature Request] Support Repo Name in Search \ud83e\udd7a"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622279374", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33", "id": 622279374, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMjI3OTM3NA==", "user": {"value": 2029, "label": "garethr"}, "created_at": "2020-05-01T07:12:47Z", "updated_at": "2020-05-01T07:12:47Z", "author_association": "NONE", "body": "I also got it working with:\r\n\r\n```yaml\r\nrun: echo ${{ secrets.github_token }} | github-to-sqlite auth\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 609950090, "label": "Fall back to authentication via ENV"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-621030783", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 621030783, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMTAzMDc4Mw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-29T07:16:27Z", "updated_at": "2020-04-29T07:16:27Z", "author_association": "NONE", "body": "Hi @simonw, it's debian under Windows Subsystem for Linux \r\n\r\n```\r\nPRETTY_NAME=\"Pengwin\"\r\nNAME=\"Pengwin\"\r\nVERSION_ID=\"10\"\r\nVERSION=\"10 
(buster)\"\r\nID=debian\r\nID_LIKE=debian\r\nHOME_URL=\"https://github.com/whitewaterfoundry/Pengwin\"\r\nSUPPORT_URL=\"https://github.com/whitewaterfoundry/Pengwin\"\r\nBUG_REPORT_URL=\"https://github.com/whitewaterfoundry/Pengwin\"\r\nVERSION_CODENAME=buster\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-621011554", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 621011554, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMTAxMTU1NA==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-29T06:17:26Z", "updated_at": "2020-04-29T06:17:26Z", "author_association": "NONE", "body": "A stupid note: I have no `tmpcqv_1i5d` folder in `/tmp`.\r\n\r\nIt seems to me that it does not create any `/tmp/tmpcqv_1i5d/templates` folder (or any other folder name) inside /tmp", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/744#issuecomment-621008152", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/744", "id": 621008152, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMTAwODE1Mg==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-29T06:05:02Z", "updated_at": "2020-04-29T06:05:02Z", "author_association": "NONE", "body": "Hi @simonw, I have installed it and I get the errors below.\r\n\r\n> Is it possible that your /tmp directory is on a different volume from the template folder? That could cause a problem with the symlinks.\r\n\r\nNo, the /tmp folder is on the same volume. 
\r\n\r\nThank you\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py\", line 607, in link_or_copy_directory\r\n shutil.copytree(src, dst, copy_function=os.link)\r\n File \"/usr/lib/python3.7/shutil.py\", line 365, in copytree\r\n raise Error(errors)\r\nshutil.Error: [('/var/youtubeComunePalermo/processing/./template/base.html', '/tmp/tmpcqv_1i5d/templates/base.html', \"[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/base.html' -> '/tmp/tmpcqv_1i5d/templates/base.html'\"), ('/var/youtubeComunePalermo/processing/./template/index.html', '/tmp/tmpcqv_1i5d/templates/index.html', \"[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/index.html' -> '/tmp/tmpcqv_1i5d/templates/index.html'\")]\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/aborruso/.local/bin/datasette\", line 8, in \r\n sys.exit(cli())\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/click/core.py\", line 610, in invoke return callback(*args, **kwargs)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 103, in heroku\r\n extra_metadata,\r\n File \"/usr/lib/python3.7/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py\", line 191, in temporary_heroku_directory\r\n os.path.join(tmp.name, \"templates\"),\r\n File \"/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py\", line 609, in link_or_copy_directory\r\n shutil.copytree(src, dst)\r\n File \"/usr/lib/python3.7/shutil.py\", line 321, in copytree\r\n os.makedirs(dst)\r\n File \"/usr/lib/python3.7/os.py\", line 221, in makedirs\r\n mkdir(name, mode)\r\nFileExistsError: [Errno 17] File exists: '/tmp/tmpcqv_1i5d/templates'\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 608058890, "label": "link_or_copy_directory() error - Invalid cross-device link"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/633#issuecomment-620841496", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/633", "id": 620841496, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMDg0MTQ5Ng==", "user": {"value": 46165, "label": "nryberg"}, "created_at": "2020-04-28T20:37:50Z", "updated_at": "2020-04-28T20:37:50Z", "author_association": "NONE", "body": "Using the Heroku web interface, you can set the WEB_CONCURRENCY = 
1\r\n\r\n![image](https://user-images.githubusercontent.com/46165/80535319-352c8100-8966-11ea-9d4f-df2622ec8bff.png)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 522334771, "label": "Publish to Heroku is broken: \"WARNING: You must pass the application as an import string to enable 'reload' or 'workers\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/735#issuecomment-620401443", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/735", "id": 620401443, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMDQwMTQ0Mw==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-28T06:10:20Z", "updated_at": "2020-04-28T06:10:20Z", "author_association": "NONE", "body": "It works in heroku, than might be a bug with datasette-publish-now.\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 605806386, "label": "Error when I click on \"View and edit SQL\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/736#issuecomment-620401172", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/736", "id": 620401172, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMDQwMTE3Mg==", "user": {"value": 30607, "label": "aborruso"}, "created_at": "2020-04-28T06:09:28Z", "updated_at": "2020-04-28T06:09:28Z", "author_association": "NONE", "body": "> Would you mind trying publishing your database using one of the other options - Heroku, Cloud Run or https://fly.io/ - and see if you have the same bug there?\r\n\r\nIt works in heroku, than might be a bug with datasette-publish-now.\r\n\r\nThank you", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 606720674, "label": "strange behavior using accented characters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/725#issuecomment-619489720", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/725", "id": 619489720, "node_id": "MDEyOklzc3VlQ29tbWVudDYxOTQ4OTcyMA==", "user": {"value": 4312421, "label": "stonebig"}, "created_at": "2020-04-26T06:09:59Z", "updated_at": "2020-04-26T06:10:13Z", "author_association": "NONE", "body": "as a complementary remark: the versioning of datasette dependancies will become a problem when the new pip \"dependancy resolver\" will be activated. 
for now, these are just warnings from pip checks; later it will be a hard \"no\":\r\n\r\n````\r\ndatasette 0.40 has requirement aiofiles~=0.4.0, but you have aiofiles 0.5.0.\r\ndatasette 0.40 has requirement Jinja2~=2.10.3, but you have jinja2 2.11.2.\r\n````", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 598891570, "label": "Update aiofiles requirement from ~=0.4.0 to >=0.4,<0.6"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-617208503", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 617208503, "node_id": "MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw==", "user": {"value": 12976, "label": "nkirsch"}, "created_at": "2020-04-21T14:16:24Z", "updated_at": "2020-04-21T14:16:24Z", "author_association": "NONE", "body": "@eads I'm interested in helping, if there's still a need...", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/76#issuecomment-614440032", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/76", "id": 614440032, "node_id": "MDEyOklzc3VlQ29tbWVudDYxNDQ0MDAzMg==", "user": {"value": 10501166, "label": "metab0t"}, "created_at": "2020-04-16T06:23:29Z", "updated_at": "2020-04-16T06:23:29Z", "author_association": "NONE", "body": "Thanks for your hard work!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 549287310, "label": "order_by mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/97#issuecomment-614073859", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/97", "id": 614073859, "node_id": "MDEyOklzc3VlQ29tbWVudDYxNDA3Mzg1OQ==", "user": {"value": 1448859, "label": "betatim"}, "created_at": "2020-04-15T14:29:30Z", "updated_at": "2020-04-15T14:29:30Z", "author_association": "NONE", "body": "Woah! Thanks a lot. 
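(On the `recreate` flag this thread is about: a rough sketch of the behaviour being discussed, assuming the flag simply deletes any existing file before connecting — the real sqlite-utils API may differ in detail.)

```python
import os
import sqlite3

class Database:
    """Sketch of a Database constructor supporting recreate=True."""

    def __init__(self, path, recreate=False):
        if recreate and os.path.exists(path):
            # Throw away any previous contents and start fresh.
            os.remove(path)
        self.conn = sqlite3.connect(path)

db = Database("test.db", recreate=True)  # always begins empty
```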
Next time I will add a more obvious/explicit \"if you like this idea let me know I'd love to work on it to get my feet wet here\" :D", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 593751293, "label": "Adding a \"recreate\" flag to the `Database` constructor"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/627#issuecomment-609393513", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/627", "id": 609393513, "node_id": "MDEyOklzc3VlQ29tbWVudDYwOTM5MzUxMw==", "user": {"value": 4312421, "label": "stonebig"}, "created_at": "2020-04-05T10:23:57Z", "updated_at": "2020-04-05T10:23:57Z", "author_association": "NONE", "body": "is there any specific reason to stick to Jinja2~=2.10.3 when there is Jinja-2.11.1 ?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 521323012, "label": "Support Python 3.8, stop supporting Python 3.5"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/15#issuecomment-605439685", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15", "id": 605439685, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNTQzOTY4NQ==", "user": {"value": 2029, "label": "garethr"}, "created_at": "2020-03-28T12:17:01Z", "updated_at": "2020-03-28T12:17:01Z", "author_association": "NONE", "body": "That looks great, thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 544571092, "label": "Assets table with downloads"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-603849245", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 603849245, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMzg0OTI0NQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-25T13:48:13Z", "updated_at": "2020-03-25T13:48:13Z", "author_association": "NONE", "body": "Great - thanks again.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-603539349", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 603539349, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMzUzOTM0OQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-24T22:33:23Z", "updated_at": "2020-03-24T22:33:23Z", "author_association": "NONE", "body": "Hi Simon - I'm just (trying, at least) to follow along in the above. I can't try it out now, but I will if no one else gets to it. Sorry I didn't write any tests in the original bit of code I pushed - I was just trying to see if it could work & whether you'd want to maybe head in that direction. Anyway, thank you, I will certainly use this. 
Comment back here if no one tried it out & I'll make time.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602916580", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602916580, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkxNjU4MA==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-23T23:37:06Z", "updated_at": "2020-03-23T23:37:06Z", "author_association": "NONE", "body": "@simonw You're welcome - I was just trying it out back in December as I thought it should work. Now there's a pandemic to work on though.... so no time at all for more at the moment. BTW, I have datasette running on several protein and full (virus) genome databases I build, and it's great - thank you! Hi and best regards to you & Nat :-)", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 1, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602911133", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602911133, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkxMTEzMw==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-23T23:22:10Z", "updated_at": "2020-03-23T23:22:10Z", "author_association": "NONE", "body": "I just updated #652 to remove a merge conflict. I think it's an easy way to add this functionality. I don't have time to do more though, sorry!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602904184", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602904184, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkwNDE4NA==", "user": {"value": 1448859, "label": "betatim"}, "created_at": "2020-03-23T23:03:42Z", "updated_at": "2020-03-23T23:03:42Z", "author_association": "NONE", "body": "On mybinder.org we allow access to arbitrary processes listening on a port inside the container via a [reverse proxy](https://github.com/jupyterhub/jupyter-server-proxy).\r\n\r\nThis means we need support for a proxy prefix as the proxy ends up running at a URL like `/something/random/proxy/datasette/...`\r\n\r\nAn example that shows the problem is https://github.com/psychemedia/jupyterserverproxy-datasette-demo. 
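As a generic illustration of the proxy-prefix problem (a sketch only — not how datasette's `base_url` was actually implemented): an ASGI wrapper can strip a mount prefix from incoming paths, but the app still has to prepend that prefix to every link it generates, which is exactly what a `base_url` setting would do. The `datasette_asgi_app` name below is illustrative.

```python
class PrefixStripper:
    """Sketch: ASGI middleware that removes a mount prefix (e.g. the
    /.../proxy/datasette prefix added by a reverse proxy) from
    incoming request paths before calling the wrapped app."""

    def __init__(self, app, prefix):
        self.app = app
        self.prefix = prefix.rstrip("/")

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http" and scope["path"].startswith(self.prefix):
            scope = dict(scope, path=scope["path"][len(self.prefix):] or "/")
        await self.app(scope, receive, send)

# app = PrefixStripper(datasette_asgi_app, "/user/xyz/proxy/datasette")
```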
Launch directly into a datasette instance on mybinder.org with https://mybinder.org/v2/gh/psychemedia/jupyterserverproxy-datasette-demo/master?urlpath=datasette then try to follow links inside the UI.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-602136481", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16", "id": 602136481, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjEzNjQ4MQ==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-03-22T02:08:57Z", "updated_at": "2020-03-22T02:08:57Z", "author_association": "NONE", "body": "I'd love to be using your library as a better cached gh layer for a new library I have built, replacing large parts of the very ugly https://github.com/jayvdb/pypidb/blob/master/pypidb/_github.py , and then probably being able to rebuild the setuppy chunk as a feature here at a later stage.\r\n\r\nI would also need tokenless and netrc support, but I would be happy to add those bits.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546051181, "label": "Exception running first command: IndexError: list index out of range"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/89#issuecomment-593122605", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/89", "id": 593122605, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MzEyMjYwNQ==", "user": {"value": 35075, "label": "chrishas35"}, "created_at": "2020-03-01T17:33:11Z", "updated_at": "2020-03-01T17:33:11Z", "author_association": "NONE", "body": "If you're happy with the proposed implementation, I have code & tests written that I'll get ready for a PR.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 573578548, "label": "Ability to customize columns used by extracts= feature"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/46", "id": 592999503, "node_id": "MDEyOklzc3VlQ29tbWVudDU5Mjk5OTUwMw==", "user": {"value": 35075, "label": "chrishas35"}, "created_at": "2020-02-29T22:08:20Z", "updated_at": "2020-02-29T22:08:20Z", "author_association": "NONE", "body": "@simonw any thoughts on allow extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the \"value\" column (the documentation uses name), but when you use `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named \"value\". I have an existing lookup table that I've populated with columns \"id\" and \"name\" as opposed to \"id\" and \"value\", and seems I can't use `extracts=`, unless I'm missing something...\r\n\r\nInitial thought on how to do this would be to allow the dictionary value to be a tuple of table name column pair... 
so:\r\n```\r\ntable = db.table(\"trees\", extracts={\"species_id\": (\"Species\", \"name\")})\r\n```\r\n\r\nI haven't dug too much into the existing code yet, but does this make sense? Worth doing?\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 471780443, "label": "extracts= option for insert/update/etc"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/675#issuecomment-590593247", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/675", "id": 590593247, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDU5MzI0Nw==", "user": {"value": 141844, "label": "aviflax"}, "created_at": "2020-02-24T23:02:52Z", "updated_at": "2020-02-24T23:02:52Z", "author_association": "NONE", "body": "> Design looks great to me.\r\n\r\nExcellent, thanks!\r\n\r\n> I'm not keen on two letter short versions (`-cp`) - I'd rather either have a single character or no short form at all.\r\n\r\nHmm, well, anyone running `datasette package` is probably at least somewhat familiar with UNIX CLIs\u2026 so how about `--cp` as a middle ground?\r\n\r\n```shell\r\n$ datasette package --cp /the/source/path /the/target/path data.db\r\n```\r\n\r\nI think I like it. Easy to remember!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 567902704, "label": "--cp option for datasette publish and datasette package for shipping additional files and directories"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/681#issuecomment-590543398", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/681", "id": 590543398, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDU0MzM5OA==", "user": {"value": 2181410, "label": "clausjuhl"}, "created_at": "2020-02-24T20:53:56Z", "updated_at": "2020-02-24T20:53:56Z", "author_association": "NONE", "body": "Excellent. I'll implement the simple plugin solution now. And will have a go at a more mature plugin later. Thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 569317377, "label": "Cashe-header missing in http-response"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/675#issuecomment-590405736", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/675", "id": 590405736, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDQwNTczNg==", "user": {"value": 141844, "label": "aviflax"}, "created_at": "2020-02-24T16:06:27Z", "updated_at": "2020-02-24T16:06:27Z", "author_association": "NONE", "body": "> So yeah - if you're happy to design this I think it would be worth us adding.\r\n\r\nGreat! I\u2019ll give it a go.\r\n\r\n\r\n\r\n> Small design suggestion: allow `--copy` to be applied multiple times\u2026\r\n\r\nMakes a ton of sense, will do.\r\n\r\n> Also since Click arguments can take multiple options I don't think you need to have the `:` in there - although if it better matches Docker's own UI it might be more consistent to have it.\r\n\r\nGreat point. 
I double checked the docs for `docker cp` and in that context the colon is used to delimit a container and a path, while spaces are used to separate the source and target.\r\n\r\nThe usage string is:\r\n\r\n```text\r\ndocker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-\r\ndocker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH\r\n```\r\n\r\nso in fact it\u2019ll be more consistent to use a space to delimit the source and destination paths, like so:\r\n\r\n```shell\r\n$ datasette package --copy /the/source/path /the/target/path data.db\r\n```\r\n\r\nand I suppose the short-form version of the option should be `cp` like so:\r\n\r\n```shell\r\n$ datasette package -cp /the/source/path /the/target/path data.db\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 567902704, "label": "--cp option for datasette publish and datasette package for shipping additional files and directories"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/676#issuecomment-590209074", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/676", "id": 590209074, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDIwOTA3NA==", "user": {"value": 58088336, "label": "tunguyenatwork"}, "created_at": "2020-02-24T08:20:15Z", "updated_at": "2020-02-24T08:20:15Z", "author_association": "NONE", "body": "Awesome, thank you so much. I\u2019ll try it out and let you know.\n\nOn Sun, Feb 23, 2020 at 1:44 PM Simon Willison \nwrote:\n\n> You can try this right now like so:\n>\n> pip install https://github.com/simonw/datasette/archive/search-raw.zip\n>\n> Then use the following:\n>\n> ?_search=foo*&_searchmode=raw`\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 568091133, "label": "?_searchmode=raw option for running FTS searches without escaping characters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/676#issuecomment-589922016", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/676", "id": 589922016, "node_id": "MDEyOklzc3VlQ29tbWVudDU4OTkyMjAxNg==", "user": {"value": 58088336, "label": "tunguyenatwork"}, "created_at": "2020-02-22T05:50:10Z", "updated_at": "2020-02-22T05:50:10Z", "author_association": "NONE", "body": "Thanks Simon,\r\nMy use case is using Datasette for full text search type ahead. That was working pretty well. The _search_wildcard= option will be awesome. 
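For readers wondering what raw mode enables here: `foo*` is FTS prefix-query syntax, which is what makes type-ahead fast. A self-contained sketch (illustrative table and column names, assuming an SQLite build with FTS5):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE VIRTUAL TABLE items_fts USING fts5(name);
    INSERT INTO items_fts (name) VALUES ('foobar'), ('foo fighters'), ('bar');
    """
)
# 'foo*' matches any token beginning with "foo". This is also why
# unescaped input can fail: characters like + or - are FTS query
# syntax, hence errors such as "fts5: syntax error".
rows = conn.execute(
    "SELECT name FROM items_fts WHERE items_fts MATCH ?", ["foo*"]
).fetchall()
print(rows)  # [('foobar',), ('foo fighters',)]
```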
Thanks\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 568091133, "label": "?_searchmode=raw option for running FTS searches without escaping characters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/86#issuecomment-586683572", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/86", "id": 586683572, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NjY4MzU3Mg==", "user": {"value": 8149512, "label": "foscoj"}, "created_at": "2020-02-16T09:03:54Z", "updated_at": "2020-02-16T09:03:54Z", "author_association": "NONE", "body": "Probably the best option is to just throw the error.\r\nIs there any active dev channel where we could post the issue to python sqlite3?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 564579430, "label": "Problem with square bracket in CSV column name"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/667#issuecomment-585285753", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/667", "id": 585285753, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NTI4NTc1Mw==", "user": {"value": 870184, "label": "xrotwang"}, "created_at": "2020-02-12T16:18:22Z", "updated_at": "2020-02-12T16:18:22Z", "author_association": "NONE", "body": "@simonw fwiw, here's the plugin I implemented to support CLDF datasets: https://github.com/cldf/datasette-cldf/blob/master/README.md\r\nIt's a bit of a hybrid in that it does both, building the SQLite database **and** extending datasette by exploiting what we know about the data format - so it may not be worth listing it with the other plugins.\r\n\r\nHaving tools like datasette available definitely helps selling people on package formats like CLDF (or CSVW), many thanks for this!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 562787785, "label": "Allow injecting configuration data from plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/667#issuecomment-585109972", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/667", "id": 585109972, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NTEwOTk3Mg==", "user": {"value": 870184, "label": "xrotwang"}, "created_at": "2020-02-12T09:21:22Z", "updated_at": "2020-02-12T09:21:22Z", "author_association": "NONE", "body": "I think I found a better way to implement my use case: I wrap the `datasette serve` call into my own cli, which\r\n- creates the SQLite from CSV data\r\n- writes `metadata.json` for datasette\r\n- determines suitable config like `max_page_size`\r\n- then calls `datasette serve`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 562787785, "label": "Allow injecting configuration data from plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/327#issuecomment-584657949", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/327", "id": 584657949, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NDY1Nzk0OQ==", "user": {"value": 1055831, "label": "dazzag24"}, "created_at": "2020-02-11T14:21:15Z", "updated_at": 
"2020-02-11T14:21:15Z", "author_association": "NONE", "body": "See https://github.com/simonw/datasette/issues/657 and my changes that allow datasette to load parquet files ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 335200136, "label": "Explore if SquashFS can be used to shrink size of packaged Docker containers"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/352#issuecomment-584203999", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/352", "id": 584203999, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NDIwMzk5OQ==", "user": {"value": 870184, "label": "xrotwang"}, "created_at": "2020-02-10T16:18:58Z", "updated_at": "2020-02-10T16:18:58Z", "author_association": "NONE", "body": "I don't want to re-open this issue, but I'm wondering whether it would be possible to include the full row for which a specific cell is to be rendered in the hook signature. My use case are rows where custom rendering would need access to multiple values (specifically, rows containing the constituents of interlinear glossed text (IGT) in separate columns, see https://github.com/cldf/cldf/tree/master/components/examples).\r\n\r\nI could probably cobble this together with custom SQL and the sql-to-html plugin. But having a full row within a `render_cell` implementation seems a lot simpler.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 345821500, "label": "render_cell(value) plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/658#issuecomment-583177728", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/658", "id": 583177728, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MzE3NzcyOA==", "user": {"value": 49656826, "label": "null92"}, "created_at": "2020-02-07T00:28:55Z", "updated_at": "2020-02-07T00:29:50Z", "author_association": "NONE", "body": "Simon,\r\n\r\nYes, there is an \"app.css\" on static folder, however, anyone modification I do on this .css, doesn't apply on the datasette.\r\n\r\nI'm using this command: datasette publish heroku _\"databases folder\"_ -n _\"herokuapp name\"_ --extra-options=\"--config sql_time_limit_ms:60000 --config max_returned_rows:10000 --config force_https_urls:1\" --template-dir _\"templates folder\"_ -m _\"metadata.json folder\"_", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 550293770, "label": "How do I use the app.css as style sheet?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-580745213", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 580745213, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MDc0NTIxMw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-31T14:02:38Z", "updated_at": "2020-01-31T14:21:09Z", "author_association": "NONE", "body": "So the conundrum continues.. 
The simple test case above now runs, but if I upsert a large number of new records (successfully) and then try to upsert a smaller number of new records to a different table, I get the same error.\r\n\r\nIf I run the same upserts again (which in the first case means there are no new records to add, because they were already added), the second upsert works correctly.\r\n\r\nIt feels as if, when the number of items added via an upsert >> the number of items I try to add in an upsert immediately after, I get the error.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/661#issuecomment-580075725", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/661", "id": 580075725, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MDA3NTcyNQ==", "user": {"value": 134771, "label": "dvhthomas"}, "created_at": "2020-01-30T04:17:51Z", "updated_at": "2020-01-30T04:17:51Z", "author_association": "NONE", "body": "Thanks for the elegant solution to the problem as stated. I'm packaging right now :-)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 555832585, "label": "--port option to expose a port other than 8001 in \"datasette package\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/662#issuecomment-579864036", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/662", "id": 579864036, "node_id": "MDEyOklzc3VlQ29tbWVudDU3OTg2NDAzNg==", "user": {"value": 2181410, "label": "clausjuhl"}, "created_at": "2020-01-29T17:17:01Z", "updated_at": "2020-01-29T17:17:01Z", "author_association": "NONE", "body": "This is excellent news. I'll wait until version 0.34. It would be tiresome to rewrite all standard-queries into custom queries. Thank you!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 556814876, "label": "Escape_fts5_query-hookimplementation does not work with queries to standard tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/662#issuecomment-579798917", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/662", "id": 579798917, "node_id": "MDEyOklzc3VlQ29tbWVudDU3OTc5ODkxNw==", "user": {"value": 2181410, "label": "clausjuhl"}, "created_at": "2020-01-29T15:08:57Z", "updated_at": "2020-01-29T15:08:57Z", "author_association": "NONE", "body": "Hi Simon\r\n\r\nThank you for the quick reply. Here are a few examples of URLs where I search the 'cases_fts' virtual table for tokens in the title column. 
It returns the same results, whether the other query-params are present or not.\r\n\r\nSearching for sky\r\nhttp://localhost:8001/db-7596a4e/cases?_search_title=sky&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date\r\nReturns search results\r\n\r\nSearching for sky*\r\nhttp://localhost:8001/db-7596a4e/cases?_search_title=sky*&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date\r\nReturns search results\r\n\r\nSearching for sky-tog\r\nhttp://localhost:8001/db-7596a4e/cases?_search_title=sky-tog&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date\r\nThrows: No such column: tog\r\n\r\nSearching for sky+\r\nhttp://localhost:8001/db-7596a4e/cases?_search_title=sky%2B&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date\r\nThrows: Invalid SQL: fts5: syntax error near \"\"\r\n\r\nSearching for \"madpakke\" (including double quotes)\r\nhttp://localhost:8001/db-7596a4e/cases?_search_title=%22madpakke%22&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date\r\nReturns search results even though 'madpakke' only appears in the full-text index without quotes\r\n\r\nAs I said, my other plugins work just fine, and I just copied your sql_functions.py from the datasette repo.\r\n\r\nThanks!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 556814876, "label": "Escape_fts5_query-hookimplementation does not work with queries to standard tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/657#issuecomment-576759416", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/657", "id": 576759416, "node_id": "MDEyOklzc3VlQ29tbWVudDU3Njc1OTQxNg==", "user": {"value": 1055831, "label": "dazzag24"}, "created_at": "2020-01-21T16:20:19Z", "updated_at": "2020-01-21T16:20:19Z", "author_association": "NONE", "body": "Hi,\r\n\r\nI've completed some changes to my fork of datasette that allow it to automatically create the parquet virtual table when you supply it with a filename that has the \".parquet\" extension.\r\n\r\nI had to figure out how to make the \"CREATE VIRTUAL TABLE\" statement only be applied to the fake in-memory parquet database and not to any others that were also being loaded. 
Thus it supports mixed-mode databases, e.g.\r\n```\r\ndatasette my_test.parquet normal_sqlite_file.db --load-extension=libparquet.so --load-extension=mod_spatialite.so\r\n```\r\n\r\nPlease see my changes here: \r\nhttps://github.com/dazzag24/datasette/commit/8e18394353114f17291fd1857073b1e0485a1faf\r\n\r\nThanks\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 548591089, "label": "Allow creation of virtual tables at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/70#issuecomment-575799104", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/70", "id": 575799104, "node_id": "MDEyOklzc3VlQ29tbWVudDU3NTc5OTEwNA==", "user": {"value": 26292069, "label": "LucasElArruda"}, "created_at": "2020-01-17T21:20:17Z", "updated_at": "2020-01-17T21:20:17Z", "author_association": "NONE", "body": "Omg sorry I took so long to reply!\r\n\r\nIn SQL we can say how the foreign key behaves when it is deleted or updated on the parent table (see https://www.sqlitetutorial.net/sqlite-foreign-key/ for more details).\r\n\r\nI did not see clearly how to create tables with this feature in the sqlite-utils library.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 539204432, "label": "Implement ON DELETE and ON UPDATE actions for foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/657#issuecomment-575321322", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/657", "id": 575321322, "node_id": "MDEyOklzc3VlQ29tbWVudDU3NTMyMTMyMg==", "user": {"value": 1055831, "label": "dazzag24"}, "created_at": "2020-01-16T20:01:43Z", "updated_at": "2020-01-16T20:01:43Z", "author_association": "NONE", "body": "I have successfully tested datasette using a parquet VIRTUAL TABLE. In the first terminal:\r\n\r\n```datasette airports.db --load-extension=libparquet```\r\n\r\nIn another terminal I load the same sqlite db file using the sqlite3 cli client.\r\n\r\n```$ sqlite3 airports.db```\r\n\r\nand then load the parquet extension and create the virtual table.\r\n\r\n```\r\nsqlite> .load /home/darreng/metars/libparquet\r\nsqlite> CREATE VIRTUAL TABLE mytable USING parquet('/home/xx/data.parquet');\r\n```\r\n\r\nNow the parquet virtual table is usable by the datasette web UI.\r\n\r\nIt's not an ideal solution, but it is proof that datasette works with the parquet extension.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 548591089, "label": "Allow creation of virtual tables at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-573047321", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 573047321, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MzA0NzMyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-10T14:02:56Z", "updated_at": "2020-01-10T14:09:23Z", "author_association": "NONE", "body": "Hmmm... just tried with installs from pip and the repo (v2.0.0 and v2.0.1) and I get the error each time (start of second run through the second loop).\r\n\r\nCould it be sqlite3? 
I'm on 3.30.1.\r\n\r\nUPDATE: just tried it on jupyter.org/try and I get the error there, too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-571412923", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16", "id": 571412923, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MTQxMjkyMw==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-01-07T03:06:46Z", "updated_at": "2020-01-07T03:06:46Z", "author_association": "NONE", "body": "I re-tried after doing `auth`, and I get the same result.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546051181, "label": "Exception running first command: IndexError: list index out of range"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-571138093", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 571138093, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MTEzODA5Mw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-06T13:28:31Z", "updated_at": "2020-01-06T13:28:31Z", "author_association": "NONE", "body": "I think I actually had several issues in play...\r\n\r\nThe missing key was one, but I think there is also an issue as per below.\r\n\r\nFor example, in the following:\r\n\r\n```python\r\ndef init_testdb(dbname='test.db'):\r\n \r\n if os.path.exists(dbname):\r\n os.remove(dbname)\r\n\r\n conn = sqlite3.connect(dbname)\r\n db = Database(conn)\r\n \r\n return conn, db\r\n\r\nconn, db = init_testdb()\r\n\r\nc = conn.cursor()\r\nc.executescript('CREATE TABLE \"test1\" (\"Col1\" TEXT, \"Col2\" TEXT, PRIMARY KEY (\"Col1\"));')\r\nc.executescript('CREATE TABLE \"test2\" (\"Col1\" TEXT, \"Col2\" TEXT, PRIMARY KEY (\"Col1\"));')\r\n\r\nprint('Test 1...')\r\nfor i in range(3):\r\n db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n\r\nprint('Test 2...')\r\nfor i in range(3):\r\n db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},\r\n {'Col1':'c','Col2':'x'}], pk=('Col1'))\r\nprint('Done...')\r\n\r\n---------------------------------------------------------------------------\r\nTest 1...\r\nTest 2...\r\n IndexError: list index out of range \r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n in \r\n 22 print('Test 2...')\r\n 23 for i in range(3):\r\n---> 24 db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n 25 db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},\r\n 26 {'Col1':'c','Col2':'x'}], pk=('Col1'))\r\n\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)\r\n 1157 alter=alter,\r\n 1158 extracts=extracts,\r\n-> 1159 upsert=True,\r\n 1160 )\r\n 1161 
\r\n\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)\r\n 1097 # self.last_rowid will be 0 if a \"INSERT OR IGNORE\" happened\r\n 1098 if (hash_id or pk) and self.last_rowid:\r\n-> 1099 row = list(self.rows_where(\"rowid = ?\", [self.last_rowid]))[0]\r\n 1100 if hash_id:\r\n 1101 self.last_pk = row[hash_id]\r\n\r\nIndexError: list index out of range\r\n```\r\n\r\nThe first test works but the second fails. Is the length of the list of items being upserted leaking somewhere?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/596#issuecomment-567226048", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/596", "id": 567226048, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzIyNjA0OA==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T21:43:13Z", "updated_at": "2019-12-18T21:43:13Z", "author_association": "NONE", "body": "Meant to add that of course it would be better not to reinvent CSS (one time was already enough). But one option would be to provide a mechanism to specify a CSS class for a column (a cell, a row...) and let the user give a URL path to a CSS file on the command line.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 507454958, "label": "Handle really wide tables better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/596#issuecomment-567225156", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/596", "id": 567225156, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzIyNTE1Ng==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T21:40:35Z", "updated_at": "2019-12-18T21:40:35Z", "author_association": "NONE", "body": "I initially went looking for a way to hide a column completely. Today I found the setting to truncate cells, but it applies to all cells. In my case I have text columns that can have many thousands of characters. I was wondering whether the metadata JSON would be an appropriate place to indicate how columns are displayed (on a col-by-col basis). E.g., I'd like to be able to specify that only 20 chars of a given column be shown, and the font be monospace. But maybe I can do that in some other way - I barely know anything about datasette yet, sorry!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 507454958, "label": "Handle really wide tables better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567219479", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567219479, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzIxOTQ3OQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T21:24:23Z", "updated_at": "2019-12-18T21:24:23Z", "author_association": "NONE", "body": "@simonw What about allowing a base URL? The `<base>` tag has been around forever. 
Then just use all relative URLs, which I guess is likely what you already do. See https://www.w3schools.com/TAGs/tag_base.asp", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567128636", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567128636, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEyODYzNg==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T17:19:46Z", "updated_at": "2019-12-18T17:19:46Z", "author_association": "NONE", "body": "Hmmm, wait, maybe my mindless (copy/paste) use of `proxy_redirect` is causing me grief...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567127981", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567127981, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEyNzk4MQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T17:18:06Z", "updated_at": "2019-12-18T17:18:06Z", "author_association": "NONE", "body": "Agreed, this would be nice to have. I'm currently working around it in `nginx` with additional location blocks:\r\n\r\n```\r\n\r\n location /datasette/ {\r\n proxy_pass http://127.0.0.1:8001/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n\r\n location /dna-protein-genome/ {\r\n proxy_pass http://127.0.0.1:8001/dna-protein-genome/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n\r\n location /rna-protein-genome/ {\r\n proxy_pass http://127.0.0.1:8001/rna-protein-genome/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n```\r\n\r\nThe 2nd and 3rd above are my databases. This works, but I have a small problem with URLs like `/rna-protein-genome?params....` that I could fix with some more nginx munging. I seem to do this sort of thing once every 5 years and then have to look it all up again.\r\n\r\nThanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/646#issuecomment-561247711", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/646", "id": 561247711, "node_id": "MDEyOklzc3VlQ29tbWVudDU2MTI0NzcxMQ==", "user": {"value": 18017473, "label": "lagolucas"}, "created_at": "2019-12-03T16:31:39Z", "updated_at": "2019-12-03T17:31:33Z", "author_association": "NONE", "body": "> I don't think this is possible at the moment but you're right, it totally should be.\r\n\r\nJust give me a heads-up if you think you can do that quickly. 
I am trying to implement it with very little knowledge of how datasette works, so it will take loads of time.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 531502365, "label": "Make database level information from metadata.json available in the index.html template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/646#issuecomment-561133534", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/646", "id": 561133534, "node_id": "MDEyOklzc3VlQ29tbWVudDU2MTEzMzUzNA==", "user": {"value": 18017473, "label": "lagolucas"}, "created_at": "2019-12-03T11:50:44Z", "updated_at": "2019-12-03T11:50:44Z", "author_association": "NONE", "body": "Thanks for the reply.\r\n\r\nWill try to implement that on my end, if I have any success I will post here/ make a pull request.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 531502365, "label": "Make database level information from metadata.json available in the index.html template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-559916057", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 559916057, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTkxNjA1Nw==", "user": {"value": 172847, "label": "pkoppstein"}, "created_at": "2019-11-30T06:08:50Z", "updated_at": "2019-11-30T06:08:50Z", "author_association": "NONE", "body": "@simonw, @jacobian - I was able to resolve the metadata.json issue by adding `-m metadata.json` to the Procfile. Now `git push heroku master` picks up the changes, though I have the impression that heroku is doing more work than necessary (e.g. one of the information messages is: `Installing requirements with pip`).\r\n\r\nI also had to set the environment variable WEB_CONCURRENCY -- I used WEB_CONCURRENCY=1.\r\n\r\nI am still anxious to know whether it's possible for Datasette on Heroku to access the SQLite file at another location. Cloudcube seems the most promising, and I'm hoping it can be done by tweaking the Procfile suitably, but maybe that's too optimistic?\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-558852316", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 558852316, "node_id": "MDEyOklzc3VlQ29tbWVudDU1ODg1MjMxNg==", "user": {"value": 172847, "label": "pkoppstein"}, "created_at": "2019-11-26T22:54:23Z", "updated_at": "2019-11-26T22:54:23Z", "author_association": "NONE", "body": "@jacobian - Thanks for your help. Having to upload an entire slug each time a small change is needed in `metadata.json` seems no better than the current situation so I probably won't go down that rabbit hole just yet. In any case, the really important goal is moving the SQLite file out of Heroku in a way that the Heroku app can still read it efficiently. Is this possible? Is Cloudcube the right place to start? Is there any alternative? 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-558437707", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 558437707, "node_id": "MDEyOklzc3VlQ29tbWVudDU1ODQzNzcwNw==", "user": {"value": 172847, "label": "pkoppstein"}, "created_at": "2019-11-26T03:02:53Z", "updated_at": "2019-11-26T03:03:29Z", "author_association": "NONE", "body": "@simonw - Thanks for the reply!\r\n\r\nMy reading of the heroku documents is that if one sets things up using git, then one can use \"git push\" (from a {local, GitHub, GitLab} git repository to Heroku) to \"update\" a Heroku deployment, but I'm not sure exactly how this works. However, assuming there is some way to use \"git push\" to update the Heroku deployment, the question becomes how can one do this in conjunction with datasette.\r\n\r\nAgain based on my reading the heroku documents, it would seem that the following should work (but it doesn't quite):\r\n\r\n1) Use datasette to create a deployment (named MYAPP)\r\n2) Put it in maintenance mode\r\n3) heroku git:clone -a MYAPP\r\n -- This results in an empty repository (as expected)\r\n4) In another directory, heroku slugs:download -a MYAPP\r\n5) Copy the downloaded slug into the repository\r\n6) Make some change to metadata.json\r\n6) Commit and push it back\r\n7) Take the deployment out of maintenance mode\r\n8) Refresh the deployment\r\n\r\nUsing the heroku console, I've verified that the edits appear on heroku, but somehow they are not reflected in the running app.\r\n\r\nI'm hopeful that with some small tweak or perhaps the addition of a bit of voodoo, this strategy will work. \r\n\r\nI think it will be important to get this working for another reason: getting Heroku, Cloudcube, and datasette to work together, to overcome the slug size limitation so that large SQLite databases can be deployed to Heroku using Datasette.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/627#issuecomment-552737357", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/627", "id": 552737357, "node_id": "MDEyOklzc3VlQ29tbWVudDU1MjczNzM1Nw==", "user": {"value": 2680980, "label": "willingc"}, "created_at": "2019-11-12T05:13:46Z", "updated_at": "2019-11-12T05:13:46Z", "author_association": "NONE", "body": "Thanks @simonw. 
I appreciate your work on this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 521323012, "label": "Support Python 3.8, stop supporting Python 3.5"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/595#issuecomment-552327079", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/595", "id": 552327079, "node_id": "MDEyOklzc3VlQ29tbWVudDU1MjMyNzA3OQ==", "user": {"value": 647359, "label": "tomchristie"}, "created_at": "2019-11-11T07:34:27Z", "updated_at": "2019-11-11T07:34:27Z", "author_association": "NONE", "body": "> Glitch has been upgraded to Python 3.7.\r\n\r\nWhoop! \ud83e\udd73 \u2728 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 506300941, "label": "bump uvicorn to 0.9.0 to be Python-3.8 friendly"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/616#issuecomment-551872999", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/616", "id": 551872999, "node_id": "MDEyOklzc3VlQ29tbWVudDU1MTg3Mjk5OQ==", "user": {"value": 49656826, "label": "null92"}, "created_at": "2019-11-08T15:31:33Z", "updated_at": "2019-11-08T15:31:33Z", "author_association": "NONE", "body": "Thank you so much, Simon!\r\n\r\nNow, I'm contacting Heroku's support team to find a way to update the Datasette version on bases.vortex.media.\r\n\r\nDo you know how to do it?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 518506242, "label": "Datasette FTS detection bug"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/607#issuecomment-550649607", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/607", "id": 550649607, "node_id": "MDEyOklzc3VlQ29tbWVudDU1MDY0OTYwNw==", "user": {"value": 8431341, "label": "zeluspudding"}, "created_at": "2019-11-07T03:38:10Z", "updated_at": "2019-11-07T03:38:10Z", "author_association": "NONE", "body": "I just got FTS5 working and it is incredible! The lookup time for returning all rows where company name contains \"Musk\" from my table of 16,428,090 rows has dropped from `13,340.019` ms to `15.6`ms. Well below the 100ms latency for the \"real time autocomplete\" feel (which doesn't currently include the http call).\r\n\r\nSo cool! Thanks again for the pointers and awesome datasette!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 512996469, "label": "Ways to improve fuzzy search speed on larger data sets?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-548508237", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 548508237, "node_id": "MDEyOklzc3VlQ29tbWVudDU0ODUwODIzNw==", "user": {"value": 634572, "label": "eads"}, "created_at": "2019-10-31T18:25:44Z", "updated_at": "2019-10-31T18:25:44Z", "author_association": "NONE", "body": "\ud83d\udc4b I'd be interested in building this out in Q1 or Q2 of 2020 if nobody has tackled it by then. 
I would love to integrate Datasette into @thechicagoreporter's practice, but we're also fully committed to GraphQL moving forward.", "reactions": "{\"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 2, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/605#issuecomment-548058715", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/605", "id": 548058715, "node_id": "MDEyOklzc3VlQ29tbWVudDU0ODA1ODcxNQ==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2019-10-30T18:44:41Z", "updated_at": "2019-10-30T18:55:37Z", "author_association": "NONE", "body": "Sure. I imagine it being pretty straightforward. Today when you click on the database, the UI displays:\r\n\r\n-Table 1-\r\n -fields-\r\n -row count-\r\n-Table 2-\r\n -fields-\r\n -row count-\r\nQueries:\r\n -query1-\r\n -query2-\r\n ..\r\n ...\r\n\r\nMy proposal would be to display as follows:\r\n\r\n-Table 1-\r\n -fields-\r\n -row count-\r\n Queries:\r\n -query1-\r\n -query2-\r\n ..\r\n ...\r\n-Table 2-\r\n -fields-\r\n -row count-\r\n Queries:\r\n -query1-\r\n -query2-\r\n ..\r\n ...\r\n\r\nThis way, if a given table is not present in the database, the associated queries are also not present. Today, I have a list of queries; some work and some result in errors, depending on whether the dependent tables exist in the database.\r\n\r\nLet me know if that makes sense. Thanks again!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 510076368, "label": "Support queries at the table level"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/607#issuecomment-548060038", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/607", "id": 548060038, "node_id": "MDEyOklzc3VlQ29tbWVudDU0ODA2MDAzOA==", "user": {"value": 8431341, "label": "zeluspudding"}, "created_at": "2019-10-30T18:47:57Z", "updated_at": "2019-10-30T18:47:57Z", "author_association": "NONE", "body": "Hi Simon, thanks for the pointer! Feeling good that I came to the same conclusion a few days ago. I did hit a snag with figuring out how to compile a special version of SQLite for my Windows machine (which I only realized I needed to do after running your command `sqlite-utils enable-fts mydatabase.db items name description`). \r\n\r\nI'll try to solve that problem next week and report back here with my findings (if you know of a good tutorial for compiling on Windows, I'm all ears). Either way, I'll try to close this issue out in the next two weeks. 
Thanks again!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 512996469, "label": "Ways to improve fuzzy search speed on larger data sets?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/594#issuecomment-547373739", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/594", "id": 547373739, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NzM3MzczOQ==", "user": {"value": 2680980, "label": "willingc"}, "created_at": "2019-10-29T11:21:52Z", "updated_at": "2019-10-29T11:21:52Z", "author_association": "NONE", "body": "Just an FYI for folks wishing to run datasette with Python 3.8, I was able to successfully use datasette with the following in a virtual environment:\r\n\r\n```\r\npip install uvloop==0.14.0rc1\r\npip install uvicorn==0.9.1\r\n```\r\n", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 506297048, "label": "upgrade to uvicorn-0.9 to be Python-3.8 friendly"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/607#issuecomment-546752311", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/607", "id": 546752311, "node_id": "MDEyOklzc3VlQ29tbWVudDU0Njc1MjMxMQ==", "user": {"value": 8431341, "label": "zeluspudding"}, "created_at": "2019-10-28T00:37:10Z", "updated_at": "2019-10-28T00:37:10Z", "author_association": "NONE", "body": "UPDATE:\r\nAccording to tips suggested in [Squeezing Performance from SQLite: Indexes? Indexes!](https://medium.com/@JasonWyatt/squeezing-performance-from-sqlite-indexes-indexes-c4e175f3c346) I have added an index to my large table and benchmarked query speeds in the case where I want to return `all rows`, `rows exactly equal to 'Musk Elon'` and, `rows like 'musk'`. Indexing reduced query time for each of those measures and **dramatically** reduced the time to return `rows exactly equal to 'Musk Elon'` as shown below:\r\n\r\n> table: edgar_idx\r\n> rows: 16,428,090 rows\r\n> **indexed: False**\r\n> Return all rows where company name exactly equal to Musk Elon\r\n> query: select rowid, * from edgar_idx where \"company\" = :p0 order by rowid limit 101\r\n> query time: Query took 21821.031ms\r\n> \r\n> Return all rows where company name contains Musk\r\n> query: select rowid, * from edgar_idx where \"company\" like :p0 order by rowid limit 101\r\n> query time: Query took 20505.029ms\r\n> \r\n> Return everything\r\n> query: select rowid, * from edgar_idx order by rowid limit 101\r\n> query time: Query took 7985.011ms\r\n> \r\n> **indexed: True**\r\n> Return all rows where company name exactly equal to Musk Elon\r\n> query: select rowid, * from edgar_idx where \"company\" = :p0 order by rowid limit 101\r\n> query time: Query took 30.0ms\r\n> \r\n> Return all rows where company name contains Musk\r\n> query: select rowid, * from edgar_idx where \"company\" like :p0 order by rowid limit 101\r\n> query time: Query took 13340.019ms\r\n> \r\n> Return everything\r\n> query: select rowid, * from edgar_idx order by rowid limit 101\r\n> query time: Query took 2190.003ms\r\n\r\nSo indexing reduced query time for an exact match to \"Musk Elon\" from almost `22 seconds` to `30.0ms`. 
**That's amazing and truly promising!** However, an autocomplete feature relies on fuzzy / incomplete matching, which is more similar to the `contains 'musk'` query... Unfortunately, that takes 13 seconds even after indexing. So the hunt for a fast fuzzy / autocomplete search capability persists.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 512996469, "label": "Ways to improve fuzzy search speed on larger data sets?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/607#issuecomment-546723302", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/607", "id": 546723302, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NjcyMzMwMg==", "user": {"value": 8431341, "label": "zeluspudding"}, "created_at": "2019-10-27T18:59:55Z", "updated_at": "2019-10-27T19:00:48Z", "author_association": "NONE", "body": "Ultimately, I need to serve searches like this to multiple users (at times concurrently). Given the size of the database I'm working with, can anyone comment as to whether I should be storing this in something like MySQL or Postgres rather than SQLite? I know there's been much [defense of SQLite being performant](https://www.sqlite.org/whentouse.html), but I wonder if those arguments break down as the database size increases.\r\n\r\nFor example, if I scroll to the bottom of that linked page, where it says **Checklist For Choosing The Right Database Engine**, here's how I answer those questions:\r\n\r\n - Is the data separated from the application by a network? \u2192 choose client/server\r\n __Yes__\r\n- Many concurrent writers? \u2192 choose client/server\r\n __Not exactly. I may have many concurrent readers but almost no concurrent writers.__\r\n- Big data? \u2192 choose client/server\r\n __No, my database is less than 40 GB and won't approach a terabyte in the next decade.__\r\n\r\nSo is SQLite still a good idea here?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 512996469, "label": "Ways to improve fuzzy search speed on larger data sets?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/607#issuecomment-546722281", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/607", "id": 546722281, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NjcyMjI4MQ==", "user": {"value": 8431341, "label": "zeluspudding"}, "created_at": "2019-10-27T18:46:29Z", "updated_at": "2019-10-27T19:00:40Z", "author_association": "NONE", "body": "Update: I've created a table of only unique names. This reduces the search space from over 16 million to just about 640,000. Interestingly, it takes less than 2 seconds to create this table using Python. Performing the same search that we did earlier for `elon musk` takes nearly a second - much faster than before, but still not speedy enough for an autocomplete feature (which usually needs to return results within 100ms to feel \"real time\"). 
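As the FTS5 result reported further up this thread shows, full-text indexing turned out to be the answer to this question. A minimal sketch of the prefix-query shape that autocomplete needs, using a hypothetical in-memory table in place of the deduplicated ~640,000-name table (requires an SQLite build with the FTS5 extension, which ships with modern Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical stand-in for the deduplicated names table described above.
conn.execute("CREATE VIRTUAL TABLE names_fts USING fts5(company)")
conn.executemany(
    "INSERT INTO names_fts (company) VALUES (?)",
    [("Musk Elon",), ("Musk Kimbal",), ("Musgrave Story",)],
)

# The trailing * makes the last token a prefix match, which is exactly the
# shape of an autocomplete query: match whatever the user has typed so far.
for (company,) in conn.execute(
    "SELECT company FROM names_fts WHERE names_fts MATCH ? LIMIT 10", ("musk*",)
):
    print(company)  # prints the two "Musk ..." rows; "Musgrave" does not match
```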
\r\n\r\nAny ideas for slashing the search speed nearly 10-fold?\r\n\r\n> ![image](https://user-images.githubusercontent.com/8431341/67639587-b6c02b00-f8bf-11e9-9344-1d8667cad395.png)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 512996469, "label": "Ways to improve fuzzy search speed on larger data sets?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/588#issuecomment-544502617", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/588", "id": 544502617, "node_id": "MDEyOklzc3VlQ29tbWVudDU0NDUwMjYxNw==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2019-10-21T12:58:22Z", "updated_at": "2019-10-21T12:58:22Z", "author_association": "NONE", "body": "Thanks for the reply. I was hoping queries per table were supported, as I have an application that builds tables depending on user input to the application. It will create either one table or two, and if one or the other is missing, certain queries will return errors. Of course I can work around this by labeling the query names and hoping users don't click queries whose tables they have not created, but ideally the user would see the queries available based on the tables that exist in their database. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 505512251, "label": "Queries per DB table in metadata.json"}, "performed_via_github_app": null}
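The behaviour described in that last comment follows from how canned queries are configured: metadata.json groups them per database, with no per-table level, so every listed query is shown whether or not its underlying table exists. A hedged sketch of that per-database format (the database, table, and query names here are hypothetical):

```python
import json

# Hypothetical database and query names; the structure is Datasette's
# per-database canned query format. Because there is no per-table grouping,
# "table2_summary" still appears in the UI even when table2 was never created,
# and errors when clicked.
metadata = {
    "databases": {
        "mydatabase": {
            "queries": {
                "table1_summary": "select count(*) from table1",
                "table2_summary": "select count(*) from table2",
            }
        }
    }
}

# Write the config next to the database so it can be passed to
# `datasette mydatabase.db --metadata metadata.json`.
with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=4)
```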