{"html_url": "https://github.com/simonw/datasette/issues/93#issuecomment-344424382", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/93", "id": 344424382, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDQyNDM4Mg==", "user": {"value": 67420, "label": "atomotic"}, "created_at": "2017-11-14T22:42:16Z", "updated_at": "2017-11-14T22:42:16Z", "author_association": "NONE", "body": "tried quickly, this seems working:\r\n\r\n```\r\n~ pip3 install pyinstaller\r\n~ pyinstaller -F --add-data /usr/local/lib/python3.6/site-packages/datasette/templates:datasette/templates --add-data /usr/local/lib/python3.6/site-packages/datasette/static:datasette/static /usr/local/bin/datasette\r\n\r\n~ du -h dist/datasette\r\n6.8M\tdist/datasette\r\n~ file dist/datasette\r\ndist/datasette: Mach-O 64-bit executable x86_64\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273944952, "label": "Package as standalone binary"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/93#issuecomment-344430299", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/93", "id": 344430299, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDQzMDI5OQ==", "user": {"value": 67420, "label": "atomotic"}, "created_at": "2017-11-14T23:06:33Z", "updated_at": "2017-11-14T23:06:33Z", "author_association": "NONE", "body": "i will look better tomorrow, it's late i surely made some mistake\r\nhttps://asciinema.org/a/ZyAWbetrlriDadwWyVPUWB94H", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273944952, "label": "Package as standalone binary"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/93#issuecomment-344516406", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/93", "id": 
344516406, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDUxNjQwNg==", "user": {"value": 67420, "label": "atomotic"}, "created_at": "2017-11-15T08:09:41Z", "updated_at": "2017-11-15T08:09:41Z", "author_association": "NONE", "body": "actually you can use travis to build for linux/macos and [appveyor](https://www.appveyor.com/) to build for windows.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 273944952, "label": "Package as standalone binary"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/97#issuecomment-345509500", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/97", "id": 345509500, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NTUwOTUwMA==", "user": {"value": 231923, "label": "yschimke"}, "created_at": "2017-11-19T11:26:58Z", "updated_at": "2017-11-19T11:26:58Z", "author_association": "NONE", "body": "Specifically docs should make it clearer this file exists\r\n\r\nhttps://parlgov.datasettes.com/.json\r\n\r\nAnd from that you can build https://parlgov.datasettes.com/parlgov-25f9855.json\r\n\r\nThen https://parlgov.datasettes.com/parlgov-25f9855/cabinet.json", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274022950, "label": "Link to JSON for the list of tables "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/97#issuecomment-392895733", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/97", "id": 392895733, "node_id": "MDEyOklzc3VlQ29tbWVudDM5Mjg5NTczMw==", "user": {"value": 231923, "label": "yschimke"}, "created_at": "2018-05-29T18:51:35Z", "updated_at": "2018-05-29T18:51:35Z", "author_association": "NONE", "body": "Do you have an existing example with views?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, 
\"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274022950, "label": "Link to JSON for the list of tables "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/100#issuecomment-344864254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/100", "id": 344864254, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDg2NDI1NA==", "user": {"value": 13304454, "label": "coisnepe"}, "created_at": "2017-11-16T09:25:10Z", "updated_at": "2017-11-16T09:25:10Z", "author_association": "NONE", "body": "@simonw I see. I upgraded sanic-jinja2 and jinja2: it now works flawlessly. Thank you!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274160723, "label": "TemplateAssertionError: no filter named 'tojson'"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/101#issuecomment-344597274", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/101", "id": 344597274, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NDU5NzI3NA==", "user": {"value": 450244, "label": "eaubin"}, "created_at": "2017-11-15T13:48:55Z", "updated_at": "2017-11-15T13:48:55Z", "author_association": "NONE", "body": "This is a duplicate of https://github.com/simonw/datasette/issues/100", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274161964, "label": "TemplateAssertionError: no filter named 'tojson'"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/120#issuecomment-355487646", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/120", "id": 355487646, "node_id": "MDEyOklzc3VlQ29tbWVudDM1NTQ4NzY0Ng==", "user": {"value": 723567, "label": "nickdirienzo"}, "created_at": 
"2018-01-05T07:10:12Z", "updated_at": "2018-01-05T07:10:12Z", "author_association": "NONE", "body": "Ah, glad I found this issue. I have private data that I'd like to share to a few different people. Personally, a shared username and password would be sufficient for me, more-or-less Basic Auth. Do you have more complex requirements in mind?\r\n\r\nI'm not sure if \"plugin\" means \"build a plugin\" or \"find a plugin\" or something else entirely. FWIW, I stumbled upon [sanic-auth](https://github.com/pyx/sanic-auth) which looks like a new project to bring some interfaces around auth to sanic, similar to Flask.\r\n\r\nAlternatively, it shouldn't be too bad to add in Basic Auth. If we went down that route, that would probably be best built as a separate package for sanic that `datasette` brings in.\r\n\r\nWhat are your thoughts around this?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275087397, "label": "Plugin that adds an authentication layer of some sort"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/120#issuecomment-439421164", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/120", "id": 439421164, "node_id": "MDEyOklzc3VlQ29tbWVudDQzOTQyMTE2NA==", "user": {"value": 36796532, "label": "ad-si"}, "created_at": "2018-11-16T15:05:18Z", "updated_at": "2018-11-16T15:05:18Z", "author_association": "NONE", "body": "This would be an awesome feature \u2764\ufe0f ", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275087397, "label": "Plugin that adds an authentication layer of some sort"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/120#issuecomment-496966227", "issue_url": 
"https://api.github.com/repos/simonw/datasette/issues/120", "id": 496966227, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5Njk2NjIyNw==", "user": {"value": 26342344, "label": "duarteocarmo"}, "created_at": "2019-05-29T14:40:52Z", "updated_at": "2019-05-29T14:40:52Z", "author_association": "NONE", "body": "I would really like this. If you give me some pointers @simonw I'm willing to PR!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275087397, "label": "Plugin that adds an authentication layer of some sort"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-698110186", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 698110186, "node_id": "MDEyOklzc3VlQ29tbWVudDY5ODExMDE4Ng==", "user": {"value": 45416, "label": "obra"}, "created_at": "2020-09-24T04:49:51Z", "updated_at": "2020-09-24T04:49:51Z", "author_association": "NONE", "body": "As a half-measure, I'd get value out of being able to upload a CSV and have datasette run csv-to-sqlite on it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125561, "label": "Datasette serve should accept paths/URLs to CSVs and other file formats"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-698174957", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 698174957, "node_id": "MDEyOklzc3VlQ29tbWVudDY5ODE3NDk1Nw==", "user": {"value": 45416, "label": "obra"}, "created_at": "2020-09-24T07:42:05Z", "updated_at": "2020-09-24T07:42:05Z", "author_association": "NONE", "body": "\nOh. Awesome. \n\nOn Thu, Sep 24, 2020 at 12:28:53AM -0700, Simon Willison wrote:\n> @obra there's a plugin for that! 
https://github.com/simonw/\n> datasette-upload-csvs\n> \n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or unsubscribe.*\n> \n\n-- \n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125561, "label": "Datasette serve should accept paths/URLs to CSVs and other file formats"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-735440555", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 735440555, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTQ0MDU1NQ==", "user": {"value": 11912854, "label": "jsancho-gpl"}, "created_at": "2020-11-29T19:12:30Z", "updated_at": "2020-11-29T19:12:30Z", "author_association": "NONE", "body": "[datasette-connectors](https://github.com/pytables/datasette-connectors) provides an API for making connectors for any file based database. 
For example, [datasette-pytables](https://github.com/pytables/datasette-pytables) is a connector for HDF5 files, so now it is possible to use this type of file with Datasette.\r\n\r\nIt'd be nice if Datasette could provide that API directly, for other file formats and for URLs too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125561, "label": "Datasette serve should accept paths/URLs to CSVs and other file formats"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-882096402", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 882096402, "node_id": "IC_kwDOBm6k_c40k7kS", "user": {"value": 921217, "label": "RayBB"}, "created_at": "2021-07-18T18:07:29Z", "updated_at": "2021-07-18T18:07:29Z", "author_association": "NONE", "body": "I also love the idea for this feature and wonder if it could work without having to download the whole database into memory at once if it's a rather large db. 
Obviously this could be slower but could have many use cases.\r\n\r\nMy comment is partially inspired by this post about streaming sqlite dbs from github pages or such\r\nhttps://news.ycombinator.com/item?id=27016630\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125561, "label": "Datasette serve should accept paths/URLs to CSVs and other file formats"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/124#issuecomment-346987395", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/124", "id": 346987395, "node_id": "MDEyOklzc3VlQ29tbWVudDM0Njk4NzM5NQ==", "user": {"value": 50138, "label": "janimo"}, "created_at": "2017-11-26T06:24:08Z", "updated_at": "2017-11-26T06:24:08Z", "author_association": "NONE", "body": "Are there performance gains when using immutable as opposed to read-only? From what I see other processes can still modify the DB when immutable, but there are no change notifications.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125805, "label": "Option to open readonly but not immutable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/124#issuecomment-347123991", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/124", "id": 347123991, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NzEyMzk5MQ==", "user": {"value": 50138, "label": "janimo"}, "created_at": "2017-11-27T09:25:15Z", "updated_at": "2017-11-27T09:25:15Z", "author_association": "NONE", "body": "That's the only reference to immutable I saw as well, making me think that there may be no perceivable advantages over simply using mode=ro. 
Since the database is never or seldom updated the change notifications should not impact performance.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125805, "label": "Option to open readonly but not immutable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/141#issuecomment-346974336", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/141", "id": 346974336, "node_id": "MDEyOklzc3VlQ29tbWVudDM0Njk3NDMzNg==", "user": {"value": 50138, "label": "janimo"}, "created_at": "2017-11-26T00:00:35Z", "updated_at": "2017-11-26T00:00:35Z", "author_association": "NONE", "body": "FWIW I worked around this by setting TMPDIR to ~/tmp before running the command.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275814941, "label": "datasette publish can fail if /tmp is on a different device"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/144#issuecomment-346427794", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/144", "id": 346427794, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjQyNzc5NA==", "user": {"value": 649467, "label": "mhalle"}, "created_at": "2017-11-22T17:55:45Z", "updated_at": "2017-11-22T17:55:45Z", "author_association": "NONE", "body": "Thanks. There is a way to use pip to grab apsw, which also let's you configure it (flags to build extensions, use an internal sqlite, etc). 
Don't know how that works as a dependency for another package, though.\n\nOn November 22, 2017 11:38:06 AM EST, Simon Willison wrote:\n>I have a solution for FTS already, but I'm interested in apsw as a\n>mechanism for allowing custom virtual tables to be written in Python\n>(pysqlite only lets you write custom functions)\n>\n>Not having PyPI support is pretty tough though. I'm planning a\n>plugin/extension system which would be ideal for things like an\n>optional apsw mode, but that's a lot harder if apsw isn't in PyPI.\n>\n>-- \n>You are receiving this because you authored the thread.\n>Reply to this email directly or view it on GitHub:\n>https://github.com/simonw/datasette/issues/144#issuecomment-346405660\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 276091279, "label": "apsw as alternative sqlite3 binding (for full text search)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/153#issuecomment-348252037", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/153", "id": 348252037, "node_id": "MDEyOklzc3VlQ29tbWVudDM0ODI1MjAzNw==", "user": {"value": 20264, "label": "ftrain"}, "created_at": "2017-11-30T16:59:00Z", "updated_at": "2017-11-30T16:59:00Z", "author_association": "NONE", "body": "WOW!\n\n--\nPaul Ford // (646) 369-7128 // @ftrain\n\nOn Thu, Nov 30, 2017 at 11:47 AM, Simon Willison \nwrote:\n\n> Remaining work on this now lives in a milestone:\n> https://github.com/simonw/datasette/milestone/6\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> ,\n> or mute the thread\n> \n> .\n>\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 276842536, "label": "Ability to customize presentation of 
specific columns in HTML view"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/155#issuecomment-347714314", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/155", "id": 347714314, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NzcxNDMxNA==", "user": {"value": 388154, "label": "wsxiaoys"}, "created_at": "2017-11-29T00:46:25Z", "updated_at": "2017-11-29T00:46:25Z", "author_association": "NONE", "body": "```\r\nCREATE TABLE rhs (\r\n id INTEGER PRIMARY KEY,\r\n name TEXT\r\n);\r\n\r\nCREATE TABLE lhs (\r\n symbol INTEGER PRIMARY KEY,\r\n FOREIGN KEY (symbol) REFERENCES rhs(id)\r\n);\r\n\r\nINSERT INTO rhs VALUES (1, \"foo\");\r\nINSERT INTO rhs VALUES (2, \"bar\");\r\nINSERT INTO lhs VALUES (1);\r\nINSERT INTO lhs VALUES (2);\r\n```\r\n\r\nIt's expected that in lhs's view, foo / bar should be displayed.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 277589569, "label": "A primary key column that has foreign key restriction associated won't rendering label column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/161#issuecomment-350108113", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/161", "id": 350108113, "node_id": "MDEyOklzc3VlQ29tbWVudDM1MDEwODExMw==", "user": {"value": 388154, "label": "wsxiaoys"}, "created_at": "2017-12-07T22:02:24Z", "updated_at": "2017-12-07T22:02:24Z", "author_association": "NONE", "body": "It's not throwing the validation error anymore, but i still cannot run following with query:\r\n```\r\nWITH RECURSIVE cnt(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM cnt LIMIT 10) SELECT x FROM cnt;\r\n```\r\n\r\nI got `near \"WITH\": syntax error`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 
278814220, "label": "Support WITH query "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/161#issuecomment-350182904", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/161", "id": 350182904, "node_id": "MDEyOklzc3VlQ29tbWVudDM1MDE4MjkwNA==", "user": {"value": 388154, "label": "wsxiaoys"}, "created_at": "2017-12-08T06:18:12Z", "updated_at": "2017-12-08T06:18:12Z", "author_association": "NONE", "body": "You're right... got this resolved after upgrading the sqlite version.\r\n\r\nThank you!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 278814220, "label": "Support WITH query "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/173#issuecomment-823961091", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/173", "id": 823961091, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMzk2MTA5MQ==", "user": {"value": 3747136, "label": "ColinMaudry"}, "created_at": "2021-04-21T10:37:05Z", "updated_at": "2021-04-21T10:37:36Z", "author_association": "NONE", "body": "I have the feeling that the text visible to users is 95% present in template files ([datasette/templates](https://github.com/simonw/datasette/tree/main/datasette/templates)). The python code mainly contains error messages.\r\n\r\nIn the current situation, the best way to provide a localized frontend is to translate the templates and [configure datasette to use them](https://docs.datasette.io/en/stable/custom_templates.html). I think I'm going to do it for French.\r\n\r\nIf we want localization to be better integrated, for the python code, I think [gettext](https://docs.python.org/3/library/gettext.html#localizing-your-application) is the way to go. 
The .po can be translated in user-friendly tools such as Transifex and Crowdin.\r\n\r\nFor the templates, I'm not sure how we could do it cleanly and keep it easy to maintain. Maybe the tools above could parse HTML and detect the strings to be translated.\r\n\r\nIn any case, implementing l10n is just the first step: a continuous process must be set up to maintain the translations and produce new ones while datasette keeps getting new features.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 281110295, "label": "I18n and L10n support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/173#issuecomment-826784306", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/173", "id": 826784306, "node_id": "MDEyOklzc3VlQ29tbWVudDgyNjc4NDMwNg==", "user": {"value": 3747136, "label": "ColinMaudry"}, "created_at": "2021-04-26T12:10:01Z", "updated_at": "2021-04-26T12:10:01Z", "author_association": "NONE", "body": "I found a neat tutorial to set up gettext with jinja2: http://siongui.github.io/2016/01/17/i18n-python-web-application-by-gettext-jinja2/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 281110295, "label": "I18n and L10n support"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-356115657", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 356115657, "node_id": "MDEyOklzc3VlQ29tbWVudDM1NjExNTY1Nw==", "user": {"value": 4313116, "label": "wulfmann"}, "created_at": "2018-01-08T22:22:32Z", "updated_at": "2018-01-08T22:22:32Z", "author_association": "NONE", "body": "This project probably would not be the place for that. This is a layer for SQLite specifically. 
It solves a similar problem as graphql, so adding that here wouldn't make sense.\r\n\r\nHere's an example i found from google that uses micro to run a graphql microservice. you'd just then need to connect your db.\r\nhttps://github.com/timneutkens/micro-graphql", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-356161672", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 356161672, "node_id": "MDEyOklzc3VlQ29tbWVudDM1NjE2MTY3Mg==", "user": {"value": 173848, "label": "yozlet"}, "created_at": "2018-01-09T02:35:35Z", "updated_at": "2018-01-09T02:35:35Z", "author_association": "NONE", "body": "@wulfmann I think I disagree, except I'm not entirely sure what you mean by that first paragraph. The JSON API that Datasette currently exposes is quite different to GraphQL.\r\n\r\nFurthermore, there's no \"just\" about connecting micro-graphql to a DB; at least, no more \"just\" than adding any other API. You still need to configure the schema, which is exactly the kind of thing that Datasette does for JSON API. 
This is why I think that GraphQL's a good fit here.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-356175667", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 356175667, "node_id": "MDEyOklzc3VlQ29tbWVudDM1NjE3NTY2Nw==", "user": {"value": 4313116, "label": "wulfmann"}, "created_at": "2018-01-09T04:19:03Z", "updated_at": "2018-01-09T04:19:03Z", "author_association": "NONE", "body": "@yozlet Yes I think that I was confused when I posted my original comment. I see your main point now and am in agreement.\r\n\r\n", "reactions": "{\"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 2, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-359697938", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 359697938, "node_id": "MDEyOklzc3VlQ29tbWVudDM1OTY5NzkzOA==", "user": {"value": 7193, "label": "gijs"}, "created_at": "2018-01-23T07:17:56Z", "updated_at": "2018-01-23T07:17:56Z", "author_association": "NONE", "body": "\ud83d\udc4d I'd like this too! 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-368625350", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 368625350, "node_id": "MDEyOklzc3VlQ29tbWVudDM2ODYyNTM1MA==", "user": {"value": 7431774, "label": "wuhland"}, "created_at": "2018-02-26T19:44:11Z", "updated_at": "2018-02-26T19:44:11Z", "author_association": "NONE", "body": "great idea!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-431867885", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 431867885, "node_id": "MDEyOklzc3VlQ29tbWVudDQzMTg2Nzg4NQ==", "user": {"value": 634572, "label": "eads"}, "created_at": "2018-10-22T15:24:57Z", "updated_at": "2018-10-22T15:24:57Z", "author_association": "NONE", "body": "I'd like this as well. It would let me access Datasette-driven projects from GatsbyJS the same way I can access Postgres DBs via Hasura. 
While I don't see SQLite replacing Postgres for the 50m row datasets I sometimes have to work with, there's a whole class of smaller datasets that are great with Datasette but currently would find another option.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-548508237", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 548508237, "node_id": "MDEyOklzc3VlQ29tbWVudDU0ODUwODIzNw==", "user": {"value": 634572, "label": "eads"}, "created_at": "2019-10-31T18:25:44Z", "updated_at": "2019-10-31T18:25:44Z", "author_association": "NONE", "body": "\ud83d\udc4b I'd be interested in building this out in Q1 or Q2 of 2020 if nobody has tackled it by then. I would love to integrate Datasette into @thechicagoreporter's practice, but we're also fully committed to GraphQL moving forward.", "reactions": "{\"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 2, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/176#issuecomment-617208503", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/176", "id": 617208503, "node_id": "MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw==", "user": {"value": 12976, "label": "nkirsch"}, "created_at": "2020-04-21T14:16:24Z", "updated_at": "2020-04-21T14:16:24Z", "author_association": "NONE", "body": "@eads I'm interested in helping, if there's still a need...", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 285168503, "label": "Add GraphQL endpoint"}, 
"performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/181#issuecomment-378297842", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/181", "id": 378297842, "node_id": "MDEyOklzc3VlQ29tbWVudDM3ODI5Nzg0Mg==", "user": {"value": 1957344, "label": "bsmithgall"}, "created_at": "2018-04-03T15:47:13Z", "updated_at": "2018-04-03T15:47:13Z", "author_association": "NONE", "body": "I can work on that -- would you prefer to inline a `display: hidden` and then have the javascript flip the visibility or include it as css?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 289425975, "label": "add \"format sql\" button to query page, uses sql-formatter"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/181#issuecomment-379759875", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/181", "id": 379759875, "node_id": "MDEyOklzc3VlQ29tbWVudDM3OTc1OTg3NQ==", "user": {"value": 1957344, "label": "bsmithgall"}, "created_at": "2018-04-09T13:53:14Z", "updated_at": "2018-04-09T13:53:14Z", "author_association": "NONE", "body": "I've implemented that approach in 86ac746. 
It does cause the button to pop in only after Codemirror is finished rendering which is a bit awkward.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 289425975, "label": "add \"format sql\" button to query page, uses sql-formatter"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/184#issuecomment-379788103", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/184", "id": 379788103, "node_id": "MDEyOklzc3VlQ29tbWVudDM3OTc4ODEwMw==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-04-09T15:15:11Z", "updated_at": "2018-04-09T15:15:11Z", "author_association": "NONE", "body": "Visit https://salaries.news.baltimoresun.com/salaries/bad-table.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 292011379, "label": "500 from missing table name"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/184#issuecomment-494459264", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/184", "id": 494459264, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5NDQ1OTI2NA==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2019-05-21T16:17:29Z", "updated_at": "2019-05-21T16:17:29Z", "author_association": "NONE", "body": "Reopening this because it still raises 500 for incorrect table capitalization. 
\r\n\r\nExample:\r\n\r\n- https://salaries.news.baltimoresun.com/salaries/2018+Maryland+state+salaries/1 200 OK\r\n- https://salaries.news.baltimoresun.com/salaries/bad-table/1 400\r\n- https://salaries.news.baltimoresun.com/salaries/2018+maryland+state+salaries/1 500 Internal Error (note lowercase 'm')\r\n\r\nI think because the table name exists but is not in its canonical form, it triggers a dict lookup error.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 292011379, "label": "500 from missing table name"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/185#issuecomment-370461231", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/185", "id": 370461231, "node_id": "MDEyOklzc3VlQ29tbWVudDM3MDQ2MTIzMQ==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-03-05T15:43:56Z", "updated_at": "2018-03-05T15:44:27Z", "author_association": "NONE", "body": "Yes. 
I think the simplest implementation is to change lines like\r\n\r\n```python\r\n metadata = self.ds.metadata.get('databases', {}).get(name, {})\r\n```\r\n\r\nto\r\n\r\n```python\r\nmetadata = {\r\n **self.ds.metadata,\r\n **self.ds.metadata.get('databases', {}).get(name, {}),\r\n}\r\n```\r\n\r\nso that specified inner values overwrite outer values, but only if they exist.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 299760684, "label": "Metadata should be a nested arbitrary KV store"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/185#issuecomment-376590265", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/185", "id": 376590265, "node_id": "MDEyOklzc3VlQ29tbWVudDM3NjU5MDI2NQ==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-03-27T16:32:51Z", "updated_at": "2018-03-27T16:32:51Z", "author_association": "NONE", "body": ">I think the templates themselves should be able to indicate if they want the inherited values or not. 
That way we could support arbitrary key/values and avoid the application code having special knowledge of license_url etc.\r\n\r\nYes, you could have `metadata` that works like `metadata` does currently and `inherited_metadata` that works with inheritance.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 299760684, "label": "Metadata should be a nested arbitrary KV store"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/185#issuecomment-376592044", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/185", "id": 376592044, "node_id": "MDEyOklzc3VlQ29tbWVudDM3NjU5MjA0NA==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-03-27T16:38:23Z", "updated_at": "2018-03-27T16:38:23Z", "author_association": "NONE", "body": "It would be nice to also allow arbitrary keys (maybe under a parent key called params or something to prevent conflicts). 
For our datasette project, we just have a bunch of dictionaries defined in the base template for things like site URL and column humanized names: https://github.com/baltimore-sun-data/salaries-datasette/blob/master/templates/base.html It would be cleaner if this were in the metadata.json.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 299760684, "label": "Metadata should be a nested arbitrary KV store"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/185#issuecomment-376614973", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/185", "id": 376614973, "node_id": "MDEyOklzc3VlQ29tbWVudDM3NjYxNDk3Mw==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-03-27T17:49:00Z", "updated_at": "2018-03-27T17:49:00Z", "author_association": "NONE", "body": "@simonw Other than metadata, the biggest item on wishlist for the salaries project was the ability to reorder by column. Of course, that could be done with a custom SQL query, but we didn't want to have to reimplement all the nav/pagination stuff from scratch. 
\r\n\r\n@carolinp, feel free to add your thoughts.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 299760684, "label": "Metadata should be a nested arbitrary KV store"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/185#issuecomment-412663658", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/185", "id": 412663658, "node_id": "MDEyOklzc3VlQ29tbWVudDQxMjY2MzY1OA==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-08-13T21:04:11Z", "updated_at": "2018-08-13T21:04:11Z", "author_association": "NONE", "body": "That seems good to me.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 299760684, "label": "Metadata should be a nested arbitrary KV store"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/186#issuecomment-374872202", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/186", "id": 374872202, "node_id": "MDEyOklzc3VlQ29tbWVudDM3NDg3MjIwMg==", "user": {"value": 47107, "label": "stefanocudini"}, "created_at": "2018-03-21T09:07:22Z", "updated_at": "2018-03-21T09:07:22Z", "author_association": "NONE", "body": "--debug is perfect tnk", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 306811513, "label": "proposal new option to disable user agents cache"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/187#issuecomment-427943710", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/187", "id": 427943710, "node_id": "MDEyOklzc3VlQ29tbWVudDQyNzk0MzcxMA==", "user": {"value": 1583271, "label": "progpow"}, "created_at": 
"2018-10-08T18:58:05Z", "updated_at": "2018-10-08T18:58:05Z", "author_association": "NONE", "body": "I have the same error:\r\n```\r\nCollecting uvloop\r\n Using cached https://files.pythonhosted.org/packages/5c/37/6daa39aac42b2deda6ee77f408bec0419b600e27b89b374b0d440af32b10/uvloop-0.11.2.tar.gz\r\n Complete output from command python setup.py egg_info:\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"C:\\Users\\sageev\\AppData\\Local\\Temp\\pip-install-bq64l8jy\\uvloop\\setup.py\", line 15, in <module>\r\n raise RuntimeError('uvloop does not support Windows at the moment')\r\n RuntimeError: uvloop does not support Windows at the moment\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309033998, "label": "Windows installation error"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/187#issuecomment-463917744", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/187", "id": 463917744, "node_id": "MDEyOklzc3VlQ29tbWVudDQ2MzkxNzc0NA==", "user": {"value": 4190962, "label": "phoenixjun"}, "created_at": "2019-02-15T05:58:44Z", "updated_at": "2019-02-15T05:58:44Z", "author_association": "NONE", "body": "Is this supported or not? 
You can comment if it is not supported so that people like me can stop trying.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309033998, "label": "Windows installation error"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/187#issuecomment-466325528", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/187", "id": 466325528, "node_id": "MDEyOklzc3VlQ29tbWVudDQ2NjMyNTUyOA==", "user": {"value": 2892252, "label": "fkuhn"}, "created_at": "2019-02-22T09:03:50Z", "updated_at": "2019-02-22T09:03:50Z", "author_association": "NONE", "body": "I ran into the same issue when trying to install datasette on Windows after successfully using it on Linux. Unfortunately, there has been no progress in implementing uvloop for Windows, so I recommend not using it there. You can read about this issue here:\r\nhttps://github.com/MagicStack/uvloop/issues/14", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309033998, "label": "Windows installation error"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/187#issuecomment-489353316", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/187", "id": 489353316, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4OTM1MzMxNg==", "user": {"value": 46059, "label": "carsonyl"}, "created_at": "2019-05-04T18:36:36Z", "updated_at": "2019-05-04T18:36:36Z", "author_association": "NONE", "body": "Hi @simonw - I just hit this issue when trying out Datasette after your PyCon talk today. Datasette is pinned to Sanic 0.7.0, but it looks like 0.8.0 added the option to remove the uvloop dependency for Windows by having an environment variable `SANIC_NO_UVLOOP` at install time. 
Maybe that'll be sufficient before a port to Starlette?", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 1, \"eyes\": 0}", "issue": {"value": 309033998, "label": "Windows installation error"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/187#issuecomment-490039343", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/187", "id": 490039343, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5MDAzOTM0Mw==", "user": {"value": 6422964, "label": "Maltazar"}, "created_at": "2019-05-07T11:24:42Z", "updated_at": "2019-05-07T11:24:42Z", "author_association": "NONE", "body": "I totally agree with carsonyl", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309033998, "label": "Windows installation error"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/188#issuecomment-398778485", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/188", "id": 398778485, "node_id": "MDEyOklzc3VlQ29tbWVudDM5ODc3ODQ4NQ==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2018-06-20T14:48:39Z", "updated_at": "2018-06-20T14:48:39Z", "author_association": "NONE", "body": "This would be a great feature to have!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309047460, "label": "Ability to bundle metadata and templates inside the SQLite file"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/189#issuecomment-379791047", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/189", "id": 379791047, "node_id": "MDEyOklzc3VlQ29tbWVudDM3OTc5MTA0Nw==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": 
"2018-04-09T15:23:45Z", "updated_at": "2018-04-09T15:23:45Z", "author_association": "NONE", "body": "Awesome!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309471814, "label": "Ability to sort (and paginate) by column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/189#issuecomment-381429213", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/189", "id": 381429213, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTQyOTIxMw==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-04-15T18:54:22Z", "updated_at": "2018-04-15T18:54:22Z", "author_association": "NONE", "body": "I think I found a bug. I tried to sort by middle initial in my salaries set, and many middle initials are null. The next_url gets set by Datasette to:\r\n\r\nhttp://localhost:8001/salaries-d3a5631/2017+Maryland+state+salaries?_next=None%2C391&_sort=middle_initial\r\n\r\nBut then `None` is interpreted literally and it tries to find a name with the middle initial \"None\" and ends up skipping ahead to O on page 2.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 309471814, "label": "Ability to sort (and paginate) by column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/191#issuecomment-381602005", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/191", "id": 381602005, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTYwMjAwNQ==", "user": {"value": 119974, "label": "coleifer"}, "created_at": "2018-04-16T13:37:32Z", "updated_at": "2018-04-16T13:37:32Z", "author_association": "NONE", "body": "I don't think it should be too difficult... you can look at what @ghaering did with pysqlite (and similarly what I copied for pysqlite3). 
You would theoretically take an amalgamation build of Sqlite (all code in a single .c and .h file). The `AmalgamationLibSqliteBuilder` class detects the presence of this amalgamated source file and builds a statically-linked pysqlite.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 310533258, "label": "Figure out how to bundle a more up-to-date SQLite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/191#issuecomment-392828475", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/191", "id": 392828475, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MjgyODQ3NQ==", "user": {"value": 119974, "label": "coleifer"}, "created_at": "2018-05-29T15:50:18Z", "updated_at": "2018-05-29T15:50:18Z", "author_association": "NONE", "body": "Python standard-library SQLite dynamically links against the system sqlite3. So presumably you installed a more up-to-date sqlite3 somewhere on your `LD_LIBRARY_PATH`.\r\n\r\nTo compile a statically-linked pysqlite you need to include an amalgamation in the project root when building the extension. Read the relevant setup.py.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 310533258, "label": "Figure out how to bundle a more up-to-date SQLite"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/193#issuecomment-379142500", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/193", "id": 379142500, "node_id": "MDEyOklzc3VlQ29tbWVudDM3OTE0MjUwMA==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-04-06T04:05:58Z", "updated_at": "2018-04-06T04:05:58Z", "author_association": "NONE", "body": "You could try pulling out a validate query strings method. 
If it fails validation build the error object from the message. If it passes, you only need to go down a happy path. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 310882100, "label": "Cleaner mechanism for handling custom errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/215#issuecomment-540548765", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/215", "id": 540548765, "node_id": "MDEyOklzc3VlQ29tbWVudDU0MDU0ODc2NQ==", "user": {"value": 2181410, "label": "clausjuhl"}, "created_at": "2019-10-10T12:27:56Z", "updated_at": "2019-10-10T12:27:56Z", "author_association": "NONE", "body": "Hi Simon. Any news on the ability to add routes (with static content) to datasette? As a public institution I'm required to have at least privacy, cookie and availability policies in place, and it really would be nice to have these under the same url. 
Thank you for some great work!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 314506669, "label": "Allow plugins to define additional URL routes and views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/227#issuecomment-439194286", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/227", "id": 439194286, "node_id": "MDEyOklzc3VlQ29tbWVudDQzOTE5NDI4Ng==", "user": {"value": 222245, "label": "carlmjohnson"}, "created_at": "2018-11-15T21:20:37Z", "updated_at": "2018-11-15T21:20:37Z", "author_association": "NONE", "body": "I'm diving back into https://salaries.news.baltimoresun.com and what I really want is the ability to inject the request into my context.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 315960272, "label": "prepare_context() plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-920543967", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 920543967, "node_id": "IC_kwDOBm6k_c423mLf", "user": {"value": 164214, "label": "sethvincent"}, "created_at": "2021-09-16T03:19:08Z", "updated_at": "2021-09-16T03:19:08Z", "author_association": "NONE", "body": ":wave: I just put together a small example using the lambda container image support: https://github.com/sethvincent/datasette-aws-lambda-example\r\n\r\nIt uses mangum and AWS's [python runtime interface client](https://github.com/aws/aws-lambda-python-runtime-interface-client) to handle the lambda event stuff.\r\n\r\nI'd be happy to help with a publish plugin for AWS lambda as I plan to use this for upcoming projects.\r\n\r\nThe example uses the [serverless](https://www.serverless.com) cli for deployment but there 
might be a more suitable deployment approach for the plugin. It would be cool if users didn't have to install anything additional other than the aws cli and its associated config/credentials setup.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-1033772902", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 1033772902, "node_id": "IC_kwDOBm6k_c49nh9m", "user": {"value": 1376648, "label": "jordaneremieff"}, "created_at": "2022-02-09T13:40:52Z", "updated_at": "2022-02-09T13:40:52Z", "author_association": "NONE", "body": "Hi @simonw, \r\n\r\nI've received some inquiries over the last year or so about Datasette and how it might be supported by [Mangum](https://github.com/jordaneremieff/mangum). 
I maintain Mangum which is, as far as I know, the only project that provides support for ASGI applications in AWS Lambda.\r\n\r\nIf there is anything that I can help with here, please let me know because I think what Datasette provides to the community (even beyond OSS) is noble and worthy of special consideration.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-1465208436", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 1465208436, "node_id": "IC_kwDOBm6k_c5XVU50", "user": {"value": 545193, "label": "sopel"}, "created_at": "2023-03-12T14:04:15Z", "updated_at": "2023-03-12T14:04:15Z", "author_association": "NONE", "body": "I keep coming back to this in search for the related exploration, so I'll just link it now:\r\n\r\n@simonw has meanwhile researched _how to deploy Datasette to AWS Lambda using function URLs and Mangum_ via https://github.com/simonw/public-notes/issues/6 and concluded _that's everything I need to know in order to build a datasette-publish-lambda plugin_.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/247#issuecomment-390689406", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/247", "id": 390689406, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MDY4OTQwNg==", "user": {"value": 11912854, "label": "jsancho-gpl"}, "created_at": "2018-05-21T15:29:31Z", "updated_at": "2018-05-21T15:29:31Z", "author_association": "NONE", "body": "I've changed my mind about the way to 
support external connectors aside from SQLite, and I'm working in a simpler style that respects the original Datasette, i.e. less refactoring. I present [a version of Datasette which supports other database connectors](https://github.com/jsancho-gpl/datasette/tree/external-connectors) and [a Datasette connector for HDF5/PyTables files](https://github.com/jsancho-gpl/datasette-pytables).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 319449852, "label": "SQLite code decoupled from Datasette"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/254#issuecomment-388367027", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/254", "id": 388367027, "node_id": "MDEyOklzc3VlQ29tbWVudDM4ODM2NzAyNw==", "user": {"value": 247131, "label": "philroche"}, "created_at": "2018-05-11T13:41:46Z", "updated_at": "2018-05-11T13:41:46Z", "author_association": "NONE", "body": "An example deployment @ https://datasette-zkcvlwdrhl.now.sh/simplestreams-270f20c/cloudimage?content_id__exact=com.ubuntu.cloud%3Areleased%3Adownload\r\n\r\nIt is not causing errors, more of an inconvenience. I have worked around it using a `like` query instead. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 322283067, "label": "Escaping named parameters in canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/254#issuecomment-626340387", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/254", "id": 626340387, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM0MDM4Nw==", "user": {"value": 247131, "label": "philroche"}, "created_at": "2020-05-10T14:54:13Z", "updated_at": "2020-05-10T14:54:13Z", "author_association": "NONE", "body": "This has now been resolved and is not present in current version of datasette. \r\n\r\nSample query @simonw mentioned now returns as expected. \r\n\r\nhttps://aggreg8streams.tinyviking.ie/simplestreams?sql=select+*+from+cloudimage+where+%22content_id%22+%3D+%22com.ubuntu.cloud%3Areleased%3Adownload%22+order+by+id+limit+10", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 322283067, "label": "Escaping named parameters in canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/258#issuecomment-390577711", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/258", "id": 390577711, "node_id": "MDEyOklzc3VlQ29tbWVudDM5MDU3NzcxMQ==", "user": {"value": 247131, "label": "philroche"}, "created_at": "2018-05-21T07:38:15Z", "updated_at": "2018-05-21T07:38:15Z", "author_association": "NONE", "body": "Excellent, I was not aware of the auto redirect to the new hash. My bad\r\n\r\nThis solves my use case.\r\n\r\nI do agree that your suggested --no-url-hash approach is much neater. 
I will investigate ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 322741659, "label": "Add new metadata key persistent_urls which removes the hash from all database urls"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/260#issuecomment-1051473892", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/260", "id": 1051473892, "node_id": "IC_kwDOBm6k_c4-rDfk", "user": {"value": 596279, "label": "zaneselvans"}, "created_at": "2022-02-26T02:24:15Z", "updated_at": "2022-02-26T02:24:15Z", "author_association": "NONE", "body": "Is there already functionality that can be used to validate the `metadata.json` file? Is there a JSON Schema that defines it? Or a validation that's available via datasette with Python? We're working on [automatically building the metadata](https://github.com/catalyst-cooperative/pudl/pull/1479) in CI and when we deploy to cloud run, and it would be nice to be able to check whether the the metadata we're outputting is valid in our tests.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323223872, "label": "Validate metadata.json on startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/260#issuecomment-1235079469", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/260", "id": 1235079469, "node_id": "IC_kwDOBm6k_c5JndEt", "user": {"value": 596279, "label": "zaneselvans"}, "created_at": "2022-09-02T05:24:59Z", "updated_at": "2022-09-02T05:24:59Z", "author_association": "NONE", "body": "@zschira is working with Pydantic while converting between and validating JSON frictionless datapackage descriptors that annotate an SQLite DB ([extracted from FERC's XBRL 
data](https://github.com/catalyst-cooperative/ferc-xbrl-extractor)) and the Datasette YAML metadata [so we can publish them with Datasette](https://github.com/catalyst-cooperative/pudl/pull/1831). Maybe there's some overlap? We've been loving Pydantic.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 1}", "issue": {"value": 323223872, "label": "Validate metadata.json on startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/265#issuecomment-392890045", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/265", "id": 392890045, "node_id": "MDEyOklzc3VlQ29tbWVudDM5Mjg5MDA0NQ==", "user": {"value": 231923, "label": "yschimke"}, "created_at": "2018-05-29T18:37:49Z", "updated_at": "2018-05-29T18:37:49Z", "author_association": "NONE", "body": "Just about to ask for this! Move this page https://github.com/simonw/datasette/wiki/Datasettes\r\n\r\ninto a datasette, with some concept of versioning as well.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323677499, "label": "Add links to example Datasette instances to appropiate places in docs"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/267#issuecomment-414860009", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/267", "id": 414860009, "node_id": "MDEyOklzc3VlQ29tbWVudDQxNDg2MDAwOQ==", "user": {"value": 78156, "label": "annapowellsmith"}, "created_at": "2018-08-21T23:57:51Z", "updated_at": "2018-08-21T23:57:51Z", "author_association": "NONE", "body": "Looks to me like hashing, redirects and caching were documented as part of https://github.com/simonw/datasette/commit/788a542d3c739da5207db7d1fb91789603cdd336#diff-3021b0e065dce289c34c3b49b3952a07 - so perhaps this can be closed? 
:tada:", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323716411, "label": "Documentation for URL hashing, redirects and cache policy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/268#issuecomment-789409126", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/268", "id": 789409126, "node_id": "MDEyOklzc3VlQ29tbWVudDc4OTQwOTEyNg==", "user": {"value": 649467, "label": "mhalle"}, "created_at": "2021-03-03T03:57:15Z", "updated_at": "2021-03-03T03:58:40Z", "author_association": "NONE", "body": "In FTS5, I think doing an FTS search is actually much easier than doing a join against the main table like datasette does now. In fact, FTS5 external content tables provide a transparent interface back to the original table or view.\r\n\r\nHere's what I'm currently doing:\r\n* build a view that joins whatever tables I want and rename the columns to non-joiny names (e.g, `chapter.name AS chapter_name` in the view where needed)\r\n* Create an FTS5 table with `content=\"viewname\"`\r\n* As described in the \"external content tables\" section (https://www.sqlite.org/fts5.html#external_content_tables), sql queries can be made directly to the FTS table, which behind the covers makes select calls to the content table when the content of the original columns are needed.\r\n* In addition, you get \"rank\" and \"bm25()\" available to you when you select on the _fts table.\r\n\r\nUnfortunately, datasette doesn't currently seem happy being coerced into doing a real query on an fts5 table. 
This works:\r\n```select col1, col2, col3 from table_fts where col1=\"value\" and table_fts match escape_fts(\"search term\") order by rank```\r\n\r\nBut this doesn't work in the datasette SQL query interface:\r\n```select col1, col2, col3 from table_fts where col1=\"value\" and table_fts match escape_fts(:search) order by rank``` (the \"search\" input text field doesn't show up)\r\n\r\nFor what datasette is doing right now, I think you could just use contentless fts5 tables (`content=\"\"`), since all you care about is the rowid: you're only doing a subselect to get the rowid anyway. In fts5, that's just a contentless table.\r\n\r\nI guess if you want to follow this suggestion, you'd need a somewhat different code path for fts5.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323718842, "label": "Mechanism for ranking results from SQLite full-text search"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/268#issuecomment-790257263", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/268", "id": 790257263, "node_id": "MDEyOklzc3VlQ29tbWVudDc5MDI1NzI2Mw==", "user": {"value": 649467, "label": "mhalle"}, "created_at": "2021-03-04T03:20:23Z", "updated_at": "2021-03-04T03:20:23Z", "author_association": "NONE", "body": "It's kind of an ugly hack, but you can try out what using the fts5 table as an actual datasette-accessible table looks like without changing any datasette code by creating yet another view on top of the fts5 table:\r\n\r\n`create view proxyview as select *, rank, table_fts as fts from table_fts;`\r\n\r\nThat's now visible from datasette, just like any other view, but you can use `fts match escape_fts(search_string) order by rank`.\r\n\r\nThis is only good as a proof of concept because you're inefficiently going from view -> fts5 external content table -> view -> 
data table. However, it does show it works.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323718842, "label": "Mechanism for ranking results from SQLite full-text search"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/268#issuecomment-876428348", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/268", "id": 876428348, "node_id": "MDEyOklzc3VlQ29tbWVudDg3NjQyODM0OA==", "user": {"value": 9308268, "label": "rayvoelker"}, "created_at": "2021-07-08T13:13:12Z", "updated_at": "2021-07-08T13:13:12Z", "author_association": "NONE", "body": "I had setup a full text search on my instance of Datasette for title data for our public library, and was noticing that some of the features of the SQLite FTS weren't working as expected ... and maybe the issue is in the `escape_fts()` function\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/124925900-f1ea8b00-dfca-11eb-895e-59cc083d6524.png)\r\nvs removing the function...\r\n![image](https://user-images.githubusercontent.com/9308268/124925971-0464c480-dfcb-11eb-8fbf-8e9b5d6e0861.png)\r\n\r\nAlso, on the issue of sorting by rank by default .. 
perhaps something like this could work for the baked-in default SQL query for Datasette?\r\n![image](https://user-images.githubusercontent.com/9308268/124927191-5a863780-dfcc-11eb-9908-3f63577d5ff5.png)\r\n\r\n[link to the above search in my instance of Datasette](https://ilsweb.cincinnatilibrary.org/collection-analysis/current_collection-87a9011?sql=with+fts_search+as+%28%0D%0A++select%0D%0A++rowid%2C%0D%0A++rank%0D%0A++++from%0D%0A++++++bib_fts%0D%0A++++where%0D%0A++++++bib_fts+match+%3Asearch%0D%0A%29%0D%0A%0D%0Aselect%0D%0A++%0D%0A++bib_record_num%2C%0D%0A++creation_date%2C%0D%0A++record_last_updated%2C%0D%0A++isbn%2C%0D%0A++best_author%2C%0D%0A++best_title%2C%0D%0A++publisher%2C%0D%0A++publish_year%2C%0D%0A++bib_level_callnumber%2C%0D%0A++indexed_subjects%0D%0Afrom%0D%0A++fts_search%0D%0A++join+bib+on+bib.rowid+%3D+fts_search.rowid%0D%0A++%0D%0Aorder+by%0D%0Arank%0D%0A&search=black+death+NOT+fiction)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323718842, "label": "Mechanism for ranking results from SQLite full-text search"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/268#issuecomment-876721585", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/268", "id": 876721585, "node_id": "MDEyOklzc3VlQ29tbWVudDg3NjcyMTU4NQ==", "user": {"value": 9308268, "label": "rayvoelker"}, "created_at": "2021-07-08T20:22:17Z", "updated_at": "2021-07-08T20:22:17Z", "author_association": "NONE", "body": "I do like the idea of there being a option for turning that on by default so that you could use those terms in the default \"Search\" bar presented when you browse to a table where FTS has been enabled. Maybe even a small inline pop up with a short bit explaining the FTS feature and the keywords (e.g. case matters). 
What are the side-effects of turning that on in the query string, or even by default as you suggested? I see that you stated in the docs... \"to ensure they do not cause any confusion for users who are not aware of them\", but I'm not sure what those could be.\r\n\r\nIsn't it the case that those keywords are only picked up by sqlite where you're using the MATCH clause?\r\n\r\nSeems like a really powerful feature (even though there are a lot of hurdles around setting it up in the sqlite db ... sqlite-utils makes that so simple by the way!)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323718842, "label": "Mechanism for ranking results from SQLite full-text search"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/272#issuecomment-400571521", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/272", "id": 400571521, "node_id": "MDEyOklzc3VlQ29tbWVudDQwMDU3MTUyMQ==", "user": {"value": 647359, "label": "tomchristie"}, "created_at": "2018-06-27T07:30:07Z", "updated_at": "2018-06-27T07:30:07Z", "author_association": "NONE", "body": "I\u2019m up for helping with this.\r\n\r\nLooks like you\u2019d need static files support, which I\u2019m planning on adding a component for. Anything else obviously missing?\r\n\r\nFor a quick overview it looks very doable - the test client ought to mean your test cases stay roughly the same.\r\n\r\nAre you using any middleware or other components for the Sanic ecosystem?
Do you use cookies or sessions at all?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324188953, "label": "Port Datasette to ASGI"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/272#issuecomment-404514973", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/272", "id": 404514973, "node_id": "MDEyOklzc3VlQ29tbWVudDQwNDUxNDk3Mw==", "user": {"value": 647359, "label": "tomchristie"}, "created_at": "2018-07-12T13:38:24Z", "updated_at": "2018-07-12T13:38:24Z", "author_association": "NONE", "body": "Okay. I reckon the latest version should have all the kinds of components you'd need:\r\n\r\nRecently added ASGI components for Routing and Static Files support, as well as making few tweaks to make sure requests and responses are instantiated efficiently.\r\n\r\nDon't have any redirect-to-slash / redirect-to-non-slash stuff out of the box yet, which it looks like you might miss.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324188953, "label": "Port Datasette to ASGI"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/272#issuecomment-418695115", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/272", "id": 418695115, "node_id": "MDEyOklzc3VlQ29tbWVudDQxODY5NTExNQ==", "user": {"value": 647359, "label": "tomchristie"}, "created_at": "2018-09-05T11:21:25Z", "updated_at": "2018-09-05T11:21:25Z", "author_association": "NONE", "body": "Some notes:\r\n\r\n* Starlette just got a bump to 0.3.0 - there's some renamings in there. It's got enough functionality now that you can treat it either as a framework or as a toolkit. 
Either way the component design is all just *here's an ASGI app* all the way through.\r\n* Uvicorn got a bump to 0.3.3 - Removed some cyclical references that were causing garbage collection to impact performance. Ought to be a decent speed bump.\r\n* Wrt. passing config - Either use a single envvar that points to a config, or use multiple envvars for the config. Uvicorn could get a flag to read a `.env` file, but I don't see ASGI itself having a specific interface there.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324188953, "label": "Port Datasette to ASGI"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/272#issuecomment-494297022", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/272", "id": 494297022, "node_id": "MDEyOklzc3VlQ29tbWVudDQ5NDI5NzAyMg==", "user": {"value": 647359, "label": "tomchristie"}, "created_at": "2019-05-21T08:39:17Z", "updated_at": "2019-05-21T08:39:17Z", "author_association": "NONE", "body": "Useful context stuff:\r\n\r\n> ASGI decodes %2F encoded slashes in URLs automatically\r\n\r\n`raw_path` for ASGI looks to be under consideration: https://github.com/django/asgiref/issues/87\r\n\r\n> uvicorn doesn't support Python 3.5\r\n\r\nThat was an issue specifically against the <=3.5.2 minor point releases of Python, now resolved: https://github.com/encode/uvicorn/issues/330 \ud83d\udc4d\r\n\r\n> Starlette for things like form parsing - but it's 3.6+ only!\r\n\r\nYeah - the bits that require 3.6 are anywhere with the \"async for\" syntax. If it wasn't for that I'd downport it, but that one's a pain. 
It's the one bit of syntax to watch out for if you're looking to bring any bits of implementation across to Datasette.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324188953, "label": "Port Datasette to ASGI"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-744461856", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 744461856, "node_id": "MDEyOklzc3VlQ29tbWVudDc0NDQ2MTg1Ng==", "user": {"value": 296686, "label": "robintw"}, "created_at": "2020-12-14T14:04:57Z", "updated_at": "2020-12-14T14:04:57Z", "author_association": "NONE", "body": "I'm looking into using datasette with a database with spatialite geometry columns, and came across this issue. Has there been any progress on this since 2018?\r\n\r\nIn one of my tables I'm just storing lat/lon points in a spatialite point geometry, and I've managed to make datasette-cluster-map display the points by extracting the lat and lon in SQL - using something like `select ... ST_X(location) as longitude, ST_Y(location) as latitude from Blah`. 
Something more 'built-in' would be great though - particularly for the tables I have that store more complex geometries.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-780991910", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 780991910, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MDk5MTkxMA==", "user": {"value": 9308268, "label": "rayvoelker"}, "created_at": "2021-02-18T02:13:56Z", "updated_at": "2021-02-18T02:13:56Z", "author_association": "NONE", "body": "I was going ask you about this issue when we talk during your office-hours schedule this Friday, but was there any support ever added for doing this cross-database joining?\r\n\r\nI have a use-case where could be pretty neat to do analysis using this tool on time-specific databases from snapshots\r\n\r\nhttps://ilsweb.cincinnatilibrary.org/collection-analysis/\r\n\r\n![image](https://user-images.githubusercontent.com/9308268/108294883-ba3a8e00-7164-11eb-9206-fcd5a8cdd883.png)\r\n\r\nand thanks again for such an amazing tool!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/283#issuecomment-789680230", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/283", "id": 789680230, "node_id": "MDEyOklzc3VlQ29tbWVudDc4OTY4MDIzMA==", "user": {"value": 605492, "label": "justinpinkney"}, "created_at": "2021-03-03T12:28:42Z", "updated_at": "2021-03-03T12:28:42Z", "author_association": "NONE", "body": "One note on using this pragma I got 
an error on starting datasette `no such table: pragma_database_list`. \r\n\r\nI diagnosed this to an older version of sqlite3 (3.14.2) and upgrading to a newer version (3.34.2) fixed the issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 325958506, "label": "Support cross-database joins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/293#issuecomment-420295524", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/293", "id": 420295524, "node_id": "MDEyOklzc3VlQ29tbWVudDQyMDI5NTUyNA==", "user": {"value": 11912854, "label": "jsancho-gpl"}, "created_at": "2018-09-11T14:32:45Z", "updated_at": "2018-09-11T14:32:45Z", "author_association": "NONE", "body": "I close this PR because it's better to use the new one #364 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 326987229, "label": "Support for external database connectors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/316#issuecomment-398030903", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/316", "id": 398030903, "node_id": "MDEyOklzc3VlQ29tbWVudDM5ODAzMDkwMw==", "user": {"value": 132230, "label": "gavinband"}, "created_at": "2018-06-18T12:00:43Z", "updated_at": "2018-06-18T12:00:43Z", "author_association": "NONE", "body": "I should add that I'm using datasette version 0.22, Python 2.7.10 on Mac OS X. 
Happy to send more info if helpful.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 333238932, "label": "datasette inspect takes a very long time on large dbs"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/316#issuecomment-398109204", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/316", "id": 398109204, "node_id": "MDEyOklzc3VlQ29tbWVudDM5ODEwOTIwNA==", "user": {"value": 132230, "label": "gavinband"}, "created_at": "2018-06-18T16:12:45Z", "updated_at": "2018-06-18T16:12:45Z", "author_association": "NONE", "body": "Hi Simon,\r\nThanks for the response. Ok I'll try running `datasette inspect` up front.\r\nIn principle the db won't change. However, the site's in development and it's likely I'll need to add views and some auxiliary (smaller) tables as I go along. I will need to be careful with this if it involves an inspect step in each iteration, though.\r\ng.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 333238932, "label": "datasette inspect takes a very long time on large dbs"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/321#issuecomment-399098080", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/321", "id": 399098080, "node_id": "MDEyOklzc3VlQ29tbWVudDM5OTA5ODA4MA==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2018-06-21T13:10:48Z", "updated_at": "2018-06-21T13:10:48Z", "author_association": "NONE", "body": "Perfect, thank you!!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 334190959, "label": "Wildcard support in query parameters"}, 
"performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/321#issuecomment-399106871", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/321", "id": 399106871, "node_id": "MDEyOklzc3VlQ29tbWVudDM5OTEwNjg3MQ==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2018-06-21T13:39:37Z", "updated_at": "2018-06-21T13:39:37Z", "author_association": "NONE", "body": "One thing I've noticed with this approach is that the query is executed with no parameters which I do not believe was the case previously. In the case the table contains a lot of data, this adds some time executing the query before the user can enter their input and run it with the parameters they want.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 334190959, "label": "Wildcard support in query parameters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/321#issuecomment-399129220", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/321", "id": 399129220, "node_id": "MDEyOklzc3VlQ29tbWVudDM5OTEyOTIyMA==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2018-06-21T14:45:02Z", "updated_at": "2018-06-21T14:45:02Z", "author_association": "NONE", "body": "Those queries look identical. 
How can this be prevented if the queries are in a metadata.json file?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 334190959, "label": "Wildcard support in query parameters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/321#issuecomment-399173916", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/321", "id": 399173916, "node_id": "MDEyOklzc3VlQ29tbWVudDM5OTE3MzkxNg==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2018-06-21T17:00:10Z", "updated_at": "2018-06-21T17:00:10Z", "author_association": "NONE", "body": "Oh I see.. My issue is that the query executes with an empty string prior to the user submitting the parameters. I'll try adding your workaround to some of my queries. Thanks again,", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 334190959, "label": "Wildcard support in query parameters"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/327#issuecomment-584657949", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/327", "id": 584657949, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NDY1Nzk0OQ==", "user": {"value": 1055831, "label": "dazzag24"}, "created_at": "2020-02-11T14:21:15Z", "updated_at": "2020-02-11T14:21:15Z", "author_association": "NONE", "body": "See https://github.com/simonw/datasette/issues/657 and my changes that allow datasette to load parquet files ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 335200136, "label": "Explore if SquashFS can be used to shrink size of packaged Docker containers"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/327#issuecomment-1043609198", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/327", "id": 1043609198, "node_id": "IC_kwDOBm6k_c4-NDZu", "user": {"value": 208018, "label": "dholth"}, "created_at": "2022-02-17T23:21:36Z", "updated_at": "2022-02-17T23:33:01Z", "author_association": "NONE", "body": "On fly.io. This particular database goes from 1.4GB to 200M. Slower, part of that might be having no `--inspect-file`?\r\n\r\n```\r\n$ datasette publish fly ... --generate-dir /tmp/deploy-this\r\n...\r\n$ mksquashfs large.db large.squashfs\r\n$ rm large.db # don't accidentally put it in the image\r\n$ cat Dockerfile\r\nFROM python:3.8\r\nCOPY . /app\r\nWORKDIR /app\r\n\r\nENV DATASETTE_SECRET 'xyzzy'\r\nRUN pip install -U datasette\r\n# RUN datasette inspect large.db --inspect-file inspect-data.json\r\nENV PORT 8080\r\nEXPOSE 8080\r\nCMD mount -o loop -t squashfs large.squashfs /mnt; datasette serve --host 0.0.0.0 -i /mnt/large.db --cors --port $PORT\r\n```\r\n\r\nIt would also be possible to copy the file onto the ~6GB available on the ephemeral container filesystem on startup. A little against the spirit of the thing? 
On this example the whole docker image is 2.42 GB and the squashfs version is 1.14 GB.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 335200136, "label": "Explore if SquashFS can be used to shrink size of packaged Docker containers"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/327#issuecomment-1043626870", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/327", "id": 1043626870, "node_id": "IC_kwDOBm6k_c4-NHt2", "user": {"value": 208018, "label": "dholth"}, "created_at": "2022-02-17T23:37:24Z", "updated_at": "2022-02-17T23:37:24Z", "author_association": "NONE", "body": "On second thought any kind of quick-to-decompress-on-startup could be helpful if we're paying for the container registry and deployment bandwidth but not ephemeral storage.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 335200136, "label": "Explore if SquashFS can be used to shrink size of packaged Docker containers"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/328#issuecomment-427261369", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/328", "id": 427261369, "node_id": "MDEyOklzc3VlQ29tbWVudDQyNzI2MTM2OQ==", "user": {"value": 13698964, "label": "chmaynard"}, "created_at": "2018-10-05T06:37:06Z", "updated_at": "2018-10-05T06:37:06Z", "author_association": "NONE", "body": "```\r\n~ $ docker pull datasetteproject/datasette\r\n~ $ docker run -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db\r\nUsage: datasette -p [OPTIONS] [FILES]...\r\n\r\nError: Invalid value for \"files\": Path \"/mnt/fixtures.db\" does not exist.\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, 
\"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 336464733, "label": "Installation instructions, including how to use the docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/328#issuecomment-1706701195", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/328", "id": 1706701195, "node_id": "IC_kwDOBm6k_c5lujGL", "user": {"value": 7983005, "label": "eric-burel"}, "created_at": "2023-09-05T14:10:39Z", "updated_at": "2023-09-05T14:10:39Z", "author_association": "NONE", "body": "Hey @simonw I hit the same issue as mentioned by @chmaynard on a fresh install, \"/mnt/fixtures.db\" doesn't seem to exist in the docker image", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 336464733, "label": "Installation instructions, including how to use the docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/339#issuecomment-404576136", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/339", "id": 404576136, "node_id": "MDEyOklzc3VlQ29tbWVudDQwNDU3NjEzNg==", "user": {"value": 12617395, "label": "bsilverm"}, "created_at": "2018-07-12T16:45:08Z", "updated_at": "2018-07-12T16:45:08Z", "author_association": "NONE", "body": "Thanks for the quick reply.
Looks like that is working well.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 340396247, "label": "Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/352#issuecomment-584203999", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/352", "id": 584203999, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NDIwMzk5OQ==", "user": {"value": 870184, "label": "xrotwang"}, "created_at": "2020-02-10T16:18:58Z", "updated_at": "2020-02-10T16:18:58Z", "author_association": "NONE", "body": "I don't want to re-open this issue, but I'm wondering whether it would be possible to include the full row for which a specific cell is to be rendered in the hook signature. My use case are rows where custom rendering would need access to multiple values (specifically, rows containing the constituents of interlinear glossed text (IGT) in separate columns, see https://github.com/cldf/cldf/tree/master/components/examples).\r\n\r\nI could probably cobble this together with custom SQL and the sql-to-html plugin. 
But having a full row within a `render_cell` implementation seems a lot simpler.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 345821500, "label": "render_cell(value) plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/363#issuecomment-417684877", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/363", "id": 417684877, "node_id": "MDEyOklzc3VlQ29tbWVudDQxNzY4NDg3Nw==", "user": {"value": 436032, "label": "kevboh"}, "created_at": "2018-08-31T14:39:45Z", "updated_at": "2018-08-31T14:39:45Z", "author_association": "NONE", "body": "It looks like the check passed, not sure why it's showing as running in GH.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 355299310, "label": "Search all apps during heroku publish"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/393#issuecomment-451415063", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/393", "id": 451415063, "node_id": "MDEyOklzc3VlQ29tbWVudDQ1MTQxNTA2Mw==", "user": {"value": 1727065, "label": "ltrgoddard"}, "created_at": "2019-01-04T11:04:08Z", "updated_at": "2019-01-04T11:04:08Z", "author_association": "NONE", "body": "Awesome - will get myself up and running on 0.26", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 395236066, "label": "CSV export in \"Advanced export\" pane doesn't respect query"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567127981", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567127981, "node_id": 
"MDEyOklzc3VlQ29tbWVudDU2NzEyNzk4MQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T17:18:06Z", "updated_at": "2019-12-18T17:18:06Z", "author_association": "NONE", "body": "Agreed, this would be nice to have. I'm currently working around it in `nginx` with additional location blocks:\r\n\r\n```\r\n\r\n location /datasette/ {\r\n proxy_pass http://127.0.0.1:8001/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n\r\n location /dna-protein-genome/ {\r\n proxy_pass http://127.0.0.1:8001/dna-protein-genome/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n\r\n location /rna-protein-genome/ {\r\n proxy_pass http://127.0.0.1:8001/rna-protein-genome/;\r\n proxy_redirect off;\r\n include proxy_params;\r\n }\r\n```\r\n\r\nThe 2nd and 3rd above are my databases. This works, but I have a small problem with URLs like `/rna-protein-genome?params....` that I could fix with some more nginx munging. I seem to do this sort of thing once every 5 years and then have to look it all up again.\r\n\r\nThanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567128636", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567128636, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEyODYzNg==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T17:19:46Z", "updated_at": "2019-12-18T17:19:46Z", "author_association": "NONE", "body": "Hmmm, wait, maybe my mindless (copy/paste) use of `proxy_redirect` is causing me grief...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url 
configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567219479", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567219479, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzIxOTQ3OQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2019-12-18T21:24:23Z", "updated_at": "2019-12-18T21:24:23Z", "author_association": "NONE", "body": "@simonw What about allowing a base url. The `....` tag has been around forever. Then just use all relative URLs, which I guess is likely what you already do. See https://www.w3schools.com/TAGs/tag_base.asp", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602904184", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602904184, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkwNDE4NA==", "user": {"value": 1448859, "label": "betatim"}, "created_at": "2020-03-23T23:03:42Z", "updated_at": "2020-03-23T23:03:42Z", "author_association": "NONE", "body": "On mybinder.org we allow access to arbitrary processes listening on a port inside the container via a [reverse proxy](https://github.com/jupyterhub/jupyter-server-proxy).\r\n\r\nThis means we need support for a proxy prefix as the proxy ends up running at a URL like `/something/random/proxy/datasette/...`\r\n\r\nAn example that shows the problem is https://github.com/psychemedia/jupyterserverproxy-datasette-demo. 
Launch directly into a datasette instance on mybinder.org with https://mybinder.org/v2/gh/psychemedia/jupyterserverproxy-datasette-demo/master?urlpath=datasette then try to follow links inside the UI.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602911133", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602911133, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkxMTEzMw==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-23T23:22:10Z", "updated_at": "2020-03-23T23:22:10Z", "author_association": "NONE", "body": "I just updated #652 to remove a merge conflict. I think it's an easy way to add this functionality. I don't have time to do more though, sorry!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602916580", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602916580, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkxNjU4MA==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-23T23:37:06Z", "updated_at": "2020-03-23T23:37:06Z", "author_association": "NONE", "body": "@simonw You're welcome - I was just trying it out back in December as I thought it should work. Now there's a pandemic to work on though.... so no time at all for more at the moment. BTW, I have datasette running on several protein and full (virus) genome databases I build, and it's great - thank you! 
Hi and best regards to you & Nat :-)", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 1, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-603539349", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 603539349, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMzUzOTM0OQ==", "user": {"value": 132978, "label": "terrycojones"}, "created_at": "2020-03-24T22:33:23Z", "updated_at": "2020-03-24T22:33:23Z", "author_association": "NONE", "body": "Hi Simon - I'm just (trying, at least) to follow along in the above. I can't try it out now, but I will if no one else gets to it. Sorry I didn't write any tests in the original bit of code I pushed - I was just trying to see if it could work & whether you'd want to maybe head in that direction. Anyway, thank you, I will certainly use this. Comment back here if no one tried it out & I'll make time.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null}