{"id": 574035432, "node_id": "MDU6SXNzdWU1NzQwMzU0MzI=", "number": 692, "title": "is_hidden_table context variable on table.html page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-03-02T15:03:25Z", "updated_at": "2020-03-02T15:03:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It's useful to know if a table is hidden when rendering that page. `datasette-configure-fts` for example may want to disallow enabling search on hidden tables.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/692/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 539204432, "node_id": "MDU6SXNzdWU1MzkyMDQ0MzI=", "number": 70, "title": "Implement ON DELETE and ON UPDATE actions for foreign keys", "user": {"value": 26292069, "label": "LucasElArruda"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-12-17T17:19:10Z", "updated_at": "2020-02-27T04:18:53Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi! I did not find any mention on the library about ON DELETE and ON UPDATE actions for foreign keys. Are those expected to be implemented? If not, it would be a nice thing to include!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/70/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 550293770, "node_id": "MDU6SXNzdWU1NTAyOTM3NzA=", "number": 658, "title": "How do I use the app.css as style sheet?", "user": {"value": 49656826, "label": "null92"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-01-15T16:27:57Z", "updated_at": "2020-02-07T00:29:50Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Simon,\r\n\r\nI'm trying to use the app.css (in static folder) as style sheet but the datasette on Heroku simply ignore it! 
I read everything about customization here and on readthedocs but still can't get it to work.\r\n\r\nIs this possible?\r\n\r\nThanks!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/658/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 559964149, "node_id": "MDU6SXNzdWU1NTk5NjQxNDk=", "number": 665, "title": "Introduce a SQL statement parser in Python", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-02-04T20:36:05Z", "updated_at": "2020-02-04T20:36:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "#254 and #653 are both examples of problems that could be solved using a real SQL parser in Python.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/665/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 546073980, "node_id": "MDU6SXNzdWU1NDYwNzM5ODA=", "number": 74, "title": "Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column", "user": {"value": 15092, "label": "jayvdb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-01-07T04:35:50Z", "updated_at": "2020-01-12T07:21:17Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "openSUSE 15.1 is using python 3.6.5 and click-7.0, however it has test failures while openSUSE Tumbleweed on py37 passes.\r\n\r\nMost fail on the cli exit code like\r\n```py\r\n[ 74s] =================================== FAILURES ===================================\r\n[ 74s] _________________________________ test_tables __________________________________\r\n[ 74s] \r\n[ 74s] db_path = '/tmp/pytest-of-abuild/pytest-0/test_tables0/test.db'\r\n[ 74s] \r\n[ 74s] def test_tables(db_path):\r\n[ 74s] result = CliRunner().invoke(cli.cli, [\"tables\", db_path])\r\n[ 74s] > assert '[{\"table\": \"Gosh\"},\\n {\"table\": \"Gosh2\"}]' == result.output.strip()\r\n[ 74s] E assert '[{\"table\": \"...e\": \"Gosh2\"}]' == ''\r\n[ 74s] E - [{\"table\": \"Gosh\"},\r\n[ 74s] E - {\"table\": \"Gosh2\"}]\r\n[ 74s] \r\n[ 74s] tests/test_cli.py:28: AssertionError\r\n```\r\n\r\npackaging project at https://build.opensuse.org/package/show/home:jayvdb:py-new/python-sqlite-utils\r\n\r\nI'll keep digging into this after I have github-to-sqlite working on Tumbleweed, as I'll need openSUSE Leap 15.1 working before I can submit this into the main python repo.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/74/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 541274681, "node_id": "MDU6SXNzdWU1NDEyNzQ2ODE=", "number": 2, "title": "Add linkedin-to-sqlite", "user": 
{"value": 881925, "label": "mnp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-12-21T03:13:40Z", "updated_at": "2019-12-21T03:13:40Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There is an API available. https://developer.linkedin.com/docs/rest-api#\r\n\r\nAt the minimum, I would think contact list and messages would be of interest.", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 527710055, "node_id": "MDU6SXNzdWU1Mjc3MTAwNTU=", "number": 640, "title": "Nicer error message for heroku publish name clash", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-11-24T14:57:07Z", "updated_at": "2019-12-06T07:19:34Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If you try to publish to Heroku using no set name (i.e. the default `datasette` name) and a project already exists under that name, you get a meaningful error report on the first line followed by Py error messages that drown it out:\r\n\r\n```\r\nCreating datasette... !\r\n \u25b8 Name datasette is already taken\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasette\", line 10, in \r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 764, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 717, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 956, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 555, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/NNNNN/Library/Python/3.7/lib/python/site-packages/datasette/publish/heroku.py\", line 124, in heroku\r\n create_output = check_output(cmd).decode(\"utf8\")\r\n File \"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py\", line 411, in check_output\r\n **kwargs).stdout\r\n File \"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py\", line 512, in run\r\n output=stdout, stderr=stderr)\r\nsubprocess.CalledProcessError: Command '['heroku', 'apps:create', 'datasette', '--json']' returned non-zero exit status 1.\r\n```\r\n\r\nIt would be neater if:\r\n\r\n- the Py error message was caught;\r\n- the report suggested setting a project name using `-n` etc.\r\n\r\nIt may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, 
"reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/640/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 527670799, "node_id": "MDU6SXNzdWU1Mjc2NzA3OTk=", "number": 639, "title": "updating metadata.json without recreating the app", "user": {"value": 172847, "label": "pkoppstein"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2019-11-24T09:19:53Z", "updated_at": "2019-11-30T06:08:50Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I've sucessfully \"uploaded\" an SQLite database (with a metadata.json file) to heroku using:\r\n\r\n $ datasette publish heroku so-sales.db -m metadata.json -n so-sales\r\n\r\nThe question is: how can I modify the (small) metadata.json file without having to upload the (large) SQLite database.\r\n\r\nThe directions on heroku indicate I should run:\r\n\r\n heroku git:clone -a so-sales\r\n\r\nBut this just results in an empty directory with a warning:\r\nwarning: You appear to have cloned an empty repository.\r\n\r\nI've been able to \"clone\" the heroku \"app\" using the command:\r\n\r\n $ heroku slugs:download -a so-sales\r\n\r\nbut this is not a git repository....\r\n\r\nIdeally, it seems to me, there'd be an option of the `datasette` CLI to allow a file\r\nto be updated, or there'd be some way to create a local git \"clone\" of the app\r\nso that the heroku instructions for \"Deploying with git\" would apply.\r\n\r\n(p.s. I ran `datasette publish heroku -m metadata.json -n so-sales`\r\nin the hope that that would not cause the .db file to be wiped, but of course\r\nit was.)\r\n\r\n(p.p.s. 
Thanks for Datasette!)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/639/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 530468212, "node_id": "MDU6SXNzdWU1MzA0NjgyMTI=", "number": 643, "title": "Set up some basic benchmarks as part of the unit tests", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-11-29T19:24:19Z", "updated_at": "2019-11-29T19:24:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://pypi.org/project/pytest-benchmark/ looks great for this.\r\n\r\nHere's how to run it as a github action: https://github.com/rhysd/github-action-benchmark/blob/master/examples/pytest/README.md", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/643/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 464987783, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk1MTI3MjEz", "number": 546, "title": "Facet by delimiter", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-07T20:06:05Z", "updated_at": "2019-11-18T23:46:01Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/546", "body": "Refs #510", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/546/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 501773982, "node_id": "MDExOlB1bGxSZXF1ZXN0MzIzOTgzNzMy", "number": 579, "title": "New connection pooling", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-10-02T23:22:19Z", "updated_at": "2019-11-15T22:57:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/579", "body": "See #569", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/579/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 516874735, "node_id": "MDU6SXNzdWU1MTY4NzQ3MzU=", "number": 613, "title": "Basic join support for table view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-11-03T19:12:53Z", "updated_at": "2019-11-03T19:14:01Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I think it would be possible to support basic foreign key joins 
on the table page.\r\n\r\nThe user could specify columns that should result in a join (from a set of suggestions similar to how facets work right now) and they could then be passed as `?_join=city_id` arguments.\r\n\r\nThis feature will make a lot of sense when combined with the ability to show / hide / customize columns, see #292", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/613/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 510076368, "node_id": "MDU6SXNzdWU1MTAwNzYzNjg=", "number": 605, "title": "Support queries at the table level", "user": {"value": 12617395, "label": "bsilverm"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-10-21T15:58:30Z", "updated_at": "2019-10-30T18:55:37Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Per the issue described in [issue #588](https://github.com/simonw/datasette/issues/588), it was determined queries are not supported at the table level. Per my last comment in the issue, I'd like to request support for this as it would help eliminate errors in the event certain tables are not present in the database.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/605/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 457147936, "node_id": "MDU6SXNzdWU0NTcxNDc5MzY=", "number": 512, "title": "\"about\" parameter in metadata does not appear when alone", "user": {"value": 7936571, "label": "chrismp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-06-17T21:04:20Z", "updated_at": "2019-10-11T15:49:13Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Here's an example of metadata I have for one database on datasette.\r\n\r\n```\r\n\"Records-requests\": {\r\n\t\"tables\": {\r\n\t\t\"Some table\": {\r\n\t\t\t\"about\": \"This table has data.\"\r\n\t\t}\r\n\t}\r\n}\r\n```\r\n\r\nThe text in `about` does not show up when I publish the data. 
But it shows up after I add a `\"source\"` parameter in the metadata.\r\n\r\nIs this intended?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/512/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 505673645, "node_id": "MDU6SXNzdWU1MDU2NzM2NDU=", "number": 16, "title": "Do a better job with archived direct message threads", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-11T06:55:21Z", "updated_at": "2019-10-11T06:55:27Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://github.com/dogsheep/twitter-to-sqlite/blob/fb2698086d766e0333a55bb73435e7283feeb438/twitter_to_sqlite/archive.py#L98-L99", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 504720731, "node_id": "MDU6SXNzdWU1MDQ3MjA3MzE=", "number": 1, "title": "Add more details on how to request data from google takeout correctly.", "user": {"value": 1055831, "label": "dazzag24"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-09T15:17:34Z", "updated_at": "2019-10-09T15:17:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:\r\n\r\n- My Activity\r\n- Location History\r\n\r\nIn addition unless you specify that \"My Activity\" is downloaded in JSON format the default is HTML. 
This then causes the \r\n\r\n`google-takeout-to-sqlite my-activity takeout.db takeout.zip`\r\n\r\ncommand to fail as it only contains html files not json files.\r\n\r\nThanks", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 503053243, "node_id": "MDU6SXNzdWU1MDMwNTMyNDM=", "number": 582, "title": "Datasette should not completely crash if one SQLite database is malformed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-06T05:11:43Z", "updated_at": "2019-10-06T05:11:43Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "If you run Datasette against a number of database files and one of them is malformed, you get this 500 error on the index page:\r\n\r\n\"Error_500\"\r\n\r\nIt would be better if Datasette still worked and listed the databases that were NOT malformed, then showed an inline error message just for the one that could not be accessed.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/582/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 481885279, "node_id": "MDU6SXNzdWU0ODE4ODUyNzk=", "number": 569, "title": "More advanced connection pooling", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2019-08-17T13:20:41Z", "updated_at": "2019-10-02T22:44:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "We need a much smarter way of handling database connections.\r\n\r\nToday, connections are simple: Datasette runs a number of threads (defaults to 3) and each thread gets a threadlocal read-only (or immutable) connection to each attached database - opened on demand.\r\n\r\nFor Datasette Library (#417) I want to support potentially hundreds of attached databases. Datasette Edit (#567) is going to introduce a need for writable connections too.\r\n\r\nI'd also like to be able to run joins across multiple databases (#283) which further complicates things.\r\n\r\nSupporting thousands of open SQLite connections at once feels like it won't provide good enough performance (though I should benchmark that to be sure). 
Some kind of connection pooling is likely to be necessary.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/569/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 488874815, "node_id": "MDU6SXNzdWU0ODg4NzQ4MTU=", "number": 5, "title": "Write tests that simulate the Twitter API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-09-03T23:55:35Z", "updated_at": "2019-09-03T23:56:28Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "I can use betamax for this: https://pypi.org/project/betamax/", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 463544206, "node_id": "MDU6SXNzdWU0NjM1NDQyMDY=", "number": 537, "title": "Populate \"endpoint\" key in ASGI scope", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 12, "created_at": "2019-07-03T04:54:47Z", "updated_at": "2019-07-22T06:03:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is a trick used by Starlette so that other layers of ASGI middleware can see which route was selected.\r\n\r\nThey added it here: https://github.com/encode/starlette/commit/34d0097feb6f057bd050d5057df5a2f96b97384e\r\n\r\nIf Datasette supports it as well we can benefit from it if we integrate this sentry_asgi middleware (probably as a `datasette-sentry` plugin): https://github.com/encode/sentry-asgi/blob/c6a42d44d31f85885b79e4ee898683ecf8104971/sentry_asgi/middleware.py#L34-L35", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/537/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 465003070, "node_id": "MDU6SXNzdWU0NjUwMDMwNzA=", "number": 551, "title": "Ship many-to-many faceting support (and facet-by-delimiter)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-07T23:11:45Z", "updated_at": "2019-07-08T15:45:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/551/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 456569067, "node_id": "MDU6SXNzdWU0NTY1NjkwNjc=", "number": 510, "title": "Ability to 
facet by delimiter (e.g. comma separated fields)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": {"value": 9599, "label": "simonw"}, "milestone": null, "comments": 1, "created_at": "2019-06-15T19:34:41Z", "updated_at": "2019-07-08T15:44:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "E.g. if a field contains \"Tags,With,Commas\" be able to facet them in the same way as `_facet_array=` lets you facet `[\"Tags\", \"With\", \"Commas\"]`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 462117311, "node_id": "MDU6SXNzdWU0NjIxMTczMTE=", "number": 531, "title": "/database/-/inspect", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-06-28T16:33:41Z", "updated_at": "2019-07-08T15:43:57Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Build `/database/-/inspect` which shows tables, columns, column types and foreign keys\r\n\r\nIt won't show table counts. Or maybe it will include them optionally but only for `-i` databases, in a special area of the JSON reserved for immutable-only inspect details.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/465#issuecomment-506797086_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/531/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 465327844, "node_id": "MDU6SXNzdWU0NjUzMjc4NDQ=", "number": 553, "title": "Potential improvements to facet-by-date", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-07-08T15:37:53Z", "updated_at": "2019-07-08T15:41:55Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In addition to #483 Tobias had some useful suggestions on Twitter:\r\n\r\nhttps://twitter.com/rixxtr/status/1148253926476701696\r\n> I think for date facets, it might be more meaningful to order them by date, rather than by size? Or offer both? 
I'm *definitely* often interested in size-over-time, so https://data.rixx.de/django_tickets/tickets?_facet_date=created#facet-created \u2026 isn't all that helpful!\r\n\r\nScreenshot of that link:\r\n\r\n\"django_tickets__tickets__29_846_rows\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/553/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 465019882, "node_id": "MDU6SXNzdWU0NjUwMTk4ODI=", "number": 552, "title": "Add --plugin-secret support to \"datasette package\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-07-08T01:46:47Z", "updated_at": "2019-07-08T01:47:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split out from #544.\r\n\r\nI think I should combine this with #347 (renaming `datasette package` to `datasette publish docker`).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/552/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 327395270, "node_id": "MDU6SXNzdWUzMjczOTUyNzA=", "number": 296, "title": "Per-database and per-table /-/ URL namespace", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-05-29T16:23:13Z", "updated_at": "2019-06-28T16:46:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Initially this will be for subsets of `/-/inspect` and `/-/metadata` but it will also give us a URL namespace for future features like `/-/facet` (expanded list of a specific facet, linked to from `...`) and `/-/graph`\r\n\r\nTo start:\r\n\r\n* `/dbname/-/inspect`\r\n* `/dbname/-/metadata`\r\n* `/dbname/tablename/-/inspect`\r\n* `/dbname/tablename/-/metadata`\r\n\r\nThis means we will no longer allow databases or tables to have the name `\"-\"` - I think that's OK\r\n\r\nWe will continue to support rows with a primary key of `\"-\"` at the following URL:\r\n\r\n* `/dbname/tablename/-`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/296/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 327365110, "node_id": "MDU6SXNzdWUzMjczNjUxMTA=", "number": 294, "title": "inspect should record column types", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2018-05-29T15:10:41Z", "updated_at": "2019-06-28T16:45:28Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "For each table we want to know the columns, their order and what type they are.\r\n\r\nI'm going to break with SQLite defaults a little on this one and 
allow datasette to define additional types - to start with just a `geometry` type for columns that are detected as SpatiaLite geometries.\r\n\r\nPossible JSON design:\r\n\r\n \"columns\": [{\r\n \"name\": \"title\",\r\n \"type\": \"text\"\r\n }, ...]\r\n\r\nRefs #276", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/294/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 459622390, "node_id": "MDU6SXNzdWU0NTk2MjIzOTA=", "number": 522, "title": "Handle case-insensitive headers in a nicer way", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-06-23T21:56:34Z", "updated_at": "2019-06-26T18:48:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spun out from https://github.com/simonw/datasette/pull/518#discussion_r296486289", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/522/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 460095928, "node_id": "MDU6SXNzdWU0NjAwOTU5Mjg=", "number": 528, "title": "Establish a pattern for Datasette plugins built on top of Pandas", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-06-24T21:05:52Z", "updated_at": "2019-06-24T21:05:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The Pandas ecosystem is huge, varied and full of tools that are really good at doing interesting analysis on top of tabular data.\r\n\r\nPandas should not be a dependency of Datasette core, but I think there is a lot of potential in having plugins which use Pandas to apply interesting analysis to data sucked out of Datasette's SQLite tables.\r\n\r\nOne example ([thanks, Tony](https://twitter.com/psychemedia/status/1143259809715752962)): https://github.com/ResidentMario/missingno could form the basis of a fantastic plugin for getting a high-level overview of how complete each column in a table is.\r\n\r\nSome thought is needed here about what shape these kind of plugins might take, and what plugin hooks they would use.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/528/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 459469278, "node_id": "MDU6SXNzdWU0NTk0NjkyNzg=", "number": 515, "title": "Try shrinking official image with docker-slim", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-06-22T12:25:37Z", "updated_at": "2019-06-22T12:25:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This looks really 
promising: https://github.com/docker-slim/docker-slim\r\n\r\nIf it can shave substantial size from our official container reliably we could add it to the automated build process.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/515/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 451585764, "node_id": "MDU6SXNzdWU0NTE1ODU3NjQ=", "number": 499, "title": "Accessibility for non-techie newsies? ", "user": {"value": 7936571, "label": "chrismp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-06-03T16:49:37Z", "updated_at": "2019-06-05T21:22:55Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out-of-the-box, or as a plugin? ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/499/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 447408527, "node_id": "MDU6SXNzdWU0NDc0MDg1Mjc=", "number": 483, "title": "Option to facet by date using month or year", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2019-05-23T01:25:29Z", "updated_at": "2019-05-29T21:38:27Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Facet by date (from #481) can take datetimes and facet them by the day component.\r\n\r\nhttps://latest.datasette.io/fixtures/facetable?_facet_date=created\r\n\r\nI'd like to also be able to facet by month or year.\r\n\r\nI'm not sure what the best way to achieve this is. Could be two more Facet classes (YearFacet and MonthFacet) but I think it might be nicer if the existing DateFacet could take an optional argument that changed its behaviour. But... 
if I do that, do I expose it in the UI somewhere or is it only available to URL-hackers?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/483/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 449445715, "node_id": "MDU6SXNzdWU0NDk0NDU3MTU=", "number": 491, "title": "Figure out how to use Firebase with cloudrun to enable vanity URLs and CDN caching", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-28T19:48:06Z", "updated_at": "2019-05-28T19:48:35Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It looks like Firebase can solve a couple of problems with the existing `datasette publish cloudrun` hosting mechanism:\r\n\r\n* The URLs it produces aren't pretty enough. Firebase offers more control over vanity URLs.\r\n* CDN caching (as seen in `datasette publish now`) is great for improving performance and saving money on Cloud Run execution time.\r\n\r\nhttps://firebase.google.com/docs/hosting/cloud-run looks like it can help with both of these.\r\n\r\nLots of interesting questions:\r\n\r\n* Should this be a new `datasette publish firebase` command or should it instead be implemented as additional custom options to `datasette publish cloudrun`?\r\n* How much harder does it become to do account setup?\r\n* How much will this option cost users?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/491/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 447451492, "node_id": "MDU6SXNzdWU0NDc0NTE0OTI=", "number": 484, "title": "Mechanism for displaying summary of m2m relationships in rows on table view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-05-23T05:02:41Z", "updated_at": "2019-05-23T06:34:05Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Part of #354 (m2m support)\r\n\r\nIt would be fantastic if rows that are part of a m2m relationship could display it in an additional column in the table view.\r\n\r\nIt might look something like this: https://russian-ira-facebook-ads.datasettes.com/russian-ads-919cbfd/display_ads?_search=black+lives+matter\r\n\r\n\"russian-ads__display_ads__50_rows_where_where_search_matches__black_lives_matter_\"\r\n\r\nThat example [was achieved](https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/daf51a8c50a78e8bc7971c211005fd85e66ccf64/russian-ads-metadata.yaml#L72-L77) using a custom SQL query and [datasette-json-html](https://github.com/simonw/datasette-json-html) - but I'd like this to be a built-in feature instead.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/484/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, 
\"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 346027040, "node_id": "MDU6SXNzdWUzNDYwMjcwNDA=", "number": 355, "title": "Table view should support filtering via many-to-many relationships", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-07-31T04:04:16Z", "updated_at": "2019-05-23T06:04:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Parent: #354 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/355/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 275159710, "node_id": "MDU6SXNzdWUyNzUxNTk3MTA=", "number": 128, "title": "Every visualization should have an \"embed\" button", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-11-19T13:38:13Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "At least for the first round of visualizations, any time you construct one using the UI the result should include an \"embed this\" button that returns source code to copy and paste\r\n\r\nThese examples should use unpkg.com (or similarl) urls with SRI hashes, eg https://www.srihash.org - and should load data from the datasette JSON API.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/128/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 275415799, "node_id": "MDU6SXNzdWUyNzU0MTU3OTk=", "number": 137, "title": "Ability to combine multiple SQL queries on a single graph", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-20T16:26:57Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This would make visualizations significantly more powerful. The interesting challenge will be around the URL design. 
It would be useful to be able to combine either multiple explicit SQL queries or multiple queries based on the filter string parameters passed to one or more table views.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/137/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 275755475, "node_id": "MDU6SXNzdWUyNzU3NTU0NzU=", "number": 140, "title": "Heatmap visualization plugin", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-21T15:34:23Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Could use https://github.com/scottbedard/svelte-heatmap", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/140/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 288438570, "node_id": "MDU6SXNzdWUyODg0Mzg1NzA=", "number": 179, "title": "More metadata options for template authors ", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-01-14T20:51:04Z", "updated_at": "2019-05-13T18:33:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "See this thread on Twitter: https://twitter.com/simonw/status/952637152797458432", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/179/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 299760684, "node_id": "MDU6SXNzdWUyOTk3NjA2ODQ=", "number": 185, "title": "Metadata should be a nested arbitrary KV store", "user": {"value": 222245, "label": "carlmjohnson"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 12, "created_at": "2018-02-23T16:02:07Z", "updated_at": "2019-05-13T18:33:33Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I started using the metadata feature and was surprised to find that values are not inherited from the root object down to specific databases and tables. This makes metadata much less useful and requires a lot of pointless duplication.\r\n\r\nIdeally, metadata should allow arbitrary key-value pairs, and there should be a way of accessing metadata either in an inherited or non-inherited manner. Something like `metadata.page.key` vs. 
`metadata.this.key` might work as an interface.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/185/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 440325850, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2", "number": 452, "title": "SQL builder utility classes", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-04T13:57:47Z", "updated_at": "2019-05-04T14:03:04Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/452", "body": "This adds a straightforward set of classes to aid in the construction of\r\nSQL queries.\r\n\r\nMy plan for this was to allow plugins to manipulate the\r\nDatasette-generated SQL in a more structured way. I'm not sure that's\r\ngoing to work, but I feel like this is still a step forward - it\r\nreduces the number of intermediate variables in `TableView.data` which\r\naids readability, and also factors out a lot of the boring string\r\nconcatenation.\r\n\r\nThere are a fair number of minor structure changes in here too as I've\r\ntried to make the ordering of `TableView.data` a bit more logical. As\r\nfar as I can tell, I haven't broken anything...", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/452/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 411257981, "node_id": "MDU6SXNzdWU0MTEyNTc5ODE=", "number": 412, "title": "Linked Data(sette)", "user": {"value": 43340, "label": "sfkeller"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-02-18T00:38:14Z", "updated_at": "2019-03-19T10:09:46Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I've a radical feature idea (possible first as an extension in order to experiment?): \r\n\r\nI'd like to link to a remote table from a remote database, e.g. with a function \"linked_datasette()\". 
So one could do the following query:\r\n```\r\nSELECT foo.id, foo.a, remote_party.b\r\nFROM foo\r\nJOIN linked_datasette(\"https://parlgov.datasettes.com/parlgov-b42a2f2\") AS remote_party \r\n ON foo.id=remote_party.id\r\n```\r\nThis is inspired by SPARQL's SERVICE keyword for remote RDF \"endpoints\".\r\n\r\nThere's a foundation in the SQL Standard called SQL/MED (https://rhaas.blogspot.com/2011/01/why-sqlmed-is-cool.html ).\r\n\r\nAnd here's an implementation from me in Postgres FDW to connect another Postgres \"endpoint\": https://pastebin.com/Fz2v64Cz .", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/412/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 400340905, "node_id": "MDU6SXNzdWU0MDAzNDA5MDU=", "number": 402, "title": "Use SQLITE_DBCONFIG_DEFENSIVE plus other recommendations from SQLite security docs", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-01-17T15:52:28Z", "updated_at": "2019-01-17T16:15:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Was just having a skim through the datasette source. Given that the vuln impacts shadow tables, wasn't sure whether these are also covered by the immutable flag. Latest release introduced a SQLITE_DBCONFIG_DEFENSIVE flag that they recommend setting: https://sqlite.org/security.html\r\n\r\nhttps://twitter.com/ignoredambience/status/1085926961413869568", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/402/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 377166793, "node_id": "MDU6SXNzdWUzNzcxNjY3OTM=", "number": 372, "title": "Docker build tools", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-11-04T16:02:35Z", "updated_at": "2018-11-04T16:02:35Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components such as `requirements.txt` files.\r\n\r\nIf plugin/extensions builders want to include additional packages, then things like incremental or composable builds that add additional items into a base `datasette` container may be required.\r\n\r\nExamples of Dockerfile generators / container builders:\r\n\r\n- [openshift/source-to-image (s2i)](https://github.com/openshift/source-to-image)\r\n- [jupyter/repo2docker](https://github.com/jupyter/repo2docker)\r\n- [stencila/dockter](https://github.com/stencila/dockter)\r\n\r\nDiscussions / threads (via Binderhub gitter) on:\r\n- [why `repo2docker` not `s2i`](http://words.yuvi.in/post/why-not-s2i/)\r\n- [why `dockter` not `repo2docker`](https://twitter.com/choldgraf/status/1058499607309647872)\r\n- [composability in 
`s2i`](https://trello.com/c/AexIVZNf/1008-8-composable-builds-builds-evg)\r\n\r\nRelates to things like:\r\n\r\n- https://github.com/simonw/datasette/pull/280", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/372/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 2, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 330826972, "node_id": "MDU6SXNzdWUzMzA4MjY5NzI=", "number": 308, "title": "Support extra Heroku apps:create options - region, space, team", "user": {"value": 78156, "label": "annapowellsmith"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-08T23:08:33Z", "updated_at": "2018-09-21T14:09:28Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "It would be useful to document how to pass Heroku CLI options on `datasette publish`, e.g. `--region eu`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/308/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 359075028, "node_id": "MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx", "number": 364, "title": "Support for other types of databases using external connectors", "user": {"value": 11912854, "label": "jsancho-gpl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-09-11T14:31:47Z", "updated_at": "2018-09-11T14:31:47Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/364", "body": "This PR is related to #293, but now all commits have been merged.\r\n\r\nThe purpose is to support other file formats that aren't SQLite, like files with PyTables format. 
I've tried to accomplish that using external connectors published with entry points.\r\n\r\nThe modifications in the original datasette code are minimal and many are in a separate file.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/364/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 355299310, "node_id": "MDExOlB1bGxSZXF1ZXN0MjExODYwNzA2", "number": 363, "title": "Search all apps during heroku publish", "user": {"value": 436032, "label": "kevboh"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-08-29T19:25:10Z", "updated_at": "2018-08-31T14:39:45Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/363", "body": "Adds the `-A` option to include apps from all organizations when searching app names for publish.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/363/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 344654623, "node_id": "MDU6SXNzdWUzNDQ2NTQ2MjM=", "number": 347, "title": "Rename \"datasette package\" to \"datasette publish docker\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-07-26T00:42:46Z", "updated_at": "2018-07-26T00:42:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/347/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 341228846, "node_id": "MDU6SXNzdWUzNDEyMjg4NDY=", "number": 343, "title": "Render boolean fields better by default", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-07-14T11:10:29Z", "updated_at": "2018-07-14T14:17:14Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "These show up as 0 or 1 because sqlite. 
I think Yes/No would be fine in most cases?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/343/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 318490133, "node_id": "MDU6SXNzdWUzMTg0OTAxMzM=", "number": 241, "title": "Default datasette logging format should be JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-27T17:32:48Z", "updated_at": "2018-07-10T17:45:40Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Structured logs are better. Datasette should default to outputting its HTTP access log lines as newline-delimited JSON instead of the Sanic default format it uses at the moment.\r\n\r\nFor improved greppability these logs should have keys ordered in a consistent way. Python's JSON module can do this with ordered dictionaries (see the log-line sketch after the final record in this document).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/241/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 314771615, "node_id": "MDU6SXNzdWUzMTQ3NzE2MTU=", "number": 218, "title": "Support custom unit display in order to handle \"$10,000\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-16T18:39:31Z", "updated_at": "2018-07-10T17:45:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I tried to get Datasette to display `$10,000` using the new units support but we currently only display units as a suffix:\r\n\r\nhttps://github.com/simonw/datasette/blob/10a34f995c70daa37a8a2aa02c3135a4b023a24c/datasette/app.py#L563-L572\r\n\r\nIt would be neat if there was a mechanism for specifying a custom unit display - maybe something like this:\r\n\r\n```\r\n{\r\n \"custom_units\": {\r\n \"us_dollar\": {\r\n \"unit\": \"us_dollar = [] = $\",\r\n \"format\": \"${:,}\"\r\n }\r\n }\r\n}\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/218/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 312395790, "node_id": "MDU6SXNzdWUzMTIzOTU3OTA=", "number": 197, "title": "Ability to sort by more than one column", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-09T05:13:30Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189.\r\n\r\nI'd like to support \"sort by X descending, then by Y ascending if there are dupes for X\" as well.
Suggested syntax for that:\r\n\r\n ?_sort_desc=X&_sort=Y\r\n\r\nWe currently only allow one argument to be sent. We should allow as many arguments as there are columns, for example:\r\n\r\n ?_sort=department&_sort_desc=precinct&_sort=age&_sort_desc=size", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/197/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 312396095, "node_id": "MDU6SXNzdWUzMTIzOTYwOTU=", "number": 198, "title": "Ability to sort with nulls last", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-09T05:15:40Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189.\r\n\r\nHere's how to do that in SQL: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+rowid%2C+*+from+%5Bnfl-wide-receivers%2Fadvanced-historical%5D%0D%0Aorder+by+case+when+career_ranypa+is+null+then+1+else+0+end%2C+career_ranypa%2C+rowid\r\n\r\n order by case when career_ranypa is null then 1 else 0 end, career_ranypa", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/198/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 326778161, "node_id": "MDU6SXNzdWUzMjY3NzgxNjE=", "number": 290, "title": "Consider increasing the default for num_sql_threads (currently 3)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-27T00:52:41Z", "updated_at": "2018-05-27T00:52:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I ran a very rough micro-benchmark on the new `num_sql_threads` config option (added in #285)\r\n\r\n datasette --config num_sql_threads:1 fivethirtyeight.db\r\n\r\nThen\r\n\r\n ab -n 100 -c 10 'http://127.0.0.1:8011/fivethirtyeight-2628db9/twitter-ratio%2Fsenators'\r\n\r\n| Number of threads | Requests/second |\r\n|---|---|\r\n| 1 | 4.57 |\r\n| 3 | 9.77 |\r\n| 10 | 13.53 |\r\n| 20 | 15.24 |\r\n| 50 | 8.21 |\r\n\r\nThis was on my early 2018 OS X laptop. Need to benchmark in other common environments before making a decision on changing the default.
That said, the default of 3 was a number I plucked out of thin air.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/290/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 326599525, "node_id": "MDU6SXNzdWUzMjY1OTk1MjU=", "number": 286, "title": "Database hash should include current datasette version", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-25T17:03:42Z", "updated_at": "2018-05-25T17:07:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Right now deploying a new version of datasette doesn't invalidate existing URLs, so users may still see a cached copy of the old templates.\r\n\r\nWe can fix this by including the current datasette version in the input to the hash function (which is currently just the database file contents).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/286/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 319449852, "node_id": "MDU6SXNzdWUzMTk0NDk4NTI=", "number": 247, "title": "SQLite code decoupled from Datasette", "user": {"value": 11912854, "label": "jsancho-gpl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-05-02T08:03:28Z", "updated_at": "2018-05-21T15:29:31Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm working on the possibility of using Datasette with other file formats that aren't SQLite, like files with [PyTables](https://github.com/PyTables/PyTables) format.\r\n\r\nIn order to accomplish that, I've started [a fork for decoupling the code related to SQLite](https://github.com/jsancho-gpl/datasette/tree/feature/db-type-plugin) and putting it in an external connector to allow future connectors for a lot of file formats.\r\n\r\nIt'd be nice if you could look at it and suggest improvements for a possible PR.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/247/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 320132682, "node_id": "MDU6SXNzdWUzMjAxMzI2ODI=", "number": 250, "title": "Setup some issue templates", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-04T01:49:07Z", "updated_at": "2018-05-04T01:49:07Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://twitter.com/left_pad/status/99216385740464537\r\n\r\nI like the idea of using these to help people understand some of the ways I want to use issues.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason":
null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/250/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 316621102, "node_id": "MDU6SXNzdWUzMTY2MjExMDI=", "number": 235, "title": "Add limit on the size in KB of data returned from a single query", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-22T23:01:15Z", "updated_at": "2018-04-24T00:30:02Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette limits the number of rows returned to 1,000 and limits the time spent executing a SQL query to 1000ms - and both of these limits can be customized.\r\n\r\nIt does not have a limit on the size of the response returned. It's possible to compose maliciously large SQL responses in a small number of rows using mechanisms like the `group_concat()` aggregate function. It would be good to avoid malicious SQL creating 100MB+ responses and potentially crashing the server.\r\n\r\nI think the easiest place to implement that is here:\r\n\r\nhttps://github.com/simonw/datasette/blob/f3f42957128c1e7ece584d45d9167f2ac003a3b8/datasette/app.py#L175-L190\r\n\r\nCurrently we use `cursor.fetchmany()` to fetch up to 1,001 rows at once. Instead, we could switch to iterating through `cursor.fetchone()` (or just using `for row in cursor`) and keeping a running tally of the size of the response as we go - maybe just using `rough_response_size += len(str(row))`. If that goes above a certain threshold we can terminate the response with an error, like we do with timelimits.\r\n\r\nThe bigger challenge here is understanding how well this approach works and what impact it will have on overall Datasette performance. I think I need #33 for this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/235/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 314834783, "node_id": "MDU6SXNzdWUzMTQ4MzQ3ODM=", "number": 219, "title": "Expose units in the JSON API?", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-16T22:04:25Z", "updated_at": "2018-04-16T22:04:25Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map.\r\n\r\nI'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). 
Perhaps the SQL function is a better approach in general.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/219/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 268110769, "node_id": "MDU6SXNzdWUyNjgxMTA3Njk=", "number": 33, "title": "Use locust for benchmarking and load tests", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-10-24T17:00:09Z", "updated_at": "2017-12-10T03:12:16Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/locustio/locust\r\n\r\nNeeded for #32 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/33/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 267515678, "node_id": "MDU6SXNzdWUyNjc1MTU2Nzg=", "number": 3, "title": "Make individual column values addressable, with smart content types", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-10-23T01:11:32Z", "updated_at": "2017-12-10T03:11:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Some SQLite databases embed images in columns. It would be cool if these had URLs.\r\n\r\n /database-name-7sha256/table-name/compound-pk/column\r\n /database-name-7sha256/table-name/compound-pk/column.json\r\n /database-name-7sha256/table-name/compound-pk/column.png\r\n /database-name-7sha256/table-name/compound-pk/column.gif\r\n /database-name-7sha256/table-name/compound-pk/column.txt\r\n\r\nThe one without an explicit file extension auto-detects the correct extension.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/3/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
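The running-tally idea described in issue 235 above can be illustrated with a minimal, hypothetical Python sketch. This is not Datasette's implementation: `fetch_with_size_limit`, `ResponseTooLargeError` and the `max_bytes` parameter are invented for the example; only the row-by-row cursor iteration and the `rough_response_size += len(str(row))` heuristic come from the issue text.

```python
import sqlite3


class ResponseTooLargeError(Exception):
    """Raised when a query's accumulated output exceeds the size cap (hypothetical)."""


def fetch_with_size_limit(conn, sql, params=(), max_bytes=1024 * 1024):
    """Fetch rows one at a time, keeping a rough running tally of output size."""
    rows = []
    rough_response_size = 0
    for row in conn.execute(sql, params):  # row by row instead of fetchmany()
        rough_response_size += len(str(row))  # cheap approximation, as in the issue
        if rough_response_size > max_bytes:
            raise ResponseTooLargeError(
                "response exceeded %d bytes after %d rows" % (max_bytes, len(rows))
            )
        rows.append(row)
    return rows


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("create table t (s text)")
    conn.executemany("insert into t values (?)", [("x" * 1000,)] * 5)
    try:
        fetch_with_size_limit(conn, "select group_concat(s) from t", max_bytes=4000)
    except ResponseTooLargeError as e:
        print(e)  # the single 5,000+ character group_concat row trips the 4,000 byte cap
```

The demo deliberately uses `group_concat()` to show how one row can blow past a size budget that a row-count limit would never catch; what this per-row bookkeeping costs in practice is exactly the open question the issue defers to #33 for benchmarking.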
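Issue 241 above proposes newline-delimited JSON access logs with consistently ordered keys. A small, hypothetical sketch of such a log-line emitter follows; the `log_request` helper and its field names are assumptions, not Datasette's API. On Python 3.7+ plain dicts preserve insertion order, so `json.dumps` emits the same key sequence on every line; the `OrderedDict` the issue mentions (or `sort_keys=True`) achieves the same effect on older versions.

```python
import json
import sys
from datetime import datetime, timezone


def log_request(method, path, status, duration_ms, out=sys.stdout):
    """Write one newline-delimited JSON access-log line with a fixed key order (hypothetical helper)."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "method": method,
        "path": path,
        "status": status,
        "duration_ms": duration_ms,
    }
    # json.dumps serializes keys in dict insertion order, so every log line
    # shares the same key sequence and stays greppable; sort_keys=True is an
    # alternative if alphabetical order is preferred.
    out.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_request("GET", "/fivethirtyeight-2628db9/twitter-ratio%2Fsenators", 200, 12.3)
```

One line per request keeps the log trivially parseable with `grep` plus a JSON decoder, which is the greppability the issue is after.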