{"id": 267517348, "node_id": "MDU6SXNzdWUyNjc1MTczNDg=", "number": 9, "title": "Initial test suite", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 2, "created_at": "2017-10-23T01:28:46Z", "updated_at": "2017-10-24T05:55:33Z", "closed_at": "2017-10-24T05:55:33Z", "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/9/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 267828746, "node_id": "MDU6SXNzdWUyNjc4Mjg3NDY=", "number": 24, "title": "Implement full URL design", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 2, "created_at": "2017-10-23T21:49:05Z", "updated_at": "2017-10-24T14:12:00Z", "closed_at": "2017-10-24T14:12:00Z", "author_association": "OWNER", "pull_request": null, "body": "Full URL design:\r\n\r\n /database-name\r\n /database-name.json\r\n /database-name-7sha256\r\n /database-name-7sha256.json\r\n /database-name/table-name\r\n /database-name/table-name.json\r\n /database-name-7sha256/table-name\r\n /database-name-7sha256/table-name.json\r\n /database-name-7sha256/table-name/compound-pk\r\n /database-name-7sha256/table-name/compound-pk.json\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/24/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 267857622, "node_id": "MDU6SXNzdWUyNjc4NTc2MjI=", "number": 25, "title": "Endpoint that returns SQL ready to be piped into DB", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-10-24T00:19:26Z", "updated_at": "2017-11-15T05:11:12Z", "closed_at": "2017-11-15T05:11:11Z", "author_association": "OWNER", "pull_request": null, "body": "It would be cool if I could figure out a way to generate both the create table statements and the inserts for an individual table or the entire database and then stream them down to the client.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/25/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 268453968, "node_id": "MDU6SXNzdWUyNjg0NTM5Njg=", "number": 37, "title": "Ability to serialize massive JSON without blocking event loop", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-10-25T15:58:03Z", "updated_at": "2020-05-30T17:29:20Z", "closed_at": "2020-05-30T17:29:20Z", "author_association": 
"OWNER", "pull_request": null, "body": "We run the risk of someone attempting a select statement that returns thousands of rows and hence takes several seconds just to JSON encode the response, effectively blocking the event loop and pausing all other traffic.\r\n\r\nThe Twisted community have a solution for this, can we adapt that in some way? http://as.ynchrono.us/2010/06/asynchronous-json_18.html?m=1", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/37/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 272694136, "node_id": "MDU6SXNzdWUyNzI2OTQxMzY=", "number": 50, "title": "Unit tests against application itself", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 2, "created_at": "2017-11-09T19:31:49Z", "updated_at": "2017-11-11T22:23:22Z", "closed_at": "2017-11-11T22:23:22Z", "author_association": "OWNER", "pull_request": null, "body": "Use Sanic\u2019s testing mechanism. Test should create a temporary SQLite database file on disk by executing sql that is stored in the test themselves.\r\n\r\nFor the moment we can just test the JSON API more thoroughly and just sanity check that the HTML output doesn\u2019t throw any errors.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/50/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273026602, "node_id": "MDU6SXNzdWUyNzMwMjY2MDI=", "number": 52, "title": "Solution for temporarily uploading DB so it can be built by docker", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-10T18:55:25Z", "updated_at": "2017-12-10T03:02:57Z", "closed_at": "2017-12-10T03:02:57Z", "author_association": "OWNER", "pull_request": null, "body": "For the `datasette publish` command I ideally need a way of uploading the specified DB to somewhere temporary on the internet so that when the Dockerfile is built by the final hosting location it can download that database as part of the build process.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/52/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273127117, "node_id": "MDU6SXNzdWUyNzMxMjcxMTc=", "number": 55, "title": "Ship first version to PyPI", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 2, "created_at": "2017-11-11T07:38:48Z", "updated_at": "2017-11-13T21:19:43Z", "closed_at": "2017-11-13T21:19:43Z", "author_association": "OWNER", "pull_request": null, 
"body": "Just before doing this, update the Dockerfile template to `pip install datasette` https://github.com/simonw/datasette/blob/65e350ca2a4845c25752a62c16ba58cfe2c14b9b/datasette/utils.py#L125", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/55/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273192789, "node_id": "MDU6SXNzdWUyNzMxOTI3ODk=", "number": 67, "title": "Command that builds a local docker container", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 2, "created_at": "2017-11-12T02:13:29Z", "updated_at": "2017-11-13T16:17:52Z", "closed_at": "2017-11-13T16:17:52Z", "author_association": "OWNER", "pull_request": null, "body": "Be nice to indicate that this isn't just for Now. Shouldn't be too hard either.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/67/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273296178, "node_id": "MDU6SXNzdWUyNzMyOTYxNzg=", "number": 73, "title": "_nocache=1 query string option for use with sort-by-random", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-13T02:57:10Z", "updated_at": "2018-05-28T17:25:15Z", "closed_at": "2018-05-28T17:25:15Z", "author_association": "OWNER", "pull_request": null, "body": "The one place where we wouldn\u2019t want cdching is if we have something which uses sort by random to return random items. 
We can offer a _nocache=1 querystring argument to support this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/73/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273569477, "node_id": "MDU6SXNzdWUyNzM1Njk0Nzc=", "number": 80, "title": "Deploy final versions of fivethirtyeight and parlgov datasets (with view pagination)", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2857392, "label": "Ship first public release"}, "comments": 2, "created_at": "2017-11-13T20:37:46Z", "updated_at": "2017-11-13T22:09:46Z", "closed_at": "2017-11-13T22:09:46Z", "author_association": "OWNER", "pull_request": null, "body": "Final versions should be deployed using the first released version of datasette.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/80/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 273595473, "node_id": "MDExOlB1bGxSZXF1ZXN0MTUyMzYwNzQw", "number": 81, "title": ":fire: Removes DS_Store", "user": {"value": 50527, "label": "jefftriplett"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-13T22:07:52Z", "updated_at": "2017-11-14T02:24:54Z", "closed_at": "2017-11-13T22:16:55Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/81", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/81/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 274160723, "node_id": "MDU6SXNzdWUyNzQxNjA3MjM=", "number": 100, "title": "TemplateAssertionError: no filter named 'tojson'", "user": {"value": 13304454, "label": "coisnepe"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-15T13:43:41Z", "updated_at": "2017-11-16T09:25:10Z", "closed_at": "2017-11-16T00:14:13Z", "author_association": "NONE", "pull_request": null, "body": "A 500 error is raised upon clicking on the name of a table on the homepage, say _http://0.0.0.0:8001/_ to _http://0.0.0.0:8001/test_check-c1f4771/users_ The API part seems to function as intended, though...\r\n\r\n```\r\n2017-11-15 14:33:57 - (sanic)[ERROR]: Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/sanic/app.py\", line 503, in handle_request\r\n response = await response\r\n File \"/usr/local/lib/python3.5/dist-packages/datasette/app.py\", line 155, in get\r\n return await self.view_get(request, name, hash, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/datasette/app.py\", line 219, in view_get\r\n **context,\r\n File \"/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py\", line 84, in render\r\n return 
html(self.render_string(template, request, **context))\r\n File \"/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py\", line 81, in render_string\r\n return self.env.get_template(template).render(**context)\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 812, in get_template\r\n return self._load_template(name, self.make_globals(globals))\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 786, in _load_template\r\n template = self.loader.load(self, name, globals)\r\n File \"/usr/lib/python3/dist-packages/jinja2/loaders.py\", line 125, in load\r\n code = environment.compile(source, name, filename)\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 565, in compile\r\n self.handle_exception(exc_info, source_hint=source_hint)\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 754, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/lib/python3/dist-packages/jinja2/_compat.py\", line 37, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/usr/local/lib/python3.5/dist-packages/datasette/templates/table.html\", line 29, in template\r\n
params = {{ query.params|tojson(4) }}
\r\n File \"/usr/lib/python3/dist-packages/jinja2/environment.py\", line 515, in _generate\r\n return generate(source, self, name, filename, defer_init=defer_init)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 62, in generate\r\n generator.visit(node)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 849, in visit_Template\r\n self.blockvisit(block.body, block_frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 492, in blockvisit\r\n self.visit(node, frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 1172, in visit_If\r\n self.blockvisit(node.body, if_frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 492, in blockvisit\r\n self.visit(node, frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 1353, in visit_Output\r\n self.visit(argument, frame)\r\n File \"/usr/lib/python3/dist-packages/jinja2/visitor.py\", line 38, in visit\r\n return f(node, *args, **kwargs)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 1565, in visit_Filter\r\n self.fail('no filter named %r' % node.name, node.lineno)\r\n File \"/usr/lib/python3/dist-packages/jinja2/compiler.py\", line 427, in fail\r\n raise TemplateAssertionError(msg, lineno, self.name, self.filename)\r\njinja2.exceptions.TemplateAssertionError: no filter named 'tojson'\r\n\r\n2017-11-15 14:33:57 - (network)[INFO][127.0.0.1:41316]: GET http://0.0.0.0:8001/test_check-c1f4771/users 500 144\r\n2017-11-15 14:33:57 - (network)[INFO][127.0.0.1:41316]: GET http://0.0.0.0:8001/favicon.ico 200 0\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/100/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 274578142, "node_id": "MDU6SXNzdWUyNzQ1NzgxNDI=", "number": 110, "title": "Add --load-extension option to datasette for loading extra SQLite extensions", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-16T16:26:19Z", "updated_at": "2017-11-16T18:38:30Z", "closed_at": "2017-11-16T16:58:50Z", "author_association": "OWNER", "pull_request": null, "body": "This would allow users with extra SQLite extensions installed (like spatialite) to load them at runtime.\r\n\r\nInspired by this comment: https://github.com/simonw/datasette/issues/46#issuecomment-344810525", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/110/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 275135393, "node_id": "MDU6SXNzdWUyNzUxMzUzOTM=", "number": 125, "title": "Plot rows on a map with 
Leaflet and Leaflet.markercluster", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-19T06:05:05Z", "updated_at": "2018-04-26T15:14:31Z", "closed_at": "2018-04-26T15:14:31Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/Leaflet/Leaflet.markercluster would allow us to paginate-load in an enormous set of rows with latitude/longitude points, e.g. https://australian-dunnies.now.sh/\r\n\r\nHere's a demo of it loading 50,000 markers: https://leaflet.github.io/Leaflet.markercluster/example/marker-clustering-realworld.50000.html - and it looks like it's easy to support progress bars for if we were iteratively loading 1,000 markers at a time using datasette pagination.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/125/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 275135719, "node_id": "MDU6SXNzdWUyNzUxMzU3MTk=", "number": 127, "title": "Filtered tables should show count of all matching rows, if fast enough", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2919870, "label": "Foreign key edition"}, "comments": 2, "created_at": "2017-11-19T06:13:29Z", "updated_at": "2017-11-24T22:02:01Z", "closed_at": "2017-11-24T22:02:01Z", "author_association": "OWNER", "pull_request": null, "body": "Relates to #86. If you are viewing a filtered page e.g. https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9/bob-ross%2Felements-by-episode?CLOUDS=1 we should show the count of matching rows.\r\n\r\nSince this could be an expensive operation, we will run it with a strict time limit (maybe 50ms). If the time limit is exceeded we will display \"many\" instead, perhaps? 
Maybe even link to a count(*) query that would get the full 1000ms time limit which the user can click on if they like (that could even Ajax-in the result).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/127/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 275164558, "node_id": "MDU6SXNzdWUyNzUxNjQ1NTg=", "number": 129, "title": "Hide FTS-created tables by default on the database index page", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-19T14:50:42Z", "updated_at": "2017-11-22T20:22:02Z", "closed_at": "2017-11-22T20:19:04Z", "author_association": "OWNER", "pull_request": null, "body": "SQLite databases that use FTS include a number of automatically generated tables, e.g.:\r\n\r\nhttps://sf-trees-search.now.sh/sf-trees-search-a899b92\r\n\r\n\"sf-trees-search_and_sf-trees-search\"\r\n\r\nOf these, only the `Street_Tree_List` table is actually relevant to the user.\r\n\r\nWe can detect which tables are FTS tables by first finding the virtual tables:\r\n\r\n sqlite> .headers on\r\n sqlite> select * from sqlite_master where rootpage = 0;\r\n type|name|tbl_name|rootpage|sql\r\n table|Search|Search|0|CREATE VIRTUAL TABLE \"Street_Tree_List_fts\" USING FTS4 (\"qAddress\", \"qCaretaker\", \"qSpecies\")\r\n\r\nThen parsing the above to figure out which ones are USING FTS? - then assume that any table which starts with that `Street_Tree_List_fts` prefix was created to support search:\r\n\r\n sqlite> select * from sqlite_master where type='table' and tbl_name like 'Street_Tree_List_fts%';\r\n type|name|tbl_name|rootpage|sql\r\n table|Search_content|Search_content|10355|CREATE TABLE 'Street_Tree_List_fts_content'(docid INTEGER PRIMARY KEY, 'c0qAddress', 'c1qCaretaker', 'c2qSpecies')\r\n table|Search_segments|Search_segments|10356|CREATE TABLE 'Street_Tree_List_fts_segments'(blockid INTEGER PRIMARY KEY, block BLOB)\r\n table|Search_segdir|Search_segdir|10357|CREATE TABLE 'Street_Tree_List_fts_segdir'(level INTEGER,idx INTEGER,start_block INTEGER,leaves_end_block INTEGER,end_block INTEGER,root BLOB,PRIMARY KEY(level, idx))\r\n table|Search_docsize|Search_docsize|10359|CREATE TABLE 'Street_Tree_List_fts_docsize'(docid INTEGER PRIMARY KEY, size BLOB)\r\n table|Search_stat|Search_stat|10360|CREATE TABLE 'Street_Tree_List_fts_stat'(id INTEGER PRIMARY KEY, value BLOB)\r\n\r\nWe won't hide these completely - instead, we'll default the database index view to not showing them with a message that says \"5 hidden tables\" and support ?_hidden=1 to display them.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/129/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 275493851, "node_id": "MDU6SXNzdWUyNzU0OTM4NTE=", "number": 139, "title": "Build a visualization plugin for Vega", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, 
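The detection strategy described in #129 can be prototyped in a few lines against `sqlite_master`: find virtual tables declared `USING FTS`, then treat tables sharing their name prefix as hidden. A sketch only; the function name is illustrative and the real implementation may differ.

```
import sqlite3

def fts_shadow_tables(conn):
    # Virtual tables have rootpage = 0 in sqlite_master
    hidden = set()
    for name, sql in conn.execute(
        "select name, sql from sqlite_master where rootpage = 0"
    ):
        if "USING FTS" not in (sql or "").upper():
            continue
        # Assume tables prefixed with the FTS table's name (e.g.
        # Street_Tree_List_fts_content) exist only to support search
        for (shadow,) in conn.execute(
            "select name from sqlite_master where type = 'table' and name like ?",
            (name + "_%",),
        ):
            hidden.add(shadow)
    return hidden
```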
"created_at": "2017-11-20T20:47:41Z", "updated_at": "2018-07-10T17:48:18Z", "closed_at": "2018-07-10T17:48:18Z", "author_association": "OWNER", "pull_request": null, "body": "https://vega.github.io/vega/examples/population-pyramid/ for example looks pretty easy to hook up to Datasette.\r\n\r\nDepends on #14 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/139/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 275755475, "node_id": "MDU6SXNzdWUyNzU3NTU0NzU=", "number": 140, "title": "Heatmap visualization plugin", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-21T15:34:23Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Could use https://github.com/scottbedard/svelte-heatmap", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/140/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 276455748, "node_id": "MDU6SXNzdWUyNzY0NTU3NDg=", "number": 146, "title": "datasette publish gcloud", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-23T18:55:03Z", "updated_at": "2019-06-24T06:48:20Z", "closed_at": "2019-06-24T06:48:20Z", "author_association": "OWNER", "pull_request": null, "body": "See also #103 \r\n\r\nIt looks like you can start a Google Cloud VM with a \"docker container\" option - and the Google Cloud Registry is easy to push containers to. 
So it would be feasible to have `datasette publish gcloud ...` automatically build a container, push it to GCR, then start a new VM instance with it:\r\n\r\nhttps://cloud.google.com/container-registry/docs/pushing-and-pulling\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/146/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 276718605, "node_id": "MDU6SXNzdWUyNzY3MTg2MDU=", "number": 151, "title": "Set up a pattern portfolio", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-25T02:09:49Z", "updated_at": "2020-07-02T00:13:24Z", "closed_at": "2020-05-03T03:13:16Z", "author_association": "OWNER", "pull_request": null, "body": "https://www.slideshare.net/nataliedowne/practical-maintainable-css/75\r\n\r\nThis will be a single page that demonstrates all of the different CSS styles and classes available to Datasette.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/151/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 279547886, "node_id": "MDU6SXNzdWUyNzk1NDc4ODY=", "number": 163, "title": "Document the querystring argument for setting a different time limit", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-12-05T22:05:08Z", "updated_at": "2021-03-23T02:44:33Z", "closed_at": "2017-12-06T15:06:57Z", "author_association": "OWNER", "pull_request": null, "body": "http://datasette.readthedocs.io/en/latest/sql_queries.html#query-limits\r\n\r\nNeed to explain why this is useful too.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/163/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 280014287, "node_id": "MDU6SXNzdWUyODAwMTQyODc=", "number": 165, "title": "metadata.json support for per-database and per-table information", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 2949431, "label": "Custom templates edition"}, "comments": 2, "created_at": "2017-12-07T06:15:34Z", "updated_at": "2017-12-07T16:48:34Z", "closed_at": "2017-12-07T16:47:29Z", "author_association": "OWNER", "pull_request": null, "body": "Every database and every table should be able to support the following optional metadata:\r\n\r\n\ttitle\r\n\tdescription\r\n\tdescription_html\r\n\tlicense\r\n\tlicense_url\r\n\tsource\r\n\tsource_url\r\n\r\nIf `description_html` is provided it over-rides `description` and will be displayed unescaped.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, 
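As a rough illustration of the per-database and per-table metadata proposed in #165, a `metadata.json` could nest those optional keys like this. The exact nesting shown is an assumption based on the issue, not a confirmed format.

```
import json

metadata = {
    "title": "Overall index title",
    "databases": {
        "mydb": {
            "title": "My database",
            "description": "Plain text description",
            "license": "ODbL",
            "license_url": "https://opendatacommons.org/licenses/odbl/",
            "source": "Original source",
            "source_url": "https://example.com/source",
            "tables": {
                "mytable": {
                    "title": "My table",
                    # description_html over-rides description, unescaped
                    "description_html": "<p>Rich <em>HTML</em> description</p>",
                },
            },
        },
    },
}

with open("metadata.json", "w") as fp:
    json.dump(metadata, fp, indent=4)
```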
"performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/165/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 281110295, "node_id": "MDU6SXNzdWUyODExMTAyOTU=", "number": 173, "title": "I18n and L10n support", "user": {"value": 50138, "label": "janimo"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-12-11T17:49:58Z", "updated_at": "2021-04-26T12:10:01Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "It would be less geeky and more user friendly if the display strings in the filter menu and possibly other parts could be localized.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/173/reactions\", \"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 288438570, "node_id": "MDU6SXNzdWUyODg0Mzg1NzA=", "number": 179, "title": "More metadata options for template authors ", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-01-14T20:51:04Z", "updated_at": "2019-05-13T18:33:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "See this thread on Twitter: https://twitter.com/simonw/status/952637152797458432", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/179/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 291639118, "node_id": "MDU6SXNzdWUyOTE2MzkxMTg=", "number": 183, "title": "Custom Queries - escaping strings", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-01-25T16:49:13Z", "updated_at": "2019-06-24T06:45:07Z", "closed_at": "2019-06-24T06:45:07Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If a SQLite table column name contains spaces, they are usually referred to in double quotes:\r\n\r\n`SELECT * FROM mytable WHERE \"gappy column name\"=\"my value\";`\r\n\r\nIn the JSON metadata file, this is passed by escaping the double quotes:\r\n\r\n`\"queries\": {\"my query\": \"SELECT * FROM mytable WHERE \\\"gappy column name\\\"=\\\"my value\\\";\"}`\r\n\r\nWhen specifying a custom query in `metadata.json` using double quotes, these are then rendered in the *datasette* query box using single quotes:\r\n\r\n`SELECT * FROM mytable WHERE 'gappy column name'='my value';`\r\n\r\nwhich does not work.\r\n\r\nAlternatively, a valid custom query can be passed using backticks (\\`) to quote the column name and single (unescaped) quotes for the matched value:\r\n\r\n``\"queries\": {\"my query\": \"SELECT * FROM mytable WHERE `gappy column name`='my value';\"}``\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": 
null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/183/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 312312125, "node_id": "MDU6SXNzdWUzMTIzMTIxMjU=", "number": 194, "title": "Rename table_rows and filtered_table_rows to have _count suffix", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-08T14:53:37Z", "updated_at": "2018-04-09T05:25:22Z", "closed_at": "2018-04-09T05:25:22Z", "author_association": "OWNER", "pull_request": null, "body": "These fields represent counts of items:\r\n\r\n \"table_rows\": 131,\r\n \"filtered_table_rows\": 8,\r\n\r\nBut the names make it sound like they might be arrays full of rows. Adding a `_count` suffix would make this more clear:\r\n\r\n \"table_rows_count\": 131,\r\n \"filtered_table_rows_count\": 8,\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/194/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 313785206, "node_id": "MDExOlB1bGxSZXF1ZXN0MTgxMjQ3NTY4", "number": 202, "title": "Raise 404 on nonexistent table URLs", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-12T15:47:06Z", "updated_at": "2018-04-13T19:22:56Z", "closed_at": "2018-04-13T18:19:15Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/202", "body": "Currently they just 500. 
Also cleaned the logic up a bit, I hope I didn't miss anything.\r\n\r\nThis is issue #184.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/202/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 314847571, "node_id": "MDU6SXNzdWUzMTQ4NDc1NzE=", "number": 220, "title": "Investigate syntactic sugar for plugins", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-16T23:01:39Z", "updated_at": "2020-06-11T21:50:06Z", "closed_at": "2020-06-11T21:49:55Z", "author_association": "OWNER", "pull_request": null, "body": "Suggested by @andrewhayward on Twitter: https://twitter.com/arhayward/status/986015118965268480?s=21\r\n\r\n> Have you considered a basic abstraction on top of that, for standard hook features?\r\n\r\n```\r\n@sql_function\r\nrandom_integer(a,b):\r\n return random.randint(a,b)\r\n\r\n@template_filter\r\nuppercase(str):\r\n return str.upper()\r\n```\r\n\r\nMaybe `from datasette.plugins import template_filter`?\r\n\r\nWould have to work out how to get this to play well with pluggy", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/220/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 315738696, "node_id": "MDU6SXNzdWUzMTU3Mzg2OTY=", "number": 226, "title": "Unit tests for installable plugins", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-19T06:05:32Z", "updated_at": "2020-11-24T19:52:51Z", "closed_at": "2020-11-24T19:52:46Z", "author_association": "OWNER", "pull_request": null, "body": "I'd like more thorough unit test coverage of the plugins mechanism - in particular for installable plugins.\r\n\r\nI think I can do this while still having the code live in the same repo, by creating a subdirectory in tests/example_plugin with its own setup.py and then running `python setup.py install` as part of the test runner.\r\n\r\nI imagine I will need to bump the version number every time I change the plugin in case someone runs the test again in the same virtual environment.\r\n\r\nIf that doesn't work I can instead ship a datasette-plugins-tests two to PyPI and add that as a tests_require dependency.\r\n\r\nRefs #14", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/226/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 316621102, "node_id": "MDU6SXNzdWUzMTY2MjExMDI=", "number": 235, "title": "Add limit on the size in KB of data returned from a single query", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": 
"2018-04-22T23:01:15Z", "updated_at": "2018-04-24T00:30:02Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette limits the number of rows returned to 1,000 and limits the time spent executing a SQL query to 1000ms - and both of these limits can be customized.\r\n\r\nIt does not have a limit on the size of the response returned. It's possible to compose maliciously large SQL responses in a small number of rows using mechanisms like the `group_concat()` aggregate function. It would be good to avoid malicious SQL creating 100MB+ responses and potentially crashing the server.\r\n\r\nI think the easiest place to implement that is here:\r\n\r\nhttps://github.com/simonw/datasette/blob/f3f42957128c1e7ece584d45d9167f2ac003a3b8/datasette/app.py#L175-L190\r\n\r\nCurrently we use `cursor.fetchmany()` to fetch up to 1,001 rows at once. Instead, we could switch to iterating through `cursor.fetchone()` (or just using `for row in cursor`) and keeping a running tally of the size of the response as we go - maybe just using `rough_response_size += len(str(row))`. If that goes above a certain threshold we can terminate the response with an error, like we do with timelimits.\r\n\r\nThe bigger challenge here is understanding how well this approach works and what impact it will have on overall Datasette performance. I think I need #33 for this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/235/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 317475156, "node_id": "MDU6SXNzdWUzMTc0NzUxNTY=", "number": 237, "title": "Support for ?_search_colname=blah searches", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-25T04:29:53Z", "updated_at": "2018-05-05T22:56:42Z", "closed_at": "2018-05-05T22:33:23Z", "author_association": "OWNER", "pull_request": null, "body": "Right now the `_search=` argument searches across all fields in a full-text index, for example:\r\n\r\nhttps://san-francisco.datasettes.com/sf-film-locations-84594a7/Film_Locations_in_San_Francisco?_search=justin\r\n\r\nSQLite FTS also supports searches within a specified field, for example:\r\n\r\nhttps://san-francisco.datasettes.com/sf-film-locations-84594a7?sql=select+rowid%2C+*+from+Film_Locations_in_San_Francisco+where+rowid+in+%28select+rowid+from+%5BFilm_Locations_in_San_Francisco_fts%5D+where+%5BLocations%5D+match+%3Asearch%29+order+by+rowid+limit+101&search=justin\r\n\r\n```\r\nselect rowid, * from Film_Locations_in_San_Francisco\r\nwhere rowid in (\r\n select rowid from [Film_Locations_in_San_Francisco_fts]\r\n where [Locations] match :search\r\n) order by rowid limit 101\r\n```\r\n\r\nThe `_search=` parameter could be extended to support this using `_search_colname=`.\r\n\r\nThis should also be able to support columns with spaces and special characters in their names, something like this:\r\n\r\n`_search_Column%20With%20Spaces=foo`\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/237/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, 
\"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 317760361, "node_id": "MDU6SXNzdWUzMTc3NjAzNjE=", "number": 239, "title": "Support for hidden tables in metadata.json", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-25T19:21:17Z", "updated_at": "2018-04-26T03:45:12Z", "closed_at": "2018-04-26T03:43:10Z", "author_association": "OWNER", "pull_request": null, "body": "Since we already have a hidden feature, let's expose it more to our users ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/239/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 318737808, "node_id": "MDU6SXNzdWUzMTg3Mzc4MDg=", "number": 243, "title": "--spatialite option for datasette publish commands", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-29T18:19:32Z", "updated_at": "2018-05-31T14:17:53Z", "closed_at": "2018-05-31T14:17:53Z", "author_association": "OWNER", "pull_request": null, "body": "Performs the necessary incantations to install Spatialite on Zeit Now or Heroku and sets the corresponding environment variable to ensure the module is correctly loaded by datasette serve.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/243/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 319954545, "node_id": "MDU6SXNzdWUzMTk5NTQ1NDU=", "number": 248, "title": "/-/plugins should show version of each installed plugin", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-03T14:50:45Z", "updated_at": "2018-05-04T18:25:40Z", "closed_at": "2018-05-04T18:05:04Z", "author_association": "OWNER", "pull_request": null, "body": "Refs #244 \r\n\r\nhttps://stackoverflow.com/questions/20180543/how-to-check-version-of-python-modules\r\n\r\n```\r\n>>> import pkg_resources\r\n>>> pkg_resources.get_distribution('datasette_cluster_map').version\r\n'0.4'\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/248/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 324451322, "node_id": "MDU6SXNzdWUzMjQ0NTEzMjI=", "number": 273, "title": "Figure out a way to have /-/version return current git commit hash", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-18T15:16:56Z", "updated_at": "2018-05-22T19:35:22Z", "closed_at": "2018-05-22T19:35:22Z", 
"author_association": "OWNER", "pull_request": null, "body": "https://fivethirtyeight.datasettes.com/-/versions reports Datasette version `0.21`\r\n\r\nThis isn't actually correct. The deploy script for that site actually deploys current master using `https://github.com/simonw/datasette/archive/master.zip`: https://github.com/simonw/fivethirtyeight-datasette/blob/66b4b0dfedd7237bc8c02d3e26d905bca7b84069/Dockerfile#L9\r\n\r\nIdeally this would show the current commit hash, but I'm not at all sure if it's possible to derive that from `pip install https://github.com/simonw/datasette/archive/master.zip`. Is there another mechanism that could be used to reliably `pip install` current master but still provide access to the most recent commit hash?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/273/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 324652142, "node_id": "MDU6SXNzdWUzMjQ2NTIxNDI=", "number": 274, "title": "Rename --limit to --config, add --help-config", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-19T18:57:42Z", "updated_at": "2018-05-20T17:04:55Z", "closed_at": "2018-05-20T17:04:11Z", "author_association": "OWNER", "pull_request": null, "body": "#270 introduced `--limit` but on further thought it should be called `--config` instead.\r\n\r\n`--page_size` should becomes `--config default_page_size:1000`\r\n\r\nAdd `--help-config` to show full help showing all config settings.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/274/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 324836533, "node_id": "MDExOlB1bGxSZXF1ZXN0MTg5MzE4NDUz", "number": 277, "title": "Refactor inspect logic", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-21T08:49:31Z", "updated_at": "2018-05-22T16:07:24Z", "closed_at": "2018-05-22T14:03:07Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/277", "body": "This pulls the logic for inspect out into a new file which makes it a bit easier to understand.\r\n\r\nThis was going to be the first part of an implementation for #276, but it seems like that might take a while so I'm going to PR a few bits of refactoring individually.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/277/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 326189744, "node_id": "MDU6SXNzdWUzMjYxODk3NDQ=", "number": 285, "title": "num_threads and cache_max_age should be --config options", "user": {"value": 9599, "label": "simonw"}, "state": "closed", 
"locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-24T16:04:51Z", "updated_at": "2018-05-27T00:53:35Z", "closed_at": "2018-05-27T00:43:33Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/datasette/blob/58b5a37dbbf13868a46bcbb284509434e66eca25/datasette/app.py#L106\r\n\r\nAnd\r\n\r\nhttps://github.com/simonw/datasette/blob/58b5a37dbbf13868a46bcbb284509434e66eca25/datasette/views/base.py#L325\r\n\r\nRefs #275 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/285/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 326599525, "node_id": "MDU6SXNzdWUzMjY1OTk1MjU=", "number": 286, "title": "Database hash should include current datasette version", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-25T17:03:42Z", "updated_at": "2018-05-25T17:07:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Right now deploying a new version of datasette doesn't invalidate existing URLs, so users may still see a cached copy of the old templates.\r\n\r\nWe can fix this by including the current datasette version in the input to the hash function (which currently just the database file contents).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/286/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 327383759, "node_id": "MDU6SXNzdWUzMjczODM3NTk=", "number": 295, "title": "Extract unit tests for inspect out to test_inspect.py", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-29T15:55:04Z", "updated_at": "2019-05-11T21:40:32Z", "closed_at": "2019-05-11T21:40:32Z", "author_association": "OWNER", "pull_request": null, "body": "Right now they are bundled up as API unit tests for a relatively unimportant endpoint. 
They should be their own thing.\r\n\r\nBlocks #294", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/295/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 327459829, "node_id": "MDU6SXNzdWUzMjc0NTk4Mjk=", "number": 298, "title": "URLify URLs in results from custom SQL statements / views", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-29T19:41:07Z", "updated_at": "2018-07-24T04:53:20Z", "closed_at": "2018-07-24T03:56:50Z", "author_association": "OWNER", "pull_request": null, "body": "Consider this custom query:\r\n\r\nhttps://fivethirtyeight.datasettes.com/fivethirtyeight-5de27e3?sql=select+user%2C+%28%27https%3A%2F%2Ftwitter.com%2F%27+%7C%7C+user%29+as+user_url%2C+created_at%2C+text%2C+url+from+%5Btwitter-ratio%2Fsenators%5D+limit+10%3B\r\n\r\n```select user, ('https://twitter.com/' || user) as user_url, created_at, text, url from [twitter-ratio/senators] limit 10;```\r\n\r\n![2018-05-29 at 12 38 pm](https://user-images.githubusercontent.com/9599/40681177-44a36d5c-633d-11e8-935b-c49dad7ac682.png)\r\n\r\nIt would be nice if these URLs were turned into links, as happens on the table view page: https://fivethirtyeight.datasettes.com/fivethirtyeight-5de27e3/twitter-ratio%2Fsenators\r\n\r\n![2018-05-29 at 12 39 pm](https://user-images.githubusercontent.com/9599/40681206-5c69c47c-633d-11e8-9f3a-08899f8659b8.png)\r\n\r\nThis currently does not happen because the table view render logic takes a different path through `display_columns_and_rows()` which includes this bit:\r\n\r\nhttps://github.com/simonw/datasette/blob/b0a95da96386ddf99816911e08df86178ffa9a89/datasette/views/table.py#L195-L202", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/298/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 328171513, "node_id": "MDU6SXNzdWUzMjgxNzE1MTM=", "number": 302, "title": "test-2.3.sqlite database filename throws a 404", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3439337, "label": "0.23.1"}, "comments": 2, "created_at": "2018-05-31T14:50:58Z", "updated_at": "2018-06-21T15:21:17Z", "closed_at": "2018-06-21T15:21:16Z", "author_association": "OWNER", "pull_request": null, "body": "The following almost works:\r\n\r\n    datasette test-2.3.sqlite\r\n\r\nhttp://127.0.0.1:8001/test-2.3-c88bc35/HighWays loads OK, but http://127.0.0.1:8001/test-2.3-c88bc35 throws a 404:\r\n\r\n![2018-05-31 at 7 50 am](https://user-images.githubusercontent.com/9599/40789434-447ae934-64a7-11e8-9a07-4eeba87147d5.png)\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/302/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 
0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 329147284, "node_id": "MDU6SXNzdWUzMjkxNDcyODQ=", "number": 305, "title": "Add contributor guidelines to docs", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-04T17:25:30Z", "updated_at": "2019-06-24T06:40:19Z", "closed_at": "2019-06-24T06:40:19Z", "author_association": "OWNER", "pull_request": null, "body": "https://channels.readthedocs.io/en/latest/contributing.html is a nice example of this done well.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/305/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 330826972, "node_id": "MDU6SXNzdWUzMzA4MjY5NzI=", "number": 308, "title": "Support extra Heroku apps:create options - region, space, team", "user": {"value": 78156, "label": "annapowellsmith"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-08T23:08:33Z", "updated_at": "2018-09-21T14:09:28Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "It would be useful to document how to pass Heroku CLI options on `datasette publish`, e.g. `--region eu`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/308/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 331343824, "node_id": "MDU6SXNzdWUzMzEzNDM4MjQ=", "number": 309, "title": "On 404s with a trailing slash redirect to that page without a trailing slash", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3439337, "label": "0.23.1"}, "comments": 2, "created_at": "2018-06-11T20:46:49Z", "updated_at": "2018-06-21T15:22:02Z", "closed_at": "2018-06-21T15:13:15Z", "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/309/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 332998752, "node_id": "MDExOlB1bGxSZXF1ZXN0MTk1MzM5MTEx", "number": 311, "title": "?_labels=1 to expand foreign keys (in csv and json), refs #233", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-16T16:31:12Z", "updated_at": "2018-06-16T22:20:31Z", "closed_at": "2018-06-16T22:20:31Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/311", "body": "Output looks something like this:\r\n\r\n {\r\n \"rowid\": 233,\r\n \"TreeID\": 121240,\r\n \"qLegalStatus\": {\r\n \"value\" 2,\r\n \"label\": \"Private\"\r\n }\r\n \"qSpecies\": {\r\n \"value\": 16,\r\n \"label\": \"Sycamore\"\r\n }\r\n 
\"qAddress\": \"91 Commonwealth Ave\",\r\n ...\r\n }", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/311/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 333326107, "node_id": "MDU6SXNzdWUzMzMzMjYxMDc=", "number": 317, "title": "Travis CI fails to upload new releases to PyPI", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3439337, "label": "0.23.1"}, "comments": 2, "created_at": "2018-06-18T15:44:26Z", "updated_at": "2018-06-21T15:45:47Z", "closed_at": "2018-06-21T15:45:47Z", "author_association": "OWNER", "pull_request": null, "body": "https://travis-ci.org/simonw/datasette/jobs/393684139\r\n\r\n```\r\n...\r\nremoving build/bdist.linux-x86_64/wheel\r\nUploading distributions to https://upload.pypi.org/legacy/\r\nUploading datasette-0.23-py3-none-any.whl\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 201k/201k [00:00<00:00, 1.02MB/s]\r\nHTTPError: 403 Client Error: Invalid or non-existent authentication information. for url: https://upload.pypi.org/legacy/\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/317/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 334149717, "node_id": "MDU6SXNzdWUzMzQxNDk3MTc=", "number": 319, "title": "Incorrect display of compound primary keys with foreign key relationships", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 3439337, "label": "0.23.1"}, "comments": 2, "created_at": "2018-06-20T16:09:36Z", "updated_at": "2018-06-21T15:58:15Z", "closed_at": "2018-06-21T14:56:41Z", "author_association": "OWNER", "pull_request": null, "body": "https://registry.datasette.io/registry-7d4f81f/datasette_tags\r\n\r\n![2018-06-20 at 9 07 am](https://user-images.githubusercontent.com/9599/41670542-68cc4dec-7469-11e8-9521-3bbc6465eccb.png)\r\n\r\nUnderlying JSON looks [like this](https://registry.datasette.io/registry-7d4f81f/datasette_tags.json?_labels=on):\r\n\r\n```\r\n{\r\n \"database\": \"registry\",\r\n \"table\": \"datasette_tags\",\r\n \"is_view\": false,\r\n \"human_description_en\": \"\",\r\n \"rows\": [\r\n {\r\n \"datasette_id\": {\r\n \"value\": 1,\r\n \"label\": \"Global Power Plant Database\"\r\n },\r\n \"tag\": {\r\n \"value\": \"geospatial\",\r\n \"label\": \"geospatial\"\r\n }\r\n },\r\n````\r\n\r\nBug is likely somewhere in here: https://github.com/simonw/datasette/blob/e04f5b0d348ef7275a0a5ab9eb53527105132885/datasette/views/table.py#L143-L207", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/319/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 336936010, "node_id": 
"MDU6SXNzdWUzMzY5MzYwMTA=", "number": 331, "title": "Datasette throws error when loading spatialite db without extension loaded", "user": {"value": 82988, "label": "psychemedia"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-29T09:51:14Z", "updated_at": "2022-01-20T21:29:40Z", "closed_at": "2018-07-10T15:13:36Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When starting datasette on a SpatialLite database *without* loading the SpatiaLite extension (using eg `--load-extension=/usr/local/lib/mod_spatialite.dylib`) an error is thrown and the server fails to start:\r\n\r\n```\r\ndatasette -p 8003 adminboundaries.db \r\nServe! files=('adminboundaries.db',) on port 8003\r\nTraceback (most recent call last):\r\n File \"/Users/ajh59/anaconda3/bin/datasette\", line 11, in \r\n sys.exit(cli())\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/cli.py\", line 552, in serve\r\n ds.inspect()\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/app.py\", line 273, in inspect\r\n \"tables\": inspect_tables(conn, self.metadata.get(\"databases\", {}).get(name, {}))\r\n File \"/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/inspect.py\", line 79, in inspect_tables\r\n \"PRAGMA table_info({});\".format(escape_sqlite(table))\r\nsqlite3.OperationalError: no such module: VirtualSpatialIndex\r\n``` \r\n\r\nIt would be nice to trap this and return a message saying something like:\r\n\r\n```\r\nIt looks like you're trying to load a SpatiaLite database? Make sure you load in the SpatiaLite extension when starting datasette.\r\n\r\nRead more: https://datasette.readthedocs.io/en/latest/spatialite.html\r\n```\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/331/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 339095976, "node_id": "MDU6SXNzdWUzMzkwOTU5NzY=", "number": 334, "title": "extra_options not passed to heroku publisher", "user": {"value": 719357, "label": "kamicut"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-07-06T23:26:12Z", "updated_at": "2018-07-24T04:53:21Z", "closed_at": "2018-07-10T01:46:04Z", "author_association": "NONE", "pull_request": null, "body": "I might be wrong but I was not able to publish to `heroku` with `--extra-options`, I think `extra_options` is not being used in this function [here](https://github.com/simonw/datasette/blob/master/datasette/utils.py#L369). \r\n\r\nAny help appreciated! 
", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/334/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 344701755, "node_id": "MDU6SXNzdWUzNDQ3MDE3NTU=", "number": 350, "title": "Don't list default plugins on /-/plugins", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-07-26T05:38:00Z", "updated_at": "2018-08-28T17:13:50Z", "closed_at": "2018-08-28T16:48:19Z", "author_association": "OWNER", "pull_request": null, "body": "https://dbbe707.datasette.io/-/plugins is showing \"datasette.publish.now\" and \"datasette.publish.heroku\"", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/350/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 361764460, "node_id": "MDExOlB1bGxSZXF1ZXN0MjE2NjUxMzE3", "number": 365, "title": "fix small doc typo", "user": {"value": 418191, "label": "jaywgraves"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-09-19T14:02:02Z", "updated_at": "2019-12-19T02:30:33Z", "closed_at": "2018-09-19T17:15:43Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/365", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/365/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 369716228, "node_id": "MDU6SXNzdWUzNjk3MTYyMjg=", "number": 366, "title": "Default built image size over Zeit Now 100MiB limit", "user": {"value": 416374, "label": "gfrmin"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-10-12T21:27:17Z", "updated_at": "2018-11-05T06:23:32Z", "closed_at": "2018-11-05T06:23:32Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "Using `dataset publish now` with no other custom options on a small (43KB) sqlite database leads to the error \"The built image size (373.5M) exceeds the 100MiB limit\". 
I think this is because of a recent Zeit change: https://github.com/zeit/now-cli/issues/1523", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/366/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 374675798, "node_id": "MDExOlB1bGxSZXF1ZXN0MjI2MzE0ODYy", "number": 367, "title": "Mark codemirror files as vendored", "user": {"value": 48517, "label": "jaap3"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-10-27T18:41:25Z", "updated_at": "2019-05-03T21:12:09Z", "closed_at": "2019-05-03T21:11:20Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/367", "body": "GitHub lists datasette as a Javascript project, primarily because of the vendored codemirror files. This is somewhat confusing when you're looking for datasette, knowing it's written in Python.\r\n\r\nLuckily it's possible to exclude certain files from GitHub's code statistics: https://github.com/github/linguist#using-gitattributes", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/367/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 374953006, "node_id": "MDU6SXNzdWUzNzQ5NTMwMDY=", "number": 369, "title": "Interface should show same JSON shape options for custom SQL queries", "user": {"value": 416374, "label": "gfrmin"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2018-10-29T10:39:15Z", "updated_at": "2020-05-30T17:24:06Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the moment the page returning a custom SQL query shows the JSON and CSV APIs, but not the multiple JSON shapes. 
However, adding the `_shape` parameter to the JSON API URL manually still works, so perhaps there should be consistency in the interface by having the same \"Advanced Export\" box for custom SQL queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/369/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 382471625, "node_id": "MDExOlB1bGxSZXF1ZXN0MjMyMTcyMTA2", "number": 389, "title": "Bump dependency versions", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-11-20T02:23:12Z", "updated_at": "2019-11-13T19:13:41Z", "closed_at": "2019-11-13T19:13:41Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/389", "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/389/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 398089089, "node_id": "MDU6SXNzdWUzOTgwODkwODk=", "number": 399, "title": "/-/versions for official Docker image returns wrong Datasette version", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-01-11T01:19:58Z", "updated_at": "2019-01-13T23:31:59Z", "closed_at": "2019-01-13T23:10:45Z", "author_association": "OWNER", "pull_request": null, "body": "```\r\ndocker run -p 8001:8001 datasetteproject/datasette datasette -p 8001 -h 0.0.0.0\r\n```\r\nhttp://0.0.0.0:8001/-/versions returns this:\r\n```\r\n{\r\n \"datasette\": {\r\n \"version\": \"0+unknown\"\r\n },\r\n ...\r\n```\r\nThis is because the Docker image is built by copying in the Datasette source code, which confuses versioneer. Maybe the Docker image should install the code using a wheel or similar?\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/399/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 400511206, "node_id": "MDU6SXNzdWU0MDA1MTEyMDY=", "number": 403, "title": "How does persistence work?", "user": {"value": 1794527, "label": "ccorcos"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-01-17T23:41:57Z", "updated_at": "2019-01-19T05:47:55Z", "closed_at": "2019-01-18T06:51:14Z", "author_association": "NONE", "pull_request": null, "body": "I was under the impression that now.sh is for stateless microservices. 
So where are these SQLite databases stored and when do they get created and destroyed?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/403/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 403617881, "node_id": "MDU6SXNzdWU0MDM2MTc4ODE=", "number": 405, "title": ".json?_nl=on option for exporting newline-delimited JSON", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-01-28T01:10:45Z", "updated_at": "2019-01-28T01:49:00Z", "closed_at": "2019-01-28T01:48:37Z", "author_association": "OWNER", "pull_request": null, "body": "The neat thing about newline-delimited JSON is that you don't have to read an entire array (of potentially thousands of objects) into memory in order to parse it - you can parse things a line at a time instead.\r\n\r\nIt will look like this:\r\n\r\n`https://latest.datasette.io/fixtures/facetable.json?_shape=array&_nl=on`\r\n```\r\n{\"pk\": 1, \"planet_int\": 1, \"on_earth\": 1, \"state\": \"CA\", \"city_id\": 1, \"neighborhood\": \"Mission\"}\r\n{\"pk\": 2, \"planet_int\": 1, \"on_earth\": 1, \"state\": \"CA\", \"city_id\": 1, \"neighborhood\": \"Dogpatch\"}\r\n{\"pk\": 3, \"planet_int\": 1, \"on_earth\": 1, \"state\": \"CA\", \"city_id\": 1, \"neighborhood\": \"SOMA\"}\r\n{\"pk\": 4, \"planet_int\": 1, \"on_earth\": 1, \"state\": \"CA\", \"city_id\": 1, \"neighborhood\": \"Tenderloin\"}\r\n{\"pk\": 5, \"planet_int\": 1, \"on_earth\": 1, \"state\": \"CA\", \"city_id\": 1, \"neighborhood\": \"Bernal Heights\"}\r\n```\r\n\r\nI added this as part of the `sqlite-utils json` CLI command in this commit - I think Datasette should offer it as well: https://github.com/simonw/sqlite-utils/commit/5466c9745dfef858286146ea158ffd5a71391d10\r\n\r\nIt can be offered alongside `_stream=on` (which currently only works for CSV, but it could work for JSON as well thanks to this trick).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/405/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 406055201, "node_id": "MDU6SXNzdWU0MDYwNTUyMDE=", "number": 406, "title": "Support nullable foreign keys in _labels mode", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": {"value": 9599, "label": "simonw"}, "milestone": null, "comments": 2, "created_at": "2019-02-03T05:34:20Z", "updated_at": "2019-11-02T22:39:28Z", "closed_at": "2019-11-02T22:30:27Z", "author_association": "OWNER", "pull_request": null, "body": "Currently if there's a null in a foreign key we get \"None\" displayed in the inflated view:\r\n\r\n[screenshot]\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/406/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, 
\"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 411257981, "node_id": "MDU6SXNzdWU0MTEyNTc5ODE=", "number": 412, "title": "Linked Data(sette)", "user": {"value": 43340, "label": "sfkeller"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-02-18T00:38:14Z", "updated_at": "2019-03-19T10:09:46Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I've a radical feature idea (possible first as an extension in order to experiment?): \r\n\r\nI'd like to link to a remote table from a remote database, e.g. with a function \"linked_datasette()\". So one could do following query:\r\n```\r\nSELECT foo.id, foo.a, remote_party.b\r\nFROM foo\r\nJOIN linked_datasette(\"https://parlgov.datasettes.com/parlgov-b42a2f2\") AS remote_party \r\n ON foo.id=remote_party.id\r\n```\r\nThis is inspired by SPARQL's SERVICE keyword for remote RDF \"endpoints\".\r\n\r\nThere's a foundation in the SQL Standard called SQL/MED (https://rhaas.blogspot.com/2011/01/why-sqlmed-is-cool.html ).\r\n\r\nAnd here's an implementation from me in Postgres FDW to connect another Postgres \"endpoint\": https://pastebin.com/Fz2v64Cz .", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/412/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 413740684, "node_id": "MDU6SXNzdWU0MTM3NDA2ODQ=", "number": 11, "title": "Detect numpy types when creating tables", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-02-23T21:09:35Z", "updated_at": "2019-02-24T04:02:20Z", "closed_at": "2019-02-24T04:02:20Z", "author_association": "OWNER", "pull_request": null, "body": "Inspired by #8", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/11/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 413871266, "node_id": "MDU6SXNzdWU0MTM4NzEyNjY=", "number": 18, "title": ".insert/.upsert/.insert_all/.upsert_all should add missing columns", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4348046, "label": "1.0"}, "comments": 2, "created_at": "2019-02-24T21:36:11Z", "updated_at": "2019-05-25T00:42:11Z", "closed_at": "2019-05-25T00:42:11Z", "author_association": "OWNER", "pull_request": null, "body": "This is a larger change, but it would be incredibly useful: if you attempt to insert or update a document with a field that does not currently exist in the underlying table, sqlite-utils should add the appropriate column for you.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/18/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, 
"state_reason": "completed"} {"id": 421985685, "node_id": "MDU6SXNzdWU0MjE5ODU2ODU=", "number": 421, "title": "Documentation for ?_hash=1 and Datasette's hashed URL caching", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4305096, "label": "0.28"}, "comments": 2, "created_at": "2019-03-17T23:08:36Z", "updated_at": "2019-05-19T05:32:37Z", "closed_at": "2019-05-19T05:31:27Z", "author_association": "OWNER", "pull_request": null, "body": "Follow on from #418 - the Datasette documentation needs an entire section (probably a new page) describing exactly how the hash-in-URL caching mechanism works.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/421/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 427429265, "node_id": "MDExOlB1bGxSZXF1ZXN0MjY2MDM1Mzgy", "number": 424, "title": "Column types in inspected metadata", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-03-31T18:46:33Z", "updated_at": "2019-04-29T18:30:50Z", "closed_at": "2019-04-29T18:30:46Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/424", "body": "This PR does two things:\r\n\r\n* Adds the sqlite column type for each column to the inspected table info.\r\n* Stops binary columns from being rendered to HTML, unless a plugin handles it.\r\n\r\nThere's a bit more detail in the changeset descriptions.\r\n\r\nThese changes are intended as a precursor to a plugin which adds first-class support for Spatialite geographic primitives, and perhaps more useful geo-stuff.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/424/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 430103450, "node_id": "MDU6SXNzdWU0MzAxMDM0NTA=", "number": 425, "title": "Submitting SQL on hide page is broken", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-07T04:21:31Z", "updated_at": "2019-04-12T05:12:13Z", "closed_at": "2019-04-12T05:00:53Z", "author_association": "OWNER", "pull_request": null, "body": "Clicking the submit button here doesn't work correctly: https://3a208a4.datasette.io/fixtures?sql=select+%2A+from+compound_three_primary_keys+order+by+pk1%2C+pk2%2C+pk3+limit+101&_hide_sql=1", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/425/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 432371762, "node_id": "MDU6SXNzdWU0MzIzNzE3NjI=", "number": 428, "title": "Make ?_fts_table=x and ?_fts_pk=y available as URL parameters on table view", "user": {"value": 9599, 
"label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-12T03:30:55Z", "updated_at": "2019-04-12T04:30:29Z", "closed_at": "2019-04-12T04:21:25Z", "author_association": "OWNER", "pull_request": null, "body": "These can currently only be set using `metadata.json`: https://datasette.readthedocs.io/en/0.27/full_text_search.html#configuring-full-text-search-for-a-table-or-view\r\n\r\nThere's no reason not to support these as URL parameters as well. That way it would be easy to use FTS search against a view without having to use `metadata.json`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/428/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 438048318, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc0MTc0NjE0", "number": 437, "title": "Add inspect and prepare_sanic hooks", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-28T11:53:34Z", "updated_at": "2019-06-24T16:38:57Z", "closed_at": "2019-06-24T16:38:56Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/437", "body": "This adds two new plugin hooks:\r\n\r\nThe `inspect` hook allows plugins to add data to the inspect dictionary.\r\n\r\nThe `prepare_sanic` hook allows plugins to hook into the web router. I've attached a warning to this hook in the docs in light of #272 but I want this hook now...\r\n\r\nOn quick inspection, I don't think it's worthwhile to try and make this hook independent of the web framework (but it looks like Starlette would make the hook implementation a bit nicer).\r\n\r\nRef #14", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/437/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 438200529, "node_id": "MDU6SXNzdWU0MzgyMDA1Mjk=", "number": 438, "title": "Plugins are loaded when running pytest", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-29T08:25:58Z", "updated_at": "2019-05-02T05:09:18Z", "closed_at": "2019-05-02T05:09:11Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If I have a datasette plugin installed on my system, its hooks are called when running the main datasette tests. 
This is probably undesirable, especially with the inspect hook in #437, as the plugin may rely on inspected state that the tests don't know about.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/438/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 438240541, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc0MzEzNjI1", "number": 439, "title": "[WIP] Add primary key to the extra_body_script hook arguments", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-29T10:08:23Z", "updated_at": "2019-05-01T09:58:32Z", "closed_at": "2019-05-01T09:58:30Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/439", "body": "This allows the row to be identified on row pages. The context here is that I want to access the row's data to plot it on a map.\r\n\r\nI considered passing the entire template context through to the hook function. This would expose the actual row data and potentially avoid a further fetch request in JS, but it does make the plugin API a lot more leaky. \r\n\r\n(At any rate, using the selected row data is tricky in my case because of Spatialite's infuriating custom binary representation...)", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/439/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 438450757, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc0NDc4NzYx", "number": 442, "title": "Suppress rendering of binary data", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-04-29T18:36:41Z", "updated_at": "2019-05-03T18:26:48Z", "closed_at": "2019-05-03T16:44:49Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/442", "body": "Binary columns (including spatialite geographies) get shown as ugly\r\nbinary strings in the HTML by default. Nobody wants to see that mess.\r\n\r\nShow the size of the column in bytes instead. 
If you want to decode\r\nthe binary data, you can use a plugin to do it.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/442/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 440304714, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1OTA5MTk3", "number": 450, "title": "Coalesce hidden table count to 0", "user": {"value": 45057, "label": "russss"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-05-04T09:37:10Z", "updated_at": "2019-05-11T18:10:09Z", "closed_at": "2019-05-11T18:10:09Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/450", "body": "For some reason I'm hitting a `None` here with a FTS table. I'm not\r\nentirely sure why but this makes the logic work the same as with\r\nnon-hidden tables.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/450/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 442330564, "node_id": "MDU6SXNzdWU0NDIzMzA1NjQ=", "number": 457, "title": "Ability to \"publish cloudrun\" with no user input", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-05-09T16:42:51Z", "updated_at": "2019-05-09T19:41:31Z", "closed_at": "2019-05-09T16:45:08Z", "author_association": "OWNER", "pull_request": null, "body": "If you attempt to deploy a new version of a cloudrun deployment, the script currently pauses and asks for user input for the service name like this:\r\n\r\n```77d4d7de-3dfc-4acc-9a23-efe16230f318 2019-05-09T15:01:48+00:00 52S gs://datasette-222320_cloudbuild/source/1557414063.1-3a82df8096e9434b93511b0588d8d155.tgz gcr.io/datasette-222320/sf-trees (+1 more) SUCCESS\r\nService name: (sf-trees): USER INPUT REQUIRED HERE\r\nDeploying container to Cloud Run service [sf-trees] in project [datasette-222320] region [us-central1]\r\n\u2713 Deploying... Done. \r\n \u2713 Creating Revision... \r\n \u2713 Routing traffic... \r\n \u2713 Setting IAM Policy... 
\r\n```\r\nThis is incompatible with running under CI.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/457/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 443040665, "node_id": "MDU6SXNzdWU0NDMwNDA2NjU=", "number": 466, "title": "Move \"no such module: VirtualSpatialIndex\" code elsewhere", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4305096, "label": "0.28"}, "comments": 2, "created_at": "2019-05-11T22:09:00Z", "updated_at": "2022-01-20T21:29:41Z", "closed_at": "2019-05-11T22:57:22Z", "author_association": "OWNER", "pull_request": null, "body": "We currently show a useful warning (from #331) when the user tries to open a spatialite database without first loading the module:\r\n\r\nhttps://github.com/simonw/datasette/blob/c692cd291111050483a32bea1ee08e994a0b781b/datasette/app.py#L547-L554\r\n\r\nThis code is part of `.inspect()` which is going away - see #462 - so I need to find somewhere else for it to live.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/466/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 444711254, "node_id": "MDU6SXNzdWU0NDQ3MTEyNTQ=", "number": 467, "title": "Index page row counts only for DBs with < 30 tables (10ms count limit per table)", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4305096, "label": "0.28"}, "comments": 2, "created_at": "2019-05-16T01:21:36Z", "updated_at": "2019-05-16T03:03:45Z", "closed_at": "2019-05-16T03:03:45Z", "author_association": "OWNER", "pull_request": null, "body": "Split out from #460.\r\n\r\nIf a database is mutable, calculating row counts gets expensive. I'm only going to calculate row counts for the index page if it has less than X tables (both hidden and non-hidden) AND each table can be counted in less than 10ms.\r\n\r\nIf any count takes longer than 10ms I'll cancel the counting entirely. We currently show an inaccurate count if this happens, which is just confusing.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/467/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 448189298, "node_id": "MDU6SXNzdWU0NDgxODkyOTg=", "number": 486, "title": "Ability to add extra routes and related templates", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-05-24T14:04:25Z", "updated_at": "2019-05-24T14:43:28Z", "closed_at": "2019-05-24T14:43:09Z", "author_association": "NONE", "pull_request": null, "body": "Hi Simon\r\n\r\nThanks for an excellent job! 
Datasette is such an obviously good idea (once you have that idea!) and so well done. The only thing that I miss is the ability to add extra routes (with associated jinja2 templates). For most of the datasets that I would like to publish, I would also like at least a page that describes the data (semantics, provenance, biases...) and a page explaining our cookie and privacy policies (which would allow us to use something like Google Analytics).\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/486/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 448395665, "node_id": "MDU6SXNzdWU0NDgzOTU2NjU=", "number": 22, "title": "Release notes for 1.0", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4348046, "label": "1.0"}, "comments": 2, "created_at": "2019-05-25T00:58:03Z", "updated_at": "2019-05-25T01:18:27Z", "closed_at": "2019-05-25T01:06:52Z", "author_association": "OWNER", "pull_request": null, "body": "https://github.com/simonw/sqlite-utils/compare/0.14...251e473", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/22/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 450032134, "node_id": "MDU6SXNzdWU0NTAwMzIxMzQ=", "number": 495, "title": "facet_m2m gets confused by multiple relationships", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-05-29T21:37:28Z", "updated_at": "2020-12-17T05:08:22Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I got this for a database I was playing with:\r\n\r\n[screenshot: hackathon__hacks_project__24_rows]\r\n\r\nI think this is because of these three tables:\r\n\r\n[screenshot: hackathon__select___from_sqlite_master_]\r\n\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/495/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 457201907, "node_id": "MDU6SXNzdWU0NTcyMDE5MDc=", "number": 513, "title": "Is it possible to publish to Heroku despite slug size being too large?", "user": {"value": 7936571, "label": "chrismp"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-06-18T00:12:02Z", "updated_at": "2019-06-21T22:35:54Z", "closed_at": "2019-06-21T22:35:54Z", "author_association": "NONE", "pull_request": null, "body": "I'm trying to push more than 1.5GB worth of SQLite databases -- 535MB compressed -- to Heroku but I get this error when I run the `datasette publish heroku` command.\r\n\r\n Compiled slug size: 535.5M is too large (max is 500M).\r\n\r\nCan I publish the databases and make datasette work on Heroku 
despite the large slug size?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/513/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 459627549, "node_id": "MDU6SXNzdWU0NTk2Mjc1NDk=", "number": 523, "title": "Show total/unfiltered row count when filtering", "user": {"value": 2657547, "label": "rixx"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-06-23T22:56:48Z", "updated_at": "2019-06-24T01:38:14Z", "closed_at": "2019-06-24T01:38:14Z", "author_association": "CONTRIBUTOR", "pull_request": null, "body": "When I'm seeing a filtered view of a table, I'd like to be able to see something like '2 rows where status != \"closed\" (of 1000 total)' to have a context for the data I'm seeing \u2013 e.g. currently my database is being filled by an importer, so this information would be super helpful.\r\n\r\nSince this information would be a performance hit, maybe something like '12 rows where status != \"closed\" (of ??? total)' with lazy-loading on-click(?) could be applied (Or via a \"How many total?\" tooltip, or \u2026)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/523/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 459714943, "node_id": "MDU6SXNzdWU0NTk3MTQ5NDM=", "number": 525, "title": "Add section on sqlite-utils enable-fts to the search documentation", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": {"value": 9599, "label": "simonw"}, "milestone": null, "comments": 2, "created_at": "2019-06-24T06:39:16Z", "updated_at": "2019-06-24T16:36:35Z", "closed_at": "2019-06-24T16:29:43Z", "author_association": "OWNER", "pull_request": null, "body": "https://datasette.readthedocs.io/en/stable/full_text_search.html already has a section about csvs-to-sqlite; sqlite-utils is even more relevant.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/525/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 462423839, "node_id": "MDU6SXNzdWU0NjI0MjM4Mzk=", "number": 33, "title": "index_foreign_keys / index-foreign-keys utilities", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-06-30T16:42:03Z", "updated_at": "2019-06-30T23:54:11Z", "closed_at": "2019-06-30T23:50:55Z", "author_association": "OWNER", "pull_request": null, "body": "Sometimes it's good to have indices on all columns that are foreign keys, to allow for efficient reverse lookups.\r\n\r\nThis would be a useful utility:\r\n\r\n $ sqlite-utils index-foreign-keys database.db\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", 
"active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/33/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 462430920, "node_id": "MDU6SXNzdWU0NjI0MzA5MjA=", "number": 35, "title": "table.update(...) method", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-06-30T18:06:15Z", "updated_at": "2019-07-28T15:43:52Z", "closed_at": "2019-07-28T15:43:52Z", "author_association": "OWNER", "pull_request": null, "body": "Spun off from #23 - this method will allow a user to update a specific row.\r\n\r\nCurrently the only way to do that it is to call `.upsert({full record})` with the primary key field matching an existing record - but this does not support partial updates.\r\n```python\r\ndb[\"events\"].update(3, {\"name\": \"Renamed\"})\r\n```\r\nThis method only works on an existing table, so there's no need for a `pk=\"id\"` specifier - it can detect the primary key by looking at the table.\r\n\r\nIf the primary key is compound the first argument can be a tuple:\r\n```python\r\ndb[\"events_venues\"].update((3, 2), {\"custom_label\": \"Label\"})\r\n```\r\nThe method can be called without the second dictionary argument. Doing this selects the row specified by the primary key (throwing an error if it does not exist) and remembers it so that chained operations can be carried out - see proposal in https://github.com/simonw/sqlite-utils/issues/23#issuecomment-507055345\r\n", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/35/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 464040911, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk0NDAwNDQ2", "number": 539, "title": "Secret plugin configuration options", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-04T03:21:20Z", "updated_at": "2019-07-04T05:36:45Z", "closed_at": "2019-07-04T05:36:45Z", "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/539", "body": "Refs #538 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/539/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 464779810, "node_id": "MDU6SXNzdWU0NjQ3Nzk4MTA=", "number": 541, "title": "Plugin hook for adding extra template context variables", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-05T21:37:05Z", "updated_at": "2019-07-06T00:05:59Z", "closed_at": "2019-07-06T00:05:59Z", "author_association": "OWNER", "pull_request": null, "body": "It turns out I need this for https://github.com/simonw/datasette-auth-github/issues/5\r\n\r\nIt can be modelled on the 
`extra_body_script` hook: https://datasette.readthedocs.io/en/stable/plugins.html#extra-body-script-template-database-table-view-name-datasette", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/541/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 464987783, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk1MTI3MjEz", "number": 546, "title": "Facet by delimiter", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-07T20:06:05Z", "updated_at": "2019-11-18T23:46:01Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/546", "body": "Refs #510", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/546/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 464990184, "node_id": "MDU6SXNzdWU0NjQ5OTAxODQ=", "number": 547, "title": "Release notes for 0.29", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": {"value": 4471010, "label": "Datasette 0.29"}, "comments": 2, "created_at": "2019-07-07T20:30:28Z", "updated_at": "2019-07-08T03:31:59Z", "closed_at": "2019-07-08T03:31:59Z", "author_association": "OWNER", "pull_request": null, "body": "There's a lot of stuff... 
https://github.com/simonw/datasette/compare/0.28...master", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/547/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 465003070, "node_id": "MDU6SXNzdWU0NjUwMDMwNzA=", "number": 551, "title": "Ship many-to-many faceting support (and facet-by-delimiter)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-07T23:11:45Z", "updated_at": "2019-07-08T15:45:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/551/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 467862459, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk3NDEyNDY0", "number": 38, "title": "table.update() method", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-14T17:03:49Z", "updated_at": "2019-07-28T15:43:51Z", "closed_at": "2019-07-28T15:43:51Z", "author_association": "OWNER", "pull_request": "simonw/sqlite-utils/pulls/38", "body": "Refs #35\r\n\r\nStill to do:\r\n\r\n- [x] Unit tests\r\n- [x] Switch to using `.get()`\r\n- [x] Better exceptions, plus unit tests for what happens if pk does not exist\r\n- [x] Documentation\r\n- [x] Ensure compound primary keys work properly\r\n- [x] `alter=True` support", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/38/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 470542938, "node_id": "MDU6SXNzdWU0NzA1NDI5Mzg=", "number": 562, "title": "Facet by array shouldn't suggest for arrays that are not arrays-of-strings", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-19T20:51:29Z", "updated_at": "2019-11-01T19:42:10Z", "closed_at": "2019-11-01T19:37:55Z", "author_association": "OWNER", "pull_request": null, "body": "It's triggering for arrays that look like this at the moment:\r\n```json\r\n[\r\n {\r\n \"type\": \"HKWorkoutEventTypeSegment\",\r\n \"date\": \"2019-05-21 09:43:50 -0700\",\r\n \"duration\": \"12.2780519704024\",\r\n \"durationUnit\": \"min\"\r\n },\r\n {\r\n \"type\": \"HKWorkoutEventTypeSegment\",\r\n \"date\": \"2019-05-21 09:43:50 -0700\",\r\n \"duration\": \"19.467273102204\",\r\n \"durationUnit\": \"min\"\r\n }\r\n]\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/562/reactions\", \"total_count\": 0, 
\"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 470691622, "node_id": "MDU6SXNzdWU0NzA2OTE2MjI=", "number": 5, "title": "Add progress bar", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-20T16:29:07Z", "updated_at": "2019-07-22T03:30:13Z", "closed_at": "2019-07-22T02:49:22Z", "author_association": "MEMBER", "pull_request": null, "body": "Showing a progress bar would be nice, using Click.\r\n\r\nThe easiest way to do this would probably be be to hook it up to the length of the compressed content, and update it as this code pushes more XML bytes through the parser:\r\n\r\nhttps://github.com/dogsheep/healthkit-to-sqlite/blob/d64299765064501f4efdd9a0b21dbdba9ec4287f/healthkit_to_sqlite/utils.py#L6-L10", "repo": {"value": 197882382, "label": "healthkit-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 471628483, "node_id": "MDU6SXNzdWU0NzE2Mjg0ODM=", "number": 44, "title": "Utilities for building lookup tables", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-23T10:59:58Z", "updated_at": "2019-07-23T13:07:01Z", "closed_at": "2019-07-23T13:07:01Z", "author_association": "OWNER", "pull_request": null, "body": "While building https://github.com/dogsheep/healthkit-to-sqlite I found a need for a neat mechanism for easily building lookup tables - tables where each unique value in a column is replaced by a foreign key to a separate table.\r\n\r\ncsvs-to-sqlite currently creates those with its \"extract\" mechanism - but that's written as custom code against Pandas. 
I'd like to eventually replace Pandas with sqlite-utils there.\r\n\r\nSee also #42 ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/44/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 471818939, "node_id": "MDU6SXNzdWU0NzE4MTg5Mzk=", "number": 48, "title": "Jupyter notebook demo of the library, launchable on Binder", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-23T17:05:05Z", "updated_at": "2022-01-26T02:08:46Z", "closed_at": "2022-01-26T02:08:39Z", "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/48/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 487847945, "node_id": "MDExOlB1bGxSZXF1ZXN0MzEzMDA3NDgz", "number": 56, "title": "Escape the table name in populate_fts and search.", "user": {"value": 49260, "label": "amjith"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-09-01T06:29:05Z", "updated_at": "2019-09-02T17:23:21Z", "closed_at": "2019-09-02T17:23:21Z", "author_association": "CONTRIBUTOR", "pull_request": "simonw/sqlite-utils/pulls/56", "body": "The table names weren't escaped using double quotes in the populate_fts method. \r\n\r\nReproducible case: \r\n```\r\n>>> import sqlite_utils\r\n>>> db = sqlite_utils.Database(\"abc.db\")\r\n>>> db[\"http://example.com\"].insert_all([\r\n... {\"id\": 1, \"age\": 4, \"name\": \"Cleo\"},\r\n... {\"id\": 2, \"age\": 2, \"name\": \"Pancakes\"}\r\n... 
], pk=\"id\")\r\n\r\n>>> db[\"http://example.com\"].enable_fts([\"name\"])\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n db[\"http://example.com\"].enable_fts([\"name\"])\r\n File \"/home/amjith/.virtualenvs/itsysearch/lib/python3.7/site-packages/sqlite_utils/db.py\", l\r\nine 705, in enable_fts\r\n self.populate_fts(columns)\r\n File \"/home/amjith/.virtualenvs/itsysearch/lib/python3.7/site-packages/sqlite_utils/db.py\", l\r\nine 715, in populate_fts\r\n self.db.conn.executescript(sql)\r\nsqlite3.OperationalError: unrecognized token: \":\"\r\n>>> \r\n```", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/56/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 488835586, "node_id": "MDU6SXNzdWU0ODg4MzU1ODY=", "number": 4, "title": "Command for importing data from a Twitter Export file", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-09-03T21:34:13Z", "updated_at": "2019-10-11T06:45:02Z", "closed_at": "2019-10-11T06:45:02Z", "author_association": "MEMBER", "pull_request": null, "body": "Twitter lets you export all of your data as an archive file: https://twitter.com/settings/your_twitter_data\r\n\r\nA command for importing this data into SQLite would be extremely useful.\r\n\r\n $ twitter-to-sqlite import twitter.db path-to-archive.zip\r\n", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/4/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 491219910, "node_id": "MDU6SXNzdWU0OTEyMTk5MTA=", "number": 61, "title": "importing CSV to SQLite as library", "user": {"value": 17739, "label": "witeshadow"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-09-09T17:12:40Z", "updated_at": "2019-11-04T16:25:01Z", "closed_at": "2019-11-04T16:25:01Z", "author_association": "NONE", "pull_request": null, "body": "CSV can be imported to SQLite when used CLI, but I don't see documentation for when using as library. 
", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/61/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 493670426, "node_id": "MDU6SXNzdWU0OTM2NzA0MjY=", "number": 3, "title": "Command to fetch all repos belonging to a user or organization", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-09-14T21:54:21Z", "updated_at": "2019-09-17T00:17:53Z", "closed_at": "2019-09-17T00:17:53Z", "author_association": "MEMBER", "pull_request": null, "body": "How about this:\r\n\r\n $ github-to-sqlite repos simonw", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/3/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 493671014, "node_id": "MDU6SXNzdWU0OTM2NzEwMTQ=", "number": 5, "title": "Add \"incomplete\" boolean to users table for incomplete profiles", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-09-14T22:01:50Z", "updated_at": "2020-03-23T19:23:31Z", "closed_at": "2020-03-23T19:23:30Z", "author_association": "MEMBER", "pull_request": null, "body": "User profiles that are fetched from e.g. stargazers (#4) are incomplete - they have a login but they don't have name, company etc. \r\n\r\nAdd a `incomplete` boolean flag to the `users` table to record this. Then later I can add a `backfill-users` command which loops through and fetches missing data for those incomplete profiles.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 494685791, "node_id": "MDU6SXNzdWU0OTQ2ODU3OTE=", "number": 574, "title": "Improve usage description of --host option", "user": {"value": 132978, "label": "terrycojones"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-09-17T15:12:12Z", "updated_at": "2019-11-01T21:58:17Z", "closed_at": "2019-11-01T21:57:54Z", "author_association": "NONE", "pull_request": null, "body": "It would be nice if the `--host` option had a clearer description. I tried to get datasette running on an AWS instance and it took a while to realize it was only listening on localhost. So I wanted to make it listen on an non-localhost interface and tried giving a couple of values to `--host` (a host name, then an interface name), but none of them did. In the end I read the source to see that the option is passed to `uvicorn` and looked at the uvicorn docs, which also didn't help. 
Then I searched the web for \"example running datasette on a host\" which led me to https://github.com/simonw/datasette/issues/514 where I saw someone using `-h 0.0.0.0`. I tried that and it works. That usage could be mentioned somewhere, and might save someone else some time.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/574/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"} {"id": 503190241, "node_id": "MDU6SXNzdWU1MDMxOTAyNDE=", "number": 584, "title": "Codec error in some CSV exports", "user": {"value": 9599, "label": "simonw"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-10-07T01:15:34Z", "updated_at": "2021-06-17T18:13:20Z", "closed_at": "2019-10-18T05:23:16Z", "author_association": "OWNER", "pull_request": null, "body": "Got this exploring my Swarm checkins:\r\n\r\n![448DBFC4-71F8-4846-83C0-BEA511B2157A](https://user-images.githubusercontent.com/9599/66279259-3af53480-e865-11e9-9651-04fd2d895392.jpeg)\r\n\r\n`/swarm/stickers.csv?stickerType=messageOnly&_size=max`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/584/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}