{"id": 348043884, "node_id": "MDU6SXNzdWUzNDgwNDM4ODQ=", "number": 357, "title": "Plugin hook for loading metadata.json", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2018-08-06T19:00:01Z", "updated_at": "2020-06-21T22:19:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "For https://github.com/simonw/russian-ira-facebook-ads-datasette/tree/af6d956995e14afd585c35a6a06bb01da32043ba I wrote a script to convert YAML to JSON because YAML is a better format for embedding multi-line HTML descriptions and canned SQL statements.\r\n\r\nExample yaml metadata file: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/af6d956995e14afd585c35a6a06bb01da32043ba/russian-ads-metadata.yaml\r\n\r\nIt would be useful if Datasette could be fed a YAML file directly:\r\n\r\n datasette -m metadata.yaml\r\n\r\nQuestion is... should this be a native feature (hence adding a YAML dependency) or should it be handled by a `datasette-metadata-yaml` plugin, using a new plugin hook for loading metadata? If so, what would other use-cases for that plugin hook be?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/357/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 639542974, "node_id": "MDU6SXNzdWU2Mzk1NDI5NzQ=", "number": 47, "title": "Fall back to FTS4 if FTS5 is not available", "user": {"value": 73579, "label": "hpk42"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-06-16T10:11:23Z", "updated_at": "2020-06-17T20:13:48Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "got this with version 0.21.1 from pypi. twitter-to-sqlite auth worked but then \"twitter-to-sqlite user-timeline USER.db\" produced a tracekback ending in \"no such module: FTS5\". 
", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 639993467, "node_id": "MDU6SXNzdWU2Mzk5OTM0Njc=", "number": 850, "title": "Proof of concept for Datasette on AWS Lambda with EFS", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 25, "created_at": "2020-06-16T21:48:31Z", "updated_at": "2020-06-16T23:52:16Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://aws.amazon.com/about-aws/whats-new/2020/06/aws-lambda-support-for-amazon-elastic-file-system-now-generally-/\r\n\r\nIf Datasette can run on Lambda with access to EFS it could both read AND write large databases there.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/850/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 574021194, "node_id": "MDU6SXNzdWU1NzQwMjExOTQ=", "number": 691, "title": "--reload sould reload server if code in --plugins-dir changes", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-03-02T14:42:21Z", "updated_at": "2020-06-14T02:35:17Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/691/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 638238548, "node_id": "MDU6SXNzdWU2MzgyMzg1NDg=", "number": 845, "title": "Code coverage should ignore files in .coveragerc", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-06-13T21:45:42Z", "updated_at": "2020-06-13T21:46:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I'm not sure why this is, but the code coverage I have running in a GitHub Action doesn't take my `.coveragerc` file into account. 
It should:\r\n\r\nhttps://github.com/simonw/datasette/blob/cf7a2bdb404734910ec07abc7571351a2d934828/.github/workflows/test-coverage.yml#L31-L35\r\n\r\nHere's the bit that's ignored:\r\n\r\nhttps://github.com/simonw/datasette/blob/cf7a2bdb404734910ec07abc7571351a2d934828/.coveragerc#L1-L2\r\n\r\nAs a result my coverage score is 84%, when it should be 92%:\r\n```\r\n2020-06-13T21:41:18.4404252Z ----------- coverage: platform linux, python 3.8.3-final-0 -----------\r\n2020-06-13T21:41:18.4404570Z Name Stmts Miss Cover\r\n2020-06-13T21:41:18.4404971Z --------------------------------------------------------\r\n2020-06-13T21:41:18.4405227Z datasette/__init__.py 3 0 100%\r\n2020-06-13T21:41:18.4405441Z datasette/__main__.py 3 3 0%\r\n2020-06-13T21:41:18.4405668Z datasette/_version.py 279 279 0%\r\n2020-06-13T21:41:18.4405921Z datasette/actor_auth_cookie.py 20 0 100%\r\n2020-06-13T21:41:18.4406135Z datasette/app.py 499 27 95%\r\n2020-06-13T21:41:18.4406343Z datasette/cli.py 162 45 72%\r\n2020-06-13T21:41:18.4406553Z datasette/database.py 236 17 93%\r\n2020-06-13T21:41:18.4406761Z datasette/default_permissions.py 40 0 100%\r\n2020-06-13T21:41:18.4406975Z datasette/facets.py 210 24 89%\r\n2020-06-13T21:41:18.4407186Z datasette/filters.py 122 7 94%\r\n2020-06-13T21:41:18.4407394Z datasette/hookspecs.py 34 0 100%\r\n2020-06-13T21:41:18.4407600Z datasette/inspect.py 36 23 36%\r\n2020-06-13T21:41:18.4407807Z datasette/plugins.py 34 6 82%\r\n2020-06-13T21:41:18.4408014Z datasette/publish/__init__.py 0 0 100%\r\n2020-06-13T21:41:18.4408240Z datasette/publish/cloudrun.py 57 2 96%\r\n2020-06-13T21:41:18.4408786Z datasette/publish/common.py 19 1 95%\r\n2020-06-13T21:41:18.4409029Z datasette/publish/heroku.py 97 13 87%\r\n2020-06-13T21:41:18.4409243Z datasette/renderer.py 63 4 94%\r\n2020-06-13T21:41:18.4409450Z datasette/sql_functions.py 5 0 100%\r\n2020-06-13T21:41:18.4410480Z datasette/tracer.py 87 16 82%\r\n2020-06-13T21:41:18.4410972Z datasette/utils/__init__.py 504 31 94%\r\n2020-06-13T21:41:18.4411755Z datasette/utils/asgi.py 264 24 91%\r\n2020-06-13T21:41:18.4412173Z datasette/utils/shutil_backport.py 44 44 0%\r\n2020-06-13T21:41:18.4412822Z datasette/version.py 4 0 100%\r\n2020-06-13T21:41:18.4413562Z datasette/views/__init__.py 0 0 100%\r\n2020-06-13T21:41:18.4414276Z datasette/views/base.py 288 19 93%\r\n2020-06-13T21:41:18.4414579Z datasette/views/database.py 120 2 98%\r\n2020-06-13T21:41:18.4414860Z datasette/views/index.py 57 2 96%\r\n2020-06-13T21:41:18.4415379Z datasette/views/special.py 72 16 78%\r\n2020-06-13T21:41:18.4418994Z datasette/views/table.py 418 18 96%\r\n2020-06-13T21:41:18.4428811Z --------------------------------------------------------\r\n2020-06-13T21:41:18.4430394Z TOTAL 3777 623 84%\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/845/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 628156527, "node_id": "MDU6SXNzdWU2MjgxNTY1Mjc=", "number": 789, "title": "Mechanism for enabling pluggy tracing", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-06-01T05:10:14Z", "updated_at": "2020-06-01T05:11:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, 
"body": "Could be useful for debugging plugins: https://pluggy.readthedocs.io/en/latest/#call-tracing\r\n\r\nI tried this out by adding these two lines in `plugins.py`:\r\n```python\r\npm = pluggy.PluginManager(\"datasette\")\r\npm.add_hookspecs(hookspecs)\r\n# Added these:\r\npm.trace.root.setwriter(print)\r\npm.enable_tracing()\r\n```\r\nOutput looked something like this:\r\n```\r\nINFO: 127.0.0.1:52724 - \"GET /-/-/static/app.css HTTP/1.1\" 404 Not Found\r\n actor_from_request [hook]\r\n datasette: \r\n request: \r\n\r\n finish actor_from_request --> [] [hook]\r\n\r\n extra_body_script [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n view_name: json_data\r\n datasette: \r\n\r\n finish extra_body_script --> [] [hook]\r\n\r\n extra_template_vars [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n view_name: json_data\r\n request: \r\n datasette: \r\n\r\n finish extra_template_vars --> [] [hook]\r\n\r\n extra_css_urls [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n datasette: \r\n\r\n finish extra_css_urls --> [] [hook]\r\n\r\n extra_js_urls [hook]\r\n template: show_json.html\r\n database: None\r\n table: None\r\n datasette: \r\n\r\n finish extra_js_urls --> [] [hook]\r\n\r\nINFO: 127.0.0.1:52724 - \"GET /-/actor HTTP/1.1\" 200 OK\r\n actor_from_request [hook]\r\n datasette: \r\n request: \r\n\r\n finish actor_from_request --> [] [hook]\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/789/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 374953006, "node_id": "MDU6SXNzdWUzNzQ5NTMwMDY=", "number": 369, "title": "Interface should show same JSON shape options for custom SQL queries", "user": {"value": 416374, "label": "gfrmin"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 3268330, "label": "Datasette 1.0"}, "comments": 2, "created_at": "2018-10-29T10:39:15Z", "updated_at": "2020-05-30T17:24:06Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "At the moment the page returning a custom SQL query shows the JSON and CSV APIs, but not the multiple JSON shapes. 
However, adding the `_shape` parameter to the JSON API URL manually still works, so perhaps there should be consistency in the interface by having the same \"Advanced Export\" box for custom SQL queries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/369/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 626582657, "node_id": "MDU6SXNzdWU2MjY1ODI2NTc=", "number": 779, "title": "Make human_description_en explicitly available to output renderers", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-28T14:59:54Z", "updated_at": "2020-05-28T14:59:54Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "`datasette-atom` uses this:\r\n\r\nhttps://github.com/simonw/datasette-atom/blob/df98a6c43a443224b6cd232f84703ec297ef046b/datasette_atom/__init__.py#L36-L37\r\n```python\r\n if data.get(\"human_description_en\"):\r\n title += \": \" + data[\"human_description_en\"]\r\n```\r\nIt's a nice way to generate a useful title for a filtered table.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/779/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 612382643, "node_id": "MDU6SXNzdWU2MTIzODI2NDM=", "number": 758, "title": "Question: Access to immutable database-path", "user": {"value": 2181410, "label": "clausjuhl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2020-05-05T07:01:18Z", "updated_at": "2020-05-28T08:23:27Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi Simon\r\n\r\nIs there anywhere in the app-context where one can access the hashed urlpath of the database? Currently it's included in the template-context (`databases[0][\"path\")` when rendering urls of the database (eg. `/db-44b06v9/cases`...), but where can I find the hashed url when rendering the index-page? I'm trying to avoid redirects. 
Thanks!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/758/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 621486115, "node_id": "MDU6SXNzdWU2MjE0ODYxMTU=", "number": 27, "title": "photos_with_apple_metadata view should include labels", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-20T06:06:17Z", "updated_at": "2020-05-20T06:06:17Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://dogsheep-photos.dogsheep.net/public/photos_with_apple_metadata?place_city=New+Orleans&_facet=place_city&_facet_array=albums&_facet_array=persons\r\n\r\nHere's one way to add that:\r\n```sql\r\n select\r\n rowid,\r\n photo,\r\n (\r\n select\r\n json_group_array(\r\n json_object(\r\n 'label',\r\n normalized_string,\r\n 'href',\r\n '/photos/labelled?_hide_sql=1&label=' || normalized_string\r\n )\r\n )\r\n from\r\n labels\r\n where\r\n labels.uuid = photos_with_apple_metadata.uuid\r\n ) as labels,\r\n date,\r\n```", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/27/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 621323348, "node_id": "MDU6SXNzdWU2MjEzMjMzNDg=", "number": 24, "title": "Configurable URL for images", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-19T22:25:56Z", "updated_at": "2020-05-20T06:00:29Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "This is hard-coded at the moment, which is bad:\r\nhttps://github.com/dogsheep/photos-to-sqlite/blob/d5d69b9019703c47bc251444838578dd752801e2/photos_to_sqlite/cli.py#L269-L272", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/24/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 615626118, "node_id": "MDU6SXNzdWU2MTU2MjYxMTg=", "number": 22, "title": "Try out ExifReader", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2020-05-11T06:32:13Z", "updated_at": "2020-05-14T05:59:53Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://pypi.org/project/ExifReader/\r\n\r\nNew fork that should be able to handle EXIF in HEIC files.\r\n\r\nForked here: https://github.com/ianare/exif-py/issues/102#issuecomment-626376522\r\n\r\nRefs #3 ", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 616087149, "node_id": "MDU6SXNzdWU2MTYwODcxNDk=", "number": 765, "title": "publish heroku should default to currently tagged version", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-11T18:24:06Z", "updated_at": "2020-05-11T18:25:43Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Had a report that deploying to Heroku was using the previously installed version of Datasette, not the latest.\r\n\r\nCould be because of this:\r\n\r\nhttps://github.com/simonw/datasette/blob/af6c6c5d6f929f951c0e63bfd1c82e37a071b50f/datasette/publish/heroku.py#L172-L179\r\n\r\nHeroku documentation recommends pinning to specific versions https://devcenter.heroku.com/articles/python-pip\r\n\r\nSo... we could ensure we default to an install value of `[\"datasette>=current_tag\"]`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/765/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 613491342, "node_id": "MDU6SXNzdWU2MTM0OTEzNDI=", "number": 762, "title": "Experiment with PRAGMA hard_heap_limit ", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-05-06T17:33:23Z", "updated_at": "2020-05-07T03:08:44Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This was added in SQLite 2020-01-22 (3.31.0): https://www.sqlite.org/changes.html#version_3_31_0\r\n\r\n> Add the [sqlite3_hard_heap_limit64()](https://www.sqlite.org/c3ref/hard_heap_limit64.html) interface and the corresponding [PRAGMA hard_heap_limit](https://www.sqlite.org/pragma.html#pragma_hard_heap_limit) command. 
\r\n\r\nThis sounds like it could be a nice extra safety measure.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/762/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 613422636, "node_id": "MDU6SXNzdWU2MTM0MjI2MzY=", "number": 760, "title": "Way of seeing full schema for a database", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-05-06T15:46:08Z", "updated_at": "2020-05-06T23:49:06Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I find myself wanting to quickly figure out all of the BLOB columns in a database.\r\n\r\nA `/-/schema` page showing the full schema (actually since it's per-database probably `/dbname/-/schema` or `/-/schema/dbname`) would be really handy.\r\n\r\nIt would need to be carefully constructed from various queries against `sqlite_master` - just doing `select * from sqlite_master where type='table'` isn't quite enough because I also want to show indexes, triggers etc.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/760/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 612860758, "node_id": "MDU6SXNzdWU2MTI4NjA3NTg=", "number": 18, "title": "Switch CI solution to GitHub Actions with a macOS runner", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-05-05T20:03:50Z", "updated_at": "2020-05-05T23:49:18Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Refs #17.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/18/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 612287234, "node_id": "MDU6SXNzdWU2MTIyODcyMzQ=", "number": 16, "title": "Import machine-learning detected labels (dog, llama etc) from Apple Photos", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 13, "created_at": "2020-05-05T02:45:43Z", "updated_at": "2020-05-05T05:38:16Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Follow-on from #1. Apple Photos runs some very sophisticated machine learning on-device to figure out if photos are of dogs, llamas and so on. 
I really want to extract those labels out into my own database.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 1, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 602533300, "node_id": "MDU6SXNzdWU2MDI1MzMzMDA=", "number": 1, "title": "Import photo metadata from Apple Photos into SQLite", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": {"value": 5324096, "label": "Apple Photos online and securely browsable"}, "comments": 8, "created_at": "2020-04-18T19:23:26Z", "updated_at": "2020-05-04T02:41:40Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Faces, albums, locations, that kind of thing.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 608512747, "node_id": "MDU6SXNzdWU2MDg1MTI3NDc=", "number": 14, "title": "Annotate photos using the Google Cloud Vision API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2020-04-28T18:09:03Z", "updated_at": "2020-04-28T18:19:06Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "It can detect faces, run OCR, do image labeling (it knows what a lemur is!) 
and do object localization where it identifies objects and returns bounding polygons for them.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/14/reactions\", \"total_count\": 3, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 607888367, "node_id": "MDU6SXNzdWU2MDc4ODgzNjc=", "number": 13, "title": "Also upload movie files", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-04-27T22:11:25Z", "updated_at": "2020-04-28T00:39:45Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "The `upload` command currently only handles static images:\r\n\r\nhttps://github.com/dogsheep/photos-to-sqlite/blob/d939455af00e07866686457ee2fcb9b2d1b7194e/photos_to_sqlite/utils.py#L26-L33\r\n\r\nNeed to cover movies taken by my phone and DSLR too.", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/13/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 606033104, "node_id": "MDU6SXNzdWU2MDYwMzMxMDQ=", "number": 12, "title": "If less than 500MB, show size in MB not GB", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-04-24T04:35:01Z", "updated_at": "2020-04-24T04:35:25Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Just saw this:\r\n```\r\nUploading 0.05 GB\r\n```", "repo": {"value": 256834907, "label": "dogsheep-photos"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep-photos/issues/12/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 285168503, "node_id": "MDU6SXNzdWUyODUxNjg1MDM=", "number": 176, "title": "Add GraphQL endpoint", "user": {"value": 173848, "label": "yozlet"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 8, "created_at": "2017-12-29T23:21:01Z", "updated_at": "2020-04-21T14:16:24Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Would make it much easier to build React & similar frontends. 
Maybe with https://github.com/graphql-python/sanic-graphql ?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/176/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 602619330, "node_id": "MDU6SXNzdWU2MDI2MTkzMzA=", "number": 45, "title": "Use raise_for_status() everywhere", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-04-19T04:38:28Z", "updated_at": "2020-04-19T04:39:22Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "I keep seeing errors which I think are caused by authentication or rate limit problems but which appear to be unexpected JSON responses - presumably because they are actually an error message.\r\n\r\nRecent example: https://github.com/simonw/jsk-fellows-on-twitter/runs/598892575\r\n\r\nUsing `response.raise_for_status()` everywhere will make these errors less confusing.", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/45/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 530491074, "node_id": "MDU6SXNzdWU1MzA0OTEwNzQ=", "number": 14, "title": "Command for importing events", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-11-29T21:28:58Z", "updated_at": "2020-04-14T19:38:34Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "Eg from https://api.github.com/users/simonw/events\r\n\r\nDocs here: https://developer.github.com/v3/activity/events/#list-events-performed-by-a-user", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/14/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 599776345, "node_id": "MDU6SXNzdWU1OTk3NzYzNDU=", "number": 24, "title": "Feature idea: github-to-sqlite everything ...", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-14T18:34:00Z", "updated_at": "2020-04-14T18:34:00Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "At the moment if you want to pull all your repos, issues, issues comments etc you have to do it with a sequence of separate commands.\r\n\r\nConsider adding a `everything` or `all` command which fetches everything that the tool knows how to fetch, and is designed to be run on a cron in a way that fetches just new stuff each time.", "repo": {"value": 207052882, "label": "github-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": 
\"https://api.github.com/repos/dogsheep/github-to-sqlite/issues/24/reactions\", \"total_count\": 7, \"+1\": 7, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 593006814, "node_id": "MDU6SXNzdWU1OTMwMDY4MTQ=", "number": 715, "title": "Refactor duplicate cell display logic", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2020-04-03T00:58:11Z", "updated_at": "2020-04-03T00:58:11Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The logic for rendering cells in table view and in database (or canned query) view is currently very similar:\r\n\r\nhttps://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/base.py#L514-L539\r\n\r\nCompared with:\r\n\r\nhttps://github.com/simonw/datasette/blob/7656fd64d8b6a32ebc34d89c1b8711cc5ea240f7/datasette/views/table.py#L104-L195\r\n\r\nI'll be changing this a bit in #698 but I should still try to clean this up more further in the future.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/715/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 565064079, "node_id": "MDExOlB1bGxSZXF1ZXN0Mzc1MTgwODMy", "number": 672, "title": "--dirs option for scanning directories for SQLite databases", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 15, "created_at": "2020-02-14T02:25:52Z", "updated_at": "2020-03-27T01:03:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/672", "body": "Refs #417.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/672/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 574035432, "node_id": "MDU6SXNzdWU1NzQwMzU0MzI=", "number": 692, "title": "is_hidden_table context variable on table.html page", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-03-02T15:03:25Z", "updated_at": "2020-03-02T15:03:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It's useful to know if a table is hidden when rendering that page. 
`datasette-configure-fts` for example may want to disallow enabling search on hidden tables.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/692/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 539204432, "node_id": "MDU6SXNzdWU1MzkyMDQ0MzI=", "number": 70, "title": "Implement ON DELETE and ON UPDATE actions for foreign keys", "user": {"value": 26292069, "label": "LucasElArruda"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-12-17T17:19:10Z", "updated_at": "2020-02-27T04:18:53Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi! I did not find any mention on the library about ON DELETE and ON UPDATE actions for foreign keys. Are those expected to be implemented? If not, it would be a nice thing to include!", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/70/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 550293770, "node_id": "MDU6SXNzdWU1NTAyOTM3NzA=", "number": 658, "title": "How do I use the app.css as style sheet?", "user": {"value": 49656826, "label": "null92"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2020-01-15T16:27:57Z", "updated_at": "2020-02-07T00:29:50Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Simon,\r\n\r\nI'm trying to use the app.css (in static folder) as style sheet but the datasette on Heroku simply ignore it! 
I read everything about customization here and on readthedocs but still can't.\r\n\r\nIs this possible?\r\n\r\nThanks!", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/658/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 559964149, "node_id": "MDU6SXNzdWU1NTk5NjQxNDk=", "number": 665, "title": "Introduce a SQL statement parser in Python", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2020-02-04T20:36:05Z", "updated_at": "2020-02-04T20:36:48Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "#254 and #653 are both examples of problems that could be solved using a real SQL parser in Python.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/665/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 546073980, "node_id": "MDU6SXNzdWU1NDYwNzM5ODA=", "number": 74, "title": "Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column", "user": {"value": 15092, "label": "jayvdb"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2020-01-07T04:35:50Z", "updated_at": "2020-01-12T07:21:17Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "openSUSE 15.1 is using python 3.6.5 and click-7.0 , however it has test failures while openSUSE Tumbleweed on py37 passes.\r\n\r\nMost fail on the cli exit code like\r\n```py\r\n[ 74s] =================================== FAILURES ===================================\r\n[ 74s] _________________________________ test_tables __________________________________\r\n[ 74s] \r\n[ 74s] db_path = '/tmp/pytest-of-abuild/pytest-0/test_tables0/test.db'\r\n[ 74s] \r\n[ 74s] def test_tables(db_path):\r\n[ 74s] result = CliRunner().invoke(cli.cli, [\"tables\", db_path])\r\n[ 74s] > assert '[{\"table\": \"Gosh\"},\\n {\"table\": \"Gosh2\"}]' == result.output.strip()\r\n[ 74s] E assert '[{\"table\": \"...e\": \"Gosh2\"}]' == ''\r\n[ 74s] E - [{\"table\": \"Gosh\"},\r\n[ 74s] E - {\"table\": \"Gosh2\"}]\r\n[ 74s] \r\n[ 74s] tests/test_cli.py:28: AssertionError\r\n```\r\n\r\npackaging project at https://build.opensuse.org/package/show/home:jayvdb:py-new/python-sqlite-utils\r\n\r\nI'll keep digging into this after I have github-to-sqlite working on Tumbleweed, as I'll need openSUSE Leap 15.1 working before I can submit this into the main python repo.", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/74/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 541274681, "node_id": "MDU6SXNzdWU1NDEyNzQ2ODE=", "number": 2, "title": "Add linkedin-to-sqlite", "user": 
{"value": 881925, "label": "mnp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-12-21T03:13:40Z", "updated_at": "2019-12-21T03:13:40Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "There is an API available. https://developer.linkedin.com/docs/rest-api#\r\n\r\nAt the minimum, I would think contact list and messages would be of interest.", "repo": {"value": 214746582, "label": "dogsheep.github.io"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 527710055, "node_id": "MDU6SXNzdWU1Mjc3MTAwNTU=", "number": 640, "title": "Nicer error message for heroku publish name clash", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-11-24T14:57:07Z", "updated_at": "2019-12-06T07:19:34Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "If you try to publish to Heroku using no set name (i.e. the default `datasette` name) and a project already exists under that name, you get a meaningful error report on the first line followed by Py error messages that drown it out:\r\n\r\n```\r\nCreating datasette... !\r\n \u25b8 Name datasette is already taken\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasette\", line 10, in \r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 764, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 717, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 956, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 555, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/NNNNN/Library/Python/3.7/lib/python/site-packages/datasette/publish/heroku.py\", line 124, in heroku\r\n create_output = check_output(cmd).decode(\"utf8\")\r\n File \"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py\", line 411, in check_output\r\n **kwargs).stdout\r\n File \"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py\", line 512, in run\r\n output=stdout, stderr=stderr)\r\nsubprocess.CalledProcessError: Command '['heroku', 'apps:create', 'datasette', '--json']' returned non-zero exit status 1.\r\n```\r\n\r\nIt would be neater if:\r\n\r\n- the Py error message was caught;\r\n- the report suggested setting a project name using `-n` etc.\r\n\r\nIt may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, 
"reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/640/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 527670799, "node_id": "MDU6SXNzdWU1Mjc2NzA3OTk=", "number": 639, "title": "updating metadata.json without recreating the app", "user": {"value": 172847, "label": "pkoppstein"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 6, "created_at": "2019-11-24T09:19:53Z", "updated_at": "2019-11-30T06:08:50Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I've sucessfully \"uploaded\" an SQLite database (with a metadata.json file) to heroku using:\r\n\r\n $ datasette publish heroku so-sales.db -m metadata.json -n so-sales\r\n\r\nThe question is: how can I modify the (small) metadata.json file without having to upload the (large) SQLite database.\r\n\r\nThe directions on heroku indicate I should run:\r\n\r\n heroku git:clone -a so-sales\r\n\r\nBut this just results in an empty directory with a warning:\r\nwarning: You appear to have cloned an empty repository.\r\n\r\nI've been able to \"clone\" the heroku \"app\" using the command:\r\n\r\n $ heroku slugs:download -a so-sales\r\n\r\nbut this is not a git repository....\r\n\r\nIdeally, it seems to me, there'd be an option of the `datasette` CLI to allow a file\r\nto be updated, or there'd be some way to create a local git \"clone\" of the app\r\nso that the heroku instructions for \"Deploying with git\" would apply.\r\n\r\n(p.s. I ran `datasette publish heroku -m metadata.json -n so-sales`\r\nin the hope that that would not cause the .db file to be wiped, but of course\r\nit was.)\r\n\r\n(p.p.s. 
Thanks for Datasette!)", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/639/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 530468212, "node_id": "MDU6SXNzdWU1MzA0NjgyMTI=", "number": 643, "title": "Set up some basic benchmarks as part of the unit tests", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-11-29T19:24:19Z", "updated_at": "2019-11-29T19:24:19Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://pypi.org/project/pytest-benchmark/ looks great for this.\r\n\r\nHere's how to run it as a github action: https://github.com/rhysd/github-action-benchmark/blob/master/examples/pytest/README.md", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/643/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 464987783, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjk1MTI3MjEz", "number": 546, "title": "Facet by delimiter", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-07T20:06:05Z", "updated_at": "2019-11-18T23:46:01Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/546", "body": "Refs #510", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/546/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 501773982, "node_id": "MDExOlB1bGxSZXF1ZXN0MzIzOTgzNzMy", "number": 579, "title": "New connection pooling", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-10-02T23:22:19Z", "updated_at": "2019-11-15T22:57:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": "simonw/datasette/pulls/579", "body": "See #569", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/579/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 516874735, "node_id": "MDU6SXNzdWU1MTY4NzQ3MzU=", "number": 613, "title": "Basic join support for table view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-11-03T19:12:53Z", "updated_at": "2019-11-03T19:14:01Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I think it would be possible to support basic foreign key joins 
on the table page.\r\n\r\nThe user could specify columns that should result in a join (from a set of suggestions similar to how facets work right now) and they could then be passed as `?_join=city_id` arguments.\r\n\r\nThis feature will make a lot of sense when combined with the ability to show / hide / customize columns, see #292", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/613/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 510076368, "node_id": "MDU6SXNzdWU1MTAwNzYzNjg=", "number": 605, "title": "Support queries at the table level", "user": {"value": 12617395, "label": "bsilverm"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-10-21T15:58:30Z", "updated_at": "2019-10-30T18:55:37Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Per the issue described in [issue #588](https://github.com/simonw/datasette/issues/588), it was determined queries are not supported at the table level. Per my last comment in the issue, I'd like to request support for this as it would help eliminate errors in the event certain tables are not present in the database.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/605/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 457147936, "node_id": "MDU6SXNzdWU0NTcxNDc5MzY=", "number": 512, "title": "\"about\" parameter in metadata does not appear when alone", "user": {"value": 7936571, "label": "chrismp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-06-17T21:04:20Z", "updated_at": "2019-10-11T15:49:13Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Here's an example of metadata I have for one database on datasette.\r\n\r\n```\r\n\"Records-requests\": {\r\n\t\"tables\": {\r\n\t\t\"Some table\": {\r\n\t\t\t\"about\": \"This table has data.\"\r\n\t\t}\r\n\t}\r\n}\r\n```\r\n\r\nThe text in `about` does not show up when I publish the data. 
But it shows up after I add a `\"source\"` parameter in the metadata.\r\n\r\nIs this intended?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/512/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 505673645, "node_id": "MDU6SXNzdWU1MDU2NzM2NDU=", "number": 16, "title": "Do a better job with archived direct message threads", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-11T06:55:21Z", "updated_at": "2019-10-11T06:55:27Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "https://github.com/dogsheep/twitter-to-sqlite/blob/fb2698086d766e0333a55bb73435e7283feeb438/twitter_to_sqlite/archive.py#L98-L99", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/16/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 504720731, "node_id": "MDU6SXNzdWU1MDQ3MjA3MzE=", "number": 1, "title": "Add more details on how to request data from google takeout correctly.", "user": {"value": 1055831, "label": "dazzag24"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-09T15:17:34Z", "updated_at": "2019-10-09T15:17:34Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:\r\n\r\n- My Activity\r\n- Location History\r\n\r\nIn addition unless you specify that \"My Activity\" is downloaded in JSON format the default is HTML. 
This then causes the \r\n\r\n`google-takeout-to-sqlite my-activity takeout.db takeout.zip`\r\n\r\ncommand to fail as it only contains html files not json files.\r\n\r\nThanks", "repo": {"value": 206649770, "label": "google-takeout-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 503053243, "node_id": "MDU6SXNzdWU1MDMwNTMyNDM=", "number": 582, "title": "Datasette should not completely crash if one SQLite database is malformed", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-10-06T05:11:43Z", "updated_at": "2019-10-06T05:11:43Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "If you run Datasette against a number of database files and one of them is malformed, you get this 500 error on the index page:\r\n\r\n\"Error_500\"\r\n\r\nIt would be better if Datasette still worked and listed the databases that were NOT malformed, then showed an inline error message just for the one that could not be accessed.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/582/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 481885279, "node_id": "MDU6SXNzdWU0ODE4ODUyNzk=", "number": 569, "title": "More advanced connection pooling", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2019-08-17T13:20:41Z", "updated_at": "2019-10-02T22:44:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "We need a much smarter way of handling database connections.\r\n\r\nToday, connections are simple: Datasette runs a number of threads (defaults to 3) and each thread gets a threadlocal read-only (or immutable) connection to each attached database - opened on demand.\r\n\r\nFor Datasette Library (#417) I want to support potentially hundreds of attached databases. Datasette Edit (#567) is going to introduce a need for writable connections too.\r\n\r\nI'd also like to be able to run joins across multiple databases (#283) which further complicates things.\r\n\r\nSupporting thousands of open SQLite connections at once feels like it won't provide good enough performance (though I should benchmark that to be sure). 
Some kind of connection pooling is likely to be necessary.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/569/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 488874815, "node_id": "MDU6SXNzdWU0ODg4NzQ4MTU=", "number": 5, "title": "Write tests that simulate the Twitter API", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-09-03T23:55:35Z", "updated_at": "2019-09-03T23:56:28Z", "closed_at": null, "author_association": "MEMBER", "pull_request": null, "body": "I can use betamax for this: https://pypi.org/project/betamax/", "repo": {"value": 206156866, "label": "twitter-to-sqlite"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/5/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 463544206, "node_id": "MDU6SXNzdWU0NjM1NDQyMDY=", "number": 537, "title": "Populate \"endpoint\" key in ASGI scope", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 12, "created_at": "2019-07-03T04:54:47Z", "updated_at": "2019-07-22T06:03:18Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This is a trick used by Starlette so that other layers of ASGI middleware can see which route was selected.\r\n\r\nThey added it here: https://github.com/encode/starlette/commit/34d0097feb6f057bd050d5057df5a2f96b97384e\r\n\r\nIf Datasette supports it as well we can benefit from it if we integrate this sentry_asgi middleware (probably as a `datasette-sentry` plugin): https://github.com/encode/sentry-asgi/blob/c6a42d44d31f85885b79e4ee898683ecf8104971/sentry_asgi/middleware.py#L34-L35", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/537/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 465003070, "node_id": "MDU6SXNzdWU0NjUwMDMwNzA=", "number": 551, "title": "Ship many-to-many faceting support (and facet-by-delimiter)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-07-07T23:11:45Z", "updated_at": "2019-07-08T15:45:23Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/551/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 456569067, "node_id": "MDU6SXNzdWU0NTY1NjkwNjc=", "number": 510, "title": "Ability to 
facet by delimiter (e.g. comma separated fields)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": {"value": 9599, "label": "simonw"}, "milestone": null, "comments": 1, "created_at": "2019-06-15T19:34:41Z", "updated_at": "2019-07-08T15:44:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "E.g. if a field contains \"Tags,With,Commas\" be able to facet them in the same way as `_facet_array=` lets you facet `[\"Tags\", \"With\", \"Commas\"]`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/510/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 462117311, "node_id": "MDU6SXNzdWU0NjIxMTczMTE=", "number": 531, "title": "/database/-/inspect", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-06-28T16:33:41Z", "updated_at": "2019-07-08T15:43:57Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Build `/database/-/inspect` which shows tables, columns, column types and foreign keys\r\n\r\nIt won't show table counts. Or maybe it will include them optionally but only for `-i` databases, in a special area of the JSON reserved for immutable-only inspect details.\r\n\r\n_Originally posted by @simonw in https://github.com/simonw/datasette/issues/465#issuecomment-506797086_", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/531/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 465327844, "node_id": "MDU6SXNzdWU0NjUzMjc4NDQ=", "number": 553, "title": "Potential improvements to facet-by-date", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-07-08T15:37:53Z", "updated_at": "2019-07-08T15:41:55Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "In addition to #483 Tobias had some useful suggestions on Twitter:\r\n\r\nhttps://twitter.com/rixxtr/status/1148253926476701696\r\n> I think for date facets, it might be more meaningful to order them by date, rather than by size? Or offer both? 
I'm *definitely* often interested in size-over-time, so https://data.rixx.de/django_tickets/tickets?_facet_date=created#facet-created \u2026 isn't all that helpful!\r\n\r\nScreenshot of that link:\r\n\r\n\"django_tickets__tickets__29_846_rows\"\r\n", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/553/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 465019882, "node_id": "MDU6SXNzdWU0NjUwMTk4ODI=", "number": 552, "title": "Add --plugin-secret support to \"datasette package\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-07-08T01:46:47Z", "updated_at": "2019-07-08T01:47:30Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split out from #544.\r\n\r\nI think I should combine this with #347 (renaming `datasette package` to `datasette publish docker`).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/552/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 327395270, "node_id": "MDU6SXNzdWUzMjczOTUyNzA=", "number": 296, "title": "Per-database and per-table /-/ URL namespace", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2018-05-29T16:23:13Z", "updated_at": "2019-06-28T16:46:34Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Initially this will be for subsets of `/-/inspect` and `/-/metadata` but it will also give us a URL namespace for future features like `/-/facet` (expanded list of a specific facet, linked to from `...`) and `/-/graph`\r\n\r\nTo start:\r\n\r\n* `/dbname/-/inspect`\r\n* `/dbname/-/metadata`\r\n* `/dbname/tablename/-/inspect`\r\n* `/dbname/tablename/-/metadata`\r\n\r\nThis means we will no longer allow databases or tables to have the name `\"-\"` - I think that's OK\r\n\r\nWe will continue to support rows with a primary key of `\"-\"` at the following URL:\r\n\r\n* `/dbname/tablename/-`", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/296/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 327365110, "node_id": "MDU6SXNzdWUzMjczNjUxMTA=", "number": 294, "title": "inspect should record column types", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 7, "created_at": "2018-05-29T15:10:41Z", "updated_at": "2019-06-28T16:45:28Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "For each table we want to know the columns, their order and what type they are.\r\n\r\nI'm going to break with SQLite defaults a little on this one and 
allow datasette to define additional types - to start with just a `geometry` type for columns that are detected as SpatiaLite geometries.\r\n\r\nPossible JSON design:\r\n\r\n \"columns\": [{\r\n \"name\": \"title\",\r\n \"type\": \"text\"\r\n }, ...]\r\n\r\nRefs #276", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/294/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 459622390, "node_id": "MDU6SXNzdWU0NTk2MjIzOTA=", "number": 522, "title": "Handle case-insensitive headers in a nicer way", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-06-23T21:56:34Z", "updated_at": "2019-06-26T18:48:53Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Spun out from https://github.com/simonw/datasette/pull/518#discussion_r296486289", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/522/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 460095928, "node_id": "MDU6SXNzdWU0NjAwOTU5Mjg=", "number": 528, "title": "Establish a pattern for Datasette plugins built on top of Pandas", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-06-24T21:05:52Z", "updated_at": "2019-06-24T21:05:52Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "The Pandas ecosystem is huge, varied and full of tools that are really good at doing interesting analysis on top of tabular data.\r\n\r\nPandas should not be a dependency of Datasette core, but I think there is a lot of potential in having plugins which use Pandas to apply interesting analysis to data sucked out of Datasette's SQLite tables.\r\n\r\nOne example ([thanks, Tony](https://twitter.com/psychemedia/status/1143259809715752962)): https://github.com/ResidentMario/missingno could form the basis of a fantastic plugin for getting a high-level overview of how complete each column in a table is.\r\n\r\nSome thought is needed here about what shape these kind of plugins might take, and what plugin hooks they would use.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/528/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 459469278, "node_id": "MDU6SXNzdWU0NTk0NjkyNzg=", "number": 515, "title": "Try shrinking official image with docker-slim", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-06-22T12:25:37Z", "updated_at": "2019-06-22T12:25:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This looks really 
promising: https://github.com/docker-slim/docker-slim\r\n\r\nIf it can shave substantial size from our official container reliably we could add it to the automated build process.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/515/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 451585764, "node_id": "MDU6SXNzdWU0NTE1ODU3NjQ=", "number": 499, "title": "Accessibility for non-techie newsies? ", "user": {"value": 7936571, "label": "chrismp"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-06-03T16:49:37Z", "updated_at": "2019-06-05T21:22:55Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out-of-the-box, or as a plugin? ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/499/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 447408527, "node_id": "MDU6SXNzdWU0NDc0MDg1Mjc=", "number": 483, "title": "Option to facet by date using month or year", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 5, "created_at": "2019-05-23T01:25:29Z", "updated_at": "2019-05-29T21:38:27Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Facet by date (from #481) can take datetimes and facet them by the day component.\r\n\r\nhttps://latest.datasette.io/fixtures/facetable?_facet_date=created\r\n\r\nI'd like to also be able to facet by month or year.\r\n\r\nI'm not sure what the best way to achieve this is. Could be two more Facet classes (YearFacet and MonthFacet) but I think it might be nicer if the existing DateFacet could take an optional argument that changed its behaviour. But... 
if I do that, do I expose it in the UI somewhere or is it only available to URL-hackers?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/483/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 449445715, "node_id": "MDU6SXNzdWU0NDk0NDU3MTU=", "number": 491, "title": "Figure out how to use Firebase with cloudrun to enable vanity URLs and CDN caching", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-28T19:48:06Z", "updated_at": "2019-05-28T19:48:35Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "It looks like Firebase can solve a couple of problems with the existing `datasette publish cloudrun` hosting mechanism:\r\n\r\n* The URLs it produces aren't pretty enough. Firebase offers more control over vanity URLs.\r\n* CDN caching (as seen in `datasette publish now`) is great for improving performance and saving money on Cloud Run execution time.\r\n\r\nhttps://firebase.google.com/docs/hosting/cloud-run looks like it can help with both of these.\r\n\r\nLots of interesting questions:\r\n\r\n* Should this be a new `datasette publish firebase` command or should it instead be implemented as additional custom options to `datasette publish cloudrun`?\r\n* How much harder does it become to do account setup?\r\n* How much will this option cost users?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/491/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 447451492, "node_id": "MDU6SXNzdWU0NDc0NTE0OTI=", "number": 484, "title": "Mechanism for displaying summary of m2m relationships in rows on table view", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2019-05-23T05:02:41Z", "updated_at": "2019-05-23T06:34:05Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Part of #354 (m2m support)\r\n\r\nIt would be fantastic if rows that are part of a m2m relationship could display it in an additional column in the table view.\r\n\r\nIt might look something like this: https://russian-ira-facebook-ads.datasettes.com/russian-ads-919cbfd/display_ads?_search=black+lives+matter\r\n\r\n\"russian-ads__display_ads__50_rows_where_where_search_matches__black_lives_matter_\"\r\n\r\nThat example [was achieved](https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/daf51a8c50a78e8bc7971c211005fd85e66ccf64/russian-ads-metadata.yaml#L72-L77) using a custom SQL query and [datasette-json-html](https://github.com/simonw/datasette-json-html) - but I'd like this to be a built-in feature instead.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/484/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, 
\"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 346027040, "node_id": "MDU6SXNzdWUzNDYwMjcwNDA=", "number": 355, "title": "Table view should support filtering via many-to-many relationships", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 10, "created_at": "2018-07-31T04:04:16Z", "updated_at": "2019-05-23T06:04:03Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Parent: #354 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/355/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 275159710, "node_id": "MDU6SXNzdWUyNzUxNTk3MTA=", "number": 128, "title": "Every visualization should have an \"embed\" button", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-11-19T13:38:13Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "At least for the first round of visualizations, any time you construct one using the UI the result should include an \"embed this\" button that returns source code to copy and paste.\r\n\r\nThese examples should use unpkg.com (or similar) URLs with SRI hashes, e.g. https://www.srihash.org - and should load data from the datasette JSON API.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/128/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 275415799, "node_id": "MDU6SXNzdWUyNzU0MTU3OTk=", "number": 137, "title": "Ability to combine multiple SQL queries on a single graph", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-11-20T16:26:57Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "This would make visualizations significantly more powerful. The interesting challenge will be around the URL design. 
It would be useful to be able to combine either multiple explicit SQL queries or multiple queries based on the filter string parameters passed to one or more table views.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/137/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 275755475, "node_id": "MDU6SXNzdWUyNzU3NTU0NzU=", "number": 140, "title": "Heatmap visualization plugin", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2017-11-21T15:34:23Z", "updated_at": "2019-05-13T18:33:51Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Could use https://github.com/scottbedard/svelte-heatmap", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/140/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 288438570, "node_id": "MDU6SXNzdWUyODg0Mzg1NzA=", "number": 179, "title": "More metadata options for template authors ", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-01-14T20:51:04Z", "updated_at": "2019-05-13T18:33:33Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "See this thread on Twitter: https://twitter.com/simonw/status/952637152797458432", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/179/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 299760684, "node_id": "MDU6SXNzdWUyOTk3NjA2ODQ=", "number": 185, "title": "Metadata should be a nested arbitrary KV store", "user": {"value": 222245, "label": "carlmjohnson"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 12, "created_at": "2018-02-23T16:02:07Z", "updated_at": "2019-05-13T18:33:33Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I started using the metadata feature and was surprised to find that values are not inherited from the root object down to specific databases and tables. This makes metadata much less useful and requires a lot of pointless duplication.\r\n\r\nIdeally, metadata should allow arbitrary key-value pairs, and there should be a way of accessing metadata either in an inherited or non-inherited manner. Something like `metadata.page.key` vs. 
`metadata.this.key` might work as an interface.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/185/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 440325850, "node_id": "MDExOlB1bGxSZXF1ZXN0Mjc1OTIzMDY2", "number": 452, "title": "SQL builder utility classes", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2019-05-04T13:57:47Z", "updated_at": "2019-05-04T14:03:04Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/452", "body": "This adds a straightforward set of classes to aid in the construction of\r\nSQL queries.\r\n\r\nMy plan for this was to allow plugins to manipulate the\r\nDatasette-generated SQL in a more structured way. I'm not sure that's\r\ngoing to work, but I feel like this is still a step forward - it\r\nreduces the number of intermediate variables in `TableView.data` which\r\naids readability, and also factors out a lot of the boring string\r\nconcatenation.\r\n\r\nThere are a fair number of minor structure changes in here too as I've\r\ntried to make the ordering of `TableView.data` a bit more logical. As\r\nfar as I can tell, I haven't broken anything...", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/452/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 411257981, "node_id": "MDU6SXNzdWU0MTEyNTc5ODE=", "number": 412, "title": "Linked Data(sette)", "user": {"value": 43340, "label": "sfkeller"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2019-02-18T00:38:14Z", "updated_at": "2019-03-19T10:09:46Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I've a radical feature idea (possibly first as an extension, in order to experiment?): \r\n\r\nI'd like to link to a remote table from a remote database, e.g. with a function \"linked_datasette()\". 
So one could do the following query:\r\n```\r\nSELECT foo.id, foo.a, remote_party.b\r\nFROM foo\r\nJOIN linked_datasette(\"https://parlgov.datasettes.com/parlgov-b42a2f2\") AS remote_party \r\n  ON foo.id=remote_party.id\r\n```\r\nThis is inspired by SPARQL's SERVICE keyword for remote RDF \"endpoints\".\r\n\r\nThere's a foundation in the SQL Standard called SQL/MED (https://rhaas.blogspot.com/2011/01/why-sqlmed-is-cool.html ).\r\n\r\nAnd here's an implementation from me in Postgres FDW to connect another Postgres \"endpoint\": https://pastebin.com/Fz2v64Cz .", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/412/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 400340905, "node_id": "MDU6SXNzdWU0MDAzNDA5MDU=", "number": 402, "title": "Use SQLITE_DBCONFIG_DEFENSIVE plus other recommendations from SQLite security docs", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 3, "created_at": "2019-01-17T15:52:28Z", "updated_at": "2019-01-17T16:15:21Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "> Was just having a skim through the datasette source. Given that the vuln impacts shadow tables, wasn't sure whether these are also covered by the immutable flag. Latest release introduced a SQLITE_DBCONFIG_DEFENSIVE flag that they recommend setting: https://sqlite.org/security.html\r\n\r\nhttps://twitter.com/ignoredambience/status/1085926961413869568", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/402/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 377166793, "node_id": "MDU6SXNzdWUzNzcxNjY3OTM=", "number": 372, "title": "Docker build tools", "user": {"value": 82988, "label": "psychemedia"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-11-04T16:02:35Z", "updated_at": "2018-11-04T16:02:35Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components such as `requirements.txt` files.\r\n\r\nIf plugin/extensions builders want to include additional packages, then things like incremental builds of composable builds that add additional items into a base `datasette` container may be required.\r\n\r\nExamples of Dockerfile generators / container builders:\r\n\r\n- [openshift/source-to-image (s2i)](https://github.com/openshift/source-to-image)\r\n- [jupyter/repo2docker](https://github.com/jupyter/repo2docker)\r\n- [stencila/dockter](https://github.com/stencila/dockter)\r\n\r\nDiscussions / threads (via Binderhub gitter) on:\r\n- [why `repo2docker` not `s2i`](http://words.yuvi.in/post/why-not-s2i/)\r\n- [why `dockter` not `repo2docker`](https://twitter.com/choldgraf/status/1058499607309647872)\r\n- [composability in 
`s2i`](https://trello.com/c/AexIVZNf/1008-8-composable-builds-builds-evg)\r\n\r\nRelates to things like:\r\n\r\n- https://github.com/simonw/datasette/pull/280", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/372/reactions\", \"total_count\": 2, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 2, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 330826972, "node_id": "MDU6SXNzdWUzMzA4MjY5NzI=", "number": 308, "title": "Support extra Heroku apps:create options - region, space, team", "user": {"value": 78156, "label": "annapowellsmith"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-06-08T23:08:33Z", "updated_at": "2018-09-21T14:09:28Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "It would be useful to document how to pass Heroku CLI options on `datasette publish`, e.g. `--region eu`.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/308/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 359075028, "node_id": "MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx", "number": 364, "title": "Support for other types of databases using external connectors", "user": {"value": 11912854, "label": "jsancho-gpl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-09-11T14:31:47Z", "updated_at": "2018-09-11T14:31:47Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/364", "body": "This PR is related to #293, but now all commits have been merged.\r\n\r\nThe purpose is to support other file formats that aren't SQLite, like files with PyTables format. 
I've tried to accomplish that using external connectors published with entry points.\r\n\r\nThe modifications in the original datasette code are minimal and many are in a separated file.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/364/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 355299310, "node_id": "MDExOlB1bGxSZXF1ZXN0MjExODYwNzA2", "number": 363, "title": "Search all apps during heroku publish", "user": {"value": 436032, "label": "kevboh"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-08-29T19:25:10Z", "updated_at": "2018-08-31T14:39:45Z", "closed_at": null, "author_association": "FIRST_TIME_CONTRIBUTOR", "pull_request": "simonw/datasette/pulls/363", "body": "Adds the `-A` option to include apps from all organizations when searching app names for publish.", "repo": {"value": 107914493, "label": "datasette"}, "type": "pull", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/363/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": 0, "state_reason": null} {"id": 344654623, "node_id": "MDU6SXNzdWUzNDQ2NTQ2MjM=", "number": 347, "title": "Rename \"datasette package\" to \"datasette publish docker\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-07-26T00:42:46Z", "updated_at": "2018-07-26T00:42:46Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/347/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 341228846, "node_id": "MDU6SXNzdWUzNDEyMjg4NDY=", "number": 343, "title": "Render boolean fields better by default", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-07-14T11:10:29Z", "updated_at": "2018-07-14T14:17:14Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "These show up as 0 or 1 because sqlite. 
I think Yes/No would be fine in most cases?", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/343/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 318490133, "node_id": "MDU6SXNzdWUzMTg0OTAxMzM=", "number": 241, "title": "Default datasette logging format should be JSON", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-27T17:32:48Z", "updated_at": "2018-07-10T17:45:40Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Structured logs are better. Datasette should default to outputting its HTTP access log lines as newline delimited JSON instead of the Sanic default format it uses at the moment.\r\n\r\nFor improved greppability these logs should have keys ordered in a consistent way. Python's JSON module can do this with ordered dictionaries.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/241/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 314771615, "node_id": "MDU6SXNzdWUzMTQ3NzE2MTU=", "number": 218, "title": "Support custom unit display in order to handle \"$10,000\"", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-16T18:39:31Z", "updated_at": "2018-07-10T17:45:38Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I tried to get Datasette to display `$10,000` using the new units support but we currently only display units as a suffix:\r\n\r\nhttps://github.com/simonw/datasette/blob/10a34f995c70daa37a8a2aa02c3135a4b023a24c/datasette/app.py#L563-L572\r\n\r\nIt would be neat if there was a mechanism for specifying a custom unit display - maybe something like this:\r\n\r\n```\r\n{\r\n \"custom_units\": {\r\n \"us_dollar\": {\r\n \"unit\": \"us_dollar = [] = $\",\r\n \"format\": \"${:,}\"\r\n }\r\n }\r\n}\r\n```", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/218/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 312395790, "node_id": "MDU6SXNzdWUzMTIzOTU3OTA=", "number": 197, "title": "Ability to sort by more than one column", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-09T05:13:30Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189.\r\n\r\nI'd like to support \"sort by X descending, then by Y ascending if there are dupes for X\" as well. 
Suggested syntax for that:\r\n\r\n ?_sort_desc=X&_sort=Y\r\n\r\nWe currently only allow one argument to be sent. We should allow as many arguments as there are columns, for example:\r\n\r\n ?_sort=department&_sort_desc=precinct&_sort=age&_sort_desc=size", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/197/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 312396095, "node_id": "MDU6SXNzdWUzMTIzOTYwOTU=", "number": 198, "title": "Ability to sort with nulls last", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-09T05:15:40Z", "updated_at": "2018-07-10T17:45:37Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Split off from #189\r\n\r\nHere's how to do that in SQL: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+rowid%2C+*+from+%5Bnfl-wide-receivers%2Fadvanced-historical%5D%0D%0Aorder+by+case+when+career_ranypa+is+null+then+1+else+0+end%2C+career_ranypa%2C+rowid\r\n\r\n order by case when career_ranypa is null then 1 else 0 end, career_ranypa", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/198/reactions\", \"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 326778161, "node_id": "MDU6SXNzdWUzMjY3NzgxNjE=", "number": 290, "title": "Consider increasing the default for num_sql_threads (currently 3)", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-27T00:52:41Z", "updated_at": "2018-05-27T00:52:41Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "I ran a very rough micro-benchmark on the new `num_sql_threads` config option (added in #285)\r\n\r\n datasette --config num_sql_threads:1 fivethirtyeight.db\r\n\r\nThen\r\n\r\n ab -n 100 -c 10 'http://127.0.0.1:8011/fivethirtyeight-2628db9/twitter-ratio%2Fsenators'\r\n\r\n| Number of threads | Requests/second |\r\n|---|---|\r\n| 1 | 4.57 |\r\n| 3 | 9.77 |\r\n| 10 | 13.53 |\r\n| 20 | 15.24 |\r\n| 50 | 8.21 | \r\n\r\nThis was on my early 2018 OS X laptop. Need to benchmark in other common environments before making a decision on changing the default. 
That said, the default of 3 was a number I plucked out of thin air.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/290/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 326599525, "node_id": "MDU6SXNzdWUzMjY1OTk1MjU=", "number": 286, "title": "Database hash should include current datasette version", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-05-25T17:03:42Z", "updated_at": "2018-05-25T17:07:36Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Right now deploying a new version of datasette doesn't invalidate existing URLs, so users may still see a cached copy of the old templates.\r\n\r\nWe can fix this by including the current datasette version in the input to the hash function (which is currently just the database file contents).", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/286/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 319449852, "node_id": "MDU6SXNzdWUzMTk0NDk4NTI=", "number": 247, "title": "SQLite code decoupled from Datasette", "user": {"value": 11912854, "label": "jsancho-gpl"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2018-05-02T08:03:28Z", "updated_at": "2018-05-21T15:29:31Z", "closed_at": null, "author_association": "NONE", "pull_request": null, "body": "I'm working on the possibility of using Datasette with other file formats that aren't SQLite, like files with [PyTables](https://github.com/PyTables/PyTables) format.\r\n\r\nIn order to accomplish that, I've started [a fork for decoupling the code related with SQLite](https://github.com/jsancho-gpl/datasette/tree/feature/db-type-plugin) and putting it in an external connector to allow future connectors for a lot of file formats.\r\n\r\nIt'd be nice if you could look at it and suggest improvements for a possible PR.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/247/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 320132682, "node_id": "MDU6SXNzdWUzMjAxMzI2ODI=", "number": 250, "title": "Setup some issue templates", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-05-04T01:49:07Z", "updated_at": "2018-05-04T01:49:07Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://twitter.com/left_pad/status/99216385740464537\r\n\r\nI like the idea of using these to help people understand some of the ways I want to use issues.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": 
null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/250/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 316621102, "node_id": "MDU6SXNzdWUzMTY2MjExMDI=", "number": 235, "title": "Add limit on the size in KB of data returned from a single query", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 2, "created_at": "2018-04-22T23:01:15Z", "updated_at": "2018-04-24T00:30:02Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Datasette limits the number of rows returned to 1,000 and limits the time spent executing a SQL query to 1000ms - and both of these limits can be customized.\r\n\r\nIt does not have a limit on the size of the response returned. It's possible to compose maliciously large SQL responses in a small number of rows using mechanisms like the `group_concat()` aggregate function. It would be good to avoid malicious SQL creating 100MB+ responses and potentially crashing the server.\r\n\r\nI think the easiest place to implement that is here:\r\n\r\nhttps://github.com/simonw/datasette/blob/f3f42957128c1e7ece584d45d9167f2ac003a3b8/datasette/app.py#L175-L190\r\n\r\nCurrently we use `cursor.fetchmany()` to fetch up to 1,001 rows at once. Instead, we could switch to iterating through `cursor.fetchone()` (or just using `for row in cursor`) and keeping a running tally of the size of the response as we go - maybe just using `rough_response_size += len(str(row))`. If that goes above a certain threshold we can terminate the response with an error, like we do with time limits.\r\n\r\nThe bigger challenge here is understanding how well this approach works and what impact it will have on overall Datasette performance. I think I need #33 for this.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/235/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 314834783, "node_id": "MDU6SXNzdWUzMTQ4MzQ3ODM=", "number": 219, "title": "Expose units in the JSON API?", "user": {"value": 45057, "label": "russss"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2018-04-16T22:04:25Z", "updated_at": "2018-04-16T22:04:25Z", "closed_at": null, "author_association": "CONTRIBUTOR", "pull_request": null, "body": "From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map.\r\n\r\nI'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). 
Perhaps the SQL function is a better approach in general.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/219/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 268110769, "node_id": "MDU6SXNzdWUyNjgxMTA3Njk=", "number": 33, "title": "Use locust for benchmarking and load tests", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 0, "created_at": "2017-10-24T17:00:09Z", "updated_at": "2017-12-10T03:12:16Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "https://github.com/locustio/locust\r\n\r\nNeeded for #32 ", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/33/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null} {"id": 267515678, "node_id": "MDU6SXNzdWUyNjc1MTU2Nzg=", "number": 3, "title": "Make individual column values addressable, with smart content types", "user": {"value": 9599, "label": "simonw"}, "state": "open", "locked": 0, "assignee": null, "milestone": null, "comments": 1, "created_at": "2017-10-23T01:11:32Z", "updated_at": "2017-12-10T03:11:58Z", "closed_at": null, "author_association": "OWNER", "pull_request": null, "body": "Some SQLite databases embed images in columns. It would be cool if these had URLs.\r\n\r\n /database-name-7sha256/table-name/compound-pk/column\r\n /database-name-7sha256/table-name/compound-pk/column.json\r\n /database-name-7sha256/table-name/compound-pk/column.png\r\n /database-name-7sha256/table-name/compound-pk/column.gif\r\n /database-name-7sha256/table-name/compound-pk/column.txt\r\n\r\nThe one without an explicit file extension auto-detects the correct extension.", "repo": {"value": 107914493, "label": "datasette"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/datasette/issues/3/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": null}
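The "smart content types" idea in that last issue (#3) hinges on guessing a content type for a column value when the URL has no explicit extension. A minimal sketch of how that auto-detection could work, using magic-byte sniffing; `detect_content_type` and `MAGIC_BYTES` are illustrative names invented here, not part of Datasette's actual API:

```python
# Hypothetical sketch: guess a content type for a SQLite column value by
# sniffing its leading bytes, as the extension-less URL variant described
# in issue #3 would need to do.
MAGIC_BYTES = {
    b"\x89PNG\r\n\x1a\n": "image/png",   # PNG signature
    b"GIF87a": "image/gif",              # GIF, 1987 variant
    b"GIF89a": "image/gif",              # GIF, 1989 variant
    b"\xff\xd8\xff": "image/jpeg",       # JPEG SOI marker
}


def detect_content_type(value):
    """Return a content type for a column value based on its first bytes."""
    if isinstance(value, bytes):
        for prefix, content_type in MAGIC_BYTES.items():
            if value.startswith(prefix):
                return content_type
        # Unrecognized binary data: serve as a generic download
        return "application/octet-stream"
    # Text values default to plain text; explicit .json/.txt URL
    # variants would override this
    return "text/plain; charset=utf-8"


if __name__ == "__main__":
    png_value = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
    assert detect_content_type(png_value) == "image/png"
    assert detect_content_type("hello") == "text/plain; charset=utf-8"
    print("content type sniffing OK")
```

A real implementation would presumably cover more formats (WebP, PDF, SVG) and could fall back to Python's standard `mimetypes` module whenever the request URL does carry an explicit extension.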