1,185 rows sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app
572896293 MDU6SXNzdWU1NzI4OTYyOTM= 687 Expand plugins documentation to multiple pages simonw 9599 closed 0   Datasette 0.45 5533512 11 2020-02-28T17:26:21Z 2020-06-22T03:55:20Z 2020-06-22T03:53:54Z OWNER  

I think the plugins docs need to extend beyond a single page now. I want to add a whole section on writing tests for plugins, showing how httpx can be used, and suchlike.

datasette 107914493 issue    
642127307 MDU6SXNzdWU2NDIxMjczMDc= 855 Add instructions for using cookiecutter plugin template to plugin docs simonw 9599 closed 0   Datasette 0.45 5533512 2 2020-06-19T17:33:25Z 2020-06-22T02:51:38Z 2020-06-22T02:51:38Z OWNER  

Once I ship the datasette-plugin template:

datasette 107914493 issue    
642651572 MDU6SXNzdWU2NDI2NTE1NzI= 860 Plugin hook for database/table metadata simonw 9599 open 0     1 2020-06-21T22:20:25Z 2020-06-21T22:25:27Z   OWNER  

I'm not happy with how metadata.(json|yaml) keeps growing new features. Rather than having a single plugin hook for all of metadata.json I'm going to split out the feature that shows actual real metadata for tables and databases - source, license etc - into its own plugin-powered mechanism.

_Originally posted by @simonw in

datasette 107914493 issue    
348043884 MDU6SXNzdWUzNDgwNDM4ODQ= 357 Plugin hook for loading metadata.json simonw 9599 open 0     6 2018-08-06T19:00:01Z 2020-06-21T22:19:58Z   OWNER  

I wrote a script to convert YAML to JSON because YAML is a better format for embedding multi-line HTML descriptions and canned SQL statements.

Example yaml metadata file:

It would be useful if Datasette could be fed a YAML file directly:

datasette -m metadata.yaml

Question is... should this be a native feature (hence adding a YAML dependency) or should it be handled by a datasette-metadata-yaml plugin, using a new plugin hook for loading metadata? If so, what would other use-cases for that plugin hook be?

datasette 107914493 issue    
529429214 MDU6SXNzdWU1Mjk0MjkyMTQ= 642 Provide a cookiecutter template for creating new plugins simonw 9599 closed 0   Datasette 1.0 3268330 6 2019-11-27T15:46:36Z 2020-06-20T03:20:33Z 2020-06-20T03:20:25Z OWNER  

See this conversation:

datasette 107914493 issue    
642297505 MDU6SXNzdWU2NDIyOTc1MDU= 857 Comprehensive documentation for variables made available to templates simonw 9599 open 0   Datasette 1.0 3268330 0 2020-06-20T03:19:43Z 2020-06-20T03:19:44Z   OWNER  

Needed for the Datasette 1.0 release, so template authors can trust that Datasette is unlikely to break their templates.

datasette 107914493 issue    
642296989 MDU6SXNzdWU2NDIyOTY5ODk= 856 Consider pagination of canned queries simonw 9599 open 0     0 2020-06-20T03:15:59Z 2020-06-20T03:15:59Z   OWNER  

The new canned_queries() plugin hook from #852, combined with plugins, could mean that some installations end up with hundreds or even thousands of canned queries. I should consider pagination or some other way of ensuring that this doesn't cause performance problems for Datasette.

datasette 107914493 issue    
640917326 MDU6SXNzdWU2NDA5MTczMjY= 852 canned_queries() plugin hook simonw 9599 closed 0   Datasette 0.45 5533512 9 2020-06-18T05:24:35Z 2020-06-20T03:08:40Z 2020-06-20T03:08:40Z OWNER  

Canned queries are currently baked into metadata.json which is read once on startup.

Allowing users to interactively create new canned queries - even if just through a plugin - would make a lot of sense.

Is this a new plugin hook or some other mechanism? Lots to think about here.
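One shape this could take, sketched with hypothetical names (the real hook's signature may differ): each plugin returns a dictionary mapping query name to query definition for a given database, and Datasette merges those with the static queries from metadata.json at request time.

```python
def canned_queries_from_plugin(database):
    """A hypothetical plugin implementation: queries could come from a
    table, an API, anywhere - here they are hard-coded for illustration."""
    if database == "mydb":
        return {"recent": {"sql": "select * from log order by id desc limit 10"}}
    return {}

def resolve_canned_queries(metadata_queries, database, hooks):
    """Combine static metadata.json queries with plugin-provided ones.

    Plugin results are merged on top, so interactively-created queries
    can shadow static definitions of the same name.
    """
    queries = dict(metadata_queries)
    for hook in hooks:
        queries.update(hook(database) or {})
    return queries
```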

datasette 107914493 issue    
632843030 MDU6SXNzdWU2MzI4NDMwMzA= 807 Ability to ship alpha and beta releases simonw 9599 closed 0   Datasette 0.45 5533512 18 2020-06-07T00:12:55Z 2020-06-18T21:41:16Z 2020-06-18T21:41:16Z OWNER  

I'd like to be able to ship alphas and betas to PyPI so in-development plugins can depend on them and help test unreleased plugin hooks.

datasette 107914493 issue    
641460179 MDU6SXNzdWU2NDE0NjAxNzk= 854 Respect default scope["actor"] if one exists simonw 9599 closed 0   Datasette 0.45 5533512 0 2020-06-18T18:25:08Z 2020-06-18T18:39:22Z 2020-06-18T18:39:22Z OWNER  

ASGI wrapper plugins that themselves set the actor scope variable should be respected (though actor_from_request plugins should still execute and get the chance to replace that initial actor value).
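The intended precedence can be sketched as follows (a synchronous stand-in with hypothetical names, not Datasette's actual code - the real hooks are async):

```python
def resolve_actor(scope, hooks, request):
    """Pick the actor for a request.

    Start from scope["actor"] (set by an ASGI wrapper plugin, if any),
    but still run every actor_from_request hook; a non-None result
    replaces the initial value.
    """
    actor = scope.get("actor")
    for hook in hooks:
        result = hook(request)
        if result is not None:
            actor = result
    return actor
```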

Relevant code:

datasette 107914493 issue    
635049296 MDU6SXNzdWU2MzUwNDkyOTY= 820 Idea: Plugin hook for registering canned queries simonw 9599 closed 0     2 2020-06-09T01:58:21Z 2020-06-18T17:58:02Z 2020-06-18T17:58:02Z OWNER  

Thought of this while thinking about possible permissions plugins (#818).

Imagine an API key plugin which allows access for API keys. It could let users register new API keys by providing a writable canned query for writing to the api_keys table.

To do this the plugin needs to register the query. At the moment queries have to be registered in metadata.json - a plugin hook for registering additional queries could help solve this.

One challenge: how does the plugin know which named database the query should be registered for?

It could default to the first attached database and allow users to optionally tell the plugin "actually use this named database instead" in plugin configuration.

datasette 107914493 issue    
639542974 MDU6SXNzdWU2Mzk1NDI5NzQ= 47 Fall back to FTS4 if FTS5 is not available hpk42 73579 open 0     3 2020-06-16T10:11:23Z 2020-06-17T20:13:48Z   NONE  

Got this with version 0.21.1 from PyPI. twitter-to-sqlite auth worked, but then "twitter-to-sqlite user-timeline USER.db" produced a traceback ending in "no such module: FTS5".

twitter-to-sqlite 206156866 issue    
640330278 MDU6SXNzdWU2NDAzMzAyNzg= 851 Having trouble getting writable canned queries to work abdusco 3243482 closed 0     1 2020-06-17T10:30:28Z 2020-06-17T10:33:25Z 2020-06-17T10:32:33Z CONTRIBUTOR  


I'm trying to get canned inserts to work. I have a DB with the following table and canned query metadata:

sqlite> .mode line
sqlite> select name, sql from sqlite_master where name like '%search%';
 name = search
# ...

        sql: insert into search(name, url) VALUES (:name, :url),
        write: true

This renders a form as expected, but when I submit the form I get an "incomplete input" error.

I've attached a debugger to see where the error comes from, because the string "incomplete input" doesn't appear in the datasette codebase.

Inside datasette.database.Database.execute_write_fn

result = await reply_queue.async_q.get()

this line raises an exception.

That led me to believe I had something wrong with my SQL. But running the command in sqlite3 inserts the record just fine.

sqlite> insert into search (name, url) values ('my name', 'my url');
sqlite> SELECT last_insert_rowid();
last_insert_rowid() = 3

So I'm a bit lost here.

  • datasette, version 0.44
  • Python 3.8.1
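For what it's worth, the trailing comma visible in the metadata fragment above (`VALUES (:name, :url),`) is enough on its own to make SQLite's parser reject the statement as unfinished; a stdlib reproduction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table search (name text, url text)")

# A trailing comma after the VALUES group leaves the statement unfinished,
# so SQLite's parser rejects it (the message is typically "incomplete input").
try:
    conn.execute(
        "insert into search (name, url) values (:name, :url),",
        {"name": "my name", "url": "my url"},
    )
    raised = False
except sqlite3.OperationalError:
    raised = True

# Without the comma the same named parameters insert fine.
conn.execute(
    "insert into search (name, url) values (:name, :url)",
    {"name": "my name", "url": "my url"},
)
rowid = conn.execute("select last_insert_rowid()").fetchone()[0]
```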
datasette 107914493 issue    
639993467 MDU6SXNzdWU2Mzk5OTM0Njc= 850 Proof of concept for Datasette on AWS Lambda with EFS simonw 9599 open 0     25 2020-06-16T21:48:31Z 2020-06-16T23:52:16Z   OWNER

If Datasette can run on Lambda with access to EFS it could both read AND write large databases there.

datasette 107914493 issue    
317001500 MDU6SXNzdWUzMTcwMDE1MDA= 236 datasette publish lambda plugin simonw 9599 open 0     4 2018-04-23T22:10:30Z 2020-06-16T23:50:59Z   OWNER  

Refs #217 - create a publish plugin that can deploy to AWS Lambda. Lambda packages can be up to 50 MB, so this would only work with smaller databases (the command can check the file size before attempting to package and deploy it).

Lambdas do get a 512 MB /tmp directory too, so for larger databases the function could start and then download up to 512MB from an S3 bucket - so the plugin could take an optional S3 bucket to write to and know how to upload the .db file there and then have the lambda download it on startup.

datasette 107914493 issue    
638375985 MDExOlB1bGxSZXF1ZXN0NDM0MTYyMzE2 29 Fixed bug in SQL query for photo scores RhetTbull 41546558 open 0     0 2020-06-14T15:39:22Z 2020-06-14T15:39:22Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep-photos/pulls/29

The join on ZCOMPUTEDASSETATTRIBUTES used the wrong columns. In most of the Photos database tables, table.ZASSET joins with ZGENERICASSET.Z_PK

dogsheep-photos 256834907 pull    
574021194 MDU6SXNzdWU1NzQwMjExOTQ= 691 --reload should reload server if code in --plugins-dir changes simonw 9599 open 0     1 2020-03-02T14:42:21Z 2020-06-14T02:35:17Z   OWNER   datasette 107914493 issue    
638241779 MDU6SXNzdWU2MzgyNDE3Nzk= 846 "Too many open files" error running tests simonw 9599 closed 0     6 2020-06-13T22:11:40Z 2020-06-14T00:26:31Z 2020-06-14T00:26:31Z OWNER  

I got this on my laptop:

/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.7/site-packages/jinja2/ in get_source
    f = open_if_exists(filename)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

filename = '/Users/simon/Dropbox/Development/datasette/datasette/templates/400.html', mode = 'rb'

    def open_if_exists(filename, mode='rb'):
        """Returns a file descriptor for the filename if that file exists,
        otherwise `None`.
>           return open(filename, mode)
E           OSError: [Errno 24] Too many open files: '/Users/simon/Dropbox/Development/datasette/datasette/templates/400.html'

/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.7/site-packages/jinja2/ OSError

Based on that conversation, I'm worried that my tests are opening too many files without closing them.

In particular... I call sqlite3.connect(filepath) a LOT - and I don't ever call conn.close() on those opened connections:

Could this be resulting in my tests eventually opening too many unclosed file handles? How could I confirm this?
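One way to rule this out is to close each connection deterministically instead of relying on garbage collection; a sketch using the stdlib:

```python
import contextlib
import sqlite3

def fetch_one(db_path, sql, params=()):
    """Run a query and guarantee the connection (and its file handle) closes.

    Note sqlite3.Connection's `with conn:` form only manages transactions,
    not closing - contextlib.closing handles the close() call.
    """
    with contextlib.closing(sqlite3.connect(db_path)) as conn:
        return conn.execute(sql, params).fetchone()
```

Running the test suite with a lowered `ulimit -n` would then confirm whether unclosed connections were the culprit.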

datasette 107914493 issue    
638238548 MDU6SXNzdWU2MzgyMzg1NDg= 845 Code coverage should ignore files in .coveragerc simonw 9599 open 0     0 2020-06-13T21:45:42Z 2020-06-13T21:46:03Z   OWNER  

I'm not sure why this is, but the code coverage I have running in a GitHub Action doesn't take my .coveragerc file into account. It should:

Here's the bit that's ignored:

As a result my coverage score is 84%, when it should be 92%:

2020-06-13T21:41:18.4404252Z ----------- coverage: platform linux, python 3.8.3-final-0 -----------
2020-06-13T21:41:18.4404570Z Name                                 Stmts   Miss  Cover
2020-06-13T21:41:18.4404971Z --------------------------------------------------------
2020-06-13T21:41:18.4405227Z datasette/                    3      0   100%
2020-06-13T21:41:18.4405441Z datasette/                    3      3     0%
2020-06-13T21:41:18.4405668Z datasette/                  279    279     0%
2020-06-13T21:41:18.4405921Z datasette/          20      0   100%
2020-06-13T21:41:18.4406135Z datasette/                       499     27    95%
2020-06-13T21:41:18.4406343Z datasette/                       162     45    72%
2020-06-13T21:41:18.4406553Z datasette/                  236     17    93%
2020-06-13T21:41:18.4406761Z datasette/        40      0   100%
2020-06-13T21:41:18.4406975Z datasette/                    210     24    89%
2020-06-13T21:41:18.4407186Z datasette/                   122      7    94%
2020-06-13T21:41:18.4407394Z datasette/                  34      0   100%
2020-06-13T21:41:18.4407600Z datasette/                    36     23    36%
2020-06-13T21:41:18.4407807Z datasette/                    34      6    82%
2020-06-13T21:41:18.4408014Z datasette/publish/            0      0   100%
2020-06-13T21:41:18.4408240Z datasette/publish/           57      2    96%
2020-06-13T21:41:18.4408786Z datasette/publish/             19      1    95%
2020-06-13T21:41:18.4409029Z datasette/publish/             97     13    87%
2020-06-13T21:41:18.4409243Z datasette/                   63      4    94%
2020-06-13T21:41:18.4409450Z datasette/               5      0   100%
2020-06-13T21:41:18.4410480Z datasette/                     87     16    82%
2020-06-13T21:41:18.4410972Z datasette/utils/            504     31    94%
2020-06-13T21:41:18.4411755Z datasette/utils/                264     24    91%
2020-06-13T21:41:18.4412173Z datasette/utils/      44     44     0%
2020-06-13T21:41:18.4412822Z datasette/                     4      0   100%
2020-06-13T21:41:18.4413562Z datasette/views/              0      0   100%
2020-06-13T21:41:18.4414276Z datasette/views/                288     19    93%
2020-06-13T21:41:18.4414579Z datasette/views/            120      2    98%
2020-06-13T21:41:18.4414860Z datasette/views/                57      2    96%
2020-06-13T21:41:18.4415379Z datasette/views/              72     16    78%
2020-06-13T21:41:18.4418994Z datasette/views/               418     18    96%
2020-06-13T21:41:18.4428811Z --------------------------------------------------------
2020-06-13T21:41:18.4430394Z TOTAL                                 3777    623    84%
datasette 107914493 issue    
638104520 MDU6SXNzdWU2MzgxMDQ1MjA= 841 Research feasibility of 100% test coverage simonw 9599 closed 0     9 2020-06-13T06:07:01Z 2020-06-13T21:38:46Z 2020-06-13T21:38:46Z OWNER  

Inspired by

Almost every library I’ve written in the last 2 years has had 100% coverage and that’s probably not going to change in the future. It’s not that hard to start at 100% and hold onto it and the workflow it enables is so much nicer.

datasette 107914493 issue    
638229448 MDU6SXNzdWU2MzgyMjk0NDg= 843 Configure simonw 9599 closed 0     2 2020-06-13T20:45:00Z 2020-06-13T21:36:52Z 2020-06-13T21:36:52Z OWNER  

_Originally posted by @simonw in

datasette 107914493 issue    
638230433 MDExOlB1bGxSZXF1ZXN0NDM0MDU1NzUy 844 Action to run tests and upload coverage report simonw 9599 closed 0     1 2020-06-13T20:52:47Z 2020-06-13T21:36:52Z 2020-06-13T21:36:50Z OWNER simonw/datasette/pulls/844

Refs #843

datasette 107914493 pull    
637899539 MDU6SXNzdWU2Mzc4OTk1Mzk= 40 Demo deploy is broken simonw 9599 closed 0     2 2020-06-12T17:20:17Z 2020-06-12T18:06:48Z 2020-06-12T18:06:48Z MEMBER

The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 11 not upgraded.
Need to get 752 kB of archives.
After this operation, 2482 kB of additional disk space will be used.
Ign:1 bionic-updates/main amd64 sqlite3 amd64 3.22.0-1ubuntu0.3
Err:1 bionic-updates/main amd64 sqlite3 amd64 3.22.0-1ubuntu0.3
  404  Not Found [IP: 80]
E: Failed to fetch  404  Not Found [IP: 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
##[error]Process completed with exit code 100.
github-to-sqlite 207052882 issue    
637889964 MDU6SXNzdWU2Mzc4ODk5NjQ= 115 Ability to execute insert/update statements with the CLI simonw 9599 closed 0     1 2020-06-12T17:01:17Z 2020-06-12T17:51:11Z 2020-06-12T17:41:10Z OWNER  
$ sqlite-utils github.db "update stars set starred_at = ''"
Traceback (most recent call last):
  File "/Users/simon/.local/bin/sqlite-utils", line 8, in <module>
  File "/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/", line 782, in main
    rv = self.invoke(ctx)
  File "/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/click/", line 610, in invoke
    return callback(*args, **kwargs)
  File "/Users/simon/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/", line 673, in query
    headers = [c[0] for c in cursor.description]
TypeError: 'NoneType' object is not iterable
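The traceback points at the underlying issue: cursor.description is None for statements that return no result columns, so building headers from it fails. A stdlib sketch of the kind of guard needed (illustrative, not sqlite-utils' actual fix):

```python
import sqlite3

def run_query(conn, sql):
    """Execute SQL; return rows as dicts, or a change count for writes."""
    cursor = conn.execute(sql)
    if cursor.description is None:
        # INSERT/UPDATE/DELETE etc. produce no result columns.
        conn.commit()
        return {"rows_affected": cursor.rowcount}
    headers = [c[0] for c in cursor.description]
    return [dict(zip(headers, row)) for row in cursor.fetchall()]
```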
sqlite-utils 140912432 issue    
632753851 MDU6SXNzdWU2MzI3NTM4NTE= 806 Release Datasette 0.44 simonw 9599 closed 0   Datasette 0.44 5512395 10 2020-06-06T21:49:52Z 2020-06-12T01:20:03Z 2020-06-12T01:20:03Z OWNER  

See also milestone. This is a pretty big release: flash messaging, writable canned queries, authentication and permissions!

I'll want to ship some plugin releases in conjunction with this - datasette-auth-github for example.

datasette 107914493 issue    
637409144 MDU6SXNzdWU2Mzc0MDkxNDQ= 839 {"$file": ...} mechanism is broken simonw 9599 closed 0   Datasette 0.44 5512395 0 2020-06-12T00:46:24Z 2020-06-12T00:48:26Z 2020-06-12T00:48:26Z OWNER

    def test_plugin_config_file(app_client):
        open(TEMP_PLUGIN_SECRET_FILE, "w").write("FROM_FILE")
>       assert {"foo": "FROM_FILE"} == app_client.ds.plugin_config("file-plugin")
E       AssertionError: assert {'foo': 'FROM_FILE'} == {'foo': {'$fi...ugin-secret'}}
E         Differing items:
E         {'foo': 'FROM_FILE'} != {'foo': {'$file': '/tmp/plugin-secret'}}
E         Use -v to get the full diff

Broken as part of #837

datasette 107914493 issue    
637370652 MDU6SXNzdWU2MzczNzA2NTI= 837 Plugin $env secrets mechanism doesn't work inside lists simonw 9599 closed 0   Datasette 0.44 5512395 0 2020-06-11T22:59:54Z 2020-06-12T00:25:20Z 2020-06-12T00:25:19Z OWNER  

This didn't work:

    "plugins": {
        "datasette-auth-tokens": [
            {
                "token": {
                    "$env": "BOT_TOKEN"
                },
                "actor": {
                    "bot_id": "my-bot"
                }
            }
        ]
    }
datasette 107914493 issue    
635108074 MDU6SXNzdWU2MzUxMDgwNzQ= 824 Example authentication plugin simonw 9599 closed 0   Datasette 0.44 5512395 4 2020-06-09T04:49:53Z 2020-06-12T00:11:51Z 2020-06-12T00:11:50Z OWNER  

will work for this.

datasette 107914493 issue    
637365801 MDU6SXNzdWU2MzczNjU4MDE= 836 actor_matches_allow fails to consider all keys simonw 9599 closed 0   Datasette 0.44 5512395 0 2020-06-11T22:46:34Z 2020-06-11T22:47:25Z 2020-06-11T22:47:25Z OWNER  

actor: {"id": "root"}

allow block: {"bot_id": "my-bot", "id": ["root"]}

This should pass, because the id matches - but it fails.
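A stand-alone sketch of the intended semantics - any allow key that matches the actor should grant access, rather than requiring every allow key to be present on the actor (this is illustrative, not Datasette's actual implementation):

```python
def actor_matches_allow(actor, allow):
    """Return True if ANY key in the allow block matches the actor.

    Allow values may be a single value or a list of acceptable values.
    """
    actor = actor or {}
    for key, values in allow.items():
        if not isinstance(values, list):
            values = [values]
        if key in actor and actor[key] in values:
            return True
    return False
```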

datasette 107914493 issue    
637253789 MDU6SXNzdWU2MzcyNTM3ODk= 833 /-/metadata and so on should respect view-instance permission simonw 9599 closed 0   Datasette 0.44 5512395 4 2020-06-11T19:07:21Z 2020-06-11T22:15:32Z 2020-06-11T22:14:59Z OWNER  

The only URLs that should be available without authentication at all times are the /-/static/ prefix, to allow for HTTP caching.

datasette 107914493 issue    
314847571 MDU6SXNzdWUzMTQ4NDc1NzE= 220 Investigate syntactic sugar for plugins simonw 9599 closed 0     2 2018-04-16T23:01:39Z 2020-06-11T21:50:06Z 2020-06-11T21:49:55Z OWNER  

Suggested by @andrewhayward on Twitter:

Have you considered a basic abstraction on top of that, for standard hook features?

    return random.randint(a,b)

    return str.upper()

Maybe from datasette.plugins import template_filter?

Would have to work out how to get this to play well with pluggy
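A sketch of the suggested sugar, with all names hypothetical: a decorator registers a plain function in a filter registry, and the pluggy plumbing would hide behind it.

```python
# Registry that a hypothetical `from datasette.plugins import template_filter`
# decorator could feed; the pluggy hookimpl wiring would live behind this.
TEMPLATE_FILTERS = {}

def template_filter(fn):
    """Register fn as a template filter under its own name."""
    TEMPLATE_FILTERS[fn.__name__] = fn
    return fn

@template_filter
def uppercase(value):
    return str(value).upper()

@template_filter
def random_integer(a, b):
    import random
    return random.randint(a, b)
```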

datasette 107914493 issue    
631932926 MDU6SXNzdWU2MzE5MzI5MjY= 801 allow_by_query setting for configuring permissions with a SQL statement simonw 9599 closed 0   Datasette 1.0 3268330 6 2020-06-05T20:30:19Z 2020-06-11T18:58:56Z 2020-06-11T18:58:49Z OWNER  

Idea: an "allow_sql" key with a SQL query that gets passed the actor JSON as :actor and can extract the relevant keys from it and return 1 or 0.

_Originally posted by @simonw in

See also #800

datasette 107914493 issue    
614806683 MDExOlB1bGxSZXF1ZXN0NDE1Mjg2MTA1 763 Documentation + improvements for db.execute() and Results class simonw 9599 closed 0     0 2020-05-08T15:16:02Z 2020-06-11T16:05:48Z 2020-05-08T16:05:46Z OWNER simonw/datasette/pulls/763

Refs #685

Still TODO:

  • Implement results.first()
  • Implement results.single_value()
  • Unit tests for the above
datasette 107914493 pull    
632919570 MDExOlB1bGxSZXF1ZXN0NDI5NjEzODkz 809 Publish secrets simonw 9599 closed 0   Datasette 0.44 5512395 4 2020-06-07T02:00:31Z 2020-06-11T16:02:13Z 2020-06-11T16:02:03Z OWNER simonw/datasette/pulls/809

Refs #787. Will need quite a bit of manual testing since this involves code which runs against Heroku and Cloud Run.

datasette 107914493 pull    
628089318 MDU6SXNzdWU2MjgwODkzMTg= 787 "datasette publish" should bake in a random --secret simonw 9599 closed 0   Datasette 0.44 5512395 1 2020-06-01T01:15:26Z 2020-06-11T16:02:05Z 2020-06-11T16:02:05Z OWNER  

To allow signed cookies etc to work reliably (see #785) all of the datasette publish commands should generate a random secret on publish and bake it into the configuration - probably by setting the DATASETTE_SECRET environment variable.

datasette 107914493 issue    
396212021 MDU6SXNzdWUzOTYyMTIwMjE= 394 base_url configuration setting simonw 9599 closed 0   Datasette 0.39 5234079 27 2019-01-05T23:48:48Z 2020-06-11T09:15:20Z 2020-03-25T00:18:45Z OWNER  

I've identified a couple of use-cases for running Datasette in a way that over-rides the default way that internal URLs are generated.

  1. Running behind a reverse proxy. I tried running Datasette behind a proxy and found that some of the generated internal links incorrectly referenced - when they should have been referencing - this is a problem both for links within the HTML interface but also for the toggle_url keys returned in the JSON as part of the facets datastructure.
  2. I would like it to be possible to host a Datasette instance at e.g. - either through careful HTTP proxying or, once Datasette has been ported to ASGI, by mounting a Datasette ASGI instance deep within an existing set of URL routes.

I'm going to add a url_prefix configuration option. This will default to "", which means Datasette will behave as it does at the moment - it will use / for most URL prefixes in the HTML version, and an absolute URL derived from the incoming Host header for URLs that are returned as part of the JSON output.

If url_prefix is set to another value (either a full URL or a path) then this path will be appended to all generated URLs.
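The appending behaviour could be sketched as a small helper (name hypothetical):

```python
def prefixed_url(url_prefix, path):
    """Prepend a configured prefix to an internal Datasette path.

    With the default empty prefix, paths are returned unchanged;
    otherwise the prefix (a path or a full URL) is joined onto
    every generated URL.
    """
    if not url_prefix:
        return path
    return url_prefix.rstrip("/") + "/" + path.lstrip("/")
```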

datasette 107914493 issue    
634917088 MDU6SXNzdWU2MzQ5MTcwODg= 818 Example permissions plugin simonw 9599 closed 0   Datasette 0.44 5512395 9 2020-06-08T20:35:56Z 2020-06-11T05:40:07Z 2020-06-11T05:40:07Z OWNER  

To show how they work. Also useful to confirm how they interact with the default permissions.

datasette 107914493 issue    
636614868 MDU6SXNzdWU2MzY2MTQ4Njg= 831 It would be more intuitive if "allow": none meant "no-one can do this" simonw 9599 closed 0   Datasette 0.44 5512395 1 2020-06-10T23:43:56Z 2020-06-10T23:57:25Z 2020-06-10T23:50:55Z OWNER  

Now that I'm starting to write alternative plugins to control permissions - see #818 - I think I need an easy way to tell Datasette "no-one has permission to do X unless a plugin says otherwise".

One relatively intuitive way to do that could be like this:

  "databases": {
    "fixtures": {
      "allow": null
    }
  }

Right now I think that opens up permissions to everyone, which isn't as obvious.

datasette 107914493 issue    
636511683 MDU6SXNzdWU2MzY1MTE2ODM= 830 Redesign register_facet_classes plugin hook simonw 9599 open 0   Datasette 1.0 3268330 0 2020-06-10T20:03:27Z 2020-06-10T20:03:27Z   OWNER  

Nothing uses this plugin hook yet, so the design is not yet proven.

I'm going to build a real plugin against it and use that process to inform any design changes that may need to be made.

I'll add a warning about this to the documentation.

datasette 107914493 issue    
636426530 MDU6SXNzdWU2MzY0MjY1MzA= 829 Ability to set ds_actor cookie such that it expires simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-10T17:31:40Z 2020-06-10T19:41:35Z 2020-06-10T19:40:05Z OWNER  

I need this for datasette-auth-github:

datasette 107914493 issue    
635914822 MDU6SXNzdWU2MzU5MTQ4MjI= 828 Horizontal scrollbar on changelog page on mobile simonw 9599 closed 0   Datasette 0.44 5512395 3 2020-06-10T04:18:54Z 2020-06-10T04:28:17Z 2020-06-10T04:28:17Z OWNER  

You can scroll sideways on that page and it looks bad:

The cause is these long links:

datasette 107914493 issue    
629541395 MDU6SXNzdWU2Mjk1NDEzOTU= 795 response.set_cookie() method simonw 9599 closed 0   Datasette 0.44 5512395 2 2020-06-02T21:57:05Z 2020-06-09T22:33:33Z 2020-06-09T22:19:48Z OWNER  

Mainly to clean up this code:

datasette 107914493 issue    
635519358 MDU6SXNzdWU2MzU1MTkzNTg= 826 Document the ds_actor signed cookie simonw 9599 closed 0   Datasette 0.44 5512395 3 2020-06-09T15:06:52Z 2020-06-09T22:33:12Z 2020-06-09T22:32:31Z OWNER  

Most authentication plugins are likely to work by setting the ds_actor signed cookie, which Datasette already magically decodes and supports by default:

I should document this.

datasette 107914493 issue    
632673972 MDU6SXNzdWU2MzI2NzM5NzI= 804 python tests/ command has a bug simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-06T19:17:36Z 2020-06-09T20:01:30Z 2020-06-09T19:58:34Z OWNER  

This command is meant to write out fixtures.db, metadata.json and a plugins directory:

$ python tests/ /tmp/fixtures.db /tmp/metadata.json /tmp/plugins/
Test tables written to /tmp/fixtures.db
- metadata written to /tmp/metadata.json
Traceback (most recent call last):
  File "tests/", line 833, in <module>
    ("", PLUGIN1),
NameError: name 'PLUGIN1' is not defined
datasette 107914493 issue    
635696400 MDU6SXNzdWU2MzU2OTY0MDA= 827 Document CSRF protection (for plugins) simonw 9599 closed 0   Datasette 0.44 5512395 1 2020-06-09T19:19:10Z 2020-06-09T19:38:30Z 2020-06-09T19:35:13Z OWNER  

Plugin authors need to know that if they want to POST a form they should include this:

<input type="hidden" name="csrftoken" value="{{ csrftoken() }}">
datasette 107914493 issue    
635147716 MDU6SXNzdWU2MzUxNDc3MTY= 825 Way to enable a default=False permission for anonymous users simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-09T06:26:27Z 2020-06-09T17:19:19Z 2020-06-09T17:01:10Z OWNER  

I'd like plugins to be able to ship with a default that says "anonymous users cannot do this", but allow site administrators to over-ride that such that anonymous users can use the feature after all.

This is tricky because right now the anonymous user doesn't have an actor dictionary at all, so there's no key to match to an allow block.

datasette 107914493 issue    
635107393 MDU6SXNzdWU2MzUxMDczOTM= 823 Documentation is inconsistent about "id" as required field on actor simonw 9599 closed 0   Datasette 0.44 5512395 3 2020-06-09T04:47:58Z 2020-06-09T14:58:36Z 2020-06-09T14:58:19Z OWNER  

Docs at say:

The only required field in an actor is "id", which must be a string.

But the example here returns {"token": token}:

def actor_from_request(datasette, request):
    async def inner():
        token = request.args.get("_token")
        if not token:
            return None
        # Look up ?_token=xxx in sessions table
        result = await datasette.get_database().execute(
            "select count(*) from sessions where token = ?", [token]
        )
        if result.first()[0]:
            return {"token": token}
        else:
            return None

    return inner
datasette 107914493 issue    
630120235 MDU6SXNzdWU2MzAxMjAyMzU= 797 Documentation for new "params" setting for canned queries simonw 9599 closed 0   Datasette 0.44 5512395 3 2020-06-03T15:55:11Z 2020-06-09T04:00:40Z 2020-06-03T21:04:51Z OWNER  

Added here:

datasette 107914493 issue    
635077656 MDU6SXNzdWU2MzUwNzc2NTY= 822 request.url_vars helper property simonw 9599 closed 0   Datasette 0.44 5512395 2 2020-06-09T03:15:53Z 2020-06-09T03:40:07Z 2020-06-09T03:40:06Z OWNER  

This example:

from datasette.utils.asgi import Response
import html

async def hello_from(scope):
    name = scope["url_route"]["kwargs"]["name"]
    return Response.html("Hello from {}".format(html.escape(name)))

def register_routes():
    return [
        (r"^/hello-from/(?P<name>.*)$", hello_from)
    ]

Would be nicer if you could easily get scope["url_route"]["kwargs"]["name"] directly from the request object, without looking at the scope.

datasette 107914493 issue    
635076066 MDU6SXNzdWU2MzUwNzYwNjY= 821 Add Response class to internals documentation simonw 9599 closed 0   Datasette 0.44 5512395 0 2020-06-09T03:11:06Z 2020-06-09T03:32:16Z 2020-06-09T03:32:16Z OWNER  

I'll need to add documentation of the Response object (and Response.html() and Response.text() class methods - I should add Response.json() too) to the internals page

_Originally posted by @simonw in

datasette 107914493 issue    
314506669 MDU6SXNzdWUzMTQ1MDY2Njk= 215 Allow plugins to define additional URL routes and views simonw 9599 closed 0   Datasette 0.44 5512395 14 2018-04-16T05:31:09Z 2020-06-09T03:14:32Z 2020-06-09T03:12:08Z OWNER  

Might be as simple as having plugins get passed the app after the other routes have been defined:

Refs #14

datasette 107914493 issue    
635037204 MDExOlB1bGxSZXF1ZXN0NDMxNDc4NzI0 819 register_routes() plugin hook simonw 9599 closed 0   Datasette 0.44 5512395 0 2020-06-09T01:20:44Z 2020-06-09T03:12:08Z 2020-06-09T03:12:07Z OWNER simonw/datasette/pulls/819

Refs #215

datasette 107914493 pull    
626171242 MDU6SXNzdWU2MjYxNzEyNDI= 777 Error pages not correctly loading CSS simonw 9599 closed 0   Datasette 0.44 5512395 4 2020-05-28T02:47:52Z 2020-06-09T00:35:29Z 2020-06-09T00:35:29Z OWNER  


The HTML starts like this:

<!DOCTYPE html>
    <title>Error 404</title>
    <link rel="stylesheet" href="-/static/app.css?">
datasette 107914493 issue    
634139848 MDU6SXNzdWU2MzQxMzk4NDg= 813 Mechanism for specifying allow_sql permission in metadata.json simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-08T04:57:19Z 2020-06-09T00:09:57Z 2020-06-09T00:07:19Z OWNER  

Split from #811. It would be useful if finely-grained permissions configured in metadata.json could be used to specify if a user is allowed to execute arbitrary SQL queries.

We have a permission check call for this already:

But there's currently no way to implement this check without writing a plugin.

I think a "allow_sql": {...} block at the database level in metadata.json (sibling to the current "allow" block for that database implemented in #811) would be a good option for this.
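A sketch of what that database-level block might look like - the exact key shape is the open question here, so treat this as illustrative:

```json
{
    "databases": {
        "mydatabase": {
            "allow": {
                "id": ["root"]
            },
            "allow_sql": {
                "id": ["root"]
            }
        }
    }
}
```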

datasette 107914493 issue    
634651079 MDU6SXNzdWU2MzQ2NTEwNzk= 814 Remove --debug option from datasette serve simonw 9599 open 0   Datasette 1.0 3268330 1 2020-06-08T14:10:14Z 2020-06-08T22:42:17Z   OWNER  

It doesn't appear to do anything useful at all:

datasette 107914493 issue    
449886319 MDU6SXNzdWU0NDk4ODYzMTk= 493 Rename metadata.json to config.json simonw 9599 open 0   Datasette 1.0 3268330 3 2019-05-29T15:48:03Z 2020-06-08T22:40:01Z   OWNER  

It is increasingly being used for configuration options, when it started out as purely metadata.

Could cause confusion with the --config mechanism though - maybe that should be called "settings" instead?

datasette 107914493 issue    
633578769 MDU6SXNzdWU2MzM1Nzg3Njk= 811 Support "allow" block on root, databases and tables, not just queries simonw 9599 closed 0   Datasette 0.44 5512395 16 2020-06-07T17:01:09Z 2020-06-08T19:34:00Z 2020-06-08T19:32:36Z OWNER  

No reason not to expand the "allow" mechanism described here to the root of metadata.json plus to databases and tables.

Refs #810 and #800.

    "databases": {
        "mydatabase": {
            "allow": {
                "id": ["root"]
            }
        }
    }


  • Instance level
  • Database level
  • Table level
  • Query level
  • Affects list of queries
  • Affects list of tables on database page
  • Affects truncated list of tables on index page
  • Affects list of SQL views on database page
  • Affects list of databases on index page
  • Show 🔒 in header on index page for private instances
  • Show 🔒 in header on private database page
  • Show 🔒 in header on private table page
  • Show 🔒 in header on private query page
  • Move assert_permissions_checked() calls from to
  • Update documentation
datasette 107914493 issue    
628499086 MDU6SXNzdWU2Mjg0OTkwODY= 790 "flash messages" mechanism simonw 9599 closed 0   Datasette 0.44 5512395 20 2020-06-01T14:55:44Z 2020-06-08T19:33:59Z 2020-06-02T21:14:03Z OWNER  

Passing ?_success like this isn't necessarily the best approach. Potential improvements include:

  • Signing this message so it can't be tampered with (I could generate a signing secret on startup)
  • Using a cookie with a temporary flash message in it instead
  • Using HTML5 history API to remove the ?_success= from the URL bar when the user lands on the page

If I add an option to redirect the user to another page after success I may need a mechanism to show a flash message on that page as well, in which case I'll need a general flash message solution that works for any page.

_Originally posted by @simonw in
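A minimal sketch of the "signing this message" idea, using only the standard library (the function names and payload format here are assumptions, not Datasette's actual implementation):

```python
import base64
import hashlib
import hmac
import json
import secrets

SECRET = secrets.token_bytes(32)  # generated once at server startup

def sign_message(message):
    # Serialize and sign so the browser can't tamper with the flash message
    payload = base64.urlsafe_b64encode(json.dumps(message).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def unsign_message(value):
    payload, sig = value.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("flash message has been tampered with")
    return json.loads(base64.urlsafe_b64decode(payload))
```

The signed value could live in either the `?_success=` parameter or a short-lived cookie; either way the signature check stops other sites from injecting messages.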

datasette 107914493 issue    
634783573 MDU6SXNzdWU2MzQ3ODM1NzM= 816 Come up with a new example for extra_template_vars plugin simonw 9599 closed 0   Datasette 0.44 5512395 2 2020-06-08T16:57:59Z 2020-06-08T19:06:44Z 2020-06-08T19:06:11Z OWNER  

This example is obsolete, it's from a time before and authentication as a built-in concept (#699):

datasette 107914493 issue    
634844634 MDU6SXNzdWU2MzQ4NDQ2MzQ= 817 Drop resource_type from permission_allowed system simonw 9599 closed 0     1 2020-06-08T18:41:37Z 2020-06-08T19:00:12Z 2020-06-08T19:00:12Z OWNER  

Current signature:

permission_allowed(datasette, actor, action, resource_type, resource_identifier)

It turns out the resource_type is always the same thing for any given action, so it's not actually useful. I'm going to drop it.

New signature will be:

permission_allowed(datasette, actor, action, resource)

Refs #811.
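Independent of the signature change, the way multiple plugin implementations of this hook combine can be sketched like this (an assumption about first-non-None semantics — the first hook to return True or False decides):

```python
def resolve_permission(hook_results, default=True):
    # Walk plugin hook results in order; the first hook that returns
    # True or False settles the check, otherwise fall back to the default
    for result in hook_results:
        if result is not None:
            return bool(result)
    return default
```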

datasette 107914493 issue    
634663505 MDU6SXNzdWU2MzQ2NjM1MDU= 815 Group permission checks by request on /-/permissions debug page simonw 9599 open 0   Datasette 1.0 3268330 6 2020-06-08T14:25:23Z 2020-06-08T14:42:56Z   OWNER  

Now that we're making a LOT more permission checks (on the DB index page we do a check for every listed table for example) the /-/permissions page gets filled up pretty quickly.

Can make this more readable by grouping permission checks by request. Have the most recent request at the top of the page, with the permission checks within each request sorted chronologically, most recent last.
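The grouping described above could be sketched like this (the check records are hypothetical — the real ones would come from Datasette's in-memory permission-check log):

```python
from itertools import groupby

# Hypothetical check records, already in chronological order
checks = [
    {"request": "req-1", "seq": 1, "action": "view-instance"},
    {"request": "req-1", "seq": 2, "action": "view-database"},
    {"request": "req-2", "seq": 3, "action": "view-table"},
]

# Group consecutive checks by request, then show the most recent
# request first while keeping checks inside each group oldest-first
grouped = [
    (request, list(items))
    for request, items in groupby(checks, key=lambda c: c["request"])
]
grouped.sort(key=lambda pair: pair[1][-1]["seq"], reverse=True)
```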

datasette 107914493 issue    
396215043 MDU6SXNzdWUzOTYyMTUwNDM= 395 Find a cleaner pattern for fixtures with arguments simonw 9599 closed 0     1 2019-01-06T00:31:22Z 2020-06-07T21:23:22Z 2020-06-07T21:23:22Z OWNER  

A lot of Datasette tests look like this:

The loop here isn't actually expected to loop - it's there because the make_app_client function yields a value and then cleans it up afterwards.

This pattern works, but it is a little confusing. It would be nice to replace it with something less strange looking.

The answer may be to switch to the "factories as fixtures" pattern described here:

In particular some variant of this example:

@pytest.fixture
def make_customer_record():

    created_records = []

    def _make_customer_record(name):
        record = models.Customer(name=name, orders=[])
        created_records.append(record)
        return record

    yield _make_customer_record

    for record in created_records:
        record.destroy()
def test_customer_records(make_customer_record):
    customer_1 = make_customer_record("Lisa")
    customer_2 = make_customer_record("Mike")
    customer_3 = make_customer_record("Meredith")
datasette 107914493 issue    
633066114 MDU6SXNzdWU2MzMwNjYxMTQ= 810 Refactor permission check for canned query simonw 9599 closed 0   Datasette 0.44 5512395 1 2020-06-07T05:33:05Z 2020-06-07T17:03:15Z 2020-06-07T17:03:15Z OWNER  

This code here (TODO is follow-on from #808).

I can improve this with extra code in

datasette 107914493 issue    
631931408 MDU6SXNzdWU2MzE5MzE0MDg= 800 Canned query permissions mechanism simonw 9599 closed 0   Datasette 0.44 5512395 14 2020-06-05T20:28:21Z 2020-06-07T16:22:53Z 2020-06-07T16:22:53Z OWNER  

Idea: default is anyone can execute a query.

Or you can specify the following:


"databases": {
    "my-database": {
        "queries": {
            "add_twitter_handle": {
                "sql": "insert into twitter_handles (username) values (:username)",
                "write": true,
                "allow": {
                    "id": ["simon"],
                    "role": ["staff"]
                }
            }
        }
    }
}

These get matched against the actor JSON. If any of the fields in any of the keys of "allow" match a key on the actor, the query is allowed.

"id": "*" matches any actor with an id key.

_Originally posted by @simonw in
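The matching rule described above could be sketched like this (an assumption: allow values may be a single string or a list, and "*" matches any actor that has the key at all):

```python
def actor_matches_allow(actor, allow):
    # Unauthenticated actors never match an allow block
    if actor is None:
        return False
    for key, values in allow.items():
        if isinstance(values, str):
            values = [values]
        if key not in actor:
            continue
        if "*" in values or actor[key] in values:
            return True
    return False
```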

datasette 107914493 issue    
632918799 MDU6SXNzdWU2MzI5MTg3OTk= 808 Permission check for every view in Datasette (plus docs) simonw 9599 closed 0   Datasette 0.44 5512395 2 2020-06-07T01:59:23Z 2020-06-07T05:30:49Z 2020-06-07T05:30:49Z OWNER  

Every view in Datasette should perform a permission check to see if the current user/actor is allowed to view that page.

This permission check will default to allowed, but having this check will allow plugins to lock down access selectively or even to everything in a Datasette instance.

datasette 107914493 issue    
628572716 MDU6SXNzdWU2Mjg1NzI3MTY= 791 Tutorial: building a something-interesting with writable canned queries simonw 9599 open 0   Datasette 1.0 3268330 2 2020-06-01T16:32:05Z 2020-06-06T20:51:07Z   OWNER  

Initial idea: TODO list, as a tutorial for #698 writable canned queries.

datasette 107914493 issue    
610829227 MDU6SXNzdWU2MTA4MjkyMjc= 749 Respect Cloud Run max response size of 32MB simonw 9599 open 0   Datasette 1.0 3268330 1 2020-05-01T16:06:46Z 2020-06-06T20:01:54Z   OWNER lists the maximum response size as 32MB.

I spotted a bug where attempting to download a database file larger than that from a Cloud Run deployment (in this case it was after I accidentally increased the size of that database) returned a 500 error because of this.
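One way to respect the limit would be a size guard before serving the download — a minimal sketch (the constant and function names here are assumptions, not Datasette's API):

```python
import os
import tempfile

# Cloud Run caps responses at 32 MB, so a deployment could refuse to
# serve database downloads over that limit instead of failing with a 500
MAX_RESPONSE_BYTES = 32 * 1024 * 1024

def download_allowed(db_path):
    return os.path.getsize(db_path) <= MAX_RESPONSE_BYTES

# Example: a small file is fine; an oversized one would be rejected
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
small_ok = download_allowed(f.name)
```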

datasette 107914493 issue    
449854604 MDU6SXNzdWU0NDk4NTQ2MDQ= 492 Facets not correctly persisted in hidden form fields simonw 9599 open 0   Datasette 1.0 3268330 3 2019-05-29T14:49:39Z 2020-06-06T20:01:53Z   OWNER  

Steps to reproduce: visit and click "Apply"

Result is a 500: no such column: attraction_characteristic

The error occurs because of this hidden HTML input:

<input type="hidden" name="_facet" value="attraction_characteristic">

This should be:

<input type="hidden" name="_facet_m2m" value="attraction_characteristic">
datasette 107914493 issue    
450032134 MDU6SXNzdWU0NTAwMzIxMzQ= 495 facet_m2m gets confused by multiple relationships simonw 9599 open 0   Datasette 1.0 3268330 2 2019-05-29T21:37:28Z 2020-06-06T20:01:53Z   OWNER  

I got this for a database I was playing with:

I think this is because of these three tables:

datasette 107914493 issue    
463492815 MDU6SXNzdWU0NjM0OTI4MTU= 534 500 error on m2m facet detection simonw 9599 open 0   Datasette 1.0 3268330 1 2019-07-03T00:42:42Z 2020-06-06T20:01:53Z   OWNER  

This may help debug:

diff --git a/datasette/ b/datasette/
index 76d73e5..07a4034 100644
--- a/datasette/
+++ b/datasette/
@@ -499,11 +499,14 @@ class ManyToManyFacet(Facet):
             if len(other_table_outgoing_foreign_keys) == 2:
-                destination_table = [
-                    t
-                    for t in other_table_outgoing_foreign_keys
-                    if t["other_table"] != self.table
-                ][0]["other_table"]
+                try:
+                    destination_table = [
+                        t
+                        for t in other_table_outgoing_foreign_keys
+                        if t["other_table"] != self.table
+                    ][0]["other_table"]
+                except IndexError:
+                    import pdb; pdb.set_trace()
                 # Only suggest if it's not selected already
                 if ("_facet_m2m", destination_table) in args:
datasette 107914493 issue    
520740741 MDU6SXNzdWU1MjA3NDA3NDE= 625 If you apply ?_facet_array=tags then &_facet=tags does nothing simonw 9599 open 0   Datasette 1.0 3268330 0 2019-11-11T04:59:29Z 2020-06-06T20:01:53Z   OWNER  

Start here:

Note that tags is offered as a suggested facet. But if you click that you get this:

The _facet=tags is added to the URL and it's removed from the list of suggested tags... but the facet itself is not displayed:

The _facet=tags facet should look like this:

datasette 107914493 issue    
542553350 MDU6SXNzdWU1NDI1NTMzNTA= 655 Copy and paste doesn't work reliably on iPhone for SQL editor simonw 9599 open 0   Datasette 1.0 3268330 2 2019-12-26T13:15:10Z 2020-06-06T20:01:53Z   OWNER  

I'm having a lot of trouble copying and pasting from the codemirror editor on my iPhone.

datasette 107914493 issue    
576722115 MDU6SXNzdWU1NzY3MjIxMTU= 696 Single failing unit test when run inside the Docker image simonw 9599 open 0   Datasette 1.0 3268330 1 2020-03-06T06:16:36Z 2020-06-06T20:01:53Z   OWNER  
docker run -it -v `pwd`:/mnt datasetteproject/datasette:latest /bin/bash
root@0e1928cfdf79:/# cd /mnt
root@0e1928cfdf79:/mnt# pip install -e .[test]
root@0e1928cfdf79:/mnt# pytest

I get one failure!

It was for test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3]

    def test_searchable(app_client, path, expected_rows):
        response = app_client.get(path)
>       assert expected_rows == response.json["rows"]
E       AssertionError: assert [[1, 'barry c...sel', 'puma']] == []
E         Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']
E         Full diff:
E         + []
E         - [[1, 'barry cat', 'terry dog', 'panther'],
E         -  [2, 'terry dog', 'sara weasel', 'puma']]

_Originally posted by @simonw in

datasette 107914493 issue    
398011658 MDU6SXNzdWUzOTgwMTE2NTg= 398 Ensure downloading a 100+MB SQLite database file works simonw 9599 open 0   Datasette 1.0 3268330 2 2019-01-10T20:57:52Z 2020-06-06T20:01:52Z   OWNER  

I've seen attempted downloads of large files fail after about ten seconds.

datasette 107914493 issue    
440222719 MDU6SXNzdWU0NDAyMjI3MTk= 448 _facet_array should work against views simonw 9599 open 0   Datasette 1.0 3268330 1 2019-05-03T21:08:04Z 2020-06-06T20:01:52Z   OWNER  

I created this view:

CREATE VIEW ads_with_targets as select ads.*, json_group_array(targets.name) as target_names from ads
  join ad_targets on ad_targets.ad_id = ads.id
  join targets on ad_targets.target_id = targets.id
  group by ad_targets.ad_id

When I try to apply faceting by array it appears to work at first:

But actually it's doing the wrong thing - the SQL for the facets uses rowid, but rowid is not present on views at all! These results are incorrect, and clicking to select a facet will fail to produce any rows:

Here's the SQL it should be using when you select a facet (note that it does not use a rowid):

So we need to do something a lot smarter here. I'm not sure what the fix will look like, or even if it's feasible given that views don't have a rowid to hook into so the JSON faceting SQL may have to be completely rewritten.

datasette publish cloudrun \
    russian-ads.db \
    --name json-view-facet-bug-demo \
    --branch master \
    --extra-options "--config sql_time_limit_ms:5000 --config facet_time_limit_ms:5000"
datasette 107914493 issue    
582517965 MDU6SXNzdWU1ODI1MTc5NjU= 698 Ability for a canned query to write to the database simonw 9599 closed 0   Datasette 0.44 5512395 26 2020-03-16T18:31:59Z 2020-06-06T19:43:49Z 2020-06-06T19:43:48Z OWNER  

Canned queries are currently read-only:

Add a "write": true option to their definition in metadata.json which turns them into queries that are submitted via POST and send their queries to the write queue.

Then they can be used as a really quick way to define a writable interface and JSON API!

datasette 107914493 issue    
582526961 MDU6SXNzdWU1ODI1MjY5NjE= 699 Authentication (and permissions) as a core concept simonw 9599 closed 0   Datasette 0.44 5512395 40 2020-03-16T18:48:00Z 2020-06-06T19:42:11Z 2020-06-06T19:42:11Z OWNER  

Right now Datasette authentication is provided exclusively by plugins:

This is an all-or-nothing approach: either your Datasette instance requires authentication at the top level or it does not.

But... as I build new plugins like and I increasingly have individual features which should be reserved for logged-in users while still wanting other parts of Datasette to be open to all.

This is too much for plugins to own independently of Datasette core. Datasette needs to ship a single "user is authenticated" concept (independent of how users actually sign in) so that different plugins can integrate with it.

datasette 107914493 issue    
632645865 MDExOlB1bGxSZXF1ZXN0NDI5MzY2NjQx 803 Canned query permissions simonw 9599 closed 0     0 2020-06-06T18:20:00Z 2020-06-06T19:40:21Z 2020-06-06T19:40:20Z OWNER simonw/datasette/pulls/803

Refs #800. Closes #786

datasette 107914493 pull    
628087971 MDU6SXNzdWU2MjgwODc5NzE= 786 Documentation page describing Datasette's authentication system simonw 9599 closed 0   Datasette 0.44 5512395 2 2020-06-01T01:10:06Z 2020-06-06T19:40:20Z 2020-06-06T19:40:20Z OWNER  

_Originally posted by @simonw in

datasette 107914493 issue    
629524205 MDU6SXNzdWU2Mjk1MjQyMDU= 793 CSRF protection for /-/messages tool and writable canned queries simonw 9599 closed 0   Datasette 0.44 5512395 3 2020-06-02T21:22:21Z 2020-06-06T00:43:41Z 2020-06-05T19:05:59Z OWNER  

The /-/messages debug tool will need CSRF protection or people will be able to add messages using a hidden form on another website.
_Originally posted by @simonw in
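The core of a CSRF check is a token comparison — a sketch of the double-submit-cookie pattern (an assumption about the approach; the actual protection landed via the asgi-csrf work in #798):

```python
import secrets

def generate_csrftoken():
    # Issued in a cookie when the page is rendered
    return secrets.token_hex(16)

def check_csrf(cookie_token, form_token):
    # The hidden form field must match the cookie; another site can't
    # read the cookie, so it can't forge a matching token
    if not (cookie_token and form_token):
        return False
    return secrets.compare_digest(cookie_token, form_token)

token = generate_csrftoken()
```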

datasette 107914493 issue    
631300342 MDExOlB1bGxSZXF1ZXN0NDI4MjEyNDIx 798 CSRF protection simonw 9599 closed 0   Datasette 0.44 5512395 5 2020-06-05T04:22:35Z 2020-06-06T00:43:41Z 2020-06-05T19:05:58Z OWNER simonw/datasette/pulls/798

Refs #793

datasette 107914493 pull    
628025100 MDU6SXNzdWU2MjgwMjUxMDA= 785 Datasette secret mechanism - initially for signed cookies simonw 9599 closed 0   Datasette 0.44 5512395 11 2020-05-31T19:14:52Z 2020-06-06T00:43:40Z 2020-06-01T00:18:40Z OWNER  

See comment in

Datasette needs to be able to set signed cookies - which means it needs a mechanism for safely handling a signing secret.

Since Datasette is a long-running process the default behaviour here can be to create a random secret on startup. This means that if the server restarts any signed cookies will be invalidated.

If the user wants a persistent secret they'll have to generate it themselves - maybe by setting an environment variable?
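The startup behaviour described above could be sketched like this (DATASETTE_SECRET is a hypothetical variable name used for illustration):

```python
import os
import secrets

def get_secret():
    # Persistent secret if the user configured one, otherwise a fresh
    # random secret per process (so restarts invalidate signed cookies)
    configured = os.environ.get("DATASETTE_SECRET")
    if configured:
        return configured
    return secrets.token_hex(32)

secret = get_secret()
```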

datasette 107914493 issue    
628121234 MDU6SXNzdWU2MjgxMjEyMzQ= 788 /-/permissions debugging tool simonw 9599 closed 0   Datasette 0.44 5512395 2 2020-06-01T03:13:47Z 2020-06-06T00:43:40Z 2020-06-01T05:01:01Z OWNER  

Debugging tool idea: /-/permissions page which shows you the actor and lets you type in the strings for action, resource_type and resource_identifier - then shows you EVERY plugin hook that would have executed and what it would have said, plus when the chain would have terminated.

Bonus: if you're logged in as the root user (or a user that matches some kind of permission check, maybe a check for permissions_debug) you get to see a rolling log of the last 30 permission checks and what the results were across the whole of Datasette. This should make figuring out permissions policies a whole lot easier.

_Originally posted by @simonw in
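The "rolling log of the last 30 permission checks" falls out naturally from a bounded deque — a minimal sketch:

```python
from collections import deque

# maxlen gives the rolling window for free: old entries fall off the
# left as new checks are appended on the right
recent_checks = deque(maxlen=30)

for i in range(45):
    recent_checks.append({"action": "view-table", "check_number": i})
```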

datasette 107914493 issue    
632056825 MDU6SXNzdWU2MzIwNTY4MjU= 802 "datasette plugins" command is broken simonw 9599 closed 0     1 2020-06-05T23:33:01Z 2020-06-05T23:46:43Z 2020-06-05T23:46:43Z OWNER  

I broke it in - and it turns out there was no test coverage so I didn't realize it was broken.

datasette 107914493 issue    
631789422 MDU6SXNzdWU2MzE3ODk0MjI= 799 TestResponse needs to handle multiple set-cookie headers simonw 9599 closed 0     2 2020-06-05T17:39:52Z 2020-06-05T18:34:10Z 2020-06-05T18:34:10Z OWNER  

Seeing this test failure on #798:

_______________________ test_auth_token _______________________
app_client = <tests.fixtures.TestClient object at 0x11285c910>
    def test_auth_token(app_client):
        "The /-/auth-token endpoint sets the correct cookie"
        assert app_client.ds._root_token is not None
        path = "/-/auth-token?token={}".format(app_client.ds._root_token)
        response = app_client.get(path, allow_redirects=False,)
        assert 302 == response.status
        assert "/" == response.headers["Location"]
>       assert {"id": "root"} == app_client.ds.unsign(response.cookies["ds_actor"], "actor")
E       KeyError: 'ds_actor'
datasette/tests/ KeyError

It looks like that's happening because the ASGI middleware is adding another set-cookie header - but those two set-cookie headers are combined into one when the TestResponse is constructed:

datasette 107914493 issue    
570301333 MDU6SXNzdWU1NzAzMDEzMzM= 684 Add documentation on Database introspection methods to internals.rst simonw 9599 closed 0   Datasette 1.0 3268330 4 2020-02-25T04:20:24Z 2020-06-04T18:56:15Z 2020-05-30T18:40:39Z OWNER  

internals.rst will be landing as part of #683

datasette 107914493 issue    
275082158 MDU6SXNzdWUyNzUwODIxNTg= 119 Build an "export this data to google sheets" plugin simonw 9599 closed 0     1 2017-11-18T14:14:51Z 2020-06-04T18:46:40Z 2020-06-04T18:46:39Z OWNER  

Inspired by

It should be a plug-in because I'd like to keep all interactions with proprietary / non-open-source software encapsulated in plugins rather than shipped as part of core.

datasette 107914493 issue    
629595228 MDExOlB1bGxSZXF1ZXN0NDI2ODkxNDcx 796 New WIP writable canned queries simonw 9599 closed 0   Datasette 1.0 3268330 9 2020-06-03T00:08:00Z 2020-06-03T15:16:52Z 2020-06-03T15:16:50Z OWNER simonw/datasette/pulls/796

Refs #698. Replaces #703

Still todo:

  • Unit tests
  • <del>Figure out .json mode</del>
  • Flash message solution
  • <del>CSRF protection</del>
  • Better error message display on errors
  • Documentation
  • <del>Maybe widgets?</del> I'll do these later
datasette 107914493 pull    
585597133 MDExOlB1bGxSZXF1ZXN0MzkxOTI0NTA5 703 WIP implementation of writable canned queries simonw 9599 closed 0     3 2020-03-21T22:23:51Z 2020-06-03T00:08:14Z 2020-06-02T23:57:35Z OWNER simonw/datasette/pulls/703

Refs #698.

datasette 107914493 pull    
629535669 MDU6SXNzdWU2Mjk1MzU2Njk= 794 Show hooks implemented by each plugin on /-/plugins simonw 9599 closed 0   Datasette 1.0 3268330 2 2020-06-02T21:44:38Z 2020-06-02T22:30:17Z 2020-06-02T21:50:10Z OWNER  


        "name": "",
        "static": false,
        "templates": false,
        "version": null,
        "hooks": [
datasette 107914493 issue    
626593402 MDU6SXNzdWU2MjY1OTM0MDI= 780 Internals documentation for datasette.metadata() method simonw 9599 open 0   Datasette 1.0 3268330 2 2020-05-28T15:14:22Z 2020-06-02T22:13:12Z   OWNER

datasette 107914493 issue    
497170355 MDU6SXNzdWU0OTcxNzAzNTU= 576 Documented internals API for use in plugins simonw 9599 open 0   Datasette 1.0 3268330 8 2019-09-23T15:28:50Z 2020-06-02T22:13:09Z   OWNER  

Quite a few of the plugin hooks make a "datasette" instance of the Datasette class available to the plugins, so that they can look up configuration settings and execute database queries.

This means it should provide a documented, stable API so that plugin authors can rely on it.

datasette 107914493 issue    
440134714 MDU6SXNzdWU0NDAxMzQ3MTQ= 446 Define mechanism for plugins to return structured data simonw 9599 open 0   Datasette 1.0 3268330 6 2019-05-03T17:00:16Z 2020-06-02T22:12:15Z   OWNER  

Several plugin hooks now expect plugins to return data in a specific shape - notably the new output format hook and the custom facet hook.

These use Python dictionaries right now but that's quite error prone: it would be good to have a mechanism that supported a more structured format.

Full list of current hooks is here:
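One option for a "more structured format" is a dataclass per return shape — a hypothetical sketch for a facet result (the field names here are illustrative, not the real hook contract):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FacetResult:
    # Structured alternative to a free-form dict whose keys are easy
    # to get wrong in a plugin
    name: str
    toggle_url: str
    results: List[dict] = field(default_factory=list)

facet = FacetResult(name="tags", toggle_url="/db/table?_facet=tags")
facet.results.append({"value": "python", "count": 12})
```

A typo in a field name then fails loudly at construction time instead of silently producing a malformed dict.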

datasette 107914493 issue    
629459637 MDU6SXNzdWU2Mjk0NTk2Mzc= 792 Replace response.body.decode("utf8") with response.text in tests simonw 9599 closed 0     0 2020-06-02T19:32:24Z 2020-06-02T21:29:58Z 2020-06-02T21:29:58Z OWNER  

Make use of the response.text property to clean up the tests a tiny bit:

datasette 107914493 issue    
629473827 MDU6SXNzdWU2Mjk0NzM4Mjc= 5 Suggesion: Add output example to readme harryvederci 26745575 open 0     0 2020-06-02T19:56:49Z 2020-06-02T19:56:49Z   NONE  

First off, thanks for open sourcing this application! This is a suggestion to increase the amount of people that would make use of it: an example in the readme file would help.

Currently, users have to clone the app, install it, authorize through Pocket, run a command, and then find out if this application does what they hope it does.

Another possibility is to add a file example-output.db, containing one (mock) Pocket article.

Keep up the good work!

pocket-to-sqlite 213286752 issue    
628156527 MDU6SXNzdWU2MjgxNTY1Mjc= 789 Mechanism for enabling pluggy tracing simonw 9599 open 0     2 2020-06-01T05:10:14Z 2020-06-01T05:11:03Z   OWNER  

Could be useful for debugging plugins:

I tried this out by adding these two lines in

pm = pluggy.PluginManager("datasette")
# Added these:
pm.trace.root.setwriter(print)
pm.enable_tracing()

Output looked something like this:

INFO: - "GET /-/-/static/app.css HTTP/1.1" 404 Not Found
  actor_from_request [hook]
      datasette: < object at 0x106277ad0>
      request: <datasette.utils.asgi.Request object at 0x106550a50>

  finish actor_from_request --> [] [hook]

  extra_body_script [hook]
      template: show_json.html
      database: None
      table: None
      view_name: json_data
      datasette: < object at 0x106277ad0>

  finish extra_body_script --> [] [hook]

  extra_template_vars [hook]
      template: show_json.html
      database: None
      table: None
      view_name: json_data
      request: <datasette.utils.asgi.Request object at 0x1065504d0>
      datasette: < object at 0x106277ad0>

  finish extra_template_vars --> [] [hook]

  extra_css_urls [hook]
      template: show_json.html
      database: None
      table: None
      datasette: < object at 0x106277ad0>

  finish extra_css_urls --> [] [hook]

  extra_js_urls [hook]
      template: show_json.html
      database: None
      table: None
      datasette: < object at 0x106277ad0>

  finish extra_js_urls --> [] [hook]

INFO: - "GET /-/actor HTTP/1.1" 200 OK
  actor_from_request [hook]
      datasette: < object at 0x106277ad0>
      request: <datasette.utils.asgi.Request object at 0x1065500d0>

  finish actor_from_request --> [] [hook]
datasette 107914493 issue    
627836898 MDExOlB1bGxSZXF1ZXN0NDI1NTMxMjA1 783 Authentication: plugin hooks plus default --root auth mechanism simonw 9599 closed 0     0 2020-05-30T22:25:47Z 2020-06-01T01:16:44Z 2020-06-01T01:16:43Z OWNER simonw/datasette/pulls/783

See #699

datasette 107914493 pull    
459590021 MDU6SXNzdWU0NTk1OTAwMjE= 519 Decide what goes into Datasette 1.0 simonw 9599 open 0   Datasette 1.0 3268330 2 2019-06-23T15:47:41Z 2020-05-30T18:55:24Z   OWNER  

Datasette ASGI #272 is a big part of it... but 1.0 will generally be an indicator that Datasette is a stable platform for developers to write plugins and custom templates against. So lots to think about.

datasette 107914493 issue    
326800219 MDU6SXNzdWUzMjY4MDAyMTk= 292 Mechanism for customizing the SQL used to select specific columns in the table view simonw 9599 open 0     14 2018-05-27T09:05:52Z 2020-05-30T18:45:38Z   OWNER  

Some columns don't make a lot of sense in their default representation - binary blobs such as SpatiaLite geometries for example, or lengthy columns that really should be truncated somehow.

We may also find that there are tables where we don't want to show all of the columns - so a mechanism to select a subset of columns would be nice.

I think there are two features here:

  • the ability to request a subset of columns on the table view
  • the ability to override the SQL for a specific column and/or add extra columns - AsGeoJSON(Geometry) for example

Both features should be available via both querystring arguments and in metadata.json

The querystring argument for custom SQL should only work if allow_sql config is turned on.

Refs #276

datasette 107914493 issue    
445850934 MDU6SXNzdWU0NDU4NTA5MzQ= 473 Plugin hook: register_filters simonw 9599 open 0     7 2019-05-19T18:44:33Z 2020-05-30T18:44:55Z   OWNER  

I meant to add this as part of the facets plugin mechanism but didn't quite get to it. This will allow plugins to register extra filters, as seen in datasette/

datasette 107914493 issue    


CREATE TABLE [issues] (
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);
Powered by Datasette · Query took 44.832ms · About: github-to-sqlite