1,490 rows sorted by updated_at descending




id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app
750141615 MDExOlB1bGxSZXF1ZXN0NTI2ODQ3ODIz 7 Fixed conflicting CLI flags tlockney 8944 open 0     0 2020-11-24T23:25:12Z 2020-11-24T23:25:12Z   FIRST_TIME_CONTRIBUTOR dogsheep/pocket-to-sqlite/pulls/7

The -a flag used for the auth credentials and the shortened form of the --all flag were in conflict on the fetch command. To be consistent with other -to-sqlite libraries in the Dogsheep ecosystem, I removed the shortened form of the --all flag.
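The collision can be reproduced with the standard library's argparse (the actual tools use Click, but the failure mode is analogous): two options cannot share a short flag, so dropping the short form of --all resolves it. A minimal sketch, with illustrative option names:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-a", "--auth", default="auth.json")

# A second option also claiming -a collides with the auth flag:
# argparse raises ArgumentError for conflicting option strings.
try:
    parser.add_argument("-a", "--all", action="store_true")
    conflicted = False
except argparse.ArgumentError:
    conflicted = True

# The fix from the PR: keep only the long form of --all
parser.add_argument("--all", dest="fetch_all", action="store_true")
args = parser.parse_args(["--all"])
```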

pocket-to-sqlite 213286752 pull    
642651572 MDU6SXNzdWU2NDI2NTE1NzI= 860 Plugin hook for database/table metadata simonw 9599 open 0   Datasette 0.52 6055094 9 2020-06-21T22:20:25Z 2020-11-24T23:20:24Z   OWNER  

I'm not happy with how metadata.(json|yaml) keeps growing new features. Rather than having a single plugin hook for all of metadata.json I'm going to split out the feature that shows actual real metadata for tables and databases - source, license etc - into its own plugin-powered mechanism.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/357#issuecomment-647189045_

datasette 107914493 issue    
750079085 MDU6SXNzdWU3NTAwNzkwODU= 1107 Rename datasette.config() method to datasette.setting() simonw 9599 closed 0   Datasette 0.52 6055094 5 2020-11-24T21:24:11Z 2020-11-24T22:09:11Z 2020-11-24T22:06:38Z OWNER  

Part of #1105. Thankfully this isn't yet part of the documented public API on https://docs.datasette.io/en/stable/internals.html

datasette 107914493 issue    
750089847 MDU6SXNzdWU3NTAwODk4NDc= 1109 Deprecate --config in Datasette 1.0 (in favour of --setting) simonw 9599 open 0   Datasette 1.0 3268330 0 2020-11-24T21:43:57Z 2020-11-24T21:43:58Z   OWNER  

I added a deprecation warning to this in #992.

datasette 107914493 issue    
749982022 MDU6SXNzdWU3NDk5ODIwMjI= 1105 Rebrand config as settings simonw 9599 closed 0   Datasette 0.52 6055094 2 2020-11-24T19:35:12Z 2020-11-24T21:40:28Z 2020-11-24T21:40:28Z OWNER  

I realized I need a tracking ticket for this.

I want to start splitting things like plugin configuration and default facets / sort order out of metadata.json - so I want to start calling those things configuration. But the term configuration is already used for the --config family of global settings. So I'm rebranding that type of configuration as settings to free up the name "configuration" for more run-time concerns (default sort order) and plugin configuration.

datasette 107914493 issue    
749983857 MDU6SXNzdWU3NDk5ODM4NTc= 1106 Rebrand and redirect config.rst as settings.rst simonw 9599 closed 0   Datasette 0.52 6055094 4 2020-11-24T19:38:17Z 2020-11-24T21:39:58Z 2020-11-24T21:39:58Z OWNER   datasette 107914493 issue    
750087350 MDU6SXNzdWU3NTAwODczNTA= 1108 Configure /en/stable/config.html redirect when I ship 0.52 simonw 9599 open 0   Datasette 0.52 6055094 0 2020-11-24T21:39:19Z 2020-11-24T21:39:36Z   OWNER   datasette 107914493 issue    
749981663 MDU6SXNzdWU3NDk5ODE2NjM= 1104 config.json in directory config mode should be settings.json simonw 9599 closed 0   Datasette 0.52 6055094 2 2020-11-24T19:34:38Z 2020-11-24T20:37:42Z 2020-11-24T20:37:41Z OWNER   datasette 107914493 issue    
749979454 MDU6SXNzdWU3NDk5Nzk0NTQ= 1103 Rename /-/config to /-/settings simonw 9599 closed 0   Datasette 0.52 6055094 2 2020-11-24T19:31:00Z 2020-11-24T20:19:20Z 2020-11-24T20:19:19Z OWNER  

As part of rebranding config to settings, see also #992.

datasette 107914493 issue    
714449879 MDU6SXNzdWU3MTQ0NDk4Nzk= 992 Change "--config foo:bar" to "--setting foo bar" simonw 9599 closed 0   Datasette 0.52 6055094 6 2020-10-05T01:27:45Z 2020-11-24T20:01:54Z 2020-11-24T20:01:54Z OWNER  

I designed the config format before I had a good feel for CLI design using Click. --config max_page_size 2000 is better than --config max_page_size:2000.
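The two-argument style can be sketched with stdlib argparse (Datasette itself uses Click; the names here are illustrative, not Datasette's implementation):

```python
import argparse

parser = argparse.ArgumentParser()
# --setting consumes two values - a key and a value - and can be
# repeated, avoiding the awkward key:value syntax entirely.
parser.add_argument("--setting", nargs=2, action="append",
                    metavar=("KEY", "VALUE"), default=[])

args = parser.parse_args([
    "--setting", "max_page_size", "2000",
    "--setting", "sql_time_limit_ms", "1000",
])
settings = dict(args.setting)
```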

datasette 107914493 issue    
315738696 MDU6SXNzdWUzMTU3Mzg2OTY= 226 Unit tests for installable plugins simonw 9599 closed 0     2 2018-04-19T06:05:32Z 2020-11-24T19:52:51Z 2020-11-24T19:52:46Z OWNER  

I'd like more thorough unit test coverage of the plugins mechanism - in particular for installable plugins.

I think I can do this while still having the code live in the same repo, by creating a subdirectory in tests/example_plugin with its own setup.py and then running python setup.py install as part of the test runner.

I imagine I will need to bump the version number every time I change the plugin in case someone runs the test again in the same virtual environment.

If that doesn't work I can instead ship a separate datasette-plugins-tests package to PyPI and add that as a tests_require dependency.

Refs #14

datasette 107914493 issue    
722673818 MDU6SXNzdWU3MjI2NzM4MTg= 1023 Fix issues relating to base_url simonw 9599 closed 0   0.51 6026070 3 2020-10-15T21:02:06Z 2020-11-24T19:51:44Z 2020-10-31T20:51:01Z OWNER  

Lots of base_url bugs that I'd like to solve at once.

datasette 107914493 issue    
346026869 MDU6SXNzdWUzNDYwMjY4Njk= 354 Handle many-to-many relationships simonw 9599 open 0     0 2018-07-31T04:03:13Z 2020-11-24T19:51:18Z   OWNER  

This is a master tracking ticket for various many-2-many features.

datasette 107914493 issue    
749289611 MDU6SXNzdWU3NDkyODk2MTE= 1102 Plugin testing docs should show datasette.client simonw 9599 open 0     0 2020-11-24T02:34:46Z 2020-11-24T02:34:46Z   OWNER  

https://docs.datasette.io/en/stable/testing_plugins.html currently shows how to use HTTPX directly.

datasette 107914493 issue    
749283032 MDU6SXNzdWU3NDkyODMwMzI= 1101 register_output_renderer() should support streaming data simonw 9599 open 0     2 2020-11-24T02:17:09Z 2020-11-24T02:22:55Z   OWNER  

I'd like to implement this by first extending the register_output_renderer() hook to support streaming huge responses, then switching CSV to use the plugin hook in addition to TSV using it.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1096#issuecomment-732542285_

datasette 107914493 issue    
743359646 MDU6SXNzdWU3NDMzNTk2NDY= 1096 TSV should be a default export option simonw 9599 open 0     1 2020-11-15T22:24:02Z 2020-11-24T02:16:22Z   OWNER  

Refs #1095

datasette 107914493 issue    
748372469 MDU6SXNzdWU3NDgzNzI0Njk= 9 ParseError: undefined entity š mkorosec 4028322 open 0     0 2020-11-22T23:04:35Z 2020-11-22T23:04:51Z   NONE  

I encountered a parse error if the enex file contained š or  

Run command:
evernote-to-sqlite enex evernote.db evernote.enex

Traceback (most recent call last):
  File "evernote_to_sqlite/cli.py", line 31, in enex
    save_note(db, note)
  File "evernote_to_sqlite/utils.py", line 35, in save_note
    content = ET.tostring(ET.fromstring(content_xml)).decode("utf-8")
  File "/usr/lib/python3.8/xml/etree/ElementTree.py", line 1320, in XML
xml.etree.ElementTree.ParseError: undefined entity š: line 3, column 35


sed -i 's/š//g' evernote.enex
sed -i 's/ //g' evernote.enex
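The same effect as the sed workaround can be had in Python without editing the file on disk. ElementTree only understands the five predefined XML entities, so unknown named entities like &scaron; raise "undefined entity"; a minimal sketch (not the evernote-to-sqlite fix) strips them before parsing:

```python
import re
import xml.etree.ElementTree as ET

def parse_stripping_unknown_entities(xml_text):
    # Drop named entities that are not one of the five XML
    # predefined ones; numeric references (&#...;) are left alone
    # because ElementTree handles those itself.
    cleaned = re.sub(r"&(?!amp;|lt;|gt;|quot;|apos;|#)\w+;", "", xml_text)
    return ET.fromstring(cleaned)
```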
evernote-to-sqlite 303218369 issue    
748370021 MDExOlB1bGxSZXF1ZXN0NTI1MzcxMDI5 8 fix import error if note has no "updated" element mkorosec 4028322 open 0     0 2020-11-22T22:51:05Z 2020-11-22T22:51:05Z   FIRST_TIMER dogsheep/evernote-to-sqlite/pulls/8

I got the following error when executing evernote-to-sqlite enex evernote.db evernote.enex

  File "evernote_to_sqlite/cli.py", line 31, in enex
    save_note(db, note)
  File "evernote_to_sqlite/utils.py", line 28, in save_note
    updated = note.find("updated").text
AttributeError: 'NoneType' object has no attribute 'text'

Seems that in some cases the updated element is not added to the note, this is a part of the problematic note:
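Since find() returns None for an absent element, the fix is to guard before touching .text. A sketch with a hypothetical helper name, mirroring the shape of the PR's change:

```python
import xml.etree.ElementTree as ET

def element_text(note, tag):
    # find() returns None when the element is missing, which is
    # what produced the AttributeError above - so check first.
    el = note.find(tag)
    return el.text if el is not None else None
```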

evernote-to-sqlite 303218369 pull    
737394470 MDU6SXNzdWU3MzczOTQ0NzA= 1084 Table/database action menu cut off if too short simonw 9599 closed 0   Datasette 0.52 6055094 4 2020-11-06T01:55:23Z 2020-11-21T23:45:59Z 2020-11-21T23:45:59Z OWNER  

datasette 107914493 issue    
747702144 MDU6SXNzdWU3NDc3MDIxNDQ= 1100 Error on OPTIONS request to database akehrer 1319404 open 0     0 2020-11-20T18:16:43Z 2020-11-20T18:16:43Z   NONE  

When I perform an OPTIONS request against a database or table datasette fails with an internal error.

All these tests result in the traceback below.

curl -XOPTIONS\?_search\=test
Traceback (most recent call last):
  File "[path-to-python]/site-packages/datasette/app.py", line 1033, in route_path
    response = await view(request, send)
  File "[path-to-python]/site-packages/datasette/views/base.py", line 146, in view
    request, **request.scope["url_route"]["kwargs"]
  File "[path-to-python]/site-packages/datasette/views/base.py", line 118, in dispatch_request
    return await handler(request, *args, **kwargs)
TypeError: object Response can't be used in 'await' expression

Making the options function in the DataView class async fixed it for me.

    async def options(self, request, *args, **kwargs):
        r = Response.text("ok")
        if self.ds.cors:
            r.headers["Access-Control-Allow-Origin"] = "*"
        return r
datasette 107914493 issue    
743011397 MDU6SXNzdWU3NDMwMTEzOTc= 1094 import EX_CANTCREAT means datasette fails to work on Windows drkane 1049910 open 0     1 2020-11-14T14:17:11Z 2020-11-20T16:11:29Z   NONE  

Trying to use datasette 0.51.1 gives the following error:

ImportError: cannot import name 'EX_CANTCREAT' from 'os' (C:\Users\drkan\AppData\Local\Programs\Python\Python39\lib\os.py)

Looks like that code is only available on unix: https://docs.python.org/3/library/os.html#os.EX_CANTCREAT

Removing the line makes it work fine (EX_CANTCREAT doesn't seem to be used anywhere?)
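One portable alternative to removing the line: os.EX_CANTCREAT exists only on Unix, so fall back to the value documented in sysexits.h (73) when the attribute is absent. A sketch of that approach:

```python
import os

# os.EX_CANTCREAT is Unix-only; getattr with the documented
# sysexits.h value (73) keeps the import working on Windows.
EX_CANTCREAT = getattr(os, "EX_CANTCREAT", 73)
```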

datasette 107914493 issue    
456578474 MDU6SXNzdWU0NTY1Nzg0NzQ= 511 Get Datasette working on Windows, including CI simonw 9599 open 0     6 2019-06-15T21:41:58Z 2020-11-20T06:35:14Z   OWNER  

This should almost happen as a side-effect of moving from Sanic to Uvicorn during the port to ASGI: #272

Additional steps:

  • test it manually
  • update documentation
  • set up some form of Windows CI
datasette 107914493 issue    
745393298 MDU6SXNzdWU3NDUzOTMyOTg= 52 Discussion: Adding support for fetching only fresh tweets fatihky 4169772 closed 0     1 2020-11-18T07:01:48Z 2020-11-18T07:12:45Z 2020-11-18T07:12:45Z NONE  

I think it'd be very useful if this tool has an option like --incremental to fetch only newer tweets. This way operations could complete very fast in sequential runs. I'd want to try to implement this feature if it seems OK for this tool's purpose.

twitter-to-sqlite 206156866 issue    
742011049 MDU6SXNzdWU3NDIwMTEwNDk= 1091 .json and .csv exports fail to apply base_url simonw 9599 open 0     10 2020-11-12T23:45:16Z 2020-11-17T16:31:00Z   OWNER  

Just tested with the latest Docker image, and it works pretty much everywhere! THANK YOU!

I did notice that if I try to export json or csv, the base is not applied. Not sure if I should reopen this issue or open a new one.

To see this, go here: https://corpora.tika.apache.org/datasette/corpora-metadata/REF_PARSE_EXCEPTION_TYPES

Click/hover over json or CSV and you'll see that the 'datasette' base is not included.

_Originally posted by @tballison in https://github.com/simonw/datasette/issues/865#issuecomment-726385422_

datasette 107914493 issue    
743384829 MDExOlB1bGxSZXF1ZXN0NTIxMjg3OTk0 203 changes to allow for compound foreign keys drkane 1049910 open 0     0 2020-11-16T00:30:10Z 2020-11-16T11:03:07Z   FIRST_TIME_CONTRIBUTOR simonw/sqlite-utils/pulls/203

Add support for compound foreign keys, as per issue #117

Not sure if this is the right approach. In particular I'm unsure about:

  • the new ForeignKey class, which replaces the namedtuple in order to ensure that column and other_column are forced into tuples. The class does the job, but doesn't feel very elegant.
  • I haven't rewritten guess_foreign_table to take account of multiple columns, so it just checks for the first column in the foreign key definition. This isn't ideal.
  • I haven't added any ability to the CLI to add compound foreign keys, it's only in the python API at the moment.

The PR also contains a minor related change that columns and tables are always quoted in foreign key definitions.

sqlite-utils 140912432 pull    
743400216 MDU6SXNzdWU3NDM0MDAyMTY= 11 Error thrown: sqlite3.OperationalError: table users has no column named lastName beaugunderson 61791 open 0     1 2020-11-16T01:21:18Z 2020-11-16T02:15:22Z   NONE  

Just installed swarm-to-sqlite-0.3.2 and tried according to the docs:

Traceback (most recent call last):
  File "/usr/local/bin/swarm-to-sqlite", line 8, in <module>
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/swarm_to_sqlite/cli.py", line 73, in cli
    save_checkin(checkin, db)
  File "/usr/local/lib/python3.9/site-packages/swarm_to_sqlite/utils.py", line 82, in save_checkin
    checkins_table.m2m("users", user, m2m_table="likes", pk="id")
  File "/usr/local/lib/python3.9/site-packages/sqlite_utils/db.py", line 1914, in m2m
    id = other_table.insert(record, pk=pk, replace=True).last_pk
  File "/usr/local/lib/python3.9/site-packages/sqlite_utils/db.py", line 1647, in insert
    return self.insert_all(
  File "/usr/local/lib/python3.9/site-packages/sqlite_utils/db.py", line 1765, in insert_all
  File "/usr/local/lib/python3.9/site-packages/sqlite_utils/db.py", line 1575, in insert_chunk
    result = self.db.execute(query, params)
  File "/usr/local/lib/python3.9/site-packages/sqlite_utils/db.py", line 200, in execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: table users has no column named lastName
swarm-to-sqlite 205429375 issue    
743370900 MDU6SXNzdWU3NDMzNzA5MDA= 1098 Foreign key links break for compound foreign keys simonw 9599 open 0   Datasette 0.52 6055094 2 2020-11-15T23:22:14Z 2020-11-15T23:26:14Z   OWNER  

Reported on Twitter here: https://twitter.com/ZaneSelvans/status/1328093641395548161

Maybe I'm doing something wrong here but the automatically generated links based on foreign key relationships seem to be working here for utility_id_eia, but not for plant_id_eia & generator_id which seems odd: https://pudl-datasette-xl7xwcpe2a-uc.a.run.app/pudl/generators_eia860

Right now it seems like they're trying to, but with only one of the two keys, so it gives "Error 500. You did not supply a value for binding 2." Maybe only create the links when it's a simple foreign key?

datasette 107914493 issue    
743369188 MDExOlB1bGxSZXF1ZXN0NTIxMjc2Mjk2 1097 Use f-strings simonw 9599 closed 0     1 2020-11-15T23:12:36Z 2020-11-15T23:24:24Z 2020-11-15T23:24:23Z OWNER simonw/datasette/pulls/1097

Since Datasette now requires Python 3.6, how about some f-strings?

I ran this in the datasette root checkout:

pip install flynt
flynt .
black .
datasette 107914493 pull    
743371103 MDU6SXNzdWU3NDMzNzExMDM= 1099 Support linking to compound foreign keys simonw 9599 open 0     0 2020-11-15T23:23:17Z 2020-11-15T23:23:17Z   OWNER  

Reported as a bug in #1098 because they caused 500 errors - but it would be even better if Datasette could hyperlink to related rows via compound foreign keys.

datasette 107914493 issue    
681334912 MDU6SXNzdWU2ODEzMzQ5MTI= 942 Support column descriptions in metadata.json simonw 9599 open 0   Datasette 0.52 6055094 4 2020-08-18T20:52:00Z 2020-11-15T19:54:44Z   OWNER  

Could look something like this:

    "title": "Five Thirty Eight",
    "license": "CC Attribution 4.0 License",
    "license_url": "https://creativecommons.org/licenses/by/4.0/",
    "source": "fivethirtyeight/data on GitHub",
    "source_url": "https://github.com/fivethirtyeight/data",
    "databases": {
        "fivethirtyeight": {
            "tables": {
                "mueller-polls/mueller-approval-polls": {
                    "description_html": "<p>....</p>",
                    "columns": {
                        "name_of_column": "column_description goes here"
datasette 107914493 issue    
743297582 MDU6SXNzdWU3NDMyOTc1ODI= 7 evernote-to-sqlite on windows 10 give this error: TypeError: insert() got an unexpected keyword argument 'replace' martinvanwieringen 42387931 open 0     0 2020-11-15T16:57:28Z 2020-11-15T16:57:28Z   NONE  

running evernote-to-sqlite 0.2 on windows 10. Command:

evernote-to-sqlite enex evernote.db MyNotes.enex

I get the following error:

File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\evernote_to_sqlite\utils.py", line 46, in save_note
note_id = db["notes"].insert(row, hash_id="id", replace=True, alter=True).last_pk
TypeError: insert() got an unexpected keyword argument 'replace'

Removing replace=True leads to the error below:

note_id = db["notes"].insert(row, hash_id="id", alter=True).last_pk
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py", line 924, in insert
return self.insert_all(
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py", line 1046, in insert_all
result = self.db.conn.execute(sql, values)
sqlite3.IntegrityError: UNIQUE constraint failed: notes.id

evernote-to-sqlite 303218369 issue    
743071410 MDExOlB1bGxSZXF1ZXN0NTIxMDU0NjEy 13 SQLite does not have case sensitive columns tomaskrehlik 1689944 open 0     0 2020-11-14T20:12:32Z 2020-11-14T20:12:32Z   FIRST_TIME_CONTRIBUTOR dogsheep/healthkit-to-sqlite/pulls/13

This solves a weird issue when there is record with metadata key
that is only different in letter cases.

See the test for details.

healthkit-to-sqlite 197882382 pull    
742041667 MDU6SXNzdWU3NDIwNDE2Njc= 1092 Make cascading permission checks available to plugins simonw 9599 open 0     0 2020-11-13T01:02:55Z 2020-11-13T01:02:55Z   OWNER  

The BaseView class has a method for cascading permission checks, but it's not easily accessible to plugins.


This leaves plugins like datasette-graphql having to implement their own versions of this logic, which is bad: https://github.com/simonw/datasette-graphql/issues/65

First check view-database - if that says False then disallow access, if it says True then allow access. If it says None check view-instance.

This should become a supported API that plugins are encouraged to use.
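The cascade described above can be sketched as follows. This is only an illustration of the logic, not Datasette's actual internals or plugin API; the deny-by-default when every check returns None is an assumption of this sketch:

```python
import asyncio

async def cascading_allowed(check, actions):
    # First explicit True/False verdict wins; None falls through
    # to the next, broader permission in the chain.
    for action in actions:
        verdict = await check(action)
        if verdict is not None:
            return verdict
    return False  # no check had an opinion: deny in this sketch
```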

datasette 107914493 issue    
323718842 MDU6SXNzdWUzMjM3MTg4NDI= 268 Mechanism for ranking results from SQLite full-text search simonw 9599 open 0   Datasette 0.52 6055094 5 2018-05-16T17:36:40Z 2020-11-13T00:09:05Z   OWNER  

This isn't particularly straight-forward - all the more reason for Datasette to implement it for you. This article is helpful: http://charlesleifer.com/blog/using-sqlite-full-text-search-with-python/

datasette 107914493 issue    
610829227 MDU6SXNzdWU2MTA4MjkyMjc= 749 Respect Cloud Run max response size of 32MB simonw 9599 open 0     2 2020-05-01T16:06:46Z 2020-11-13T00:05:15Z   OWNER  

https://cloud.google.com/run/quotas lists the maximum response size as 32MB.

I spotted a bug where attempting to download a database file larger than that from a Cloud Run deployment (in this case it was https://github-to-sqlite.dogsheep.net/github.db after I accidentally increased the size of that database) returned a 500 error because of this.

datasette 107914493 issue    
644582921 MDU6SXNzdWU2NDQ1ODI5MjE= 865 base_url doesn't seem to work when adding criteria and clicking "apply" tballison 6739646 closed 0   0.51 6026070 11 2020-06-24T12:39:57Z 2020-11-12T23:49:24Z 2020-10-20T05:22:59Z NONE  

Over on Apache Tika, we're using datasette to allow users to make sense of the metadata for our file regression testing corpus.

This could be user error in how I've set up the reverse proxy!

I started datasette like so:
docker run -d -p 8001:8001 -v $(pwd):/mnt datasetteproject/datasette datasette -p 8001 -h /mnt/corpora-metadata.db --config sql_time_limit_ms:60000 --config base_url:/datasette/

I then reverse proxied like so:

ProxyPreserveHost On
ProxyPass /datasette http://x.y.z.q:xxxx
ProxyPassReverse /datasette http://x.y.z.q:xxx

Regular sql works perfectly:

However, adding criteria and clicking 'Apply'

bounces back to:

datasette 107914493 issue    
741665726 MDU6SXNzdWU3NDE2NjU3MjY= 1089 Sweep documentation for words that minimize involved difficulty simonw 9599 closed 0     1 2020-11-12T14:53:05Z 2020-11-12T20:07:26Z 2020-11-12T20:07:26Z OWNER   datasette 107914493 issue    
741862364 MDU6SXNzdWU3NDE4NjIzNjQ= 1090 Custom widgets for canned query forms simonw 9599 open 0     0 2020-11-12T19:21:07Z 2020-11-12T19:58:34Z   OWNER  

This is an idea that was cut from the first version of writable canned queries:

I really want the option to use a <textarea> for a specific value.

Idea: metadata syntax like this:

json { "databases": { "my-database": { "queries": { "add_twitter_handle": { "sql": "insert into twitter_handles (username) values (:username)", "write": true, "params": { "username": { "widget": "textarea" } } } } } } }

I can ship with some default widgets and provide a plugin hook for registering extra widgets.

This opens up some really exciting possibilities for things like map widgets that let you draw polygons.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/698#issuecomment-608125928_

datasette 107914493 issue    
741268956 MDU6SXNzdWU3NDEyNjg5NTY= 1088 OperationalError('interrupted') can 500 on row page simonw 9599 closed 0     3 2020-11-12T04:29:55Z 2020-11-12T04:36:52Z 2020-11-12T04:36:52Z OWNER   datasette 107914493 issue    
741231849 MDU6SXNzdWU3NDEyMzE4NDk= 1087 Idea: ?_extra=urls for getting back URLs to useful things simonw 9599 open 0     0 2020-11-12T02:55:41Z 2020-11-12T02:55:41Z   OWNER  

Working on https://github.com/simonw/datasette-search-all/issues/10 made me realize that sometimes it can be difficult to calculate the URL for a database, table or row within Datasette.

It would be useful to have an optional extra JSON extension (using ?_extra= from #262) that can help with this.

datasette 107914493 issue    
735644513 MDU6SXNzdWU3MzU2NDQ1MTM= 1081 Fixtures should use FTS4 or FTS5, not FTS3 simonw 9599 closed 0   Datasette 0.52 6055094 0 2020-11-03T21:24:13Z 2020-11-12T00:03:00Z 2020-11-12T00:02:59Z OWNER  

Just spotted that fixtures.db uses FTS3, which is pretty much obsolete these days.


datasette 107914493 issue    
740512882 MDExOlB1bGxSZXF1ZXN0NTE4OTg4ODc5 1085 Use FTS4 in fixtures simonw 9599 closed 0     1 2020-11-11T06:44:30Z 2020-11-12T00:02:59Z 2020-11-12T00:02:58Z OWNER simonw/datasette/pulls/1085

Refs #1081

datasette 107914493 pull    
741021342 MDU6SXNzdWU3NDEwMjEzNDI= 1086 Foreign keys with blank titles result in non-clickable links simonw 9599 closed 0     3 2020-11-11T19:41:09Z 2020-11-11T23:55:39Z 2020-11-11T23:46:20Z OWNER  


The HTML looks like this:

<td class="col-tag_id type-int"><a href="/index/core_tag/1"></a>&nbsp;<em>1</em></td>
datasette 107914493 issue    
706167456 MDU6SXNzdWU3MDYxNjc0NTY= 168 Automate (as much as possible) updates published to Homebrew simonw 9599 closed 0     2 2020-09-22T08:08:37Z 2020-11-09T07:43:30Z 2020-11-09T07:43:30Z OWNER  

I'd like to get new sqlite-utils (and Datasette) releases submitted to Homebrew as painlessly as possible.

sqlite-utils 140912432 issue    
738514367 MDU6SXNzdWU3Mzg1MTQzNjc= 202 sqlite-utils insert -f colname - for configuring full-text search simonw 9599 open 0     0 2020-11-08T17:30:09Z 2020-11-08T17:30:22Z   OWNER  

A mechanism for specifying columns that should be configured for full-text search as part of the initial data import:

sqlite-utils insert mydb.db articles articles.csv --csv -f title -f body
sqlite-utils 140912432 issue    
735650864 MDU6SXNzdWU3MzU2NTA4NjQ= 194 3.0 release with some minor breaking changes simonw 9599 closed 0   3.0 6079500 3 2020-11-03T21:36:31Z 2020-11-08T17:19:35Z 2020-11-08T17:19:34Z OWNER  

While working on search (#192) I've spotted a few small changes I would like to make that would break backwards compatibility in minor ways, hence requiring a 3.x release.

db[table].search() - I would like this to default to sorting by rank

Also I'd like to free up the -c and -f options for other purposes from the standard output formats here:


I'd like -f to be used to indicate a full-text search column during an insert and -c to indicate a column (so you can specify which columns you want to output).

sqlite-utils 140912432 issue    
737153927 MDU6SXNzdWU3MzcxNTM5Mjc= 197 Rethink how table.search() method works simonw 9599 closed 0   3.0 6079500 5 2020-11-05T18:04:34Z 2020-11-08T17:07:37Z 2020-11-08T17:07:37Z OWNER  

I need to improve this method to help build sqlite-utils search in #192 (PR is #195).

The challenge is deciding how it should handle sorting by relevance - especially since that is easy in FTS5 but not at all easy in FTS4.

Latest test failure:
    114  ->         assert [("racoons are biting trash pandas", "USA", "bar")] == table.search(
    115                 "bite", order="rowid"
    116             )
    117
    118
    119         def test_optimize_fts(fresh_db):
    (Pdb) table.search("bite")
    [(2, 'racoons are biting trash pandas', 'USA', 'bar', -9.641434262948206e-07)]
The problem here is that the table.search() method now behaves differently for FTS4 vs. FTS5 tables.

With FTS4 you get back just the table columns.

With FTS5 you also get back the rowid as the first column and the rank score as the last column.

This is weird. It also makes me question whether having .search() return a list of tuples is the right API design.

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/pull/195#issuecomment-722542895_

sqlite-utils 140912432 issue    
735532751 MDU6SXNzdWU3MzU1MzI3NTE= 192 sqlite-utils search command simonw 9599 closed 0   3.0 6079500 9 2020-11-03T18:07:59Z 2020-11-08T17:07:01Z 2020-11-08T17:07:01Z OWNER  

A command that knows how to run a search against a FTS enabled table and return results ranked by relevance.

sqlite-utils 140912432 issue    
738128913 MDU6SXNzdWU3MzgxMjg5MTM= 201 .search(columns=) and sqlite-utils search -c ... bug simonw 9599 closed 0   3.0 6079500 1 2020-11-07T01:27:26Z 2020-11-08T16:54:15Z 2020-11-08T16:54:15Z OWNER  

Both table.search(columns=) and the sqlite-utils search -c option do not work as expected - they always return both the rowid and the rank columns even if those have not been requested.

This should be fixed before the 3.0 non-alpha release.

sqlite-utils 140912432 issue    
738115165 MDU6SXNzdWU3MzgxMTUxNjU= 200 sqlite-utils rows -c option simonw 9599 closed 0   3.0 6079500 1 2020-11-07T00:22:12Z 2020-11-07T00:28:48Z 2020-11-07T00:28:47Z OWNER  

To let you specify the exact columns you want. Based on the -c option to sqlite-utils search in #192.

sqlite-utils 140912432 issue    
735648209 MDU6SXNzdWU3MzU2NDgyMDk= 193 --tsv output format option simonw 9599 closed 0   3.0 6079500 0 2020-11-03T21:31:18Z 2020-11-07T00:09:52Z 2020-11-07T00:09:52Z OWNER  

We already support --csv for output, and the insert command accepts --tsv. The output format options should accept --tsv too.

sqlite-utils 140912432 issue    
577302229 MDU6SXNzdWU1NzczMDIyMjk= 91 Enable ordering FTS results by rank slygent 416374 closed 0   3.0 6079500 1 2020-03-07T08:43:51Z 2020-11-06T23:53:26Z 2020-11-06T23:53:25Z NONE  

According to https://www.sqlite.org/fts5.html (not sure about FTS4) results can be sorted by relevance. At the moment results are returned by default by rowid. Perhaps a flag can be added to the search method?
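For FTS5 this is built in: the virtual table exposes a hidden bm25-based rank column (lower = more relevant), so ordering is a one-line SQL change. A minimal sqlite3 sketch, assuming the Python build ships with FTS5 compiled in:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE VIRTUAL TABLE docs USING fts5(title, body);
    INSERT INTO docs VALUES ('one', 'trash pandas bite'),
                            ('two', 'racoons are trash pandas');
""")
# ORDER BY rank sorts matches by relevance instead of rowid
rows = db.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
    ["pandas"],
).fetchall()
```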

sqlite-utils 140912432 issue    
735663855 MDExOlB1bGxSZXF1ZXN0NTE1MDE0ODgz 195 table.search() improvements plus sqlite-utils search command simonw 9599 closed 0     3 2020-11-03T22:02:08Z 2020-11-06T18:30:49Z 2020-11-06T18:30:42Z OWNER simonw/sqlite-utils/pulls/195

Refs #192. Still needs tests.

sqlite-utils 140912432 pull    
737476423 MDU6SXNzdWU3Mzc0NzY0MjM= 198 Support order by relevance against FTS4 simonw 9599 closed 0     6 2020-11-06T05:36:31Z 2020-11-06T18:30:44Z 2020-11-06T18:30:44Z OWNER  

For #192 and #197 I've decided I want to be able to order by relevance in FTS4 as well as FTS5.

This means I need to port over my work on bm25() from https://github.com/simonw/sqlite-fts4 (since I don't want to add a full dependency).

sqlite-utils 140912432 issue    
737855731 MDU6SXNzdWU3Mzc4NTU3MzE= 199 @db.register_function(..., replace=False) to avoid double-registering custom functions simonw 9599 closed 0     1 2020-11-06T15:39:21Z 2020-11-06T18:30:44Z 2020-11-06T18:30:44Z OWNER  

I'd like a mechanism to optionally avoid registering a custom function if it has already been registered.

SQLite doesn't seem to offer a way to introspect registered custom functions so I'll need to track what has already been registered in sqlite-utils instead.

Should I register the custom rank_bm25 SQLite function for every connection, or should I register it against the connection just the first time the user attempts an FTS4 search? I think I'd rather register it only if it is needed.

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/198#issuecomment-723145383_
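Since SQLite offers no way to introspect registered custom functions, the tracking has to live in the wrapper. A sketch of the proposed replace= guard - not the actual sqlite-utils API:

```python
import sqlite3

class Database:
    def __init__(self, conn):
        self.conn = conn
        self._registered = set()

    def register_function(self, fn, replace=False):
        # Remember (name, arity) pairs ourselves; skip
        # re-registration unless replace=True is passed.
        key = (fn.__name__, fn.__code__.co_argcount)
        if key in self._registered and not replace:
            return
        self.conn.create_function(fn.__name__, fn.__code__.co_argcount, fn)
        self._registered.add(key)
```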

sqlite-utils 140912432 issue    
736520310 MDU6SXNzdWU3MzY1MjAzMTA= 196 Introspect if table is FTS4 or FTS5 simonw 9599 closed 0     19 2020-11-05T00:45:50Z 2020-11-05T03:54:07Z 2020-11-05T03:54:07Z OWNER  

I want .search() to work against both FTS5 and FTS4 tables - but sort by rank should only work for FTS5.

This means I need to be able to introspect and tell if a table is FTS4 or FTS5.

_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/192#issuecomment-722054264_
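The CREATE VIRTUAL TABLE statement stored in sqlite_master names the module the table uses, so a substring check is enough for a sketch (the real implementation may be more careful about quoting and comments):

```python
import sqlite3

def fts_version(db, table):
    # Look up the table's original DDL and check which FTS
    # module it was created with.
    row = db.execute(
        "SELECT sql FROM sqlite_master WHERE name = ?", [table]
    ).fetchone()
    if row is None:
        return None
    sql = row[0].lower()
    for version in ("fts5", "fts4", "fts3"):
        if version in sql:
            return version.upper()
    return None
```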

sqlite-utils 140912432 issue    
735852274 MDU6SXNzdWU3MzU4NTIyNzQ= 1082 DigitalOcean buildpack memory errors for large sqlite db? justmars 39538958 open 0     3 2020-11-04T06:35:32Z 2020-11-04T19:35:44Z   NONE  
  1. Have a sqlite db stored in Dropbox
  2. Previously tried the Digital Ocean build pack minimal approach (e.g. Procfile, requirements.txt, bin/post_compile)
  3. bin/post_compile with wget from Dropbox
  4. download of large sqlite db is successful
  5. log reveals that when building Docker container, Digital Ocean runs out of memory for 5gb+ sqlite db but works fine for 2gb+ sqlite db
datasette 107914493 issue    
736365306 MDU6SXNzdWU3MzYzNjUzMDY= 1083 Advanced CSV export for arbitrary queries simonw 9599 open 0     2 2020-11-04T19:23:05Z 2020-11-04T19:24:34Z   OWNER  

There's no link to download the CSV file - the table page has that as an advanced export option, but this is missing from the query page.

datasette 107914493 issue    
675753042 MDU6SXNzdWU2NzU3NTMwNDI= 131 sqlite-utils insert: options for column types simonw 9599 open 0     1 2020-08-09T18:59:11Z 2020-11-03T23:54:26Z   OWNER  

The insert command currently results in string types for every column - at least when used against CSV or TSV inputs.

It would be useful if you could do the following:

  • automatically detects the column types based on eg the first 1000 records
  • explicitly state the rule for specific columns

--detect-types could work for the former - or it could do that by default and allow opt-out using --no-detect-types

For specific columns maybe this:

sqlite-utils insert db.db images images.tsv \
  --tsv \
  -c id int \
  -c score float
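The automatic detection over the first N records could work by trying progressively looser types. A sketch of what --detect-types might do - not the sqlite-utils implementation:

```python
def detect_type(values):
    # Try int first, then float, over a sample of string values;
    # fall back to str if neither parses everything.
    for candidate in (int, float):
        try:
            for value in values:
                candidate(value)
            return candidate
        except ValueError:
            continue
    return str
```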
sqlite-utils 140912432 issue    
352768017 MDU6SXNzdWUzNTI3NjgwMTc= 362 Add option to include/exclude columns in search filters annapowellsmith 78156 open 0     1 2018-08-22T01:32:08Z 2020-11-03T19:01:59Z   NONE  

I have a dataset with many columns, of which only some are likely to be of interest for searching.

It would be great for usability if the search filters in the UI could be configured to include/exclude columns.

See also: https://github.com/simonw/datasette/issues/292

datasette 107914493 issue    
426722204 MDU6SXNzdWU0MjY3MjIyMDQ= 423 ?_search_col=X not reflected correctly in the UI simonw 9599 open 0     0 2019-03-28T21:48:19Z 2020-11-03T19:01:59Z   OWNER   datasette 107914493 issue    
507454958 MDU6SXNzdWU1MDc0NTQ5NTg= 596 Handle really wide tables better simonw 9599 open 0     6 2019-10-15T20:05:46Z 2020-11-02T21:44:45Z   OWNER  

If a table has hundreds of columns the Datasette UI starts getting unwieldy.

Addressing this would be neat. One option would be to only select the first 30 columns by default and provide a UI for selecting more.

datasette 107914493 issue    
734777631 MDU6SXNzdWU3MzQ3Nzc2MzE= 1080 "View all" option for facets, to provide a (paginated) list of ALL of the facet counts plus a link to view them simonw 9599 open 0   Datasette 1.0 3268330 5 2020-11-02T19:55:06Z 2020-11-02T20:16:05Z   OWNER  

Can use /database/-/... namespace from #296

datasette 107914493 issue    
636511683 MDU6SXNzdWU2MzY1MTE2ODM= 830 Redesign register_facet_classes plugin hook simonw 9599 open 0   Datasette 1.0 3268330 1 2020-06-10T20:03:27Z 2020-11-02T20:15:36Z   OWNER  

Nothing uses this plugin hook yet, so the design is not yet proven.

I'm going to build a real plugin against it and use that process to inform any design changes that may need to be made.

I'll add a warning about this to the documentation.

datasette 107914493 issue    
733829385 MDU6SXNzdWU3MzM4MjkzODU= 1077 database_actions plugin hook simonw 9599 closed 0   Datasette 0.52 6055094 3 2020-10-31T23:48:12Z 2020-11-02T18:43:25Z 2020-11-02T18:29:50Z OWNER  

Like column_actions but adds a cog menu to the database page.

datasette 107914493 issue    
733999615 MDU6SXNzdWU3MzM5OTk2MTU= 1079 Handle long breadcrumbs better with new menu simonw 9599 open 0   Datasette 0.52 6055094 1 2020-11-01T15:57:41Z 2020-11-02T18:28:29Z   OWNER   datasette 107914493 issue    
657572753 MDU6SXNzdWU2NTc1NzI3NTM= 894 ?sort=colname~numeric to sort by column cast to real simonw 9599 open 0   Datasette 0.52 6055094 19 2020-07-15T18:47:48Z 2020-11-02T18:28:24Z   OWNER  

If a text column actually contains numbers, being able to "sort by column, treated as numeric" would be really useful.

Probably depends on column actions enabled by #690

datasette 107914493 issue    
637395097 MDU6SXNzdWU2MzczOTUwOTc= 838 Incorrect URLs when served behind a proxy with base_url set tsibley 79913 closed 0   0.51 6026070 9 2020-06-11T23:58:55Z 2020-11-02T09:33:58Z 2020-10-31T20:50:41Z NONE  

I'm running datasette serve --config base_url:/foo/ …, proxying to it with this Apache config:

    ProxyPass /foo/ http://localhost:8001/ 
    ProxyPassReverse /foo/ http://localhost:8001/

and then accessing it via https://example.com/foo/.

Although many of the URLs in the pages are correct (presumably because they either use absolute paths which include base_url or relative paths), the faceting and pagination links still use fully-qualified URLs pointing at http://localhost:8001.

I looked into this a little in the source code, and it seems to be an issue anywhere request.url or request.path is used, as these contain the values for the request between the frontend (Apache) and backend (Datasette) server. Those properties are primarily used via the path_with_… family of utility functions and the Datasette.absolute_url method.
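Speculatively, the fix amounts to routing every generated link through a helper that prepends the configured base_url to the application path, rather than echoing back the proxied request.url. A minimal sketch of such a helper (hypothetical, not Datasette's actual path_with_… functions):

```python
def path_with_base_url(base_url, path):
    """Join a configured base_url prefix and an app path without doubling slashes."""
    return base_url.rstrip("/") + "/" + path.lstrip("/")
```

With `base_url:/foo/`, facet and pagination links built this way stay relative to the proxy prefix instead of pointing at http://localhost:8001.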

datasette 107914493 issue    
323658641 MDU6SXNzdWUzMjM2NTg2NDE= 262 Add ?_extra= mechanism for requesting extra properties in JSON simonw 9599 open 0   Datasette 0.52 6055094 8 2018-05-16T14:55:42Z 2020-11-01T05:04:41Z   OWNER  

Datasette views currently work by creating a set of data that should be returned as JSON, then defining an additional, optional template_data() function which is called if the view is being rendered as HTML.

This template_data() function calculates extra template context variables which are necessary for the HTML view but should not be included in the JSON.

Example of how that is used today: https://github.com/simonw/datasette/blob/2b79f2bdeb1efa86e0756e741292d625f91cb93d/datasette/views/table.py#L672-L704

With features like Facets in #255 I'm beginning to want to move more items into the template_data() - in the case of facets it's the suggested_facets array. This saves that feature from being calculated (involving several SQL queries) for the JSON case where it is unlikely to be used.

But... as an API user, I want to still optionally be able to access that information.

Solution: Add a ?_extra=suggested_facets&_extra=table_metadata argument which can be used to optionally request additional blocks to be added to the JSON API.

Then redefine as many of the current template_data() features as extra arguments instead, and teach Datasette to return certain extras by default when rendering templates.

This could allow the JSON representation to be slimmed down further (removing e.g. the table_definition and view_definition keys) while still making that information available to API users who need it.
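The dispatch this needs can be sketched in a few lines (the block names here are illustrative): parse the repeated ?_extra= parameters and only compute the registered blocks that were actually requested:

```python
from urllib.parse import parse_qs


def select_extras(query_string, available):
    """Return which registered extra blocks a ?_extra=a&_extra=b query asked for."""
    requested = parse_qs(query_string).get("_extra", [])
    # Unknown names are silently ignored rather than raising an error
    return [name for name in requested if name in available]
```

The expensive work (e.g. the SQL queries behind suggested_facets) then only runs for names in the returned list, or for the defaults the HTML renderer asks for.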

datasette 107914493 issue    
520655983 MDU6SXNzdWU1MjA2NTU5ODM= 619 "Invalid SQL" page should let you edit the SQL simonw 9599 open 0   Datasette 0.52 6055094 7 2019-11-10T20:54:12Z 2020-11-01T05:03:13Z   OWNER  


Would be useful if this page showed you the invalid SQL you entered so you can edit it and try again.

datasette 107914493 issue    
712984738 MDU6SXNzdWU3MTI5ODQ3Mzg= 987 Documentation on hooks for JavaScript plugin authors simonw 9599 open 0   Datasette 0.52 6055094 2 2020-10-01T16:10:14Z 2020-11-01T05:02:01Z   OWNER  

In #981 I added data-column= attributes to the <th> on the table page. These should become part of Datasette's documented API so JavaScript plugin authors can use them to derive things about the tables shown on a page (datasette-cluster-map uses them as of https://github.com/simonw/datasette-cluster-map/issues/18).

datasette 107914493 issue    
712202333 MDU6SXNzdWU3MTIyMDIzMzM= 982 SQL editor should allow execution of write queries, if you have permission simonw 9599 open 0   Datasette 0.52 6055094 2 2020-09-30T19:04:35Z 2020-11-01T05:01:59Z   OWNER  

The datasette-write plugin provides this at the moment https://github.com/simonw/datasette-write - but it feels like it should be a built-in capability, protected by a default permission.

UI concept: if you have write permission then the existing SQL editor gets an "execute write" checkbox underneath it.

JavaScript can spot if you appear to be trying to execute an UPDATE or INSERT or DELETE query and check that checkbox for you.

If you link to a query page with a non-SELECT then that query will be displayed in the box ready for you to POST submit it. The page will also then get "cannot be embedded" headers to protect against clickjacking.
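The "spot a write query" check described above could be a simple first-keyword heuristic; the issue imagines it in browser-side JavaScript, but the same logic is shown here in Python for illustration:

```python
import re

# First-keyword heuristic only: comments, CTEs and PRAGMAs are not handled
WRITE_RE = re.compile(
    r"^\s*(insert|update|delete|replace|create|drop|alter)\b", re.IGNORECASE
)


def looks_like_write_query(sql):
    """Heuristic: does the leading SQL keyword suggest a write operation?"""
    return bool(WRITE_RE.match(sql))
```

Since this is only used to pre-check the checkbox, false negatives are harmless: the server-side permission check remains the actual gate.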

datasette 107914493 issue    
627794879 MDU6SXNzdWU2Mjc3OTQ4Nzk= 782 Redesign default JSON format in preparation for Datasette 1.0 simonw 9599 open 0   Datasette 0.52 6055094 21 2020-05-30T18:47:07Z 2020-11-01T05:01:56Z   OWNER  

The default JSON just isn't right. I find myself using ?_shape=array for almost everything I build against the API.

datasette 107914493 issue    
648435885 MDU6SXNzdWU2NDg0MzU4ODU= 878 New pattern for views that return either JSON or HTML, available for plugins simonw 9599 open 0   Datasette 0.52 6055094 2 2020-06-30T19:26:13Z 2020-11-01T04:59:22Z   OWNER  

Can be part of #870 - refactoring existing views to use register_routes().

I'm going to put the new check_permissions() method on BaseView as well. If I want that method to be available to plugins I can do so by turning that BaseView class into a documented API that plugins are encouraged to use themselves.
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/832#issuecomment-651995453_

datasette 107914493 issue    
710650633 MDU6SXNzdWU3MTA2NTA2MzM= 979 Default table view JSON should include CREATE TABLE simonw 9599 open 0   Datasette 0.52 6055094 2 2020-09-28T23:54:58Z 2020-11-01T04:58:12Z   OWNER  

https://latest.datasette.io/fixtures/facetable.json doesn't currently include the CREATE TABLE statement for the page, even though it's available on the HTML version at https://latest.datasette.io/fixtures/facetable

datasette 107914493 issue    
707849175 MDU6SXNzdWU3MDc4NDkxNzU= 974 static assets and favicon aren't cached by the browser obra 45416 open 0   Datasette 0.52 6055094 1 2020-09-24T04:44:55Z 2020-11-01T04:58:11Z   NONE  

Using datasette to solve some frustrating problems with our fulfillment provider today, I was surprised to see repeated requests for assets under /-/static and the favicon. While it won't likely be a huge performance bottleneck, I bet datasette would feel a bit zippier if you had Uvicorn serving up some caching-related headers telling the browser it was safe to cache static assets.
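A sketch of what those headers might look like, as a hypothetical helper deciding extra response headers per path (not an existing Datasette or Uvicorn option):

```python
def cache_headers_for(path, max_age=3600):
    """Return extra response headers for cacheable static paths (a sketch)."""
    if path.startswith("/-/static/") or path == "/favicon.ico":
        return {"cache-control": f"max-age={max_age}, public"}
    return {}  # dynamic pages stay uncached
```

In practice this would sit in the ASGI response path, and versioned asset URLs would let max_age be much longer.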

datasette 107914493 issue    
684961449 MDU6SXNzdWU2ODQ5NjE0NDk= 949 Try out CodeMirror SQL hints simonw 9599 closed 0     3 2020-08-24T20:58:21Z 2020-11-01T03:29:53Z 2020-11-01T03:29:48Z OWNER  

It would also be interesting to try out the SQL hint mode, which can autocomplete against tables and columns. This demo shows how to configure that: https://codemirror.net/mode/sql/

Some missing documentation: https://stackoverflow.com/questions/20023381/codemirror-how-add-tables-to-sql-hint
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/948#issuecomment-679355426_

datasette 107914493 issue    
728905098 MDU6SXNzdWU3Mjg5MDUwOTg= 1048 Documentation and unit tests for urls.row() urls.row_blob() methods simonw 9599 open 0     1 2020-10-25T00:13:53Z 2020-10-31T22:32:09Z   OWNER   datasette 107914493 issue    
733805089 MDU6SXNzdWU3MzM4MDUwODk= 1076 Release notes for 0.51 simonw 9599 closed 0   0.51 6026070 0 2020-10-31T20:51:21Z 2020-10-31T22:27:00Z 2020-10-31T22:27:00Z OWNER   datasette 107914493 issue    
728895233 MDU6SXNzdWU3Mjg4OTUyMzM= 1047 A new section in the docs about how Datasette handles BLOB columns simonw 9599 closed 0   0.51 6026070 1 2020-10-24T23:01:02Z 2020-10-31T22:11:25Z 2020-10-31T21:38:05Z OWNER  

Split from #1040, refs #1036.

datasette 107914493 issue    
722758132 MDU6SXNzdWU3MjI3NTgxMzI= 1027 Add documentation on serving Datasette behind a proxy using base_url simonw 9599 closed 0   0.51 6026070 5 2020-10-15T23:46:29Z 2020-10-31T21:14:05Z 2020-10-31T21:14:05Z OWNER  

This can go on this page: https://docs.datasette.io/en/stable/deploying.html

Refs #1023, #865

datasette 107914493 issue    
727627923 MDU6SXNzdWU3Mjc2Mjc5MjM= 1041 extra_js_urls and extra_css_urls should respect base_url setting simonw 9599 closed 0   0.51 6026070 4 2020-10-22T18:34:33Z 2020-10-31T20:49:28Z 2020-10-31T20:48:58Z OWNER   datasette 107914493 issue    
733796942 MDU6SXNzdWU3MzM3OTY5NDI= 1075 PrefixedUrlString mechanism broke everything simonw 9599 closed 0   0.51 6026070 6 2020-10-31T19:58:05Z 2020-10-31T20:48:51Z 2020-10-31T20:48:51Z OWNER  

Added in 7a67bc7a569509d65b3a8661e0ad2c65f0b09166 refs #1026. Lots of tests are failing now.

datasette 107914493 issue    
733499930 MDU6SXNzdWU3MzM0OTk5MzA= 1072 load_template hook doesn't work for include/extends simonw 9599 closed 0   0.51 6026070 20 2020-10-30T20:33:44Z 2020-10-31T20:48:18Z 2020-10-30T22:50:57Z OWNER  

Includes like this one always go to disk, without hitting the load_template plugin hook:

<footer class="ft">{% block footer %}{% include "_footer.html" %}{% endblock %}</footer>
datasette 107914493 issue    
733768037 MDU6SXNzdWU3MzM3NjgwMzc= 1074 latest.datasette.io should include plugins from fixtures simonw 9599 closed 0   0.51 6026070 3 2020-10-31T17:23:23Z 2020-10-31T19:47:47Z 2020-10-31T19:47:47Z OWNER  

It bothers me that these aren't visible in any public demos. Maybe latest.datasette.io should include the my_plugins.py and my_plugins2.py plugins?
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1067#issuecomment-719961701_

datasette 107914493 issue    
722738988 MDU6SXNzdWU3MjI3Mzg5ODg= 1026 How should datasette.client interact with base_url simonw 9599 closed 0   0.51 6026070 5 2020-10-15T23:07:11Z 2020-10-31T19:29:52Z 2020-10-31T19:29:51Z OWNER  

Refs #1023. If Datasette is running with a base_url setting and a plugin calls e.g. datasette.client.get("/-/plugins.json") what should happen?

datasette 107914493 issue    
725743755 MDU6SXNzdWU3MjU3NDM3NTU= 1035 datasette.urls.table(..., format="json") argument simonw 9599 closed 0   0.51 6026070 3 2020-10-20T16:09:34Z 2020-10-31T18:16:43Z 2020-10-31T18:16:43Z OWNER  

That datasette.urls.table("db", "table") + ".json" example is bad because if the table name contains a . it should be ?_format=json instead.

Maybe .table() should have a format="json" option that knows how to do this.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1026#issuecomment-712962517_
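The logic .table(format="json") would need, sketched as a hypothetical helper (not the shipped API): only fall back to ?_format= when a dot in the table name would make the extension ambiguous:

```python
from urllib.parse import quote


def table_url(database, table, format=None):
    """Build /db/table, using ?_format= when a '.' in the name would be ambiguous."""
    path = f"/{quote(database)}/{quote(table)}"
    if format is None:
        return path
    if "." in table:
        return f"{path}?_format={format}"  # my.table.json would mis-parse
    return f"{path}.{format}"
```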

datasette 107914493 issue    
732905360 MDU6SXNzdWU3MzI5MDUzNjA= 1067 Table actions menu on view pages, not on query pages simonw 9599 closed 0   0.51 6026070 6 2020-10-30T05:56:39Z 2020-10-31T17:51:31Z 2020-10-31T17:40:14Z OWNER  

Follow-on from #1066.

datasette 107914493 issue    
733390884 MDU6SXNzdWU3MzMzOTA4ODQ= 1070 load_template() example in documentation showing loading from a database simonw 9599 closed 0   0.51 6026070 1 2020-10-30T17:45:03Z 2020-10-31T16:22:51Z 2020-10-31T16:22:45Z OWNER  

I should include an example in the documentation that shows loading templates from a database table.
_Originally posted by @simonw in https://github.com/simonw/datasette/pull/1069#issuecomment-719664530_

datasette 107914493 issue    
733560417 MDU6SXNzdWU3MzM1NjA0MTc= 1073 Remove load_template plugin hook simonw 9599 closed 0   0.51 6026070 1 2020-10-30T22:51:52Z 2020-10-31T16:22:00Z 2020-10-31T16:22:00Z OWNER  

I couldn't get it working correctly with async (necessary for include/extend to function), and on deeper investigation it appears that I can build something equivalent to what I wanted using the existing prepare_jinja2_environment hook.

I'm going to remove the load_template plugin hook and see if it's possible to build the edit templates extension against prepare_jinja2_environment instead.
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1072#issuecomment-719833744_

datasette 107914493 issue    
733485423 MDU6SXNzdWU3MzM0ODU0MjM= 1071 Messages should be displayed full width simonw 9599 closed 0   0.51 6026070 1 2020-10-30T20:11:35Z 2020-10-30T20:20:02Z 2020-10-30T20:13:05Z OWNER   datasette 107914493 issue    
727802081 MDU6SXNzdWU3Mjc4MDIwODE= 1042 Plugin hook for loading templates simonw 9599 closed 0   0.51 6026070 14 2020-10-23T00:18:39Z 2020-10-30T17:47:21Z 2020-10-30T17:47:20Z OWNER  

This can work with the Jinja template loaders. It would unlock things like storing templates in SQLite.

datasette 107914493 issue    
733303548 MDExOlB1bGxSZXF1ZXN0NTEzMTA2MDI2 1069 load_template() plugin hook simonw 9599 closed 0   0.51 6026070 6 2020-10-30T15:59:45Z 2020-10-30T17:47:20Z 2020-10-30T17:47:19Z OWNER simonw/datasette/pulls/1069

Refs #1042

datasette 107914493 pull    
732939921 MDU6SXNzdWU3MzI5Mzk5MjE= 1068 Default menu links should check a real permission simonw 9599 closed 0   0.51 6026070 5 2020-10-30T07:08:34Z 2020-10-30T15:44:13Z 2020-10-30T15:42:11Z OWNER  


This should check a named permission so that it can be customized by permission plugins.

datasette 107914493 issue    
573755726 MDU6SXNzdWU1NzM3NTU3MjY= 690 Mechanism for plugins to add action menu items for various things simonw 9599 closed 0   0.51 6026070 11 2020-03-02T06:48:36Z 2020-10-30T05:20:43Z 2020-10-30T05:20:42Z OWNER  

Now that we have support for plugins that can write I'm seeing all sorts of places where a plugin might need to add UI to the table page.

Some examples:

  • datasette-configure-fts needs to add a "configure search for this table" link
  • a plugin that lets you render or delete tables needs to add a link or button somewhere
  • existing plugins like datasette-vega and datasette-cluster-map already do this with JavaScript

The challenge here is that multiple plugins may want to do this, so simply overriding templates and populating named blocks doesn't entirely work, as templates may override each other.

datasette 107914493 issue    
732859030 MDU6SXNzdWU3MzI4NTkwMzA= 1066 Table actions menu plus plugin hook simonw 9599 closed 0   0.51 6026070 3 2020-10-30T03:46:54Z 2020-10-30T05:18:36Z 2020-10-30T05:16:50Z OWNER  

For the table actions: attaching it to a cog icon next to the table name could make sense.


This is the column action icon at twice the size, color #666.
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/690#issuecomment-709497595_

datasette 107914493 issue    
732856937 MDExOlB1bGxSZXF1ZXN0NTEyNzM2NzA1 1065 Nav menu plus menu_links() hook simonw 9599 closed 0   0.51 6026070 1 2020-10-30T03:40:18Z 2020-10-30T03:45:17Z 2020-10-30T03:45:16Z OWNER simonw/datasette/pulls/1065

Closes #1064, refs #690.

datasette 107914493 pull    
732798913 MDU6SXNzdWU3MzI3OTg5MTM= 1064 Navigation menu plus plugin hook simonw 9599 closed 0   0.51 6026070 10 2020-10-30T00:49:36Z 2020-10-30T03:45:16Z 2020-10-30T03:45:16Z OWNER   datasette 107914493 issue    
725184645 MDU6SXNzdWU3MjUxODQ2NDU= 1034 Better way of representing binary data in .csv output simonw 9599 closed 0   0.51 6026070 19 2020-10-20T04:28:58Z 2020-10-30T00:11:17Z 2020-10-29T22:47:46Z OWNER  

I just noticed this: https://latest.datasette.io/fixtures/binary_data.csv


There's no good way to represent binary data in a CSV file, but this seems like one of the more-bad options.
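One of the less-bad options discussed is base64-encoding blob values into the CSV cell; a sketch of a per-cell renderer (illustrative, not the approach Datasette shipped):

```python
import base64


def csv_safe_value(value):
    """Render a cell for CSV output: base64-encode bytes, pass other values through."""
    if isinstance(value, bytes):
        return base64.b64encode(value).decode("ascii")
    return value
```

The downside is that consumers can't tell an encoded blob from a string that happens to look like base64 without extra signalling, e.g. a companion column or header.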

datasette 107914493 issue    
732685643 MDU6SXNzdWU3MzI2ODU2NDM= 1063 .csv should link to .blob downloads simonw 9599 closed 0   0.51 6026070 3 2020-10-29T21:45:58Z 2020-10-29T22:47:46Z 2020-10-29T22:47:45Z OWNER  
  • Update .csv output to link to these things (and get that xfail test to pass)
  • <del>Add a .csv?_blob_base64=1 argument that causes them to be output in base64 in the CSV</del>

Moving the CSV work to a separate ticket.
_Originally posted by @simonw in https://github.com/simonw/datasette/pull/1061#issuecomment-719042601_

datasette 107914493 issue    

CREATE TABLE [issues] (
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);