
issues


145 rows where comments = 4 and repo = 107914493 sorted by updated_at descending


type

  • issue 126
  • pull 19

state

  • closed 110
  • open 35

repo

  • datasette 145
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, pull_request, body, repo, type, active_lock_reason, performed_via_github_app, reactions, draft, state_reason
959137143 MDU6SXNzdWU5NTkxMzcxNDM= 1415 feature request: document minimum permissions for service account for cloudrun fgregg 536941 open 0     4 2021-08-03T13:48:43Z 2023-11-05T16:46:59Z   CONTRIBUTOR  

Thanks again for such a powerful project.

For deploying to cloudrun from github actions, I'd like to create a service account with minimal permissions.

It would be great to document the minimum permissions that need to be set in IAM.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1415/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1901768721 PR_kwDOBm6k_c5anSg5 2191 Move `permissions`, `allow` blocks, canned queries and more out of `metadata.yaml` and into `datasette.yaml` asg017 15178711 closed 0     4 2023-09-18T21:21:16Z 2023-10-12T16:16:38Z 2023-10-12T16:16:38Z CONTRIBUTOR simonw/datasette/pulls/2191

The PR moves the following fields from metadata.yaml to datasette.yaml:

  • permissions
  • allow
  • allow_sql
  • queries
  • extra_css_urls
  • extra_js_urls

This is a significant breaking change: users will need to upgrade their metadata.yaml files. But the format and locations are similar to the previous version, so it shouldn't be too difficult to upgrade.

One note: I'm still working on the Configuration docs, specifically the "reference" section. Though it's pretty small, the rest is ready to review.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2191/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1891212159 PR_kwDOBm6k_c5aD33C 2183 `datasette.yaml` plugin support asg017 15178711 closed 0     4 2023-09-11T20:26:04Z 2023-09-13T21:06:25Z 2023-09-13T21:06:25Z CONTRIBUTOR simonw/datasette/pulls/2183

Part of #2093

In #2149 , we ported over "settings.json" into the new datasette.yaml config file, with a top-level "settings" key. This PR ports over plugin configuration into top-level "plugins" key, as well as nested database/table plugin config.

From now on, plugin-related configuration is no longer allowed in metadata.yaml - it must live in datasette.yaml in this new format. This is a pretty significant breaking change. Thankfully, you should be able to copy-paste your legacy plugin key/values into the new datasette.yaml format.

An example of what datasette.yaml would look like with this new plugin config:

```yaml
plugins:
  datasette-my-plugin:
    config_key: value

databases:
  fixtures:
    plugins:
      datasette-my-plugin:
        config_key: fixtures-db-value
    tables:
      students:
        plugins:
          datasette-my-plugin:
            config_key: fixtures-students-table-value
```

As an additional benefit, this now works with the new -s flag:

```bash
datasette --memory -s 'plugins.datasette-my-plugin.config_key' new_value
```

Marked as a "Draft" right now until I add better documentation. We also should have a plan for the next alpha release to document and publicize this change, especially for plugin authors (since their docs will have to change to say datasette.yaml instead of metadata.yaml).


:books: Documentation preview :books:: https://datasette--2183.org.readthedocs.build/en/2183/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2183/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
336464733 MDU6SXNzdWUzMzY0NjQ3MzM= 328 Installation instructions, including how to use the docker image simonw 9599 closed 0     4 2018-06-28T03:59:33Z 2023-09-05T14:10:39Z 2018-06-28T04:02:10Z OWNER  
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/328/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1870672704 PR_kwDOBm6k_c5Y-7Em 2162 Add new `--internal internal.db` option, deprecate legacy `_internal` database asg017 15178711 closed 0     4 2023-08-29T00:05:07Z 2023-08-29T03:24:23Z 2023-08-29T03:24:23Z CONTRIBUTOR simonw/datasette/pulls/2162

refs #2157

This PR adds a new --internal option to datasette serve. If provided, it is the path to a persistent internal database that Datasette core and Datasette plugins can use to store data, as discussed in the proposal issue.

This PR also removes and deprecates the previous in-memory _internal database. Those tables now appear in the internal database with core_ prefixes (e.g. the tables table in _internal is now core_tables in the internal database).

A note on the new core_ tables

However, one important note about those new core_ tables: if a --internal DB is passed in, those core_ tables will persist across multiple Datasette instances. This wasn't the case before, since _internal was always an in-memory database created from scratch.

I tried to create those core_ tables as TEMP tables - after all, there's only ever one internal DB connection at a time, so I figured it would work. But since we use the Database() wrapper for the internal DB, it has two separate connections: a default read-only connection and a write connection that is created when a write operation occurs. That meant the TEMP tables would be created by the write connection but not be visible to the read-only connection.

So I had a brilliant idea: attach an in-memory named database with cache=shared, and create those tables there!

```sql
ATTACH DATABASE 'file:datasette_internal_core?mode=memory&cache=shared' AS core;
```

We'd run this on both the read-only connection and the write connection. That way, those tables would stay in memory, the two connections would share them via the cache=shared feature, and we'd be good to go.
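To illustrate the idea outside of Datasette, here's a minimal standalone sketch of the cache=shared behaviour using Python's sqlite3 module (names are illustrative; this is not the PR's code):

```python
import sqlite3

# Two independent connections attach the same named in-memory database.
uri = "file:datasette_internal_core?mode=memory&cache=shared"
conn_a = sqlite3.connect(uri, uri=True)
conn_b = sqlite3.connect(uri, uri=True)

conn_a.execute("CREATE TABLE core_tables (name TEXT)")
conn_a.execute("INSERT INTO core_tables VALUES ('demo')")
conn_a.commit()

# The second connection sees the data written by the first;
# the database lives for as long as any connection holds it open.
print(conn_b.execute("SELECT name FROM core_tables").fetchall())
# [('demo',)]
```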

However, I couldn't find an easy way to run an ATTACH DATABASE command on the read-only connection.

Using Database() as a wrapper for the internal DB is pretty limiting - it's meant for Datasette "data" databases, where we want multiple readers and possibly 1 write connection at a time. But the internal database doesn't really require that kind of support - I think we could get away with a single read/write connection, but it seemed like too big of a rabbithole to go through now.


:books: Documentation preview :books:: https://datasette--2162.org.readthedocs.build/en/2162/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2162/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1865649347 I_kwDOBm6k_c5vM4zD 2156 datasette -s/--setting option for setting nested configuration options simonw 9599 open 0     4 2023-08-24T18:09:27Z 2023-08-28T19:33:05Z   OWNER  

I've been thinking about what it might look like to allow command-line arguments to be used to define any of the configuration options in datasette.yml, as an alternative, more convenient syntax.

Here's what I've come up with:

```bash
datasette \
  -s settings.sql_time_limit_ms 1000 \
  -s plugins.datasette-auth-tokens.manage_tokens true \
  -s plugins.datasette-auth-tokens.manage_tokens_database tokens \
  mydatabase.db tokens.db
```

Which would be equivalent to datasette.yml containing this:

```yaml
plugins:
  datasette-auth-tokens:
    manage_tokens: true
    manage_tokens_database: tokens
settings:
  sql_time_limit_ms: 1000
```

More details in https://github.com/simonw/datasette/issues/2143#issuecomment-1690792514
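A sketch of how repeated -s key value pairs could fold into that nested structure (a hypothetical helper, not Datasette's implementation; coercing string values to types is ignored here):

```python
def set_nested(config, dotted_key, value):
    # Walk/create intermediate dicts for each dotted segment.
    node = config
    keys = dotted_key.split(".")
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return config

config = {}
set_nested(config, "settings.sql_time_limit_ms", 1000)
set_nested(config, "plugins.datasette-auth-tokens.manage_tokens", True)
print(config)
# {'settings': {'sql_time_limit_ms': 1000},
#  'plugins': {'datasette-auth-tokens': {'manage_tokens': True}}}
```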

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2156/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1863810783 I_kwDOBm6k_c5vF37f 2150 form label { width: 15% } is a bad default simonw 9599 closed 0     4 2023-08-23T18:22:27Z 2023-08-23T18:37:18Z 2023-08-23T18:35:48Z OWNER  

See: - https://github.com/simonw/datasette-configure-fts/issues/14 - https://github.com/simonw/datasette-auth-tokens/issues/12

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2150/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1843710170 I_kwDOBm6k_c5t5Mja 2136 Query view shouldn't return `columns` simonw 9599 closed 0   Datasette 1.0a3 9700784 4 2023-08-09T17:23:57Z 2023-08-09T19:03:04Z 2023-08-09T19:03:04Z OWNER  

I just noticed that https://latest.datasette.io/fixtures/roadside_attraction_characteristics.json?_labels=on&_size=1 returns:

```json
{
  "ok": true,
  "next": "1",
  "rows": [
    {
      "rowid": 1,
      "attraction_id": {
        "value": 1,
        "label": "The Mystery Spot"
      },
      "characteristic_id": {
        "value": 2,
        "label": "Paranormal"
      }
    }
  ],
  "truncated": false
}
```

But https://latest.datasette.io/fixtures.json?sql=select+rowid%2C+attraction_id%2C+characteristic_id+from+roadside_attraction_characteristics+order+by+rowid+limit+1 returns:

```json
{
  "rows": [
    {
      "rowid": 1,
      "attraction_id": 1,
      "characteristic_id": 2
    }
  ],
  "columns": [
    "rowid",
    "attraction_id",
    "characteristic_id"
  ],
  "ok": true,
  "truncated": false
}
```

The columns key in the query response is inconsistent with the table response.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2136/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1822937426 I_kwDOBm6k_c5sp9FS 2111 Implement new /content.json?sql=... simonw 9599 closed 0   Datasette 1.0a3 9700784 4 2023-07-26T18:22:39Z 2023-08-08T02:00:37Z 2023-08-08T02:00:22Z OWNER  

This will be the base that the remaining work builds on top of. Refs: - #2109

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2111/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1355148385 I_kwDOBm6k_c5Qxexh 1796 Research an upgrade to CodeMirror 6 simonw 9599 closed 0     4 2022-08-30T04:27:46Z 2023-07-03T04:58:21Z 2023-07-03T04:58:21Z OWNER  

There are still a bunch of bugs in CodeMirror 5 that affect various mobile browsers - see Datasette Discord report here: https://discord.com/channels/823971286308356157/823971286941302908/1013878624992108645

https://user-images.githubusercontent.com/9599/187349269-7b7c0c8c-3894-4810-82f0-de7c1eb940b3.mp4

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1796/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1686033652 I_kwDOBm6k_c5kftT0 2065 Datasette cannot be installed with Rye simonw 9599 closed 0     4 2023-04-27T03:35:42Z 2023-04-27T05:09:36Z 2023-04-27T05:09:36Z OWNER  

https://github.com/mitsuhiko/rye

I tried this:

rye install datasette

But now:

```
% ~/.rye/shims/datasette
Traceback (most recent call last):
  File "/Users/simon/.rye/shims/datasette", line 5, in <module>
    from datasette.cli import cli
  File "/Users/simon/.rye/tools/datasette/lib/python3.11/site-packages/datasette/cli.py", line 17, in <module>
    from .app import (
  File "/Users/simon/.rye/tools/datasette/lib/python3.11/site-packages/datasette/app.py", line 14, in <module>
    import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
```

I think that's because setuptools is not included in Rye.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2065/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1646734246 I_kwDOBm6k_c5iJyum 2049 Custom SQL queries should use new JSON ?_extra= format simonw 9599 open 0   Datasette 1.0a-next 8755003 4 2023-03-30T00:42:53Z 2023-04-05T23:29:27Z   OWNER  

Related: - #262

I've made the change to the table view, now I need the new format to work for arbitrary SQL queries too.

Note that this incorporates both arbitrary SQL queries and canned queries.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2049/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1590183272 I_kwDOBm6k_c5eyEVo 2027 How to redirect from "/" to a specific db/table dmick 1350673 open 0     4 2023-02-18T03:14:01Z 2023-03-08T04:42:22Z   NONE  

Using nginx to redirect public IP to the local uvicorn server as 'normal'. I can't figure out how to redirect such that '/' results in accessing the one db/table I want to serve; redirecting / to /db/table breaks some of the CSS; fooling with base_url doesn't seem to help. Can someone explain this, if it's possible?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2027/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1515815014 I_kwDOBm6k_c5aWYBm 1973 render_cell plugin hook's row object is not a sqlite.Row cldellow 193185 open 0     4 2023-01-01T20:27:46Z 2023-01-29T00:40:31Z   CONTRIBUTOR  

From https://docs.datasette.io/en/stable/plugin_hooks.html#render-cell-row-value-column-table-database-datasette:

row - sqlite.Row The SQLite row object that the value being rendered is part of

This appears to actually be a CustomRow, but I think that's unrelated to my issue.

I have a table:

```sql
CREATE TABLE IF NOT EXISTS "dss_job_stats" (
  job_id integer not null references dss_job(id) on delete cascade,
  host text not null,
  -- other columns elided as irrelevant
  primary key (job_id, host)
);
```

On datasette 0.63.2, the render_cell hook receives a row value that looks like:

CustomRow([('job_id', {'value': 2, 'label': '2'}), ('host', 'cldellow.com')])

I expected the job_id value to be 2, but it's actually {'value': 2, 'label': '2'}.

I can work around this, but was wondering if this was intended behaviour?
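In the meantime, a defensive workaround sketch for a plugin (assuming the behaviour described above; the column name is taken from this issue's table):

```python
from datasette import hookimpl


@hookimpl
def render_cell(row, value, column, table, database, datasette):
    cell = row["job_id"]
    # With ?_labels=on, foreign key cells arrive as {"value": ..., "label": ...}
    if isinstance(cell, dict) and "value" in cell:
        cell = cell["value"]
    # ... use the plain value; returning None falls back to default rendering
    return None
```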

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1973/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1553615704 I_kwDOBm6k_c5cmktY 2001 Datasette is not compatible with SQLite's strict quoting compilation option gwk 406380 open 0     4 2023-01-23T19:10:07Z 2023-01-25T04:59:58Z   NONE  

I have linked Python3.11 on macOS against recent SQLite that was compiled using -DSQLITE_DQS=0. This option disables interpretation of double-quoted identifiers as string literals, described in the SQLite docs as a "MySQL 3.x misfeature". See https://www.sqlite.org/quirks.html#dblquote for background.

Datasette uses the double-quote syntax in a number of key places, and is thus completely broken in this environment.

My experience was to pip install datasette, then run datasette serve -I my-data.db. When I visit http://127.0.0.1:8001 I get a 500 response.

The error: sqlite3.OperationalError: no such column: geometry_columns

The responsible SQL: 'select 1 from sqlite_master where tbl_name = "geometry_columns"'

I then installed datasette from GitHub master in development mode and changed the offending SQL to use correct quotes: "select 1 from sqlite_master where tbl_name = 'geometry_columns'".
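The difference is easy to demonstrate with Python's sqlite3 module against a stock SQLite build (where the misfeature is still enabled); a minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Correct quoting: single quotes for string literals.
print(conn.execute(
    "select 1 from sqlite_master where tbl_name = 'geometry_columns'"
).fetchall())  # [] - valid query, no matching table

# On a default build this also "works", because the unresolvable
# double-quoted identifier silently degrades to a string literal.
# Compiled with -DSQLITE_DQS=0 it raises:
# sqlite3.OperationalError: no such column: geometry_columns
print(conn.execute(
    'select 1 from sqlite_master where tbl_name = "geometry_columns"'
).fetchall())
```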

With this change, I get a little further, but have the same problem with the first table name in my database (in my case, "Meta"):

```
OperationalError: no such column: Meta
Traceback (most recent call last):
  File "/Users/gwk/external/datasette/datasette/app.py", line 1522, in route_path
    response = await view(request, send)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/views/base.py", line 151, in view
    return await self.dispatch_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/views/base.py", line 105, in dispatch_request
    response = await handler(request)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/views/index.py", line 70, in get
    "fts_table": await db.fts_table(table),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/database.py", line 363, in fts_table
    return await self.execute_fn(lambda conn: detect_fts(conn, table))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/database.py", line 213, in execute_fn
    return await asyncio.get_event_loop().run_in_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/py/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/database.py", line 211, in in_thread
    return fn(conn)
           ^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/database.py", line 363, in <lambda>
    return await self.execute_fn(lambda conn: detect_fts(conn, table))
                                              ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gwk/external/datasette/datasette/utils/__init__.py", line 588, in detect_fts
    rows = conn.execute(detect_fts_sql(table)).fetchall()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
sqlite3.OperationalError: no such column: Meta
INFO: 127.0.0.1:50258 - "GET / HTTP/1.1" 500 Internal Server Error
```

I will try to continue playing with this, but I also hope that the datasette developers will enable this mode in a test environment as I am unlikely to be able to exercise all of the SQL in the codebase, or make a pull request very soon.

Note that the DQS setting compile-time option can be overridden at runtime with calls to the C API:

```c
sqlite3_db_config(db, SQLITE_DBCONFIG_DQS_DDL, 0, (void*)0);
sqlite3_db_config(db, SQLITE_DBCONFIG_DQS_DML, 0, (void*)0);
```

As far as I can tell, sqlite3_db_config is not exposed in Python, but perhaps we could figure out how to invoke it using ctypes.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2001/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1529707837 I_kwDOBm6k_c5bLX09 1988 Reconsider pattern where plugins could break existing template context simonw 9599 open 0   Datasette 1.0 3268330 4 2023-01-11T21:13:43Z 2023-01-11T21:25:05Z   OWNER  

I hadn't run into an issue with plugins like datasette-template-sql interfering with the existing context for other features before! Definitely not a good thing.

Originally posted by @simonw in https://github.com/simonw/datasette-write/issues/6#issuecomment-1379490596

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1988/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1529452371 I_kwDOBm6k_c5bKZdT 1987 installpython3.com is now a spam website simonw 9599 closed 0     4 2023-01-11T17:55:12Z 2023-01-11T18:29:26Z 2023-01-11T18:29:25Z OWNER  

Need to stop linking to it from the docs.

I'll link to https://www.python.org/about/gettingstarted/ instead.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1987/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
806849424 MDU6SXNzdWU4MDY4NDk0MjQ= 1221 Support SSL/TLS directly simonw 9599 closed 0     4 2021-02-12T00:18:29Z 2022-12-18T02:39:04Z 2021-02-12T00:52:18Z OWNER  

This should be pretty easy because Uvicorn supports them already. Need a good mechanism for testing it - https://pypi.org/project/trustme/ looks ideal.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1221/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1384549993 I_kwDOBm6k_c5Sho5p 1818 Setting to turn off table row counts entirely simonw 9599 open 0     4 2022-09-24T06:39:22Z 2022-12-11T02:03:09Z   OWNER  

There are situations - such as loading SQLite files remotely using HTTP range headers - where counting all of the rows in a table should be avoided entirely.

Also, this chunked inefficiency means that I have to hack the URL to not load tables of a database as it seems to try to load the whole database when I click on a database.

I bet that's because Datasette tries to show a count of all of the rows in each table when it shows the list on that page, which triggers a full table scan.

Would be great to have a setting that turns that feature off, which could then be exposed as a query string option for Datasette Lite.

Originally posted by @simonw in https://github.com/simonw/datasette-lite/issues/49#issuecomment-1256880715

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1818/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1198822563 I_kwDOBm6k_c5HdJSj 1706 [feature] immutable mode for a directory, not just individual sqlite file hydrosquall 9020979 open 0     4 2022-04-10T00:50:57Z 2022-12-09T19:11:40Z   CONTRIBUTOR  

Motivation

  • I have a directory of sqlite databases
  • I'd like to use immutable mode when opening them, for better performance (docs)
  • Currently using this flag throws the following error

    IsADirectoryError: [Errno 21] Is a directory: '/name-of-directory'

Proposal

Immutable flag works for both single files and directories

```
datasette -i /folder-of-sqlite-files
```
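Until something like that lands, a wrapper-script sketch (hypothetical, assumes .db filenames) that expands a directory into repeated -i flags:

```python
import subprocess
from pathlib import Path

# Pass every SQLite file in the directory to datasette as immutable.
args = ["datasette"]
for db in sorted(Path("/folder-of-sqlite-files").glob("*.db")):
    args += ["-i", str(db)]
subprocess.run(args)
```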
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1706/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1473659191 I_kwDOBm6k_c5X1kE3 1929 Incorrect link from the API explorer to the JSON API documentation davidbgk 3556 closed 0     4 2022-12-03T02:08:58Z 2022-12-06T19:36:23Z 2022-12-06T19:34:20Z CONTRIBUTOR  

I installed datasette==1.0a1.

When I go to http://127.0.0.1:8001/-/api I have a link: Use this tool to try out the [Datasette API](https://docs.datasette.io/en/1.0a1/json_api.html), but that documentation page does not exist.

I'm not sure where it has to be fixed: should it link to the stable page https://docs.datasette.io/en/stable/json_api.html, the latest one https://docs.datasette.io/en/latest/json_api.html#the-json-write-api, or would it be more appropriate to deploy documentation for the 1.0a1 version?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1929/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1470509936 I_kwDOBm6k_c5XpjNw 1924 Docs for replace:true and ignore:true options for insert API simonw 9599 closed 0   Datasette 1.0a1 7867486 4 2022-12-01T01:33:25Z 2022-12-01T18:15:15Z 2022-12-01T02:08:02Z OWNER  

Equivalent to https://sqlite-utils.datasette.io/en/stable/cli.html#insert-replacing-data
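For context, the options being documented look roughly like this (a sketch based on the 1.0a1 write API shape, not the final docs):

```python
# POST /db/table/-/insert with a JSON body such as:
payload = {
    "rows": [{"id": 1, "name": "New name"}],
    "replace": True,   # replace rows whose primary key already exists
    # "ignore": True,  # ...or silently skip them instead
}
```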

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1924/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1450303205 I_kwDOBm6k_c5Wcd7l 1891 1.0a0 release notes simonw 9599 closed 0   Datasette 1.0a0 8658075 4 2022-11-15T19:58:20Z 2022-11-29T19:23:41Z 2022-11-29T19:23:41Z OWNER  

This release will mainly help preview the new Datasette write API: - #1850

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1891/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1425029275 I_kwDOBm6k_c5U8Dib 1864 Delete a single record from an existing table simonw 9599 closed 0   Datasette 1.0a0 8658075 4 2022-10-27T04:53:22Z 2022-11-29T18:54:04Z 2022-11-29T18:54:04Z OWNER  

API design:

POST /db/table/row-pks/-/delete

Or...

DELETE /db/table/row-pks/-/delete

I'm just going to do POST for the moment, like I did here:

  • #1874

Permission: delete-row

Still needed:

  • [ ] Tests for rowid tables
  • [ ] Tests for compound primary keys
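A sketch of calling the endpoint as designed above, assuming a running instance and a signed write token in TOKEN:

```python
import json
from urllib.request import Request, urlopen

req = Request(
    "http://127.0.0.1:8001/db/table/1/-/delete",
    method="POST",
    headers={"Authorization": "Bearer TOKEN"},
)
print(json.load(urlopen(req)))  # expected: {"ok": True}
```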
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1864/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1456012874 I_kwDOBm6k_c5WyP5K 1905 `publish heroku` failing due to old Python version simonw 9599 closed 0     4 2022-11-19T00:01:45Z 2022-11-19T01:12:05Z 2022-11-19T00:52:29Z OWNER  

Reported on Discord: https://discord.com/channels/823971286308356157/823971286941302908/1042814317118115901

```
-----> Building on the Heroku-22 stack
-----> Determining which buildpack to use for this app
-----> Python app detected
-----> Using Python version specified in runtime.txt
 !     Requested runtime 'python-3.8.10' is not available for this stack (heroku-22).
 !     For supported versions, see: https://devcenter.heroku.com/articles/python-support
 !     Push rejected, failed to compile Python app.

 !     Push failed
 ▸    Build failed
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1905/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1452364777 I_kwDOBm6k_c5WkVPp 1896 Extract logic for resolving a URL to a database / table / row simonw 9599 closed 0   Datasette 1.0a0 8658075 4 2022-11-16T22:25:20Z 2022-11-18T22:57:47Z 2022-11-18T22:56:55Z OWNER  

In trying to write this I realize that there's a lot of duplicated code with delete row, specifically around resolving the incoming URL into a row (or a database or a table).

Since this is so common, I think it's worth extracting the logic out first.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1863#issuecomment-1317755263

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1896/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1452495049 I_kwDOBm6k_c5Wk1DJ 1899 Clicking within the CodeMirror area below the SQL (i.e. when there's only a single line) doesn't cause the editor to get focused bgrins 95570 closed 0     4 2022-11-17T00:29:52Z 2022-11-18T07:28:28Z 2022-11-18T07:20:53Z CONTRIBUTOR  

After the upgrade to 6 (#1893) I noticed this. I think it's because we're doing overflow:hidden to accomplish the CSS resizer.

When there's a single line of SQL there's a gap below that line where clicking doesn't do anything. It should focus at the end of the line.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1899/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1433576351 I_kwDOBm6k_c5VcqOf 1880 Datasette with many and large databases > Memory use amitkoth 525934 open 0     4 2022-11-02T18:10:27Z 2022-11-16T17:50:29Z   NONE  

Datasette maintains an in-memory SQLite database with details of the databases, tables and columns for all of the attached databases.

The above is from the docs ^. There are two problems here - the number of datasette "instances" in a single server/VM and the size of the database itself. We want the opposite of in-memory, including what happens in SQLite - documented in https://www.sqlite.org/inmemorydb.html

From the context in https://github.com/simonw/datasette/issues/1150 - does it mean datasette is memory-bound to the size of the dataset - which might be a deal-breaker for many large-scale use cases?

In an extreme case - let's say a single server had 100 SQLite databases, which would enable 100 "instances" of datasette to run, one per client (e.g. in a SaaS multi-tenant environment). How could we achieve all these goals:

  1. Allow any one of these 100 databases to grow to say 2Tb in size
  2. Have one datasette instance, which connects to 1 of the 100 instances, based on incoming credentials/tenant ID
  3. Minimize memory use entirely - both by datasette and SQLite, such that almost all operations are executed in real-time on-disk with little to no memory consumption per-tenant, or per-database.

Any ideas appreciated - we're looking to use this in a SaaS type of setting - many instances, single server.

@simonw great work on datasette, in general! Possibly related to https://github.com/simonw/datasette/issues/1480 but we don't want to use any kind of serverless infra - this is a long-running VM/server.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1880/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1429030341 I_kwDOBm6k_c5VLUXF 1874 API to drop a table simonw 9599 closed 0   Datasette 1.0a0 8658075 4 2022-10-30T21:55:11Z 2022-11-15T19:59:53Z 2022-11-14T05:45:06Z OWNER  

POST /db/table/-/drop

Require drop-table permission.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1874/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1423364990 I_kwDOBm6k_c5U1tN- 1858 `max_signed_tokens_ttl` setting for a maximum duration on API tokens simonw 9599 closed 0   Datasette 1.0a0 8658075 4 2022-10-26T03:05:53Z 2022-11-15T19:58:52Z 2022-10-27T03:15:05Z OWNER  

It's currently possible to use /-/create-token to create a token that lasts forever.

Some administrators may wish to have a maximum expiry instead. I should support that with a setting.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1858/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
802513359 MDU6SXNzdWU4MDI1MTMzNTk= 1217 Possible to deploy as a python app (for Rstudio connect server)? plpxsk 6165713 open 0     4 2021-02-05T22:21:24Z 2022-11-04T11:37:52Z   NONE  

Is it possible to deploy a datasette application as a python web app?

In my enterprise, I have option to deploy python apps via Rstudio Connect, and I would like to publish a datasette dashboard for sharing.

I welcome any pointers to converting datasette serve into a python app that can be run as something like python datasette.py --my_data.db
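One route (a sketch using the documented Datasette class; the RStudio Connect side is not covered here) is to expose Datasette's ASGI application from a regular Python module:

```python
# app.py - minimal ASGI entrypoint
from datasette.app import Datasette

ds = Datasette(files=["my_data.db"])
app = ds.app()

# Serve with any ASGI server, e.g.: uvicorn app:app
```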

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1217/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1342430983 I_kwDOBm6k_c5QA98H 1786 Adjust height of textarea for no JS case simonw 9599 closed 0     4 2022-08-18T01:15:15Z 2022-10-27T21:50:12Z 2022-08-18T16:06:09Z OWNER  

Datasette Lite: https://lite.datasette.io/?sql=https://gist.githubusercontent.com/simonw/1f8a91123ccefd8844187225b1832d7a/raw/5069075b86aa79358fbab3d4482d1d269077d632/recipes.sql#/data?sql=select+id%2C+name%2C+ingredients%2C+%28%0A++select+json_group_array%28value%29+from+json_each%28ingredients%29%0A++where+value+in+%28select+value+from+json_each%28%3Ap0%29%29%0A%29+as+matching_ingredients%0Afrom+recipes%0Awhere+json_array_length%28matching_ingredients%29+%3E+0%0Aorder+by+json_array_length%28matching_ingredients%29+desc&p0=%5B%22sugar%22%2C+%22cheese%22%5D

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1786/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
377155320 MDU6SXNzdWUzNzcxNTUzMjA= 370 Integration with JupyterLab psychemedia 82988 open 0     4 2018-11-04T13:57:13Z 2022-09-29T08:17:47Z   CONTRIBUTOR  

I just watched a demo video for the JupyterLab Chart Editor which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab, and I think registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue).

JupyterLab extensions can also open files from a dialogue as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe.

This made me wonder about what datasette integration with JupyterLab might do.

For example, by right-clicking on a CSV file (for which there is already a CSV table view) in the file browser, offer a View / Run as datasette file viewer option that will:

  • run the CSV file through csvs-to-sqlite;
  • launch the datasette server and display the datasette view in a JupyterLab panel.

(? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all datasette viewed CSVs and runs on a single port?)

As a freebie, the datasette API would allow you to run efficient SQL queries against the file, e.g. using pandas.read_sql() queries in a notebook in the same space.
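For example, a sketch of that idea against a hypothetical local Datasette instance, using the JSON API's array shape:

```python
import pandas as pd

# ?_shape=array returns a plain JSON array of row objects,
# which pandas can load directly from the URL.
url = (
    "http://127.0.0.1:8001/mydb.json"
    "?sql=select+*+from+mytable+limit+10&_shape=array"
)
df = pd.read_json(url)
```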

Related:

  • JupyterLab extensions docs
  • a cookiecutter for writing JupyterLab extensions using JavaScript
  • a cookiecutter for writing JupyterLab extensions using TypeScript
  • tutorial: Let’s Make an xkcd JupyterLab Extension
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/370/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1374626873 I_kwDOBm6k_c5R7yQ5 1810 Featured table(s) on the homepage simonw 9599 open 0     4 2022-09-15T14:30:49Z 2022-09-15T15:51:25Z   OWNER  

Many Datasette instances mainly exist to serve a single table - for example:

  • https://global-power-plants.datasettes.com/global-power-plants/global-power-plants
  • https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/images

It would be neat if the / homepage of those instances could be configured to highlight that specific table.

Or maybe more than one?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1810/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1359557737 I_kwDOBm6k_c5RCTRp 1798 Parts of YAML file do not work when db name is "off" CharlesNepote 562352 closed 0     4 2022-09-01T22:10:57Z 2022-09-02T00:02:53Z 2022-09-01T23:56:33Z NONE  

I guess this issue is not very important and probably rare.

To reproduce:

  • create and populate a db named off.db
  • in the yaml file, add any kind of information below databases:\n off:
  • the data are not taken into account (because "off" is interpreted as "false")

YAML file:

```yaml
title: Some title
description_html: |-
  <p>This is an experiment.</p>

databases:
  off:
    tables:
      products_from_owners:
        title: products_from_owners*
        description_html: |-
          <p>Description</p>
```

The result for http://xxxx.xxx/-/metadata gives:

```json
{
  "title": "Some title",
  "description_html": "<p>This is an experiment.</p>",
  "databases": {
    "false": {
      "tables": {
        "products_from_owners": {
          "title": "products_from_owners*",
          "description_html": "<p>Description</p>"
        }
      }
    }
  }
}
```

=> see the "false" instead of "off".
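The behaviour is easy to reproduce with PyYAML, which implements YAML 1.1 (where off/on/yes/no parse as booleans); quoting the key avoids it:

```python
import yaml

print(yaml.safe_load("off:\n  tables: {}"))
# {False: {'tables': {}}}

# Quoting keeps the key a string:
print(yaml.safe_load("'off':\n  tables: {}"))
# {'off': {'tables': {}}}
```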

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1798/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
855476501 MDU6SXNzdWU4NTU0NzY1MDE= 1298 improve table horizontal scroll experience mroswell 192568 open 0     4 2021-04-12T01:55:16Z 2022-08-30T21:11:49Z   CONTRIBUTOR  

Wide tables aren't a huge problem if you know to click and drag right. But it's not at all obvious to do that. (it also tends to blue-select any content as it's dragging.) Depending on column widths, public users might entirely miss all the columns to the right.

There is a scrollbar at the bottom of the table, but I'm displaying ALL my records because it's the only way for datasette-vega to make accurate charts. So that bottom scrollbar is likely to be missed. I wonder if some sort of javascript-y mouseover to an arrow might help, similar to those seen in image carousels. Ah: here's a perfect example:

  1. Visit http://google.com
  2. Search for: animals endangered
  3. Note the 'g-right-button' (in the code) that looks like a right-facing caret in a circle.
  4. Click on that and the carousel scrolls right (and 'g-left-button' appears on the left).

Might be tricky to do that on a table, rather than a one-row carousel, but it's worth experimenting with.

Another option is just to put the scrollbars at the top of the table, too.

Meantime, I'm trying to build a button like the "View/hide all columns" one on https://salaries.news.baltimoresun.com/salaries-be494cf/2019+Maryland+state+salaries Might be nice to have that available by default, with settings in the metadata showing which are on by default.

(I saw some other closed issues related to horizontal scrolling, and admit I don't entirely understand them. For instance, the animated gif at https://github.com/simonw/datasette/issues/998#issuecomment-714117534 confuses me. )

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1298/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1318907685 I_kwDOBm6k_c5OnO8l 1773 500 error if sorted by a column not in the ?_col= list simonw 9599 closed 0   Datasette 0.62 8303187 4 2022-07-27T01:20:27Z 2022-08-14T16:06:25Z 2022-08-14T15:44:05Z OWNER  

For example: https://latest.datasette.io/fixtures/sortable?_sort_desc=sortable&_col=sortable_with_nulls

That's ?_sort_desc=sortable&_col=sortable_with_nulls

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1773/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
779156520 MDU6SXNzdWU3NzkxNTY1MjA= 1175 Use structlog for logging simonw 9599 open 0     4 2021-01-05T15:11:36Z 2022-07-26T12:52:10Z   OWNER  

To solve #241 (JSON logging).
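A minimal sketch of what structlog-based JSON log output could look like (the processor choice is illustrative, not a settled design):

```python
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)
log = structlog.get_logger()
log.info("request", path="/fixtures", status=200)
# {"path": "/fixtures", "status": 200, "event": "request", "timestamp": "..."}
```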

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1175/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1237586379 I_kwDOBm6k_c5JxBHL 1742 ?_trace=1 fails with datasette-geojson for some reason simonw 9599 open 0     4 2022-05-16T19:06:05Z 2022-05-16T19:42:13Z   OWNER  

view-source:https://calands.datasettes.com/calands/CPAD_2020a_SuperUnits.geojson?_sort=id&id__exact=4&_labels=on&_trace=1 is showing me a blank page.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1742/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1223459734 I_kwDOBm6k_c5I7IOW 1737 Automated test for Pyodide compatibility simonw 9599 closed 0     4 2022-05-02T23:24:25Z 2022-05-02T23:40:50Z 2022-05-02T23:40:50Z OWNER  

Refs: - #1733

Need something in the test suite such that if Datasette breaks against Pyodide in the future we hear about it.

I'm thinking this is an opportunity to use shot-scraper javascript.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1737/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1065432388 I_kwDOBm6k_c4_gTVE 1534 Maybe return JSON from HTML pages if `Accept: application/json` is sent simonw 9599 closed 0     4 2021-11-28T20:48:09Z 2022-04-27T21:59:34Z 2022-02-02T23:39:33Z OWNER  

Relates to #1533 - and to the work I've been doing on the https://github.com/simonw/datasette-table Web Component.

It would be useful to support users pasting in a URL to a Datasette table or query without first having to add the .json extension themselves - since then other systems could hit that URL with Accept: application/json to get back the JSON representation without first needing to read the Link: header from #1533 to figure out what the URL to that JSON is.

(There is weird logic deep in Datasette that says that you add .json to the path UNLESS the table name itself ends with .json, in which case you add ?_format=json - this is super-confusing).

[Update: I removed that confusing feature here: https://simonwillison.net/2022/Mar/19/weeknotes/]
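With that in place, a client-side sketch is as simple as sending the header (assuming the behaviour described above):

```python
import json
from urllib.request import Request, urlopen

req = Request(
    "https://latest.datasette.io/fixtures/facetable",
    headers={"Accept": "application/json"},
)
data = json.load(urlopen(req))  # JSON back from the HTML URL
```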

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1534/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
340396247 MDU6SXNzdWUzNDAzOTYyNDc= 339 Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way bsilverm 12617395 closed 0     4 2018-07-11T20:38:06Z 2022-03-21T22:22:40Z 2022-03-21T22:22:34Z NONE  

Is it possible to configure the sql_time_limit_ms beyond 60 seconds? It seems queries are still timing out at 60 seconds when sql_time_limit_ms is set to 180000. We have a very large data set and often encounter timeouts when testing new queries from the datasette UI. We are optimizing our database as much as we can, but still may require more than 60 seconds for complex queries.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/339/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1065429936 I_kwDOBm6k_c4_gSuw 1532 Use datasette-table Web Component to guide the design of the JSON API for 1.0 simonw 9599 open 0   Datasette 1.0 3268330 4 2021-11-28T20:37:18Z 2022-03-16T20:13:34Z   OWNER  

I realized that one of the reasons I'm having trouble committing to nailing down the JSON API for 1.0 is that I don't use it much myself - I use the ?_shape=array one quite often, but I don't have any projects that are using the default, more fully-featured API.

As an experiment I built a Web Component for embedding Datasette tables on pages - https://github.com/simonw/datasette-table - and I think it's actually going to be a really useful tool for helping me dog food the v1.0 API design.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1532/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1131295060 I_kwDOBm6k_c5DbjFU 1634 Update Dockerfile generated by `datasette publish` simonw 9599 open 0   Datasette 1.0 3268330 4 2022-02-11T00:07:26Z 2022-03-11T17:38:08Z   OWNER  

The generated Dockerfile currently looks something like this:

```Dockerfile
FROM python:3.8
COPY . /app
WORKDIR /app

ENV DATASETTE_SECRET 'edab49cbc5d5f6f33238f54852037e3fee710821960b73edd2ce743454182ae2'
RUN pip install -U datasette datasette-auth-passwords datasette-tiddlywiki datasette-graphql
RUN datasette inspect fixtures.db other.db --inspect-file inspect-data.json
ENV PORT 8080
EXPOSE 8080
CMD datasette serve --host 0.0.0.0 -i fixtures.db -i other.db --cors --inspect-file inspect-data.json --metadata metadata.json --create --port $PORT /data/*.db
```

This is still on Python 3.8, and it generates a pretty large image compared to the Dockerfile used for https://hub.docker.com/datasetteproject/datasette - https://github.com/simonw/datasette/blob/0.60.2/Dockerfile

Here's the code that generates it: https://github.com/simonw/datasette/blob/7d24fd405f3c60e4c852c5d746c91aa2ba23cf5b/datasette/utils/__init__.py#L389-L400

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1634/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 2,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1154399841 I_kwDOBm6k_c5Ezr5h 1645 Sensible `cache-control` headers for static assets, including those served by plugins curiousleo 697092 open 0   Datasette 1.0 3268330 4 2022-02-28T18:12:03Z 2022-03-08T02:59:29Z   NONE  

What I'm seeing

With default_cache_ttl = 86400, I see the following:

A table view returns Cache-control: max-age=86400, while a static asset returns no Cache-control header.

What I expected to see

I expected the static asset to return a Cache-control header indicating that this response can be cached.

Why this matters

I'm productionising a Datasette deployment right now and was looking into putting it behind a Varnish instance. I was surprised to see requests for static assets being served from Datasette rather than Varnish, this is what led me to look more closely at the response headers.

While Datasette serves those static assets pretty quickly, I don't see why Datasette should serve them. By their nature, static assets like images and JS files are very cacheable, so it should be easy to serve them from a cache like Varnish.

(Note that Varnish can easily be configured to override this header, enabling caching for static assets. But it would be better if this override was not necessary.)

Discussion

It seems clear to me that serving static assets without a Cache-control header is not ideal.

I see two options here:

A. Static assets use the same logic as table / SQL views to set the Cache-control header based on default_cache_ttl.

B. An additional setting for static assets is introduced (default_static_cache_ttl, say).
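In the meantime, a plugin-level workaround sketch using the documented asgi_wrapper hook (the path prefix and TTL here are illustrative):

```python
from datasette import hookimpl


@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def add_static_cache_headers(scope, receive, send):
            async def wrapped_send(event):
                # Append a Cache-control header to static asset responses.
                if (
                    event["type"] == "http.response.start"
                    and scope.get("path", "").startswith("/-/static")
                ):
                    event = dict(event)
                    event["headers"] = list(event.get("headers", [])) + [
                        (b"cache-control", b"max-age=86400")
                    ]
                await send(event)

            await app(scope, receive, wrapped_send)

        return add_static_cache_headers

    return wrap
```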

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1645/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
677272618 MDU6SXNzdWU2NzcyNzI2MTg= 928 Test failures caused by failed attempts to mock pip simonw 9599 closed 0     4 2020-08-11T23:53:18Z 2022-02-23T16:19:47Z 2020-08-12T00:07:49Z OWNER  

Errors like this one:

https://github.com/simonw/datasette/pull/927/checks?check_run_id=973559696

```
2020-08-11T23:36:39.8801334Z =================================== FAILURES ===================================
2020-08-11T23:36:39.8802411Z _________________________________ test_install _________________________________
2020-08-11T23:36:39.8803242Z
2020-08-11T23:36:39.8804935Z thing = <module 'pip._internal.cli' from '/opt/hostedtoolcache/Python/3.8.5/x64/lib/python3.8/site-packages/pip/_internal/cli/__init__.py'>
2020-08-11T23:36:39.8806663Z comp = 'main', import_path = 'pip._internal.cli.main'
2020-08-11T23:36:39.8807696Z
2020-08-11T23:36:39.8808728Z     def _dot_lookup(thing, comp, import_path):
2020-08-11T23:36:39.8810573Z         try:
2020-08-11T23:36:39.8812262Z >           return getattr(thing, comp)
2020-08-11T23:36:39.8817136Z E           AttributeError: module 'pip._internal.cli' has no attribute 'main'
2020-08-11T23:36:39.8843043Z
2020-08-11T23:36:39.8855951Z /opt/hostedtoolcache/Python/3.8.5/x64/lib/python3.8/unittest/mock.py:1215: AttributeError
2020-08-11T23:36:39.8873372Z
2020-08-11T23:36:39.8877803Z During handling of the above exception, another exception occurred:
2020-08-11T23:36:39.8906532Z
2020-08-11T23:36:39.8925767Z     def get_src_prefix():
2020-08-11T23:36:39.8928277Z         # type: () -> str
2020-08-11T23:36:39.8930068Z         if running_under_virtualenv():
2020-08-11T23:36:39.8949721Z             src_prefix = os.path.join(sys.prefix, 'src')
2020-08-11T23:36:39.8951813Z         else:
2020-08-11T23:36:39.8969014Z             # FIXME: keep src in cwd for now (it is not a temporary folder)
2020-08-11T23:36:39.9012110Z             try:
2020-08-11T23:36:39.9013489Z >               src_prefix = os.path.join(os.getcwd(), 'src')
2020-08-11T23:36:39.9014538Z E               FileNotFoundError: [Errno 2] No such file or directory
2020-08-11T23:36:39.9016122Z
2020-08-11T23:36:39.9017617Z /opt/hostedtoolcache/Python/3.8.5/x64/lib/python3.8/site-packages/pip/_internal/locations.py:50: FileNotFoundError
2020-08-11T23:36:39.9018802Z
2020-08-11T23:36:39.9020070Z During handling of the above exception, another exception occurred:
2020-08-11T23:36:39.9020930Z
2020-08-11T23:36:39.9022275Z args = (), keywargs = {}
2020-08-11T23:36:39.9023183Z
2020-08-11T23:36:39.9024077Z     @wraps(func)
2020-08-11T23:36:39.9024984Z     def patched(*args, **keywargs):
2020-08-11T23:36:39.9028770Z >       with self.decoration_helper(patched,
2020-08-11T23:36:39.9031861Z                                    args,
2020-08-11T23:36:39.9038358Z                                    keywargs) as (newargs, newkeywargs):
2020-08-11T23:36:39.9039654Z
2020-08-11T23:36:39.9040566Z /opt/hostedtoolcache/Python/3.8.5/x64/lib/python3.8/unittest/mock.py:1322:
2020-08-11T23:36:39.9041492Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/928/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
335200136 MDU6SXNzdWUzMzUyMDAxMzY= 327 Explore if SquashFS can be used to shrink size of packaged Docker containers simonw 9599 open 0     4 2018-06-24T18:15:16Z 2022-02-17T23:37:24Z   OWNER  

Inspired by this article: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html#sqlite-database-indexed--squashed

https://en.wikipedia.org/wiki/SquashFS is "a compressed read-only file system for Linux" - which means it could be a really nice fit for Datasette and its read-only SQLite databases.

It would be interesting to explore a Dockerfile recipe that used SquashFS to compress the SQLite database file that was bundled up by datasette package and friends.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/327/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1065431383 I_kwDOBm6k_c4_gTFX 1533 Add `Link: rel="alternate"` header pointing to JSON for a table/query simonw 9599 closed 0   Datasette 1.0 3268330 4 2021-11-28T20:43:25Z 2022-02-02T07:56:51Z 2022-02-02T07:49:33Z OWNER  

Originally explored in https://github.com/simonw/datasette-notebook/issues/2#issuecomment-980789406 - I wanted an efficient way to scan a list of URLs and figure out which if any of those corresponded to Datasette tables, canned queries or SQL output that could be represented as a table on a page.

It looks like a neat way to do that is with Link: header like this:

Link: http://127.0.0.1:8058/fixtures/compound_three_primary_keys.json; rel="alternate"; type="application/datasette+json"

I can put a <link href=... in the page header too.
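A scanning sketch along those lines, with deliberately naive header parsing (a real client should use a proper Link header parser):

```python
from urllib.request import Request, urlopen

resp = urlopen(Request(
    "http://127.0.0.1:8058/fixtures/compound_three_primary_keys",
    method="HEAD",
))
link = resp.headers.get("Link", "")
if "application/datasette+json" in link:
    # The first ;-separated component carries the JSON URL.
    json_url = link.split(";")[0].strip("<> ")
    print(json_url)
```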

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1533/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
752966476 MDU6SXNzdWU3NTI5NjY0NzY= 1114 --load-extension=spatialite not working with datasetteproject/datasette docker image danp 2182 closed 0     4 2020-11-29T17:35:20Z 2022-01-20T21:29:42Z 2020-11-29T17:37:45Z CONTRIBUTOR  

https://github.com/simonw/datasette/commit/6aa5886379dd9017215904fb28567b80018902f9 added the --load-extension=spatialite shortcut looking for the extension in these places:

https://github.com/simonw/datasette/blob/12877d7a48e2aa28bb5e780f929a218f7265d849/datasette/utils/__init__.py#L56-L60

However, in the datasetteproject/datasette docker image the file is at /usr/local/lib/mod_spatialite.so.

This results in the example command here failing:

```
% docker run --rm -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt/data.db --load-extension=spatialite
Error: Could not find SpatiaLite extension
```

But it does work when given an explicit path:

```
% docker run --rm -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt/data.db --load-extension=/usr/local/lib/mod_spatialite.so
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
...
```

Perhaps SPATIALITE_PATHS should include /usr/local/lib/mod_spatialite.so?
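The lookup amounts to probing a list of candidate paths; a standalone sketch (the candidate list is illustrative, including the Docker image location reported above):

```python
import sqlite3

CANDIDATE_PATHS = [
    "/usr/lib/x86_64-linux-gnu/mod_spatialite.so",
    "/usr/local/lib/mod_spatialite.so",  # datasetteproject/datasette image
]

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
for path in CANDIDATE_PATHS:
    try:
        conn.load_extension(path)
        print("Loaded SpatiaLite from", path)
        break
    except sqlite3.OperationalError:
        continue
```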

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1114/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
838382890 MDU6SXNzdWU4MzgzODI4OTA= 1273 Refresh SpatiaLite documentation simonw 9599 open 0     4 2021-03-23T06:05:55Z 2022-01-20T21:28:50Z   OWNER  

https://docs.datasette.io/en/0.55/spatialite.html was written before I had tools like geojson-to-sqlite and shapefile-to-sqlite.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1273/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1102484126 I_kwDOBm6k_c5BtpKe 1595 Release notes for 0.60 simonw 9599 closed 0   Datasette 0.60 7571612 4 2022-01-13T22:23:14Z 2022-01-14T01:37:39Z 2022-01-14T01:37:39Z OWNER     datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1595/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1087919372 I_kwDOBm6k_c5A2FUM 1578 Confirm if documented nginx proxy config works for row pages with escaped characters in their primary key simonw 9599 open 0     4 2021-12-23T18:27:59Z 2021-12-24T21:33:19Z   OWNER  

Found this while working on https://github.com/simonw/datasette-tiddlywiki

Then clicking on /tiddlywiki/tiddlers/%24%3A%2FDefaultTiddlers returns a 404.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1578/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
781262510 MDU6SXNzdWU3ODEyNjI1MTA= 1181 Certain database names results in 404: "Database not found: None" jieter 1470389 closed 0   Datasette 0.54 6346396 4 2021-01-07T12:01:16Z 2021-12-21T18:25:15Z 2021-01-25T05:13:19Z NONE  

I have a file named test-database (1).sqlite. When requesting the home route /, I see datasette is able to read it correctly:

However, if I click any of the links, datasette replies with: Error 404 Database not found: None

It seems the hash is crucial, as renaming the file to database (1).sqlite makes the error go away.

This line checks for a single dash: https://github.com/simonw/datasette/blob/97fb10c17dd007a275ab743742e93e932335ad67/datasette/views/base.py#L184
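A toy illustration (not the actual Datasette code) of why splitting on the last dash to separate a hash mishandles names that contain a dash of their own:

```python
# Hypothetical sketch: hash extraction via rsplit breaks on dashed names
name = "test-database (1)"
db_name, hash_part = name.rsplit("-", 1)
print(db_name, "|", hash_part)  # -> test | database (1)
```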

``` $ datasette test-database\ (1).sqlite INFO: Started server process [68314] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) INFO: 127.0.0.1:54043 - "GET /favicon.ico HTTP/1.1" 200 OK INFO: 127.0.0.1:54043 - "GET / HTTP/1.1" 200 OK ... INFO: 127.0.0.1:54044 - "GET /favicon.ico HTTP/1.1" 200 OK INFO: 127.0.0.1:54044 - "GET /test-database (1) HTTP/1.1" 404 Not Found

Version: $ datasette --version datasette, version 0.53 ```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1181/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1083246400 PR_kwDOBm6k_c4wAMK8 1562 Update janus requirement from <0.8,>=0.6.2 to >=0.6.2,<1.1 dependabot[bot] 49699333 closed 0     4 2021-12-17T13:11:10Z 2021-12-17T23:08:29Z 2021-12-17T23:08:28Z CONTRIBUTOR simonw/datasette/pulls/1562

Updates the requirements on janus to permit the latest version.

Release notes

Sourced from janus's releases.

janus 1.0.0 release

  • Dropped Python 3.6 support
  • Janus is marked as stable, no API changes were made for years
Changelog

Sourced from janus's changelog.

1.0.0 (2021-12-17)

  • Drop Python 3.6 support

0.7.0 (2021-11-24)

  • Add SyncQueue and AsyncQueue Protocols to provide type hints for sync and async queues #374

0.6.2 (2021-10-24)

  • Fix Python 3.10 compatibility #358

0.6.1 (2020-10-26)

  • Raise RuntimeError on queue.join() after queue closing. #295

  • Replace timeout type from Optional[int] to Optional[float] #267

0.6.0 (2020-10-10)

  • Drop Python 3.5, the minimal supported version is Python 3.6

  • Support Python 3.9

  • Reformat with black

0.5.0 (2020-04-23)

  • Remove explicit loop arguments and forbid creating queues outside event loops #246

0.4.0 (2018-07-28)

  • Add py.typed macro #89

  • Drop python 3.4 support and fix minimal version python3.5.3 #88

  • Add property with that indicates if queue is closed #86

0.3.2 (2018-07-06)

  • Fixed python 3.7 support #97

... (truncated)
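For context, a minimal sketch of what janus provides (a paired sync/async queue), written against the 1.0 API this PR permits:

```python
import asyncio
import janus

async def main():
    queue = janus.Queue()             # must be created inside a running event loop
    queue.sync_q.put("hello")         # synchronous side, safe to call from threads
    print(await queue.async_q.get())  # asynchronous side, for asyncio code
    queue.close()
    await queue.wait_closed()

asyncio.run(main())
```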

Commits
  • 0783f9b Fix coverage upload
  • 41c49ba Make deployment only if checks are green
  • ec94b35 Fix CI again
  • 2303208 Fix CI
  • dff5078 Bump to 1.0.0
  • 3421545 Bump mypy from 0.910 to 0.920 (#384)
  • 56b2d1d Bump black from 21.11b1 to 21.12b0 (#383)
  • 883e82b Update README.rst
  • 2e30d8a Bump coverage from 6.1.2 to 6.2 (#382)
  • 7b72d85 Bump to 0.7
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1562/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
863884805 MDU6SXNzdWU4NjM4ODQ4MDU= 1304 Document how to send multiple values for "Named parameters" rayvoelker 9308268 open 0     4 2021-04-21T13:19:06Z 2021-12-08T03:23:14Z   NONE  

https://docs.datasette.io/en/stable/sql_queries.html#named-parameters

I thought that I had seen an example of how to do this example below, but I can't seem to find it

sql select * from bib where bib.bib_record_num in (1008088,1008092)

sql select * from bib where bib.bib_record_num in (:bib_record_numbers)

https://ilsweb.cincinnatilibrary.org/collection-analysis/current_collection-204d100?sql=select%0D%0A++*%0D%0Afrom%0D%0A++bib%0D%0Awhere%0D%0A++bib.bib_record_num+in+%28%3Abib_record_numbers%29&bib_record_numbers=1008088%2C1008092

Or, maybe this isn't a fully supported feature.
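Named parameters bind a single value each, so one workaround (a sketch; the database filename is illustrative) is to expand the list into one parameter per item before running the query:

```python
import sqlite3

values = "1008088,1008092".split(",")
params = {f"p{i}": v for i, v in enumerate(values)}
placeholders = ", ".join(f":{name}" for name in params)
sql = f"select * from bib where bib.bib_record_num in ({placeholders})"

conn = sqlite3.connect("current_collection.db")  # illustrative local copy
rows = conn.execute(sql, params).fetchall()
```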

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1304/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1059219106 I_kwDOBm6k_c4_Imai 1524 Improve Apache proxy documentation, link to demo simonw 9599 closed 0     4 2021-11-20T20:03:14Z 2021-11-20T23:34:03Z 2021-11-20T23:34:03Z OWNER  

The latest demo is now live at https://datasette-apache-proxy-demo.fly.dev/prefix/fixtures/sortable?_facet=pk2

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1519#issuecomment-974697824

I'm going to put out 0.59.3 bugfix release with this, but I'd like to first improve the documentation on https://docs.datasette.io/en/stable/deploying.html#apache-proxy-configuration to highlight the new demo.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1524/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
459590021 MDU6SXNzdWU0NTk1OTAwMjE= 519 Decide what goes into Datasette 1.0 simonw 9599 closed 0   Datasette 1.0 3268330 4 2019-06-23T15:47:41Z 2021-11-15T23:26:11Z 2021-11-15T23:26:11Z OWNER  

Datasette ASGI #272 is a big part of it... but 1.0 will generally be an indicator that Datasette is a stable platform for developers to write plugins and custom templates against. So lots to think about.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/519/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
845794436 MDU6SXNzdWU4NDU3OTQ0MzY= 1284 Feature or Documentation Request: Individual table as home page template mroswell 192568 open 0     4 2021-03-31T03:56:17Z 2021-11-04T03:15:01Z   CONTRIBUTOR  

It would be great to have a sample showing how to move a single database that has a single table, to the index page. I'm trying it now, and find there is a real depth of Datasette and Python understanding that's required to be successful.

I've got all the basic jinja concepts down... variables, template control structures, template inheritance, template overrides, css, html, the --template-dir and --static arguments, etc.

But copying the table.html file to index.html doesn't work. There are undocumented functions and filters... I can figure some of them out (yay, url_builder.py and utils/__init__.py!) but it's a slog better handled by a much stronger Python developer.

One sample would make a world of difference. The ideal form of this documentation would be a diff between the default table.html and how that would look if essentially moved to index.html. The use case is for everyone who wants to create a public-facing website to explore a single table at the root directory. (Maybe a second bit of documentation for people who have a single database with multiple tables.)

(Hmm... might be cool to have a setting for that, where it happens automagically! If only one table, then the home page is at the table level. If only one database, then the home page is at the database level... as an option.)

I suppose I could ignore this, and somehow do this in the DNS settings once I hook up Vercel to a domain name, maybe.. and remove the breadcrumbs in table.html... but for now, a documentation request in the form of a diff... for viewing a single table (or a single database) at the root.

(Actually, there's probably room for a whole expanded section on templates. Noticed some nice table metadata in one of the datasette examples, for instance... Hmm... maybe a whole library of solutions in one place... maybe a documentation hackathon! If that's of interest, of course it's a separate issue. )

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1284/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
995098231 MDU6SXNzdWU5OTUwOTgyMzE= 1470 ?_sort=rowid with _next= returns error eigenfoo 19851673 closed 0     4 2021-09-13T16:36:15Z 2021-10-18T19:30:15Z 2021-10-10T01:15:03Z NONE  

For example:

  • Go to https://cryptics.eigenfoo.xyz/clues/clues?_next=100 (this is the second page of results in a Datasette site)
  • Search anything using the FTS search bar. For example, searching for hello will take you to https://cryptics.eigenfoo.xyz/clues/clues?_search=hello&_sort=rowid&_next=100
  • A 500 Error: list index out of range is raised.

This is because the search URL includes the &_next=100 query parameter, carried over from the page where the FTS search was run. However, there isn't a second page in the search results, so a list index out of range error is raised. You can confirm that removing this parameter from the URL returns the appropriate search results.

The FTS search request should strip any _next query parameter.
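A sketch of that fix in isolation (the function name is illustrative):

```python
from urllib.parse import parse_qsl, urlencode

def strip_next(query_string):
    # Drop any stale _next parameter before building the FTS search URL
    pairs = [(k, v) for k, v in parse_qsl(query_string) if k != "_next"]
    return urlencode(pairs)

print(strip_next("_search=hello&_sort=rowid&_next=100"))
# -> _search=hello&_sort=rowid
```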


bash datasette, version 0.58.1 sqlite-utils, version 3.17

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1470/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
268469569 MDU6SXNzdWUyNjg0Njk1Njk= 39 Protect against malicious SQL that causes damage even though our DB is immutable simonw 9599 closed 0   Ship first public release 2857392 4 2017-10-25T16:44:27Z 2021-08-17T23:52:07Z 2017-11-05T02:53:47Z OWNER  

I’m currently operating under the assumption that it’s safe to allow arbitrary SQL statements because we are dealing with an immutable database. But this might not be the case - there are some pretty weird SQLite language extensions (ATTACH, PRAGMA etc) and I’m not certain they cannot be used to break things in a way that would affect future requests to the API.

Solution: provide a “safe mode” option which disables the ?sql= mechanism. This still leaves the URL filter lookups, so I need to make sure that those are “safe”.

In the future I may also implement a whitelist option where datasets can be configured to only allow specific filters against specific columns.
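A deliberately naive sketch of such a check; real SQL validation needs a proper parser rather than keyword matching:

```python
import re

DISALLOWED = re.compile(r"\b(ATTACH|PRAGMA)\b", re.IGNORECASE)

def is_allowed(sql):
    # Only permit statements that start with SELECT and avoid the
    # known-dangerous keywords mentioned above
    return sql.lstrip().upper().startswith("SELECT") and not DISALLOWED.search(sql)

print(is_allowed("select * from compounds"))     # True
print(is_allowed("PRAGMA writable_schema = 1"))  # False
```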

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/39/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
963527045 MDU6SXNzdWU5NjM1MjcwNDU= 1424 Document exceptions that can be raised by db.execute() and friends simonw 9599 open 0     4 2021-08-08T22:23:25Z 2021-08-08T22:27:31Z   OWNER  

Not currently covered here: https://docs.datasette.io/en/stable/internals.html#await-db-execute-sql
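A sketch of the kind of handling such documentation would describe; the QueryInterrupted import path is an assumption, and the list of exceptions is illustrative:

```python
import sqlite3
from datasette.utils import QueryInterrupted  # assumed import path

async def safe_execute(db, sql):
    try:
        return await db.execute(sql)
    except QueryInterrupted:
        # Query exceeded the configured time limit
        return None
    except sqlite3.OperationalError as ex:
        # e.g. a syntax error or missing table in the supplied SQL
        raise ValueError(f"bad SQL: {ex}")
```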

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1424/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
961367843 MDU6SXNzdWU5NjEzNjc4NDM= 1422 Ability to default to hiding the SQL for a canned query simonw 9599 closed 0     4 2021-08-05T02:51:39Z 2021-08-07T05:32:29Z 2021-08-07T05:32:29Z OWNER  

I'm working on a project with some HUGE (400+ lines of SQL) canned queries right now.

Any time you land on the canned query page you have to scroll down a long distance to get to the results!

Would be useful to be able to default to https://latest.datasette.io/fixtures/magic_parameters?_hide_sql=1 without needing the parameter.
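The shape this eventually took was a per-query hide_sql option in metadata; a sketch, built as a Python dict for illustration with a stand-in query:

```python
import json

metadata = {
    "databases": {
        "fixtures": {
            "queries": {
                "magic_parameters": {
                    "sql": "select :num * 2",  # stand-in for the 400-line query
                    "hide_sql": True,          # collapse the SQL by default
                }
            }
        }
    }
}
print(json.dumps(metadata, indent=2))
```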

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1422/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
855446829 MDExOlB1bGxSZXF1ZXN0NjEzMTc4OTY4 1296 Dockerfile: use Ubuntu 20.10 as base tmcl-it 82332573 open 0     4 2021-04-12T00:23:32Z 2021-07-20T08:52:13Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1296

This PR changes the main Dockerfile to use ubuntu:20.10 as base image instead of python:3.9.2-slim-buster (itself based on debian:buster-slim).

The Dockerfile is essentially the one from https://github.com/simonw/datasette/issues/1249#issuecomment-803698983 with some additional cleanups to slim it down.

This fixes a couple of issues: 1. The SQLite version in Debian Buster (3.27) doesn't support generated columns 2. Installing SpatiaLite from the Debian sid repositories has the side effect of also installing updates to libc and libstdc++ from sid.

As a bonus, the Docker image becomes smaller:

$ docker image ls
REPOSITORY                   TAG          IMAGE ID       CREATED       SIZE
datasette                    0.56-ubuntu  f7aca255140a   5 hours ago   212MB
datasetteproject/datasette   0.56         efb3b282f390   13 days ago   258MB

Reproduction of the first issue

``` $ curl -O https://latest.datasette.io/fixtures.db % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 260k 0 260k 0 0 489k 0 --:--:-- --:--:-- --:--:-- 489k

$ docker run -v `pwd`:/mnt datasetteproject/datasette:0.56 datasette /mnt/fixtures.db Traceback (most recent call last): File "/usr/local/bin/datasette", line 8, in <module> sys.exit(cli()) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 544, in serve asyncio.get_event_loop().run_until_complete(check_databases(ds)) File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete return future.result() File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 584, in check_databases await database.execute_fn(check_connection) File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 155, in execute_fn return await asyncio.get_event_loop().run_in_executor( File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run result = self.fn(*self.args, **self.kwargs) File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 153, in in_thread return fn(conn) File "/usr/local/lib/python3.9/site-packages/datasette/utils/__init__.py", line 892, in check_connection for r in conn.execute( sqlite3.DatabaseError: malformed database schema (generated_columns) - near "AS": syntax error ```

Here is the SQLite version:

``` $ docker run -v `pwd`:/mnt -it datasetteproject/datasette:0.56 /bin/bash root@d9220d3b95dd:/# python3 Python 3.9.2 (default, Mar 27 2021, 02:50:26) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information.

>>> import sqlite3 >>> sqlite3.version '2.6.0' ```
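Note that sqlite3.version above is the DB-API module version, not the library itself; checking for generated-column support (added in SQLite 3.31) looks like this:

```python
import sqlite3

print(sqlite3.version)              # DB-API module version, e.g. '2.6.0'
print(sqlite3.sqlite_version)       # the actual SQLite library version
print(sqlite3.sqlite_version_info >= (3, 31, 0))  # generated columns need 3.31+
```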

Reproduction of the second issue

$ docker build . -t datasette --build-arg VERSION=0.55 [...snip...] The following packages will be upgraded: libc-bin libc6 libstdc++6 [...snip...] Unpacking libc6:amd64 (2.31-11) over (2.28-10) ... [...snip...] Unpacking libstdc++6:amd64 (10.2.1-6) over (8.3.0-6) ... [...snip...]

Both libc and libstdc++ are backwards compatible, so the image still works, but it will result in a combination of libraries and Python versions that exists only in the Datasette image, so it's likely untested. In addition, since Debian sid is an always-changing rolling-release, the versions of libc, libstdc++, Spatialite, and their dependencies change frequently, so the library versions in the Datasette image will depend on the day when it was built.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1296/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
944870799 MDU6SXNzdWU5NDQ4NzA3OTk= 1394 Big performance boost on faceting: skip the inner order by simonw 9599 closed 0     4 2021-07-14T23:32:29Z 2021-07-16T02:23:32Z 2021-07-15T00:05:50Z OWNER  

I just noticed something that could make for a huge performance improvement in faceting.

The default query used by Datasette when faceting looks like this: sql select country_long, count(*) from ( select * from [global-power-plants] order by rowid ) where country_long is not null group by country_long order by count(*) desc Here it takes 53ms: https://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++country_long%2C%0D%0A++count%28%29%0D%0Afrom+%28%0D%0A++select++from+%5Bglobal-power-plants%5D+order+by+rowid%0D%0A%29%0D%0Awhere%0D%0A++country_long+is+not+null%0D%0Agroup+by%0D%0A++country_long%0D%0Aorder+by%0D%0A++count%28*%29+desc

Note that there's an order by rowid in there which isn't necessary - the order of that inner query doesn't matter since we're grouping and counting.

I had assumed SQLite would optimize this away - but it turns out it doesn't! Consider this version of the query, with that pointless order by removed: select country_long, count(*) from ( select * from [global-power-plants] ) where country_long is not null group by country_long order by count(*) desc https://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++country_long%2C%0D%0A++count%28%29%0D%0Afrom+%28%0D%0A++select++from+%5Bglobal-power-plants%5D%0D%0A%29%0D%0Awhere%0D%0A++country_long+is+not+null%0D%0Agroup+by%0D%0A++country_long%0D%0Aorder+by%0D%0A++count%28*%29+desc runs in 7.2ms!

I tried this optimization on a table with 2.5m rows in it - without the optimization it took 5 seconds, with the optimization it took 450ms. So this is a very significant improvement!
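A rough way to reproduce the comparison locally, assuming a downloaded copy of the database:

```python
import sqlite3
import time

conn = sqlite3.connect("global-power-plants.db")  # assumed local copy

def timed(sql):
    start = time.perf_counter()
    conn.execute(sql).fetchall()
    return (time.perf_counter() - start) * 1000

inner = "select * from [global-power-plants]"
for variant in (inner + " order by rowid", inner):
    sql = (
        f"select country_long, count(*) from ({variant}) "
        "where country_long is not null "
        "group by country_long order by count(*) desc"
    )
    print(f"{timed(sql):.1f}ms")  # with, then without, the inner order by
```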

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1394/reactions",
    "total_count": 2,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
466996584 MDExOlB1bGxSZXF1ZXN0Mjk2NzM1MzIw 557 Get tests running on Windows using Travis CI simonw 9599 closed 0     4 2019-07-11T16:36:57Z 2021-07-10T23:39:48Z 2021-07-10T23:39:48Z OWNER simonw/datasette/pulls/557

Refs #511

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/557/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
756876238 MDExOlB1bGxSZXF1ZXN0NTMyMzQ4OTE5 1130 Fix footer not sticking to bottom in short pages abdusco 3243482 open 0     4 2020-12-04T07:29:01Z 2021-06-15T13:27:48Z   CONTRIBUTOR simonw/datasette/pulls/1130

Fixes https://github.com/simonw/datasette/issues/1129

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1130/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
912485040 MDU6SXNzdWU5MTI0ODUwNDA= 1361 Intermittent CI failure: restore_working_directory FileNotFoundError simonw 9599 closed 0     4 2021-06-05T22:48:13Z 2021-06-05T23:16:24Z 2021-06-05T23:16:24Z OWNER  

e.g. in https://github.com/simonw/datasette/runs/2754772233 - this is an intermittent error: ``` _ ERROR at setup of test_hook_register_routes_render_message __ [gw0] linux -- Python 3.8.10 /opt/hostedtoolcache/Python/3.8.10/x64/bin/python

tmpdir = local('/tmp/pytest-of-runner/pytest-0/popen-gw0/test_hook_register_routes_rend0') request = <SubRequest 'restore_working_directory' for \<Function test_hook_register_routes_render_message>>

@pytest.fixture
def restore_working_directory(tmpdir, request):
  previous_cwd = os.getcwd()

E FileNotFoundError: [Errno 2] No such file or directory ```
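A sketch of a more defensive version of the fixture, assuming the failure happens when os.getcwd() runs while the previous test's already-deleted temp directory is still the working directory:

```python
import os
import pytest

@pytest.fixture
def restore_working_directory(tmpdir, request):
    try:
        previous_cwd = os.getcwd()
    except FileNotFoundError:
        # The current directory was deleted by an earlier test;
        # fall back to a directory that definitely exists
        previous_cwd = str(tmpdir)
    tmpdir.chdir()
    request.addfinalizer(lambda: os.chdir(previous_cwd))
```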

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1361/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
864979486 MDExOlB1bGxSZXF1ZXN0NjIxMTE3OTc4 1306 Avoid error sorting by relationships if related tables are not allowed gfrmin 416374 closed 0     4 2021-04-22T13:53:17Z 2021-06-02T04:27:00Z 2021-06-02T04:25:28Z CONTRIBUTOR simonw/datasette/pulls/1306

Refs #1305

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1306/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
903978133 MDU6SXNzdWU5MDM5NzgxMzM= 1343 Figure out how to publish alpha/beta releases to Docker Hub simonw 9599 closed 0     4 2021-05-27T16:42:17Z 2021-05-27T16:46:37Z 2021-05-27T16:45:41Z OWNER  

It looks like all I need to do to ship an alpha version to Docker Hub is NOT point the latest tag at it after it goes live: https://github.com/simonw/datasette/blob/1a8972f9c012cd22b088c6b70661a9c3d3847853/.github/workflows/publish.yml#L75-L77

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1319#issuecomment-849780481

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1343/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
884952179 MDU6SXNzdWU4ODQ5NTIxNzk= 1320 Can't use apt-get in Dockerfile when using datasetteproj/datasette as base brandonrobertz 2670795 closed 0     4 2021-05-10T19:37:27Z 2021-05-24T18:15:56Z 2021-05-24T18:07:08Z CONTRIBUTOR  

The datasette base Docker image is super convenient, but there's one problem: if any of the plugins you install require additional system dependencies (e.g., xz, git, curl) then any attempt to use apt in said Dockerfile results in an explosion:

``` $ docker-compose build Building server [+] Building 9.9s (7/9) => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 666B 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 34B 0.0s => [internal] load metadata for docker.io/datasetteproject/datasette:latest 0.6s => [base 1/4] FROM docker.io/datasetteproject/datasette@sha256:2250d0fbe57b1d615a8d6df0c9d43deb9533532e00bac68854773d8ff8dcf00a 0.0s => [internal] load build context 1.8s => => transferring context: 2.44MB 1.8s => CACHED [base 2/4] WORKDIR /datasette 0.0s => ERROR [base 3/4] RUN apt-get update && apt-get install --no-install-recommends -y git ssh curl xz-utils 9.2s


[base 3/4] RUN apt-get update && apt-get install --no-install-recommends -y git ssh curl xz-utils:
#6 0.446 Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
#6 0.449 Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
#6 0.459 Get:3 http://httpredir.debian.org/debian sid InRelease [157 kB]
#6 0.784 Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
#6 0.790 Get:5 http://httpredir.debian.org/debian sid/main amd64 Packages [8626 kB]
#6 1.003 Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
#6 1.180 Get:7 http://security.debian.org/debian-security buster/updates/main amd64 Packages [286 kB]
#6 7.095 Get:8 http://deb.debian.org/debian buster-updates/main amd64 Packages [10.9 kB]
#6 8.058 Fetched 17.2 MB in 8s (2243 kB/s)
#6 8.058 Reading package lists...
#6 9.166 E: flAbsPath on /var/lib/dpkg/status failed - realpath (2: No such file or directory)
#6 9.166 E: Could not open file - open (2: No such file or directory)
#6 9.166 E: Problem opening
#6 9.166 E: The package lists or status file could not be parsed or opened.

```

The problem seems to be from completely wiping out /var/lib/dpkg in the upstream Dockerfile:

https://github.com/simonw/datasette/blob/1b697539f5b53cec3fe13c0f4ada13ba655c88c7/Dockerfile#L18

I've tested without removing the directory and apt works as expected.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1320/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
273775212 MDU6SXNzdWUyNzM3NzUyMTI= 88 Add NHS England Hospitals example to wiki tomdyson 15543 closed 0     4 2017-11-14T12:29:10Z 2021-03-22T23:46:36Z 2017-11-14T22:54:06Z CONTRIBUTOR  

https://nhs-england-hospitals.now.sh

and an associated map visualisation:

http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/

Datasette is wonderful!

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/88/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
797651831 MDU6SXNzdWU3OTc2NTE4MzE= 1212 Tests are very slow. kbaikov 4488943 closed 0     4 2021-01-31T08:06:16Z 2021-02-19T22:54:13Z 2021-02-19T22:54:13Z CONTRIBUTOR  

While working on my PR I noticed that the tests are very slow.

The plain pytest run took about 37 minutes for me. However I could shave off about 10 minutes from that by using pytest-xdist to parallelize execution. pytest -n 8 runs in only 28 minutes on my machine.

I can create a PR to mention that in your documentation. This would be a simple change: add pytest-xdist to the requirements and update the pytest command in the documentation.

Does that make sense to you?

After a bit more investigation it looks like pytest-xdist is not the answer. It creates a race condition for tests that try to clean the temp dir before running.

Profiling shows that most of the time is spent on conn.executescript(TABLES) in the make_app_client function, which makes sense.

Perhaps the better approach would be to look at the app_client fixture, which is already session-scoped but not used by all test cases. And/or use conn = sqlite3.connect(":memory:"), which is much faster. And/or truncate tables after each test case instead of deleting the file and re-creating it.

I can take a look which is the best approach if you give the go-ahead.
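A sketch of the in-memory variant of that suggestion; names are illustrative:

```python
import sqlite3

def make_test_db(tables_sql):
    # Build the test database in memory, avoiding the per-test cost of
    # creating and populating a temporary file on disk
    conn = sqlite3.connect(":memory:")
    conn.executescript(tables_sql)
    return conn

conn = make_test_db("create table facetable (id integer primary key);")
```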

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1212/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
808843401 MDU6SXNzdWU4MDg4NDM0MDE= 1226 --port option should validate port is between 0 and 65535 simonw 9599 closed 0     4 2021-02-15T22:01:33Z 2021-02-18T18:41:27Z 2021-02-18T18:41:27Z OWNER  

Currently throws an ugly error message: (datasette-graphql) datasette-graphql % datasette fivethirtyeight.db -p 80094 INFO: Started server process [45497] INFO: Waiting for application startup. INFO: Application startup complete. Traceback (most recent call last): File "/Users/simon/.local/share/virtualenvs/datasette-graphql-n1OSJCS8/bin/datasette", line 8, in <module> sys.exit(cli()) ... server = await loop.create_server( File "/Users/simon/.pyenv/versions/3.8.2/lib/python3.8/asyncio/base_events.py", line 1461, in create_server sock.bind(sa) OverflowError: bind(): port must be 0-65535.
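click already ships a range type that produces a clean usage error instead; a sketch (the option wiring is illustrative):

```python
import click

@click.command()
@click.option("-p", "--port", type=click.IntRange(0, 65535), default=8001)
def serve(port):
    click.echo(f"Serving on port {port}")

# `serve -p 80094` now fails with a clean usage error before the
# socket bind is ever attempted.
```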

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1226/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
806743116 MDU6SXNzdWU4MDY3NDMxMTY= 1220 Installing datasette via docker: Path 'fixtures.db' does not exist aborruso 30607 closed 0     4 2021-02-11T21:09:14Z 2021-02-12T21:35:17Z 2021-02-12T21:35:17Z NONE  

Hi, If I run

docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 fixtures.db

I have

Error: Invalid value for '[FILES]...': Path 'fixtures.db' does not exist.

If I run test -f fixtures.db && echo "it exists." I get it exists..

What's my error?

Thank you

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1220/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
789336592 MDU6SXNzdWU3ODkzMzY1OTI= 1195 view_name = "query" for the query page simonw 9599 open 0     4 2021-01-19T20:21:36Z 2021-01-25T04:40:08Z   OWNER  

It uses a view_name of database at the moment, which isn't as useful.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1195/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
773913793 MDExOlB1bGxSZXF1ZXN0NTQ0OTIzNDM3 1158 Modernize code to Python 3.6+ eumiro 6774676 closed 0   Datasette 0.54 6346396 4 2020-12-23T16:21:38Z 2021-01-24T21:20:50Z 2020-12-23T17:04:32Z CONTRIBUTOR simonw/datasette/pulls/1158
  • compact dict and set building
  • remove redundant parentheses
  • simplify chained conditions
  • change method name to lowercase
  • use triple double quotes for docstrings

please feel free to accept/reject any of these independent commits

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1158/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
548591089 MDU6SXNzdWU1NDg1OTEwODk= 657 Allow creation of virtual tables at startup dazzag24 1055831 open 0     4 2020-01-12T16:10:55Z 2021-01-15T20:24:35Z   NONE  

Hi,

I've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow. https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html https://github.com/cldellow/sqlite-parquet-vtable

This works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the spatialite extension I knew that datasette supports loading extensions in the underlying sqlite instance. However I hit a blocker as the current design only allows SELECT statements to be executed and so I am unable to execute the crucial

CREATE VIRTUAL TABLE .........

command that is required to load the data from the parquet file into the table.
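For reference, the blocked step looks like this with plain sqlite3 (the extension and file paths are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
conn.load_extension("/usr/lib/sqlite3/parquet")  # path to the parquet vtable extension
conn.execute(
    "CREATE VIRTUAL TABLE flights USING parquet('/data/flights.parquet')"
)
print(conn.execute("select count(*) from flights").fetchone())
```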

It seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself. Could this be done as a datasette plugin? Or would this require more fundamental changes at initialisation time?

My thoughts are that something at init time could detect that the user was loading a .parquet file and then switch to a mode where it loads that via "CREATE VIRTUAL TABLE..." rather than loading the .db file in the default case?

I'm happy to contribute code and testing, I just need some pointers on the best approach.

Thanks Darren

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/657/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
777677671 MDU6SXNzdWU3Nzc2Nzc2NzE= 1169 Prettier package not actually being cached benpickles 3637 closed 0     4 2021-01-03T17:04:41Z 2021-01-04T19:52:34Z 2021-01-04T19:52:33Z CONTRIBUTOR  

With the current configuration Prettier seems to be installed on every run - which can be seen from the output:

npx: installed 1 in 5.166s

Prettier isn't explicitly being installed (it's surprising that actually installing the dependencies isn't included in the actions/cache docs) but it turns out that npx will automatically install the package for the specified command (it actually guesses the package name from the name of the command). I'm not sure where Prettier ends up being installed but it doesn't appear to be in ~/.npm according to the post-cache output (or ./node_modules when I tested locally):

Cache hit occurred on the primary key Linux-npm-565329898f77080e58b14d45cf816ab94877e6f2ece9d395c369c533548a7ee7, not saving cache.

I think there are a couple of approaches to tackling this, you could manually install/cache Prettier within the action, or add a package.json with Prettier. I would go with the latter because it's a more standard and maintainable approach and it will also ensure that, along with CI, anyone working on the project will run the same version of Prettier (you'll also get Dependabot JavaScript updates).

I've tested the package.json approach on a branch and am happy to turn it into a pull request if you fancy.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1169/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
456568880 MDU6SXNzdWU0NTY1Njg4ODA= 509 Support opening multiple databases with the same stem simonw 9599 closed 0 simonw 9599 Datasette 1.0 3268330 4 2019-06-15T19:32:00Z 2020-12-22T20:04:35Z 2020-12-22T20:04:35Z OWNER  

e.g. I should be able to do this:

datasette App/data.db Other_App/data.db

This currently errors because you can't have two databases taking the /data URL path.

Instead, how about in this particular case assigning the second database /data-1?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/509/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
443021509 MDU6SXNzdWU0NDMwMjE1MDk= 461 Paginate + search for databases/tables on the homepage simonw 9599 open 0   Datasette 1.0 3268330 4 2019-05-11T18:05:34Z 2020-12-17T22:14:46Z   OWNER  

Split out from #460 - in order to support large numbers of connected databases the homepage needs to be paginated.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/461/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
760312579 MDU6SXNzdWU3NjAzMTI1Nzk= 1134 "_searchmode=raw" throws an index out of range error when combined with "_search_COLUMN" clausjuhl 2181410 closed 0     4 2020-12-09T13:05:37Z 2020-12-10T05:57:17Z 2020-12-09T19:56:55Z NONE  

Hi Simon! Maybe it's just me, but when using _searchmode=raw (trying to enable wildcard-searching) in combination with the "_search_COLUMN"-table argument, I get a list index out of range error. When combining with the simpler "_search"-argument everything works, including wildcard-searches. Here's the traceback:

``` Traceback (most recent call last): File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py", line 122, in route_path return await view(new_scope, receive, send) File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py", line 196, in view request, **scope["url_route"]["kwargs"] File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py", line 204, in get request, database, hash, correct_hash_provided, **kwargs File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py", line 342, in view_get request, database, hash, **kwargs File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/table.py", line 393, in data search_col = key.split("_search_", 1)[1] IndexError: list index out of range

```
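The failing expression reproduces in isolation: _searchmode matches a _search prefix check but contains no _search_ separator, so element [1] never exists:

```python
key = "_search_COLUMN"
print(key.split("_search_", 1)[1])  # -> 'COLUMN'

key = "_searchmode"
print(key.split("_search_", 1))     # -> ['_searchmode']
key.split("_search_", 1)[1]         # raises IndexError: list index out of range
```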

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1134/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
309047460 MDU6SXNzdWUzMDkwNDc0NjA= 188 Ability to bundle metadata and templates inside the SQLite file simonw 9599 open 0     4 2018-03-27T16:42:07Z 2020-12-04T17:18:34Z   OWNER  

One of the nicest qualities of SQLite as a data format is that you get a single file which you can then backup or share with other people.

Datasette breaks this a little once you start including custom metadata.json or template files and CSS.

It would be cool if there was an optional mechanism for baking that extra configuration into the SQLite file itself. That way entire datasette mini-applications (including canned queries and custom HTML and CSS) could be constructed as single .db files.

Since datasette configuration is all file-based, one way to achieve that would be to support a "datasette_files" table which, if present is used to search for file contents by path.

This is in line with the philosophy described by https://www.sqlite.org/appfileformat.html
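A sketch of what that lookup could look like; the schema here is speculative:

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical bundled application file
conn.execute(
    "CREATE TABLE IF NOT EXISTS datasette_files (path TEXT PRIMARY KEY, content TEXT)"
)
conn.execute(
    "INSERT OR REPLACE INTO datasette_files VALUES (?, ?)",
    ("metadata.json", '{"title": "My bundled app"}'),
)

def lookup(path):
    # Search for file contents by path, as the proposal describes
    row = conn.execute(
        "SELECT content FROM datasette_files WHERE path = ?", (path,)
    ).fetchone()
    return row[0] if row else None

print(lookup("metadata.json"))
```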

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/188/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
610829227 MDU6SXNzdWU2MTA4MjkyMjc= 749 Cloud Run fails to serve database files larger than 32MB simonw 9599 closed 0     4 2020-05-01T16:06:46Z 2020-12-03T00:31:15Z 2020-12-03T00:31:14Z OWNER  

https://cloud.google.com/run/quotas lists the maximum response size as 32MB.

I spotted a bug where attempting to download a database file larger than that from a Cloud Run deployment (in this case it was https://github-to-sqlite.dogsheep.net/github.db after I accidentally increased the size of that database) returned a 500 error because of this.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/749/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
749983857 MDU6SXNzdWU3NDk5ODM4NTc= 1106 Rebrand and redirect config.rst as settings.rst simonw 9599 closed 0   Datasette 0.52 6055094 4 2020-11-24T19:38:17Z 2020-11-24T21:39:58Z 2020-11-24T21:39:58Z OWNER  

I'd like to redirect https://docs.datasette.io/en/stable/config.html to a new https://docs.datasette.io/en/stable/settings.html page too. I can use https://docs.readthedocs.io/en/stable/user-defined-redirects.html for that.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1105#issuecomment-733190827

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1106/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
737394470 MDU6SXNzdWU3MzczOTQ0NzA= 1084 Table/database action menu cut off if too short simonw 9599 closed 0   Datasette 0.52 6055094 4 2020-11-06T01:55:23Z 2020-11-21T23:45:59Z 2020-11-21T23:45:59Z OWNER  

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1084/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
727627923 MDU6SXNzdWU3Mjc2Mjc5MjM= 1041 extra_js_urls and extra_css_urls should respect base_url setting simonw 9599 closed 0   0.51 6026070 4 2020-10-22T18:34:33Z 2020-10-31T20:49:28Z 2020-10-31T20:48:58Z OWNER  

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1033#issuecomment-714681365

Refs #1023

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1041/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
732634375 MDExOlB1bGxSZXF1ZXN0NTEyNTQ1MzY0 1061 .blob output renderer simonw 9599 closed 0   0.51 6026070 4 2020-10-29T20:25:08Z 2020-10-29T22:01:40Z 2020-10-29T22:01:39Z OWNER simonw/datasette/pulls/1061
  • [x] Remove the /-/...blob/... route I added in #1040 in place of the new .blob renderer URLs
  • [x] Link to new .blob download links on the arbitrary query page (using _blob_hash=...) - plus tests for this

Closes #1050, Closes #1051

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1061/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
729017519 MDExOlB1bGxSZXF1ZXN0NTA5NTkwMjA1 1049 Add template block prior to extra URL loaders psychemedia 82988 closed 0     4 2020-10-25T13:08:55Z 2020-10-29T09:20:52Z 2020-10-29T09:20:34Z CONTRIBUTOR simonw/datasette/pulls/1049

To handle packages that require JavaScript state to be set prior to loading a package (e.g. thebelab), provide a template block before the URLs are loaded.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1049/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
727915394 MDExOlB1bGxSZXF1ZXN0NTA4NzE5NTY3 1043 Include LICENSE in sdist bollwyvl 45380 closed 0     4 2020-10-23T05:04:12Z 2020-10-26T00:14:57Z 2020-10-23T20:54:35Z CONTRIBUTOR simonw/datasette/pulls/1043

Hi, thanks for datasette!

This PR adds the LICENSE to source distributions, which seems the norm for Apache-2.0 stuff.

I noticed the 0.50.2 sdist doesn't ship LICENSE, but the 0.50.2 whl does, so I'm assuming the intent is to ship... and it's a one-liner!

Motivation:

It might be a bit of a slog, but I'm looking to see about getting datasette (and friends!) available on conda-forge. There are a few missing upstreams (asgi-csrf, python-baseconv, mergedeep) and some of the plugins don't even appear to have tarballs (just whl!), but little stuff like licenses is nice to get handled upstream vs separately grabbing them.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1043/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
718521469 MDU6SXNzdWU3MTg1MjE0Njk= 1011 column name links broken in 0.50.1 mhalle 649467 closed 0     4 2020-10-10T03:37:51Z 2020-10-10T04:09:32Z 2020-10-10T03:52:07Z NONE  

I just upgraded from 0.49 to 0.50.1 and found that the links on column headers are broken.

If I inspect the source, they have a leading "//" (without host or port) rather than including base_url like other links on the page do. The links in the "gears" menu for each column do work.

I don't have custom templates for my project.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1011/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
718238967 MDU6SXNzdWU3MTgyMzg5Njc= 1003 from_json jinja2 filter mhalle 649467 open 0     4 2020-10-09T15:30:58Z 2020-10-09T17:17:07Z   NONE  

When JSON fields are rendered in a jinja2 template, it is handy to be able to manipulate them as data (e.g., iterate over an array of values).

Ansible has a "from_json" function, which just calls json.loads. It's trivial as a datasette plugin, but it seems generally useful. Does it make sense to add it directly into the app?
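As a plugin it really is trivial; a sketch using the prepare_jinja2_environment hook:

```python
import json
from datasette import hookimpl

@hookimpl
def prepare_jinja2_environment(env):
    # Expose json.loads to templates as {{ value | from_json }}
    env.filters["from_json"] = json.loads
```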

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1003/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
705108492 MDU6SXNzdWU3MDUxMDg0OTI= 970 request an "-o" option on "datasette server" to open the default browser at the running url secretGeek 2861690 closed 0   Datasette 0.50 5971510 4 2020-09-20T13:16:34Z 2020-10-08T23:54:27Z 2020-09-22T14:27:04Z NONE  

This is a request for a "convenience" feature, and only a nice to have. It's based on seeing this feature in several little command line hypertext server apps.

If you run, for example:

datasette.exe serve --open "mydb.s3db"

I would like it if default browser is launched, at the URL that is being served.

The angular cli does this, for example

ng serve <project> --open #see https://angular.io/cli/serve

...as does my usual mini web server of choice when inspecting local static files....

npx http-server -o # see https://www.npmjs.com/package/http-server
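The Python side of such a flag is small; a sketch, with the real work being to fire it only once the server is actually listening:

```python
import webbrowser

def open_browser(host="127.0.0.1", port=8001):
    # Launch the system default browser at the served URL
    webbrowser.open(f"http://{host}:{port}/")
```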

Just a tiny thing. Love your work!

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/970/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
455852801 MDU6SXNzdWU0NTU4NTI4MDE= 507 Every datasette plugin on the ecosystem page should have a screenshot simonw 9599 open 0     4 2019-06-13T17:02:51Z 2020-09-17T02:47:35Z   OWNER  

https://github.com/simonw/datasette/blob/master/docs/ecosystem.rst

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/507/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
449854604 MDU6SXNzdWU0NDk4NTQ2MDQ= 492 Facets not correctly persisted in hidden form fields simonw 9599 closed 0   Datasette 1.0 3268330 4 2019-05-29T14:49:39Z 2020-09-15T20:12:29Z 2020-09-15T20:12:29Z OWNER  

Steps to reproduce: visit https://2a4b892.datasette.io/fixtures/roadside_attractions?_facet_m2m=attraction_characteristic and click "Apply"

Result is a 500: no such column: attraction_characteristic

The error occurs because of this hidden HTML input:

<input type="hidden" name="_facet" value="attraction_characteristic">

This should be:

<input type="hidden" name="_facet_m2m" value="attraction_characteristic">
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/492/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
675724951 MDU6SXNzdWU2NzU3MjQ5NTE= 918 Security issue: read-only canned queries leak CSRF token in URL simonw 9599 closed 0     4 2020-08-09T16:03:01Z 2020-08-09T16:56:48Z 2020-08-09T16:11:59Z OWNER  

The HTML form for a read-only canned query includes the hidden CSRF token field added in #798 for writable canned queries (#698).

This means that submitting those read-only forms exposes the CSRF token in the URL - for example on https://latest.datasette.io/fixtures/neighborhood_search submitting the form took me to:

https://latest.datasette.io/fixtures/neighborhood_search?text=down&csrftoken=IlFubnoxVVpLU1NGT3NMVUoi.HbOPd2YH_epQmp8f_aAt0s-MxtU

This token could potentially leak to an attacker if the resulting page has a link to an external site on it and the user clicks the link, since the token would be exposed in the referral logs.
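A sketch of the fix direction, gating the hidden field on whether the canned query actually writes (names are illustrative):

```python
def csrf_field(token, query_is_writable):
    # GET forms must not carry the token: it would end up in the URL and
    # leak via referrer headers. Only POST (writable) forms need it.
    if not query_is_writable:
        return ""
    return f'<input type="hidden" name="csrftoken" value="{token}">'
```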

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/918/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
658476055 MDU6SXNzdWU2NTg0NzYwNTU= 896 Use white-space: pre-wrap on ALL table cell contents simonw 9599 closed 0     4 2020-07-16T19:05:21Z 2020-07-17T01:26:08Z 2020-07-17T01:26:08Z OWNER  

Is there any reason NOT to apply white-space: pre-wrap to the contents of all table cells in Datasette?

The default display mechanism of HTML (stripping leading/trailing whitespace and collapsing all other whitespace) doesn't really make sense for displaying the kind of data that Datasette works with.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/896/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
648749062 MDExOlB1bGxSZXF1ZXN0NDQyNTA1MDg4 883 Skip counting hidden tables abdusco 3243482 open 0     4 2020-07-01T07:38:08Z 2020-07-02T00:25:44Z   CONTRIBUTOR simonw/datasette/pulls/883

Potential fix for https://github.com/simonw/datasette/issues/859.

Disabling table counts for hidden tables speeds up database page quite a bit. In my setup it reduced load time by 2/3 (~300 -> ~90ms)

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/883/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
637966833 MDU6SXNzdWU2Mzc5NjY4MzM= 840 Log out mechanism for clearing ds_actor cookie simonw 9599 closed 0   Datasette 0.45 5533512 4 2020-06-12T19:41:51Z 2020-06-29T04:31:43Z 2020-06-29T04:31:43Z OWNER  

Need a cookie clearing mechanism and a way to show that you are logged in.

datasette-auth-github had a solution for this that can be pulled into core.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/840/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
638259643 MDU6SXNzdWU2MzgyNTk2NDM= 847 Take advantage of .coverage being a SQLite database simonw 9599 closed 0     4 2020-06-14T00:41:25Z 2020-06-28T20:50:21Z 2020-06-28T20:50:21Z OWNER  

The .coverage file generated by running pytest-cov is now a SQLite database!

I could do something interesting with this. Maybe after each test run for a new commit I could store that database file somewhere?

Lots of interesting challenges here.

I got a change into coveragepy last year which helps make the custom SQL functions available for doing fun things in Datasette: https://github.com/nedbat/coveragepy/issues/868

Bigger challenge: if I have a DB file for every commit, that's hundreds (potentially thousands) of DB files. Datasette isn't designed to handle thousands of files like that.

So, do I figure out how to have Datasette open a file on-command for just a single request? Or, an easier option, do I copy data from those files into a single database with a modified schema to include the commit hash in each table row?

(Following on from #841 and #844)
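Since the file is plain SQLite it can be explored directly; a sketch against coverage.py's current schema, where the file table maps measured source files to ids:

```python
import sqlite3

conn = sqlite3.connect(".coverage")
for (path,) in conn.execute("select path from file limit 5"):
    print(path)
```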

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/847/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
635108074 MDU6SXNzdWU2MzUxMDgwNzQ= 824 Example authentication plugin simonw 9599 closed 0   Datasette 0.44 5512395 4 2020-06-09T04:49:53Z 2020-06-12T00:11:51Z 2020-06-12T00:11:50Z OWNER  

https://github.com/simonw/datasette-auth-github/issues/62 will work for this.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/824/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);