1,779 rows sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app
884952179 MDU6SXNzdWU4ODQ5NTIxNzk= 1320 Can't use apt-get in Dockerfile when using datasetteproject/datasette as base brandonrobertz 2670795 open 0     0 2021-05-10T19:37:27Z 2021-05-10T19:37:27Z   NONE  

The datasette base Docker image is super convenient, but there's one problem: if any of the plugins you install require additional system dependencies (e.g., xz, git, curl) then any attempt to use apt in said Dockerfile results in an explosion:

$ docker-compose build
Building server
[+] Building 9.9s (7/9)
 => [internal] load build definition from Dockerfile                                                                                                                                                                                                     0.0s
 => => transferring dockerfile: 666B                                                                                                                                                                                                                     0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                                        0.0s
 => => transferring context: 34B                                                                                                                                                                                                                         0.0s
 => [internal] load metadata for docker.io/datasetteproject/datasette:latest                                                                                                                                                                             0.6s
 => [base 1/4] FROM docker.io/datasetteproject/datasette@sha256:2250d0fbe57b1d615a8d6df0c9d43deb9533532e00bac68854773d8ff8dcf00a                                                                                                                         0.0s
 => [internal] load build context                                                                                                                                                                                                                        1.8s
 => => transferring context: 2.44MB                                                                                                                                                                                                                      1.8s
 => CACHED [base 2/4] WORKDIR /datasette                                                                                                                                                                                                                 0.0s
 => ERROR [base 3/4] RUN apt-get update     && apt-get install --no-install-recommends -y git ssh curl xz-utils                                                                                                                                          9.2s
------
 > [base 3/4] RUN apt-get update     && apt-get install --no-install-recommends -y git ssh curl xz-utils:
#6 0.446 Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
#6 0.449 Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
#6 0.459 Get:3 http://httpredir.debian.org/debian sid InRelease [157 kB]
#6 0.784 Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
#6 0.790 Get:5 http://httpredir.debian.org/debian sid/main amd64 Packages [8626 kB]
#6 1.003 Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
#6 1.180 Get:7 http://security.debian.org/debian-security buster/updates/main amd64 Packages [286 kB]
#6 7.095 Get:8 http://deb.debian.org/debian buster-updates/main amd64 Packages [10.9 kB]
#6 8.058 Fetched 17.2 MB in 8s (2243 kB/s)
#6 8.058 Reading package lists...
#6 9.166 E: flAbsPath on /var/lib/dpkg/status failed - realpath (2: No such file or directory)
#6 9.166 E: Could not open file  - open (2: No such file or directory)
#6 9.166 E: Problem opening
#6 9.166 E: The package lists or status file could not be parsed or opened.

The problem seems to be from completely wiping out /var/lib/dpkg in the upstream Dockerfile:

https://github.com/simonw/datasette/blob/1b697539f5b53cec3fe13c0f4ada13ba655c88c7/Dockerfile#L18

I've tested without removing the directory and apt works as expected.

datasette 107914493 issue    
842862708 MDU6SXNzdWU4NDI4NjI3MDg= 1280 Ability to run CI against multiple SQLite versions simonw 9599 open 0     2 2021-03-28T23:54:50Z 2021-05-10T19:07:46Z   OWNER  

Issue #1276 happened because I didn't run tests against a SQLite version prior to 3.16.0 (released 2017-01-02).

Glitch is a deployment target and runs SQLite 3.11.0 from 2016-02-15.

If CI ran against that version of SQLite this bug could have been avoided.
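Until a proper CI matrix exists, tests that depend on newer SQLite features could at least guard themselves on the library version; a minimal sketch (the 3.16.0 threshold is just the version mentioned above, and the test name is hypothetical):

import sqlite3

import pytest

# sqlite3.sqlite_version_info reports the SQLite library the tests run against
requires_recent_sqlite = pytest.mark.skipif(
    sqlite3.sqlite_version_info < (3, 16, 0),
    reason="requires SQLite 3.16.0 or later",
)


@requires_recent_sqlite
def test_feature_that_needs_newer_sqlite():
    conn = sqlite3.connect(":memory:")
    # table-valued pragma functions arrived in SQLite 3.16.0
    assert conn.execute("SELECT * FROM pragma_table_info('sqlite_master')").fetchall()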

datasette 107914493 issue    
855446829 MDExOlB1bGxSZXF1ZXN0NjEzMTc4OTY4 1296 Dockerfile: use Ubuntu 20.10 as base tmcl-it 82332573 open 0     3 2021-04-12T00:23:32Z 2021-05-08T19:59:01Z   FIRST_TIMER simonw/datasette/pulls/1296

This PR changes the main Dockerfile to use ubuntu:20.10 as base image instead of python:3.9.2-slim-buster (itself based on debian:buster-slim).

The Dockerfile is essentially the one from https://github.com/simonw/datasette/issues/1249#issuecomment-803698983 with some additional cleanups to slim it down.

This fixes a couple of issues:
  1. The SQLite version in Debian Buster doesn't support generated columns (they were introduced in SQLite 3.31.0)
2. Installing SpatiaLite from the Debian sid repositories has the side effect of also installing updates to libc and libstdc++ from sid.

As a bonus, the Docker image becomes smaller:

$ docker image ls
REPOSITORY                   TAG           IMAGE ID       CREATED       SIZE
datasette                    0.56-ubuntu   f7aca255140a   5 hours ago   212MB
datasetteproject/datasette   0.56          efb3b282f390   13 days ago   258MB

Reproduction of the first issue

$ curl -O https://latest.datasette.io/fixtures.db
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  260k    0  260k    0     0   489k      0 --:--:-- --:--:-- --:--:--  489k

$ docker run -v `pwd`:/mnt datasetteproject/datasette:0.56 datasette /mnt/fixtures.db
Traceback (most recent call last):
  File "/usr/local/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 544, in serve
    asyncio.get_event_loop().run_until_complete(check_databases(ds))
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 584, in check_databases
    await database.execute_fn(check_connection)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 155, in execute_fn
    return await asyncio.get_event_loop().run_in_executor(
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 153, in in_thread
    return fn(conn)
  File "/usr/local/lib/python3.9/site-packages/datasette/utils/__init__.py", line 892, in check_connection
    for r in conn.execute(
sqlite3.DatabaseError: malformed database schema (generated_columns) - near "AS": syntax error

Here is the SQLite version:

$ docker run -v `pwd`:/mnt -it datasetteproject/datasette:0.56 /bin/bash
root@d9220d3b95dd:/# python3
Python 3.9.2 (default, Mar 27 2021, 02:50:26) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> sqlite3.version
'2.6.0'
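
Note that sqlite3.version is the version of the Python sqlite3 (DB-API) module, not of the SQLite library itself; the library version, which is what determines generated column support, is reported separately:

import sqlite3

print(sqlite3.version)         # DB-API module version, e.g. '2.6.0'
print(sqlite3.sqlite_version)  # linked SQLite library, e.g. '3.27.2' on Buster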

Reproduction of the second issue

$ docker build . -t datasette --build-arg VERSION=0.55
[...snip...]
The following packages will be upgraded:
  libc-bin libc6 libstdc++6
[...snip...]
Unpacking libc6:amd64 (2.31-11) over (2.28-10) ...
[...snip...]
Unpacking libstdc++6:amd64 (10.2.1-6) over (8.3.0-6) ...
[...snip...]

Both libc and libstdc++ are backwards compatible, so the image still works, but it will result in a combination of libraries and Python versions that exists only in the Datasette image, so it's likely untested. In addition, since Debian sid is an always-changing rolling-release, the versions of libc, libstdc++, Spatialite, and their dependencies change frequently, so the library versions in the Datasette image will depend on the day when it was built.

datasette 107914493 pull    
881219362 MDExOlB1bGxSZXF1ZXN0NjM0ODIxMDY1 1319 Add Docker multi-arch support with Buildx blairdrummond 10801138 open 0     0 2021-05-08T19:35:03Z 2021-05-08T19:35:03Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1319

This adds Docker support for additional CPU architectures (such as arm) using Docker's Buildx action.

You can see what that looks like on Dockerhub, and how it lets Datasette run on a Raspberry Pi (top is my Dockerhub, bottom is upstream).

The workflow log is here (I subbed blairdrummond for datasetteproject in my branch).

datasette 107914493 pull    
777333388 MDU6SXNzdWU3NzczMzMzODg= 1168 Mechanism for storing metadata in _metadata tables simonw 9599 open 0     19 2021-01-01T18:47:27Z 2021-05-07T17:22:52Z   OWNER  

Original title: Perhaps metadata should all live in a _metadata in-memory database

Inspired by #1150 - metadata should be exposed as an API, and for large Datasette instances that API may need to be paginated. So why not expose it through an in-memory database table?

One catch to this: plugins. #860 aims to add a plugin hook for metadata. But if the metadata comes from an in-memory table, how do the plugins interact with it?

The need to paginate over metadata does make a plugin hook that returns metadata for an individual table seem less wise, since we don't want to have to do 10,000 plugin hook invocations to show a list of all metadata.

If those plugins write directly to the in-memory table how can their contributions survive the server restarting?

datasette 107914493 issue    
860625833 MDU6SXNzdWU4NjA2MjU4MzM= 1300 Generating URL for a row inside `render_cell` hook abdusco 3243482 open 0     3 2021-04-18T10:14:37Z 2021-05-06T09:13:39Z   CONTRIBUTOR  

Hey,
I am using Datasette to view a database that contains video metadata. It has BLOB columns that contain video thumbnails in JPG format (around 100-500KB per row).

I've registered an output renderer that wraps the datasette.blob_renderer.render_blob function and serves the column with the image/jpeg content type.

from datasette import hookimpl
from datasette.blob_renderer import render_blob

async def render_jpg(datasette, database, rows, columns, request, table, view_name):
    response = await render_blob(datasette, database, rows, columns, request, table, view_name)
    response.content_type = "image/jpeg"
    response.headers["Content-Disposition"] = f'inline; filename="image.jpg"'
    return response


@hookimpl
def register_output_renderer():
    return {
        "extension": "jpg",
        "render": render_jpg,
        "can_render": lambda: True,
    }

This works well. I can visit http://localhost:8001/mydb/videos/1.jpg?_blob_column=thumbnail and view the image.

I want to display the image directly with an <img> tag (lazy-loaded of course). So, I need a URL, because embedding base64 would increase the page size too much (each image > 100KB).

Datasette generates a link with .blob extension for blob columns. It does this by calling datasette.urls.row_blob

https://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/views/table.py#L169-L179

But I have no way of getting the row inside the render_cell hook.

import imghdr

from datasette import hookimpl


@hookimpl
def render_cell(value, column, table, database, datasette):
    if isinstance(value, bytes) and imghdr.what(None, value):
        # generate url
        return '$renderedLink'

Any pointers?

datasette 107914493 issue    
876431852 MDExOlB1bGxSZXF1ZXN0NjMwNTc4NzM1 1318 Bump black from 21.4b2 to 21.5b0 dependabot[bot] 49699333 open 0     1 2021-05-05T13:07:51Z 2021-05-05T13:13:45Z   NONE simonw/datasette/pulls/1318

Bumps black from 21.4b2 to 21.5b0.

Release notes

Sourced from black's releases.

21.5b0

Black

  • Set --pyi mode if --stdin-filename ends in .pyi (#2169)
  • Stop detecting target version as Python 3.9+ with pre-PEP-614 decorators that are being called but with no arguments (#2182)

Black-Primer

  • Add --no-diff to black-primer to suppress formatting changes (#2187)


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
datasette 107914493 pull    
496415321 MDU6SXNzdWU0OTY0MTUzMjE= 1 Figure out some interesting example SQL queries simonw 9599 open 0     9 2019-09-20T15:28:07Z 2021-05-03T03:46:23Z   MEMBER  

My knowledge of genetics has left me short here. I'd love to be able to provide some interesting example SELECT queries - maybe one that spots if you are likely to have red hair?

genome-to-sqlite 209590345 issue    
870125126 MDU6SXNzdWU4NzAxMjUxMjY= 1310 I'm creating a plugin to export a spreadsheet file (.ods or .xlsx) ColinMaudry 3747136 closed 0     2 2021-04-28T16:20:11Z 2021-04-30T07:26:11Z 2021-04-30T06:58:46Z NONE  

Hi,

I have started developing a plugin to export records as a spreadsheet file. It could be ods or xlsx, whatever is easier.

I have spotted the following packages:

This is the code I have so far, I test it with the --plugins-dir option:

from datasette import hookimpl
from datasette.utils.asgi import Response
import odswriter as ods

def render_spreadsheet(rows):
    with ods.writer(open("test.ods","wb")) as odsfile:
        for row in rows:
            odsfile.writerow(["String", "ABCDEF123456", "123456"])
        return Response(odsfile, content_type="application/vnd.oasis.opendocument.spreadsheet", status=200)


@hookimpl
def register_output_renderer():
    return {"extension": "ods", "render": render_spreadsheet}

I get the following error:

Traceback (most recent call last):
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/app.py", line 1128, in route_path
    await response.asgi_send(send)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/utils/asgi.py", line 339, in asgi_send
    body = body.encode("utf-8")
AttributeError: 'ODSWriter' object has no attribute 'encode'
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/app.py", line 1128, in route_path
    await response.asgi_send(send)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/utils/asgi.py", line 339, in asgi_send
    body = body.encode("utf-8")
AttributeError: 'ODSWriter' object has no attribute 'encode'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/colin/.local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/home/colin/.local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/utils/asgi.py", line 161, in __call__
    await self.app(scope, receive, send)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/tracer.py", line 75, in __call__
    await self.app(scope, receive, send)
  File "/home/colin/.local/lib/python3.8/site-packages/asgi_csrf.py", line 107, in app_wrapped_with_csrf
    await app(scope, receive, wrapped_send)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/app.py", line 1086, in __call__
    return await self.route_path(scope, receive, send, path)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/app.py", line 1133, in route_path
    return await self.handle_500(request, send, exception)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/app.py", line 1267, in handle_500
    await asgi_send_html(
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/utils/asgi.py", line 217, in asgi_send_html
    await asgi_send(
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/utils/asgi.py", line 237, in asgi_send
    await asgi_start(send, status, headers, content_type)
  File "/home/colin/.local/lib/python3.8/site-packages/datasette/utils/asgi.py", line 246, in asgi_start
    await send(
  File "/home/colin/.local/lib/python3.8/site-packages/asgi_csrf.py", line 103, in wrapped_send
    await send(event)
  File "/home/colin/.local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 482, in send
    raise RuntimeError(msg % message_type)
RuntimeError: Expected ASGI message 'http.response.body', but got 'http.response.start'.

I tried with AsgiFileDownload like in DatabaseDownload to deal with the binary nature of the ods file, but the renderer expects a Response:

<datasette.utils.asgi.AsgiFileDownload object at 0x7f5b9bd503d0> should be dict or Response

However, the Response class only supports the following methods, not binary:

  • html
  • text
  • json
  • redirect

How would you suggest I proceed to have my ods file downloaded?
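
One approach that may work: build the spreadsheet into an in-memory buffer and hand the resulting bytes to Response, so nothing tries to call .encode() on the writer object. A sketch along the lines of the code above, assuming Response passes bytes bodies through unmodified:

import io

from datasette import hookimpl
from datasette.utils.asgi import Response
import odswriter as ods


def render_spreadsheet(rows):
    # Write the .ods file to memory instead of disk, then return raw bytes
    buffer = io.BytesIO()
    with ods.writer(buffer) as odsfile:
        for row in rows:
            odsfile.writerow(["String", "ABCDEF123456", "123456"])
    return Response(
        buffer.getvalue(),  # bytes, so no .encode("utf-8") should be attempted
        content_type="application/vnd.oasis.opendocument.spreadsheet",
        status=200,
    )


@hookimpl
def register_output_renderer():
    return {"extension": "ods", "render": render_spreadsheet}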

datasette 107914493 issue    
871304967 MDU6SXNzdWU4NzEzMDQ5Njc= 1315 settings.json should be picked up by "datasette publish cloudrun" simonw 9599 open 0     0 2021-04-29T18:16:41Z 2021-04-29T18:16:41Z   OWNER  
datasette 107914493 issue    
871046111 MDExOlB1bGxSZXF1ZXN0NjI2MTMwMTM1 1313 Bump black from 20.8b1 to 21.4b2 dependabot-preview[bot] 27856297 closed 0     2 2021-04-29T13:58:06Z 2021-04-29T15:47:50Z 2021-04-29T15:47:49Z CONTRIBUTOR simonw/datasette/pulls/1313

Bumps black from 20.8b1 to 21.4b2.

Release notes

Sourced from black's releases.

21.4b2

Black

  • Fix crash if the user configuration directory is inaccessible. (#2158)

  • Clarify circumstances in which Black may change the AST (#2159)

Packaging

  • Install primer.json (used by black-primer by default) with black. (#2154)

21.4b1

Black

  • Fix crash on docstrings ending with "\ ". (#2142)

  • Fix crash when atypical whitespace is cleaned out of docstrings (#2120)

  • Reflect the --skip-magic-trailing-comma and --experimental-string-processing flags in the name of the cache file. Without this fix, changes in these flags would not take effect if the cache had already been populated. (#2131)

  • Don't remove necessary parentheses from assignment expression containing assert / return statements. (#2143)

Packaging

  • Bump pathspec to >= 0.8.1 to solve invalid .gitignore exclusion handling

21.4b0

Black

  • Fixed a rare but annoying formatting instability created by the combination of optional trailing commas inserted by Black and optional parentheses looking at pre-existing "magic" trailing commas. This fixes issue #1629 and all of its many many duplicates. (#2126)

  • Black now processes one-line docstrings by stripping leading and trailing spaces, and adding a padding space when needed to break up """". (#1740)

  • Black now cleans up leading non-breaking spaces in comments (#2092)

  • Black now respects --skip-string-normalization when normalizing multiline docstring quotes (#1637)

  • Black no longer removes all empty lines between non-function code and decorators when formatting typing stubs. Now Black enforces a single empty line. (#1646)

... (truncated)



Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
  • `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
  • `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
  • `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
  • `@dependabot badge me` will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)
datasette 107914493 pull    
871157602 MDExOlB1bGxSZXF1ZXN0NjI2MjIyNjc2 1314 Upgrade to GitHub-native Dependabot dependabot-preview[bot] 27856297 closed 0     1 2021-04-29T15:36:41Z 2021-04-29T15:47:22Z 2021-04-29T15:47:21Z CONTRIBUTOR simonw/datasette/pulls/1314

Dependabot Preview will be shut down on August 3rd, 2021. In order to keep getting Dependabot updates, please merge this PR and migrate to GitHub-native Dependabot before then.

Dependabot has been fully integrated into GitHub, so you no longer have to install and manage a separate app. This pull request migrates your configuration from Dependabot.com to a config file, using the new syntax. When merged, we'll swap out dependabot-preview (me) for a new dependabot app, and you'll be all set!

With this change, you'll now use the Dependabot page in GitHub, rather than the Dependabot dashboard, to monitor your version updates, and you'll configure Dependabot through the new config file rather than a UI.

If you've got any questions or feedback for us, please let us know by creating an issue in the dependabot/dependabot-core repository.

Learn more about migrating to GitHub-native Dependabot

Please note that regular @dependabot commands do not work on this pull request.

datasette 107914493 pull    
870227815 MDExOlB1bGxSZXF1ZXN0NjI1NDU3NTc5 1311 Bump black from 20.8b1 to 21.4b1 dependabot-preview[bot] 27856297 closed 0     2 2021-04-28T18:25:58Z 2021-04-29T13:58:11Z 2021-04-29T13:58:09Z CONTRIBUTOR simonw/datasette/pulls/1311

Bumps black from 20.8b1 to 21.4b1.

Release notes

Sourced from black's releases.

21.4b1

Black

  • Fix crash on docstrings ending with "\ ". (#2142)

  • Fix crash when atypical whitespace is cleaned out of docstrings (#2120)

  • Reflect the --skip-magic-trailing-comma and --experimental-string-processing flags in the name of the cache file. Without this fix, changes in these flags would not take effect if the cache had already been populated. (#2131)

  • Don't remove necessary parentheses from assignment expression containing assert / return statements. (#2143)

Packaging

  • Bump pathspec to >= 0.8.1 to solve invalid .gitignore exclusion handling

21.4b0

Black

  • Fixed a rare but annoying formatting instability created by the combination of optional trailing commas inserted by Black and optional parentheses looking at pre-existing "magic" trailing commas. This fixes issue #1629 and all of its many many duplicates. (#2126)

  • Black now processes one-line docstrings by stripping leading and trailing spaces, and adding a padding space when needed to break up """". (#1740)

  • Black now cleans up leading non-breaking spaces in comments (#2092)

  • Black now respects --skip-string-normalization when normalizing multiline docstring quotes (#1637)

  • Black no longer removes all empty lines between non-function code and decorators when formatting typing stubs. Now Black enforces a single empty line. (#1646)

  • Black no longer adds an incorrect space after a parenthesized assignment expression in if/while statements (#1655)

  • Added --skip-magic-trailing-comma / -C to avoid using trailing commas as a reason to split lines (#1824)

  • fixed a crash when PWD=/ on POSIX (#1631)

  • fixed "I/O operation on closed file" when using --diff (#1664)

  • Prevent coloured diff output being interleaved with multiple files (#1673)

  • Added support for PEP 614 relaxed decorator syntax on python 3.9 (#1711)

... (truncated)



datasette 107914493 pull    
870946764 MDU6SXNzdWU4NzA5NDY3NjQ= 1312 how to query many-to-many relationship via json API? bram2000 5268174 open 0     0 2021-04-29T12:09:49Z 2021-04-29T12:09:49Z   NONE  

Hi,

Firstly thanks for Datasette, it's great!

I'm trying to use the JSON API to query data from a Datasette instance. I have a simple 3 table many-to-many relationship, like so:

category - list of categories
document - list of documents
document_category - join table (a category contains many documents, and a document can be a member of multiple categories)

The document_category table has foreign keys to the other two tables, using their respective row_ids.

Now I want to return "all documents within category X" but I cannot see a way to do this without executing two queries; the first to lookup the row_id of category X, and the second to join document with document_category where category ID is <id>.

I could easily write this in SQL, but this makes programmatic handling of pagination much more difficult (we'd have to dynamically modify the SQL to select the row_id and include the correct where and limit clauses).

Is there a way to achieve this using the JSON API?
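
For the many-to-many case specifically, Datasette's table JSON API has a _through argument that filters through a join table in a single request. A sketch, assuming document_category has a category_id column (adjust the names to the real schema):

import json
import urllib.parse
import urllib.request

# "all documents within category 3" in one request, via the join table
through = json.dumps(
    {"table": "document_category", "column": "category_id", "value": "3"}
)
url = "http://localhost:8001/mydb/document.json?" + urllib.parse.urlencode(
    {"_through": through}
)
with urllib.request.urlopen(url) as response:
    documents = json.load(response)["rows"]

This still takes the category's row_id rather than its name, so a lookup may be needed first, but it avoids hand-building the join SQL and keeps Datasette's pagination intact.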

datasette 107914493 issue    
869237023 MDExOlB1bGxSZXF1ZXN0NjI0NjM1NDQw 1309 Bump black from 20.8b1 to 21.4b0 dependabot-preview[bot] 27856297 closed 0     2 2021-04-27T20:28:11Z 2021-04-28T18:26:06Z 2021-04-28T18:26:04Z CONTRIBUTOR simonw/datasette/pulls/1309

Bumps black from 20.8b1 to 21.4b0.

Release notes

Sourced from black's releases.

21.4b0

Black

  • Fixed a rare but annoying formatting instability created by the combination of optional trailing commas inserted by Black and optional parentheses looking at pre-existing "magic" trailing commas. This fixes issue #1629 and all of its many many duplicates. (#2126)

  • Black now processes one-line docstrings by stripping leading and trailing spaces, and adding a padding space when needed to break up """". (#1740)

  • Black now cleans up leading non-breaking spaces in comments (#2092)

  • Black now respects --skip-string-normalization when normalizing multiline docstring quotes (#1637)

  • Black no longer removes all empty lines between non-function code and decorators when formatting typing stubs. Now Black enforces a single empty line. (#1646)

  • Black no longer adds an incorrect space after a parenthesized assignment expression in if/while statements (#1655)

  • Added --skip-magic-trailing-comma / -C to avoid using trailing commas as a reason to split lines (#1824)

  • fixed a crash when PWD=/ on POSIX (#1631)

  • fixed "I/O operation on closed file" when using --diff (#1664)

  • Prevent coloured diff output being interleaved with multiple files (#1673)

  • Added support for PEP 614 relaxed decorator syntax on python 3.9 (#1711)

  • Added parsing support for unparenthesized tuples and yield expressions in annotated assignments (#1835)

  • use lowercase hex strings (#1692)

  • added --extend-exclude argument (PR #2005)

  • speed up caching by avoiding pathlib (#1950)

  • --diff correctly indicates when a file doesn't end in a newline (#1662)

  • Added --stdin-filename argument to allow stdin to respect --force-exclude rules (#1780)

  • Lines ending with fmt: skip will now be not formatted (#1800)

  • PR #2053: Black no longer relies on typed-ast for Python 3.8 and higher

... (truncated)



datasette 107914493 pull    
868191959 MDExOlB1bGxSZXF1ZXN0NjIzNzU1NzIz 258 Fixing insert from JSON containing strings with non-ascii characters … dylan-wu 6586811 open 0     0 2021-04-26T20:50:00Z 2021-04-26T20:50:00Z   FIRST_TIME_CONTRIBUTOR simonw/sqlite-utils/pulls/258

…are escaped as unicode for lists, tuples, dicts

Fix of #257

sqlite-utils 140912432 pull    
868188068 MDU6SXNzdWU4NjgxODgwNjg= 257 Insert from JSON containing strings with non-ascii characters are escaped as unicode for lists, tuples, dicts. dylan-wu 6586811 open 0     0 2021-04-26T20:46:25Z 2021-04-26T20:46:25Z   NONE  

JSON Test File (test.json):

[
    {
        "id": 123,
        "text": "FR Théâtre"
    },
    {
        "id": 223,
        "text": [
            "FR Théâtre"
        ]
    }
]

Command to import:

sqlite-utils insert test.db text test.json --pk=id

Resulting table view from datasette:

Original, db.py line 2225:

        return json.dumps(value, default=repr)

Fix, db.py line 2225:

        return json.dumps(value, default=repr, ensure_ascii=False)
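
For reference, the difference the ensure_ascii flag makes:

import json

print(json.dumps(["FR Théâtre"]))                      # ["FR Th\u00e9\u00e2tre"]
print(json.dumps(["FR Théâtre"], ensure_ascii=False))  # ["FR Théâtre"]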
sqlite-utils 140912432 issue    
281110295 MDU6SXNzdWUyODExMTAyOTU= 173 I18n and L10n support janimo 50138 open 0     2 2017-12-11T17:49:58Z 2021-04-26T12:10:01Z   NONE  

It would be less geeky and more user friendly if the display strings in the filter menu and possibly other parts could be localized.

datasette 107914493 issue    
866668415 MDU6SXNzdWU4NjY2Njg0MTU= 1308 Columns named "link" display in bold simonw 9599 closed 0     3 2021-04-24T05:58:11Z 2021-04-24T06:07:49Z 2021-04-24T06:07:49Z OWNER  

Reported in office hours today.

datasette 107914493 issue    
864979486 MDExOlB1bGxSZXF1ZXN0NjIxMTE3OTc4 1306 Possible fix for issue #1305 slygent 416374 open 0     1 2021-04-22T13:53:17Z 2021-04-22T13:59:04Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1306
datasette 107914493 pull    
864969683 MDU6SXNzdWU4NjQ5Njk2ODM= 1305 Index view crashes when any database table is not accessible to actor slygent 416374 open 0     0 2021-04-22T13:44:22Z 2021-04-22T13:46:31Z   NONE  

Because of https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L63, the tables dict that gets built does not include invisible tables; however, if https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L80 is reached (because table_counts was not successfully initialized, e.g. due to a very large database), then a KeyError will be raised, since db.get_all_foreign_keys() returns ALL tables.

This error can be recreated with the fixtures.db if any table is hidden, e.g. by adding something like "foreign_key_references": { "allow": {} } to fixtures-metadata.json, and deleting (or never populating) table_counts at https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L77.

I'm not sure how to fix this error; perhaps by testing whether the table is in the aforementioned tables dict.
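
A toy reproduction of the KeyError and of that membership test (names are illustrative, not the real index.py code):

# tables was built from visible tables only; hidden_table was filtered out
tables = {"visible_table": {"count": 10}}
# ...but get_all_foreign_keys() reports every table in the database
all_foreign_keys = {
    "visible_table": {"incoming": [], "outgoing": []},
    "hidden_table": {"incoming": [], "outgoing": []},
}
for table, foreign_keys in all_foreign_keys.items():
    if table not in tables:
        continue  # without this check, tables[table] raises KeyError
    tables[table]["foreign_keys"] = foreign_keys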

datasette 107914493 issue    
863884805 MDU6SXNzdWU4NjM4ODQ4MDU= 1304 Document how to send multiple values for "Named parameters" rayvoelker 9308268 open 0     0 2021-04-21T13:19:06Z 2021-04-21T13:19:06Z   NONE  

https://docs.datasette.io/en/stable/sql_queries.html#named-parameters

I thought that I had seen an example of how to do this, but I can't seem to find it. This works with literal values:

select
  *
from
  bib
where
  bib.bib_record_num in (1008088, 1008092)

but what I'd like is to pass the whole list as a single named parameter:

select
  *
from
  bib
where
  bib.bib_record_num in (:bib_record_numbers)

https://ilsweb.cincinnatilibrary.org/collection-analysis/current_collection-204d100?sql=select%0D%0A++*%0D%0Afrom%0D%0A++bib%0D%0Awhere%0D%0A++bib.bib_record_num+in+%28%3Abib_record_numbers%29&bib_record_numbers=1008088%2C1008092

Or, maybe this isn't a fully supported feature.

datasette 107914493 issue    
855476501 MDU6SXNzdWU4NTU0NzY1MDE= 1298 improve table horizontal scroll experience mroswell 192568 open 0     3 2021-04-12T01:55:16Z 2021-04-20T08:51:23Z   CONTRIBUTOR  

Wide tables aren't a huge problem if you know to click and drag right. But it's not at all obvious to do that. (it also tends to blue-select any content as it's dragging.) Depending on column widths, public users might entirely miss all the columns to the right.

There is a scrollbar at the bottom of the table, but I'm displaying ALL my records because it's the only way for datasette-vega to make accurate charts. So that bottom scrollbar is likely to be missed. I wonder if some sort of javascript-y mouseover to an arrow might help, similar to those seen in image carousels. Ah: here's a perfect example:

  1. Visit http://google.com
  2. Search for: animals endangered
  3. Note the 'g-right-button' (in the code) that looks like a right-facing caret in a circle.
  4. Click on that and the carousel scrolls right (and 'g-left-button' appears on the left).

Might be tricky to do that on a table, rather than a one-row carousel, but it's worth experimenting with.

Another option is just to put the scrollbars at the top of the table, too.

Meantime, I'm trying to build a button like the "View/hide all columns" button on https://salaries.news.baltimoresun.com/salaries-be494cf/2019+Maryland+state+salaries. Might be nice to have that available by default, with settings in the metadata specifying which columns are on by default.

(I saw some other closed issues related to horizontal scrolling, and admit I don't entirely understand them. For instance, the animated gif at https://github.com/simonw/datasette/issues/998#issuecomment-714117534 confuses me.)

datasette 107914493 issue    
860734722 MDU6SXNzdWU4NjA3MzQ3MjI= 1302 Fix disappearing facets mroswell 192568 open 0     0 2021-04-18T18:42:33Z 2021-04-20T07:40:15Z   CONTRIBUTOR  
  1. Clone https://github.com/mroswell/list-N
  2. Run datasette disinfectants.db -o
  3. Select the Safer_or_Toxic facet.
  4. Select Toxic.
  5. Close out the Safer_or_Toxic facet.
  6. Examine Suggested facets list. Safer_or_Toxic is GONE.
  7. Try some other facets. When you select an element, and then close the list, in some cases, the facet properly returns to the Suggested facet list... Arrays and dates properly return to the list, but fields with strings don't return to the list.

Since my site is devoted to whether disinfectants are Safer or Toxic, having the suggested facet disappear from the suggested facet list is very confusing* to end-users. This, along with a few other issues, unfortunately proved beyond my own programming ability to address, so I hired a Senior-level developer to address a number of issues, including this disappearing act. With the resulting plugin installed:

  1. Open a new terminal. Run datasette disinfectants.db -m metadata.json --static static:static/ --template-dir templates/ --plugins-dir plugins/ -p 8001 -o
  2. Repeat steps 3-6, but this time the Safer_or_Toxic facet returns to the list (and the related URL parameters are removed).

I'm not sure how to do a pull request for this, because the plugin contains other functionality that goes beyond this bug. I wanted the facets sorted in a certain order, both in the suggested facet list and in the detail lists (the detail lists were hopping around all over the place before). I wanted the duplicate facets removed (leaving only the one where you can facet by individual item in an array). I wanted the arrays presented in a prettier fashion (I did that in the template; that could be moved over to the plugin at some point).

I'm thinking it'll be very helpful if applicable parts of my project's plugin (sort_suggested_facets_plugin.py) can be incorporated back into datasette, but I leave that to you to consider.

(* The disappearing facet bug was especially confusing because I'm removing the filters and sql from the table page, at the request of the organization. The filters and sql detail created a lot of confusion for end users who try to find disinfectants used by Hospitals, for instance, as an '=' won't find them, since they are part of the Use_site array.) My disappearing-facet confusion was documented in my own issue: https://github.com/mroswell/list-N/issues/57 (addressed by the plugin). Other facet-related issues here: https://github.com/mroswell/list-N/issues/54 (addressed by the plugin); https://github.com/mroswell/list-N/issues/15 (addressed by template); https://github.com/mroswell/list-N/issues/53 (not yet addressed).

datasette 107914493 issue    
861331159 MDExOlB1bGxSZXF1ZXN0NjE4MDExOTc3 1303 Update pytest-asyncio requirement from <0.15,>=0.10 to >=0.10,<0.16 dependabot-preview[bot] 27856297 closed 0     1 2021-04-19T13:49:12Z 2021-04-19T18:18:17Z 2021-04-19T18:18:17Z CONTRIBUTOR simonw/datasette/pulls/1303

Updates the requirements on pytest-asyncio to permit the latest version.



datasette 107914493 pull    
861622839 MDU6SXNzdWU4NjE2MjI4Mzk= 256 inserting with --nl errors with: sqlite3.OperationalError: table <table> has no column named <column> rathboma 279769 open 0     0 2021-04-19T18:01:03Z 2021-04-19T18:01:20Z   NONE  

I have a jsonl file, it is 10,000 lines long.

Inserting from the cli with sqlite-utils insert db table file --nl --batch-size 10000 fails with this missing column error, even though I'm telling it to use the whole file in the first batch.

This seems similar to #18 and #139, but maybe it's unique to --nl?
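
In the meantime, the --alter flag (alter=True in the Python API) may work around it, since it adds missing columns instead of raising. A sketch using the placeholder names from the command above:

import json

import sqlite_utils

db = sqlite_utils.Database("db")
with open("file") as f:
    rows = (json.loads(line) for line in f)
    # alter=True adds any column that first appears in a later batch,
    # instead of failing with "table ... has no column named ..."
    db["table"].insert_all(rows, alter=True, batch_size=10000)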

sqlite-utils 140912432 issue    
860722711 MDU6SXNzdWU4NjA3MjI3MTE= 1301 Publishing to cloudrun with immutable mode? louispotok 5413548 open 0     0 2021-04-18T17:51:46Z 2021-04-18T17:51:46Z   NONE  

I'm a bit confused about immutable mode and publishing to cloudrun. (I want to publish with immutable mode so that I can support database downloads.)

Running datasette publish cloudrun --extra-options="-i example.db" leads to an error:

Error: Invalid value for '-i' / '--immutable': Path 'example.db' does not exist.

However, running datasette publish cloudrun example.db not only works but seems to publish in immutable mode anyway! I can see this both in /-/databases.json and in the fact that downloads are working.

When I just datasette serve locally, this succeeds both ways and works as expected.

datasette 107914493 issue    
858501079 MDU6SXNzdWU4NTg1MDEwNzk= 255 transform --help should tell you the available types simonw 9599 open 0     0 2021-04-15T05:24:48Z 2021-04-15T05:24:48Z   OWNER  
Usage: sqlite-utils transform [OPTIONS] PATH TABLE

  Transform a table beyond the capabilities of ALTER TABLE

Options:
  --type <TEXT TEXT>...     Change column type to X

This should specify that the possible types are 'INTEGER', 'TEXT', 'FLOAT', 'BLOB'.
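
For comparison, the Python API's transform() accepts the same four types, either as those strings or as Python types; a sketch with hypothetical table and column names:

import sqlite_utils

db = sqlite_utils.Database("data.db")
# int -> INTEGER, str -> TEXT, float -> FLOAT, bytes -> BLOB
db["mytable"].transform(types={"age": int, "name": str, "weight": float})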

sqlite-utils 140912432 issue    
791237799 MDU6SXNzdWU3OTEyMzc3OTk= 1196 Access Denied Error in Windows QAInsights 2826376 open 0     2 2021-01-21T15:40:40Z 2021-04-14T19:28:38Z   NONE  

I am trying to publish a db to Vercel, but issuing the command below throws an Access Denied error, which leads to RecursionError: maximum recursion depth exceeded while calling a Python object.

I am using PyCharm and Python 3.9. I have reinstalled both and launched PyCharm as Admin in Windows 10. But still the issue persists.

Issued command datasette publish vercel jmeter.db --project jmeter --install datasette-vega

PS: localhost is working fine.

datasette 107914493 issue    
857280617 MDExOlB1bGxSZXF1ZXN0NjE0NzI3MDM2 254 Fix incorrect create-table cli description robjwells 1935268 open 0     0 2021-04-13T20:03:15Z 2021-04-13T20:03:15Z   FIRST_TIME_CONTRIBUTOR simonw/sqlite-utils/pulls/254

The description for create-table was duplicated from create-index.

sqlite-utils 140912432 pull    
856895291 MDU6SXNzdWU4NTY4OTUyOTE= 1299 Design better empty states simonw 9599 open 0     0 2021-04-13T12:06:12Z 2021-04-13T12:06:12Z   OWNER  

Inspiration here: https://emptystat.es/

datasette 107914493 issue    
855451460 MDU6SXNzdWU4NTU0NTE0NjA= 1297 Documentation: json1, and introspection endpoints mroswell 192568 open 0     0 2021-04-12T00:38:00Z 2021-04-12T01:29:33Z   CONTRIBUTOR  

https://docs.datasette.io/en/stable/facets.html notes that:

If your SQLite installation provides the json1 extension (you can check using /-/versions) Datasette will automatically detect columns that contain JSON arrays...

When I check /-/versions I see two sections relevant to json1:

        "extensions": {
            "json1": null
        },
        "compile_options": [
            ...
            "ENABLE_JSON1",

The ENABLE_JSON1 makes me think json1 is likely available, but the "json1": null made me think it wasn't (because of the null). It would help if the documentation provided clarity about how to know whether json1 is installed. It would also help if the /-/versions notation signalled somehow that it is to be appended to the hostname or domain name (or simply showed it, as example.com/-/versions instead of /-/versions). Likewise, https://docs.datasette.io/en/stable/introspection.html#introspection should at some point on that page detail where those introspection endpoints go. (Sometimes documentation can be so abbreviated that it's hard for new users to figure out what's going on.)
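
In the meantime, a check that doesn't depend on interpreting the /-/versions output is to call a json1 function directly:

import sqlite3

conn = sqlite3.connect(":memory:")
try:
    conn.execute("SELECT json('[]')")
    print("json1 is available")
except sqlite3.OperationalError:
    print("json1 is not available")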

datasette 107914493 issue    
636511683 MDU6SXNzdWU2MzY1MTE2ODM= 830 Redesign register_facet_classes plugin hook simonw 9599 open 0   Datasette 1.0 3268330 2 2020-06-10T20:03:27Z 2021-04-12T01:07:27Z   OWNER  

Nothing uses this plugin hook yet, so the design is not yet proven.

I'm going to build a real plugin against it and use that process to inform any design changes that may need to be made.

I'll add a warning about this to the documentation.

datasette 107914493 issue    
855296937 MDU6SXNzdWU4NTUyOTY5Mzc= 1295 Errors should have links to further information simonw 9599 open 0     1 2021-04-11T12:39:12Z 2021-04-11T12:41:06Z   OWNER  

Inspired by this tweet:
https://twitter.com/willmcgugan/status/1381186384510255104

While I am thinking about FAQs, I'd also like to add short URLs to Rich exceptions.

I loathe cryptic error messages, and I've created a fair few myself. In Rich I've tried to make them as plain-English as possible. But...

it would be great if every error message linked to a page that explains the error in detail and offers fixes.

datasette 107914493 issue    
849220154 MDU6SXNzdWU4NDkyMjAxNTQ= 1286 Better default display of arrays of items mroswell 192568 open 0     4 2021-04-02T13:31:40Z 2021-04-10T03:59:00Z   CONTRIBUTOR  

Would be great to have template filters that convert array fields to bullets and/or delimited lists upon table display:

|to_bullets
|to_comma_delimited
|to_semicolon_delimited

or maybe:

|join_array("bullet")
|join_array("bullet","square")
|join_array(";")
|join_array(",")

Keeping in mind that bullets show up in HTML as <li> elements, while other delimiting characters appear after the value.

Of course, the fields themselves would remain as facetable arrays.
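
A rough sketch of what a to_bullets filter could look like as a plugin, using Datasette's existing prepare_jinja2_environment hook (the filter body here is an illustration, not a proposed implementation):

from datasette import hookimpl
from markupsafe import Markup, escape
import json

@hookimpl
def prepare_jinja2_environment(env):
    def to_bullets(value):
        # accept either a JSON-encoded array (as stored in SQLite) or a Python list
        items = json.loads(value) if isinstance(value, str) else value
        return Markup("<ul>%s</ul>" % "".join("<li>%s</li>" % escape(item) for item in items))
    env.filters["to_bullets"] = to_bullets

A template could then render a cell with {{ row["tags"] | to_bullets }}.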

datasette 107914493 issue    
853672224 MDU6SXNzdWU4NTM2NzIyMjQ= 1294 "You can check out any time you like. But you can never leave!" mroswell 192568 open 0     0 2021-04-08T17:02:15Z 2021-04-08T18:35:50Z   CONTRIBUTOR  

(Feel free to rename this one.)

  • The column gear lets you "Show not-blank rows." It then places a parameter in the URL, which a web developer would notice, but many users won't notice or know to delete. It would be good to be able to toggle "Show not-blank rows" with "Show all rows." (A "Show blank rows | Show all rows" option would also be quite helpful.)
  • The column gear lets you "Sort ascending" and "Sort descending", but then you're stuck with some sorted version thereafter, unless you know to sort the ID column, or to remove the full _sort parameter and its value from the URL. It would be good to offer a "Remove sort" option in the gear.
  • These requests are in the same camp as: https://github.com/simonw/datasette-vega/issues/36
  • I suspect there are other url parameter instances where similar analysis would be helpful, but the three above are the use cases I've run across.

UPDATE:
- It would be helpful to have a "Previous page" available for all but the first table page.

datasette 107914493 issue    
849978964 MDU6SXNzdWU4NDk5Nzg5NjQ= 1293 Research: Automatically display foreign key links on arbitrary query pages simonw 9599 open 0     18 2021-04-04T22:59:42Z 2021-04-05T16:16:18Z   OWNER   datasette 107914493 issue    
842695374 MDU6SXNzdWU4NDI2OTUzNzQ= 35 Support to annotate photos on other than macOS OSes ligurio 1151557 open 0     1 2021-03-28T09:01:25Z 2021-04-05T07:37:57Z   NONE  

dogsheep-photos lets you annotate photos using Apple Photos' database. It would be nice to have that ability on other OSes too - for example using a locally trained model, or using the Google Vision API (see #14).

dogsheep-photos 256834907 issue    
520667773 MDU6SXNzdWU1MjA2Njc3NzM= 620 Mechanism for indicating foreign key relationships in the table and query page URLs simonw 9599 open 0     6 2019-11-10T22:26:27Z 2021-04-05T03:57:22Z   OWNER  

Datasette currently only inflates foreign keys (into name hyperlinks) if it detects them as foreign key constraints in the underlying database.

It would be useful if you could specify additional "foreign keys" using both metadata.json and the querystring - similar to how you can pass ?_fts_table=x https://datasette.readthedocs.io/en/stable/full_text_search.html#configuring-full-text-search-for-a-table-or-view

datasette 107914493 issue    
849975810 MDU6SXNzdWU4NDk5NzU4MTA= 1292 Research ctypes.util.find_library('spatialite') simonw 9599 open 0     1 2021-04-04T22:36:59Z 2021-04-04T22:37:47Z   OWNER   datasette 107914493 issue    
838382890 MDU6SXNzdWU4MzgzODI4OTA= 1273 Refresh SpatiaLite documentation simonw 9599 open 0     4 2021-03-23T06:05:55Z 2021-04-04T16:32:41Z   OWNER   datasette 107914493 issue    
672421411 MDU6SXNzdWU2NzI0MjE0MTE= 916 Support reverse pagination (previous page, has-previous-items) simonw 9599 open 0     7 2020-08-04T00:32:06Z 2021-04-03T23:43:11Z   OWNER  

I need this for datasette-graphql for full compatibility with the way Relay likes to paginate - using cursors for paginating backwards as well as for paginating forwards.

This may be the kick I need to get Datasette pagination to work in reverse too.
_Originally posted by @simonw in https://github.com/simonw/datasette-graphql/issues/2#issuecomment-668305853_

datasette 107914493 issue    
849396758 MDU6SXNzdWU4NDkzOTY3NTg= 1287 Upgrade to Python 3.9.4 simonw 9599 open 0     5 2021-04-02T18:43:15Z 2021-04-03T22:38:39Z   OWNER   datasette 107914493 issue    
849582643 MDExOlB1bGxSZXF1ZXN0NjA4MzM0MDk2 1291 Update docs: explain allow_download setting louispotok 5413548 open 0     1 2021-04-03T05:28:33Z 2021-04-03T13:31:20Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1291

This fixes one possible source of confusion seen in #502 and clarifies
when database downloads will be shown and allowed.

datasette 107914493 pull    
453131917 MDU6SXNzdWU0NTMxMzE5MTc= 502 Exporting sqlite database(s)? chrismp 7936571 closed 0     3 2019-06-06T16:39:53Z 2021-04-03T05:16:54Z 2019-06-11T18:50:42Z NONE  

I'm working on Datasette from one computer. But if I want to work on it from another computer, and want to copy the SQLite database(s) already on the Heroku Datasette instance, how do I copy the database(s) to the second computer so that I can then update them and push them back online via Datasette's command-line publishing to Heroku?

datasette 107914493 issue    
849568079 MDExOlB1bGxSZXF1ZXN0NjA4MzIzMDI4 1290 Use pytest-xdist to speed up tests simonw 9599 closed 0     1 2021-04-03T03:34:36Z 2021-04-03T03:42:29Z 2021-04-03T03:42:28Z OWNER simonw/datasette/pulls/1290

Closes #1289, refs #1212.

datasette 107914493 pull    
849543502 MDU6SXNzdWU4NDk1NDM1MDI= 1289 Speed up tests with pytest-xdist simonw 9599 closed 0     3 2021-04-03T00:47:39Z 2021-04-03T03:42:28Z 2021-04-03T03:42:28Z OWNER  

I think I can get this working for almost every test, then use the pattern in https://github.com/pytest-dev/pytest-xdist/issues/385#issuecomment-444545641 to opt specific tests out of being run in parallel.
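
For reference, the opt-out pattern in that linked comment boils down to a shared file lock - a minimal sketch using the third-party filelock package (fixture name and lock path are illustrative):

# conftest.py
import pytest
from filelock import FileLock

@pytest.fixture
def serial(tmp_path_factory):
    # tests that request this fixture hold a shared lock, so pytest-xdist
    # never runs two of them at the same time
    lock_path = tmp_path_factory.getbasetemp().parent / "serial.lock"
    with FileLock(str(lock_path)):
        yield

Everything else still runs in parallel with pytest -n auto.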

datasette 107914493 issue    
849512840 MDU6SXNzdWU4NDk1MTI4NDA= 1288 Facets: show counts for null jungle-boogie 1111743 open 0     0 2021-04-02T22:33:44Z 2021-04-02T22:33:44Z   NONE  

Hi,

Thank you for Datasette and being a fan of SQLite!

Not every row will have a value in every column.
So when using a facet on a column where some records have data and others don't, you don't get an accurate count of the results.

Please consider also counting and showing null records with facets.
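
In plain SQL the null bucket already falls out of a group by - a sketch against an invented table and column:

import sqlite3

conn = sqlite3.connect("example.db")  # hypothetical database
# group by collects all NULL values into a single bucket, which a facet
# implementation could surface as a "(null)" entry with its count
rows = conn.execute(
    "select neighborhood, count(*) as n from places group by neighborhood order by n desc"
).fetchall()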

datasette 107914493 issue    
817544251 MDU6SXNzdWU4MTc1NDQyNTE= 1245 Sticky table column headers would be useful, especially on the query page simonw 9599 open 0     1 2021-02-26T17:42:51Z 2021-04-02T20:53:35Z   OWNER  

Suggestion from office hours.

datasette 107914493 issue    
826700095 MDU6SXNzdWU4MjY3MDAwOTU= 1255 Facets timing out but work when filtering robroc 1219001 open 0     2 2021-03-09T22:01:39Z 2021-04-02T20:50:08Z   NONE  

System info:

Windows 10
Datasette 0.55 installed via pip
Python 3.8.5 in a conda environment

I'm getting the message "These facets timed out" on any faceting operation. However, when I apply a filter, the facets appear in the filtered view. The error returns when the filter is removed. My data only has 38,450 rows.

datasette 107914493 issue    
845794436 MDU6SXNzdWU4NDU3OTQ0MzY= 1284 Feature or Documentation Request: Individual table as home page template mroswell 192568 open 0     2 2021-03-31T03:56:17Z 2021-04-02T13:47:06Z   CONTRIBUTOR  

It would be great to have a sample showing how to move a single database that has a single table to the index page. I'm trying it now, and I find there is a real depth of Datasette and Python understanding required to be successful.

I've got all the basic jinja concepts down... variables, template control structures, template inheritance, template overrides, css, html, the --template-dir and --static arguments, etc.

But copying the table.html file to index.html doesn't work. There are undocumented functions and filters... I can figure some of them out (yay, url_builder.py and utils/__init__.py!) but it's a slog better handled by a much stronger Python developer.

One sample would make a world of difference. The ideal form of this documentation would be a diff between the default table.html and how that would look if essentially moved to index.html. The use case is for everyone who wants to create a public-facing website to explore a single table at the root directory. (Maybe a second bit of documentation for people who have a single database with multiple tables.)

(Hmm... might be cool to have a setting for that, where it happens automagically! If only one table, then home page is at the table level. if only one database, then home page is at the database level.... as an option.)

I suppose I could ignore this, and somehow do this in the DNS settings once I hook up Vercel to a domain name, maybe.. and remove the breadcrumbs in table.html... but for now, a documentation request in the form of a diff... for viewing a single table (or a single database) at the root.

(Actually, there's probably room for a whole expanded section on templates. Noticed some nice table metadata in one of the datasette examples, for instance... Hmm... maybe a whole library of solutions in one place... maybe a documentation hackathon! If that's of interest, of course it's a separate issue. )

datasette 107914493 issue    
847700726 MDU6SXNzdWU4NDc3MDA3MjY= 1285 Feature Request or Plugin Request: Numeric Range Facets mroswell 192568 open 0     0 2021-04-01T01:50:20Z 2021-04-01T02:28:19Z   CONTRIBUTOR  

It would be great to offer facets for numeric data ranges.

The ranges could pull from typical GIS methods of creating choropleth maps.
https://gisgeography.com/choropleth-maps-data-classification/
Of the following, for mapping, I've always preferred a Jenks Natural Breaks, or a cross between Jenks and Pretty breaks.

  • Equal Intervals
  • Quantile (equal count)
  • Standard Deviation
  • Natural Breaks (Jenks) Classification
  • Pretty Breaks
  • Some sort of Aggregate Jenks Classification (this isn't standard, but it would be nice to be able to set classification ranges that work across tables.)

Here are some links for Natural Breaks, in case this method is unfamiliar.

Per that last link, there is a Jenks Python module... They also describe it as data-intensive for larger datasets. Maybe this is a good plugin idea.

An example of equal Intervals would be
0 – < 10
10 – < 20
20 – < 30
30 – < 40

It's kind of confusing to have that less-than sign in there. it could also be displayed as:
0 – 10
10 – 20
20 – 30
30 – 40

But then it's not completely clear which category 10 is in, for instance.

(Best to right-justify... and use an "en dash" between numbers.)
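
As a concrete illustration of the equal-intervals case only (Jenks would need a dedicated module, as noted above), a minimal sketch:

def equal_interval_breaks(values, n_classes):
    # boundary values for n_classes equal-width bins
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [lo + width * i for i in range(n_classes + 1)]

# equal_interval_breaks(range(0, 41), 4) -> [0.0, 10.0, 20.0, 30.0, 40.0]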

datasette 107914493 issue    
847423559 MDU6SXNzdWU4NDc0MjM1NTk= 253 fixtures.db example error in sql-utils blog post mroswell 192568 open 0     0 2021-03-31T22:07:36Z 2021-03-31T22:07:36Z   NONE  

En route to trying to understand column order transform documentation, I tried the instructions here:
https://simonwillison.net/2020/Sep/23/sqlite-advanced-alter-table/
I get a malformed database schema syntax error.

$ wget https://latest.datasette.io/fixtures.db
--2021-03-31 18:00:23--  https://latest.datasette.io/fixtures.db
Resolving latest.datasette.io (latest.datasette.io)... 2607:f8b0:4004:801::2013, 142.250.73.211
Connecting to latest.datasette.io (latest.datasette.io)|2607:f8b0:4004:801::2013|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: ‘fixtures.db’

fixtures.db                                                             [ <=>                                                                                                                                                              ] 260.00K  --.-KB/s    in 0.1s

2021-03-31 18:00:23 (2.41 MB/s) - ‘fixtures.db’ saved [266240]

$ sqlite3 fixtures.db '.schema facetable'
Error: malformed database schema (generated_columns) - near "AS": syntax error

$ sqlite3 fixtures.db
SQLite version 3.28.0 2019-04-15 14:49:49
Enter ".help" for usage hints.
sqlite> .schema
Error: malformed database schema (generated_columns) - near "AS": syntax error

(This is likely because fixtures.db contains generated columns, which SQLite only added in version 3.31.0 - the SQLite 3.28.0 shell shown above cannot parse that schema.)
sqlite-utils 140912432 issue    
771511344 MDExOlB1bGxSZXF1ZXN0NTQzMDE1ODI1 31 Update for Big Sur RhetTbull 41546558 open 0     2 2020-12-20T04:36:45Z 2021-03-31T19:14:39Z   CONTRIBUTOR dogsheep/dogsheep-photos/pulls/31

Refactored out the SQL for extracting aesthetic scores to use osxphotos -- this adds compatibility for Big Sur via osxphotos, which has been updated for the new table names in Big Sur. Have not yet refactored the SQL for extracting labels, which is still compatible with Big Sur.

dogsheep-photos 256834907 pull    
841456306 MDU6SXNzdWU4NDE0NTYzMDY= 1276 Invalid SQL: "no such table: pragma_database_list" on database page justinallen 1314318 closed 0     7 2021-03-26T00:03:53Z 2021-03-31T16:27:27Z 2021-03-28T23:52:31Z NONE  

Don't think this has been covered here yet. I'm a little stumped by this one and can't tell if it's a bug or if I have something misconfigured.

Oddly, when running locally the usual list of tables populates (i.e. at /charts, a list of tables in charts.db). But on the web server it throws an Invalid SQL error with "no such table: pragma_database_list" below.

All the url endpoints seem to work fine aside from this - individual tables (/charts/chart_one), as well as stored queries (/charts/query_one).

Not sure if this has anything to do with upgrading to Datasette 0.55, or something to do with our setup, which uses a metadata build script similar to the one for the 538 server, or something else.
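
One possible cause - an assumption, not something confirmed in this thread: the server's Python may be linked against a SQLite older than 3.16.0, the release that introduced table-valued pragma functions such as pragma_database_list. A quick check:

import sqlite3

# table-valued pragma functions (pragma_database_list etc.) need SQLite >= 3.16.0
print(sqlite3.sqlite_version)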

datasette 107914493 issue    
459882902 MDU6SXNzdWU0NTk4ODI5MDI= 526 CSV streaming for canned queries matej-fr 50578294 open 0     4 2019-06-24T13:09:45Z 2021-03-31T10:03:55Z   NONE  

I think that there is a difficulty with canned queries.

When I want to stream all results of a canned query TwoDays, I get only the first 1,000 records.

Example:
http://myserver/history_sample/two_days.csv?_stream=on

returns only the first 1,000 records.

If I do the same with the whole database i.e.
http://myserver/history_sample/database.csv?_stream=on

I get correctly all records.

Any ideas?
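
A possible workaround, assuming the 1,000-row cap comes from Datasette's max_returned_rows setting (which defaults to 1000):

datasette history_sample.db --setting max_returned_rows 100000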

datasette 107914493 issue    
843884745 MDU6SXNzdWU4NDM4ODQ3NDU= 1283 advanced #export causes unexpected scrolling mroswell 192568 open 0     0 2021-03-29T22:46:57Z 2021-03-29T22:46:57Z   CONTRIBUTOR  
  1. Visit a datasette table page
  2. Click on the "(advanced)" link. This adds a fragment identifier "#export" to the URL, and scrolls down to the "Advanced export" div with the "export" id.
  3. Manually scroll back up, and click on a suggested facet. The fragment identifier is still present, and the app scrolls back down to the "Advanced export" div. I think this is unwanted behavior.

The user remedy seems to be to manually remove the "#export" from the URL.

This behavior happens in my project, and in:
https://covid-19.datasettes.com/covid/economist_excess_deaths (for instance)
but not in this table:
https://global-power-plants.datasettes.com/global-power-plants/global-power-plants

datasette 107914493 issue    
843739658 MDExOlB1bGxSZXF1ZXN0NjAzMDgyMjgw 1282 Fix little typo mroswell 192568 closed 0     2 2021-03-29T19:45:28Z 2021-03-29T19:57:34Z 2021-03-29T19:57:34Z CONTRIBUTOR simonw/datasette/pulls/1282
datasette 107914493 pull    
576722115 MDU6SXNzdWU1NzY3MjIxMTU= 696 Single failing unit test when run inside the Docker image simonw 9599 closed 0   Datasette 1.0 3268330 2 2020-03-06T06:16:36Z 2021-03-29T17:04:19Z 2021-03-07T07:41:18Z OWNER  
docker run -it -v `pwd`:/mnt datasetteproject/datasette:latest /bin/bash
root@0e1928cfdf79:/# cd /mnt
root@0e1928cfdf79:/mnt# pip install -e .[test]
root@0e1928cfdf79:/mnt# pytest

I get one failure!

It was for test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3]

    def test_searchable(app_client, path, expected_rows):
        response = app_client.get(path)
>       assert expected_rows == response.json["rows"]
E       AssertionError: assert [[1, 'barry c...sel', 'puma']] == []
E         Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']
E         Full diff:
E         + []
E         - [[1, 'barry cat', 'terry dog', 'panther'],
E         -  [2, 'terry dog', 'sara weasel', 'puma']]

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/695#issuecomment-595614469_

datasette 107914493 issue    
724369025 MDExOlB1bGxSZXF1ZXN0NTA1NzY5NDYy 1031 Fallback to databases in inspect-data.json when no -i options are passed frankier 299380 closed 0     6 2020-10-19T07:51:06Z 2021-03-29T01:46:45Z 2021-03-29T00:23:41Z FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1031

Currently Datasette.__init__ checks immutables against None to decide whether to fall back to inspect-data.json. This patch modifies the serve command to pass None when no -i options are passed, so this fallback works correctly.

datasette 107914493 pull    
842881221 MDU6SXNzdWU4NDI4ODEyMjE= 1281 Latest Datasette tags missing from Docker Hub simonw 9599 closed 0     7 2021-03-29T00:58:30Z 2021-03-29T01:41:48Z 2021-03-29T01:41:48Z OWNER  

Spotted this while testing https://github.com/simonw/datasette/issues/1249#issuecomment-808998719

https://hub.docker.com/r/datasetteproject/datasette/tags?page=1&ordering=last_updated isn't showing the tags for any version more recent than 0.54.1 - we are up to 0.56 now.

But the :latest tag is for the new 0.56 release.

datasette 107914493 issue    
335200136 MDU6SXNzdWUzMzUyMDAxMzY= 327 Explore if SquashFS can be used to shrink size of packaged Docker containers simonw 9599 open 0     2 2018-06-24T18:15:16Z 2021-03-29T01:09:35Z   OWNER  

Inspired by this article: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html#sqlite-database-indexed--squashed

https://en.wikipedia.org/wiki/SquashFS is "a compressed read-only file system for Linux" - which means it could be a really nice fit for Datasette and its read-only SQLite databases.

It would be interesting to explore a Dockerfile recipe that used SquashFS to compress the SQLite database file that was bundled up by datasette package and friends.

datasette 107914493 issue    
824064069 MDU6SXNzdWU4MjQwNjQwNjk= 1249 Updated Dockerfile with SpatiaLite version 5.0 simonw 9599 closed 0     45 2021-03-08T00:17:36Z 2021-03-29T00:57:14Z 2021-03-29T00:57:13Z OWNER  

The version bundled in Datasette's Docker image right now is 4.4.0-RC0

https://github.com/simonw/datasette/blob/d0fd833b8cdd97e1b91d0f97a69b494895d82bee/Dockerfile#L16-L17

SpatiaLite 5 has been out for a couple of months and has a bunch of big improvements, most notably stable KNN support.

datasette 107914493 issue    
831163537 MDExOlB1bGxSZXF1ZXN0NTkyNTQ4MTAz 1260 Fix: code quality issues withshubh 25361949 closed 0     2 2021-03-14T13:56:10Z 2021-03-29T00:22:41Z 2021-03-29T00:22:41Z NONE simonw/datasette/pulls/1260

Description

Hi :wave: I work at DeepSource. I ran DeepSource analysis on a forked copy of this repo and found some interesting code quality issues in the codebase. I'm opening this PR so you can assess whether our platform is right and helpful for you.

Summary of changes

  • Replaced ternary syntax with if expression
  • Removed redundant None default
  • Used is to compare type of objects
  • Iterated dictionary directly
  • Removed unnecessary lambda expression
  • Refactored unnecessary else / elif when if block has a return statement
  • Refactored unnecessary else / elif when if block has a raise statement
  • Added .deepsource.toml to continuously analyze and detect code quality issues
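
As an illustration (not taken from the actual diff), the "unnecessary else when if block has a return statement" refactor looks like this:

# before: the else branch is redundant, because the if branch returns
def describe(n):
    if n < 0:
        return "negative"
    else:
        return "non-negative"

# after: same behavior, one less level of nesting
def describe(n):
    if n < 0:
        return "negative"
    return "non-negative"
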
datasette 107914493 pull    
810507413 MDExOlB1bGxSZXF1ZXN0NTc1MTg3NDU3 1229 ensure immutable databses when starting in configuration directory mode with camallen 295329 closed 0     3 2021-02-17T20:18:26Z 2021-03-29T00:17:32Z 2021-03-29T00:17:32Z CONTRIBUTOR simonw/datasette/pulls/1229

fixes #1224

This PR ensures all databases found in a configuration directory that match the files in inspect-data.json will be set to immutable as outlined in https://docs.datasette.io/en/latest/settings.html#configuration-directory-mode

specifically on building the datasette instance it checks:
- if immutables is an empty tuple - as passed by the cli code
- if immutables is the default function value None - when it's not explicitly set

And correctly builds the immutable database list from the inspect-data[file] keys.

Note: for this to work, the inspect-data.json file must contain file paths that are relative to the configuration directory - otherwise the paths won't match and the dbs won't be set to immutable.

I couldn't find an easy way to test this due to the way make_app_client works, happy to take directions on adding a test for this.

I've updated the relevant docs as well, i.e. use the inspect cli cmd from the config directory path to create the relevant file

cd $config_dir
datasette inspect *.db --inspect-file=inspect-data.json

https://docs.datasette.io/en/latest/performance.html#using-datasette-inspect

datasette 107914493 pull    
807433181 MDU6SXNzdWU4MDc0MzMxODE= 1224 can't start immutable databases from configuration dir mode camallen 295329 closed 0     0 2021-02-12T17:50:13Z 2021-03-29T00:17:31Z 2021-03-29T00:17:31Z CONTRIBUTOR  

Say I have a /databases/ directory with multiple sqlite db files in that dir (1.db & 2.db) and an inspect-data.json file.

If I start datasette via datasette -h 0.0.0.0 /databases/ then the resulting databases are set to is_mutable: true as inspected via http://127.0.0.1:8001/-/databases.json

I don't want to have to list out the databases by name, e.g. datasette -i /databases/1.db -i /databases/2.db, as I want the system to autodetect the sqlite dbs I have in the configuration directory.

According to the docs outlined in https://docs.datasette.io/en/latest/settings.html?highlight=immutable#configuration-directory-mode this should be possible

inspect-data.json - the result of running datasette inspect. Any database files listed here will be treated as immutable, so they should not be changed while Datasette is running.

I believe that if the inspect-data.json file is present, then in theory the databases will be automatically set to immutable via this code https://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/app.py#L211-L216

However it appears the Click Multiple Options will return a tuple via https://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/cli.py#L311-L317

The resulting tuple is passed to the Datasette app via kwargs and overrides the behaviour to set the databases to immutable via this arg https://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/app.py#L182

If you think this is a bug and needs fixing, I am willing to make a PR to check for the empty immutable tuple before calling the Datasette class initializer as I think leaving that class interface alone is the best path here.

Thoughts?

Also - i'm loving Datasette, it truly is a wonderful tool, thank you :)

datasette 107914493 issue    
763207948 MDU6SXNzdWU3NjMyMDc5NDg= 1141 Default styling for bullet point lists simonw 9599 closed 0     0 2020-12-12T02:49:33Z 2021-03-29T00:14:05Z 2021-03-29T00:14:05Z OWNER   datasette 107914493 issue    
825217564 MDExOlB1bGxSZXF1ZXN0NTg3MzMyNDcz 1252 Add back styling to lists within table cells (fixes #1141) bobwhitelock 7476523 closed 0     2 2021-03-09T03:00:57Z 2021-03-29T00:14:04Z 2021-03-29T00:14:04Z CONTRIBUTOR simonw/datasette/pulls/1252

This overrides the Datasette reset - see https://github.com/simonw/datasette/blob/d0fd833b8cdd97e1b91d0f97a69b494895d82bee/datasette/static/app.css#L35-L38 - to add back the default styling of list items displayed within Datasette table cells.

Following this change, the same content as in the original issue looks like this: [screenshot omitted from this export]

datasette 107914493 pull    
842556944 MDExOlB1bGxSZXF1ZXN0NjAyMTA3OTM1 1279 Minor Docs Update. Added `--app` to fly install command. koaning 1019791 closed 0     2 2021-03-27T16:58:08Z 2021-03-29T00:11:55Z 2021-03-29T00:11:55Z CONTRIBUTOR simonw/datasette/pulls/1279

Without this flag, there's an error locally.

> datasette publish fly bigmac.db

Usage: datasette publish fly [OPTIONS] [FILES]...
Try 'datasette publish fly --help' for help.

Error: Missing option '-a' / '--app'.

I also got an error message which later turned out to be because I hadn't yet added my credit card information to Fly. I wasn't sure if I should mention that in the docs here, or submit a bug report over at https://github.com/simonw/datasette-publish-fly.
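
For reference, the corrected invocation with the flag added looks like this (app name illustrative):

datasette publish fly bigmac.db --app bigmac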

datasette 107914493 pull    
842765105 MDExOlB1bGxSZXF1ZXN0NjAyMjYxMDky 6 Add testres-db tool ligurio 1151557 open 0     0 2021-03-28T15:43:23Z 2021-03-28T15:43:23Z   FIRST_TIME_CONTRIBUTOR dogsheep/dogsheep.github.io/pulls/6
dogsheep.github.io 214746582 pull    
741862364 MDU6SXNzdWU3NDE4NjIzNjQ= 1090 Custom widgets for canned query forms simonw 9599 open 0     3 2020-11-12T19:21:07Z 2021-03-27T16:25:25Z   OWNER  

This is an idea that was cut from the first version of writable canned queries:

I really want the option to use a <textarea> for a specific value.

Idea: metadata syntax like this:

json { "databases": { "my-database": { "queries": { "add_twitter_handle": { "sql": "insert into twitter_handles (username) values (:username)", "write": true, "params": { "username": { "widget": "textarea" } } } } } } }

I can ship with some default widgets and provide a plugin hook for registering extra widgets.

This opens up some really exciting possibilities for things like map widgets that let you draw polygons.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/698#issuecomment-608125928_

datasette 107914493 issue    
842416110 MDU6SXNzdWU4NDI0MTYxMTA= 1278 SpatiaLite timezones demo is broken simonw 9599 closed 0     2 2021-03-27T04:45:27Z 2021-03-27T16:17:13Z 2021-03-27T16:17:13Z OWNER   datasette 107914493 issue    
828858421 MDU6SXNzdWU4Mjg4NTg0MjE= 1258 Allow canned query params to specify default values wdccdw 1385831 open 0     4 2021-03-11T07:19:02Z 2021-03-27T04:42:14Z   NONE  

If I call a canned query that includes named parameters, without passing any parameters, datasette runs the query anyway, resulting in an HTTP status code 400, and a visible error in the browser, with only a link back to home. This means that one of the default links on https://site/database/ will lead to a broken page with no apparent way out.

Is there any way to skip performing the query when parameters aren't supplied, but otherwise render the usual canned query page? Alternatively, can I supply default values for my parameters, either when defining my canned queries or when linking to the canned query page from the default database template?
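
One workaround sketch - not a documented feature - is to build the fallback into the SQL itself, so the query still succeeds when the parameter arrives empty (table and column names invented):

select * from posts
where :search is null or :search = '' or title like '%' || :search || '%'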

datasette 107914493 issue    
838245338 MDU6SXNzdWU4MzgyNDUzMzg= 1272 Unit tests for the Dockerfile simonw 9599 open 0     2 2021-03-23T01:36:29Z 2021-03-27T04:29:42Z   OWNER  

Working on the Dockerfile in #1249 made me wish for automated tests - to confirm that it boots up correctly, can run SpatiaLite and doesn't have weird bugs like the /db page hanging.

These could run in CI too, but maybe only if the Dockerfile is updated.

datasette 107914493 issue    
842212586 MDU6SXNzdWU4NDIyMTI1ODY= 1277 Facet by array breaks if table name contains a space simonw 9599 closed 0     1 2021-03-26T18:38:19Z 2021-03-27T03:49:38Z 2021-03-27T03:49:34Z OWNER  

It breaks when you try to select a filtered item.

datasette 107914493 issue    
842062949 MDU6SXNzdWU4NDIwNjI5NDk= 252 Support json-line files rathboma 279769 closed 0     1 2021-03-26T15:19:39Z 2021-03-26T15:21:38Z 2021-03-26T15:21:38Z NONE  

It's common for many processes to create flat files where each line is a JSON object.

So the file isn't a json array.

Many tools (like jq) support this natively - it'd be great for sqlite-utils to support it too!
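
For illustration, a newline-delimited JSON file carries one object per line instead of a single array (invented example data):

{"id": 1, "name": "Cleo"}
{"id": 2, "name": "Pancakes"}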

sqlite-utils 140912432 issue    
841377702 MDU6SXNzdWU4NDEzNzc3MDI= 251 "sqlite-utils convert" command to replace the separate "sqlite-transform" tool simonw 9599 open 0     2 2021-03-25T22:36:36Z 2021-03-25T22:44:31Z   OWNER  

See https://github.com/simonw/sqlite-transform/issues/11 - I built a separate sqlite-transform tool a while ago that uses the word "transform" to mean something entirely different from sqlite-utils transform - I'd like to resolve this by merging the two tools.

sqlite-utils 140912432 issue    
607223136 MDU6SXNzdWU2MDcyMjMxMzY= 741 Replace "datasette publish --extra-options" with "--setting" simonw 9599 open 0   Datasette 1.0 3268330 8 2020-04-27T04:29:04Z 2021-03-24T20:30:33Z   OWNER  

See https://github.com/simonw/datasette-publish-now/issues/9#issuecomment-618155764 - the --extra-options mechanism is in practice just used to set --config options on the Datasette instance that you publish, but that means you end up with pretty messy looking commands:

datasette publish my.db --extra-options="--config default_page_size:50 --config sql_time_limit_ms:3500"

A neater design would be to support --config as an option for datasette publish directly:

datasette publish my.db --config default_page_size:50 --config sql_time_limit_ms:3500
datasette 107914493 issue    
839367451 MDU6SXNzdWU4MzkzNjc0NTE= 1275 Idea: long-running query mode simonw 9599 open 0     0 2021-03-24T05:23:20Z 2021-03-24T05:23:20Z   OWNER  

It would be cool if you could run Datasette in a long-running query mode, for use with trusted users - something like this:

datasette --unlimited my.db

This would disable the query limit, but would also enable a feature where if a query takes longer than e.g. 1s to return, Datasette returns an HTML page to the browser with a progress indicator and polls the server until the query is complete... but also provides the user with a "cancel" button. This relates to the .interrupt() research in #1270.

datasette 107914493 issue    
839008371 MDU6SXNzdWU4MzkwMDgzNzE= 1274 Might there be some way to comment metadata.json? mroswell 192568 closed 0     2 2021-03-23T18:33:00Z 2021-03-23T20:14:54Z 2021-03-23T20:14:54Z CONTRIBUTOR  

I don't know what license to use... It would be nice to be able to add a comment regarding that uncertainty in my metadata.json file.

I like laktak's little video comment in favor of Human json (Hjson)
https://stackoverflow.com/questions/244777/can-comments-be-used-in-json

Hmmm... one of the commenters there said comments are allowed in yaml... so that's a good argument for yaml.

Anyhow, just came to mind, and thought I'd mention it here. Looks like https://hjson.github.io/ has the details.
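
Since Datasette also accepts metadata in YAML, a sketch of what that route allows (values invented):

# still deciding which license applies here
title: My dataset
license: CC Attribution 4.0 License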

datasette 107914493 issue    
771202454 MDU6SXNzdWU3NzEyMDI0NTQ= 1153 Use YAML examples in documentation by default, not JSON simonw 9599 open 0     8 2020-12-18T22:20:15Z 2021-03-23T18:41:57Z   OWNER  

YAML configuration is much better for multi-line strings, and I'm increasingly adding configuration options to Datasette that benefit from that - fragments of HTML in description_html or SQL queries used to configure things like https://github.com/simonw/datasette-atom for example.

Rather than confusing things by showing both in the documentation, I should switch all of the default examples to use YAML instead.

datasette 107914493 issue    
837350092 MDU6SXNzdWU4MzczNTAwOTI= 1270 Try implementing SQLite timeouts using .interrupt() instead of using .set_progress_handler() simonw 9599 open 0     3 2021-03-22T06:00:17Z 2021-03-23T16:45:39Z   OWNER  

Maybe I could implement SQLite query timeouts using the interrupt() method instead of the progress handler hack I'm currently using?

https://stackoverflow.com/questions/43240496/python-sqlite3-how-to-quickly-and-cleanly-interrupt-long-running-query-with-e has some tips.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1268#issuecomment-803764919_
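
A minimal sketch of the idea using the standard library sqlite3 module (not Datasette's actual implementation):

import sqlite3
import threading

conn = sqlite3.connect(":memory:")

# interrupt() is documented as safe to call from another thread; the
# in-flight query then raises sqlite3.OperationalError ("interrupted")
timer = threading.Timer(1.0, conn.interrupt)
timer.start()
try:
    conn.execute(
        "with recursive c(x) as (select 1 union all select x + 1 from c) "
        "select max(x) from c"
    ).fetchall()
except sqlite3.OperationalError:
    print("query interrupted")
finally:
    timer.cancel()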

datasette 107914493 issue    
774332247 MDExOlB1bGxSZXF1ZXN0NTQ1MjY0NDM2 1159 Improve the display of facets information lovasoa 552629 open 0     5 2020-12-24T11:01:47Z 2021-03-23T07:58:38Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/1159

This PR changes the display of facets to hopefully make them more readable.

[Before/after screenshot comparison omitted]
datasette 107914493 pull    
280013907 MDU6SXNzdWUyODAwMTM5MDc= 164 datasette skeleton command for kick-starting database and table metadata simonw 9599 closed 0   Custom templates edition 2949431 3 2017-12-07T06:13:28Z 2021-03-23T02:45:12Z 2017-12-07T06:20:45Z OWNER  

Generates an example metadata.json file populated with all of the databases and tables inspected from the specified databases.

datasette 107914493 issue    
279547886 MDU6SXNzdWUyNzk1NDc4ODY= 163 Document the querystring argument for setting a different time limit simonw 9599 closed 0     2 2017-12-05T22:05:08Z 2021-03-23T02:44:33Z 2017-12-06T15:06:57Z OWNER   datasette 107914493 issue    
273775212 MDU6SXNzdWUyNzM3NzUyMTI= 88 Add NHS England Hospitals example to wiki tomdyson 15543 closed 0     4 2017-11-14T12:29:10Z 2021-03-22T23:46:36Z 2017-11-14T22:54:06Z CONTRIBUTOR  

https://nhs-england-hospitals.now.sh

and an associated map visualisation:

http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/

Datasette is wonderful!

datasette 107914493 issue    
838148087 MDU6SXNzdWU4MzgxNDgwODc= 250 Handle byte order marks (BOMs) in CSV files simonw 9599 open 0     0 2021-03-22T22:13:18Z 2021-03-22T22:13:18Z   OWNER  

I often find sqlite-utils insert ... --csv creates a first column with a weird character at the start of it - which it turns out is the UTF-8 BOM. Fix that.
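
A minimal sketch of the usual fix, assuming the file is read with Python's built-in open(): the utf-8-sig codec consumes a leading BOM if present and behaves like plain utf-8 otherwise:

# "utf-8-sig" strips a leading BOM; a plain "utf-8" read would keep it,
# turning the first header into "\ufeffid" instead of "id"
with open("data.csv", encoding="utf-8-sig") as fp:
    first_line = fp.readline()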

sqlite-utils 140912432 issue    
837956424 MDExOlB1bGxSZXF1ZXN0NTk4MjEzNTY1 1271 Use SQLite conn.interrupt() instead of sqlite_timelimit() simonw 9599 open 0     3 2021-03-22T17:34:20Z 2021-03-22T21:49:27Z   OWNER simonw/datasette/pulls/1271

Refs #1270, #1268, #1249

Before merging this I need to do some more testing (to make sure that expensive queries really are properly cancelled). I also need to delete a bunch of code relating to the old mechanism of cancelling queries.

[See comment below: this doesn't actually cancel the query due to a thread-local confusion]

datasette 107914493 pull    
769520939 MDU6SXNzdWU3Njk1MjA5Mzk= 1149 Make it easier to theme Datasette with CSS simonw 9599 open 0   Datasette 1.0 3268330 3 2020-12-17T05:01:26Z 2021-03-22T21:43:16Z   OWNER  

I want to theme https://datasette.io/ so that when you visit https://datasette.io/content (the Datasette UI part of it) the navigation from the parent site is used.

I tried dropping in a base.html template like this:

{% extends "page_base.html" %}

{% block base_extra_head %}

<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
{% for url in extra_css_urls %}
    <link rel="stylesheet" href="{{ url.url }}"{% if url.sri %} integrity="{{ url.sri }}" crossorigin="anonymous"{% endif %}>
{% endfor %}
{% for url in extra_js_urls %}
    <script src="{{ url.url }}"{% if url.sri %} integrity="{{ url.sri }}" crossorigin="anonymous"{% endif %}></script>
{% endfor %}
{% block extra_head %}{% endblock %}
{% endblock %}

{% block extra_body_end %}
{% include "_close_open_menus.html" %}

{% for body_script in body_scripts %}
    <script>{{ body_script }}</script>
{% endfor %}
{% endblock %}

But this resulted in pages looking like this:

https://user-images.githubusercontent.com/9599/102446045-c168e280-3fe1-11eb-94d6-e7350798eb96.png

Note that the cog menu is broken and the filter UI is unstyled. To get these working correctly I would need to copy over a whole lot of Datasette's default CSS - and that means that when Datasette changes in the future those pages could break in subtle ways.

datasette 107914493 issue    
837308703 MDU6SXNzdWU4MzczMDg3MDM= 1268 Figure out why SpatiaLite 5.0 hangs the database page on Linux simonw 9599 closed 0     18 2021-03-22T04:44:16Z 2021-03-22T17:41:12Z 2021-03-22T17:41:12Z OWNER  

See detailed notes in https://github.com/simonw/datasette/issues/1249 - for some reason SpatiaLite 5.0 hangs the /dbname page on Linux (inside Docker containers, both with a custom compiled SpatiaLite and one installed from the Ubuntu 20.10 package repository). This doesn't happen on macOS with SpatiaLite 5 installed using Homebrew.

datasette 107914493 issue    
837348479 MDU6SXNzdWU4MzczNDg0Nzk= 1269 Don't attempt to run count(*) against virtual tables simonw 9599 closed 0     2 2021-03-22T05:57:43Z 2021-03-22T17:40:42Z 2021-03-22T17:40:41Z OWNER  

Counting the rows in a virtual table doesn't seem very interesting to me, and it's the cause of at least one crashing bug with SpatiaLite 5.0 on Linux, see https://github.com/simonw/datasette/issues/1268

datasette 107914493 issue    
837208901 MDU6SXNzdWU4MzcyMDg5MDE= 1267 Update Datasette alternativeto listing with details RayBB 921217 closed 0     1 2021-03-21T23:20:20Z 2021-03-22T04:37:26Z 2021-03-22T04:37:26Z NONE  

Hello,

I recently learned about Datasette from an old Hacker News post. It seems like an awesome project, and I actually have a use case I might be trying out in the coming months.

Alas, to get a better understanding of your project I looked it up on AlternativeTo to see what it is similar to. I promise it's not spam - it's reputable enough to have a Wikipedia page. There was no listing on the website, so I went ahead and created a listing that is now approved.

I encourage anyone who likes this project and hopes to spread the word to help update the listing by:
1. Adding to the list of software it compares to
2. Uploading screenshots
3. Writing a review
4. Adding "features"

I know this may seem spammy, but I promise I have no affiliation with AlternativeTo - I'm just a happy user, and I know it's a popular site for discovering software.

Here is the listing for datasette:
https://alternativeto.net/software/datasette/about/

Cheers

datasette 107914493 issue    
830567275 MDU6SXNzdWU4MzA1NjcyNzU= 1259 Research using CTEs for faster facet counts simonw 9599 open 0     5 2021-03-12T22:19:49Z 2021-03-21T22:55:31Z   OWNER  

https://www.sqlite.org/changes.html#version_3_35_0

Add support for the MATERIALIZED and NOT MATERIALIZED hints when specifying common table expressions. The default behavior was formerly NOT MATERIALIZED, but is now changed to MATERIALIZED for CTEs that are used more than once.

If a CTE creates a table that is used multiple times in that query, SQLite will now default to creating a materialized table for the duration of that query.

This could be a big performance boost when applying faceting multiple times against the same query. Consider this example query:

WITH data as (
  select
    *
  from
    [global-power-plants]
),
country_long as (select 
  'country_long' as col, country_long as value, count(*) as c from data group by country_long
  order by c desc limit 10
),
primary_fuel as (
select
  'primary_fuel' as col, primary_fuel as value, count(*) as c from data group by primary_fuel
  order by c desc limit 10
)
select * from primary_fuel union select * from country_long order by col, c desc

https://global-power-plants.datasettes.com/global-power-plants?sql=WITH+data+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from%0D%0A++++%5Bglobal-power-plants%5D%0D%0A%29%2C%0D%0Acountry_long+as+%28select+%0D%0A++%27country_long%27+as+col%2C+country_long+as+value%2C+count%28*%29+as+c+from+data+group+by+country_long%0D%0A++order+by+c+desc+limit+10%0D%0A%29%2C%0D%0Aprimary_fuel+as+%28%0D%0Aselect%0D%0A++%27primary_fuel%27+as+col%2C+primary_fuel+as+value%2C+count%28*%29+as+c+from+data+group+by+primary_fuel%0D%0A++order+by+c+desc+limit+10%0D%0A%29%0D%0Aselect+*+from+primary_fuel+union+select+*+from+country_long+order+by+col%2C+c+desc

Outputs:

col           value                      c
country_long  United States of America   8688
country_long  China                      4235
country_long  United Kingdom             2603
country_long  Brazil                     2360
country_long  France                     2155
country_long  India                      1590
country_long  Germany                    1309
country_long  Canada                     1159
country_long  Spain                       829
country_long  Russia                      545
primary_fuel  Solar                      9662
primary_fuel  Hydro                      7155
primary_fuel  Wind                       5188
primary_fuel  Gas                        3922
primary_fuel  Coal                       2390
primary_fuel  Oil                        2290
primary_fuel  Biomass                    1396
primary_fuel  Waste                      1087
primary_fuel  Nuclear                     198
primary_fuel  Geothermal                  189
datasette 107914493 issue    
531755959 MDU6SXNzdWU1MzE3NTU5NTk= 647 Move hashed URL mode out to a plugin simonw 9599 open 0     1 2019-12-03T06:29:03Z 2021-03-21T22:44:19Z   OWNER  

They used to be the default until #418. Since making them optional I haven't felt the need to use them even once.

That suggests to me that they should be removed. I think their effect could be entirely handled by an ASGI wrapping plugin.

https://datasette.readthedocs.io/en/0.32/performance.html#hashed-url-mode

datasette 107914493 issue    
681334912 MDU6SXNzdWU2ODEzMzQ5MTI= 942 Support column descriptions in metadata.json simonw 9599 open 0   Datasette Next 6158551 9 2020-08-18T20:52:00Z 2021-03-21T17:48:42Z   OWNER  

Could look something like this:

{
    "title": "Five Thirty Eight",
    "license": "CC Attribution 4.0 License",
    "license_url": "https://creativecommons.org/licenses/by/4.0/",
    "source": "fivethirtyeight/data on GitHub",
    "source_url": "https://github.com/fivethirtyeight/data",
    "databases": {
        "fivethirtyeight": {
            "tables": {
                "mueller-polls/mueller-approval-polls": {
                    "description_html": "<p>....</p>",
                    "columns": {
                        "name_of_column": "column_description goes here"
                    }
                }
            }
        }
    }
}
datasette 107914493 issue    
836963850 MDU6SXNzdWU4MzY5NjM4NTA= 249 Full text search possibly broken? prabhur 36287 closed 0     2 2021-03-21T02:03:44Z 2021-03-21T02:43:32Z 2021-03-21T02:43:32Z NONE  

I'm not quite sure if this is an issue with sqlite-utils or datasette.

Background
I was previously using sqlite-utils version < 3.6. I have a bunch of csv files that have some data scraped from a website.

sqlite-utils create-table mydb.db post \
    posted_date text \
    url text \
    title text \
    raw_text text \
    --not-null posted_date \
    --not-null url \
    --pk=url

FTS is enabled via
sqlite-utils enable-fts ./mydb.db post title raw_text

Data is loaded to the table via
sqlite-utils insert ./mydb.db post ${filename} --csv

Note that the data contains text in my language Tamil.

Loading happens just fine.
datasette serves the db file just fine. It recognizes FTS and shows the "search" box. However, none of the queries work - whatever text I supply, it always returns 0 rows. I literally copy words from the row listing on the screen and paste them into the search box.

Interestingly, the only thing I can remember changing is switching to sqlite-utils 3.6. I had to do this because the prior version had an issue with column size.

I have attached one of the csv files that can be loaded to the table. Substitute "${filename}" with that file for the sqlite-utils insert command.
posts_20200417-20201231.csv.zip

Interestingly, the FTS based search from datasette worked just fine before this version upgrade. That is, the queries returned results. I will try to downgrade just to see if the theory is correct.

I appreciate any help here. Thanks.
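
One way to debug this directly - assuming enable-fts created an FTS5 table with the default post_fts name, which is an assumption on my part - is to query the FTS table from the sqlite3 shell:

sqlite3 mydb.db "select * from post_fts where post_fts match 'your-search-term';"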

sqlite-utils 140912432 issue    
832092321 MDU6SXNzdWU4MzIwOTIzMjE= 1261 Some links aren't properly URL encoded. brimstone 812795 closed 0     3 2021-03-15T18:43:59Z 2021-03-21T02:06:44Z 2021-03-20T21:36:06Z NONE  

It seems like a percent sign in the query causes some links to come out invalid.

The json and CSV links on this page don't behave like expected: https://honeypot-brimston3.vercel.app/honeypot?sql=select+time%2C+count%28time%29+as+count+from+%28select+strftime%28%22%25Y-%25m-%25d%22%2C+_etime%29+as+time+from+ssh+%29+group+by+time+order+by+time%3B

I can take a swing at trying to fix this, but my Python isn't strong, and I need a pointer to the right approach and the files to change.

Thanks!
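
For illustration, the usual fix is to percent-encode the SQL before splicing it into a link - a sketch, not the actual Datasette code:

from urllib.parse import urlencode

sql = 'select strftime("%Y-%m-%d", _etime) as time from ssh'
# urlencode escapes each % as %25, so the generated link round-trips correctly
qs = urlencode({"sql": sql})
print("/honeypot.json?" + qs)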

datasette 107914493 issue    
648435885 MDU6SXNzdWU2NDg0MzU4ODU= 878 New pattern for views that return either JSON or HTML, available for plugins simonw 9599 open 0   Datasette Next 6158551 6 2020-06-30T19:26:13Z 2021-03-20T22:33:05Z   OWNER  

Can be part of #870 - refactoring existing views to use register_routes().

I'm going to put the new check_permissions() method on BaseView as well. If I want that method to be available to plugins I can do so by turning that BaseView class into a documented API that plugins are encouraged to use themselves.
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/832#issuecomment-651995453_

datasette 107914493 issue    
836923194 MDU6SXNzdWU4MzY5MjMxOTQ= 32 JSON API for search results simonw 9599 open 0     0 2021-03-20T22:21:36Z 2021-03-20T22:21:36Z   MEMBER   dogsheep-beta 197431109 issue    
627794879 MDU6SXNzdWU2Mjc3OTQ4Nzk= 782 Redesign default JSON format in preparation for Datasette 1.0 simonw 9599 open 0   Datasette 1.0 3268330 46 2020-05-30T18:47:07Z 2021-03-20T22:01:23Z   OWNER  

The default JSON just isn't right. I find myself using ?_shape=array for almost everything I build against the API.

datasette 107914493 issue    

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);