
issues

92 rows where comments = 6 and type = "issue" sorted by updated_at descending

repo 7

  • datasette 67
  • sqlite-utils 18
  • twitter-to-sqlite 2
  • github-to-sqlite 2
  • dogsheep-beta 1
  • evernote-to-sqlite 1
  • apple-notes-to-sqlite 1

state 2

  • closed 74
  • open 18

type 1

  • issue · 92
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
1907655261 I_kwDOBm6k_c5xtIJd 2193 "Test DATASETTE_LOAD_PLUGINS" test shows errors but did not fail the CI run simonw 9599 closed 0     6 2023-09-21T19:49:34Z 2023-09-21T21:56:43Z 2023-09-21T21:56:43Z OWNER  

That passed on 3.8 but should have failed: https://github.com/simonw/datasette/actions/runs/6266341481/job/17017099801 - the "Test DATASETTE_LOAD_PLUGINS" test shows errors but did not fail the CI run.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/2057#issuecomment-1730201226

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2193/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1876353656 I_kwDOBm6k_c5v1uJ4 2168 Consider a request/response wrapping hook slightly higher level than asgi_wrapper() simonw 9599 open 0     6 2023-08-31T21:42:04Z 2023-09-10T17:54:08Z   OWNER  

There's a long justification for why this might be needed here: - https://github.com/simonw/datasette-auth-tokens/issues/10#issuecomment-1701820001

Short version: it would be neat if it was possible to stash some data on the request object such that a later plugin/middleware-type-thing could use that to influence the final returned response - similar to the kinds of things you can do with Django middleware.

The asgi_wrapper() mechanism doesn't have access to the request or response objects - it gets scope and can mess around with receive and send, but those are pretty low-level primitives.

Since Datasette has well-defined request and response objects now it might be nice to have a middleware layer that can manipulate those directly.
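
A sketch of what such a middleware layer might look like, purely illustrative: the hook name wrap_request_response and the call_next callable are invented here and are not part of Datasette's actual plugin API.

```python
from datasette import hookimpl


@hookimpl
def wrap_request_response(datasette):
    # Hypothetical hook: receives Datasette's Request, returns its Response
    async def middleware(request, call_next):
        # Stash data on the request for later plugins to read...
        request.scope.setdefault("stash", {})["seen_by_middleware"] = True
        response = await call_next(request)
        # ...and influence the final returned response before it is sent
        response.headers["x-middleware"] = "1"
        return response

    return middleware
```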

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2168/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1886771493 I_kwDOCGYnMM5wddkl 592 `table.transform()` should preserve `rowid` values simonw 9599 closed 0     6 2023-09-08T00:42:38Z 2023-09-10T17:46:41Z 2023-09-09T00:45:32Z OWNER  

I just spotted a bug when using https://datasette.io/plugins/datasette-configure-fts and https://datasette.io/plugins/datasette-edit-schema at the same time.

Steps to reproduce:

  • Configure FTS for a table, then run a test search
  • Edit the schema for that table and change the order of columns
  • Run the test search again

I got the wrong search results, which I think is because the _fts table pointed to the first table by rowid but those rowid values were entirely rewritten as a consequence of running table.transform() on the table.

Reconfiguring FTS on the table fixed the problem.

I think table.transform() should be able to preserve rowid values.
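
A minimal sketch of the failure mode, assuming sqlite-utils is installed: after a deletion the implicit rowids are non-sequential, and the copy-into-a-new-table approach used by transform() renumbered them before the fix proposed here.

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["docs"].insert_all([{"title": t} for t in ("a", "b", "c")])
db.execute("delete from docs where rowid = 2")
db.conn.commit()
before = [r[0] for r in db.execute("select rowid from docs")]
db["docs"].transform(column_order=["title"])
after = [r[0] for r in db.execute("select rowid from docs")]
# Before the fix: [1, 3] became [1, 2], so an external _fts index keyed on
# rowid would now point at the wrong rows
print(before, after)
```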

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/592/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1886791100 I_kwDOBm6k_c5wdiW8 2180 Plugin hook: `actors_from_ids()` simonw 9599 closed 0     6 2023-09-08T01:16:41Z 2023-09-10T17:44:14Z 2023-09-08T04:28:03Z OWNER  

In building Datasette Cloud we realized that a bunch of the features we are building need a way of resolving an actor ID to the actual actor, in order to display something more interesting than just an integer ID.

Social plugins in particular need this - comments by X, CSV uploaded by X, that kind of thing.

I think the solution is a new plugin hook: actors_from_ids(datasette, ids) which can return a list of actor dictionaries.

The default implementation can return [{"id": "..."}] for the IDs passed to it.

Pluggy has a firstresult=True option which is relevant here, since this is the first plugin hook we will have implemented where only one plugin should provide an answer.
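
A sketch of the default implementation described above (the shipped hook may differ in shape, for example by returning a mapping of ID to actor rather than a list):

```python
from datasette import hookimpl


@hookimpl
def actors_from_ids(datasette, actor_ids):
    # Default behaviour: echo each ID back as a bare actor dictionary
    return [{"id": actor_id} for actor_id in actor_ids]
```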

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2180/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
685806511 MDU6SXNzdWU2ODU4MDY1MTE= 950 Private/secret databases: database files that are only visible to plugins simonw 9599 closed 0     6 2020-08-25T20:46:17Z 2023-08-24T22:26:09Z 2023-08-24T22:26:08Z OWNER  

In thinking about the best way to implement https://github.com/simonw/datasette-auth-passwords/issues/6 (SQL-backed user accounts for datasette-auth-passwords) I realized that there are a few different use-cases where a plugin might want to store data that isn't visible to regular Datasette users:

  • Storing password hashes
  • Storing API tokens
  • Storing secrets that are used for data import integrations (secrets for talking to the Twitter API for example)

Idea: allow one or more private database files to be attached to Datasette, something like this:

datasette github.db linkedin.db -s secrets.db -m metadata.yml

The secrets.db file would not be visible using any of Datasette's usual interfaces or API routes - but plugins would be able to run queries against it.

So datasette-auth-passwords might then be configured like this:

```yaml
plugins:
  datasette-auth-passwords:
    database: secrets
    sql: "select password_hash from passwords where username = :username"
```

The plugin could even refuse to operate against a database that hadn't been loaded as a secret database.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/950/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
814595021 MDU6SXNzdWU4MTQ1OTUwMjE= 1241 Share button for copying current URL Kabouik 7107523 open 0     6 2021-02-23T15:55:40Z 2023-08-24T20:09:52Z   NONE  

I use datasette in an iframe inside another HTML file that contains other ways to represent my data (mostly Leaflet maps built with R on summarized data), and the datasette iframe is a tab in that page.

This particular use prevents users from accessing the full URLs of their datasette views and queries, which is a shame because the way datasette handles URLs to make every view or query easy to share is awesome. I know how to get the URL from the context menu of my browser, but I don't think many visitors would do it, or even notice that datasette uses permalinks for pretty much every action they do. Would it be possible to add a "Share link" button to the interface, either in datasette itself or in a plugin?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1241/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1858228057 I_kwDOBm6k_c5uwk9Z 2147 Plugin hook for database queries that are run jackowayed 18899 open 0     6 2023-08-20T18:43:50Z 2023-08-24T03:54:35Z   NONE  

I'm interested in making a plugin that saves every query that gets run to a table in the database. (I know about datasette-query-history but thought it would be good to have a server-side option.)

As far as I can tell from reading the docs, there isn't really a hook set up to allow this.

Maybe I could hack it with some of the hooks that are passed requests, but that doesn't seem good.

I'm a little surprised this isn't possible, so I thought I would open an issue and see if that's a deeply considered decision or just "haven't needed it yet." I'm potentially interested in implementing the hook if the latter.
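
A purely hypothetical sketch of what such a hook could look like; the name query_executed and its signature are invented for illustration and do not exist in Datasette.

```python
from datasette import hookimpl


@hookimpl
def query_executed(datasette, database, sql, params):
    # Invented hook: record every executed query in a server-side history table
    async def write():
        db = datasette.get_database(database)
        await db.execute_write(
            "insert into _query_history (sql, params) values (?, ?)",
            [sql, repr(params)],
        )

    return write
```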

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2147/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1786258502 I_kwDOCGYnMM5qeCRG 565 Table renaming: db.rename_table() and sqlite-utils rename-table simonw 9599 closed 0     6 2023-07-03T14:07:42Z 2023-07-22T22:12:40Z 2023-07-22T22:12:40Z OWNER  

I find myself wanting two new features in sqlite-utils:

  • The ability to have the new transformed table set to a specific name, while keeping the old table around
  • The ability to rename a table (sqlite-utils doesn't have a table rename function at all right now)

Originally posted by @simonw in https://github.com/simonw/llm/issues/65#issuecomment-1618375042
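
A usage sketch of the two features, assuming the eventual API matches the issue title (a db.rename_table() method plus a keep_table= option on transform()):

```python
import sqlite_utils

db = sqlite_utils.Database("data.db")

# Rename a table in place
db.rename_table("old_name", "new_name")

# Transform the table while keeping the pre-transform version around
db["new_name"].transform(column_order=["id"], keep_table="new_name_old")
```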

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/565/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1718607907 I_kwDOCGYnMM5mb-Aj 551 Make as many examples in the CLI docs as possible copy-and-pastable simonw 9599 closed 0     6 2023-05-21T19:04:10Z 2023-05-21T21:04:04Z 2023-05-21T20:57:24Z OWNER  

e.g. in this section:

https://sqlite-utils.datasette.io/en/stable/cli.html#running-queries-directly-against-csv-or-json

The little copy button will also copy the $, which breaks the examples when copied.

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/551/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1718517882 I_kwDOCGYnMM5mboB6 545 Try out Trogon for a tui interface simonw 9599 closed 0     6 2023-05-21T14:08:25Z 2023-05-21T19:33:13Z 2023-05-21T18:41:58Z OWNER  

https://github.com/Textualize/trogon

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/545/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1617769847 I_kwDOJHON9s5gbTV3 7 Folder support simonw 9599 closed 0     6 2023-03-09T18:21:33Z 2023-03-09T20:48:18Z 2023-03-09T20:48:18Z MEMBER  

Notes can live in folders. These relationships should be exported too.

apple-notes-to-sqlite 611552758 issue    
{
    "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/7/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1615862295 I_kwDOBm6k_c5gUBoX 2036 `publish cloudrun` reuses image tags, which can lead to very surprising deploy problems simonw 9599 closed 0     6 2023-03-08T20:11:44Z 2023-03-08T20:57:34Z 2023-03-08T20:57:34Z OWNER  

See this issue: - https://github.com/simonw/datasette.io/issues/141

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2036/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
743371103 MDU6SXNzdWU3NDMzNzExMDM= 1099 Support linking to compound foreign keys simonw 9599 open 0     6 2020-11-15T23:23:17Z 2023-01-25T00:58:26Z   OWNER  

Reported as a bug in #1098 because they caused 500 errors - but it would be even better if Datasette could hyperlink to related rows via compound foreign keys.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1099/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
664485022 MDU6SXNzdWU2NjQ0ODUwMjI= 46 Feature: pull request reviews and comments bhrutledge 1326704 open 0     6 2020-07-23T13:43:45Z 2022-12-20T14:40:15Z   NONE  

Hi there! I saw your presentation at Boston Python. I'm already a light user of Datasette (thank you!), but wasn't aware of this project.

I've been working on a "pull request dashboard" to get a comprehensive view of the state of open PRs, esp. related to reviews (i.e., pending, approved, changes requested). Currently it's a CLI command, but I thought a Datasette UI might be fun.

I see that PRs are available from the issues command, but I don't see reviews anywhere. From the API docs, it looks like there are separate endpoints for those (as well as pull requests in general). What do you think about adding that? Would you accept a PR? Any sense of the level of effort?

github-to-sqlite 207052882 issue    
{
    "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/46/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1495431932 I_kwDOBm6k_c5ZInr8 1951 `datasette.create_token(...)` method for creating signed API tokens simonw 9599 closed 0   Datasette 1.0a2 8711695 6 2022-12-14T01:25:34Z 2022-12-14T02:43:45Z 2022-12-14T02:42:05Z OWNER  

I need this for:

  • #1947

And I can refactor this to use it too:

  • #1855

By making this a documented internal API it can be used by other plugins too. It's also going to be really useful for writing tests.
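
A sketch of how the documented method might be called; the expires_after parameter is an assumption based on the linked issues rather than a confirmed signature.

```python
from datasette.app import Datasette

ds = Datasette(memory=True)
# Create a signed token that acts as the "root" actor for one hour
token = ds.create_token("root", expires_after=3600)
# Send it as: Authorization: Bearer <token>
```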

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1951/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1468709531 I_kwDOBm6k_c5Xirqb 1915 Interactive demo of Datasette 1.0 write APIs simonw 9599 closed 0     6 2022-11-29T21:16:03Z 2022-11-30T04:05:46Z 2022-11-30T04:05:46Z OWNER  

I'm going to try to get this working on https://latest.datasette.io/ - it already has a way for people to sign in as root, but none of the databases there are writable.

So I'm going to build a plugin which adds a writable named in-memory database.

And some kind of mechanism for clearing out that database on a regular basis - maybe tables in that database get deleted automatically an hour after they are created?

(Would be neat to display their time-left-until-deleted too)

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1915/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1439009231 I_kwDOBm6k_c5VxYnP 1884 Exclude virtual tables from datasette inspect eyeseast 25778 open 0     6 2022-11-07T21:26:01Z 2022-11-21T04:40:56Z   CONTRIBUTOR  

Ran inspect on a spatialite database and got these warnings:

ERROR: conn=<sqlite3.Connection object at 0x119e46110>, sql = 'select count(*) from [SpatialIndex]', params = None: no such module: VirtualSpatialIndex
ERROR: conn=<sqlite3.Connection object at 0x119e46110>, sql = 'select count(*) from [ElementaryGeometries]', params = None: no such module: VirtualElementary
ERROR: conn=<sqlite3.Connection object at 0x119e46110>, sql = 'select count(*) from [KNN]', params = None: no such module: VirtualKNN

It still worked, but we probably want to catch these errors.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1884/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1363440999 I_kwDOBm6k_c5RRHVn 1804 Ability to set a custom facet_size per table simonw 9599 closed 0     6 2022-09-06T15:11:40Z 2022-09-07T00:21:56Z 2022-09-06T18:06:53Z OWNER  

Suggestion from Discord: https://discord.com/channels/823971286308356157/823971286941302908/1016725586351247430

Is it possible to limit the facet size per database or even per table?

This is a really good idea - it could be done in metadata.yml.
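
A sketch of what the per-table setting could look like, written out as a metadata.json file from Python; the facet_size key and the database/table names are assumptions for illustration.

```python
import json

metadata = {
    "databases": {
        "exchanges": {          # hypothetical database name
            "tables": {
                "trades": {     # hypothetical table name
                    "facet_size": 1000
                }
            }
        }
    }
}
with open("metadata.json", "w") as fp:
    json.dump(metadata, fp, indent=2)
```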

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1804/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1352932038 I_kwDOCGYnMM5QpBrG 470 Upgrade `--load-extension` to accept entrypoints like Datasette simonw 9599 closed 0   3.29 8355157 6 2022-08-27T03:53:20Z 2022-08-27T05:55:49Z 2022-08-27T05:55:48Z OWNER  

Imitate: https://github.com/simonw/datasette/pull/1789

```
# would load default entrypoint like before
datasette data.db --load-extension ext

# loads the extension with the "sqlite3_foo_init" entrypoint
datasette data.db --load-extension ext:sqlite3_foo_init

# loads the extension with the "sqlite3_bar_init" entrypoint
datasette data.db --load-extension ext:sqlite3_bar_init
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/470/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1223699280 I_kwDOBm6k_c5I8CtQ 1739 .db downloads should be served with an ETag simonw 9599 closed 0     6 2022-05-03T05:11:21Z 2022-05-04T18:21:18Z 2022-05-03T14:59:51Z OWNER  

I noticed that my Pyodide Datasette prototype is downloading the same database file every single time rather than letting the browser cache it.
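
The conditional-request semantics being asked for, as a minimal sketch (not Datasette's actual implementation):

```python
import hashlib


def db_download_response(body: bytes, if_none_match: str | None):
    # A strong ETag derived from the database file's bytes
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    if if_none_match == etag:
        # The browser already has this exact file cached: skip the download
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, body
```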

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1739/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1193090967 I_kwDOBm6k_c5HHR-X 1699 Proposal: datasette query eyeseast 25778 open 0     6 2022-04-05T12:36:43Z 2022-04-11T01:32:12Z   CONTRIBUTOR  

I started sketching out a plugin to add a datasette query subcommand to export data from the command line. This is based on discussions in #1356 and #1605. Before I get too far down this rabbit hole, I figure it's worth getting some feedback here (unless this should happen in Discussions). Here's what I'm thinking:

At its most basic, it will write the results of a query to STDOUT.

```sh
datasette query -d data.db 'select * from data' > results.json
```

This isn't much improvement over using sqlite-utils. To make better use of datasette and its ecosystem, run datasette query using a canned query defined in a metadata.yml file.

For example, using the metadata file from alltheplaces-datasette:

```sh
cd alltheplaces-datasette
datasette query -d alltheplaces.db -m metadata.yml count_by_spider
```

That query would be good to get as CSV, and we can auto-discover metadata and databases in the current directory:

```sh
cd alltheplaces-datasette
datasette query count_by_spider -f csv
```

In this case, count_by_spider is a canned query defined on the alltheplaces database. If the same query is defined on multiple databases, or it's otherwise unclear which database the query should use, pass the -d or --database option.

If a query takes parameters, I can pass them in at runtime, using the --param or -p option:

```sh
datasette query -d data.db -p value something 'select * from neighborhoods where some_column = :value'
```

I'm very interested in feedback on this, including whether it should be a plugin or in Datasette core. (I don't have a strong opinion about this, but I'm prototyping it as a plugin to start.)

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1699/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1174423568 I_kwDOBm6k_c5GAEgQ 1670 Ship Datasette 0.61 simonw 9599 closed 0     6 2022-03-20T02:47:54Z 2022-03-23T18:32:32Z 2022-03-23T18:32:03Z OWNER  

Let the alpha bake for a while, since #1668 is a big last-minute change.

After shipping, release a new datasette-hashed-urls that depends on it, also this:

  • https://github.com/simonw/datasette-hashed-urls/issues/11
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1670/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1126604194 I_kwDOBm6k_c5DJp2i 1632 datasette one.db one.db opens database twice, as one and one_2 simonw 9599 closed 0   Datasette 1.0 3268330 6 2022-02-07T23:14:47Z 2022-03-19T04:04:49Z 2022-02-07T23:50:01Z OWNER  

```
% mkdir /tmp/data
% cp ~/Dropbox/Development/datasette/fixtures.db /tmp/data
% datasette /tmp/data/*.db /tmp/data/created.db --create -p 8852
...
INFO: Uvicorn running on http://127.0.0.1:8852 (Press CTRL+C to quit)
^CINFO: Shutting down
% datasette /tmp/data/*.db /tmp/data/created.db --create -p 8852
...
INFO: 127.0.0.1:49533 - "GET / HTTP/1.1" 200 OK
```

The first time I ran Datasette I got two databases - fixtures and created.

BUT... when I ran Datasette the second time it looked like this:

This is the same result you get if you run:

datasette /tmp/data/fixtures.db /tmp/data/created.db /tmp/data/created.db

This is caused by this Datasette issue: - https://github.com/simonw/datasette/issues/509

So... either I teach Datasette to de-duplicate multiple identical file paths passed to the command, or I can't use /data/*.db in the Dockerfile here and I need to go back to other solutions for the challenge described in this comment: https://github.com/simonw/datasette-publish-fly/pull/12#issuecomment-1031971831

Originally posted by @simonw in https://github.com/simonw/datasette-publish-fly/pull/12#issuecomment-1032029874

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1632/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1145882578 I_kwDOCGYnMM5ETMfS 408 `deterministic=True` fails on versions of SQLite prior to 3.8.3 learning4life 24938923 closed 0     6 2022-02-21T14:36:43Z 2022-03-13T16:54:09Z 2022-03-02T00:38:11Z NONE  

Hi, love your work.

I am unable to look up indexes in a database using sqlite-utils:

sqlite-utils indexes city_spec.db --table

or

sqlite-utils indexes city_spec.db MyTable

Software:

  • sqlite-utils, version 3.24
  • sqlite3 --version: 3.36.0

Output:

Traceback (most recent call last): File "/opt/app-root/bin/sqlite-utils", line 8, in <module> sys.exit(cli()) File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1128, in call return self.main(args, kwargs) File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1053, in main rv = self.invoke(ctx) File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1659, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 1395, in invoke return ctx.invoke(self.callback, ctx.params) File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 754, in invoke return __callback(args, kwargs) File "/opt/app-root/lib64/python3.8/site-packages/click/decorators.py", line 26, in new_func return f(get_current_context(), *args, kwargs) File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/cli.py", line 2123, in indexes ctx.invoke( File "/opt/app-root/lib64/python3.8/site-packages/click/core.py", line 754, in invoke return __callback(args, kwargs) File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/cli.py", line 1624, in query db.register_fts4_bm25() File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 403, in register_fts4_bm25 self.register_function(rank_bm25, deterministic=True) File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 399, in register_function register(fn) File "/opt/app-root/lib64/python3.8/site-packages/sqlite_utils/db.py", line 392, in register self.conn.create_function(name, arity, fn, *kwargs) sqlite3.NotSupportedError: deterministic=True requires SQLite 3.8.3 or higher

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/408/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
678760988 MDU6SXNzdWU2Nzg3NjA5ODg= 932 End-user documentation simonw 9599 open 0   Datasette 1.0 3268330 6 2020-08-13T22:04:39Z 2022-03-08T15:20:48Z   OWNER  

Datasette's documentation is aimed at people who install and configure it.

What about end users of preconfigured and deployed Datasette instances?

Something that can be linked to from the Datasette UI would be really useful.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/932/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
764059235 MDU6SXNzdWU3NjQwNTkyMzU= 1143 More flexible CORS support in core, to encourage good security practices yurivish 114388 open 0   Datasette 1.0 3268330 6 2020-12-12T17:06:35Z 2022-02-13T17:41:17Z   NONE  

It would be nice if the --cors option accepted an origin regex, to support secure local development.

As an example, Observable notebooks namespace every user's notebooks by their username and user content is served from username.observableusercontent.com, so you would set --cors-origin username.observableusercontent.com to restrict access to a local development Datasette instance to only your own notebooks, rather than exposing the data to any website that makes a request.

Thank you for all of your work on Datasette!

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1143/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1121121305 I_kwDOBm6k_c5C0vQZ 1618 Reconsider policy on blocking queries containing the string "pragma" strada 770231 open 0     6 2022-02-01T19:39:46Z 2022-02-02T19:42:03Z   NONE  

First of all, thanks for creating this cool project, and also supporting publishing to various hosting services out of the box.

While testing it out, I noticed that legitimate queries such as select * from books where title like 'Pragmatic%' or select * from books where title = 'The Pragmatic Programmer' are blocked, due to the regular expression check here: https://github.com/simonw/datasette/blob/main/datasette/utils/__init__.py#L185

Example as seen from a Datasette instance: https://fivethirtyeight.datasettes.com/polls?sql=select+*+from+books+where+title+like+%27Pragmatic%25%27%0D%0A

I'd propose a regular expression like re.compile(f"pragma_(?!({'|'.join(allowed_pragmas)}))") instead of re.compile(f"pragma(?!_({'|'.join(allowed_pragmas)}))").

I can create a pull request with this change, unless the maintainers think it would allow unwanted queries to be executed.
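
A quick demonstration of the difference between the two patterns; the allowed_pragmas list here is illustrative, not Datasette's actual allow-list.

```python
import re

allowed_pragmas = ["table_info", "index_list"]
current = re.compile(f"pragma(?!_({'|'.join(allowed_pragmas)}))")
proposed = re.compile(f"pragma_(?!({'|'.join(allowed_pragmas)}))")

sql = "select * from books where title like 'pragmatic%'"
print(bool(current.search(sql)))    # True: the legitimate query is blocked today
print(bool(proposed.search(sql)))   # False: the proposed pattern allows it
print(bool(proposed.search("select pragma_database_list()")))  # True: still blocked
```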

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1618/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1087913724 I_kwDOBm6k_c5A2D78 1577 Drop support for Python 3.6 simonw 9599 closed 0   Datasette 1.0 3268330 6 2021-12-23T18:17:03Z 2022-01-25T23:30:03Z 2022-01-20T04:31:41Z OWNER  

Original title: Decide when to drop support for Python 3.6

contextvars can solve this but they were introduced in Python 3.7: https://www.python.org/dev/peps/pep-0567/

Python 3.6 support ends in a few days time, and it looks like Glitch has updated to 3.7 now - so maybe I can get away with Datasette needing 3.7 these days?

Tweeted about that here: https://twitter.com/simonw/status/1473761478155010048

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1576#issuecomment-999878907

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1577/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1083669410 I_kwDOBm6k_c5Al3ui 1566 Release Datasette 0.60 simonw 9599 closed 0   Datasette 0.60 7571612 6 2021-12-17T22:58:12Z 2022-01-14T01:59:55Z 2022-01-14T01:59:55Z OWNER  

Using this as a tracking issue. I'm hoping to get the bulk of the JSON redesign work from the refactor in #1554 in for this release.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1566/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1076388044 I_kwDOBm6k_c5AKGDM 1547 Writable canned queries fail to load custom templates wragge 127565 closed 0   Datasette 0.60 7571612 6 2021-12-10T03:31:48Z 2022-01-13T22:27:59Z 2021-12-19T21:12:00Z CONTRIBUTOR  

I've created a canned query with "write": true set. I've also created a custom template for it, but the template doesn't seem to be found. If I look in the HTML I see (stock_exchange is the db name):

<!-- Templates considered: query-stock_exchange.html, *query.html -->

My non-writeable canned queries pick up custom templates as expected, and if I look at their HTML I see the canned query name added to the templates considered (the canned query here is date_search):

<!-- Templates considered: query-stock_exchange-date_search.html, query-stock_exchange.html, *query.html -->

So it seems like the writeable canned query is behaving differently for some reason. Is it an authentication thing? I'm using the built-in --root authentication.

Thanks!

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1547/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1067771698 I_kwDOCGYnMM4_pOcy 348 Command for creating an empty database simonw 9599 closed 0   3.21 7558727 6 2021-11-30T23:24:27Z 2022-01-13T07:06:59Z 2022-01-09T20:33:20Z OWNER  

I sometimes find the need to create an empty SQLite database file - for example if I want to enable WAL on it before using it with another script. I currently do that like this:

sqlite3 my.db vacuum
sqlite-utils enable-wal my.db

It would be nice if sqlite-utils had a convenience command for doing this.
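
What that convenience boils down to in Python, as a sketch using the existing library API:

```python
import sqlite_utils

# Create an empty database file, then enable WAL before anything else uses it
db = sqlite_utils.Database("my.db")
db.vacuum()      # forces the (empty) schema to be written to disk
db.enable_wal()
```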

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/348/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1097128334 I_kwDOCGYnMM5BZNmO 371 Support mutating row in `--convert` without returning it simonw 9599 closed 0   3.21 7558727 6 2022-01-09T07:38:44Z 2022-01-10T19:27:30Z 2022-01-09T20:06:15Z OWNER  

Currently you have to do this:

```
$ sqlite-utils insert dogs.db dogs dogs.json --convert '
row["is_good"] = 1
return row'
```

Would be neat if this worked too:

```
$ sqlite-utils insert dogs.db dogs dogs.json \
  --convert 'row["is_good"] = 1'
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/371/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1068791148 I_kwDOBm6k_c4_tHVs 1540 Idea: hover to reveal details of linked row simonw 9599 open 0     6 2021-12-01T19:28:07Z 2021-12-09T23:38:39Z   OWNER  

Hovering over a linked row could work a little bit like GitHub issue links.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1540/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1052851176 I_kwDOBm6k_c4-wTvo 1507 ReadTheDocs build failed for 0.59.2 release simonw 9599 closed 0     6 2021-11-14T05:24:34Z 2021-11-14T05:41:55Z 2021-11-14T05:41:55Z OWNER  

I had to cancel the 0.59.2 release because ReadTheDocs was failing to build the documentation.

https://readthedocs.org/projects/datasette/builds/15268454/

```
/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/bin/python -m sphinx -T -b html -d _build/doctrees -D language=en . _build/html
Running Sphinx v1.8.5
loading translations [en]... done
making output directory...
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 27 source files that are out of date
updating environment: 27 added, 0 changed, 0 removed
reading sources... [ 3%] authentication

Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/cmd/build.py", line 304, in build_main
    app.build(args.force_all, filenames)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/application.py", line 341, in build
    self.builder.build_update()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 347, in build_update
    len(to_build))
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 360, in build
    updated_docnames = set(self.read())
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 468, in read
    self._read_serial(docnames)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 490, in _read_serial
    self.read_doc(docname)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 534, in read_doc
    doctree = read_doc(self.app, self.env, self.env.doc2path(docname))
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/io.py", line 318, in read_doc
    pub.publish()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py", line 219, in publish
    self.apply_transforms()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/core.py", line 200, in apply_transforms
    self.document.transformer.apply_transforms()
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py", line 90, in apply_transforms
    Transformer.apply_transforms(self)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/docutils/transforms/__init__.py", line 171, in apply_transforms
    transform.apply(**kwargs)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/transforms/__init__.py", line 245, in apply
    apply_source_workaround(n)
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py", line 94, in apply_source_workaround
    for classifier in reversed(node.parent.traverse(nodes.classifier)):
TypeError: argument to reversed() must be a sequence

Exception occurred:
  File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/0.59.2/lib/python2.7/site-packages/sphinx/util/nodes.py", line 94, in apply_source_workaround
    for classifier in reversed(node.parent.traverse(nodes.classifier)):
TypeError: argument to reversed() must be a sequence
The full traceback has been saved in /tmp/sphinx-err-vkl0oE.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at https://github.com/sphinx-doc/sphinx/issues. Thanks!
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1507/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
707478649 MDU6SXNzdWU3MDc0Nzg2NDk= 173 Progress bar for sqlite-utils insert simonw 9599 closed 0     6 2020-09-23T15:43:56Z 2021-11-01T08:42:24Z 2020-10-27T18:16:04Z OWNER  

It would be nice if sqlite-utils insert had a progress bar, for when it's churning through huge CSV files.

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/173/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
991191951 MDU6SXNzdWU5OTExOTE5NTE= 1464 clean checkout & clean environment has test failures ctb 51016 open 0     6 2021-09-08T14:16:23Z 2021-09-13T22:17:17Z   CONTRIBUTOR  

I followed the instructions here, and even after running python update-docs-help.py I get the following failed tests -- any thoughts?

FAILED tests/test_api.py::test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3]
FAILED tests/test_api.py::test_searchmode[table_metadata1-_search=te*+AND+do*-expected_rows1]
FAILED tests/test_api.py::test_searchmode[table_metadata2-_search=te*+AND+do*&_searchmode=raw-expected_rows2]

This is with python 3.9.7 and lots of other packages, as in attached environment listing from conda list. conda-installed.txt

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1464/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
907645813 MDU6SXNzdWU5MDc2NDU4MTM= 57 Error: Use either --since or --since_id, not both rubenv 42904 closed 0     6 2021-05-31T18:11:04Z 2021-08-20T00:01:31Z 2021-08-20T00:01:31Z CONTRIBUTOR  

I'm using the following command:

twitter-to-sqlite user-timeline -a twitter-auth.json twitter/tweets.db --since

Which gives the following error: Error: Use either --since or --since_id, not both

Running without --since produces this traceback:

Traceback (most recent call last): File "/usr/local/bin/twitter-to-sqlite", line 8, in <module> sys.exit(cli()) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1062, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 763, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/twitter_to_sqlite/cli.py", line 317, in user_timeline for tweet in bar: File "/usr/local/lib/python3.9/site-packages/click/_termui_impl.py", line 328, in generator for rv in self.iter: File "/usr/local/lib/python3.9/site-packages/twitter_to_sqlite/utils.py", line 234, in fetch_user_timeline yield from fetch_timeline( File "/usr/local/lib/python3.9/site-packages/twitter_to_sqlite/utils.py", line 202, in fetch_timeline raise Exception(str(tweets["errors"])) Exception: [{'code': 44, 'message': 'since_id parameter is invalid.'}]

Python 3.9.5 twitter-to-sqlite, version 0.21.3

twitter-to-sqlite 206156866 issue    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/57/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
974987856 MDU6SXNzdWU5NzQ5ODc4NTY= 1442 Mechanism to cause specific branches to deploy their own demos simonw 9599 closed 0     6 2021-08-19T19:41:39Z 2021-08-19T21:11:45Z 2021-08-19T21:09:40Z OWNER  

A useful capability would be if it was super-easy to say "any pushes to branch X should be deployed to latest-X.datasette.io".

I'd like to use this for the column query information work in #1434

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1442/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
465815372 MDU6SXNzdWU0NjU4MTUzNzI= 37 Experiment with type hints simonw 9599 closed 0     6 2019-07-09T14:30:34Z 2021-08-18T21:48:57Z 2021-08-18T21:48:57Z OWNER  

Since it's designed to be used in Jupyter or for rapid prototyping in an IDE (and it's still pretty small) sqlite-utils feels like a great candidate for me to finally try out Python type hints.

https://veekaybee.github.io/2019/07/08/python-type-hints/ is good.

It suggests the mypy docs for getting started: https://mypy.readthedocs.io/en/latest/existing_code.html plus this tutorial: https://pymbook.readthedocs.io/en/latest/typehinting.html

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/37/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
944326512 MDU6SXNzdWU5NDQzMjY1MTI= 296 `table.search(..., quote=True)` parameter and `sqlite-utils search --quote` option deafmute1 32427188 closed 0     6 2021-07-14T11:26:47Z 2021-08-18T20:13:12Z 2021-08-18T20:10:48Z NONE  

Hi, I recently got this error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ethan/git/music-metadata-indexer/src/mmindexer/__init__.py", line 38, in <module>
    start("/home/ethan/git/music-metadata-indexer/sample", "/home/ethan/git/music-metadata-indexer/test.db")
  File "/home/ethan/git/music-metadata-indexer/src/mmindexer/__init__.py", line 23, in start
    scanner.build_database()
  File "/home/ethan/git/music-metadata-indexer/src/mmindexer/scan.py", line 79, in build_database
    _import_song(self.db, Path(dirpath).joinpath(f), self.logger)
  File "/home/ethan/git/music-metadata-indexer/src/mmindexer/scan.py", line 23, in _import_song
    db.add_song(filepath)
  File "/home/ethan/git/music-metadata-indexer/src/mmindexer/index.py", line 166, in add_song
    for match in self.search("albums", album):
  File "/home/ethan/git/music-metadata-indexer/env/lib/python3.9/site-packages/sqlite_utils/db.py", line 1625, in search
    cursor = self.db.execute(
  File "/home/ethan/git/music-metadata-indexer/env/lib/python3.9/site-packages/sqlite_utils/db.py", line 243, in execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: fts5: syntax error near "."
```

So, the error seems to suggest there was a "." character somewhere in the SQL command that was causing the error. I did a little digging and found this in the docs: https://www.sqlite.org/fts5.html#fts5_strings. "." is one of the many prohibited characters.

My solution was to just strip these out of the query using this line:

```
query = query.translate({e: None for e in itertools.chain(range(0, 26), range(27, 48), range(58, 65), range(91, 95), [96], range(123, 128))})
```

Perhaps this could be included in the table.search() function?
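
A sketch of what that could look like as a quote=True parameter, matching this issue's title; treat the exact API shape as an assumption.

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["albums"].insert({"title": "O.K. Computer"})
db["albums"].enable_fts(["title"])

# quote=True would escape FTS query syntax, so "." cannot raise a syntax error
rows = list(db["albums"].search("O.K.", quote=True))
```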

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/296/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
963897111 MDU6SXNzdWU5NjM4OTcxMTE= 309 sqlite-utils insert errors should show SQL and parameters, if possible scaleoutsean 16622642 closed 0     6 2021-08-09T11:24:14Z 2021-08-09T23:40:29Z 2021-08-09T22:25:58Z NONE  

I've tried several approaches, but this is the current one:

```sh
echo $json-line | sqlite-utils insert json.db jsontable --truncate --alter --detect-types -
```

In all cases, I get this error:

```
OverflowError: Python int too large to convert to SQLite INTEGER

Traceback (most recent call last):
  File "/home/sean/.local/bin/sqlite-utils", line 8, in <module>
    sys.exit(cli())
  File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/cli.py", line 841, in insert
    insert_upsert_implementation(
  File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/cli.py", line 780, in insert_upsert_implementation
    db[table].insert_all(
  File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/db.py", line 2145, in insert_all
    self.insert_chunk(
  File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/db.py", line 1957, in insert_chunk
    result = self.db.execute(query, params)
  File "/home/sean/.local/lib/python3.8/site-packages/sqlite_utils/db.py", line 257, in execute
    return self.conn.execute(sql, parameters)
```

I googled the error and checked SO answers and advice, all good. I changed my JSON file to not use integers so I no longer get this error. Of course, that makes using the database a bit harder, so I also tried to solve the problem by modifying DB structure (while using integers in JSON).

If I change all INTEGER data types to something else (STRING, TEXT) and try to import again using --truncate, I still get this error. I suppose I should tell sqlite-utils which columns should use a non-INTEGER data type rather than rely on it to check my SQL table configuration.

If that is the case, can this error be a bit more specific for easier troubleshooting - maybe tell us which record caused the problem when that error is thrown?

My table has 60+ columns, many of which use 64-bit integers (not all records are large or known in advance), so while I can modify the JSON to use strings instead of integers, it decreases usability, and finding out which records have values too large for SQLite's INTEGER requires some work (I'm thinking about parsing all integers with jq and sorting output by length to identify those columns, but I'd prefer if sqlite-utils could tell me that).

My environment:

  • Python 3.8.10
  • sqlite-utils 3.14
  • pandas 1.3.1
  • numpy 1.21.1
  • sqlite-fts4 1.0.1
  • sqlite 3.31.1-4ubuntu0.2
sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/309/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
940077168 MDU6SXNzdWU5NDAwNzcxNjg= 1389 "searchmode": "raw" in table metadata simonw 9599 closed 0     6 2021-07-08T17:32:10Z 2021-07-10T18:33:13Z 2021-07-10T18:33:13Z OWNER  

http://localhost:8001/index/summary?_search=language%3Aeng&_sort=title&_searchmode=raw

But I'm not able to manage it in the metadata file. Here is mine (note that the sort column is taken into account):

```
{
  "databases": {
    "index": {
      "tables": {
        "summary": {
          "sort": "title",
          "searchmode": "raw"
        }
      }
    }
  }
}
```

Originally posted by @Krazybug in https://github.com/simonw/datasette/issues/759#issuecomment-624860451

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1389/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
926777310 MDU6SXNzdWU5MjY3NzczMTA= 290 `db.query()` method (renamed `db.execute_returning_dicts()`) simonw 9599 closed 0     6 2021-06-22T03:03:54Z 2021-06-24T23:17:38Z 2021-06-24T22:54:43Z OWNER  

Most of this library deals with lists of Python dictionaries - .insert_all(), .rows, .rows_where(), .search().

The db.execute() method is the only thing that returns a sqlite3 cursor.

There is a clumsily named db.execute_returning_dicts(sql) method but it's not currently mentioned in the documentation.

It needs a better name, and needs to be properly documented.
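
A sketch of the renamed method in use, as db.query() appears in later releases, yielding dictionaries:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["dogs"].insert({"name": "Cleo", "age": 5})

for row in db.query("select name, age from dogs"):
    print(row)  # {'name': 'Cleo', 'age': 5}
```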

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/290/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
395236066 MDU6SXNzdWUzOTUyMzYwNjY= 393 CSV export in "Advanced export" pane doesn't respect query ltrgoddard 1727065 closed 0     6 2019-01-02T12:39:41Z 2021-06-17T18:14:24Z 2019-01-03T02:44:10Z NONE  

It looks like there's an inconsistency when exporting to CSV via the web interface. Say I'm looking at songs released in 1989 in the classic-rock/classic-rock-song-list table from the Five Thirty Eight data. The JSON and CSV export links at the top of the page both give me filtered data using Release+Year__exact=1989 in the URL. In the Advanced export tab, though, the CSV option gives me the whole data set, while the JSON options preserve the query.

It may be that this is intended behaviour related to the streaming CSV stuff discussed here, but if that's the case then I think it should be a little clearer.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/393/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
906356331 MDU6SXNzdWU5MDYzNTYzMzE= 263 `sqlite-utils indexes` command simonw 9599 closed 0     6 2021-05-29T04:52:34Z 2021-06-03T04:34:38Z 2021-06-03T04:34:38Z OWNER  

While working on #260 I realized there's no command to show indexes in a database, even though there is one for showing tables and one for triggers.

I should implement #261 first.

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/263/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
520667773 MDU6SXNzdWU1MjA2Njc3NzM= 620 Mechanism for indicating foreign key relationships in the table and query page URLs simonw 9599 open 0     6 2019-11-10T22:26:27Z 2021-04-05T03:57:22Z   OWNER  

Datasette currently only inflates foreign keys (turning them into named hyperlinks) if it detects them as foreign key constraints in the underlying database.

It would be useful if you could specify additional "foreign keys" using both metadata.json and the querystring - similar to how you can pass ?_fts_table=x: https://datasette.readthedocs.io/en/stable/full_text_search.html#configuring-full-text-search-for-a-table-or-view

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/620/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 1
}
   
714449879 MDU6SXNzdWU3MTQ0NDk4Nzk= 992 Change "--config foo:bar" to "--setting foo bar" simonw 9599 closed 0   Datasette 0.52 6055094 6 2020-10-05T01:27:45Z 2020-11-24T20:01:54Z 2020-11-24T20:01:54Z OWNER  

I designed the config format before I had a good feel for CLI design using Click. --config max_page_size 2000 is better than --config max_page_size:2000.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/992/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
737476423 MDU6SXNzdWU3Mzc0NzY0MjM= 198 Support order by relevance against FTS4 simonw 9599 closed 0     6 2020-11-06T05:36:31Z 2020-11-06T18:30:44Z 2020-11-06T18:30:44Z OWNER  

For #192 and #197 I've decided I want to be able to order by relevance in FTS4 as well as FTS5.

This means I need to port over my work on bm25() from https://github.com/simonw/sqlite-fts4 (since I don't want to add a full dependency).

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/198/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
733796942 MDU6SXNzdWU3MzM3OTY5NDI= 1075 PrefixedUrlString mechanism broke everything simonw 9599 closed 0   0.51 6026070 6 2020-10-31T19:58:05Z 2020-10-31T20:48:51Z 2020-10-31T20:48:51Z OWNER  

Added in 7a67bc7a569509d65b3a8661e0ad2c65f0b09166 refs #1026. Lots of tests are failing now.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1075/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
732905360 MDU6SXNzdWU3MzI5MDUzNjA= 1067 Table actions menu on view pages, not on query pages simonw 9599 closed 0   0.51 6026070 6 2020-10-30T05:56:39Z 2020-10-31T17:51:31Z 2020-10-31T17:40:14Z OWNER  

Follow-on from #1066.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1067/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
729096595 MDU6SXNzdWU3MjkwOTY1OTU= 1051 Better display of binary data on arbitrary query results page simonw 9599 closed 0     6 2020-10-25T19:38:06Z 2020-10-29T22:12:16Z 2020-10-29T22:01:39Z OWNER  

https://latest.datasette.io/fixtures?sql=select+rowid%2C+data+from+binary_data+order+by+rowid+limit+101

Problem: if these were larger fields, that HTML page could have multiple megabytes of Python binary string representations on it.

It should behave more like the regular table view does.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1051/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
718723543 MDU6SXNzdWU3MTg3MjM1NDM= 1014 Add Link: pagination HTTP headers simonw 9599 closed 0   0.51 6026070 6 2020-10-10T23:42:40Z 2020-10-23T19:44:05Z 2020-10-11T00:18:51Z OWNER  

Spun off from #782. These can go on all of the JSON endpoints that support pagination.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1014/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
721068929 MDU6SXNzdWU3MjEwNjg5Mjk= 1020 Method for datasette.client() to forward on authentication simonw 9599 open 0     6 2020-10-14T01:47:49Z 2020-10-19T22:45:01Z   OWNER  

I stumbled into this while working on Dogsheep Beta: the requests it re-dispatched through TableView did not carry authentication cookies, and since this was against a private instance they were thus denied.

https://github.com/dogsheep/dogsheep-beta/blob/bed9df2b3ef68189e2e445427721a28f4e9b4887/dogsheep_beta/__init__.py#L223-L231

This made me think that datasette.client.get() (which Dogsheep Beta will start using shortly) could benefit from some kind of utility mechanism for passing through the cookies and general authenticated state from the current request.
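
What that utility might save plugin authors from writing by hand: a hypothetical view that forwards cookies manually. The Response import path and the cookies= pass-through are assumptions here, not a confirmed mechanism.

```python
from datasette import Response


async def proxy_view(datasette, request):
    # Manually forward the caller's cookies so the internal request is
    # treated as the same authenticated actor
    response = await datasette.client.get(
        "/fixtures/facetable.json",
        cookies=request.cookies,
    )
    return Response.json(response.json())
```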

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1020/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
718938889 MDU6SXNzdWU3MTg5Mzg4ODk= 5 Figure out how to display images from <en-media> tags inline in Datasette simonw 9599 open 0     6 2020-10-11T22:17:03Z 2020-10-16T20:16:28Z   MEMBER  

Relates to #1. Evernote XML looks like this:

```xml
<en-note>

This note includes two images.

The Python logo
<en-media hash="61098c2c541de7f0a907c301dd6542da" type="image/svg+xml" width="125"/>

The Evernote logo
<en-media hash="91bd26175acac0b2ffdb6efac199f8ca" type="image/svg+xml" width="125"/>

</en-note>
```

That hash is the md5 we use to store resources. It should be possible to turn these into embedded image tags, especially if done in conjunction with the https://github.com/simonw/datasette-media plugin.
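A sketch of that transformation - the regex and the `/-/media/resources/` URL layout are assumptions about how a media-serving endpoint might be configured:

```python
import re

def embed_media(note_html):
    # Rewrite each <en-media> tag as an <img> pointing at a media-serving
    # endpoint keyed by the resource's md5 hash (URL shape is hypothetical).
    return re.sub(
        r'<en-media hash="([0-9a-f]{32})"[^>]*/>',
        r'<img src="/-/media/resources/\1">',
        note_html,
    )
```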

evernote-to-sqlite 303218369 issue    
{
    "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/5/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
710506708 MDU6SXNzdWU3MTA1MDY3MDg= 978 Rendering glitch with column headings on mobile simonw 9599 closed 0   Datasette 0.50 5971510 6 2020-09-28T19:04:45Z 2020-10-08T23:54:40Z 2020-09-28T22:43:01Z OWNER  

https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_parse%28%2210+october+2020+3pm%22%29%2C%0D%0A++dateutil_easter%28%222020%22%29%2C%0D%0A++dateutil_parse_fuzzy%28%22This+is+due+10+september%22%29%2C%0D%0A++dateutil_parse%28%221%2F2%2F2020%22%29%2C%0D%0A++dateutil_parse%28%222020-03-04%22%29%2C%0D%0A++dateutil_parse_dayfirst%28%222020-03-04%22%29%2C%0D%0A++dateutil_easter%282020%29

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/978/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
314506446 MDU6SXNzdWUzMTQ1MDY0NDY= 214 Ability for plugins to define extra JavaScript and CSS simonw 9599 closed 0     6 2018-04-16T05:29:34Z 2020-09-30T20:36:11Z 2018-04-18T03:13:03Z OWNER  

This can hook into the existing extra_css_urls and extra_js_urls mechanism:

https://github.com/simonw/datasette/blob/b2955d9065ea019500c7d072bcd9d49d1967f051/datasette/app.py#L304-L305

The plugins should be able to bundle their own assets though, so it will also have to integrate with the /static/ static mounts mechanism somehow:

https://github.com/simonw/datasette/blob/b2955d9065ea019500c7d072bcd9d49d1967f051/datasette/app.py#L1255-L1257

Refs #14
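A minimal sketch of the shape these hooks take in the modern plugin API (the plugin and asset names are invented; `/-/static-plugins/` is how packaged plugin assets get mounted):

```python
from datasette import hookimpl

@hookimpl
def extra_css_urls():
    # Serve the plugin's own bundled asset via the static mount mechanism
    return ["/-/static-plugins/my_plugin/app.css"]

@hookimpl
def extra_js_urls():
    # URLs can also be plain strings; dicts allow extra attributes
    return [{"url": "https://code.jquery.com/jquery-3.3.1.slim.min.js"}]
```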

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/214/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
691265198 MDU6SXNzdWU2OTEyNjUxOTg= 7 Mechanism for differentiating between "by me" and "liked by me" simonw 9599 closed 0     6 2020-09-02T17:44:37Z 2020-09-02T21:06:28Z 2020-09-02T21:06:28Z MEMBER  

Some of the content I'm indexing is by me - photos I've taken, tweets I wrote, commits, comments I posted.

Some of it is stuff that I've "liked" or "bookmarked" in some way - favourited tweets, Pocket articles, starred GitHub repos.

It would be useful to be able to differentiate between the two.

dogsheep-beta 197431109 issue    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/7/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
679779797 MDU6SXNzdWU2Nzk3Nzk3OTc= 939 extra_ plugin hooks should take the same arguments simonw 9599 closed 0     6 2020-08-16T16:04:54Z 2020-08-16T18:25:05Z 2020-08-16T16:50:29Z OWNER  
  • [x] extra_css_urls(template, database, table, datasette)
  • [x] extra_js_urls(template, database, table, datasette)
  • [x] extra_body_script(template, database, table, view_name, datasette)
  • [x] extra_template_vars(template, database, table, view_name, request, datasette)

Originally posted by @simonw in https://github.com/simonw/datasette/issues/938#issuecomment-674544691

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/939/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
677326155 MDU6SXNzdWU2NzczMjYxNTU= 930 Datasette sdist is missing templates (hence broken when installing from Homebrew) simonw 9599 closed 0     6 2020-08-12T02:20:16Z 2020-08-12T03:30:59Z 2020-08-12T03:30:59Z OWNER  

Pretty nasty bug this: I'm getting 500 errors for all pages that try to render a template after installing the newly released Datasette 0.47 - both from pip install and via Homebrew.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/930/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
644309017 MDU6SXNzdWU2NDQzMDkwMTc= 864 datasette.add_message() doesn't work inside plugins simonw 9599 closed 0   Datasette 0.45 5533512 6 2020-06-24T04:30:06Z 2020-06-29T00:51:01Z 2020-06-29T00:51:01Z OWNER  

Similar problem to #863 - calling datasette.add_message() in a view registered using the register_routes() plugin hook doesn't work, because the code that writes accumulated messages to the ds_messages signed cookie lives in the BaseView class here:

https://github.com/simonw/datasette/blob/28bb1c51897f3956861755e345e18b8e0b1423ac/datasette/views/base.py#L94-L97
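For reference, this is the pattern that needed to work - a sketch of a register_routes() view calling add_message() (route and message are invented):

```python
from datasette import hookimpl
from datasette.utils.asgi import Response

@hookimpl
def register_routes():
    async def set_message(datasette, request):
        # This is the call that had no effect: the cookie-writing step
        # lived in BaseView, which register_routes() views never pass through
        datasette.add_message(request, "Hello from a plugin")
        return Response.redirect("/")

    return [(r"^/-/send-message$", set_message)]
```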

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/864/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
637342551 MDU6SXNzdWU2MzczNDI1NTE= 834 startup() plugin hook simonw 9599 closed 0   Datasette 0.45 5533512 6 2020-06-11T21:48:14Z 2020-06-28T19:38:50Z 2020-06-13T17:56:12Z OWNER  

It might be useful to have a startup() hook which gets passed the datasette object as soon as Datasette has finished initializing.

My initial use-case for this is configuration verification - checking that the "plugins" configuration block for this plugin contains valid details.

I imagine there are plenty of other potential uses for this as well.
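A sketch of the configuration-verification use case (the plugin name and setting are invented):

```python
from datasette import hookimpl

@hookimpl
def startup(datasette):
    # Fail fast if this plugin's configuration block is missing or invalid
    config = datasette.plugin_config("my-plugin") or {}
    if "api_key" not in config:
        raise ValueError("my-plugin requires an api_key setting")
```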

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/834/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
348043884 MDU6SXNzdWUzNDgwNDM4ODQ= 357 Plugin hook for loading metadata.json simonw 9599 open 0     6 2018-08-06T19:00:01Z 2020-06-21T22:19:58Z   OWNER  

For https://github.com/simonw/russian-ira-facebook-ads-datasette/tree/af6d956995e14afd585c35a6a06bb01da32043ba I wrote a script to convert YAML to JSON because YAML is a better format for embedding multi-line HTML descriptions and canned SQL statements.

Example yaml metadata file: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/af6d956995e14afd585c35a6a06bb01da32043ba/russian-ads-metadata.yaml

It would be useful if Datasette could be fed a YAML file directly:

datasette -m metadata.yaml

Question is... should this be a native feature (hence adding a YAML dependency) or should it be handled by a datasette-metadata-yaml plugin, using a new plugin hook for loading metadata? If so, what would other use-cases for that plugin hook be?
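The conversion script itself is tiny, which is part of the argument for just supporting YAML natively - a sketch with PyYAML:

```python
import json
import sys

import yaml  # PyYAML - the dependency the plugin route would avoid adding

# Read metadata.yaml, emit the equivalent metadata.json
with open("metadata.yaml") as fp:
    metadata = yaml.safe_load(fp)
json.dump(metadata, sys.stdout, indent=4)
```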

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/357/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
529429214 MDU6SXNzdWU1Mjk0MjkyMTQ= 642 Provide a cookiecutter template for creating new plugins simonw 9599 closed 0   Datasette 1.0 3268330 6 2019-11-27T15:46:36Z 2020-06-20T03:20:33Z 2020-06-20T03:20:25Z OWNER  

See this conversation: https://twitter.com/psychemedia/status/1199707352540368896

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/642/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
638241779 MDU6SXNzdWU2MzgyNDE3Nzk= 846 "Too many open files" error running tests simonw 9599 closed 0     6 2020-06-13T22:11:40Z 2020-06-14T00:26:31Z 2020-06-14T00:26:31Z OWNER  

I got this on my laptop:

```
$ pytest
...
/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.7/site-packages/jinja2/loaders.py:171: in get_source
    f = open_if_exists(filename)

filename = '/Users/simon/Dropbox/Development/datasette/datasette/templates/400.html', mode = 'rb'

    def open_if_exists(filename, mode='rb'):
        """Returns a file descriptor for the filename if that file exists,
        otherwise `None`.
        """
        try:
>           return open(filename, mode)
E           OSError: [Errno 24] Too many open files: '/Users/simon/Dropbox/Development/datasette/datasette/templates/400.html'

/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.7/site-packages/jinja2/utils.py:154: OSError
```

Based on the conversation in https://github.com/pytest-dev/pytest/issues/2970 I'm worried that my tests are opening too many files without closing them.

In particular... I call sqlite3.connect(filepath) a LOT - and I don't ever call conn.close() on those opened connections:

https://github.com/simonw/datasette/blob/cf7a2bdb404734910ec07abc7571351a2d934828/datasette/database.py#L58-L60

Could this be resulting in my tests eventually opening too many unclosed file handles? How could I confirm this?
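One way to check the hypothesis - a sketch using psutil (not part of the test suite) to count this process's open file descriptors before and after the suspect code runs:

```python
import os

import psutil  # third-party; pip install psutil

proc = psutil.Process(os.getpid())
before = len(proc.open_files())
# ... run the suspect code, e.g. a chunk of the test suite ...
after = len(proc.open_files())
print(f"open file handles: {before} -> {after}")
```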

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/846/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
631932926 MDU6SXNzdWU2MzE5MzI5MjY= 801 allow_by_query setting for configuring permissions with a SQL statement simonw 9599 closed 0   Datasette 1.0 3268330 6 2020-06-05T20:30:19Z 2020-06-11T18:58:56Z 2020-06-11T18:58:49Z OWNER  

Idea: an "allow_sql" key with a SQL query that gets passed the actor JSON as :actor and can extract the relevant keys from it and return 1 or 0.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/698#issuecomment-639787304

See also #800
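A sketch of what that might look like, shown here as a Python dict rather than raw metadata.json - the query shape follows the idea above, with json_extract() pulling keys out of :actor (database and table names are invented):

```python
# Hypothetical "allow_sql" configuration: the query receives the actor as
# a JSON string bound to :actor and returns 1 (allowed) or 0 (denied).
metadata = {
    "databases": {
        "mydatabase": {
            "allow_sql": (
                "select count(*) from authorized_users "
                "where id = json_extract(:actor, '$.id')"
            )
        }
    }
}
```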

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/801/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
636426530 MDU6SXNzdWU2MzY0MjY1MzA= 829 Ability to set ds_actor cookie such that it expires simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-10T17:31:40Z 2020-06-10T19:41:35Z 2020-06-10T19:40:05Z OWNER  

I need this for datasette-auth-github: https://github.com/simonw/datasette-auth-github/issues/62#issuecomment-642152076
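A sketch of the mechanism, assuming an expiry timestamp gets baked into the signed cookie value - the "e" key and its raw-timestamp encoding here are invented, and Datasette's real encoding may differ:

```python
import time

from datasette.utils.asgi import Response

def login_response(datasette, actor, ttl=3600):
    # Hypothetical: bundle an expiry alongside the actor, sign both,
    # and set the result as the ds_actor cookie
    response = Response.redirect("/")
    response.set_cookie(
        "ds_actor",
        datasette.sign({"a": actor, "e": int(time.time()) + ttl}, "actor"),
    )
    return response
```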

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/829/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
632673972 MDU6SXNzdWU2MzI2NzM5NzI= 804 python tests/fixtures.py command has a bug simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-06T19:17:36Z 2020-06-09T20:01:30Z 2020-06-09T19:58:34Z OWNER  

This command is meant to write out fixtures.db, metadata.json and a plugins directory:

```
$ python tests/fixtures.py /tmp/fixtures.db /tmp/metadata.json /tmp/plugins/
Test tables written to /tmp/fixtures.db - metadata written to /tmp/metadata.json
Traceback (most recent call last):
  File "tests/fixtures.py", line 833, in <module>
    ("my_plugin.py", PLUGIN1),
NameError: name 'PLUGIN1' is not defined
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/804/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
635147716 MDU6SXNzdWU2MzUxNDc3MTY= 825 Way to enable a default=False permission for anonymous users simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-09T06:26:27Z 2020-06-09T17:19:19Z 2020-06-09T17:01:10Z OWNER  

I'd like plugins to be able to ship with a default that says "anonymous users cannot do this", but allow site administrators to over-ride that such that anonymous users can use the feature after all.

This is tricky because right now the anonymous user doesn't have an actor dictionary at all, so there's no key to match to an allow block.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/825/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
634139848 MDU6SXNzdWU2MzQxMzk4NDg= 813 Mechanism for specifying allow_sql permission in metadata.json simonw 9599 closed 0   Datasette 0.44 5512395 6 2020-06-08T04:57:19Z 2020-06-09T00:09:57Z 2020-06-09T00:07:19Z OWNER  

Split from #811. It would be useful if finely-grained permissions configured in metadata.json could be used to specify if a user is allowed to execute arbitrary SQL queries.

We have a permission check call for this already: https://github.com/simonw/datasette/blob/9397d718345c4b35d2a5c55bfcbd1468876b5ab9/datasette/views/database.py#L159

But there's currently no way to implement this check without writing a plugin.

I think a "allow_sql": {...} block at the database level in metadata.json (sibling to the current "allow" block for that database implemented in #811) would be a good option for this.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/813/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
585633142 MDU6SXNzdWU1ODU2MzMxNDI= 706 Documentation for the "request" object simonw 9599 closed 0   Datasette 1.0 3268330 6 2020-03-22T02:55:50Z 2020-05-30T13:20:00Z 2020-05-27T22:31:22Z OWNER  

Since that object is passed to the extra_template_vars hooks AND the classes registered by register_facet_classes it should be part of the documented interface on https://datasette.readthedocs.io/en/stable/internals.html

I could also start passing it to the register_output_renderer callback.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/706/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
612382643 MDU6SXNzdWU2MTIzODI2NDM= 758 Question: Access to immutable database-path clausjuhl 2181410 open 0     6 2020-05-05T07:01:18Z 2020-05-28T08:23:27Z   NONE  

Hi Simon

Is there anywhere in the app-context where one can access the hashed urlpath of the database? Currently it's included in the template-context (databases[0]["path"]) when rendering URLs of the database (eg. /db-44b06v9/cases...), but where can I find the hashed URL when rendering the index-page? I'm trying to avoid redirects. Thanks!

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/758/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
613755043 MDU6SXNzdWU2MTM3NTUwNDM= 110 Support decimal.Decimal type dvhthomas 134771 closed 0     6 2020-05-07T03:57:19Z 2020-05-11T01:58:20Z 2020-05-11T01:50:11Z NONE  

Decimal types in Postgres cause a failure in db.py data type selection

I have a Django app using a MoneyField, which uses a numeric(14,0) data type in Postgres (https://www.postgresql.org/docs/9.3/datatype-numeric.html). When attempting to export that table I get the following error:

```bash
$ db-to-sqlite --table isaweb_proposal "postgres://connection" test.db
....
    column_type=COLUMN_TYPE_MAPPING[column_type],
KeyError: <class 'decimal.Decimal'>
```

Looking at sqlite_utils/db.py around line 292, it's clear that there is no matching type for what I assume SQLAlchemy interprets as Python decimal.Decimal.

From the SQLite docs it looks like DECIMAL columns in other databases are treated as NUMERIC.

I'm not quite sure if it's as simple as adding a data type to that list or if there are repercussions beyond it.

Thanks for a great tool!
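If it is that simple, the change is a one-liner - a sketch; whether FLOAT is the right affinity for money values is a judgment call:

```python
from decimal import Decimal

from sqlite_utils.db import COLUMN_TYPE_MAPPING

# Hypothetical patch: map Python Decimal values to SQLite's FLOAT affinity
COLUMN_TYPE_MAPPING[Decimal] = "FLOAT"
```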

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/110/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
610408908 MDU6SXNzdWU2MTA0MDg5MDg= 34 Command for retrieving dependents for a repo simonw 9599 closed 0     6 2020-04-30T21:47:51Z 2020-05-03T15:53:01Z 2020-05-03T15:53:01Z MEMBER  

I really, really want to start grabbing this data: https://github.com/simonw/datasette/network/dependents

github-to-sqlite 207052882 issue    
{
    "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/34/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
573583971 MDU6SXNzdWU1NzM1ODM5NzE= 689 "Templates considered" comment broken in >=0.35 chrishas35 35075 closed 0     6 2020-03-01T17:31:21Z 2020-04-05T19:39:44Z 2020-04-05T19:39:44Z NONE  

Noticed that the "Templates Considered" comment is missing in 0.37. Believe I traced it back to #664 as you can see it in https://v0-34.datasette.io/ but not https://v0-35.datasette.io/. Looking at the template context debug between the two you can see what is missing from 0.35 vs. 0.34:

```diff
< "datasette_version": "0.34",
< "app_css_hash": "ffa51a",
< "select_templates": [
<     "*index.html"
< ],
< "zip": "<class 'zip'>",
< "body_scripts": [],
< "extra_css_urls": "<generator object BaseView._asset_urls at 0x7f6529ac05f0>",
< "extra_js_urls": "<generator object BaseView._asset_urls at 0x7f6529ac0660>",
< "format_bytes": "<function format_bytes at 0x7f652a1588b0>",
< "database_url": "<bound method BaseView.database_url of <datasette.views.index.IndexView object at 0x7f6529b03e50>>",
< "database_color": "<bound method BaseView.database_color of <datasette.views.index.IndexView object at 0x7f6529b03e50>>"
---
> "datasette_version": "0.35",
> "database_url": "<bound method BaseView.database_url of <datasette.views.index.IndexView object at 0x7f6140dacd90>>",
> "database_color": "<bound method BaseView.database_color of <datasette.views.index.IndexView object at 0x7f6140dacd90>>"
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/689/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
592829135 MDU6SXNzdWU1OTI4MjkxMzU= 713 Support YAML in metadata - metadata.yaml simonw 9599 closed 0     6 2020-04-02T18:10:05Z 2020-04-02T19:36:17Z 2020-04-02T19:30:55Z OWNER  

I was originally going to do this with a plugin - see #357 - but the more I work with metadata.json the more I want it to just accept YAML as an optional alternative to JSON.

The best example why is still this one: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/master/russian-ads-metadata.yaml

YAML is just SO much better than JSON for multi-line strings - in particular HTML and SQL, both of which are common in metadata.json files.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/713/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
545407916 MDU6SXNzdWU1NDU0MDc5MTY= 73 upsert_all() throws issue when upserting to empty table psychemedia 82988 closed 0     6 2020-01-05T11:58:57Z 2020-01-31T14:21:09Z 2020-01-05T17:20:18Z NONE  

If I try to add a list of dicts to an empty table using upsert_all, I get an error:

```python
import sqlite3

import pandas as pd
from sqlite_utils import Database

conx = sqlite3.connect(':memory:')
cx = conx.cursor()
cx.executescript('CREATE TABLE "test" ("Col1" TEXT);')

q = "SELECT * FROM test;"
pd.read_sql(q, conx)  # shows empty table

db = Database(conx)
db['test'].upsert_all([{'Col1': 'a'}, {'Col1': 'b'}])
```

```
TypeError                                 Traceback (most recent call last)
<ipython-input-74-8c26d93d7587> in <module>
      1 db = Database(conx)
----> 2 db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}])

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)
   1157             alter=alter,
   1158             extracts=extracts,
-> 1159             upsert=True,
   1160         )
   1161

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)
   1040             sql = "INSERT OR IGNORE INTO {table} VALUES({pk_placeholders});".format(
   1041                 table=self.name,
-> 1042                 pks=", ".join(["[{}]".format(p) for p in pks]),
   1043                 pk_placeholders=", ".join(["?" for p in pks]),
   1044             )

TypeError: 'NoneType' object is not iterable
```

A hacky workaround in use is:

```python
try:
    db['test'].upsert_all([{'Col1': 'a'}, {'Col1': 'b'}])
except TypeError:
    db['test'].insert_all([{'Col1': 'a'}, {'Col1': 'b'}])
```
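For what it's worth, upsert_all() needs a primary key to decide which existing row each record matches against, so passing pk= sidesteps the error above (a sketch, assuming Col1 is meant to act as the key):

```python
# Tell upsert_all() which column identifies existing rows
db['test'].upsert_all([{'Col1': 'a'}, {'Col1': 'b'}], pk='Col1')
```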

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
527670799 MDU6SXNzdWU1Mjc2NzA3OTk= 639 updating metadata.json without recreating the app pkoppstein 172847 open 0     6 2019-11-24T09:19:53Z 2019-11-30T06:08:50Z   NONE  

I've successfully "uploaded" an SQLite database (with a metadata.json file) to Heroku using:

$ datasette publish heroku so-sales.db -m metadata.json -n so-sales

The question is: how can I modify the (small) metadata.json file without having to upload the (large) SQLite database.

The directions on heroku indicate I should run:

heroku git:clone -a so-sales

But this just results in an empty directory with a warning: warning: You appear to have cloned an empty repository.

I've been able to "clone" the heroku "app" using the command:

$ heroku slugs:download -a so-sales

but this is not a git repository....

Ideally, it seems to me, there'd be an option in the datasette CLI to allow a file to be updated, or there'd be some way to create a local git "clone" of the app so that the heroku instructions for "Deploying with git" would apply.

(p.s. I ran datasette publish heroku -m metadata.json -n so-sales in the hope that that would not cause the .db file to be wiped, but of course it was.)

(p.p.s. Thanks for Datasette!)

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/639/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
513008936 MDU6SXNzdWU1MTMwMDg5MzY= 608 Improve UI of "datasette publish cloudrun" to reduce chances of accidentally over-writing a service simonw 9599 closed 0     6 2019-10-27T19:21:28Z 2019-11-08T02:51:36Z 2019-11-08T02:48:46Z OWNER  

The concept of a "service" in Cloud Run is crucial: if you deploy to the same service, you will over-write what you deployed there last!

As such, I'd like to make service a required positional argument for publish cloudrun:

datasette publish cloudrun my-service one.db two.db three.db
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/608/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
512996469 MDU6SXNzdWU1MTI5OTY0Njk= 607 Ways to improve fuzzy search speed on larger data sets? zeluspudding 8431341 closed 0     6 2019-10-27T17:31:37Z 2019-11-07T03:38:10Z 2019-11-07T03:38:10Z NONE  

I have an sqlite table with 16 million rows in it. Having read @simonw's article "Fast Autocomplete Search for Your Website" I was curious to try datasette to see what kind of query performance I could get out of it. In truth I don't need to do full text search since all I would like to do is give my users a way to search for the names of investors such as "Warren Buffet" or "Tim Cook" (whose names are in a single column).

On the first search, Datasette takes over 20 seconds to return all records associated with Elon Musk.

If I rerun the same search, it then takes almost 9 seconds.

That's far too slow to implement an autocomplete feature. I could reduce the latency by making a special table of only unique investor names, thereby reducing the search space to less than a million rows (then I'd need to implement a way to add only new investor names to the table as I received new data... about 4,000 rows a day). If I did that, I'm still concerned the new table wouldn't be lean enough to look up investor names quickly. Plus, even if I can implement the autocomplete feature, I would still have to look up records for that investor, which would take between 8 and 20 seconds.

Are there any tricks for speeding this up?
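A sketch of the usual approach described above - a distinct-names lookup table plus prefix matching (the table and column names here are guesses at the schema, and note SQLite only uses an index for LIKE 'x%' if the column collation and case_sensitive_like settings allow it):

```python
import sqlite3

conn = sqlite3.connect("investors.db")
# Build the small lookup table once, then keep it topped up as data arrives;
# the text primary key gives us an index for prefix queries
conn.executescript(
    """
    create table if not exists investor_names (name text primary key);
    insert or ignore into investor_names
        select distinct investor_name from records;
    """
)
rows = conn.execute(
    "select name from investor_names where name like ? limit 10",
    ("warren b%",),
).fetchall()
```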

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/607/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
488833975 MDU6SXNzdWU0ODg4MzM5NzU= 3 Command for running a search and saving tweets for that search simonw 9599 closed 0     6 2019-09-03T21:29:56Z 2019-11-04T05:31:56Z 2019-11-04T05:31:16Z MEMBER  
$ twitter-to-sqlite search dogsheep
twitter-to-sqlite 206156866 issue    
{
    "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/3/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
459621683 MDU6SXNzdWU0NTk2MjE2ODM= 521 Easier way of creating custom row templates simonw 9599 closed 0     6 2019-06-23T21:49:27Z 2019-07-03T03:23:56Z 2019-07-03T03:23:56Z OWNER  

I was messing around with a custom _rows_and_columns.html template and ended up with this:

```html
{% for row in display_rows %}
  {% for cell in row %}
    {% if cell.column == "First_Name" %}
      {{ cell.value }}
    {% elif cell.column == "Last_Name" %}
      {{ cell.value }}
    {% elif cell.column == "Short_Description" %}
      {{ cell.column }}: {{ cell.value }}
    {% else %}
      {{ cell.column }}: {{ cell.value }}&nbsp;
    {% endif %}
  {% endfor %}
{% endfor %}
```

This is nasty. I'd like to be able to do something like this instead:

```html
{% for row in display_rows %}
  {{ row["First_Name"] }} {{ row["Last_Name"] }}
  ...
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/521/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
449818897 MDU6SXNzdWU0NDk4MTg4OTc= 24 Additional Column Constraints? IgnoredAmbience 98555 closed 0     6 2019-05-29T13:47:03Z 2019-06-13T06:47:17Z 2019-06-13T06:30:26Z NONE  

I'm looking to import data from XML with a pre-defined schema that maps fairly closely to a relational database. In particular, it has explicit annotations for when fields are required, optional, or when a default value should be inferred.

Would there be value in adding the ability to define NOT NULL and DEFAULT column constraints to sqlite-utils?
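For reference, sqlite-utils did grow this ability - a sketch using the not_null= and defaults= arguments of insert():

```python
from sqlite_utils import Database

db = Database("records.db")
db["people"].insert(
    {"id": 1, "name": "Cleo", "score": 5},
    pk="id",
    not_null={"name"},        # NOT NULL constraint on the name column
    defaults={"score": 1.0},  # DEFAULT value for the score column
)
```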

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/24/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
349827640 MDU6SXNzdWUzNDk4Mjc2NDA= 359 Faceted browse against a JSON list of tags simonw 9599 closed 0     6 2018-08-12T17:01:14Z 2019-05-29T21:39:12Z 2019-05-03T00:21:44Z OWNER  

If a table has a ["foo", "bar", "baz"] JSON column, allow that to be faceted against (a sketch of the underlying SQL follows the checklist below).

  • [x] Support ?column__arraycontains=x filter queries
  • [x] Support ?_facet_array=column faceting
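Both features reduce to SQLite's json_each() table-valued function, which expands a JSON array into joinable rows - a sketch against placeholder table and column names:

```python
import sqlite3

conn = sqlite3.connect("fixtures.db")

# Facet counts across a JSON array column:
facets = conn.execute(
    """
    select j.value as tag, count(*) as n
    from facetable, json_each(facetable.tags) j
    group by tag order by n desc
    """
).fetchall()

# The __arraycontains filter reduces to a membership test:
rows = conn.execute(
    """
    select * from facetable
    where rowid in (
        select facetable.rowid from facetable, json_each(facetable.tags) j
        where j.value = ?
    )
    """,
    ("tag1",),
).fetchall()
```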
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/359/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
322787470 MDU6SXNzdWUzMjI3ODc0NzA= 259 inspect() should detect many-to-many relationships simonw 9599 closed 0     6 2018-05-14T12:03:58Z 2019-05-23T03:55:37Z 2019-05-23T03:55:37Z OWNER  

Relates to #255 - in particular supporting facets across M2M relationships.

It should be possible for .inspect() to notice when a table has two foreign keys to two different tables, and assume that this means there is a M2M relationship between those tables.

When rendering a table with a m2m relationship we could display the first X associated records as a comma separated list of hyperlinks in a new column on the table view, with a column name derived from the table on the other side.

Since SQLite doesn't have RANK or an equivalent of https://www.xaprb.com/blog/2006/12/02/how-to-number-rows-in-mysql/ this would be implemented as N+1 queries (one query per cell that we want to display an m2m summary). This should be OK in SQLite: https://sqlite.org/np1queryprob.html
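A sketch of the detection heuristic described above (the foreign-key introspection shape is simplified):

```python
def looks_like_m2m(foreign_keys):
    # A table with exactly two foreign keys pointing at two *different*
    # tables is treated as the join table of a many-to-many relationship
    targets = {fk["other_table"] for fk in foreign_keys}
    return len(foreign_keys) == 2 and len(targets) == 2

assert looks_like_m2m([{"other_table": "actors"}, {"other_table": "films"}])
```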

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/259/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
310533258 MDU6SXNzdWUzMTA1MzMyNTg= 191 Figure out how to bundle a more up-to-date SQLite simonw 9599 closed 0     6 2018-04-02T16:33:25Z 2018-07-10T17:46:13Z 2018-07-10T17:46:13Z OWNER  

The version of SQLite that ships with Python 3 is a bit limited - it doesn't support row values for example https://www.sqlite.org/rowvalue.html

Figure out how to bundle a more recent SQLite engine with datasette. We need to figure out two cases:

  • Bundling a recent version in a Dockerfile build. I expect this to be quite easy.
  • Making a more recent version available to people hacking around in Mac OS X. I have no idea how to start on this.

I want it working on Mac OS X too because I don't want to force Docker as a dependency for anyone who just want to hack around with Datasette a little and run the test suite.
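For what it's worth, checking which engine you're actually getting is one line, and swapping in a standalone build via pysqlite3 is a common pattern today (an aside, not something from this issue):

```python
import sqlite3

print(sqlite3.sqlite_version)  # version of the SQLite engine Python bundled

# If pysqlite3 (a standalone build of the sqlite3 module) is installed,
# prefer it for a newer engine:
try:
    import pysqlite3 as sqlite3  # noqa: F811
except ImportError:
    pass
print(sqlite3.sqlite_version)
```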

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/191/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
333086005 MDU6SXNzdWUzMzMwODYwMDU= 313 Deploy demo of Datasette on every commit that passes tests simonw 9599 closed 0     6 2018-06-17T19:19:12Z 2018-06-17T21:52:58Z 2018-06-17T21:52:58Z OWNER  

We can use Travis CI and Zeit Now to ensure there is always a live demo of current master. We can ship archived demos for releases as well.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/313/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
314471743 MDU6SXNzdWUzMTQ0NzE3NDM= 211 Load plugins from a `--plugins-dir=plugins/` directory simonw 9599 closed 0     6 2018-04-16T01:17:43Z 2018-04-16T05:22:02Z 2018-04-16T05:22:02Z OWNER  

In #14 and 33c7c53ff87c2 I've added working support for setuptools entry_points plugins. These can be installed from PyPI using pip install ....

I imagine some projects will benefit from being able to add plugins without first publishing them to PyPI. Datasette already supports loading custom templates like so:

datasette serve --template-dir=mytemplates/ mydb.db

I propose an additional option, --plugins-dir= which specifies a directory full of blah.py files which will be loaded into Datasette when the application server starts.

datasette serve --plugins-dir=myplugins/ mydb.db

This will also need to be supported by datasette publish as those Python files should be copied up as part of the deployment.
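A sketch of the kind of one-off plugin file this would load - the prepare_connection hook registers a custom SQL function:

```python
# myplugins/hello_world.py
from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    # Adds a hello_world() function callable from any SQL query
    conn.create_function("hello_world", 0, lambda: "Hello world!")
```

After starting with --plugins-dir=myplugins/, running select hello_world() should return the string.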

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/211/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
280315352 MDU6SXNzdWUyODAzMTUzNTI= 167 Nasty bug: last column not being correctly displayed simonw 9599 closed 0   Custom templates edition 2949431 6 2017-12-07T23:23:46Z 2017-12-10T01:00:21Z 2017-12-10T01:00:20Z OWNER  

e.g. https://datasette-bwnojrhmmg.now.sh/dk3-bde9a9a/dk?source__contains=http

The JSON output shows that the column is there, but is being displayed incorrectly:

https://datasette-bwnojrhmmg.now.sh/dk3-bde9a9a/dk.jsono?source__contains=http

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/167/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
273678673 MDU6SXNzdWUyNzM2Nzg2NzM= 85 Detect foreign keys and use them to link HTML pages together simonw 9599 closed 0   Foreign key edition 2919870 6 2017-11-14T06:12:05Z 2017-11-19T06:08:19Z 2017-11-19T06:08:19Z OWNER  

https://stackoverflow.com/a/44430157/6083 documents the PRAGMA needed to extract foreign key references for a table.

At a minimum we can link column values known to be foreign keys to the corresponding row page. We could try to summarize the linked row in some way too - somehow extracting a sensible link title, maybe based on additional configuration in the metadata.json file.
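For reference, a sketch of running that PRAGMA from Python (table name is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("mydb.db")
# Each row: (id, seq, referenced_table, from_column, to_column,
#            on_update, on_delete, match)
for fk in conn.execute("PRAGMA foreign_key_list([my_table])"):
    print(fk)
```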

Still todo:

  • [x] Fix it so the csvs-to-sqlite refactoring command correctly creates primary keys on generated tables
  • [x] Ship new csvs-to-sqlite with refactoring command
  • [x] Refactor column logic to be more predictable in our templates (the rowid special case)
  • [x] Mechanism by which table metadata can specify the "label" column for a table
  • [x] Automatically set the label column as the first column that isn't a primary key (falling back on primary key)
  • [x] Code which runs a "select id, label from table where id in (...)" query as part of the tableview and populates a lookup dictionary
  • [x] Modify templates to use values from that lookup dictionary
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/85/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
267726219 MDU6SXNzdWUyNjc3MjYyMTk= 16 Default HTML/CSS needs to look reasonable and be responsive simonw 9599 closed 0   Ship first public release 2857392 6 2017-10-23T16:05:22Z 2017-11-11T20:19:07Z 2017-11-11T20:19:07Z OWNER  

Version one should have the following characteristics:

  • Looks OK
  • Works great on mobile
  • Loads extremely fast
  • No JavaScript! At least not in v1.
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/16/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
267788884 MDU6SXNzdWUyNjc3ODg4ODQ= 23 Support Django-style filters in querystring arguments simonw 9599 closed 0   Ship first public release 2857392 6 2017-10-23T19:29:42Z 2017-10-25T04:23:03Z 2017-10-25T04:23:02Z OWNER  

e.g.

/database/table?name__contains=Simon&age__gte=4

Same format as Django: double underscore as the split.

If you need to match against a column that happens to contain a double underscore in its official name, do this:

/database/table?weird__column__exact=Simon

__exact is the default operation if none is supplied.
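A sketch of the parsing rule that falls out of this design (the set of known operations here is abbreviated):

```python
KNOWN_OPS = {"exact", "contains", "gte", "gt", "lte", "lt"}  # abbreviated

def parse_filter_key(key):
    # Split on the *last* double underscore; if the suffix isn't a known
    # operation, the underscores belong to the column name itself
    if "__" in key:
        column, _, op = key.rpartition("__")
        if op in KNOWN_OPS:
            return column, op
    return key, "exact"

assert parse_filter_key("name__contains") == ("name", "contains")
assert parse_filter_key("weird__column") == ("weird__column", "exact")
assert parse_filter_key("weird__column__exact") == ("weird__column", "exact")
```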

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/23/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
267513424 MDU6SXNzdWUyNjc1MTM0MjQ= 1 Addressable pages for every row in a table simonw 9599 closed 0   Ship first public release 2857392 6 2017-10-23T00:44:16Z 2017-10-24T14:11:04Z 2017-10-24T14:11:03Z OWNER  
/database-name-7sha256/table-name/compound-pk
/database-name-7sha256/table-name/compound-pk.json

Tricky part will be figuring out what the primary key is - especially since it could be a compound primary key and it might involve different data types.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);