
issues


12 rows where "updated_at" is on date 2019-06-24 and user = 9599 sorted by updated_at descending



Facets:

  • type: issue 11, pull 1
  • state: closed 11, open 1
  • repo: datasette 12
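The row count and filters above boil down to one SQL query. A minimal, self-contained sketch using Python's sqlite3 with a cut-down stand-in for the issues table (the real table has many more columns; see the schema at the bottom of the page):

```python
import sqlite3

# Stand-in for the github-to-sqlite issues table, with one row matching
# the page's filter and one row on a different date.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE issues (id INTEGER PRIMARY KEY, number INTEGER, "
    "title TEXT, [user] INTEGER, state TEXT, updated_at TEXT)"
)
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [
        (460095928, 528,
         "Establish a pattern for Datasette plugins built on top of Pandas",
         9599, "open", "2019-06-24T21:05:52Z"),
        (324188953, 272, "Port Datasette to ASGI",
         9599, "closed", "2019-06-23T15:18:42Z"),  # different date: filtered out
    ],
)

# The page's filter: updated_at on 2019-06-24, user 9599, newest first
rows = conn.execute(
    """
    SELECT number, title FROM issues
    WHERE date(updated_at) = '2019-06-24' AND [user] = 9599
    ORDER BY updated_at DESC
    """
).fetchall()
print(rows)
```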
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
460095928 MDU6SXNzdWU0NjAwOTU5Mjg= 528 Establish a pattern for Datasette plugins built on top of Pandas simonw 9599 open 0     0 2019-06-24T21:05:52Z 2019-06-24T21:05:52Z   OWNER  

The Pandas ecosystem is huge, varied and full of tools that are really good at doing interesting analysis on top of tabular data.

Pandas should not be a dependency of Datasette core, but I think there is a lot of potential in having plugins which use Pandas to apply interesting analysis to data sucked out of Datasette's SQLite tables.

One example (thanks, Tony): https://github.com/ResidentMario/missingno could form the basis of a fantastic plugin for getting a high-level overview of how complete each column in a table is.

Some thought is needed here about what shape these kinds of plugins might take, and what plugin hooks they would use.
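As a feel for the kind of summary a missingno-style plugin might surface, here is a hedged sketch (the function name is made up) that computes per-column completeness straight from SQLite, no Pandas required:

```python
import sqlite3

def column_completeness(conn, table):
    """Return {column: fraction of non-NULL values} for a table --
    the sort of overview a missingno-style plugin might render."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info([{table}])")]
    total = conn.execute(f"SELECT count(*) FROM [{table}]").fetchone()[0]
    if total == 0:
        return {c: None for c in cols}
    # count(col) counts only non-NULL values, so this is the fill rate
    return {
        c: conn.execute(f"SELECT count([{c}]) FROM [{table}]").fetchone()[0] / total
        for c in cols
    }

# Demo on a tiny table where closed_at is half missing
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER, closed_at TEXT)")
conn.executemany("INSERT INTO issues VALUES (?, ?)",
                 [(1, None), (2, "2019-06-24T16:29:43Z")])
result = column_completeness(conn, "issues")
print(result)
```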

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/528/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
459714943 MDU6SXNzdWU0NTk3MTQ5NDM= 525 Add section on sqlite-utils enable-fts to the search documentation simonw 9599 closed 0 simonw 9599   2 2019-06-24T06:39:16Z 2019-06-24T16:36:35Z 2019-06-24T16:29:43Z OWNER  

https://datasette.readthedocs.io/en/stable/full_text_search.html already has a section about csvs-to-sqlite; sqlite-utils is even more relevant.
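What `sqlite-utils enable-fts` sets up is an FTS virtual table alongside the original data. A rough stdlib sketch of the same idea, assuming FTS5 is compiled into the bundled SQLite (it is in most Python builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.execute(
    "INSERT INTO docs VALUES (1, 'Full-text search', 'Datasette supports SQLite FTS')"
)

# Roughly what `sqlite-utils enable-fts data.db docs title body` creates:
# an external-content FTS5 table indexed over the chosen columns.
conn.execute(
    "CREATE VIRTUAL TABLE docs_fts USING fts5("
    "title, body, content='docs', content_rowid='id')"
)
conn.execute("INSERT INTO docs_fts (rowid, title, body) "
             "SELECT id, title, body FROM docs")

hits = conn.execute(
    "SELECT rowid FROM docs_fts WHERE docs_fts MATCH 'datasette'"
).fetchall()
print(hits)
```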

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/525/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
459587155 MDExOlB1bGxSZXF1ZXN0MjkwODk3MTA0 518 Port Datasette from Sanic to ASGI + Uvicorn simonw 9599 closed 0 simonw 9599 Datasette 1.0 3268330 12 2019-06-23T15:18:42Z 2019-06-24T13:42:50Z 2019-06-24T03:13:09Z OWNER simonw/datasette/pulls/518

Most of the code here was fleshed out in comments on #272 (Port Datasette to ASGI) - this pull request will track the final pieces:

  • [x] Update test harness to more correctly simulate the raw_path issue
  • [x] Use raw_path so table names containing / can work correctly
  • [x] Bug: JSON not served with correct content-type
  • [x] Get ?_trace=1 working again
  • [x] Replacement for @app.listener("before_server_start")
  • [x] Bug: /fixtures/table%2Fwith%2Fslashes.csv?_format=json downloads as CSV
  • [x] Replace Sanic request and response objects with my own classes, so I can remove Sanic dependency
  • [x] Final code tidy-up before merging to master
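Two of the checked items concern raw_path: the ASGI spec exposes the still-percent-encoded path as `scope["raw_path"]` (bytes), which is what lets table names containing `/` survive. A small illustration (the helper name is made up):

```python
from urllib.parse import unquote

def table_name_from_scope(scope):
    """Recover a table name containing '/' by decoding raw_path
    ourselves instead of trusting the server's pre-decoded path."""
    # Per the ASGI spec, raw_path is the original bytes path,
    # still percent-encoded; fall back to path if it's absent.
    raw = scope.get("raw_path", scope["path"].encode()).decode()
    # e.g. /fixtures/table%2Fwith%2Fslashes -> table/with/slashes
    encoded_table = raw.rsplit("/", 1)[-1]
    return unquote(encoded_table)

scope = {
    "path": "/fixtures/table/with/slashes",
    "raw_path": b"/fixtures/table%2Fwith%2Fslashes",
}
name = table_name_from_scope(scope)
print(name)
```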
datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/518/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
272391665 MDU6SXNzdWUyNzIzOTE2NjU= 48 Switch to ujson simonw 9599 closed 0     4 2017-11-08T23:50:29Z 2019-06-24T06:57:54Z 2019-06-24T06:57:43Z OWNER  

ujson is already a dependency of Sanic, and should be quite a bit faster.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/48/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
317714268 MDU6SXNzdWUzMTc3MTQyNjg= 238 External metadata.json simonw 9599 closed 0     3 2018-04-25T17:02:30Z 2019-06-24T06:52:55Z 2019-06-24T06:52:45Z OWNER  

A frustration I'm having with https://register-of-members-interests.datasettes.com/ is that I keep coming up with new canned queries but I don't want to redeploy the whole thing just to add them to metadata.json

Maybe Datasette could optionally take a --metadata-url option which causes it to load from a URL instead and occasionally check for updates.
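Since `--metadata-url` is only proposed here, the following is a hypothetical sketch of the loading half: fetch and parse metadata.json from a URL, something a startup hook or periodic timer could call. The demo uses a `file://` URL standing in for a remote one:

```python
import json
import tempfile
from pathlib import Path
from urllib.request import urlopen

def load_metadata(url):
    """Fetch and parse metadata.json from a URL -- what a hypothetical
    --metadata-url option could do at startup and on each re-check."""
    with urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Demo: a local file posing as the remote metadata.json
path = Path(tempfile.mkdtemp()) / "metadata.json"
path.write_text(json.dumps({"title": "Register of Members Interests"}))
meta = load_metadata(path.as_uri())
print(meta["title"])
```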

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/238/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
340730961 MDU6SXNzdWUzNDA3MzA5NjE= 340 Embrace black simonw 9599 closed 0     1 2018-07-12T17:32:29Z 2019-06-24T06:50:27Z 2019-06-24T06:50:26Z OWNER  

Run black against everything. Then set up CI to fail if code doesn't conform to black's style.

Here's how Starlette does this:

  • https://github.com/encode/starlette/blob/e3d090b3597167f7b3a4f76e4bb3c0d3e94be61a/.travis.yml#L14
  • https://github.com/encode/starlette/blob/e3d090b3597167f7b3a4f76e4bb3c0d3e94be61a/scripts/lint - essentially runs black starlette tests --check

And here's an example of a test run that failed: https://travis-ci.org/encode/starlette/jobs/403172478
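Applied to this repo, the Starlette approach amounts to one check-mode lint step in CI. A hedged sketch (the exact stanza and the `datasette tests` targets are assumptions based on the repo layout):

```yaml
# .travis.yml fragment (sketch): black --check exits non-zero if any
# file would be reformatted, which fails the build
install:
  - pip install black
script:
  - black datasette tests --check
```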

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/340/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
276455748 MDU6SXNzdWUyNzY0NTU3NDg= 146 datasette publish gcloud simonw 9599 closed 0     2 2017-11-23T18:55:03Z 2019-06-24T06:48:20Z 2019-06-24T06:48:20Z OWNER  

See also #103

It looks like you can start a Google Cloud VM with a "docker container" option - and the Google Cloud Registry is easy to push containers to. So it would be feasible to have datasette publish gcloud ... automatically build a container, push it to GCR, then start a new VM instance with it:

https://cloud.google.com/container-registry/docs/pushing-and-pulling

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/146/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
275125805 MDU6SXNzdWUyNzUxMjU4MDU= 124 Option to open readonly but not immutable simonw 9599 closed 0     5 2017-11-19T02:11:03Z 2019-06-24T06:43:46Z 2019-06-24T06:43:46Z OWNER  

Immutable assumes no other process can modify the file. An option to open read-only instead would enable other processes to update the file in place.
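In SQLite's URI terms the distinction is `mode=ro` versus `immutable=1`. A small demonstration with Python's sqlite3 (assuming a POSIX temp directory):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# A writer (another process, in the scenario above) owns the file
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (x)")
rw.execute("INSERT INTO t VALUES (1)")
rw.commit()

# mode=ro: we cannot write, but SQLite still assumes the file may
# change under us -- unlike immutable=1, which promises it never will:
#   sqlite3.connect(f"file:{path}?immutable=1", uri=True)
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
count = ro.execute("SELECT count(*) FROM t").fetchone()[0]

try:
    ro.execute("INSERT INTO t VALUES (2)")
    write_refused = False
except sqlite3.OperationalError:
    write_refused = True
print(count, write_refused)
```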

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/124/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
274315193 MDU6SXNzdWUyNzQzMTUxOTM= 106 Document how pagination works simonw 9599 closed 0     1 2017-11-15T21:44:32Z 2019-06-24T06:42:33Z 2019-06-24T06:42:33Z OWNER  

I made a start at that in this comment: https://news.ycombinator.com/item?id=15691926
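Datasette's table pagination is keyset-based: the `_next` cursor encodes the last row's key, and the next page filters past it rather than using OFFSET. A minimal sketch of the idea, assuming a plain integer primary key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(1, 8)])

def page(conn, after_id=0, size=3):
    # Keyset pagination: "WHERE id > cursor" is an indexed range scan,
    # so deep pages stay as cheap as the first one (unlike OFFSET).
    return [row[0] for row in conn.execute(
        "SELECT id FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, size),
    )]

first = page(conn)
second = page(conn, after_id=first[-1])  # last key becomes the cursor
print(first, second)
```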

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/106/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
323716411 MDU6SXNzdWUzMjM3MTY0MTE= 267 Documentation for URL hashing, redirects and cache policy simonw 9599 closed 0     3 2018-05-16T17:29:01Z 2019-06-24T06:41:02Z 2019-06-24T06:41:02Z OWNER  

See my comments on #258 for a starting point

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/267/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
329147284 MDU6SXNzdWUzMjkxNDcyODQ= 305 Add contributor guidelines to docs simonw 9599 closed 0     2 2018-06-04T17:25:30Z 2019-06-24T06:40:19Z 2019-06-24T06:40:19Z OWNER  

https://channels.readthedocs.io/en/latest/contributing.html is a nice example of this done well.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/305/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
324188953 MDU6SXNzdWUzMjQxODg5NTM= 272 Port Datasette to ASGI simonw 9599 closed 0 simonw 9599 Datasette 1.0 3268330 42 2018-05-17T21:16:32Z 2019-06-24T04:54:15Z 2019-06-24T03:33:06Z OWNER  

Datasette doesn't take much advantage of Sanic, and I'm increasingly having to work around parts of it because of idiosyncrasies that are specific to Datasette - caring about the exact order of querystring arguments for example.

Since Datasette is GET-only our needs from a web framework are actually pretty slim.

This becomes more important as I expand the plugin framework (#14). Am I sure I want the plugin ecosystem to depend on Sanic if I might move away from it in the future?

If Datasette wasn't all about async/await I would use WSGI, but today it makes more sense to use ASGI. I'd like to be confident that switching to ASGI would still give me the excellent performance that Sanic provides.

https://github.com/django/asgiref/blob/master/specs/asgi.rst
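For reference, the whole ASGI contract in the linked spec is a single awaitable callable driven by receive/send events. A minimal framework-free HTTP app, exercised the way a test harness might (no server involved):

```python
import asyncio

async def app(scope, receive, send):
    # ASGI: one coroutine per connection; respond via send() events
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello from ASGI"})

async def run():
    sent = []
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(event):
        sent.append(event)
    await app({"type": "http", "path": "/", "raw_path": b"/"}, receive, send)
    return sent

events = asyncio.run(run())
print(events[0]["status"], events[1]["body"])
```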

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/272/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);
Powered by Datasette · Queries took 613.172ms · About: github-to-sqlite