1 row where user = 701 sorted by updated_at descending




id: 810943882
html_url: https://github.com/simonw/datasette/issues/526#issuecomment-810943882
issue_url: https://api.github.com/repos/simonw/datasette/issues/526
node_id: MDEyOklzc3VlQ29tbWVudDgxMDk0Mzg4Mg==
user: jokull (701)
created_at: 2021-03-31T10:03:55Z
updated_at: 2021-03-31T10:03:55Z
author_association: NONE

+1 on using nested queries to achieve this! It would be great, as streaming CSV is an amazing feature.

Some UX/DX details:

I expected that simply adding &_stream=on to custom SQL queries would work, because the docs say:

Any Datasette table, view or custom SQL query can be exported as CSV.

After a bit of testing back and forth I realized streaming only works for full tables.

Would love this feature because I'm using pandas.read_csv to paint graphs from custom queries and the graphs are cut off because of the 1000 row limit.
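The workflow described above can be sketched as follows. This is a minimal illustration, not Datasette's implementation: the base URL, database, and table names are assumptions, and `_stream=on` / `_size=max` are the real Datasette CSV export parameters the comment refers to.

```python
# Sketch: build a Datasette full-table CSV export URL for pandas.read_csv.
# Base URL and table name below are hypothetical examples.
from urllib.parse import urlencode

def table_csv_url(base, database, table, stream=True):
    """Build a CSV export URL for a full Datasette table.

    _stream=on streams every row; without it the export is capped by the
    page size (1,000 rows by default). As noted above, _stream currently
    only works for full tables, not custom SQL queries.
    """
    params = {"_stream": "on"} if stream else {"_size": "max"}
    return f"{base}/{database}/{table}.csv?{urlencode(params)}"

url = table_csv_url("https://example.com", "github", "issue_comments")
# df = pandas.read_csv(url)  # would receive all rows, not just the first 1,000
print(url)
```

For a custom SQL query the same `?sql=...` URL with `.csv` appended would still be truncated at the row limit, which is exactly the gap this issue asks to close.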

reactions: {
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: CSV streaming for canned queries (459882902)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
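The schema above can be exercised directly in SQLite. A minimal sketch, assuming placeholder `users` and `issues` parent tables (their real schemas are not shown on this page), which recreates the table and indexes and runs the query behind this page:

```python
# Recreate the issue_comments schema in an in-memory SQLite database and
# run the page's query: comments by user 701, newest first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY, [login] TEXT);   -- assumed shape
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY, [title] TEXT);  -- assumed shape
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")
conn.execute("INSERT INTO users (id, login) VALUES (701, 'jokull')")
conn.execute(
    "INSERT INTO issue_comments (user, updated_at) VALUES (?, ?)",
    (701, "2021-03-31T10:03:55Z"),
)
# 1 row where user = 701, sorted by updated_at descending
rows = conn.execute(
    "SELECT updated_at FROM issue_comments WHERE user = 701 "
    "ORDER BY updated_at DESC"
).fetchall()
print(len(rows))
```

The `idx_issue_comments_user` index is what lets the `user = 701` filter on this page avoid a full table scan.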