issue_comments

4 rows where issue = 610829227 and user = 9599 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions issue performed_via_github_app
737580084 https://github.com/simonw/datasette/issues/749#issuecomment-737580084 https://api.github.com/repos/simonw/datasette/issues/749 MDEyOklzc3VlQ29tbWVudDczNzU4MDA4NA== simonw 9599 2020-12-03T00:31:14Z 2020-12-03T00:31:14Z OWNER

This works!

```
/tmp % wget 'https://covid-19.datasettes.com/covid.db'
--2020-12-02 16:28:02--  https://covid-19.datasettes.com/covid.db
Resolving covid-19.datasettes.com (covid-19.datasettes.com)... 172.217.5.83
Connecting to covid-19.datasettes.com (covid-19.datasettes.com)|172.217.5.83|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: ‘covid.db’

covid.db            [        <=>         ] 306.42M  3.27MB/s    in 98s

2020-12-02 16:29:40 (3.13 MB/s) - ‘covid.db’ saved [321306624]
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Cloud Run fails to serve database files larger than 32MB 610829227  
737563699 https://github.com/simonw/datasette/issues/749#issuecomment-737563699 https://api.github.com/repos/simonw/datasette/issues/749 MDEyOklzc3VlQ29tbWVudDczNzU2MzY5OQ== simonw 9599 2020-12-02T23:45:42Z 2020-12-02T23:45:42Z OWNER

I asked about this on Twitter - https://twitter.com/steren/status/1334281184965140483

> You simply need to send the `Transfer-Encoding: chunked` header.
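
A minimal sketch of what a chunked response looks like at the ASGI level (illustrative only, not Datasette's actual implementation; the 1MB chunk size and `covid.db` filename are arbitrary): omitting a `content-length` header and sending the body as multiple messages is what makes the server fall back to `Transfer-Encoding: chunked` over HTTP/1.1.

```python
# Illustrative ASGI app, not Datasette's code: stream a file without a
# content-length header so the server uses Transfer-Encoding: chunked.
CHUNK_SIZE = 1024 * 1024  # 1MB per chunk (arbitrary choice for the sketch)


async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        # No content-length header: over HTTP/1.1 the server then has to
        # use chunked transfer encoding for the streamed body.
        "headers": [(b"content-type", b"application/octet-stream")],
    })
    # Blocking reads kept simple for the sketch; a real app would offload
    # file I/O to a thread.
    with open("covid.db", "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            await send({"type": "http.response.body", "body": chunk, "more_body": True})
    await send({"type": "http.response.body", "body": b"", "more_body": False})
```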

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Cloud Run fails to serve database files larger than 32MB 610829227  
726417847 https://github.com/simonw/datasette/issues/749#issuecomment-726417847 https://api.github.com/repos/simonw/datasette/issues/749 MDEyOklzc3VlQ29tbWVudDcyNjQxNzg0Nw== simonw 9599 2020-11-13T00:05:14Z 2020-11-13T00:05:14Z OWNER

https://cloud.google.com/blog/products/serverless/cloud-run-now-supports-http-grpc-server-streaming indicates this limit should no longer apply:

> With this addition, Cloud Run can now ... Send responses larger than the previous 32 MB limit

But I'm still getting errors from Cloud Run when attempting to download .db files larger than 32 MB.

I filed a question in their issue tracker about that here: https://issuetracker.google.com/issues/173038375

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Cloud Run fails to serve database files larger than 32MB 610829227  
622450636 https://github.com/simonw/datasette/issues/749#issuecomment-622450636 https://api.github.com/repos/simonw/datasette/issues/749 MDEyOklzc3VlQ29tbWVudDYyMjQ1MDYzNg== simonw 9599 2020-05-01T16:08:46Z 2020-05-01T16:08:46Z OWNER

Proposed solution: on Cloud Run, don't show the "download database" link if the database file is larger than 32MB.

I can do this with a new config setting, `max_db_mb`, which would be set automatically by the `publish cloudrun` command.

This is consistent with the existing `max_csv_mb` setting: https://datasette.readthedocs.io/en/stable/config.html#max-csv-mb

I should set `max_csv_mb` to 32MB on Cloud Run deploys as well.
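
As a rough sketch of how such a check could work (the `max_db_mb` name comes from the proposal above; the helper itself is hypothetical, not Datasette code):

```python
import os


# Hypothetical helper illustrating the proposed max_db_mb check:
# hide the "download database" link when the file exceeds the limit.
def database_download_allowed(db_path, max_db_mb=None):
    if max_db_mb is None:
        return True  # no limit configured
    size_mb = os.path.getsize(db_path) / (1024 * 1024)
    return size_mb <= max_db_mb


# A Cloud Run deploy would set the limit to 32, e.g.:
# database_download_allowed("github.db", max_db_mb=32)
```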

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Cloud Run fails to serve database files larger than 32MB 610829227  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
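
For reference, the filtered view above ("4 rows where issue = 610829227 and user = 9599 sorted by updated_at descending") maps to a straightforward query against this schema. A minimal sketch using Python's sqlite3 module (the `github.db` filename is an assumption):

```python
import sqlite3

# Assumes a local github.db built by github-to-sqlite; the filename is an assumption.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 610829227 AND user = 9599
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, created, updated, role, body in rows:
    print(comment_id, updated, role)
```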