

issue_comments


8 rows where issue = 576582604 sorted by updated_at descending



id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
648296323 https://github.com/simonw/datasette/issues/694#issuecomment-648296323 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDY0ODI5NjMyMw== kwladyka 3903726 2020-06-23T17:10:51Z 2020-06-23T17:10:51Z NONE

@simonw

Did you find the reason? I had a similar situation and checked this in every way I could think of. I am sure the app doesn't consume that much memory.

I was testing the app with: docker run --rm -it -p 80:80 -m 128M foo

I was watching the app with docker stats. I even limited the JVM heap with CMD ["java", "-Xms60M", "-Xmx60M", "-jar", "api.jar"], and checked memory usage both from within the app's code and with shell commands. The app definitely doesn't use that much memory, and it doesn't write files either.
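The test setup described in this comment can be sketched end to end as follows (the image name foo and the java entrypoint come from the comment itself; everything else is standard Docker usage):

```shell
# Run the container with a hard 128 MB memory cap, similar to the Cloud Run limit
docker run --rm -it -p 80:80 -m 128M foo

# In a second terminal, watch the container's actual memory usage live
docker stats

# Dockerfile entrypoint capping the JVM heap well below the container limit:
# CMD ["java", "-Xms60M", "-Xmx60M", "-jar", "api.jar"]
```

With the heap capped at 60M inside a 128M container, the JVM alone should not be able to exceed the limit, which is what makes the Cloud Run out-of-memory kills surprising.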

The only thing that fixed it was raising the memory limit to 512M.

There is definitely something wrong with Cloud Run.

I even built a dedicated app just to test this. It looks like once I cross some very small threshold of code / memory / app size, seemingly at random, the memory requirement jumps by hundreds of megabytes. Nothing makes sense here, especially since it works everywhere except Cloud Run.

Please let me know if you discover anything more.

datasette publish cloudrun --memory option 576582604  
596266190 https://github.com/simonw/datasette/issues/694#issuecomment-596266190 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDU5NjI2NjE5MA== simonw 9599 2020-03-08T23:32:58Z 2020-03-08T23:32:58Z OWNER

Shipped in Datasette 0.38.

595491182 https://github.com/simonw/datasette/issues/694#issuecomment-595491182 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDU5NTQ5MTE4Mg== simonw 9599 2020-03-05T23:07:33Z 2020-03-05T23:45:38Z OWNER

So two things I need to do for this:

  • Add a --memory option to datasette publish cloudrun
  • Maybe capture this error and output a helpful suggestion that you increase the memory with that option? Not sure how feasible that is.
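A minimal sketch of how the proposed option would be used, assuming the flag accepts the same Mi/Gi values Cloud Run itself takes (the 512Mi syntax matches the test later in this thread; the service name here is illustrative):

```shell
# Deploy with an explicit memory limit instead of Cloud Run's default
datasette publish cloudrun fixtures.db --memory 512Mi --service fixtures-demo
```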
595498926 https://github.com/simonw/datasette/issues/694#issuecomment-595498926 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDU5NTQ5ODkyNg== simonw 9599 2020-03-05T23:35:32Z 2020-03-05T23:35:32Z OWNER

Tested that with:

    datasette publish cloudrun fixtures.db --memory 512Mi --service fixtures-memory-512mi

Here's the result in the Google Cloud web console:

595492478 https://github.com/simonw/datasette/issues/694#issuecomment-595492478 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDU5NTQ5MjQ3OA== simonw 9599 2020-03-05T23:12:25Z 2020-03-05T23:12:25Z OWNER

I wonder if there's some weird reason we churn through too much RAM during initial Datasette startup here? I wouldn't expect startup to cause a big spike in RAM. Maybe I need to profile it a bit.

595490889 https://github.com/simonw/datasette/issues/694#issuecomment-595490889 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDU5NTQ5MDg4OQ== simonw 9599 2020-03-05T23:06:30Z 2020-03-05T23:06:30Z OWNER

This fixed it (I tried 1Gi first but that gave the same error):

    gcloud run deploy --allow-unauthenticated --platform=managed \
        --image gcr.io/datasette-222320/datasette non-profit-ethics --memory=2Gi

595489514 https://github.com/simonw/datasette/issues/694#issuecomment-595489514 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDU5NTQ4OTUxNA== simonw 9599 2020-03-05T23:01:35Z 2020-03-05T23:01:35Z OWNER

Aha! The logs said "Memory limit of 244M exceeded with 247M used. Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits"

595489222 https://github.com/simonw/datasette/issues/694#issuecomment-595489222 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDU5NTQ4OTIyMg== simonw 9599 2020-03-05T23:00:33Z 2020-03-05T23:00:33Z OWNER

The initial datasette publish cloudrun failed; now I can replicate that error by running:

    gcloud run deploy --allow-unauthenticated --platform=managed \
        --image gcr.io/datasette-222320/datasette non-profit-ethics



CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
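Given the schema above, the rows on this page can be reproduced directly against the underlying SQLite database (github.db is an assumed filename for a database built by github-to-sqlite):

```shell
sqlite3 github.db "
  select id, user, created_at, substr(body, 1, 60)
  from issue_comments
  where issue = 576582604
  order by updated_at desc;"
```

The where issue = 576582604 clause and the updated_at descending ordering match the filter shown at the top of this page.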
Powered by Datasette · Queries took 17.881ms · About: github-to-sqlite