
issue_comments


5 rows where "created_at" is on date 2021-10-07 and "updated_at" is on date 2021-10-07 sorted by updated_at descending




id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
938171377 https://github.com/simonw/datasette/issues/1480#issuecomment-938171377 https://api.github.com/repos/simonw/datasette/issues/1480 IC_kwDOBm6k_c4361vx ghing 110420 2021-10-07T21:33:12Z 2021-10-07T21:33:12Z CONTRIBUTOR

Thanks for the reply @simonw. What services have you had better success with than Cloud Run for larger databases?

Also, what about my issue description makes you think there may be a workaround?

Is there any instrumentation I could add to see at which point in the deploy the memory usage spikes? Should I be able to see this whether it's running under Docker locally, or do you suspect this is Cloud Run-specific?
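One low-tech way to get that instrumentation (a sketch, not something Datasette provides out of the box) is to log peak resident memory around the suspect deploy steps using Python's stdlib `resource` module, which behaves the same inside a local Docker container as it does on Cloud Run:

```python
import resource
import sys

def log_peak_memory(label):
    # ru_maxrss is the process's peak resident set size so far:
    # reported in kilobytes on Linux, bytes on macOS.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        peak_kb //= 1024
    print(f"[{label}] peak RSS: {peak_kb // 1024} MB")
    return peak_kb

# Call before and after each step you suspect, e.g. around the
# inspect pass over the large database (stand-in work shown here):
log_peak_memory("before step")
data = bytearray(10_000_000)  # placeholder for the real work
log_peak_memory("after step")
```

If the number jumps at one particular step when running under Docker locally, that is a strong hint the same step is what trips Cloud Run's limit.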

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exceeding Cloud Run memory limits when deploying a 4.8G database 1015646369  
938142436 https://github.com/simonw/datasette/pull/1481#issuecomment-938142436 https://api.github.com/repos/simonw/datasette/issues/1481 IC_kwDOBm6k_c436urk simonw 9599 2021-10-07T20:44:43Z 2021-10-07T20:44:43Z OWNER

The 3.10 tests failed a lot. Trying to run this locally:

```
/tmp % pyenv install 3.10
python-build: definition not found: 3.10

The following versions contain `3.10' in the name:
  3.10.0a6
  3.10-dev
  miniconda-3.10.1
  miniconda3-3.10.1

See all available versions with `pyenv install --list'.

If the version you need is missing, try upgrading pyenv:

  brew update && brew upgrade pyenv
```

So trying:

brew update && brew upgrade pyenv

Then did this:

```
/tmp % brew upgrade pyenv
==> Upgrading 1 outdated package:
pyenv 1.2.24.1 -> 2.1.0
```

This decided to upgrade everything by downloading everything on the internet. Aah, Homebrew.

But it looks like I have 3.10.0 available to pyenv now.

```
/tmp % pyenv install 3.10.0
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.10.0.tar.xz...
-> https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tar.xz
Installing Python-3.10.0...
...
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fix compatibility with Python 3.10 1020436713  
938134038 https://github.com/simonw/datasette/issues/1480#issuecomment-938134038 https://api.github.com/repos/simonw/datasette/issues/1480 IC_kwDOBm6k_c436soW simonw 9599 2021-10-07T20:31:46Z 2021-10-07T20:31:46Z OWNER

I've had this problem too - my solution was to not use Cloud Run for databases larger than about 2GB, but the way you describe it here makes me think that maybe there is a workaround here which could get it to work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exceeding Cloud Run memory limits when deploying a 4.8G database 1015646369  
938131806 https://github.com/simonw/datasette/issues/1470#issuecomment-938131806 https://api.github.com/repos/simonw/datasette/issues/1470 IC_kwDOBm6k_c436sFe simonw 9599 2021-10-07T20:28:30Z 2021-10-07T20:28:30Z OWNER

On further investigation this isn't related to _search at all - it happens when you explicitly sort by _sort=rowid and apply a _next parameter:

  • https://global-power-plants.datasettes.com/global-power-plants/global-power-plants?_next=200 works without an error (currently)
  • https://global-power-plants.datasettes.com/global-power-plants/global-power-plants?_next=200&_sort=rowid shows that error
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
?_sort=rowid with _next= returns error 995098231  
938124652 https://github.com/simonw/datasette/issues/1470#issuecomment-938124652 https://api.github.com/repos/simonw/datasette/issues/1470 IC_kwDOBm6k_c436qVs simonw 9599 2021-10-07T20:17:53Z 2021-10-07T20:18:55Z OWNER

Here's the exception:

```
-> params[f"p{len(params)}"] = components[0]
(Pdb) list
603
604         # Figure out the SQL for next-based-on-primary-key first
605         next_by_pk_clauses = []
606         if use_rowid:
607             next_by_pk_clauses.append(f"rowid > :p{len(params)}")
608  ->         params[f"p{len(params)}"] = components[0]
609         else:
610             # Apply the tie-breaker based on primary keys
611             if len(components) == len(pks):
612                 param_len = len(params)
613                 next_by_pk_clauses.append(
```

The debugger shows that components is an empty array, so components[0] cannot be resolved:

```
-> params[f"p{len(params)}"] = components[0]
(Pdb) params
{'search': 'hello'}
(Pdb) components
[]
```

So the bug is in this code: https://github.com/simonw/datasette/blob/adb5b70de5cec3c3dd37184defe606a082c232cf/datasette/views/table.py#L604-L617
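The failure mode is easy to reproduce in isolation, and it suggests the shape of a guard (a hypothetical sketch, not necessarily the fix that shipped): only build the rowid tie-breaker clause when `components` is non-empty.

```python
# State captured from the debugger session above.
params = {"search": "hello"}
components = []

# The original line: components[0] on an empty list raises IndexError.
try:
    params[f"p{len(params)}"] = components[0]
except IndexError as exc:
    print(f"IndexError: {exc}")  # IndexError: list index out of range

# Hypothetical guard: skip the tie-breaker clause when there is no
# _next component to compare against.
next_by_pk_clauses = []
if components:
    next_by_pk_clauses.append(f"rowid > :p{len(params)}")
    params[f"p{len(params)}"] = components[0]

print(next_by_pk_clauses)  # [] - no clause added, no crash
```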

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
?_sort=rowid with _next= returns error 995098231  


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
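As a sanity check on the filter in the page description ("created_at is on date 2021-10-07 and updated_at is on date 2021-10-07 sorted by updated_at descending"), here is roughly the SQL it corresponds to, sketched against an abbreviated copy of the schema above (the column subset and sample values are illustrative, not from the live database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abbreviated version of the issue_comments schema above.
conn.execute("""
    CREATE TABLE issue_comments (
        id INTEGER PRIMARY KEY,
        created_at TEXT,
        updated_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO issue_comments VALUES (?, ?, ?)",
    [
        (938171377, "2021-10-07T21:33:12Z", "2021-10-07T21:33:12Z"),
        (938134038, "2021-10-07T20:31:46Z", "2021-10-07T20:31:46Z"),
        (900000000, "2021-10-06T12:00:00Z", "2021-10-06T12:00:00Z"),
    ],
)

# SQLite's date() accepts the ISO-8601 timestamps stored in the TEXT
# columns, so the "is on date" filter reduces to a date() comparison.
sql = """
    SELECT id FROM issue_comments
    WHERE date(created_at) = :d AND date(updated_at) = :d
    ORDER BY updated_at DESC
"""
ids = [row[0] for row in conn.execute(sql, {"d": "2021-10-07"})]
print(ids)  # [938171377, 938134038]
```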
Powered by Datasette · Queries took 568.526ms · About: github-to-sqlite