
issue_comments


6 rows where author_association = "OWNER" and "created_at" is on date 2020-11-04 sorted by updated_at descending


issue 4

  • DigitalOcean buildpack memory errors for large sqlite db? 2
  • Advanced CSV export for arbitrary queries 2
  • Mechanism for ranking results from SQLite full-text search 1
  • sqlite-utils search command 1

user 1

  • simonw 6

author_association 1

  • OWNER 6
id · html_url · issue_url · node_id · user · created_at · updated_at ▲ · author_association · body · reactions · issue · performed_via_github_app
721931504 https://github.com/simonw/datasette/issues/1082#issuecomment-721931504 https://api.github.com/repos/simonw/datasette/issues/1082 MDEyOklzc3VlQ29tbWVudDcyMTkzMTUwNA== simonw 9599 2020-11-04T19:32:47Z 2020-11-04T19:35:44Z OWNER

I wonder if setting a soft memory limit within Datasette would help here: https://www.sqlite.org/malloc.html#_setting_memory_usage_limits

If attempts are made to allocate more memory than specified by the soft heap limit, then SQLite will first attempt to free cache memory before continuing with the allocation request.

https://www.sqlite.org/pragma.html#pragma_soft_heap_limit

PRAGMA soft_heap_limit
PRAGMA soft_heap_limit = N

This pragma invokes the sqlite3_soft_heap_limit64() interface with the argument N, if N is specified and is a non-negative integer. The soft_heap_limit pragma always returns the same integer that would be returned by the sqlite3_soft_heap_limit64(-1) C-language function.
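A minimal sketch of applying this limit from Python's sqlite3 module (the 256 MB figure is an arbitrary illustration, not a value from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Set a 256 MB soft heap limit; the limit is process-wide, and SQLite
# will try to free cache memory before allocations exceed it.
conn.execute("PRAGMA soft_heap_limit = %d" % (256 * 1024 * 1024))

# Reading the pragma back reports the limit currently in effect.
limit = conn.execute("PRAGMA soft_heap_limit").fetchone()[0]
print(limit)
```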

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
DigitalOcean buildpack memory errors for large sqlite db? 735852274  
721927254 https://github.com/simonw/datasette/issues/1083#issuecomment-721927254 https://api.github.com/repos/simonw/datasette/issues/1083 MDEyOklzc3VlQ29tbWVudDcyMTkyNzI1NA== simonw 9599 2020-11-04T19:24:34Z 2020-11-04T19:24:34Z OWNER

Related: #856 - if it's possible to paginate a correctly configured canned query, then the CSV option to "stream all rows" could work for queries as well as tables.
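The pagination idea can be sketched with keyset pagination over a hypothetical table (the table, column, and function names here are illustrative, not Datasette internals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("row %d" % i,) for i in range(10)])

def stream_all_rows(conn, page_size=4):
    # Keyset pagination: each page resumes after the last id seen, so
    # arbitrarily large results can be streamed (e.g. as CSV) without
    # the cost of OFFSET scans.
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, page_size),
        ).fetchall()
        if not rows:
            break
        yield from rows
        last_id = rows[-1][0]

print(len(list(stream_all_rows(conn))))  # 10
```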

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Advanced CSV export for arbitrary queries 736365306  
721926827 https://github.com/simonw/datasette/issues/1083#issuecomment-721926827 https://api.github.com/repos/simonw/datasette/issues/1083 MDEyOklzc3VlQ29tbWVudDcyMTkyNjgyNw== simonw 9599 2020-11-04T19:23:42Z 2020-11-04T19:23:42Z OWNER

https://latest.datasette.io/fixtures/sortable#export has advanced export options, but https://latest.datasette.io/fixtures?sql=select+pk1%2C+pk2%2C+content%2C+sortable%2C+sortable_with_nulls%2C+sortable_with_nulls_2%2C+text+from+sortable+order+by+pk1%2C+pk2+limit+101 does not.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Advanced CSV export for arbitrary queries 736365306  
721896822 https://github.com/simonw/datasette/issues/268#issuecomment-721896822 https://api.github.com/repos/simonw/datasette/issues/268 MDEyOklzc3VlQ29tbWVudDcyMTg5NjgyMg== simonw 9599 2020-11-04T18:23:29Z 2020-11-04T18:23:29Z OWNER

Worth noting that joining to get the rank works for FTS5 but not for FTS4 - see comment here: https://github.com/simonw/sqlite-utils/issues/192#issuecomment-721420539

The easiest solution would be to only support sort-by-rank for FTS5 tables. An alternative would be to depend on https://github.com/simonw/sqlite-fts4
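For FTS5, the join-on-rank pattern looks like the following sketch (table names hypothetical; requires an SQLite build with FTS5 enabled, which the stock CPython module normally has):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute(
    "CREATE VIRTUAL TABLE docs_fts USING fts5(body, content=docs, content_rowid=id)"
)
conn.executemany("INSERT INTO docs (body) VALUES (?)",
                 [("sqlite full text search",), ("an unrelated document",)])
conn.execute("INSERT INTO docs_fts (rowid, body) SELECT id, body FROM docs")

# FTS5 exposes a built-in `rank` column, so relevance ordering is a
# plain ORDER BY on the joined virtual table; FTS4 has no equivalent.
rows = conn.execute(
    """
    SELECT docs.id, docs.body
    FROM docs
    JOIN docs_fts ON docs.id = docs_fts.rowid
    WHERE docs_fts MATCH 'sqlite'
    ORDER BY docs_fts.rank
    """
).fetchall()
print(rows)  # [(1, 'sqlite full text search')]
```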

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Mechanism for ranking results from SQLite full-text search 323718842  
721545090 https://github.com/simonw/datasette/issues/1082#issuecomment-721545090 https://api.github.com/repos/simonw/datasette/issues/1082 MDEyOklzc3VlQ29tbWVudDcyMTU0NTA5MA== simonw 9599 2020-11-04T06:47:15Z 2020-11-04T06:47:15Z OWNER

I've run into a similar problem with Google Cloud Run: beyond a certain size of database file I find myself needing to run instances there with more RAM assigned to them.

I haven't yet figured out a way to estimate how much RAM is needed to successfully serve a database file of a given size - I've been relying on trial and error.

5GB is quite a big database file, so it doesn't surprise me that it may need a bigger instance. I recommend trying it on a DigitalOcean instance with 1GB or 2GB of RAM (their default is 512MB) and seeing if that works.

Let me know what you find out!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
DigitalOcean buildpack memory errors for large sqlite db? 735852274  
721453779 https://github.com/simonw/sqlite-utils/issues/192#issuecomment-721453779 https://api.github.com/repos/simonw/sqlite-utils/issues/192 MDEyOklzc3VlQ29tbWVudDcyMTQ1Mzc3OQ== simonw 9599 2020-11-04T00:59:24Z 2020-11-04T00:59:36Z OWNER

FTS5 was added in SQLite 3.9.0 on 2015-10-14 - about a year after CTEs, which means CTEs will always be safe to use with FTS5 queries.
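Since FTS5 postdates CTEs (added in SQLite 3.8.3, 2014-02-03), checking for 3.9.0 covers both features; a sketch of that check:

```python
import sqlite3

# sqlite3.sqlite_version reports the linked library version, e.g. "3.39.4".
version = tuple(int(part) for part in sqlite3.sqlite_version.split("."))

# FTS5 arrived in 3.9.0; CTEs earlier, in 3.8.3 - so any build that
# has FTS5 can be assumed to support CTEs as well.
supports_fts5_era = version >= (3, 9, 0)
print(supports_fts5_era)
```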

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
sqlite-utils search command 735532751  


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
Powered by Datasette · Queries took 502.144ms · About: github-to-sqlite