github
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/simonw/sqlite-utils/issues/192#issuecomment-721453779 | https://api.github.com/repos/simonw/sqlite-utils/issues/192 | 721453779 | MDEyOklzc3VlQ29tbWVudDcyMTQ1Mzc3OQ== | 9599 | 2020-11-04T00:59:24Z | 2020-11-04T00:59:36Z | OWNER | FTS5 was added in SQLite 3.9.0 on 2015-10-14 - over a year after CTEs - which means CTEs will always be safe to use with FTS5 queries. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 735532751 | |
https://github.com/simonw/datasette/issues/1082#issuecomment-721545090 | https://api.github.com/repos/simonw/datasette/issues/1082 | 721545090 | MDEyOklzc3VlQ29tbWVudDcyMTU0NTA5MA== | 9599 | 2020-11-04T06:47:15Z | 2020-11-04T06:47:15Z | OWNER | I've run into a similar problem with Google Cloud Run: beyond a certain database file size I find myself needing to run instances there with more RAM assigned to them. I haven't yet figured out a method for estimating how much RAM is needed to successfully serve a database file of a given size - I've been using trial and error. 5GB is quite a big database file, so it doesn't surprise me that it may need a bigger instance. I recommend trying it on a Digital Ocean instance with 1GB or 2GB of RAM (their default is 512MB) to see if that works. Let me know what you find out! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 735852274 | |
https://github.com/simonw/datasette/issues/1082#issuecomment-721547177 | https://api.github.com/repos/simonw/datasette/issues/1082 | 721547177 | MDEyOklzc3VlQ29tbWVudDcyMTU0NzE3Nw== | 39538958 | 2020-11-04T06:52:30Z | 2020-11-04T06:53:16Z | NONE | I think I tried the same database file on the following Digital Ocean plans: 1. Basic ($5/month) with 512MB RAM 2. Basic ($10/month) with 1GB RAM 3. Pro ($12/month) with 1GB RAM. All three attempts conked out with "out of memory" errors. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 735852274 | |
https://github.com/simonw/datasette/issues/268#issuecomment-721896822 | https://api.github.com/repos/simonw/datasette/issues/268 | 721896822 | MDEyOklzc3VlQ29tbWVudDcyMTg5NjgyMg== | 9599 | 2020-11-04T18:23:29Z | 2020-11-04T18:23:29Z | OWNER | Worth noting that joining to get the rank works for FTS5 but not for FTS4 - see comment here: https://github.com/simonw/sqlite-utils/issues/192#issuecomment-721420539 The easiest solution would be to support sort-by-rank only for FTS5 tables. An alternative would be to depend on https://github.com/simonw/sqlite-fts4 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 323718842 | |
https://github.com/simonw/datasette/issues/1083#issuecomment-721926827 | https://api.github.com/repos/simonw/datasette/issues/1083 | 721926827 | MDEyOklzc3VlQ29tbWVudDcyMTkyNjgyNw== | 9599 | 2020-11-04T19:23:42Z | 2020-11-04T19:23:42Z | OWNER | https://latest.datasette.io/fixtures/sortable#export has advanced export options, but https://latest.datasette.io/fixtures?sql=select+pk1%2C+pk2%2C+content%2C+sortable%2C+sortable_with_nulls%2C+sortable_with_nulls_2%2C+text+from+sortable+order+by+pk1%2C+pk2+limit+101 does not. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 736365306 | |
https://github.com/simonw/datasette/issues/1083#issuecomment-721927254 | https://api.github.com/repos/simonw/datasette/issues/1083 | 721927254 | MDEyOklzc3VlQ29tbWVudDcyMTkyNzI1NA== | 9599 | 2020-11-04T19:24:34Z | 2020-11-04T19:24:34Z | OWNER | Related: #856 - if it's possible to paginate a correctly configured canned query, the CSV option to "stream all rows" could work for queries as well as tables. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 736365306 | |
https://github.com/simonw/datasette/issues/1082#issuecomment-721931504 | https://api.github.com/repos/simonw/datasette/issues/1082 | 721931504 | MDEyOklzc3VlQ29tbWVudDcyMTkzMTUwNA== | 9599 | 2020-11-04T19:32:47Z | 2020-11-04T19:35:44Z | OWNER | I wonder if setting a soft memory limit within Datasette would help here: https://www.sqlite.org/malloc.html#_setting_memory_usage_limits > If attempts are made to allocate more memory than specified by the soft heap limit, then SQLite will first attempt to free cache memory before continuing with the allocation request. https://www.sqlite.org/pragma.html#pragma_soft_heap_limit > **PRAGMA soft_heap_limit** > **PRAGMA soft_heap_limit=N** > > This pragma invokes the [sqlite3_soft_heap_limit64()](https://www.sqlite.org/c3ref/hard_heap_limit64.html) interface with the argument N, if N is specified and is a non-negative integer. The soft_heap_limit pragma always returns the same integer that would be returned by the [sqlite3_soft_heap_limit64](https://www.sqlite.org/c3ref/hard_heap_limit64.html)(-1) C-language function. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 735852274 | |
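
The point in comment 721453779 above - that any SQLite recent enough for FTS5 also supports CTEs - and the sort-by-rank discussion in comment 721896822 can be illustrated with a minimal sketch. This is not code from the thread: the `docs` table and its rows are hypothetical, and it assumes Python's bundled SQLite was compiled with FTS5 (true of most modern builds).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical FTS5 table for illustration; requires SQLite 3.9.0+ with FTS5.
conn.executescript("""
CREATE VIRTUAL TABLE docs USING fts5(title, body);
INSERT INTO docs (title, body) VALUES
    ('First post', 'hello world'),
    ('Second post', 'the world says hello again');
""")

# A CTE wrapping an FTS5 MATCH query. CTEs landed in SQLite 3.8.3 (2014),
# FTS5 in 3.9.0 (2015), so a build with FTS5 always has CTE support too.
# FTS5 also exposes a built-in `rank` column, which is what makes
# sort-by-rank straightforward for FTS5 tables (FTS4 has no equivalent).
rows = conn.execute("""
    WITH matches AS (
        SELECT rowid, title, rank
        FROM docs
        WHERE docs MATCH 'hello'
    )
    SELECT title, rank FROM matches ORDER BY rank
""").fetchall()
for title, rank in rows:
    print(title, rank)
```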
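Comment 721931504 quotes the SQLite documentation for `PRAGMA soft_heap_limit`. A minimal sketch of applying it from Python follows; the 128MB figure and the `data.db` filename are arbitrary illustrations, not values from the thread.

```python
import sqlite3

conn = sqlite3.connect("data.db")  # hypothetical database file

# Ask SQLite to keep heap usage under ~128MB: per the quoted docs, it will
# try to free cache memory before allocating beyond this soft limit.
limit_bytes = 128 * 1024 * 1024
conn.execute(f"PRAGMA soft_heap_limit = {limit_bytes}")

# The pragma with no argument reports the current limit.
print(conn.execute("PRAGMA soft_heap_limit").fetchone()[0])  # 134217728
```

Whether this actually reduces peak memory enough to serve a large database on a small instance is exactly the open question in the issue; the sketch only shows how the pragma is set.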