html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app
https://github.com/simonw/datasette/issues/526#issuecomment-1258337011,https://api.github.com/repos/simonw/datasette/issues/526,1258337011,IC_kwDOBm6k_c5LALLz,536941,2022-09-26T16:49:48Z,2022-09-26T16:49:48Z,CONTRIBUTOR,"i think the smallest change that gets close to what i want is to change the behavior so that `max_returned_rows` is not applied in the `execute` method when we are asking for a csv of a query.
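something like this at the call site (a hypothetical sketch just to show the shape of the change; the `max_returned_rows=None` override is made up and does not exist in datasette today):
```python
# hypothetical sketch, not real datasette code: the csv code path
# would ask execute() to skip the row cap entirely
results = await db.execute(
    sql,
    params,
    max_returned_rows=None,  # made-up override meaning no row cap
)
```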
there are some infelicities with that approach, but i'll make a PR to make it easier to discuss.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,
https://github.com/simonw/datasette/issues/526#issuecomment-1258167564,https://api.github.com/repos/simonw/datasette/issues/526,1258167564,IC_kwDOBm6k_c5K_h0M,536941,2022-09-26T14:57:44Z,2022-09-26T15:08:36Z,CONTRIBUTOR,"reading the database `execute` method, i have a few questions.
https://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/database.py#L229-L242
---
unless i'm missing something (which is very likely!!), the `max_returned_rows` argument doesn't actually offer any protection against running very expensive queries.
It's not like adding a `LIMIT max_rows` clause to the query. it makes sense that it isn't, because the query could already have a `LIMIT` clause. Doing something like `select * from (query) limit {max_returned_rows}` **might** be protective, but wouldn't always be.
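for example (a toy illustration, not datasette code), the outer limit doesn't stop sqlite from doing the expensive work inside the subquery:
```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table t (n integer)')
conn.executemany('insert into t values (?)', [(i,) for i in range(5000)])

# the aggregate must scan the whole 25-million-row cross join before it
# can emit its single row, so the outer limit offers no protection here
expensive = 'select count(*) from t a, t b'
print(conn.execute(f'select * from ({expensive}) limit 1').fetchall())
```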
Instead, the code executes the full original query, and if it still has time, it fetches out the first `max_returned_rows + 1` rows.
this *does* offer some protection against memory exhaustion, as you won't hydrate a huge result set into python (however, there are [data flow patterns](https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113) that could avoid that too)
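for reference, my reading of the linked lines is roughly this (paraphrasing from memory, so details may be off):
```python
# rough paraphrase of the linked execute() internals, not the exact code
with sqlite_timelimit(conn, time_limit_ms):
    cursor = conn.cursor()
    cursor.execute(sql, params if params is not None else {})
    max_returned_rows = self.ds.max_returned_rows
    if max_returned_rows == page_size:
        max_returned_rows += 1
    if max_returned_rows and truncate:
        # caps only what gets pulled into python, not what sqlite computes
        rows = cursor.fetchmany(max_returned_rows + 1)
        truncated = len(rows) > max_returned_rows
        rows = rows[:max_returned_rows]
    else:
        rows = cursor.fetchall()
        truncated = False
```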
given the current architecture, i don't see how creating a new connection would be of use?
---
If we just removed the `max_returned_rows` limitation, then i think most things would be fine **except** for the QueryViews. Right now, rendering just [5000 rows takes a lot of client-side memory](https://github.com/simonw/datasette/issues/1655), so some form of pagination would be required.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,
https://github.com/simonw/datasette/issues/1655#issuecomment-1258166572,https://api.github.com/repos/simonw/datasette/issues/1655,1258166572,IC_kwDOBm6k_c5K_hks,536941,2022-09-26T14:57:04Z,2022-09-26T14:57:04Z,CONTRIBUTOR,"I think that paginating, even in javascript, could be very helpful. Maybe render json or csv into the page and let javascript load that into the dom?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1163369515,
https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113,https://api.github.com/repos/simonw/datasette/issues/1727,1258129113,IC_kwDOBm6k_c5K_YbZ,536941,2022-09-26T14:30:11Z,2022-09-26T14:48:31Z,CONTRIBUTOR,"from your analysis, it seems like the GIL is held while loading the data from sqlite into python (particularly in the `fetchmany` call)
this is probably a simplistic idea, but what if you had the python code in the `execute` method iterate over the cursor and yield out rows or small chunks of rows?
something like:
```python
with sqlite_timelimit(conn, time_limit_ms):
    try:
        cursor = conn.cursor()
        cursor.execute(sql, params if params is not None else {})
    except:
        ...  # error handling elided
    max_returned_rows = self.ds.max_returned_rows
    if max_returned_rows == page_size:
        max_returned_rows += 1
    if max_returned_rows and truncate:
        # stream rows out one at a time, stopping once we hit the cap
        for i, row in enumerate(cursor):
            yield row
            if i == max_returned_rows - 1:
                break
    else:
        # no cap: stream every row out of the cursor
        for row in cursor:
            yield row
        truncated = False
```
this kind of thing works well with a postgres server-side cursor, but i'm not sure if the same holds for sqlite.
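fwiw, a quick check with the stdlib `sqlite3` driver suggests iteration is lazy there too (rows only become python objects as you step the cursor), though i haven't verified this against datasette's own connection setup:
```python
import sqlite3
import tracemalloc

conn = sqlite3.connect(':memory:')
conn.execute('create table t (s text)')
conn.executemany('insert into t values (?)', [('x' * 1000,) for _ in range(100000)])

tracemalloc.start()
cursor = conn.execute('select s from t')
for i, row in enumerate(cursor):
    if i == 9:
        break  # stop after 10 rows
# peak stays tiny compared to the ~100MB of text in the table, because
# only the rows we actually iterated over were turned into python objects
peak = tracemalloc.get_traced_memory()[1]
print('peak python allocations after 10 rows:', peak, 'bytes')
tracemalloc.stop()
```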
you would still spend about the same amount of time in python and would still be contending for the gil, but it could be non-blocking.
depending on the data flow, this could also have some memory benefits. (data stays in more compact sqlite-land until you need it)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1217759117,