{"html_url": "https://github.com/simonw/datasette/issues/526#issuecomment-1258337011", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/526", "id": 1258337011, "node_id": "IC_kwDOBm6k_c5LALLz", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-09-26T16:49:48Z", "updated_at": "2022-09-26T16:49:48Z", "author_association": "CONTRIBUTOR", "body": "i think the smallest change that gets close to what i want is to change the behavior so that `max_returned_rows` is not applied in the `execute` method when we are are asking for a csv of query.\r\n\r\nthere are some infelicities for that approach, but i'll make a PR to make it easier to discuss.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459882902, "label": "Stream all results for arbitrary SQL and canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/526#issuecomment-1258167564", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/526", "id": 1258167564, "node_id": "IC_kwDOBm6k_c5K_h0M", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-09-26T14:57:44Z", "updated_at": "2022-09-26T15:08:36Z", "author_association": "CONTRIBUTOR", "body": "reading the database execute method i have a few questions.\r\n\r\nhttps://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/database.py#L229-L242\r\n\r\n---\r\nunless i'm missing something (which is very likely!!), the `max_returned_rows` argument doesn't actually offer any protections against running very expensive queries. \r\n\r\nIt's not like adding a `LIMIT max_rows` argument. it make sense that it isn't because, the query could already have an `LIMIT` argument. Doing something like `select * from (query) limit {max_returned_rows}` **might** be protective but wouldn't always.\r\n\r\nInstead the code executes the full original query, and if still has time it fetches out the first `max_rows + 1` rows. \r\n\r\nthis *does* offer some protection of memory exhaustion, as you won't hydrate a huge result set into python (however, there are [data flow patterns](https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113) that could avoid that too)\r\n\r\ngiven the current architecture, i don't see how creating a new connection would be use?\r\n\r\n---\r\n\r\nIf we just removed the `max_return_rows` limitation, then i think most things would be fine **except** for the QueryViews. Right now rendering, just [5000 rows takes a lot of client-side memory](https://github.com/simonw/datasette/issues/1655) so some form of pagination would be required.\r\n\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459882902, "label": "Stream all results for arbitrary SQL and canned queries"}, "performed_via_github_app": null}