{"html_url": "https://github.com/simonw/datasette/issues/526#issuecomment-1259693536", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/526", "id": 1259693536, "node_id": "IC_kwDOBm6k_c5LFWXg", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-09-27T15:42:55Z", "updated_at": "2022-09-27T15:42:55Z", "author_association": "OWNER", "body": "It's interesting to note WHY the time limit works against this so well.\r\n\r\nThe time limit as-implemented looks like this:\r\n\r\nhttps://github.com/simonw/datasette/blob/5f9f567acbc58c9fcd88af440e68034510fb5d2b/datasette/utils/__init__.py#L181-L201\r\n\r\nThe key here is `conn.set_progress_handler(handler, n)` - which specifies that the handler function should be called every `n` SQLite operations.\r\n\r\nThe handler function then checks whether too much time has elapsed and conditionally cancels the query.\r\n\r\nThis also doubles up as a \"maximum number of operations\" guard, which is what's happening when you attempt to fetch an infinite number of rows from an infinite table.\r\n\r\nThat limit code could even be extended to say \"exit the query after either 5s or 50,000,000 operations\".\r\n\r\nI don't think that's necessary though.\r\n\r\nTo be honest, I'm having trouble with the idea of dropping `max_returned_rows`, mainly because what Datasette does (allowing arbitrary untrusted SQL queries) is dangerous, so I've designed in multiple redundant defence-in-depth mechanisms right from the start.", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 1, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 459882902, "label": "Stream all results for arbitrary SQL and canned queries"}, "performed_via_github_app": null}