issue_comments: 1259693536

Comment by simonw (OWNER) on datasette issue #526, created 2022-09-27T15:42:55Z:
https://github.com/simonw/datasette/issues/526#issuecomment-1259693536

It's interesting to note WHY the time limit works against this so well.

The time limit as-implemented looks like this:

https://github.com/simonw/datasette/blob/5f9f567acbc58c9fcd88af440e68034510fb5d2b/datasette/utils/__init__.py#L181-L201

The key here is `conn.set_progress_handler(handler, n)`, which specifies that the handler function should be called every `n` SQLite operations.

The handler function then checks to see if too much time has transpired and conditionally cancels the query.
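The mechanism described above can be sketched with Python's standard `sqlite3` module. This is not Datasette's actual implementation (see the linked source for that); the function name `execute_with_time_limit` and its parameters are hypothetical, chosen to illustrate the pattern:

```python
import sqlite3
import time


def execute_with_time_limit(conn, sql, ms=5000, n=1000):
    # SQLite calls the handler every n virtual-machine operations;
    # returning a truthy value aborts the currently running query.
    start = time.monotonic()

    def handler():
        return time.monotonic() - start > ms / 1000

    conn.set_progress_handler(handler, n)
    try:
        return conn.execute(sql).fetchall()
    finally:
        # Remove the handler so later queries are not affected
        conn.set_progress_handler(None, n)


conn = sqlite3.connect(":memory:")
try:
    # An unbounded recursive CTE: without the limit this would never return
    execute_with_time_limit(
        conn,
        "WITH RECURSIVE t(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM t) "
        "SELECT count(*) FROM t",
        ms=100,
    )
except sqlite3.OperationalError:
    print("query interrupted after ~100ms")
```

When the handler aborts a query, `sqlite3` raises an `OperationalError` ("interrupted"), which the caller can turn into a user-facing timeout message.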

This also doubles up as a "maximum number of operations" guard, which is what's happening when you attempt to fetch an infinite number of rows from an infinite table.

That limit code could even be extended to say "exit the query after either 5s or 50,000,000 operations".

I don't think that's necessary though.

To be honest I'm having trouble with the idea of dropping `max_returned_rows`, mainly because what Datasette does (allowing arbitrary untrusted SQL queries) is dangerous, so I've designed in multiple redundant defence-in-depth mechanisms right from the start.

Reactions: ❤️ 1