issue_comments: 782747743

Comment by user 9599 (OWNER) on simonw/datasette issue #782, posted 2021-02-20T20:52:10Z:
https://github.com/simonw/datasette/issues/782#issuecomment-782747743

> Minor suggestion: rename size query param to limit, to better reflect that it's a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword?

The problem there is that `?_size=x` isn't actually doing the same thing as the SQL `limit` keyword. Consider this query:

https://latest-with-plugins.datasette.io/github?sql=select+*+from+commits - `select * from commits`

Datasette returns 1,000 results and shows a "Custom SQL query returning more than 1,000 rows" message at the top. That's the size limit kicking in - I only fetch the first 1,000 results from the cursor to avoid exhausting resources. In the JSON version of that at https://latest-with-plugins.datasette.io/github.json?sql=select+*+from+commits there's a `"truncated": true` key to let you know what happened.
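The fetch-and-flag pattern described above can be sketched roughly like this - a minimal illustration of fetching one row beyond the cap to detect truncation, not Datasette's actual implementation (function and variable names here are made up):

```python
import sqlite3

def execute_truncated(conn, sql, max_returned_rows=1000):
    # Fetch max_returned_rows + 1 rows: if we get the extra one back,
    # we know the result set was larger than the cap, without ever
    # materializing the full result set from the cursor.
    cursor = conn.execute(sql)
    rows = cursor.fetchmany(max_returned_rows + 1)
    truncated = len(rows) > max_returned_rows
    return rows[:max_returned_rows], truncated

# Usage: a table with 1,500 rows comes back capped at 1,000, flagged truncated
conn = sqlite3.connect(":memory:")
conn.execute("create table commits (id integer)")
conn.executemany("insert into commits values (?)", [(i,) for i in range(1500)])
rows, truncated = execute_truncated(conn, "select * from commits")
print(len(rows), truncated)  # 1000 True
```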

I find myself using `?_size=2` against Datasette occasionally if I know the rows being returned are really big and I don't want to load 10+MB of HTML.

This is only really a concern for arbitrary SQL queries though - for table pages such as https://latest-with-plugins.datasette.io/github/commits?_size=10 adding `?_size=10` actually puts a `limit 10` on the underlying SQL query.
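To illustrate the distinction: for table pages the size parameter can be compiled straight into the SQL, so the database itself stops after N rows. A hypothetical sketch (the helper name and quoting scheme are illustrative, not Datasette's code):

```python
import sqlite3

def table_page_sql(table, size):
    # Hypothetical helper: on a table page, ?_size=N becomes an actual
    # SQL "limit N", so the database never produces extra rows at all.
    if not table.isidentifier():
        raise ValueError("invalid table name")
    return f"select * from [{table}] limit {int(size)}"

# Usage: ?_size=10 against a 50-row table runs a query that returns 10 rows
conn = sqlite3.connect(":memory:")
conn.execute("create table commits (id integer)")
conn.executemany("insert into commits values (?)", [(i,) for i in range(50)])
rows = conn.execute(table_page_sql("commits", 10)).fetchall()
print(len(rows))  # 10
```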
