issue_comments

20 rows where "created_at" is on date 2023-02-07 sorted by updated_at descending

user 5

  • simonw 8
  • 4l1fe 6
  • cldellow 4
  • psychemedia 1
  • mcarpenter 1

issue 4

  • Transformation type `--type DATETIME` 10
  • Refactor out the keyset pagination code 8
  • First proof-of-concept of Datasette Library 1
  • rows_from_file() raises confusing error if file-like object is not in binary mode 1

author_association 3

  • NONE 10
  • OWNER 8
  • CONTRIBUTOR 2
id html_url issue_url node_id user created_at updated_at author_association body reactions issue performed_via_github_app
1421600789 https://github.com/simonw/datasette/issues/2019#issuecomment-1421600789 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5Uu-gV simonw 9599 2023-02-07T23:12:40Z 2023-02-07T23:16:20Z OWNER

Most complicated example of a paginated query: https://latest.datasette.io/fixtures?sql=select%0D%0A++pk1%2C%0D%0A++pk2%2C%0D%0A++content%2C%0D%0A++sortable%2C%0D%0A++sortable_with_nulls%2C%0D%0A++sortable_with_nulls_2%2C%0D%0A++text%0D%0Afrom%0D%0A++sortable%0D%0Awhere%0D%0A++(%0D%0A++++sortable_with_nulls+is+null%0D%0A++++and+(%0D%0A++++++(pk1+%3E+%3Ap0)%0D%0A++++++or+(%0D%0A++++++++pk1+%3D+%3Ap0%0D%0A++++++++and+pk2+%3E+%3Ap1%0D%0A++++++)%0D%0A++++)%0D%0A++)%0D%0Aorder+by%0D%0A++sortable_with_nulls+desc%2C%0D%0A++pk1%2C%0D%0A++pk2%0D%0Alimit%0D%0A++101&p0=h&p1=r

```sql
select
  pk1,
  pk2,
  content,
  sortable,
  sortable_with_nulls,
  sortable_with_nulls_2,
  text
from
  sortable
where
  (
    sortable_with_nulls is null
    and (
      (pk1 > :p0)
      or (
        pk1 = :p0
        and pk2 > :p1
      )
    )
  )
order by
  sortable_with_nulls desc,
  pk1,
  pk2
limit
  101
```

Generated by this page: https://latest.datasette.io/fixtures/sortable?_next=%24null%2Ch%2Cr&_sort_desc=sortable_with_nulls

The `_next=` parameter there decodes as `$null,h,r` - the components are tilde-encoded, so the `$null` null marker can be distinguished from an actual value of `$null`, which would be represented as `~24null`.
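
To make that concrete, here is a minimal sketch of the decoding described above; it is illustrative only, not Datasette's actual implementation:

```python
# Minimal sketch of the decoding described above; illustrative only, not Datasette's
# actual implementation. Components are comma-separated and tilde-encoded: reserved
# characters are written as ~XX, where XX is the character's hex byte value.
def tilde_decode(value: str) -> str:
    out = []
    i = 0
    while i < len(value):
        if value[i] == "~" and i + 2 < len(value):
            out.append(chr(int(value[i + 1 : i + 3], 16)))
            i += 3
        else:
            out.append(value[i])
            i += 1
    return "".join(out)


def decode_next_token(token: str) -> list:
    # "$null" is a sentinel meaning "this sort value was NULL"; a literal value of
    # "$null" would have arrived tilde-encoded as "~24null" instead.
    return [None if part == "$null" else tilde_decode(part) for part in token.split(",")]


print(decode_next_token("$null,h,r"))    # [None, 'h', 'r']
print(decode_next_token("~24null,h,r"))  # ['$null', 'h', 'r']
```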

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  
1421571810 https://github.com/simonw/sqlite-utils/issues/520#issuecomment-1421571810 https://api.github.com/repos/simonw/sqlite-utils/issues/520 IC_kwDOCGYnMM5Uu3bi mcarpenter 167893 2023-02-07T22:43:09Z 2023-02-07T22:43:09Z CONTRIBUTOR

Hey, isn't this essentially the same issue as #448?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
rows_from_file() raises confusing error if file-like object is not in binary mode 1516644980  
1421274434 https://github.com/simonw/datasette/issues/2019#issuecomment-1421274434 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5Utu1C simonw 9599 2023-02-07T18:42:42Z 2023-02-07T18:42:42Z OWNER

I'm going to build completely separate tests for this in test_pagination.py.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  
1421177666 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421177666 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5UtXNC 4l1fe 21095447 2023-02-07T17:39:00Z 2023-02-07T17:39:00Z NONE

> lets users make schema changes, so it's important to me that the tool work in a non-surprising way -- if you ask for a column of type X, you should get type X. If the column or table previously had CHECK constraints, they shouldn't be silently removed

I understand your concern. Let's see whether we get a reply on it, and I'll close the issue a little later.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1421081939 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421081939 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5Us_1T cldellow 193185 2023-02-07T16:42:25Z 2023-02-07T16:43:42Z NONE

Ha, yes, I might end up making something very niche. That's OK.

I'm building a UI for Datasette that lets users make schema changes, so it's important to me that the tool work in a non-surprising way -- if you ask for a column of type X, you should get type X. If the column or table previously had CHECK constraints, they shouldn't be silently removed. And so on. I had hoped that I could just lean on sqlite-utils, but I think it's a little too surprising.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1421055590 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421055590 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5Us5Zm 4l1fe 21095447 2023-02-07T16:25:31Z 2023-02-07T16:25:31Z NONE

> Ah, it looks like that is controlled by this dict: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L178
>
> I suspect you could overwrite the datetime entry to achieve what you want

And thank you for pointing me to it. At the least, I can make a monkey patch for my needs...
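
A monkey patch along those lines might look something like the following sketch; note that `COLUMN_TYPE_MAPPING` is an assumption about the name of the dict linked above, so verify it against `db.py` before relying on it:

```python
# Hypothetical monkey patch (assumes the dict linked above is named COLUMN_TYPE_MAPPING):
# map Python datetime to a DATETIME column type before running the transform.
from datetime import datetime

import sqlite_utils

sqlite_utils.db.COLUMN_TYPE_MAPPING[datetime] = "DATETIME"  # assumption: verify this name

db = sqlite_utils.Database("events.sqlite")
db["cards.chunk.get"].transform(types={"timestamp": datetime})
```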

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1421052195 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421052195 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5Us4kj 4l1fe 21095447 2023-02-07T16:23:17Z 2023-02-07T16:23:57Z NONE

Isn't your suggestion too fundamental for the utility?

The greater the flexibility, the greater the complexity. Your idea definitely makes sense, but how often do you make schema changes? And how many people could benefit from it, do you think?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1421033725 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421033725 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5Us0D9 cldellow 193185 2023-02-07T16:12:13Z 2023-02-07T16:12:13Z NONE

I think the bigger issue is that sqlite-utils mixes mechanism (it implements the 12-step way to alter SQLite tables) and policy (it has an opinionated stance on what column types should be used).

That might be a design choice to make it accessible to users by providing a reasonable set of defaults, but it doesn't quite fit my use case.

It might make sense to extract a separate library that provides just the mechanisms, and then sqlite-utils would sit on top of that library with its opinionated set of policies.

That would be a very big change, though.

I might take a stab at extracting the library, but just for the table schema migration piece, not all the other features that sqlite-utils supports. I wouldn't expect sqlite-utils to depend on it.

Part of my motivation is that I want to provide some other abilities, too, like support for CHECK constraints. I see that the issue in this repo (https://github.com/simonw/sqlite-utils/issues/358) proposes a bunch of short-hand constraints, which I wouldn't want to accidentally expose to people -- I want a layer that is a 1:1 mapping to SQLite.
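
For context, the "mechanism" half is essentially the table-rebuild procedure documented in the SQLite manual. Here is a rough, self-contained sketch of that dance; the table and column names are invented purely for illustration:

```python
# Rough sketch of the SQLite-documented rebuild procedure that sqlite-utils'
# transform() automates; the table and columns here are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price TEXT)")
conn.execute("INSERT INTO items (price) VALUES ('1.5'), ('2.0')")
conn.commit()

conn.execute("PRAGMA foreign_keys = OFF")
with conn:  # perform the whole rebuild in one transaction
    # 1. Create the new table with the desired schema (here: REAL column plus a CHECK).
    conn.execute(
        "CREATE TABLE items_new (id INTEGER PRIMARY KEY, price REAL CHECK (price >= 0))"
    )
    # 2. Copy the data across.
    conn.execute("INSERT INTO items_new (id, price) SELECT id, price FROM items")
    # 3. Drop the old table and rename the new one into place.
    conn.execute("DROP TABLE items")
    conn.execute("ALTER TABLE items_new RENAME TO items")
    # 4. Recreate any indexes, triggers and views here, then verify foreign keys.
    conn.execute("PRAGMA foreign_key_check")
conn.execute("PRAGMA foreign_keys = ON")

print(conn.execute("SELECT sql FROM sqlite_master WHERE name = 'items'").fetchone()[0])
```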

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1421022917 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1421022917 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5UsxbF 4l1fe 21095447 2023-02-07T16:06:03Z 2023-02-07T16:08:58Z NONE

> Do you see a way to enable it without affecting existing users or bumping the major version number?

I don't see a clean solution, only extending the code with a side variable that tells us we want to apply the advanced types instead of the basic ones.

It could be a similar command, like `transform-v2 --type column DATETIME`, or a CLI option, `transform --adv-type column DATETIME`, together with a dict that contains the advanced types. Then, knowing that we are running an advanced command, we take that dictionary, wrap the current and new dictionaries in a superdict, and work with it everywhere according to that knowledge. This way shouldn't affect users on previous versions of the library, and it would have to be merged in the next major release.

But this approach looks like bad design, too messy.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1420992261 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420992261 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5Usp8F cldellow 193185 2023-02-07T15:45:58Z 2023-02-07T15:45:58Z NONE

I'd support that, but I'm not the author of this library.

One challenge is that it would be a breaking change. Do you see a way to enable it without affecting existing users or bumping the major version number?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1420966995 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420966995 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5UsjxT 4l1fe 21095447 2023-02-07T15:29:28Z 2023-02-07T15:29:28Z NONE

I could, of course.

Is it worth bringing such an improvement to the library?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1420941334 https://github.com/simonw/datasette/pull/564#issuecomment-1420941334 https://api.github.com/repos/simonw/datasette/issues/564 IC_kwDOBm6k_c5UsdgW psychemedia 82988 2023-02-07T15:14:10Z 2023-02-07T15:14:10Z CONTRIBUTOR

Is this feature covered by any more recent updates to datasette, or via any plugins that you're aware of?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
First proof-of-concept of Datasette Library 473288428  
1420809773 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420809773 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5Ur9Yt cldellow 193185 2023-02-07T13:53:01Z 2023-02-07T13:53:01Z NONE

Ah, it looks like that is controlled by this dict: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L178

I suspect you could overwrite the datetime entry to achieve what you want

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1420496447 https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1420496447 https://api.github.com/repos/simonw/sqlite-utils/issues/524 IC_kwDOCGYnMM5Uqw4_ 4l1fe 21095447 2023-02-07T09:57:38Z 2023-02-07T09:57:38Z NONE

> That said, it looks like the check is only enforced at the CLI level. If you use the API directly, I think it'll work.

It works, but the column becomes TEXT:

```python
In [1]: import sqlite_utils

In [2]: db = sqlite_utils.Database('events.sqlite')

In [3]: table = db['cards.chunk.get']

In [4]: table.columns_dict
Out[4]:
{'id': int,
 'timestamp': float,
 'data_chunk_number': int,
 'user_id': str,
 'meta_duplication_source_id': int,
 'context_sort_attribute': str,
 'context_sort_order': str}

In [5]: from datetime import datetime

In [7]: table.transform(types={'timestamp': datetime})

In [8]: table.columns_dict
Out[8]:
{'id': int,
 'timestamp': str,
 'data_chunk_number': int,
 'user_id': str,
 'meta_duplication_source_id': int,
 'context_sort_attribute': str,
 'context_sort_order': str}
```

```bash
❯ sqlite-utils schema events.sqlite cards.chunk.get
CREATE TABLE "cards.chunk.get" (
   [id] INTEGER PRIMARY KEY NOT NULL,
   [timestamp] TEXT,
   ...
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Transformation type `--type DATETIME` 1572766460  
1420109153 https://github.com/simonw/datasette/issues/2019#issuecomment-1420109153 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5UpSVh simonw 9599 2023-02-07T02:32:36Z 2023-02-07T02:32:36Z OWNER

Doing this as a class makes sense to me. There are a few steps:

  • Instantiate the class with the information it needs, which includes sort order, page size, tiebreaker columns and SQL query and parameters
  • Generate the new SQL query that will actually be executed - maybe this takes the optional _next parameter? This returns the SQL and params that should be executed, where the SQL now includes pagination logic plus order by and limit
  • The calling code then gets to execute the SQL query to fetch the rows
  • Last step: those rows are passed to a paginator method which returns (rows, next), where rows is the list of rows truncated to the correct length (really just with the last one cut off if the list is too long) and next is either None or a token, depending on whether there should be a next page.
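
A hypothetical sketch of what that class could look like, simplified to one ascending sort column plus a single tiebreaker column; the names and signatures are illustrative only, not the API Datasette actually ended up with:

```python
# Hypothetical sketch of the class described above, simplified to one ascending sort
# column plus a single tiebreaker column. Names and signatures are illustrative only.
import sqlite3


class KeysetPaginator:
    def __init__(self, sql, params, sort_column, tiebreaker_column, page_size=100):
        self.sql = sql
        self.params = dict(params)
        self.sort_column = sort_column
        self.tiebreaker_column = tiebreaker_column
        self.page_size = page_size

    def sql_for_page(self, next_token=None):
        "Return (sql, params) for one page; next_token is (sort_value, tiebreak_value)."
        sql = "select * from ({})".format(self.sql)
        params = dict(self.params)
        if next_token is not None:
            # Parameter names chosen so they won't collide with the caller's params.
            sql += " where ({col} > :kp0 or ({col} = :kp0 and {tie} > :kp1))".format(
                col=self.sort_column, tie=self.tiebreaker_column
            )
            params["kp0"], params["kp1"] = next_token
        sql += " order by {}, {} limit {}".format(
            self.sort_column, self.tiebreaker_column, self.page_size + 1
        )
        return sql, params

    def paginate(self, rows):
        "Truncate fetched rows to page_size and return (rows, next_token or None)."
        if len(rows) > self.page_size:
            rows = rows[: self.page_size]
            last = rows[-1]
            return rows, (last[self.sort_column], last[self.tiebreaker_column])
        return rows, None


# Demo against an in-memory table:
db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row
db.execute("create table t (id integer primary key, name text)")
db.executemany("insert into t (name) values (?)", [("row%d" % i,) for i in range(7)])

paginator = KeysetPaginator("select id, name from t", {}, "name", "id", page_size=3)
token = None
while True:
    sql, params = paginator.sql_for_page(token)
    rows, token = paginator.paginate(db.execute(sql, params).fetchall())
    print([row["name"] for row in rows])  # 3 pages: 3 + 3 + 1 rows
    if token is None:
        break
```

Fetching page_size + 1 rows is what lets the final step know whether a next page exists without a separate count query.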
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  
1420106315 https://github.com/simonw/datasette/issues/2019#issuecomment-1420106315 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5UpRpL simonw 9599 2023-02-07T02:28:03Z 2023-02-07T02:28:36Z OWNER

So I think I can write an abstraction that applies keyset pagination to ANY arbitrary SQL query provided it is given the query, the existing params (so it can pick names for the new params that won't overlap with them), the desired sort order, any existing _next token AND the columns that should be used to tie-break any duplicates.

Those tie breakers will be either the primary key(s) or rowid if none are provided.

What about the case of SQL views, where offset/limit should be used instead? I'm inclined to have that as a separate pagination abstraction entirely, with the calling code deciding which pagination helper to use based on if keyset pagination makes sense or not.

Might be easier to design a class structure for this starting with OffsetPaginator, then using that to inform the design of KeysetPaginator.

Might put these in datasette.utils.pagination to start off with, then maybe extract them out to sqlite-utils later once they've proven themselves.
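
For the views case, the offset/limit flavour is much simpler; a hypothetical OffsetPaginator sketch along the same lines (again, illustrative names only, not the eventual datasette.utils.pagination API):

```python
# Hypothetical offset/limit counterpart; illustrative names only.
class OffsetPaginator:
    def __init__(self, sql, params, page_size=100):
        self.sql = sql
        self.params = dict(params)
        self.page_size = page_size

    def sql_for_page(self, next_token=None):
        "next_token is simply the integer offset to start from."
        offset = int(next_token or 0)
        sql = "select * from ({}) limit {} offset {}".format(
            self.sql, self.page_size + 1, offset
        )
        return sql, dict(self.params)

    def paginate(self, rows, next_token=None):
        "Truncate to page_size; the next token is the previous offset plus page_size."
        offset = int(next_token or 0)
        if len(rows) > self.page_size:
            return rows[: self.page_size], offset + self.page_size
        return rows, None
```

The calling code would run the same fetch-then-truncate loop as in the keyset sketch above, just passing the integer offset around as the token.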

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  
1420104254 https://github.com/simonw/datasette/issues/2019#issuecomment-1420104254 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5UpRI- simonw 9599 2023-02-07T02:24:46Z 2023-02-07T02:24:46Z OWNER

Even more complicated: https://latest.datasette.io/fixtures/sortable?sortable_with_nulls__notnull=1&_next=0~2E692704598586882%2Ce%2Cr&_sort=sortable_with_nulls_2

The rewritten SQL for that is:

```sql
select * from (
  select pk1, pk2, content, sortable, sortable_with_nulls, sortable_with_nulls_2, text
  from sortable
  where "sortable_with_nulls" is not null
)
where (
  sortable_with_nulls_2 > :p2
  or (sortable_with_nulls_2 = :p2 and ((pk1 > :p0) or (pk1 = :p0 and pk2 > :p1)))
)
order by sortable_with_nulls_2, pk1, pk2
limit 101
```

And it still has the same number of explain steps as the current SQL without the subselect.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  
1420101175 https://github.com/simonw/datasette/issues/2019#issuecomment-1420101175 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5UpQY3 simonw 9599 2023-02-07T02:22:11Z 2023-02-07T02:22:11Z OWNER

A more complex example: https://latest.datasette.io/fixtures/sortable?_next=0~2E2650566289400591%2Ca%2Cu&_sort=sortable_with_nulls_2

SQL:

```sql
select pk1, pk2, content, sortable, sortable_with_nulls, sortable_with_nulls_2, text
from sortable
where (
  sortable_with_nulls_2 > :p2
  or (sortable_with_nulls_2 = :p2 and ((pk1 > :p0) or (pk1 = :p0 and pk2 > :p1)))
)
order by sortable_with_nulls_2, pk1, pk2
limit 101
```

https://latest.datasette.io/fixtures?sql=select+pk1%2C+pk2%2C+content%2C+sortable%2C+sortable_with_nulls%2C+sortable_with_nulls_2%2C+text+from+sortable+where+%28sortable_with_nulls_2+%3E+%3Ap2+or+%28sortable_with_nulls_2+%3D+%3Ap2+and+%28%28pk1+%3E+%3Ap0%29%0A++or%0A%28pk1+%3D+%3Ap0+and+pk2+%3E+%3Ap1%29%29%29%29+order+by+sortable_with_nulls_2%2C+pk1%2C+pk2+limit+101&p0=a&p1=u&p2=0.2650566289400591

Here's the explain: 49 steps long https://latest.datasette.io/fixtures?sql=explain+select+pk1%2C+pk2%2C+content%2C+sortable%2C+sortable_with_nulls%2C+sortable_with_nulls_2%2C+text+from+sortable+where+%28sortable_with_nulls_2+%3E+%3Ap2+or+%28sortable_with_nulls_2+%3D+%3Ap2+and+%28%28pk1+%3E+%3Ap0%29%0D%0A++or%0D%0A%28pk1+%3D+%3Ap0+and+pk2+%3E+%3Ap1%29%29%29%29+order+by+sortable_with_nulls_2%2C+pk1%2C+pk2+limit+101&p2=0.2650566289400591&p0=a&p1=u

Rewritten with a subselect:

```sql
select * from (
  select pk1, pk2, content, sortable, sortable_with_nulls, sortable_with_nulls_2, text
  from sortable
)
where (
  sortable_with_nulls_2 > :p2
  or (sortable_with_nulls_2 = :p2 and ((pk1 > :p0) or (pk1 = :p0 and pk2 > :p1)))
)
order by sortable_with_nulls_2, pk1, pk2
limit 101
```

https://latest.datasette.io/fixtures?sql=select+*+from+(%0D%0A++select+pk1%2C+pk2%2C+content%2C+sortable%2C+sortable_with_nulls%2C+sortable_with_nulls_2%2C+text+from+sortable%0D%0A)%0D%0Awhere+(sortable_with_nulls_2+%3E+%3Ap2+or+(sortable_with_nulls_2+%3D+%3Ap2+and+((pk1+%3E+%3Ap0)%0D%0A++or%0D%0A(pk1+%3D+%3Ap0+and+pk2+%3E+%3Ap1))))+order+by+sortable_with_nulls_2%2C+pk1%2C+pk2+limit+101&p2=0.2650566289400591&p0=a&p1=u

And here's the explain for that - also 49 steps: https://latest.datasette.io/fixtures?sql=explain+select+*+from+%28%0D%0A++select+pk1%2C+pk2%2C+content%2C+sortable%2C+sortable_with_nulls%2C+sortable_with_nulls_2%2C+text+from+sortable%0D%0A%29%0D%0Awhere+%28sortable_with_nulls_2+%3E+%3Ap2+or+%28sortable_with_nulls_2+%3D+%3Ap2+and+%28%28pk1+%3E+%3Ap0%29%0D%0A++or%0D%0A%28pk1+%3D+%3Ap0+and+pk2+%3E+%3Ap1%29%29%29%29+order+by+sortable_with_nulls_2%2C+pk1%2C+pk2+limit+101&p2=0.2650566289400591&p0=a&p1=u

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  
1420094396 https://github.com/simonw/datasette/issues/2019#issuecomment-1420094396 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5UpOu8 simonw 9599 2023-02-07T02:18:11Z 2023-02-07T02:19:16Z OWNER

For the SQL underlying this page (the second page in that compound primary key paginated sequence): https://latest.datasette.io/fixtures/compound_three_primary_keys?_next=a%2Cd%2Cv

The explain for the default query: https://latest.datasette.io/fixtures?sql=explain+select%0D%0A++pk1%2C%0D%0A++pk2%2C%0D%0A++pk3%2C%0D%0A++content%0D%0Afrom%0D%0A++compound_three_primary_keys%0D%0Awhere%0D%0A++%28%0D%0A++++%28pk1+%3E+%3Ap0%29%0D%0A++++or+%28%0D%0A++++++pk1+%3D+%3Ap0%0D%0A++++++and+pk2+%3E+%3Ap1%0D%0A++++%29%0D%0A++++or+%28%0D%0A++++++pk1+%3D+%3Ap0%0D%0A++++++and+pk2+%3D+%3Ap1%0D%0A++++++and+pk3+%3E+%3Ap2%0D%0A++++%29%0D%0A++%29%0D%0Aorder+by%0D%0A++pk1%2C%0D%0A++pk2%2C%0D%0A++pk3%0D%0Alimit%0D%0A++101&p0=a&p1=d&p2=v

The explain for that query rewritten as this:

```sql
explain select * from (
  select pk1, pk2, pk3, content
  from compound_three_primary_keys
)
where (
  (pk1 > :p0)
  or (pk1 = :p0 and pk2 > :p1)
  or (pk1 = :p0 and pk2 = :p1 and pk3 > :p2)
)
order by pk1, pk2, pk3
limit 101
```

https://latest.datasette.io/fixtures?sql=explain+select+*+from+%28select+%0D%0A++pk1%2C%0D%0A++pk2%2C%0D%0A++pk3%2C%0D%0A++content%0D%0Afrom%0D%0A++compound_three_primary_keys%0D%0A%29%0D%0A++where%0D%0A++%28%0D%0A++++%28pk1+%3E+%3Ap0%29%0D%0A++++or+%28%0D%0A++++++pk1+%3D+%3Ap0%0D%0A++++++and+pk2+%3E+%3Ap1%0D%0A++++%29%0D%0A++++or+%28%0D%0A++++++pk1+%3D+%3Ap0%0D%0A++++++and+pk2+%3D+%3Ap1%0D%0A++++++and+pk3+%3E+%3Ap2%0D%0A++++%29%0D%0A++%29%0D%0Aorder+by%0D%0A++pk1%2C%0D%0A++pk2%2C%0D%0A++pk3%0D%0Alimit%0D%0A++101&p0=a&p1=d&p2=v

Both explains have 31 steps and look pretty much identical.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  
1420088670 https://github.com/simonw/datasette/issues/2019#issuecomment-1420088670 https://api.github.com/repos/simonw/datasette/issues/2019 IC_kwDOBm6k_c5UpNVe simonw 9599 2023-02-07T02:14:35Z 2023-02-07T02:14:35Z OWNER

Maybe the correct level of abstraction here is that pagination is something that happens to a SQL query that is defined as SQL and params, without an order by or limit. That's then wrapped in a sub-select and those things are added to it, plus the necessary where clauses depending on the page.

Need to check that the query plan for pagination of a subquery isn't slower than the plan for pagination as it works today.
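
As a rough illustration of that wrapping idea (a sketch only, not Datasette's implementation; the where clause and order by are assumed to be produced by the keyset logic for the requested page):

```python
# Rough sketch of the sub-select wrapping described above; purely illustrative.
# where_clause and order_by are assumed to come from the keyset pagination logic.
def wrap_for_pagination(inner_sql, where_clause, order_by, limit):
    sql = "select * from ({})".format(inner_sql)
    if where_clause:
        sql += " where {}".format(where_clause)
    sql += " order by {} limit {}".format(order_by, limit)
    return sql


print(
    wrap_for_pagination(
        "select pk1, pk2, content from compound_three_primary_keys",
        "(pk1 > :p0) or (pk1 = :p0 and pk2 > :p1)",
        "pk1, pk2",
        101,
    )
)
```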

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Refactor out the keyset pagination code 1573424830  


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);