
issue_comments


10 rows where issue = 1217759117, "updated_at" is on date 2022-04-28 and user = 9599 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
1112668411 https://github.com/simonw/datasette/issues/1727#issuecomment-1112668411 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CUfj7 simonw 9599 2022-04-28T21:25:34Z 2022-04-28T21:25:44Z OWNER

The two most promising theories at the moment, from here and Twitter and the SQLite forum, are:

  • SQLite is I/O bound - it generally only goes as fast as it can load data from disk. Multiple connections all competing for the same file on disk are going to end up blocked at the file system layer. But maybe this means in-memory databases will perform better?
  • It's the GIL. The sqlite3 C code may release the GIL, but the bits that do things like assembling Row objects to return still happen in Python, and that Python can only run on a single core.

A couple of ways to research the in-memory theory:

  • Use a RAM disk on macOS (or Linux). https://stackoverflow.com/a/2033417/6083 has instructions - short version:

    ```
    hdiutil attach -nomount ram://$((2 * 1024 * 100))
    diskutil eraseVolume HFS+ RAMDisk name-returned-by-previous-command
    # (was /dev/disk2 when I tried it)
    cd /Volumes/RAMDisk
    cp ~/fixtures.db .
    ```

  • Copy Datasette databases into an in-memory database on startup. I built a new plugin to do that here: https://github.com/simonw/datasette-copy-to-memory - a sketch of the underlying idea is below.
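
The standard library can express that second idea directly: sqlite3.Connection.backup() copies a database page by page. A minimal sketch, assuming a fixtures.db file like the one above (this shows the general idea, not necessarily how datasette-copy-to-memory implements it):

```python
import sqlite3

# Copy an on-disk database into a shared in-memory database using the
# stdlib backup API (Python 3.7+). "fixtures.db" is the example file
# from the RAM disk steps above.
disk = sqlite3.connect("fixtures.db")
memory = sqlite3.connect("file:memdb?mode=memory&cache=shared", uri=True)
disk.backup(memory)  # copies every page of the source database
disk.close()

# Queries now read from RAM instead of the file system
print(memory.execute("SELECT count(*) FROM sqlite_master").fetchone())
```

The shared-cache URI matters here: it lets multiple connections in the same process see the same in-memory database, which a connection-per-thread model would need.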

I need to do some more, better benchmarks using these different approaches.

https://twitter.com/laurencerowe/status/1519780174560169987 also suggests:

Maybe try:

1. Copy the sqlite file to /dev/shm and rerun (all in ram.)
2. Create a CTE which calculates Fibonacci or similar so you can test something completely cpu bound (only return max value or something to avoid crossing between sqlite/Python.)

I like that second idea a lot - I could use the mandelbrot example from https://www.sqlite.org/lang_with.html#outlandish_recursive_query_examples
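
Before reaching for the mandelbrot query, even a Fibonacci-style CTE is enough to keep SQLite busy on a core. A sketch of that benchmark, with arbitrary iteration and thread counts (the modulo just keeps the integers from overflowing):

```python
import sqlite3
import threading
import time

# CPU-bound work inside SQLite that returns a single row, so almost no
# time is spent building Python objects from the results.
FIB = """
WITH RECURSIVE fib(n, a, b) AS (
  SELECT 1, 0, 1
  UNION ALL
  SELECT n + 1, b, (a + b) % 2147483647 FROM fib WHERE n < 5000000
)
SELECT max(a) FROM fib
"""

def run():
    # One connection per thread, mirroring Datasette's threading model
    conn = sqlite3.connect(":memory:")
    conn.execute(FIB).fetchone()
    conn.close()

start = time.time()
for _ in range(4):
    run()
print("serial: %.2fs" % (time.time() - start))

start = time.time()
threads = [threading.Thread(target=run) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("threaded: %.2fs" % (time.time() - start))
```

If the threaded run takes roughly a quarter of the serial time on four cores, SQLite really is executing concurrently with the GIL released; if the two timings match, the bottleneck is on the Python side.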

1111726586 https://github.com/simonw/datasette/issues/1727#issuecomment-1111726586 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQ5n6 simonw 9599 2022-04-28T04:17:16Z 2022-04-28T04:19:31Z OWNER

I could experiment with the await loop.run_in_executor(processpool_executor, fn) mechanism described in https://stackoverflow.com/a/29147750

Code examples: https://cs.github.com/?scopeName=All+repos&scope=&q=run_in_executor+ProcessPoolExecutor
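
A rough sketch of that mechanism (the database path and queries are placeholders): each query is dispatched to a worker process, which sidesteps the GIL entirely, at the cost of opening a connection per call, since sqlite3 connections can't be pickled across process boundaries.

```python
import asyncio
import sqlite3
from concurrent.futures import ProcessPoolExecutor

def run_query(db_path, sql):
    # Runs in a worker process, so the parent's GIL is not involved.
    # Connections can't cross process boundaries - open one per call.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        results = await asyncio.gather(
            loop.run_in_executor(pool, run_query, "fixtures.db", "SELECT count(*) FROM sqlite_master"),
            loop.run_in_executor(pool, run_query, "fixtures.db", "SELECT * FROM sqlite_master LIMIT 5"),
        )
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```

Process startup and result pickling add their own overhead, so this would only pay off for queries that are expensive relative to that cost.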

1111725638 https://github.com/simonw/datasette/issues/1727#issuecomment-1111725638 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQ5ZG simonw 9599 2022-04-28T04:15:15Z 2022-04-28T04:15:15Z OWNER

Useful theory from Keith Medcalf https://sqlite.org/forum/forumpost/e363c69d3441172e

This is true, but the concurrency is limited to the execution which occurs with the GIL released (that is, in the native C sqlite3 library itself). Each row (for example) can be retrieved in parallel but "constructing the python return objects for each row" will be serialized (by the GIL).

That is to say that if you have two python threads, each with their own connection, and each one is performing a select that returns 1,000,000 rows (let's say that is 25% of the candidates for each select), then the difference in execution time between executing two python threads in parallel vs a single serial thread will not be much (if even detectable at all). In fact it is possible that the multiple-threaded version takes longer to run both queries to completion because of the increased contention over a shared resource (the GIL).

So maybe this is a GIL thing.

I should test with some expensive SQL queries (maybe big aggregations against large tables) and see if I can spot an improvement there.
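
One way to probe that theory (a sketch; the table and row count are made up for the test): run the same scan in parallel threads twice, once returning a single aggregate and once returning every row. The aggregate keeps almost all the work in C with the GIL released; the full fetch builds a Python tuple per row under the GIL.

```python
import sqlite3
import threading
import time

# Build a throwaway table to scan
setup = sqlite3.connect("bench.db")
setup.execute("CREATE TABLE IF NOT EXISTS big (n INTEGER)")
if setup.execute("SELECT count(*) FROM big").fetchone()[0] == 0:
    setup.executemany("INSERT INTO big VALUES (?)", ((i,) for i in range(1_000_000)))
    setup.commit()
setup.close()

def timed(sql, n_threads=4):
    def work():
        conn = sqlite3.connect("bench.db")
        conn.execute(sql).fetchall()
        conn.close()
    start = time.time()
    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

# Work stays inside SQLite; one row comes back per thread
print("aggregate:  %.2fs" % timed("SELECT sum(n) FROM big"))
# A million tuples built per thread, serialized by the GIL
print("full fetch: %.2fs" % timed("SELECT n FROM big"))
```

If Keith's theory holds, the aggregate version should scale across threads while the full fetch should not.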

1111699175 https://github.com/simonw/datasette/issues/1727#issuecomment-1111699175 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQy7n simonw 9599 2022-04-28T03:19:48Z 2022-04-28T03:20:08Z OWNER

I ran py-spy and then hammered refresh a bunch of times on the http://127.0.0.1:8856/github/commits?_facet=repo&_facet=committer&_trace=1&_noparallel= page - it generated this SVG profile for me.

The area on the right of the profile shows the threads running the DB queries.

Interactive version here: https://static.simonwillison.net/static/2022/datasette-parallel-profile.svg

1111683539 https://github.com/simonw/datasette/issues/1727#issuecomment-1111683539 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQvHT simonw 9599 2022-04-28T02:47:57Z 2022-04-28T02:47:57Z OWNER

Maybe this is the Python GIL after all?

I've been hoping that the GIL won't be an issue because the sqlite3 module releases the GIL for the duration of the execution of a SQL query - see https://github.com/python/cpython/blob/f348154c8f8a9c254503306c59d6779d4d09b3a9/Modules/_sqlite/cursor.c#L749-L759

So I've been hoping this means that SQLite code itself can run concurrently on multiple cores even when Python threads cannot.

But maybe I'm misunderstanding how that works?
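
One way to check that understanding directly (a sketch; the query size is arbitrary): run a pure-Python counter in a background thread while a slow query executes. If execute() really releases the GIL, the counter keeps advancing during the query; if it stalls near zero, the GIL was held.

```python
import sqlite3
import threading
import time

# A query that spends a long time inside the SQLite C library
SLOW = """
WITH RECURSIVE c(n) AS (SELECT 1 UNION ALL SELECT n + 1 FROM c WHERE n < 20000000)
SELECT count(*) FROM c
"""

progress = 0

def busy_loop(stop):
    # Pure-Python work that needs the GIL for every iteration
    global progress
    while not stop.is_set():
        progress += 1

stop = threading.Event()
thread = threading.Thread(target=busy_loop, args=(stop,))
thread.start()

conn = sqlite3.connect(":memory:")
start = time.time()
conn.execute(SLOW).fetchone()
elapsed = time.time() - start
stop.set()
thread.join()

print("query took %.2fs, counter reached %d" % (elapsed, progress))
```

A counter in the millions would confirm that the sqlite3 module does release the GIL for the duration of the query itself.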

1111681513 https://github.com/simonw/datasette/issues/1727#issuecomment-1111681513 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQunp simonw 9599 2022-04-28T02:44:26Z 2022-04-28T02:44:26Z OWNER

I could try py-spy top, which I previously used in https://github.com/simonw/datasette/issues/1673

1111661331 https://github.com/simonw/datasette/issues/1727#issuecomment-1111661331 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQpsT simonw 9599 2022-04-28T02:07:31Z 2022-04-28T02:07:31Z OWNER

Asked on the SQLite forum about this here: https://sqlite.org/forum/forumpost/ffbfa9f38e

1111602802 https://github.com/simonw/datasette/issues/1727#issuecomment-1111602802 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQbZy simonw 9599 2022-04-28T00:21:35Z 2022-04-28T00:21:35Z OWNER

Tried this but I'm getting back an empty JSON array of traces at the bottom of the page most of the time (intermittently it works correctly):

```diff
diff --git a/datasette/database.py b/datasette/database.py
index ba594a8..d7f9172 100644
--- a/datasette/database.py
+++ b/datasette/database.py
@@ -7,7 +7,7 @@ import sys
 import threading
 import uuid
 
-from .tracer import trace
+from .tracer import trace, trace_child_tasks
 from .utils import (
     detect_fts,
     detect_primary_keys,
@@ -207,30 +207,31 @@ class Database:
                 time_limit_ms = custom_time_limit
 
             with sqlite_timelimit(conn, time_limit_ms):
-                try:
-                    cursor = conn.cursor()
-                    cursor.execute(sql, params if params is not None else {})
-                    max_returned_rows = self.ds.max_returned_rows
-                    if max_returned_rows == page_size:
-                        max_returned_rows += 1
-                    if max_returned_rows and truncate:
-                        rows = cursor.fetchmany(max_returned_rows + 1)
-                        truncated = len(rows) > max_returned_rows
-                        rows = rows[:max_returned_rows]
-                    else:
-                        rows = cursor.fetchall()
-                        truncated = False
-                except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:
-                    if e.args == ("interrupted",):
-                        raise QueryInterrupted(e, sql, params)
-                    if log_sql_errors:
-                        sys.stderr.write(
-                            "ERROR: conn={}, sql = {}, params = {}: {}\n".format(
-                                conn, repr(sql), params, e
-                            )
-                        )
-                        sys.stderr.flush()
-                    raise
+                with trace("sql", database=self.name, sql=sql.strip(), params=params):
+                    try:
+                        cursor = conn.cursor()
+                        cursor.execute(sql, params if params is not None else {})
+                        max_returned_rows = self.ds.max_returned_rows
+                        if max_returned_rows == page_size:
+                            max_returned_rows += 1
+                        if max_returned_rows and truncate:
+                            rows = cursor.fetchmany(max_returned_rows + 1)
+                            truncated = len(rows) > max_returned_rows
+                            rows = rows[:max_returned_rows]
+                        else:
+                            rows = cursor.fetchall()
+                            truncated = False
+                    except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:
+                        if e.args == ("interrupted",):
+                            raise QueryInterrupted(e, sql, params)
+                        if log_sql_errors:
+                            sys.stderr.write(
+                                "ERROR: conn={}, sql = {}, params = {}: {}\n".format(
+                                    conn, repr(sql), params, e
+                                )
+                            )
+                            sys.stderr.flush()
+                        raise
 
             if truncate:
                 return Results(rows, truncated, cursor.description)
@@ -238,9 +239,8 @@ class Database:
             else:
                 return Results(rows, False, cursor.description)
 
-        with trace("sql", database=self.name, sql=sql.strip(), params=params):
-            results = await self.execute_fn(sql_operation_in_thread)
-        return results
+        with trace_child_tasks():
+            return await self.execute_fn(sql_operation_in_thread)
 
     @property
     def size(self):
```

1111597176 https://github.com/simonw/datasette/issues/1727#issuecomment-1111597176 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQaB4 simonw 9599 2022-04-28T00:11:44Z 2022-04-28T00:11:44Z OWNER

Though it would be interesting to also have the trace reveal how much time is spent in the functions that wrap that core SQL - the stuff that is being measured at the moment.

I have a hunch that this could help solve the overarching performance mystery.

1111595319 https://github.com/simonw/datasette/issues/1727#issuecomment-1111595319 https://api.github.com/repos/simonw/datasette/issues/1727 IC_kwDOBm6k_c5CQZk3 simonw 9599 2022-04-28T00:09:45Z 2022-04-28T00:11:01Z OWNER

Here's where read queries are instrumented: https://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L241-L242

So the instrumentation is actually capturing quite a bit of Python activity before it gets to SQLite:

https://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L179-L190

And then:

https://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L204-L233

Ideally I'd like that trace() block to wrap just the cursor.execute() and cursor.fetchmany(...) or cursor.fetchall() calls.


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);