
issue_comments


4 rows where issue = 481885279 and user = 9599 sorted by updated_at descending


id: 537712384
html_url: https://github.com/simonw/datasette/issues/569#issuecomment-537712384
issue_url: https://api.github.com/repos/simonw/datasette/issues/569
node_id: MDEyOklzc3VlQ29tbWVudDUzNzcxMjM4NA==
user: simonw (9599)
created_at: 2019-10-02T22:44:36Z
updated_at: 2019-10-02T22:44:36Z
author_association: OWNER
body:

I'm going to simplify things a bunch by continuing to ignore the cross-database joining issue #283 - I'll post some notes there on my latest thinking.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: More advanced connection pooling (481885279)
id: 537711455
html_url: https://github.com/simonw/datasette/issues/569#issuecomment-537711455
issue_url: https://api.github.com/repos/simonw/datasette/issues/569
node_id: MDEyOklzc3VlQ29tbWVudDUzNzcxMTQ1NQ==
user: simonw (9599)
created_at: 2019-10-02T22:41:12Z
updated_at: 2019-10-02T22:41:12Z
author_association: OWNER
body:

I'm going to refactor the execute() and execute_against_connection_in_thread() methods.

They currently live on the Datasette class, but in this new world it would make more sense for them to live on the Database, ConnectionGroup or Connection classes.

I think I'll put them on the Database class.
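One way that refactor might look, as a hypothetical sketch (this is not Datasette's actual code; the class shape, constructor, and helper names here are my assumptions): a `Database` object owns its file path and an `execute()` method that pushes the blocking SQLite work onto a thread pool so the async event loop is never blocked.

```python
import asyncio
import sqlite3
from concurrent.futures import ThreadPoolExecutor

class Database:
    """Hypothetical sketch: each Database owns its path and runs
    blocking SQLite work off the event loop in a small thread pool."""

    def __init__(self, path, num_threads=3):
        self.path = path
        self.executor = ThreadPoolExecutor(max_workers=num_threads)

    def _execute_in_thread(self, sql, params):
        # Opens a fresh connection per call; a real pool would reuse them.
        conn = sqlite3.connect(self.path)
        try:
            return conn.execute(sql, params or []).fetchall()
        finally:
            conn.close()

    async def execute(self, sql, params=None):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            self.executor, self._execute_in_thread, sql, params
        )

async def main():
    db = Database(":memory:")
    rows = await db.execute("select 1 + 1")
    print(rows)  # [(2,)]

asyncio.run(main())
```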

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: More advanced connection pooling (481885279)
id: 522238222
html_url: https://github.com/simonw/datasette/issues/569#issuecomment-522238222
issue_url: https://api.github.com/repos/simonw/datasette/issues/569
node_id: MDEyOklzc3VlQ29tbWVudDUyMjIzODIyMg==
user: simonw (9599)
created_at: 2019-08-17T13:44:24Z
updated_at: 2019-08-17T13:45:10Z
author_association: OWNER
body:

Potential API design:

```python
with pool.connection("fixtures") as conn:
    conn.set_time_limit(1000)
    conn.allow_all()
    conn.execute(...)
```

Within this block the current thread has exclusive access to the connection - which has essentially been "checked out" from the pool. When the block ends that connection is made available to be checked out by other threads.

This could accept multiple database names for the case where I want to join across databases:

```python
with pool.connection("fixtures", "dogs") as conn:
    conn.execute(...)
```
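A minimal sketch of a pool exposing this context-manager API (my sketch, not Datasette's implementation; `set_time_limit()` and `allow_all()` from the proposal would live on a connection wrapper and are omitted here). Checkout reuses an idle connection, creates a new one under a cap, or blocks on a condition variable until another thread checks one back in:

```python
import sqlite3
import threading
from contextlib import contextmanager

class ConnectionPool:
    """Sketch of the proposed API: connections are checked out for the
    duration of a `with` block and returned to the pool on exit."""

    def __init__(self, databases, max_connections=4):
        self.databases = databases  # name -> file path
        self.max_connections = max_connections
        self.idle = {name: [] for name in databases}
        self.counts = {name: 0 for name in databases}
        self.available = threading.Condition()

    @contextmanager
    def connection(self, name):
        conn = self._check_out(name)
        try:
            yield conn
        finally:
            self._check_in(name, conn)

    def _check_out(self, name):
        with self.available:
            while True:
                if self.idle[name]:
                    return self.idle[name].pop()
                if self.counts[name] < self.max_connections:
                    self.counts[name] += 1
                    # check_same_thread=False because a later checkout
                    # may happen on a different thread
                    return sqlite3.connect(
                        self.databases[name], check_same_thread=False
                    )
                self.available.wait()

    def _check_in(self, name, conn):
        with self.available:
            self.idle[name].append(conn)
            self.available.notify()

pool = ConnectionPool({"fixtures": ":memory:"})
with pool.connection("fixtures") as conn:
    print(conn.execute("select 1 + 1").fetchall())  # [(2,)]
```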

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: More advanced connection pooling (481885279)
id: 522236218
html_url: https://github.com/simonw/datasette/issues/569#issuecomment-522236218
issue_url: https://api.github.com/repos/simonw/datasette/issues/569
node_id: MDEyOklzc3VlQ29tbWVudDUyMjIzNjIxOA==
user: simonw (9599)
created_at: 2019-08-17T13:24:50Z
updated_at: 2019-08-17T13:24:50Z
author_association: OWNER
body:

I think what I want is a mechanism where a thread can say "give me a connection for database X" and it either gets back a connection to X instantly OR a new connection to X is created and returned OR it blocks (because a certain number of connections to that database exist already) until another thread returns their connection OR it times out and returns an error.
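Those four outcomes could be sketched like this (again a hypothetical sketch, with invented names; the check-and-increment of `created` is not fully race-free, and a real pool would guard it with a lock):

```python
import queue
import sqlite3

class PoolTimeout(Exception):
    """Raised when no connection becomes free within the time limit."""

class DatabasePool:
    """Sketch of the four outcomes described above, for one database:
    reuse an idle connection, create a new one under the cap,
    block until one is returned, or time out with an error."""

    def __init__(self, path, limit=4):
        self.path = path
        self.limit = limit
        self.created = 0
        self.idle = queue.LifoQueue()

    def get(self, timeout=1.0):
        # 1. An idle connection is returned instantly if one exists.
        try:
            return self.idle.get_nowait()
        except queue.Empty:
            pass
        # 2. Below the cap: create and return a new connection.
        if self.created < self.limit:
            self.created += 1
            return sqlite3.connect(self.path, check_same_thread=False)
        # 3. At the cap: block until another thread returns one...
        try:
            return self.idle.get(timeout=timeout)
        except queue.Empty:
            # 4. ...or time out and surface an error.
            raise PoolTimeout(
                f"no connection to {self.path!r} within {timeout}s"
            )

    def put(self, conn):
        self.idle.put(conn)

pool = DatabasePool(":memory:", limit=1)
first = pool.get()
try:
    pool.get(timeout=0.1)  # cap reached, nothing returned: outcome 4
except PoolTimeout as exc:
    print(exc)
```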

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: More advanced connection pooling (481885279)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
Powered by Datasette · Queries took 562.28ms · About: github-to-sqlite