
issue_comments


7 rows where author_association = "OWNER" and issue = 421546944 sorted by updated_at descending


Columns: id · html_url · issue_url · node_id · user · created_at · updated_at (sorted descending) · author_association · body · reactions · issue · performed_via_github_app

751127485 · simonw (9599) · OWNER · created 2020-12-24T22:58:05Z · updated 2020-12-24T22:58:05Z
html_url: https://github.com/simonw/datasette/issues/417#issuecomment-751127485
issue_url: https://api.github.com/repos/simonw/datasette/issues/417 · node_id: MDEyOklzc3VlQ29tbWVudDc1MTEyNzQ4NQ==

That's a great idea. I'd ruled that out because working with the different operating system versions of those is tricky, but if watchdog can handle those differences for me this could be a really good option.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Datasette Library (421546944)
586066798 · simonw (9599) · OWNER · created 2020-02-14T02:24:54Z · updated 2020-02-14T02:24:54Z
html_url: https://github.com/simonw/datasette/issues/417#issuecomment-586066798
issue_url: https://api.github.com/repos/simonw/datasette/issues/417 · node_id: MDEyOklzc3VlQ29tbWVudDU4NjA2Njc5OA==

I'm going to move this over to a draft pull request.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Datasette Library (421546944)
586065843 · simonw (9599) · OWNER · created 2020-02-14T02:20:53Z · updated 2020-02-14T02:20:53Z
html_url: https://github.com/simonw/datasette/issues/417#issuecomment-586065843
issue_url: https://api.github.com/repos/simonw/datasette/issues/417 · node_id: MDEyOklzc3VlQ29tbWVudDU4NjA2NTg0Mw==

MVP for this feature: just do it once on startup, don't scan for new files every X seconds.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Datasette Library (421546944)
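The "just do it once on startup" MVP described above could be sketched with the standard library alone. The function name and the rule of naming each database after its file stem are illustrative assumptions, not Datasette's actual implementation:

```python
from pathlib import Path

# The 16-byte header every SQLite 3 file starts with
SQLITE_MAGIC = b"SQLite format 3\x00"

def scan_once(directory):
    """One-shot scan: return {name: path} for files that look like SQLite databases."""
    found = {}
    for path in sorted(Path(directory).iterdir()):
        if not path.is_file():
            continue
        try:
            with open(path, "rb") as fp:
                if fp.read(16) != SQLITE_MAGIC:
                    continue
        except OSError:
            continue
        # Hypothetical naming rule: database name = filename without extension
        found[path.stem] = path
    return found
```

Checking the header rather than the file extension means a `.txt` file that is really a SQLite database would still be picked up, and a mis-named non-database file would be skipped.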
586047525 · simonw (9599) · OWNER · created 2020-02-14T01:03:43Z · updated 2020-02-14T01:59:02Z
html_url: https://github.com/simonw/datasette/issues/417#issuecomment-586047525
issue_url: https://api.github.com/repos/simonw/datasette/issues/417 · node_id: MDEyOklzc3VlQ29tbWVudDU4NjA0NzUyNQ==

OK, I have a plan. I'm going to try to implement this as a core Datasette feature (no plugins) with the following design:

  • You can tell Datasette "load any databases you find in this directory" by passing the --dir=path/to/dir option; Datasette will scan that directory for valid SQLite files and attach them.
  • Every 10 seconds Datasette will re-scan those directories to see if any new files have been added.
  • That 10s will be the default for a new --config directory_scan_s:10 config option. You can set this to 0 to disable scanning entirely, at which point Datasette will only run the scan once on startup.

To check if a file is valid SQLite, Datasette will first check if the first few bytes of the file are b"SQLite format 3\x00". If they are, it will open a connection to the file and attempt to run select * from sqlite_master against it. If that runs without any errors it will assume the file is usable and connect to it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Datasette Library (421546944)
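The two-step validity check described in the plan above (magic-bytes header first, then a trial select * from sqlite_master) translates almost directly to code; the function name here is made up for illustration:

```python
import sqlite3

# The 16-byte header every SQLite 3 file starts with
SQLITE_MAGIC = b"SQLite format 3\x00"

def is_usable_sqlite(path):
    """Check the magic header, then confirm the schema table is actually readable."""
    try:
        with open(path, "rb") as fp:
            if fp.read(16) != SQLITE_MAGIC:
                return False
        conn = sqlite3.connect(path)
        try:
            # If this query succeeds, the file is a usable SQLite database
            conn.execute("select * from sqlite_master").fetchall()
        finally:
            conn.close()
        return True
    except (OSError, sqlite3.DatabaseError):
        return False
```

The cheap header check filters out most non-database files without opening a connection; the sqlite_master query then catches files that start with the right bytes but are corrupt or truncated.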
586047995 · simonw (9599) · OWNER · created 2020-02-14T01:05:20Z · updated 2020-02-14T01:05:20Z
html_url: https://github.com/simonw/datasette/issues/417#issuecomment-586047995
issue_url: https://api.github.com/repos/simonw/datasette/issues/417 · node_id: MDEyOklzc3VlQ29tbWVudDU4NjA0Nzk5NQ==

I'm going to add two methods to the Datasette class to help support this work (and to enable exciting new plugin opportunities in the future):

  • datasette.add_database(name, db) - adds a new named database to the list of connected databases. db will be a Database() object, which may prove useful in the future for things like #670 and could also allow some plugins to provide in-memory SQLite databases.
  • datasette.remove_database(name)
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Datasette Library (421546944)
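A minimal sketch of how the proposed add_database() / remove_database() pair might behave, assuming the (name, db) signature described in the comment above; the real Database() object manages connections and is only stubbed here:

```python
class Database:
    """Stand-in for Datasette's Database() wrapper (stubbed for illustration)."""
    def __init__(self, path=None, memory=False):
        self.path = path
        self.memory = memory

class Datasette:
    """Sketch of the registry the two proposed methods would manage."""
    def __init__(self):
        self._databases = {}

    def add_database(self, name, db):
        # Register a named database; a plugin could call this to attach
        # an in-memory database, as the comment above suggests.
        self._databases[name] = db
        return db

    def remove_database(self, name):
        # Detach a previously registered database
        return self._databases.pop(name)
```

A directory scanner built on this pair would simply call add_database() for each newly discovered file and remove_database() for files that have disappeared.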
473312514 · simonw (9599) · OWNER · created 2019-03-15T14:42:07Z · updated 2019-03-17T22:12:30Z
html_url: https://github.com/simonw/datasette/issues/417#issuecomment-473312514
issue_url: https://api.github.com/repos/simonw/datasette/issues/417 · node_id: MDEyOklzc3VlQ29tbWVudDQ3MzMxMjUxNA==

A neat ability of Datasette Library would be if it could work with other files that have been dropped into the folder. In particular: if a user drops a CSV file into the folder, how about automatically converting that CSV file to SQLite using sqlite-utils?

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Datasette Library (421546944)
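sqlite-utils automates exactly this kind of conversion (including type detection). As a rough stdlib-only sketch of the idea, with some simplifying assumptions not in the comment above: the table is named after the file, every column is stored as TEXT, and the CSV headers are trusted as column names:

```python
import csv
import sqlite3
from pathlib import Path

def csv_to_sqlite(csv_path, db_path):
    """Load a CSV file into a SQLite table named after the file (all columns TEXT)."""
    table = Path(csv_path).stem
    with open(csv_path, newline="") as fp:
        reader = csv.reader(fp)
        headers = next(reader)
        rows = list(reader)
    conn = sqlite3.connect(db_path)
    # Headers are interpolated into the DDL, so this sketch trusts its input
    cols = ", ".join(f"[{h}] TEXT" for h in headers)
    conn.execute(f"CREATE TABLE [{table}] ({cols})")
    placeholders = ", ".join("?" for _ in headers)
    conn.executemany(f"INSERT INTO [{table}] VALUES ({placeholders})", rows)
    conn.commit()
    conn.close()
    return table
```

Hooked into the directory scanner, this would let a dropped cities.csv show up in Datasette as a queryable cities table.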
473308631 · simonw (9599) · OWNER · created 2019-03-15T14:32:13Z · updated 2019-03-15T14:32:13Z
html_url: https://github.com/simonw/datasette/issues/417#issuecomment-473308631
issue_url: https://api.github.com/repos/simonw/datasette/issues/417 · node_id: MDEyOklzc3VlQ29tbWVudDQ3MzMwODYzMQ==

This would allow Datasette to be easily used as a "data library" (like a data warehouse, but with less expectation of big-data querying technology such as Presto).

One of the things I learned at the NICAR CAR 2019 conference in Newport Beach is that there is a very real need for some kind of easily accessible data library at most newsrooms.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Datasette Library (421546944)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
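Given the schema above, the filter this page applies (OWNER comments on issue 421546944, newest update first) corresponds to a query like the following; the foreign-key REFERENCES clauses are dropped so the sketch runs standalone against an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The page's schema, minus REFERENCES clauses (users/issues tables not created here)
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [issue] INTEGER, [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")

def owner_comments(conn, issue_id):
    """Reproduce the page's filter: OWNER comments on one issue, newest update first."""
    return conn.execute(
        """
        select id, updated_at from issue_comments
        where author_association = 'OWNER' and issue = ?
        order by updated_at desc
        """,
        (issue_id,),
    ).fetchall()
```

The idx_issue_comments_issue index shown in the schema is what makes the `issue = ?` filter cheap.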
Powered by Datasette · Queries took 27.527ms · About: github-to-sqlite