
issue_comments


3 rows where issue = 1060631257 sorted by updated_at descending



id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
1151887842 https://github.com/simonw/datasette/issues/1528#issuecomment-1151887842 https://api.github.com/repos/simonw/datasette/issues/1528 IC_kwDOBm6k_c5EqGni eyeseast 25778 2022-06-10T03:23:08Z 2022-06-10T03:23:08Z CONTRIBUTOR

I just put together a version of this in a plugin: https://github.com/eyeseast/datasette-query-files. Happy to have any feedback.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add new `"sql_file"` key to Canned Queries in metadata? 1060631257  
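For context, the `"sql_file"` key proposed in this issue might look something like the following in `metadata.json`. This is a hypothetical sketch of the syntax under discussion, not a key Datasette core supports; the database name, query name, and file path are invented:

```json
{
    "databases": {
        "mydb": {
            "queries": {
                "recent_comments": {
                    "sql_file": "sql/recent_comments.sql",
                    "title": "Recent comments"
                }
            }
        }
    }
}
```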
988468238 https://github.com/simonw/datasette/issues/1528#issuecomment-988468238 https://api.github.com/repos/simonw/datasette/issues/1528 IC_kwDOBm6k_c466tQO 20after4 30934 2021-12-08T03:35:45Z 2021-12-08T03:35:45Z NONE

FWIW I implemented something similar with a bit of plugin code:

```python
from pathlib import Path
from typing import Mapping

from datasette import hookimpl
from datasette.app import Datasette

# `log` is the plugin's own logging helper (not shown in this comment)


@hookimpl
def canned_queries(datasette: Datasette, database: str) -> Mapping[str, str]:
    # load "canned queries" from the filesystem under
    # www/sql/db/query_name.sql
    queries = {}

    sqldir = Path(__file__).parent.parent / "sql"
    if database:
        sqldir = sqldir / database

    if not sqldir.is_dir():
        return queries

    for f in sqldir.glob("*.sql"):
        try:
            sql = f.read_text("utf8").strip()
            if not len(sql):
                log(f"Skipping empty canned query file: {f}")
                continue
            queries[f.stem] = {"sql": sql}
        except OSError as err:
            log(err)

    return queries
```
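The core of the approach above, independent of Datasette's hook machinery, can be sketched as a standalone function; the function name and example data here are illustrative:

```python
from pathlib import Path


def load_canned_queries(sqldir: Path) -> dict:
    """Map each non-empty *.sql file in sqldir to a canned-query dict,
    keyed by the file's stem."""
    queries = {}
    if not sqldir.is_dir():
        return queries
    for f in sorted(sqldir.glob("*.sql")):
        sql = f.read_text("utf8").strip()
        if sql:  # skip files containing only whitespace
            queries[f.stem] = {"sql": sql}
    return queries


# Example: build a query directory and load it
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "recent.sql").write_text("select * from items order by created_at desc")
    (d / "empty.sql").write_text("   ")  # skipped: no SQL content
    queries = load_canned_queries(d)

print(queries)
# → {'recent': {'sql': 'select * from items order by created_at desc'}}
```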

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
Add new `"sql_file"` key to Canned Queries in metadata? 1060631257  
975955589 https://github.com/simonw/datasette/issues/1528#issuecomment-975955589 https://api.github.com/repos/simonw/datasette/issues/1528 IC_kwDOBm6k_c46K-aF asg017 15178711 2021-11-22T22:00:30Z 2021-11-22T22:00:30Z CONTRIBUTOR

Oh, another thing to consider: I believe this would be the first `"_file"` key in Datasette's metadata, compared to other `"_url"` keys like `"license_url"` or `"about_url"`. Not too sure what considerations to include with this (e.g. should missing files cause Datasette to fail before starting, should build scripts bundle these SQL files somewhere during `datasette package`, etc.).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add new `"sql_file"` key to Canned Queries in metadata? 1060631257  
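One of the open questions above is whether a missing file should stop Datasette before it starts. A fail-fast check could look something like this; it is a sketch of one possible behavior, not anything Datasette implements, and the function name is invented:

```python
from pathlib import Path


def resolve_sql_file(metadata_dir: Path, query_name: str, sql_file: str) -> str:
    """Resolve a hypothetical "sql_file" value relative to the metadata
    file's directory, failing fast if the file is missing or empty."""
    path = metadata_dir / sql_file
    if not path.is_file():
        raise FileNotFoundError(
            f"Canned query {query_name!r}: sql_file not found: {path}"
        )
    sql = path.read_text("utf8").strip()
    if not sql:
        raise ValueError(f"Canned query {query_name!r}: {path} is empty")
    return sql


# Example: one file resolves, a missing one raises at startup time
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "recent.sql").write_text("select 1")
    sql = resolve_sql_file(root, "recent", "recent.sql")
    try:
        resolve_sql_file(root, "missing", "nope.sql")
        failed_fast = False
    except FileNotFoundError:
        failed_fast = True

print(sql, failed_fast)
# → select 1 True
```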

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
```
Powered by Datasette · Queries took 102.719ms · About: github-to-sqlite