
issue_comments


5 rows where issue = 323658641, "updated_at" is on date 2023-01-21 and user = 9599 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
1399184642 https://github.com/simonw/datasette/issues/262#issuecomment-1399184642 https://api.github.com/repos/simonw/datasette/issues/262 IC_kwDOBm6k_c5TZd0C simonw 9599 2023-01-21T05:36:22Z 2023-01-21T05:41:06Z OWNER

Maybe "rows" should be a default ?_extra=... but it should be possible to request "arrays" instead which would be a list of arrays, more suitable perhaps for custom renderers such as the CSV one.

This could be quite neat, in that EVERY key in the JSON representation would be defined as an extra - just some would be on by default. There could even be a mechanism for turning them back off again, maybe using ?_extra=-rows.

In which case maybe ?_extra= isn't actually the right name for this feature. It could be ?_key= perhaps, or ?_field=.

Being able to pass ?_field=count,-rows to get back just the count (and skip executing the count entirely) would be pretty neat.

Although ?_only=count would be tidier. So maybe the pair of ?_only= and ?_extra= would make sense.

Would ?_only=rows still return the "ok" field so you can always look at that to confirm an error didn't occur?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Add ?_extra= mechanism for requesting extra properties in JSON · 323658641
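The defaults-plus-overrides behaviour discussed above could be sketched as follows. This is an illustrative assumption, not Datasette's implementation: `resolve_keys`, `DEFAULT_KEYS` and the rule that "ok" always survives `?_only=` are all invented here.

```python
# Hypothetical sketch of combining default keys with ?_extra=key,
# ?_extra=-key and ?_only= - names and defaults are invented.
DEFAULT_KEYS = {"ok", "rows"}  # assumed defaults for illustration

def resolve_keys(extra=(), only=None):
    if only is not None:
        # ?_only= replaces the defaults, but keep "ok" so callers
        # can always check whether an error occurred.
        return set(only) | {"ok"}
    keys = set(DEFAULT_KEYS)
    for item in extra:
        if item.startswith("-"):
            keys.discard(item[1:])  # ?_extra=-rows switches a default off
        else:
            keys.add(item)
    return keys

print(sorted(resolve_keys(extra=["count", "-rows"])))  # → ['count', 'ok']
```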
1399184540 https://github.com/simonw/datasette/issues/262#issuecomment-1399184540 https://api.github.com/repos/simonw/datasette/issues/262 IC_kwDOBm6k_c5TZdyc simonw 9599 2023-01-21T05:35:32Z 2023-01-21T05:35:32Z OWNER

It's annoying that the register_output_renderer() plugin hook (https://docs.datasette.io/en/0.64.1/plugin_hooks.html#register-output-renderer-datasette) passes rows as a "list of sqlite3.Row objects" - I'd prefer it if that plugin hook worked with JSON data, not sqlite3.Row.

render_cell() (https://docs.datasette.io/en/0.64.1/plugin_hooks.html#render-cell-row-value-column-table-database-datasette) is documented as accepting Row but actually gets CustomRow - see:

  • #1973

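The friction described above is easy to reproduce with the standard library alone: a sqlite3.Row is not directly JSON-serializable, so a renderer has to convert it first. This is a minimal stdlib demonstration, not Datasette code:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows come back as sqlite3.Row objects
conn.execute("create table t (id integer, name text)")
conn.execute("insert into t values (1, 'a')")
row = conn.execute("select * from t").fetchone()

try:
    json.dumps(row)  # fails: sqlite3.Row is not JSON-serializable
except TypeError:
    pass

# Converting to a plain dict first works.
print(json.dumps(dict(row)))  # → {"id": 1, "name": "a"}
```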
1399178823 https://github.com/simonw/datasette/issues/262#issuecomment-1399178823 https://api.github.com/repos/simonw/datasette/issues/262 IC_kwDOBm6k_c5TZcZH simonw 9599 2023-01-21T04:54:49Z 2023-01-21T04:54:49Z OWNER

I pushed my prototype so far, going to start a draft PR for it.

1399178591 https://github.com/simonw/datasette/issues/262#issuecomment-1399178591 https://api.github.com/repos/simonw/datasette/issues/262 IC_kwDOBm6k_c5TZcVf simonw 9599 2023-01-21T04:53:15Z 2023-01-21T04:53:15Z OWNER

Implementing this to work with the .json extension is going to be a lot harder.

The challenge here is that we're working with the whole BaseView() vs. TableView() abstraction, which I've been wanting to get rid of for a long time.

BaseView() calls .data() and expects to get back a (data, extra_template_data, templates) tuple - then if a format is in play (.json or .geojson or similar from a plugin) it hands off data to that. If .csv is involved it does something special, in order to support streaming responses. And if it's regular HTML it calls await extra_template_data() and combines that with data and passes it to the template.

I want this to work completely differently: I want the formats (including HTML) to have the option of adding some extra ?_extra= extras, then I want HTML to be able to render the page entirely from the JSON if necessary.

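That desired flow - each format declares the extras it needs, and every renderer (HTML included) works from one JSON dict - might look roughly like this. `FORMAT_EXTRAS`, `fetch_data` and `render` are invented names for illustration, not Datasette APIs:

```python
import asyncio

# Each output format (including HTML) declares the extra keys it needs.
FORMAT_EXTRAS = {
    "json": set(),
    "html": {"count", "facet_results"},  # HTML needs more than bare rows
}

async def fetch_data(extras):
    # Stand-in for the table view: always return rows, add extras on demand.
    data = {"ok": True, "rows": [{"id": 1}]}
    if "count" in extras:
        data["count"] = 1
    if "facet_results" in extras:
        data["facet_results"] = {}
    return data

async def render(fmt, requested_extras=frozenset()):
    # Format extras plus any ?_extra= the client asked for.
    extras = FORMAT_EXTRAS[fmt] | set(requested_extras)
    data = await fetch_data(extras)
    # An HTML renderer would now build the page entirely from `data`.
    return data

print(sorted(asyncio.run(render("html")).keys()))
# → ['count', 'facet_results', 'ok', 'rows']
```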
1399145981 https://github.com/simonw/datasette/issues/262#issuecomment-1399145981 https://api.github.com/repos/simonw/datasette/issues/262 IC_kwDOBm6k_c5TZUX9 simonw 9599 2023-01-21T01:56:52Z 2023-01-21T01:56:52Z OWNER

Got first prototype working using asyncinject and it's pretty nice:

```diff
diff --git a/datasette/views/table.py b/datasette/views/table.py
index ad45ecd3..c8690b22 100644
--- a/datasette/views/table.py
+++ b/datasette/views/table.py
@@ -2,6 +2,7 @@
 import asyncio
 import itertools
 import json
+from asyncinject import Registry
 import markupsafe
 
 from datasette.plugins import pm
@@ -538,57 +539,60 @@ class TableView(DataView):
         # Execute the main query!
         results = await db.execute(sql, params, truncate=True, **extra_args)
 
-        # Calculate the total count for this query
-        count = None
-        if (
-            not db.is_mutable
-            and self.ds.inspect_data
-            and count_sql == f"select count(*) from {table_name} "
-        ):
-            # We can use a previously cached table row count
-            try:
-                count = self.ds.inspect_data[database_name]["tables"][table_name][
-                    "count"
-                ]
-            except KeyError:
-                pass
-
-        # Otherwise run a select count(*) ...
-        if count_sql and count is None and not nocount:
-            try:
-                count_rows = list(await db.execute(count_sql, from_sql_params))
-                count = count_rows[0][0]
-            except QueryInterrupted:
-                pass
-
-        # Faceting
-        if not self.ds.setting("allow_facet") and any(
-            arg.startswith("_facet") for arg in request.args
-        ):
-            raise BadRequest("_facet= is not allowed")
+        # Resolve extras
+        extras = _get_extras(request)
+        if request.args.getlist("_facet"):
+            extras.add("facet_results")
 
-        # pylint: disable=no-member
-        facet_classes = list(
-            itertools.chain.from_iterable(pm.hook.register_facet_classes())
-        )
-        facet_results = {}
-        facets_timed_out = []
-        facet_instances = []
-        for klass in facet_classes:
-            facet_instances.append(
-                klass(
-                    self.ds,
-                    request,
-                    database_name,
-                    sql=sql_no_order_no_limit,
-                    params=params,
-                    table=table_name,
-                    metadata=table_metadata,
-                    row_count=count,
-                )
-            )
+        async def extra_count():
+            # Calculate the total count for this query
+            count = None
+            if (
+                not db.is_mutable
+                and self.ds.inspect_data
+                and count_sql == f"select count(*) from {table_name} "
+            ):
+                # We can use a previously cached table row count
+                try:
+                    count = self.ds.inspect_data[database_name]["tables"][table_name][
+                        "count"
+                    ]
+                except KeyError:
+                    pass
+
+            # Otherwise run a select count(*) ...
+            if count_sql and count is None and not nocount:
+                try:
+                    count_rows = list(await db.execute(count_sql, from_sql_params))
+                    count = count_rows[0][0]
+                except QueryInterrupted:
+                    pass
+            return count
+
+        async def facet_instances(extra_count):
+            facet_instances = []
+            facet_classes = list(
+                itertools.chain.from_iterable(pm.hook.register_facet_classes())
+            )
+            for facet_class in facet_classes:
+                facet_instances.append(
+                    facet_class(
+                        self.ds,
+                        request,
+                        database_name,
+                        sql=sql_no_order_no_limit,
+                        params=params,
+                        table=table_name,
+                        metadata=table_metadata,
+                        row_count=extra_count,
+                    )
+                )
+            return facet_instances
+
+        async def extra_facet_results(facet_instances):
+            facet_results = {}
+            facets_timed_out = []
 
-        async def execute_facets():
             if not nofacet:
                 # Run them in parallel
                 facet_awaitables = [facet.facet_results() for facet in facet_instances]
@@ -607,9 +611,13 @@ class TableView(DataView):
                     facet_results[key] = facet_info
                     facets_timed_out.extend(instance_facets_timed_out)
 
-        suggested_facets = []
+            return {
+                "results": facet_results,
+                "timed_out": facets_timed_out,
+            }
 
-        async def execute_suggested_facets():
+        async def extra_suggested_facets(facet_instances):
+            suggested_facets = []
             # Calculate suggested facets
             if (
                 self.ds.setting("suggest_facets")
@@ -624,8 +632,15 @@ class TableView(DataView):
             ]
             for suggest_result in await gather(*facet_suggest_awaitables):
                 suggested_facets.extend(suggest_result)
+            return suggested_facets
+
+        # Faceting
+        if not self.ds.setting("allow_facet") and any(
+            arg.startswith("_facet") for arg in request.args
+        ):
+            raise BadRequest("_facet= is not allowed")
 
-        await gather(execute_facets(), execute_suggested_facets())
-
-        # pylint: disable=no-member
 
         # Figure out columns and rows for the query
         columns = [r[0] for r in results.description]
@@ -732,17 +747,56 @@ class TableView(DataView):
         rows = rows[:page_size]
 
         # human_description_en combines filters AND search, if provided
-        human_description_en = filters.human_description_en(
-            extra=extra_human_descriptions
-        )
+        async def extra_human_description_en():
+            human_description_en = filters.human_description_en(
+                extra=extra_human_descriptions
+            )
+            if sort or sort_desc:
+                human_description_en = " ".join(
+                    [b for b in [human_description_en, sorted_by] if b]
+                )
+            return human_description_en
 
         if sort or sort_desc:
             sorted_by = "sorted by {}{}".format(
                 (sort or sort_desc), " descending" if sort_desc else ""
             )
-            human_description_en = " ".join(
-                [b for b in [human_description_en, sorted_by] if b]
-            )
+
+        async def extra_next_url():
+            return next_url
+
+        async def extra_columns():
+            return columns
+
+        async def extra_primary_keys():
+            return pks
+
+        registry = Registry(
+            extra_count,
+            extra_facet_results,
+            extra_suggested_facets,
+            facet_instances,
+            extra_human_description_en,
+            extra_next_url,
+            extra_columns,
+            extra_primary_keys,
+        )
+
+        results = await registry.resolve_multi(
+            ["extra_{}".format(extra) for extra in extras]
+        )
+        data = {
+            "ok": True,
+            "rows": rows[:page_size],
+            "next": next_value and str(next_value) or None,
+        }
+        data.update({
+            key.replace("extra_", ""): value
+            for key, value in results.items()
+            if key.startswith("extra_")
+            and key.replace("extra_", "") in extras
+        })
+        return Response.json(data, default=repr)
 
         async def extra_template():
             nonlocal sort
@@ -1334,3 +1388,11 @@ class TableDropView(BaseView):
 
         await db.execute_write_fn(drop_table)
         return Response.json({"ok": True}, status=200)
+
+
+def _get_extras(request):
+    extra_bits = request.args.getlist("_extra")
+    extras = set()
+    for bit in extra_bits:
+        extras.update(bit.split(","))
+    return extras
```

With that in place, `http://127.0.0.1:8001/content/releases?author=25778&_size=1&_extra=count,primary_keys,columns&_facet=author` returns:

```json
{
    "ok": true,
    "rows": [
        {
            "html_url": "https://github.com/eyeseast/geocode-sqlite/releases/tag/0.1.2",
            "id": 30926270,
            "author": {
                "value": 25778,
                "label": "eyeseast"
            },
            "node_id": "MDc6UmVsZWFzZTMwOTI2Mjcw",
            "tag_name": "0.1.2",
            "target_commitish": "master",
            "name": "v0.1.2",
            "draft": 0,
            "prerelease": 1,
            "created_at": "2020-09-08T17:48:24Z",
            "published_at": "2020-09-08T17:50:15Z",
            "body": "Basic API is in place, with CLI support for Google, Bing, MapQuest and Nominatum (OSM) geocoders.",
            "repo": {
                "value": 293361514,
                "label": "geocode-sqlite"
            },
            "reactions": null,
            "mentions_count": null
        }
    ],
    "next": "30926270",
    "primary_keys": ["id"],
    "columns": ["html_url", "id", "author", "node_id", "tag_name", "target_commitish", "name", "draft", "prerelease", "created_at", "published_at", "body", "repo", "reactions", "mentions_count"],
    "count": 25,
    "facet_results": {
        "results": {
            "author": {
                "name": "author",
                "type": "column",
                "hideable": true,
                "toggle_url": "/content/releases?author=25778&_size=1&_extra=count%2Cprimary_keys%2Ccolumns",
                "results": [
                    {
                        "value": 25778,
                        "label": "eyeseast",
                        "count": 25,
                        "toggle_url": "http://127.0.0.1:8001/content/releases?_size=1&_extra=count%2Cprimary_keys%2Ccolumns&_facet=author",
                        "selected": true
                    }
                ],
                "truncated": false
            }
        },
        "timed_out": []
    }
}
```

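The `_get_extras()` helper in the diff above is small enough to exercise on its own. This standalone version substitutes `urllib.parse.parse_qs` for Datasette's `request.args`, but the splitting logic is the same as in the diff:

```python
from urllib.parse import parse_qs

def get_extras(query_string):
    # ?_extra= may repeat, and each value may itself be comma-separated,
    # so "_extra=count,primary_keys&_extra=columns" yields three keys.
    extra_bits = parse_qs(query_string).get("_extra", [])
    extras = set()
    for bit in extra_bits:
        extras.update(bit.split(","))
    return extras

print(sorted(get_extras("_size=1&_extra=count,primary_keys&_extra=columns")))
# → ['columns', 'count', 'primary_keys']
```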

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
```
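The schema above can be replayed in an in-memory SQLite database to reproduce the filter behind this page (issue = 323658641, user = 9599, updated on 2023-01-21). The REFERENCES clauses are dropped in this sketch because the referenced `users` and `issues` tables aren't created:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [issue] INTEGER, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")
conn.execute(
    "insert into issue_comments (id, user, issue, updated_at) values (?, ?, ?, ?)",
    (1399184642, 9599, 323658641, "2023-01-21T05:41:06Z"),
)
rows = conn.execute(
    "select id from issue_comments where issue = ? and user = ? "
    "and date(updated_at) = '2023-01-21' order by updated_at desc",
    (323658641, 9599),
).fetchall()
print(rows)  # → [(1399184642,)]
```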
Powered by Datasette · Queries took 1198.963ms · About: github-to-sqlite