github: comments on https://github.com/simonw/datasette/issues/1101
https://github.com/simonw/datasette/issues/1101#issuecomment-732543700 | 9599 | OWNER | 2020-11-24T02:20:30Z

Current design: https://docs.datasette.io/en/stable/plugin_hooks.html#register-output-renderer-datasette

```python
from datasette import hookimpl


@hookimpl
def register_output_renderer(datasette):
    return {
        "extension": "test",
        "render": render_demo,
        "can_render": can_render_demo,  # Optional
    }
```

Where `render_demo` looks something like this:

```python
from datasette.utils.asgi import Response


async def render_demo(datasette, columns, rows):
    db = datasette.get_database()
    result = await db.execute("select sqlite_version()")
    first_row = " | ".join(columns)
    lines = [first_row]
    lines.append("=" * len(first_row))
    for row in rows:
        lines.append(" | ".join(row))
    return Response(
        "\n".join(lines),
        content_type="text/plain; charset=utf-8",
        headers={"x-sqlite-version": result.first()[0]},
    )
```

Meanwhile, here's where the CSV streaming mode is implemented: https://github.com/simonw/datasette/blob/4bac9f18f9d04e5ed10f072502bcc508e365438e/datasette/views/base.py#L297-L380
https://github.com/simonw/datasette/issues/1101#issuecomment-732544590 | 9599 | OWNER | 2020-11-24T02:22:55Z

The trick I'm using here is to follow the `next_url` in order to paginate through all of the matching results. The loop calls the `data()` method multiple times, once for each page of results: https://github.com/simonw/datasette/blob/4bac9f18f9d04e5ed10f072502bcc508e365438e/datasette/views/base.py#L304-L307
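The pattern looks roughly like this - a minimal, self-contained sketch with fake pages standing in for the real `data()` responses, not Datasette's actual internals:

```python
import asyncio

# Fake pages keyed by URL, standing in for the real data() responses.
PAGES = {
    "/fixtures/t.json": {"rows": [1, 2], "next_url": "/fixtures/t.json?_next=2"},
    "/fixtures/t.json?_next=2": {"rows": [3, 4], "next_url": None},
}


async def fetch_page(url):
    return PAGES[url]


async def iterate_all_rows(first_url):
    # Follow next_url until it comes back None, yielding every row seen.
    url = first_url
    while url:
        page = await fetch_page(url)
        for row in page["rows"]:
            yield row
        url = page["next_url"]


async def main():
    print([row async for row in iterate_all_rows("/fixtures/t.json")])


asyncio.run(main())  # prints [1, 2, 3, 4]
```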
https://github.com/simonw/datasette/issues/1101#issuecomment-755128038 | 9599 | OWNER | 2021-01-06T07:10:22Z

Yet another use-case for this: I want to be able to stream newline-delimited JSON in order to better import into Pandas:

```python
import pandas

pandas.read_json(
    "https://latest.datasette.io/fixtures/compound_three_primary_keys.json?_shape=array&_nl=on",
    lines=True,
)
```
https://github.com/simonw/datasette/issues/1101#issuecomment-755133937 | 9599 | OWNER | 2021-01-06T07:25:48Z | +1: 2

Idea: instead of returning a dictionary, `register_output_renderer` could return an object. The object could have the following properties:

- `.extension` - the extension to use
- `.can_render(...)` - says if it can render this
- `.can_stream(...)` - says if streaming is supported
- `async .stream_rows(rows_iterator, send)` - method that loops through all rows and uses `send` to send them to the response in the correct format

I can then deprecate the existing `dict` return type for 1.0.
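A hypothetical sketch of what such a renderer object might look like - every name here is illustrative, mirroring the bullet list above, and none of it is part of Datasette's actual API:

```python
class TSVRenderer:
    # Illustrative only: the attributes and methods follow the proposed
    # contract above, not any real Datasette interface.
    extension = "tsv"

    def can_render(self, columns):
        return True

    def can_stream(self):
        return True

    async def stream_rows(self, rows_iterator, send):
        # Loop through every row (rows_iterator is assumed to be an async
        # iterator), sending each one as a TSV line.
        async for row in rows_iterator:
            await send("\t".join(str(value) for value in row) + "\n")
```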
https://github.com/simonw/datasette/issues/1101#issuecomment-755134771 | 9599 | OWNER | 2021-01-06T07:28:01Z

With this structure it will become possible to stream non-newline-delimited JSON array-of-objects too - the `stream_rows()` method could output `[` first, then each row followed by a comma (skipping the comma after the very last row), then a closing `]`.
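A minimal sketch of that idea, holding one row back so the final row can be written without a trailing comma - a hypothetical `stream_rows()` shape, assuming `send` writes a string chunk to the response:

```python
import json


async def stream_rows(rows_iterator, send):
    # Buffer one row so we always know whether another row follows it,
    # which lets us skip the comma after the final row.
    await send("[")
    pending = None
    async for row in rows_iterator:
        if pending is not None:
            await send(json.dumps(pending) + ",")
        pending = row
    if pending is not None:
        await send(json.dumps(pending))  # last row: no trailing comma
    await send("]")
```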
https://github.com/simonw/datasette/issues/1101#issuecomment-869812567 | 9599 | OWNER | 2021-06-28T16:06:57Z

Relevant blog post: https://simonwillison.net/2021/Jun/25/streaming-large-api-responses/ - including notes on efficiently streaming formats with some kind of separator in between the records (regular JSON).

> Some export formats are friendlier for streaming than others. CSV and TSV are pretty easy to stream, as is newline-delimited JSON.
>
> Regular JSON requires a bit more thought: you can output a `[` character, then output each row in a stream with a comma suffix, then skip the comma for the last row and output a `]`. Doing that requires peeking ahead (looping two at a time) to verify that you haven't yet reached the end.
>
> Or... Martin De Wulf [pointed out](https://twitter.com/madewulf/status/1405559088994467844) that you can output the first row, then output every other row with a preceding comma, which avoids the whole "iterate two at a time" problem entirely.
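A sketch of that comma-first trick, assuming the same hypothetical `stream_rows(rows_iterator, send)` shape as above:

```python
import json


async def stream_rows(rows_iterator, send):
    # Emit the first row bare, then prefix every later row with a comma:
    # no look-ahead or row buffering needed.
    await send("[")
    first = True
    async for row in rows_iterator:
        await send(("" if first else ",") + json.dumps(row))
        first = False
    await send("]")
```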
https://github.com/simonw/datasette/issues/1101#issuecomment-1105571003 | 9599 | OWNER | 2022-04-21T18:10:38Z

Maybe the simplest design for this is to add an optional `can_stream` to the contract:

```python
@hookimpl
def register_output_renderer(datasette):
    return {
        "extension": "tsv",
        "render": render_tsv,
        "can_render": lambda: True,
        "can_stream": lambda: True,
    }
```

When streaming, a new parameter could be passed to the render function - maybe `chunks` - which is an iterator/generator over a sequence of chunks of rows. Or it could use the existing `rows` parameter but treat that as an iterator?
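Under that design, the render function might look something like this - treating `chunks` as an async iterator of row batches is purely an assumption here, not a settled API:

```python
async def render_tsv(datasette, columns, chunks, send):
    # Hypothetical streaming renderer: emit a header line, then one
    # tab-separated line per row as each chunk of rows arrives.
    await send("\t".join(columns) + "\n")
    async for chunk in chunks:
        for row in chunk:
            await send("\t".join(str(value) for value in row) + "\n")
```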
https://github.com/simonw/datasette/issues/1101#issuecomment-1105608964 | 9599 | OWNER | 2022-04-21T18:26:29Z

I'm questioning whether the two mechanisms should be separate at all now - rendering a single response is really just the case of a streaming response that only pulls the first N records from the iterator.

It probably needs to be an `async for` iterator, which I've not worked with much before. Good opportunity to learn.

This actually gets a fair bit more complicated due to the work I'm doing right now to improve the default JSON API:

- #1709

I want to do things like make faceting results optionally available to custom renderers - which is a separate concern from streaming rows.

I'm going to poke around with a bunch of prototypes and see what sticks.
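For reference, the `async for` pattern mentioned above boils down to consuming an async generator:

```python
import asyncio


async def rows():
    # An async generator: each value is yielded to the consumer, and the
    # generator could await I/O (e.g. a database fetch) between yields.
    for n in range(3):
        yield n


async def main():
    async for row in rows():
        print(row)


asyncio.run(main())  # prints 0, 1, 2
```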
https://github.com/simonw/datasette/issues/1101#issuecomment-1105615625 | 9599 | OWNER | 2022-04-21T18:31:41Z

The `datasette-geojson` plugin is actually an interesting case here, because of the way it converts SpatiaLite geometries into GeoJSON: https://github.com/eyeseast/datasette-geojson/blob/602c4477dc7ddadb1c0a156cbcd2ef6688a5921d/datasette_geojson/__init__.py#L61-L66

```python
if isinstance(geometry, bytes):
    results = await db.execute(
        "SELECT AsGeoJSON(:geometry)", {"geometry": geometry}
    )
    return geojson.loads(results.single_value())
```

That actually seems to work really well as-is, but it does worry me a bit that it ends up having to execute an extra `SELECT` query for every single returned row - especially in streaming mode, where it might be asked to return 1m rows at once.

My PostgreSQL/MySQL engineering brain says that this would be better handled by doing a chunk of these (maybe 100) at once, to avoid the per-query overhead - but with SQLite that might not be necessary.

At any rate, this is one of the reasons I'm interested in "iterate over this sequence of chunks of 100 rows at a time" as a potential option here.

Of course, a better solution would be for `datasette-geojson` to have a way to influence the SQL query before it is executed, adding an `AsGeoJSON(geometry)` clause to it - so that's something I'm open to as well.
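A sketch of the chunked variant - this assumes SpatiaLite's `AsGeoJSON()` is available and that Datasette's `db.execute()` accepts a list of parameters; the `VALUES` table (whose column SQLite names `column1`) is standard SQLite, but this is not how `datasette-geojson` actually works:

```python
async def geometries_to_geojson(db, geometries, batch_size=100):
    # Convert up to batch_size geometries per query, instead of issuing
    # one SELECT AsGeoJSON(...) query per row.
    converted = []
    for i in range(0, len(geometries), batch_size):
        batch = geometries[i : i + batch_size]
        placeholders = ", ".join("(?)" for _ in batch)
        results = await db.execute(
            f"SELECT AsGeoJSON(column1) FROM (VALUES {placeholders})",
            batch,
        )
        converted.extend(row[0] for row in results)
    return converted
```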
https://github.com/simonw/datasette/issues/1101#issuecomment-1399341761 | 9599 | OWNER | 2023-01-21T22:07:19Z | +1: 1

Idea for supporting streaming with the `register_output_renderer` hook:

```python
@hookimpl
def register_output_renderer(datasette):
    return {
        "extension": "test",
        "render": render_demo,
        "can_render": can_render_demo,
        "render_stream": render_demo_stream,  # This is new
    }
```

So there's a new `"render_stream"` key which can be returned, which if present means that the output renderer supports streaming. I'll play around with the design of that function signature in:

- #1999
- #1062
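One possible shape for that function - the signature below is a guess at the design being explored in those issues, not a finalized API:

```python
async def render_demo_stream(datasette, columns, rows, send):
    # Hypothetical: stream one pipe-delimited line per row as the
    # (assumed async) rows iterator is consumed.
    await send(" | ".join(columns) + "\n")
    async for row in rows:
        await send(" | ".join(str(value) for value in row) + "\n")
```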