issue_comments
26 rows where author_association = "OWNER", "created_at" is on date 2022-04-28 and user = 9599 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
1112734577 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112734577 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUvtx | simonw 9599 | 2022-04-28T23:08:42Z | 2022-04-28T23:08:42Z | OWNER | That prototype is a very small amount of code so far:

```diff
diff --git a/datasette/renderer.py b/datasette/renderer.py
index 4508949..b600e1b 100644
--- a/datasette/renderer.py
+++ b/datasette/renderer.py
@@ -28,6 +28,10 @@ def convert_specific_columns_to_json(rows, columns, json_cols):
 def json_renderer(args, data, view_name):
     """Render a response as JSON"""
+    from pprint import pprint
+
+    pprint(data)
+
     status_code = 200
@@ -43,6 +47,41 @@ def json_renderer(args, data, view_name):
     if "rows" in data and not value_as_boolean(args.get("_json_infinity", "0")):
         data["rows"] = [remove_infinites(row) for row in data["rows"]]
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112732563 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112732563 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUvOT | simonw 9599 | 2022-04-28T23:05:03Z | 2022-04-28T23:05:03Z | OWNER | OK, the prototype of this is looking really good - it's very pleasant to use.
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112730416 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112730416 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUusw | simonw 9599 | 2022-04-28T23:01:21Z | 2022-04-28T23:01:21Z | OWNER | I'm not sure what to do about the It's not really relevant to table results, since they are paginated whether or not you ask for them to be. It plays a role in query results, where you might run Adding it to every table result and always setting it to I think I'm going to keep it exclusively in the default representation for the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112721321 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112721321 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUsep | simonw 9599 | 2022-04-28T22:44:05Z | 2022-04-28T22:44:14Z | OWNER | I may be able to implement this mostly in the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112717745 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112717745 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUrmx | simonw 9599 | 2022-04-28T22:38:39Z | 2022-04-28T22:39:05Z | OWNER | (I remain keen on the idea of shipping a plugin that restores the old default API shape to people who have written pre-Datasette-1.0 code against it, but I'll tackle that much later. I really like how jQuery has a culture of doing this.) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112717210 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112717210 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUrea | simonw 9599 | 2022-04-28T22:37:37Z | 2022-04-28T22:37:37Z | OWNER | This means I'll add |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112716611 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112716611 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUrVD | simonw 9599 | 2022-04-28T22:36:24Z | 2022-04-28T22:36:24Z | OWNER | Then I'm going to implement the following
I thought about having I'm tempted to add |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112713581 | https://github.com/simonw/datasette/issues/1729#issuecomment-1112713581 | https://api.github.com/repos/simonw/datasette/issues/1729 | IC_kwDOBm6k_c5CUqlt | simonw 9599 | 2022-04-28T22:31:11Z | 2022-04-28T22:31:11Z | OWNER | I'm going to change the default API response to look like this:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement ?_extra and new API design for TableView 1219385669 | |
1112711115 | https://github.com/simonw/datasette/issues/1715#issuecomment-1112711115 | https://api.github.com/repos/simonw/datasette/issues/1715 | IC_kwDOBm6k_c5CUp_L | simonw 9599 | 2022-04-28T22:26:56Z | 2022-04-28T22:26:56Z | OWNER | I'm not going to use
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Refactor TableView to use asyncinject 1212823665 | |
1112668411 | https://github.com/simonw/datasette/issues/1727#issuecomment-1112668411 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CUfj7 | simonw 9599 | 2022-04-28T21:25:34Z | 2022-04-28T21:25:44Z | OWNER | The two most promising theories at the moment, from here and Twitter and the SQLite forum, are:
A couple of ways to research the in-memory theory:
I need to do some more and better benchmarks using these different approaches. https://twitter.com/laurencerowe/status/1519780174560169987 also suggests:
I like that second idea a lot - I could use the mandelbrot example from https://www.sqlite.org/lang_with.html#outlandish_recursive_query_examples |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
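The CPU-bound benchmark described above can be sketched with a cheap stand-in for the mandelbrot example: a recursive CTE that counts to a million does pure SQLite work with no I/O, so comparing serial against threaded wall time hints at whether the `sqlite3` module really runs queries concurrently. The query and harness here are illustrative assumptions, not Datasette's actual benchmark.

```python
import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor

# A CPU-only query: no disk I/O, all work happens inside SQLite's C code.
CPU_BOUND_SQL = """
    with recursive c(x) as (
        select 1 union all select x + 1 from c where x < 1000000
    )
    select count(*) from c
"""


def run_cpu_query():
    # One connection per call: sqlite3 connections are not safely
    # shareable across threads by default.
    conn = sqlite3.connect(":memory:")
    try:
        return conn.execute(CPU_BOUND_SQL).fetchone()[0]
    finally:
        conn.close()


def benchmark(n_queries, n_threads):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(lambda _: run_cpu_query(), range(n_queries)))
    return results, time.perf_counter() - start


if __name__ == "__main__":
    _, serial_time = benchmark(4, 1)
    _, parallel_time = benchmark(4, 4)
    # If SQLite held the GIL for the whole query these two times would be
    # roughly equal; a real speedup suggests the GIL is being released.
    print(f"serial: {serial_time:.2f}s, 4 threads: {parallel_time:.2f}s")
```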
1111726586 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111726586 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQ5n6 | simonw 9599 | 2022-04-28T04:17:16Z | 2022-04-28T04:19:31Z | OWNER | I could experiment with the Code examples: https://cs.github.com/?scopeName=All+repos&scope=&q=run_in_executor+ProcessPoolExecutor |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
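Based on the linked code search, the experiment would look roughly like the sketch below: pushing each query into a separate process via `loop.run_in_executor()` with a `ProcessPoolExecutor`, which sidesteps the GIL entirely. The names here are assumptions, and note one constraint: sqlite3 connections cannot be pickled across process boundaries, so each worker must open its own.

```python
import asyncio
import sqlite3
from concurrent.futures import ProcessPoolExecutor


def run_query(db_path, sql):
    # Runs inside the worker process. Open a fresh connection here
    # rather than passing one in - connections cannot cross processes.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()


async def execute_in_processes(db_path, queries):
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        return await asyncio.gather(*(
            loop.run_in_executor(pool, run_query, db_path, sql)
            for sql in queries
        ))


if __name__ == "__main__":
    results = asyncio.run(
        execute_in_processes(":memory:", ["select 1", "select 2 + 2"])
    )
    print(results)
```

One cost worth measuring alongside this: result rows are serialized between processes, so the approach should only win for queries whose execution time dwarfs the size of their output.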
1111725638 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111725638 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQ5ZG | simonw 9599 | 2022-04-28T04:15:15Z | 2022-04-28T04:15:15Z | OWNER | Useful theory from Keith Medcalf https://sqlite.org/forum/forumpost/e363c69d3441172e
So maybe this is a GIL thing. I should test with some expensive SQL queries (maybe big aggregations against large tables) and see if I can spot an improvement there. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
1111714665 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111714665 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQ2tp | simonw 9599 | 2022-04-28T03:52:47Z | 2022-04-28T03:52:58Z | OWNER | Nice custom template/theme! Yeah, for that I'd recommend hosting elsewhere - on a regular VPS (I use |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
1111708206 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111708206 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQ1Iu | simonw 9599 | 2022-04-28T03:38:56Z | 2022-04-28T03:38:56Z | OWNER | In terms of this bug, there are a few potential fixes:
I'm not keen on that last one because it would be frustrating if you couldn't launch Datasette just because you had an old canned query lying around in your metadata file. So I'm leaning towards option 2. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
1111707384 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111707384 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQ074 | simonw 9599 | 2022-04-28T03:36:46Z | 2022-04-28T03:36:56Z | OWNER | A more realistic solution (which I've been using on several of my own projects) is to keep the data itself in GitHub and encourage users to edit it there - using the GitHub web interface to edit YAML files or similar. Needs your users to be comfortable hand-editing YAML though! You can at least guard against critical errors by having CI run tests against their YAML before deploying. I have a dream of building a more friendly web forms interface which edits the YAML back on GitHub for the user, but that's just a concept at the moment. Even more fun would be if a user-friendly form could submit PRs for review without the user having to know what a PR is! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
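The CI guard mentioned above can be as small as a test that parses the metadata file and checks each canned query before deploying. This sketch uses JSON (Datasette metadata can be JSON or YAML) and a made-up `validate_metadata` helper; the real metadata format supports many more keys than it checks.

```python
import json


def validate_metadata(text):
    # CI-style sanity check for a Datasette metadata file: raise a
    # readable ValueError instead of letting a broken file deploy.
    data = json.loads(text)
    if not isinstance(data, dict):
        raise ValueError("metadata must be a JSON object")
    for db_name, db in data.get("databases", {}).items():
        for query_name, query in db.get("queries", {}).items():
            # A canned query is either a bare SQL string or an object
            # with a "sql" key.
            sql = query.get("sql") if isinstance(query, dict) else query
            if not sql or not isinstance(sql, str):
                raise ValueError(
                    f"{db_name}.{query_name}: canned query needs a sql string"
                )


if __name__ == "__main__":
    good = json.dumps({
        "databases": {
            "data": {"queries": {"add_name": {
                "sql": "insert into names (name) values (:name)",
                "write": True,
            }}}
        }
    })
    validate_metadata(good)
    print("metadata OK")
```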
1111706519 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111706519 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQ0uX | simonw 9599 | 2022-04-28T03:34:49Z | 2022-04-28T03:34:49Z | OWNER | I've wanted to do stuff like that on Cloud Run too. So far I've assumed that it's not feasible, but recently I've been wondering how hard it would be to have a small (like less than 100KB or so) Datasette instance which persists data to a backing GitHub repository such that when it starts up it can pull the latest copy and any time someone edits it can push their changes. I'm still not sure it would work well on Cloud Run due to the uncertainty at what would happen if Cloud Run decided to boot up a second instance - but it's still an interesting thought exercise. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
1111705069 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111705069 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQ0Xt | simonw 9599 | 2022-04-28T03:31:33Z | 2022-04-28T03:31:33Z | OWNER | Confirmed - this is a bug where immutable databases fail to show a useful error if you write to them with a canned query. Steps to reproduce:
Now do this instead:
And I'm getting a broken error: |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
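The underlying failure is easy to reproduce outside Datasette: a write against a read-only SQLite connection (which is roughly what immutable mode gives you) raises `sqlite3.OperationalError`, and that is the error the canned query ought to surface instead of a broken response. A standalone sketch, with a hypothetical helper name:

```python
import os
import sqlite3
import tempfile


def write_to_readonly(path):
    # mode=ro opens the file read-only - roughly what immutable mode
    # implies. Returns the SQLite error message, or None if the write
    # unexpectedly succeeded.
    conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    try:
        conn.execute("insert into names (name) values ('simon')")
        conn.commit()
        return None
    except sqlite3.OperationalError as ex:
        return str(ex)
    finally:
        conn.close()


if __name__ == "__main__":
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    conn.execute("create table names (name text)")
    conn.commit()
    conn.close()
    print(write_to_readonly(path))  # attempt to write a readonly database
    os.unlink(path)
```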
1111699175 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111699175 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQy7n | simonw 9599 | 2022-04-28T03:19:48Z | 2022-04-28T03:20:08Z | OWNER | I ran The area on the right is the threads running the DB queries: Interactive version here: https://static.simonwillison.net/static/2022/datasette-parallel-profile.svg |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
1111698307 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111698307 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQyuD | simonw 9599 | 2022-04-28T03:18:02Z | 2022-04-28T03:18:02Z | OWNER | If the behaviour you are seeing is because the database is running in immutable mode then that's a bug - you should get a useful error message instead! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
1111697985 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111697985 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQypB | simonw 9599 | 2022-04-28T03:17:20Z | 2022-04-28T03:17:20Z | OWNER | How did you deploy to Cloud Run?
That's why I upgraded |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
1111683539 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111683539 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQvHT | simonw 9599 | 2022-04-28T02:47:57Z | 2022-04-28T02:47:57Z | OWNER | Maybe this is the Python GIL after all? I've been hoping that the GIL won't be an issue because the So I've been hoping this means that SQLite code itself can run concurrently on multiple cores even when Python threads cannot. But maybe I'm misunderstanding how that works? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
1111681513 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111681513 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQunp | simonw 9599 | 2022-04-28T02:44:26Z | 2022-04-28T02:44:26Z | OWNER | I could try |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
1111661331 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111661331 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQpsT | simonw 9599 | 2022-04-28T02:07:31Z | 2022-04-28T02:07:31Z | OWNER | Asked on the SQLite forum about this here: https://sqlite.org/forum/forumpost/ffbfa9f38e |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
1111602802 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111602802 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQbZy | simonw 9599 | 2022-04-28T00:21:35Z | 2022-04-28T00:21:35Z | OWNER | Tried this but I'm getting back an empty JSON array of traces at the bottom of the page most of the time (intermittently it works correctly):

```diff
diff --git a/datasette/database.py b/datasette/database.py
index ba594a8..d7f9172 100644
--- a/datasette/database.py
+++ b/datasette/database.py
@@ -7,7 +7,7 @@ import sys
 import threading
 import uuid
 
-from .tracer import trace
+from .tracer import trace, trace_child_tasks
 from .utils import (
     detect_fts,
     detect_primary_keys,
@@ -207,30 +207,31 @@ class Database:
             time_limit_ms = custom_time_limit
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
1111597176 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111597176 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQaB4 | simonw 9599 | 2022-04-28T00:11:44Z | 2022-04-28T00:11:44Z | OWNER | Though it would be interesting to also have the trace reveal how much time is spent in the functions that wrap that core SQL - the stuff that is being measured at the moment. I have a hunch that this could help solve the over-arching performance mystery. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
1111595319 | https://github.com/simonw/datasette/issues/1727#issuecomment-1111595319 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5CQZk3 | simonw 9599 | 2022-04-28T00:09:45Z | 2022-04-28T00:11:01Z | OWNER | Here's where read queries are instrumented: https://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L241-L242 So the instrumentation is actually capturing quite a bit of Python activity before it gets to SQLite: And then: Ideally I'd like that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Research: demonstrate if parallel SQL queries are worthwhile 1217759117 |
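The narrower measurement described above - timing only the execute/fetch step rather than the Python machinery around it - can be sketched like this. The `timed_execute` name is illustrative, not Datasette's actual tracer API.

```python
import sqlite3
import time


def timed_execute(conn, sql, params=None):
    # Time only the SQLite work: statement execution plus fetching the
    # rows, excluding any Python argument-building done before this call.
    start = time.perf_counter()
    cursor = conn.execute(sql, params or [])
    rows = cursor.fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return rows, elapsed_ms


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    rows, ms = timed_execute(conn, "select 1 + 1")
    print(rows, f"{ms:.3f}ms")
```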
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
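The filter this page applies maps onto SQL along these lines (a sketch against the schema above; the query Datasette actually generates may differ in detail):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "create table issue_comments ("
    " html_url text, issue_url text, id integer primary key,"
    " node_id text, user integer, created_at text, updated_at text,"
    " author_association text, body text, reactions text,"
    " issue integer, performed_via_github_app text)"
)
conn.execute(
    "insert into issue_comments"
    " (id, user, author_association, created_at, updated_at)"
    " values (1112734577, 9599, 'OWNER',"
    " '2022-04-28T23:08:42Z', '2022-04-28T23:08:42Z')"
)

# The page's row filter, expressed as parameterized SQL. SQLite's date()
# truncates the ISO 8601 timestamp to its date component.
SQL = """
    select id, created_at, updated_at from issue_comments
    where author_association = :association
      and date(created_at) = :day
      and user = :user_id
    order by updated_at desc
"""
rows = conn.execute(
    SQL, {"association": "OWNER", "day": "2022-04-28", "user_id": 9599}
).fetchall()
print(len(rows))
```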