issue_comments
4 rows where "updated_at" is on date 2022-09-26 and user = 536941 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
1258337011 | https://github.com/simonw/datasette/issues/526#issuecomment-1258337011 | https://api.github.com/repos/simonw/datasette/issues/526 | IC_kwDOBm6k_c5LALLz | fgregg 536941 | 2022-09-26T16:49:48Z | 2022-09-26T16:49:48Z | CONTRIBUTOR | i think the smallest change that gets close to what i want is to change the behavior so that `max_returned_rows` is not applied in the `execute` method. there are some infelicities for that approach, but i'll make a PR to make it easier to discuss. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Stream all results for arbitrary SQL and canned queries 459882902 | |
1258167564 | https://github.com/simonw/datasette/issues/526#issuecomment-1258167564 | https://api.github.com/repos/simonw/datasette/issues/526 | IC_kwDOBm6k_c5K_h0M | fgregg 536941 | 2022-09-26T14:57:44Z | 2022-09-26T15:08:36Z | CONTRIBUTOR | reading the database `execute` method, i have a few questions. unless i'm missing something (which is very likely!!), the `max_returned_rows` argument doesn't actually offer any protection against running very expensive queries. It's not like adding a `LIMIT max_returned_rows` clause to the query. Instead the code executes the full original query, and if it still has time it fetches out the first `max_returned_rows` rows (see the first sketch below the table). this does offer some protection against memory exhaustion, as you won't hydrate a huge result set into python (however, there are data flow patterns that could avoid that too). given the current architecture, i don't see how creating a new connection would be useful? If we just removed the `max_returned_rows` limit, … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Stream all results for arbitrary SQL and canned queries 459882902 | |
1258166572 | https://github.com/simonw/datasette/issues/1655#issuecomment-1258166572 | https://api.github.com/repos/simonw/datasette/issues/1655 | IC_kwDOBm6k_c5K_hks | fgregg 536941 | 2022-09-26T14:57:04Z | 2022-09-26T14:57:04Z | CONTRIBUTOR | I think that paginating, even in javascript, could be very helpful. Maybe render json or csv into the page and let javascript load that into the dom? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data 1163369515 | |
1258129113 | https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113 | https://api.github.com/repos/simonw/datasette/issues/1727 | IC_kwDOBm6k_c5K_YbZ | fgregg 536941 | 2022-09-26T14:30:11Z | 2022-09-26T14:48:31Z | CONTRIBUTOR | from your analysis, it seems like the GIL is blocking on loading of the data from sqlite to python (particularly in the `fetchmany` call). this is probably a simplistic idea, but what if you had the python code in the `execute` method iterate over the cursor and yield out rows or small chunks of rows, something like the second sketch below the table. this kind of thing works well with a postgres server-side cursor, but i'm not sure if it will hold for sqlite. you would still spend about the same amount of time in python and would be contending for the gil, but it could be non-blocking. depending on the data flow, this could also have some benefit for memory (data stays in more compact sqlite-land until you need it). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Research: demonstrate if parallel SQL queries are worthwhile 1217759117 | |
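The second comment above (on issue #526) describes how the row cap behaves: the full query runs, and only the number of rows pulled into Python is bounded. A minimal sketch of that truncation pattern, using plain `sqlite3` rather than Datasette's actual internals; the function name and the constant value are illustrative, and the "if it still has time" time-limit part is omitted:

```python
import sqlite3

MAX_RETURNED_ROWS = 1000  # illustrative value; the real setting is configurable

def execute_truncated(conn: sqlite3.Connection, sql: str, params=()):
    """Run the full query, then fetch at most MAX_RETURNED_ROWS + 1 rows.

    The database still does all the work of the unbounded query; only
    the number of rows hydrated into Python objects is capped, which is
    why this protects memory but not query cost.
    """
    cursor = conn.execute(sql, params)
    rows = cursor.fetchmany(MAX_RETURNED_ROWS + 1)
    truncated = len(rows) > MAX_RETURNED_ROWS
    return rows[:MAX_RETURNED_ROWS], truncated
```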
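The last comment (on issue #1727) proposes iterating the cursor and yielding small chunks of rows. A minimal sketch of that idea, again with plain `sqlite3`; `iter_rows` and `CHUNK_SIZE` are hypothetical names, not Datasette API:

```python
import sqlite3

CHUNK_SIZE = 100  # illustrative; tune to trade GIL hold time against call overhead

def iter_rows(conn: sqlite3.Connection, sql: str, params=()):
    """Yield rows in small chunks instead of materializing the result set.

    Each fetchmany() call moves only CHUNK_SIZE rows from SQLite into
    Python, so data stays in compact SQLite storage until it is needed,
    and other work can be interleaved between chunks.
    """
    cursor = conn.execute(sql, params)
    while True:
        chunk = cursor.fetchmany(CHUNK_SIZE)
        if not chunk:
            break
        yield from chunk
```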
CREATE TABLE [issue_comments] (
  [html_url] TEXT,
  [issue_url] TEXT,
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [created_at] TEXT,
  [updated_at] TEXT,
  [author_association] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [issue] INTEGER REFERENCES [issues]([id]),
  [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
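For reference, a sketch of the filter behind this page, run with Python's `sqlite3` against a github-to-sqlite export (the filename `github.db` is an assumption):

```python
import sqlite3

# "github.db" is an assumed filename for a github-to-sqlite export.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, issue_url, created_at, updated_at, body
    FROM issue_comments
    WHERE date(updated_at) = '2022-09-26'
      AND user = 536941
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # expect 4 for this page's filter
```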