issue_comments
3 rows where issue = 1163369515 and user = 6262071 sorted by updated_at descending
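The rows below were produced by filtering the issue_comments table; a minimal SQL sketch of the equivalent query, assuming the issue_comments schema shown at the end of this page (an approximation, not necessarily the exact SQL Datasette executes):

```sql
-- Comments by user 6262071 on issue 1163369515, most recently updated first
SELECT *
FROM issue_comments
WHERE issue = 1163369515
  AND [user] = 6262071
ORDER BY updated_at DESC;
```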
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1767248394 | https://github.com/simonw/datasette/issues/1655#issuecomment-1767248394 | https://api.github.com/repos/simonw/datasette/issues/1655 | IC_kwDOBm6k_c5pVhIK | yejiyang 6262071 | 2023-10-17T21:53:17Z | 2023-10-17T21:53:17Z | NONE | @fgregg, I am happy to do that and just could not find a way to create issues at your fork repo. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data 1163369515 | |
| 1767133832 | https://github.com/simonw/datasette/issues/1655#issuecomment-1767133832 | https://api.github.com/repos/simonw/datasette/issues/1655 | IC_kwDOBm6k_c5pVFKI | yejiyang 6262071 | 2023-10-17T20:37:18Z | 2023-10-17T21:12:48Z | NONE | @fgregg Thanks for your reply. I tried to use your fork branch | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data 1163369515 | |
| 1761630595 | https://github.com/simonw/datasette/issues/1655#issuecomment-1761630595 | https://api.github.com/repos/simonw/datasette/issues/1655 | IC_kwDOBm6k_c5pAFmD | yejiyang 6262071 | 2023-10-13T14:37:48Z | 2023-10-13T14:37:48Z | NONE | Hi @fgregg, I came across this issue and found your setup at labordata.bunkum.us can help me with a research project at https://database.zeropm.eu/. I really like the approach here when dealing with a custom SQL query returning more than 1000 rows: 1) At the table in HTML page, only first 1000 rows displayed; 2) When click the "Download this data as a CSV Spreadsheet(All Rows)" button, a csv with ALL ROWS (could be > 100 Mb) get downloaded. I am trying to repeat the setup but have yet to be successful so far. What I tried: 1) copy the query.html & table.html templates from this github repo and use it my project 2) use the same datasette version 1.0a2. Do you know what else I should try to set it correctly? I appreciate your help. @simonw I would like to use this opportunity to thank you for developing & maintaining such an amazing project. I introduce your datasette to several projects in my institute. I am also interested in your cloud version. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data 1163369515 | |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
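The reactions column is stored as a JSON string, so its fields can be pulled out with SQLite's JSON functions. A small sketch, assuming the JSON1 functions (json_extract) are available in the SQLite build:

```sql
-- Extract the total reaction count for each comment on this issue
SELECT id,
       json_extract(reactions, '$.total_count') AS reaction_total
FROM issue_comments
WHERE issue = 1163369515;
```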