issue_comments
15 rows where "created_at" is on date 2018-06-18 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
398133159 | https://github.com/simonw/datasette/issues/316#issuecomment-398133159 | https://api.github.com/repos/simonw/datasette/issues/316 | MDEyOklzc3VlQ29tbWVudDM5ODEzMzE1OQ== | simonw 9599 | 2018-06-18T17:29:59Z | 2018-07-10T15:14:53Z | OWNER | For #271 I've been contemplating having Datasette work against an on-disk database that gets modified without needing to restart the server. For that to work, I'll have to dramatically change the inspect() mechanism. It may be that inspect becomes an optional optimization in the future. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | datasette inspect takes a very long time on large dbs 333238932 | |
398133924 | https://github.com/simonw/datasette/issues/271#issuecomment-398133924 | https://api.github.com/repos/simonw/datasette/issues/271 | MDEyOklzc3VlQ29tbWVudDM5ODEzMzkyNA== | simonw 9599 | 2018-06-18T17:32:22Z | 2018-06-18T17:32:22Z | OWNER | As seen in #316 inspect is already taking a VERY long time to run against large (600GB) databases. To get this working I may have to make inspect an optional optimization and run introspection for columns and primary keys on demand. The one catch here is the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Mechanism for automatically picking up changes when on-disk .db file changes 324162476 | |
398109204 | https://github.com/simonw/datasette/issues/316#issuecomment-398109204 | https://api.github.com/repos/simonw/datasette/issues/316 | MDEyOklzc3VlQ29tbWVudDM5ODEwOTIwNA== | gavinband 132230 | 2018-06-18T16:12:45Z | 2018-06-18T16:12:45Z | NONE | Hi Simon, Thanks for the response. Ok I'll try running | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | datasette inspect takes a very long time on large dbs 333238932 | |
398102537 | https://github.com/simonw/datasette/issues/265#issuecomment-398102537 | https://api.github.com/repos/simonw/datasette/issues/265 | MDEyOklzc3VlQ29tbWVudDM5ODEwMjUzNw== | simonw 9599 | 2018-06-18T15:52:15Z | 2018-06-18T15:52:15Z | OWNER | https://latest.datasette.io/ now always hosts the latest version of the code. I've started linking to it from our documentation. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Add links to example Datasette instances to appropiate places in docs 323677499 | |
398101670 | https://github.com/simonw/datasette/issues/316#issuecomment-398101670 | https://api.github.com/repos/simonw/datasette/issues/316 | MDEyOklzc3VlQ29tbWVudDM5ODEwMTY3MA== | simonw 9599 | 2018-06-18T15:49:35Z | 2018-06-18T15:50:38Z | OWNER | Wow, I've gone as high as 7GB but I've never tried it against 600GB. As you spotted, most of the time is spent in those counts. I imagine you don't need those row counts in order for the rest of Datasette to function correctly (they are mainly used for display purposes - on the https://latest.datasette.io/fixtures index page for example). If your database changes infrequently, for the moment I recommend running […]. If your database DOES change frequently then this workaround won't help you much. Let me know and I'll see how much work it would take to have those row counts be optional rather than required. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | datasette inspect takes a very long time on large dbs 333238932 | |
398098582 | https://github.com/simonw/datasette/issues/266#issuecomment-398098582 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5ODA5ODU4Mg== | simonw 9599 | 2018-06-18T15:40:32Z | 2018-06-18T15:40:32Z | OWNER | This is now released in Datasette 0.23! http://datasette.readthedocs.io/en/latest/changelog.html#v0-23 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
398030903 | https://github.com/simonw/datasette/issues/316#issuecomment-398030903 | https://api.github.com/repos/simonw/datasette/issues/316 | MDEyOklzc3VlQ29tbWVudDM5ODAzMDkwMw== | gavinband 132230 | 2018-06-18T12:00:43Z | 2018-06-18T12:00:43Z | NONE | I should add that I'm using datasette version 0.22, Python 2.7.10 on Mac OS X. Happy to send more info if helpful. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | datasette inspect takes a very long time on large dbs 333238932 | |
397952129 | https://github.com/simonw/datasette/issues/266#issuecomment-397952129 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5Nzk1MjEyOQ== | simonw 9599 | 2018-06-18T06:15:36Z | 2018-06-18T06:15:51Z | OWNER | Advanced export pane demo: https://latest.datasette.io/fixtures-35b6eb6/facetable?_size=4 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
397949002 | https://github.com/simonw/datasette/issues/266#issuecomment-397949002 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5Nzk0OTAwMg== | simonw 9599 | 2018-06-18T05:53:17Z | 2018-06-18T05:53:17Z | OWNER | Advanced export pane: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
397923253 | https://github.com/simonw/datasette/issues/266#issuecomment-397923253 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5NzkyMzI1Mw== | simonw 9599 | 2018-06-18T01:49:52Z | 2018-06-18T03:02:28Z | OWNER | Ideally the downloadable filenames of exported CSVs would differ across different querystring parameters. Maybe S | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
397918264 | https://github.com/simonw/datasette/issues/266#issuecomment-397918264 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5NzkxODI2NA== | simonw 9599 | 2018-06-18T00:49:35Z | 2018-06-18T00:49:35Z | OWNER | Simpler design: the top of the page will link to basic .json and .csv and "advanced" - which will fragment-link to an advanced export format at the bottom of the page. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
397916321 | https://github.com/simonw/datasette/issues/266#issuecomment-397916321 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5NzkxNjMyMQ== | simonw 9599 | 2018-06-18T00:17:44Z | 2018-06-18T00:18:05Z | OWNER | The export UI could be a GET form controlling various parameters. This would discourage crawlers from hitting the export links and would also allow us to express the full range of export options. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
397916091 | https://github.com/simonw/datasette/issues/266#issuecomment-397916091 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5NzkxNjA5MQ== | simonw 9599 | 2018-06-18T00:13:43Z | 2018-06-18T00:15:50Z | OWNER | I was also worried about the performance of pagination over custom | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
397915403 | https://github.com/simonw/datasette/issues/266#issuecomment-397915403 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5NzkxNTQwMw== | simonw 9599 | 2018-06-18T00:03:17Z | 2018-06-18T00:14:37Z | OWNER | Since CSV streaming export doesn't work for custom SQL queries (since they don't support […]). Related: the UI should not show the option to download everything on custom SQL pages. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
397915258 | https://github.com/simonw/datasette/issues/266#issuecomment-397915258 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5NzkxNTI1OA== | simonw 9599 | 2018-06-18T00:01:05Z | 2018-06-18T00:01:05Z | OWNER | Someone malicious could use a UNION to generate an unpleasantly large CSV response. I'll add another config setting which limits the response size to 100MB but can be turned off by setting it to 0. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Export to CSV 323681589 | |
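
The #316 comments above centre on `datasette inspect` spending most of its time counting rows in every table of a 600GB database. A minimal sketch of that cost, using only Python's standard sqlite3 module (the file name is illustrative, and this is not Datasette's actual inspect code):

```python
import sqlite3
import time

def table_row_counts(path):
    # Each SELECT count(*) must scan the whole table (or an index covering it),
    # which is why pre-computing counts dominates inspect time on huge files.
    conn = sqlite3.connect(path)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    )]
    counts = {}
    for table in tables:
        start = time.time()
        (counts[table],) = conn.execute(
            "SELECT count(*) FROM [{}]".format(table)
        ).fetchone()
        print("{}: {} rows in {:.1f}s".format(table, counts[table], time.time() - start))
    return counts

# table_row_counts("genotypes.db")  # hypothetical large database file
```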
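The #271 comment suggests making inspect an optional optimization and introspecting columns and primary keys on demand instead. SQLite exposes what is needed through PRAGMA table_info; the sketch below shows the idea and is not Datasette's implementation:

```python
import sqlite3

def columns_and_primary_keys(conn, table):
    # PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk)
    # for each column; a pk value greater than 0 marks a primary key column.
    columns, primary_keys = [], []
    for _cid, name, _type, _notnull, _default, pk in conn.execute(
        "PRAGMA table_info([{}])".format(table)
    ):
        columns.append(name)
        if pk:
            primary_keys.append(name)
    return columns, primary_keys

conn = sqlite3.connect("github.db")  # hypothetical database file
print(columns_and_primary_keys(conn, "issue_comments"))
```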
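For the idea that downloadable CSV filenames should differ across querystring parameters (the truncated "Maybe S" comment), one obvious approach is to append a short hash of the canonicalised querystring. The helper below is purely illustrative, not what Datasette shipped:

```python
from hashlib import sha256
from urllib.parse import urlencode

def csv_filename(table, params):
    # Sort the parameters so logically identical queries hash the same way,
    # then keep a short prefix of the digest as a filename suffix.
    canonical = urlencode(sorted(params.items()))
    suffix = sha256(canonical.encode("utf-8")).hexdigest()[:6]
    return "{}-{}.csv".format(table, suffix)

print(csv_filename("facetable", {"_size": "4"}))  # e.g. facetable-1a2b3c.csv (digest illustrative)
```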
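The pagination and streaming comments on #266 hinge on the fact that a table with a known primary key can be exported in fixed-size batches (keyset pagination), whereas an arbitrary custom SQL query exposes no such key. A sketch of the table case, assuming an integer primary key column and not reflecting Datasette's internal code:

```python
import sqlite3

def stream_rows(conn, table, pk, batch_size=1000):
    # Keyset pagination: remember the last primary key seen and ask for the
    # next batch after it, so memory stays flat no matter how big the table is.
    last_pk = None
    while True:
        if last_pk is None:
            sql = "SELECT [{pk}], * FROM [{table}] ORDER BY [{pk}] LIMIT ?".format(pk=pk, table=table)
            rows = conn.execute(sql, (batch_size,)).fetchall()
        else:
            sql = ("SELECT [{pk}], * FROM [{table}] WHERE [{pk}] > ? "
                   "ORDER BY [{pk}] LIMIT ?").format(pk=pk, table=table)
            rows = conn.execute(sql, (last_pk, batch_size)).fetchall()
        if not rows:
            return
        for row in rows:
            yield row[1:]  # drop the duplicated leading pk column
        last_pk = rows[-1][0]

conn = sqlite3.connect("github.db")  # hypothetical database file
for row in stream_rows(conn, "issue_comments", "id"):
    pass  # write each row out as a CSV line here
```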
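The final comment proposes a config setting that caps CSV responses at 100MB, with 0 meaning unlimited. A sketch of how such a cap could wrap a streaming response; the names are hypothetical and not Datasette's actual setting:

```python
def cap_stream(chunks, max_bytes=100 * 1024 * 1024):
    # Stop emitting once the byte budget is spent; max_bytes=0 disables the cap,
    # mirroring the "turn it off by setting it to 0" behaviour described above.
    sent = 0
    for chunk in chunks:
        sent += len(chunk)
        if max_bytes and sent > max_bytes:
            break
        yield chunk

csv_chunks = (line.encode("utf-8") for line in ["id,body\r\n", "1,hello\r\n"])
capped = b"".join(cap_stream(csv_chunks))
```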
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
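
Given the schema above, the listing on this page corresponds to a straightforward query over the created_at and updated_at text columns. A sketch against a local copy of the database (the file name is assumed):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of the data
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE date(created_at) = '2018-06-18'
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 15 for the data shown on this page
```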