issue_comments
31 rows where "updated_at" is on date 2019-06-24 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
505162238 | https://github.com/simonw/datasette/issues/526#issuecomment-505162238 | https://api.github.com/repos/simonw/datasette/issues/526 | MDEyOklzc3VlQ29tbWVudDUwNTE2MjIzOA== | simonw 9599 | 2019-06-24T20:14:51Z | 2019-06-24T20:14:51Z | OWNER | The other reason I didn't implement this in the first place is that adding offset/limit to a custom query (as opposed to a view) requires modifying the existing SQL - what if that SQL already has its own offset/limit clause? It looks like I can solve that using a nested query:
So I can wrap any user-provided SQL query in an outer offset/limit and implement pagination that way. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Stream all results for arbitrary SQL and canned queries 459882902 | |
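The nested-query trick described in that comment can be sketched in Python with `sqlite3`. This is a hypothetical `paginate_query` helper for illustration, not Datasette's actual implementation; it shows how an inner `LIMIT`/`OFFSET` is applied first and the outer clause then paginates over the inner result:

```python
import sqlite3

def paginate_query(conn, user_sql, page_size, offset):
    # Wrap the user-provided SQL in a subquery so any LIMIT/OFFSET
    # it already contains is applied first, then apply our own window.
    wrapped = f"select * from ({user_sql}) limit {int(page_size)} offset {int(offset)}"
    return conn.execute(wrapped).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("create table t (n integer)")
conn.executemany("insert into t values (?)", [(i,) for i in range(10)])

# Inner "limit 8" restricts the query to n = 0..7 first; the outer
# window of 3 rows starting at offset 6 then yields n = 6 and 7.
rows = paginate_query(conn, "select n from t order by n limit 8", 3, 6)
```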
505161008 | https://github.com/simonw/datasette/issues/526#issuecomment-505161008 | https://api.github.com/repos/simonw/datasette/issues/526 | MDEyOklzc3VlQ29tbWVudDUwNTE2MTAwOA== | simonw 9599 | 2019-06-24T20:11:15Z | 2019-06-24T20:11:15Z | OWNER | Views already use offset/limit pagination so actually I may be over-thinking this. Maybe the right thing to do here is to have the feature enabled by default, since it will work for the VAST majority of queries - the only ones that might cause problems are complex queries across millions of rows. It can continue to use aggressive internal time limits so if someone DOES trigger something expensive they'll get an error. I can allow users to disable the feature with a config setting, or increase the time limit if they need to. Downgrading this from a medium to a small since it's much less effort to enable the existing pagination method for this type of query. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Stream all results for arbitrary SQL and canned queries 459882902 | |
505087020 | https://github.com/simonw/datasette/pull/437#issuecomment-505087020 | https://api.github.com/repos/simonw/datasette/issues/437 | MDEyOklzc3VlQ29tbWVudDUwNTA4NzAyMA== | simonw 9599 | 2019-06-24T16:38:56Z | 2019-06-24T16:38:56Z | OWNER | Closing this because it doesn't really fit the new model of inspect (though we should discuss in #465 how to further evolve this feature) and because as-of #272 we no longer use Sanic - though #520 will implement the equivalent of |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add inspect and prepare_sanic hooks 438048318 | |
505086213 | https://github.com/simonw/datasette/issues/525#issuecomment-505086213 | https://api.github.com/repos/simonw/datasette/issues/525 | MDEyOklzc3VlQ29tbWVudDUwNTA4NjIxMw== | simonw 9599 | 2019-06-24T16:36:35Z | 2019-06-24T16:36:35Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add section on sqlite-utils enable-fts to the search documentation 459714943 | ||
505083671 | https://github.com/simonw/datasette/issues/525#issuecomment-505083671 | https://api.github.com/repos/simonw/datasette/issues/525 | MDEyOklzc3VlQ29tbWVudDUwNTA4MzY3MQ== | simonw 9599 | 2019-06-24T16:29:30Z | 2019-06-24T16:29:30Z | OWNER | It's mentioned here at the moment, but I'm going to expand that: |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add section on sqlite-utils enable-fts to the search documentation 459714943 | |
505061703 | https://github.com/simonw/datasette/issues/514#issuecomment-505061703 | https://api.github.com/repos/simonw/datasette/issues/514 | MDEyOklzc3VlQ29tbWVudDUwNTA2MTcwMw== | simonw 9599 | 2019-06-24T15:31:25Z | 2019-06-24T15:31:25Z | OWNER | I'm suspicious of the wildcard. Does it work if you do the following?
If that does work then it means the ExecStart line doesn't support bash wildcard expansion. You'll need to create a separate script like this: ```
#!/bin/bash
/home/chris/Env/datasette/bin/datasette serve -h 0.0.0.0 /home/chris/digital-library/databases/*.db --cors --metadata /home/chris/digital-library/metadata.json
``` Then save that as
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation with recommendations on running Datasette in production without using Docker 459397625 | |
505060332 | https://github.com/simonw/datasette/issues/526#issuecomment-505060332 | https://api.github.com/repos/simonw/datasette/issues/526 | MDEyOklzc3VlQ29tbWVudDUwNTA2MDMzMg== | simonw 9599 | 2019-06-24T15:28:16Z | 2019-06-24T15:28:16Z | OWNER | This is currently a deliberate feature decision. The problem is that the streaming CSV feature relies on Datasette's automated efficient pagination under the hood. When you stream a CSV you're actually causing Datasette to paginate through the full set of "pages" under the hood, streaming each page out as a new chunk of CSV rows. This mechanism only works if the Offset/limit pagination for canned queries would be a pretty nasty performance hit, because each subsequent page would require even more time for SQLite to scroll through to the specified offset. This does seem like it's worth fixing though: pulling every row for a canned query would definitely be useful. The problem is that the pagination trick used elsewhere isn't right for canned queries - instead I would need to keep the database cursor open until ALL rows had been fetched. Figuring out how to do that efficiently within an asyncio-managed thread pool may take some thought. Maybe this feature ends up as something which is turned off by default (due to the risk of it causing uptime problems for public sites) but that users working on their own private environments can turn on? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Stream all results for arbitrary SQL and canned queries 459882902 | |
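The offset/limit performance hit described above is the reason Datasette's table pagination uses keyset pagination instead: each page resumes from the last seen key rather than scrolling past an ever-growing offset. A minimal sketch with `sqlite3` (the `keyset_pages` helper is illustrative, not Datasette's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table logs (id integer primary key, msg text)")
conn.executemany("insert into logs (msg) values (?)", [("m",)] * 100)

def keyset_pages(conn, page_size):
    # Keyset pagination: resume from the last seen primary key instead
    # of OFFSET, so every page is an index seek, not a scan-and-skip.
    last_id = 0
    while True:
        rows = conn.execute(
            "select id, msg from logs where id > ? order by id limit ?",
            (last_id, page_size),
        ).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]

# 100 rows paged 30 at a time: pages of 30, 30, 30, 10
pages = list(keyset_pages(conn, 30))
```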
505057520 | https://github.com/simonw/datasette/issues/527#issuecomment-505057520 | https://api.github.com/repos/simonw/datasette/issues/527 | MDEyOklzc3VlQ29tbWVudDUwNTA1NzUyMA== | simonw 9599 | 2019-06-24T15:21:18Z | 2019-06-24T15:21:18Z | OWNER | I just released csvs-to-sqlite 0.9.1 with this bug fix: https://github.com/simonw/csvs-to-sqlite/releases/tag/0.9.1 |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Unable to use rank when fts-table generated with csvs-to-sqlite 459936585 | |
505052344 | https://github.com/simonw/datasette/issues/527#issuecomment-505052344 | https://api.github.com/repos/simonw/datasette/issues/527 | MDEyOklzc3VlQ29tbWVudDUwNTA1MjM0NA== | simonw 9599 | 2019-06-24T15:09:10Z | 2019-06-24T15:09:10Z | OWNER | Closing in favour of that bug in the csvs-to-sqlite repo. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Unable to use rank when fts-table generated with csvs-to-sqlite 459936585 | |
505052224 | https://github.com/simonw/datasette/issues/527#issuecomment-505052224 | https://api.github.com/repos/simonw/datasette/issues/527 | MDEyOklzc3VlQ29tbWVudDUwNTA1MjIyNA== | simonw 9599 | 2019-06-24T15:08:52Z | 2019-06-24T15:08:52Z | OWNER | The ... I tested on my own machine and that is indeed what's happening! And in fact it looks like it's a known bug - I should fix that! https://github.com/simonw/csvs-to-sqlite/issues/41 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Unable to use rank when fts-table generated with csvs-to-sqlite 459936585 | |
504791796 | https://github.com/simonw/datasette/pull/518#issuecomment-504791796 | https://api.github.com/repos/simonw/datasette/issues/518 | MDEyOklzc3VlQ29tbWVudDUwNDc5MTc5Ng== | simonw 9599 | 2019-06-23T22:10:02Z | 2019-06-24T13:42:50Z | OWNER | The Sanic stuff I import at the moment is:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Port Datasette from Sanic to ASGI + Uvicorn 459587155 | |
504998302 | https://github.com/simonw/datasette/issues/514#issuecomment-504998302 | https://api.github.com/repos/simonw/datasette/issues/514 | MDEyOklzc3VlQ29tbWVudDUwNDk5ODMwMg== | chrismp 7936571 | 2019-06-24T12:57:19Z | 2019-06-24T12:57:19Z | NONE | Same error when I used the full path. On Sun, Jun 23, 2019 at 18:31 Simon Willison notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation with recommendations on running Datasette in production without using Docker 459397625 | |
504883688 | https://github.com/simonw/datasette/issues/48#issuecomment-504883688 | https://api.github.com/repos/simonw/datasette/issues/48 | MDEyOklzc3VlQ29tbWVudDUwNDg4MzY4OA== | simonw 9599 | 2019-06-24T06:57:43Z | 2019-06-24T06:57:43Z | OWNER | I've seen no evidence that JSON handling is even close to being a performance bottleneck, so wontfix. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Switch to ujson 272391665 | |
504882686 | https://github.com/simonw/datasette/issues/294#issuecomment-504882686 | https://api.github.com/repos/simonw/datasette/issues/294 | MDEyOklzc3VlQ29tbWVudDUwNDg4MjY4Ng== | simonw 9599 | 2019-06-24T06:54:22Z | 2019-06-24T06:54:22Z | OWNER | Consider this when solving #465 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
inspect should record column types 327365110 | |
504882244 | https://github.com/simonw/datasette/issues/238#issuecomment-504882244 | https://api.github.com/repos/simonw/datasette/issues/238 | MDEyOklzc3VlQ29tbWVudDUwNDg4MjI0NA== | simonw 9599 | 2019-06-24T06:52:45Z | 2019-06-24T06:52:45Z | OWNER | I'm not going to do this - there are plenty of smarter ways of achieving a similar goal. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
External metadata.json 317714268 | |
504881630 | https://github.com/simonw/datasette/issues/340#issuecomment-504881630 | https://api.github.com/repos/simonw/datasette/issues/340 | MDEyOklzc3VlQ29tbWVudDUwNDg4MTYzMA== | simonw 9599 | 2019-06-24T06:50:26Z | 2019-06-24T06:50:26Z | OWNER | Black is now enforced by our unit tests as of #449 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Embrace black 340730961 | |
504881030 | https://github.com/simonw/datasette/issues/146#issuecomment-504881030 | https://api.github.com/repos/simonw/datasette/issues/146 | MDEyOklzc3VlQ29tbWVudDUwNDg4MTAzMA== | simonw 9599 | 2019-06-24T06:48:20Z | 2019-06-24T06:48:20Z | OWNER | I'm going to call this "done" thanks to cloudrun: #400 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
datasette publish gcloud 276455748 | |
504880796 | https://github.com/simonw/datasette/issues/268#issuecomment-504880796 | https://api.github.com/repos/simonw/datasette/issues/268 | MDEyOklzc3VlQ29tbWVudDUwNDg4MDc5Ng== | simonw 9599 | 2019-06-24T06:47:23Z | 2019-06-24T06:47:23Z | OWNER | I did a bunch of research relevant to this a while ago: https://simonwillison.net/2019/Jan/7/exploring-search-relevance-algorithms-sqlite/ |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Mechanism for ranking results from SQLite full-text search 323718842 | |
504880173 | https://github.com/simonw/datasette/issues/183#issuecomment-504880173 | https://api.github.com/repos/simonw/datasette/issues/183 | MDEyOklzc3VlQ29tbWVudDUwNDg4MDE3Mw== | simonw 9599 | 2019-06-24T06:45:07Z | 2019-06-24T06:45:07Z | OWNER | Closing as couldn't replicate |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Custom Queries - escaping strings 291639118 | |
504879834 | https://github.com/simonw/datasette/issues/124#issuecomment-504879834 | https://api.github.com/repos/simonw/datasette/issues/124 | MDEyOklzc3VlQ29tbWVudDUwNDg3OTgzNA== | simonw 9599 | 2019-06-24T06:43:46Z | 2019-06-24T06:43:46Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Option to open readonly but not immutable 275125805 | ||
504879510 | https://github.com/simonw/datasette/issues/106#issuecomment-504879510 | https://api.github.com/repos/simonw/datasette/issues/106 | MDEyOklzc3VlQ29tbWVudDUwNDg3OTUxMA== | simonw 9599 | 2019-06-24T06:42:33Z | 2019-06-24T06:42:33Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Document how pagination works 274315193 | ||
504879082 | https://github.com/simonw/datasette/issues/267#issuecomment-504879082 | https://api.github.com/repos/simonw/datasette/issues/267 | MDEyOklzc3VlQ29tbWVudDUwNDg3OTA4Mg== | simonw 9599 | 2019-06-24T06:41:02Z | 2019-06-24T06:41:02Z | OWNER | Yes this is definitely documented now https://datasette.readthedocs.io/en/stable/performance.html |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation for URL hashing, redirects and cache policy 323716411 | |
504878886 | https://github.com/simonw/datasette/issues/305#issuecomment-504878886 | https://api.github.com/repos/simonw/datasette/issues/305 | MDEyOklzc3VlQ29tbWVudDUwNDg3ODg4Ng== | simonw 9599 | 2019-06-24T06:40:19Z | 2019-06-24T06:40:19Z | OWNER | I did this a while ago https://datasette.readthedocs.io/en/stable/contributing.html |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add contributor guidelines to docs 329147284 | |
504863901 | https://github.com/simonw/datasette/issues/398#issuecomment-504863901 | https://api.github.com/repos/simonw/datasette/issues/398 | MDEyOklzc3VlQ29tbWVudDUwNDg2MzkwMQ== | simonw 9599 | 2019-06-24T05:33:26Z | 2019-06-24T05:33:26Z | OWNER | I no longer depend on Sanic so I should be able to solve this entirely within the Datasette codebase. I need to figure out how |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ensure downloading a 100+MB SQLite database file works 398011658 | |
504863286 | https://github.com/simonw/datasette/issues/511#issuecomment-504863286 | https://api.github.com/repos/simonw/datasette/issues/511 | MDEyOklzc3VlQ29tbWVudDUwNDg2MzI4Ng== | simonw 9599 | 2019-06-24T05:30:02Z | 2019-06-24T05:30:02Z | OWNER | I've landed #272 - need to manually test if it works on Windows now! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Get Datasette tests passing on Windows in GitHub Actions 456578474 | |
504857097 | https://github.com/simonw/datasette/issues/272#issuecomment-504857097 | https://api.github.com/repos/simonw/datasette/issues/272 | MDEyOklzc3VlQ29tbWVudDUwNDg1NzA5Nw== | simonw 9599 | 2019-06-24T04:54:15Z | 2019-06-24T04:54:15Z | OWNER | I wrote about this on my blog: https://simonwillison.net/2019/Jun/23/datasette-asgi/ |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Port Datasette to ASGI 324188953 | |
504852873 | https://github.com/simonw/datasette/issues/520#issuecomment-504852873 | https://api.github.com/repos/simonw/datasette/issues/520 | MDEyOklzc3VlQ29tbWVudDUwNDg1Mjg3Mw== | simonw 9599 | 2019-06-24T04:28:22Z | 2019-06-24T04:28:22Z | OWNER | #272 is closed now! This hook is next on the priority list. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
asgi_wrapper plugin hook 459598080 | |
504844339 | https://github.com/simonw/datasette/issues/272#issuecomment-504844339 | https://api.github.com/repos/simonw/datasette/issues/272 | MDEyOklzc3VlQ29tbWVudDUwNDg0NDMzOQ== | simonw 9599 | 2019-06-24T03:33:06Z | 2019-06-24T03:33:06Z | OWNER | It's alive! Here's the first deployed version: https://a559123.datasette.io/ You can confirm it's running under ASGI by viewing https://a559123.datasette.io/-/versions and looking for the Compare to the last version of master running on Sanic here: http://aa91112.datasette.io/ |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Port Datasette to ASGI 324188953 | |
504843916 | https://github.com/simonw/datasette/pull/518#issuecomment-504843916 | https://api.github.com/repos/simonw/datasette/issues/518 | MDEyOklzc3VlQ29tbWVudDUwNDg0MzkxNg== | simonw 9599 | 2019-06-24T03:30:37Z | 2019-06-24T03:30:37Z | OWNER | It's live! https://a559123.datasette.io/ |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Port Datasette from Sanic to ASGI + Uvicorn 459587155 | |
504809397 | https://github.com/simonw/datasette/issues/523#issuecomment-504809397 | https://api.github.com/repos/simonw/datasette/issues/523 | MDEyOklzc3VlQ29tbWVudDUwNDgwOTM5Nw== | rixx 2657547 | 2019-06-24T01:38:14Z | 2019-06-24T01:38:14Z | CONTRIBUTOR | Ah, apologies – I had found and read those issues, but I was under the impression that they referred only to the filtered row count, not the unfiltered total row count. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Show total/unfiltered row count when filtering 459627549 | |
504807727 | https://github.com/simonw/datasette/issues/523#issuecomment-504807727 | https://api.github.com/repos/simonw/datasette/issues/523 | MDEyOklzc3VlQ29tbWVudDUwNDgwNzcyNw== | simonw 9599 | 2019-06-24T01:24:07Z | 2019-06-24T01:24:07Z | OWNER | For databases opened in immutable mode we pre-calculate the total row count for every table precisely so we can offer this kind of functionality without too much of a performance hit. The total row count is already available to the template (you can hit the .json endpoint to see it), so implementing this should be possible just by updating the template. For mutable databases we have a mechanism for attempting the count and giving up after a specified time limit - we can use that to get "3 of many". It looks like this is actually a dupe of #127 and #134. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Show total/unfiltered row count when filtering 459627549 |
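The "attempt the count, give up after a time limit" mechanism mentioned above can be approximated in plain Python with `sqlite3`'s progress handler. This is a sketch under stated assumptions, not Datasette's actual implementation; the `count_with_limit` helper name is hypothetical:

```python
import sqlite3
import time

def count_with_limit(conn, table, time_limit_s=0.05):
    # Attempt a full count(*), but have a progress handler abort the
    # query once the deadline passes - mirroring the "3 of many" fallback.
    deadline = time.monotonic() + time_limit_s
    conn.set_progress_handler(
        lambda: 1 if time.monotonic() > deadline else 0, 1000
    )
    try:
        return conn.execute(f"select count(*) from [{table}]").fetchone()[0]
    except sqlite3.OperationalError:
        return None  # count took too long; caller can show "many" instead
    finally:
        conn.set_progress_handler(None, 0)

conn = sqlite3.connect(":memory:")
conn.execute("create table t (x)")
conn.execute("insert into t values (1), (2), (3)")
total = count_with_limit(conn, "t")
```

A tiny table like this finishes well inside the deadline, so `total` is the real count; on a huge table the handler fires and the function returns `None`.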
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);