issue_comments
8,358 rows where author_association = "OWNER" sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
782464306 | https://github.com/simonw/datasette/issues/1236#issuecomment-782464306 | https://api.github.com/repos/simonw/datasette/issues/1236 | MDEyOklzc3VlQ29tbWVudDc4MjQ2NDMwNg== | simonw 9599 | 2021-02-19T23:57:32Z | 2021-02-19T23:57:32Z | OWNER | Need to test this on mobile. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to increase size of the SQL editor window 812228314 | |
782464215 | https://github.com/simonw/datasette/issues/1236#issuecomment-782464215 | https://api.github.com/repos/simonw/datasette/issues/1236 | MDEyOklzc3VlQ29tbWVudDc4MjQ2NDIxNQ== | simonw 9599 | 2021-02-19T23:57:13Z | 2021-02-19T23:57:13Z | OWNER | Now live on https://latest.datasette.io/_memory |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to increase size of the SQL editor window 812228314 | |
782462049 | https://github.com/simonw/datasette/issues/1236#issuecomment-782462049 | https://api.github.com/repos/simonw/datasette/issues/1236 | MDEyOklzc3VlQ29tbWVudDc4MjQ2MjA0OQ== | simonw 9599 | 2021-02-19T23:51:12Z | 2021-02-19T23:51:12Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to increase size of the SQL editor window 812228314 | ||
782459550 | https://github.com/simonw/datasette/issues/1236#issuecomment-782459550 | https://api.github.com/repos/simonw/datasette/issues/1236 | MDEyOklzc3VlQ29tbWVudDc4MjQ1OTU1MA== | simonw 9599 | 2021-02-19T23:45:30Z | 2021-02-19T23:45:30Z | OWNER | Encoded using https://meyerweb.com/eric/tools/dencoder/
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to increase size of the SQL editor window 812228314 | |
782459405 | https://github.com/simonw/datasette/issues/1236#issuecomment-782459405 | https://api.github.com/repos/simonw/datasette/issues/1236 | MDEyOklzc3VlQ29tbWVudDc4MjQ1OTQwNQ== | simonw 9599 | 2021-02-19T23:45:02Z | 2021-02-19T23:45:02Z | OWNER | I'm going to use a variant of the Datasette menu icon. Here it is in
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to increase size of the SQL editor window 812228314 | |
782458983 | https://github.com/simonw/datasette/issues/1236#issuecomment-782458983 | https://api.github.com/repos/simonw/datasette/issues/1236 | MDEyOklzc3VlQ29tbWVudDc4MjQ1ODk4Mw== | simonw 9599 | 2021-02-19T23:43:34Z | 2021-02-19T23:43:34Z | OWNER | I only want it to resize up and down, not left to right - so I'm not keen on the default resize handle: https://rawgit.com/Sphinxxxx/cm-resize/master/demo/index.html |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to increase size of the SQL editor window 812228314 | |
782458744 | https://github.com/simonw/datasette/issues/1236#issuecomment-782458744 | https://api.github.com/repos/simonw/datasette/issues/1236 | MDEyOklzc3VlQ29tbWVudDc4MjQ1ODc0NA== | simonw 9599 | 2021-02-19T23:42:42Z | 2021-02-19T23:42:42Z | OWNER | I can use https://github.com/Sphinxxxx/cm-resize for this |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to increase size of the SQL editor window 812228314 | |
782246111 | https://github.com/simonw/datasette/issues/619#issuecomment-782246111 | https://api.github.com/repos/simonw/datasette/issues/619 | MDEyOklzc3VlQ29tbWVudDc4MjI0NjExMQ== | simonw 9599 | 2021-02-19T18:11:22Z | 2021-02-19T18:11:22Z | OWNER | Big usability improvement, see also #1236 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
"Invalid SQL" page should let you edit the SQL 520655983 | |
781825726 | https://github.com/simonw/sqlite-utils/issues/236#issuecomment-781825726 | https://api.github.com/repos/simonw/sqlite-utils/issues/236 | MDEyOklzc3VlQ29tbWVudDc4MTgyNTcyNg== | simonw 9599 | 2021-02-19T05:10:41Z | 2021-02-19T05:10:41Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--attach command line option for attaching extra databases 811680502 | ||
781825187 | https://github.com/simonw/sqlite-utils/issues/113#issuecomment-781825187 | https://api.github.com/repos/simonw/sqlite-utils/issues/113 | MDEyOklzc3VlQ29tbWVudDc4MTgyNTE4Nw== | simonw 9599 | 2021-02-19T05:09:12Z | 2021-02-19T05:09:12Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Syntactic sugar for ATTACH DATABASE 621286870 | ||
781764561 | https://github.com/simonw/datasette/issues/283#issuecomment-781764561 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTc2NDU2MQ== | simonw 9599 | 2021-02-19T02:10:21Z | 2021-02-19T02:10:21Z | OWNER | This feature is now released! https://docs.datasette.io/en/stable/changelog.html#v0-55 |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 } |
Support cross-database joins 325958506 | |
781736855 | https://github.com/simonw/datasette/issues/1235#issuecomment-781736855 | https://api.github.com/repos/simonw/datasette/issues/1235 | MDEyOklzc3VlQ29tbWVudDc4MTczNjg1NQ== | simonw 9599 | 2021-02-19T00:52:47Z | 2021-02-19T01:47:53Z | OWNER | I bumped the two lines in the
Then I ran it with:
http://0.0.0.0:8001/-/versions confirmed that it was now running Python 3.7.10 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Upgrade Python version used by official Datasette Docker image 811589344 | |
781735887 | https://github.com/simonw/datasette/issues/1235#issuecomment-781735887 | https://api.github.com/repos/simonw/datasette/issues/1235 | MDEyOklzc3VlQ29tbWVudDc4MTczNTg4Nw== | simonw 9599 | 2021-02-19T00:50:21Z | 2021-02-19T00:50:55Z | OWNER | I'll bump to |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Upgrade Python version used by official Datasette Docker image 811589344 | |
781670827 | https://github.com/simonw/datasette/issues/283#issuecomment-781670827 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTY3MDgyNw== | simonw 9599 | 2021-02-18T22:16:46Z | 2021-02-18T22:16:46Z | OWNER | Demo is now live here: https://latest.datasette.io/_memory The documentation is at https://docs.datasette.io/en/latest/sql_queries.html#cross-database-queries - it links to this example query: https://latest.datasette.io/_memory?sql=select%0D%0A++%27fixtures%27+as+database%2C+%0D%0Afrom%0D%0A++%5Bfixtures%5D.sqlite_master%0D%0Aunion%0D%0Aselect%0D%0A++%27extra_database%27+as+database%2C+%0D%0Afrom%0D%0A++%5Bextra_database%5D.sqlite_master |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support cross-database joins 325958506 | |
781665560 | https://github.com/simonw/datasette/issues/283#issuecomment-781665560 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTY2NTU2MA== | simonw 9599 | 2021-02-18T22:06:14Z | 2021-02-18T22:06:14Z | OWNER | The implementation in #1232 is ready to land. It's the simplest-thing-that-could-possibly-work: you can run It only works on the first 10 databases that were passed on the command line. This means that if you have a Datasette instance with hundreds of attached databases (see Datasette Library) this won't be particularly useful for you. So... a better, future version of this feature would be one that lets you join across databases on command - maybe by hitting Also worth noting: plugins that implement the prepare_connection() hook can attach additional databases - so if you need better, customized support for this, one way to handle that would be with a custom plugin. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support cross-database joins 325958506 | |
781651283 | https://github.com/simonw/datasette/pull/1232#issuecomment-781651283 | https://api.github.com/repos/simonw/datasette/issues/1232 | MDEyOklzc3VlQ29tbWVudDc4MTY1MTI4Mw== | simonw 9599 | 2021-02-18T21:37:55Z | 2021-02-18T21:37:55Z | OWNER | UI listing the attached tables: |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--crossdb option for joining across databases 811407131 | |
781641728 | https://github.com/simonw/datasette/pull/1232#issuecomment-781641728 | https://api.github.com/repos/simonw/datasette/issues/1232 | MDEyOklzc3VlQ29tbWVudDc4MTY0MTcyOA== | simonw 9599 | 2021-02-18T21:19:34Z | 2021-02-18T21:19:34Z | OWNER | I tested the demo deployment like this:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--crossdb option for joining across databases 811407131 | |
781637292 | https://github.com/simonw/datasette/pull/1232#issuecomment-781637292 | https://api.github.com/repos/simonw/datasette/issues/1232 | MDEyOklzc3VlQ29tbWVudDc4MTYzNzI5Mg== | simonw 9599 | 2021-02-18T21:11:31Z | 2021-02-18T21:11:31Z | OWNER | Due to bug #1233 I'm going to publish the additional database as |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--crossdb option for joining across databases 811407131 | |
781636590 | https://github.com/simonw/datasette/issues/1233#issuecomment-781636590 | https://api.github.com/repos/simonw/datasette/issues/1233 | MDEyOklzc3VlQ29tbWVudDc4MTYzNjU5MA== | simonw 9599 | 2021-02-18T21:10:08Z | 2021-02-18T21:10:08Z | OWNER | I think the bug is here: https://github.com/simonw/datasette/blob/640ac7071b73111ba4423812cd683756e0e1936b/datasette/utils/init.py#L349-L373 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
"datasette publish cloudrun" cannot publish files with spaces in their name 811458446 | |
781634819 | https://github.com/simonw/datasette/pull/1232#issuecomment-781634819 | https://api.github.com/repos/simonw/datasette/issues/1232 | MDEyOklzc3VlQ29tbWVudDc4MTYzNDgxOQ== | simonw 9599 | 2021-02-18T21:06:43Z | 2021-02-18T21:06:43Z | OWNER | I'll document this option on https://docs.datasette.io/en/stable/sql_queries.html |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--crossdb option for joining across databases 811407131 | |
781629841 | https://github.com/simonw/datasette/pull/1232#issuecomment-781629841 | https://api.github.com/repos/simonw/datasette/issues/1232 | MDEyOklzc3VlQ29tbWVudDc4MTYyOTg0MQ== | simonw 9599 | 2021-02-18T20:57:23Z | 2021-02-18T20:57:23Z | OWNER | The new warning looks like this: |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--crossdb option for joining across databases 811407131 | |
781598585 | https://github.com/simonw/datasette/pull/1232#issuecomment-781598585 | https://api.github.com/repos/simonw/datasette/issues/1232 | MDEyOklzc3VlQ29tbWVudDc4MTU5ODU4NQ== | simonw 9599 | 2021-02-18T19:57:30Z | 2021-02-18T19:57:30Z | OWNER | It would also be neat if https://latest.datasette.io/ had multiple databases attached in order to demonstrate this feature. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--crossdb option for joining across databases 811407131 | |
781594632 | https://github.com/simonw/datasette/pull/1232#issuecomment-781594632 | https://api.github.com/repos/simonw/datasette/issues/1232 | MDEyOklzc3VlQ29tbWVudDc4MTU5NDYzMg== | simonw 9599 | 2021-02-18T19:50:21Z | 2021-02-18T19:50:21Z | OWNER | It would be neat if the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--crossdb option for joining across databases 811407131 | |
781593169 | https://github.com/simonw/datasette/issues/283#issuecomment-781593169 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTU5MzE2OQ== | simonw 9599 | 2021-02-18T19:47:34Z | 2021-02-18T19:47:34Z | OWNER | I have a working version now, moving development to a pull request. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support cross-database joins 325958506 | |
781591015 | https://github.com/simonw/datasette/issues/283#issuecomment-781591015 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTU5MTAxNQ== | simonw 9599 | 2021-02-18T19:44:02Z | 2021-02-18T19:44:02Z | OWNER | For the moment I'm going to hard-code a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support cross-database joins 325958506 | |
781574786 | https://github.com/simonw/datasette/issues/283#issuecomment-781574786 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTU3NDc4Ng== | simonw 9599 | 2021-02-18T19:15:37Z | 2021-02-18T19:15:37Z | OWNER |
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support cross-database joins 325958506 | |
781573676 | https://github.com/simonw/datasette/issues/283#issuecomment-781573676 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTU3MzY3Ng== | simonw 9599 | 2021-02-18T19:13:30Z | 2021-02-18T19:13:30Z | OWNER | It turns out SQLite defaults to a maximum of 10 attached databases. This can be increased using a compile-time constant, but even with that it cannot be more than 62: https://stackoverflow.com/questions/9845448/attach-limit-10 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support cross-database joins 325958506 | |
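The attach behaviour discussed above is easy to see from Python's `sqlite3` module, which is all a cross-database join needs: one connection, plus `ATTACH DATABASE`. A minimal sketch (table and alias names here are illustrative, not Datasette's):

```python
import sqlite3

# A single connection can query across databases once they are ATTACHed
# (up to SQLite's default limit of 10 attached databases).
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS other")

conn.execute("CREATE TABLE main.points (id INTEGER, label TEXT)")
conn.execute("CREATE TABLE other.extras (id INTEGER, note TEXT)")
conn.execute("INSERT INTO main.points VALUES (1, 'alpha')")
conn.execute("INSERT INTO other.extras VALUES (1, 'from the attached db')")

# Tables are qualified by their database alias, as in cross-database queries:
row = conn.execute(
    "select points.label, extras.note "
    "from main.points join other.extras on points.id = extras.id"
).fetchone()
print(row)
```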
781560989 | https://github.com/simonw/datasette/issues/1231#issuecomment-781560989 | https://api.github.com/repos/simonw/datasette/issues/1231 | MDEyOklzc3VlQ29tbWVudDc4MTU2MDk4OQ== | simonw 9599 | 2021-02-18T18:50:53Z | 2021-02-18T18:50:53Z | OWNER | Ideally I'd figure out a way to replicate this error in a concurrent unit test. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Race condition errors in new refresh_schemas() mechanism 811367257 | |
781560865 | https://github.com/simonw/datasette/issues/1231#issuecomment-781560865 | https://api.github.com/repos/simonw/datasette/issues/1231 | MDEyOklzc3VlQ29tbWVudDc4MTU2MDg2NQ== | simonw 9599 | 2021-02-18T18:50:38Z | 2021-02-18T18:50:38Z | OWNER | I started trying to use locks to resolve this but I've not figured out the right way to do that yet - here's my first experiment:

```diff
diff --git a/datasette/app.py b/datasette/app.py
index 9e15a16..1681c9d 100644
--- a/datasette/app.py
+++ b/datasette/app.py
@@ -217,6 +217,7 @@ class Datasette:
         self.inspect_data = inspect_data
         self.immutables = set(immutables or [])
         self.databases = collections.OrderedDict()
+        self._refresh_schemas_lock = threading.Lock()
         if memory or not self.files:
             self.add_database(Database(self, is_memory=True), name="_memory")
         # memory_name is a random string so that each Datasette instance gets its own
@@ -324,6 +325,13 @@ class Datasette:
         self.client = DatasetteClient(self)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Race condition errors in new refresh_schemas() mechanism 811367257 | |
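The diff above reaches for `threading.Lock`; since `refresh_schemas()` runs inside an event loop, the same idea can be sketched with `asyncio.Lock`. The `SchemaRefresher` class here is a stand-in for illustration, not Datasette's actual code:

```python
import asyncio

# Stand-in for a refresh_schemas()-style method: an async critical section
# guarded by asyncio.Lock so overlapping calls cannot interleave.
class SchemaRefresher:
    def __init__(self):
        self._refresh_schemas_lock = asyncio.Lock()
        self.refresh_count = 0
        self._mid_refresh = False

    async def refresh_schemas(self):
        async with self._refresh_schemas_lock:
            # Without the lock, concurrent callers could both observe
            # _mid_refresh == False and race through this section.
            assert not self._mid_refresh
            self._mid_refresh = True
            await asyncio.sleep(0)  # simulate awaiting database work
            self.refresh_count += 1
            self._mid_refresh = False

async def main():
    refresher = SchemaRefresher()
    # Ten concurrent callers; the lock serializes them.
    await asyncio.gather(*(refresher.refresh_schemas() for _ in range(10)))
    return refresher.refresh_count

count = asyncio.run(main())
print(count)
```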
781546512 | https://github.com/simonw/datasette/issues/1226#issuecomment-781546512 | https://api.github.com/repos/simonw/datasette/issues/1226 | MDEyOklzc3VlQ29tbWVudDc4MTU0NjUxMg== | simonw 9599 | 2021-02-18T18:26:19Z | 2021-02-18T18:26:19Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--port option should validate port is between 0 and 65535 808843401 | ||
781530157 | https://github.com/simonw/datasette/issues/1226#issuecomment-781530157 | https://api.github.com/repos/simonw/datasette/issues/1226 | MDEyOklzc3VlQ29tbWVudDc4MTUzMDE1Nw== | simonw 9599 | 2021-02-18T18:00:15Z | 2021-02-18T18:00:15Z | OWNER | I can use |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--port option should validate port is between 0 and 65535 808843401 | |
781077127 | https://github.com/simonw/datasette/issues/283#issuecomment-781077127 | https://api.github.com/repos/simonw/datasette/issues/283 | MDEyOklzc3VlQ29tbWVudDc4MTA3NzEyNw== | simonw 9599 | 2021-02-18T05:56:30Z | 2021-02-18T05:57:34Z | OWNER | I'm going to to try prototyping the |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 } |
Support cross-database joins 325958506 | |
779467451 | https://github.com/simonw/datasette/issues/1226#issuecomment-779467451 | https://api.github.com/repos/simonw/datasette/issues/1226 | MDEyOklzc3VlQ29tbWVudDc3OTQ2NzQ1MQ== | simonw 9599 | 2021-02-15T22:02:46Z | 2021-02-15T22:02:46Z | OWNER | I'm OK with the current error message shown if you try to use too low a port:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--port option should validate port is between 0 and 65535 808843401 | |
779467160 | https://github.com/simonw/datasette/issues/1226#issuecomment-779467160 | https://api.github.com/repos/simonw/datasette/issues/1226 | MDEyOklzc3VlQ29tbWVudDc3OTQ2NzE2MA== | simonw 9599 | 2021-02-15T22:01:53Z | 2021-02-15T22:01:53Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--port option should validate port is between 0 and 65535 808843401 | ||
779416619 | https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779416619 | https://api.github.com/repos/simonw/sqlite-utils/issues/147 | MDEyOklzc3VlQ29tbWVudDc3OTQxNjYxOQ== | simonw 9599 | 2021-02-15T19:40:57Z | 2021-02-15T21:27:55Z | OWNER | Tried this experiment (not proper binary search, it only searches downwards):

```python
import sqlite3

db = sqlite3.connect(":memory:")

def tryit(n):
    sql = "select 1 where 1 in ({})".format(", ".join("?" for i in range(n)))
    db.execute(sql, [0 for i in range(n)])

def find_limit(min=0, max=5_000_000):
    value = max
    while True:
        print('Trying', value)
        try:
            tryit(value)
            return value
        except:
            value = value // 2
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
SQLITE_MAX_VARS maybe hard-coded too low 688670158 | |
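The experiment above only halves downwards, so it can overshoot the true limit. A proper bisection version of the same probe finds the exact maximum (a sketch; the result varies by SQLite build):

```python
import sqlite3

db = sqlite3.connect(":memory:")

def tryit(n):
    # Build "select 1 where 1 in (?, ?, ...)" with n bound parameters.
    sql = "select 1 where 1 in ({})".format(", ".join("?" for i in range(n)))
    db.execute(sql, [0 for i in range(n)])

def find_limit(lo=1, hi=5_000_000):
    # Bisect for the largest n that executes without "too many SQL variables".
    while lo < hi:
        mid = (lo + hi + 1) // 2
        try:
            tryit(mid)
            lo = mid  # mid worked: the limit is at least mid
        except sqlite3.DatabaseError:
            hi = mid - 1  # mid failed: the limit is below mid
    return lo

limit = find_limit()
print(limit)
```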
779448912 | https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779448912 | https://api.github.com/repos/simonw/sqlite-utils/issues/147 | MDEyOklzc3VlQ29tbWVudDc3OTQ0ODkxMg== | simonw 9599 | 2021-02-15T21:09:50Z | 2021-02-15T21:09:50Z | OWNER | I fiddled around and replaced that line with
43s is definitely better than 56s, but it's still not as big as the ~26.5s to ~3.5s improvement described by @simonwiles at the top of this issue. I wonder what I'm missing here. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
SQLITE_MAX_VARS maybe hard-coded too low 688670158 | |
779446652 | https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779446652 | https://api.github.com/repos/simonw/sqlite-utils/issues/147 | MDEyOklzc3VlQ29tbWVudDc3OTQ0NjY1Mg== | simonw 9599 | 2021-02-15T21:04:19Z | 2021-02-15T21:04:19Z | OWNER | ... but it looks like And |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
SQLITE_MAX_VARS maybe hard-coded too low 688670158 | |
779445423 | https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779445423 | https://api.github.com/repos/simonw/sqlite-utils/issues/147 | MDEyOklzc3VlQ29tbWVudDc3OTQ0NTQyMw== | simonw 9599 | 2021-02-15T21:00:44Z | 2021-02-15T21:01:09Z | OWNER | I tried changing the hard-coded value from 999 to 156_250 and running the insert again. Increased the setting here:

```
(sqlite-utils) sqlite-utils % time sqlite-utils insert fast-ethos.db ethos ../ethos-datasette/ethos.csv --no-headers
[###################################-] 99% 00:00:00
sqlite-utils insert fast-ethos.db ethos ../ethos-datasette/ethos.csv  39.40s user 5.15s system 96% cpu 46.320 total
```

Not as big a difference as I was expecting. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
SQLITE_MAX_VARS maybe hard-coded too low 688670158 | |
779417723 | https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779417723 | https://api.github.com/repos/simonw/sqlite-utils/issues/147 | MDEyOklzc3VlQ29tbWVudDc3OTQxNzcyMw== | simonw 9599 | 2021-02-15T19:44:02Z | 2021-02-15T19:47:00Z | OWNER |
All of these are still slow enough that I'm not comfortable running this search every time the library is imported. Allowing users to opt-in to this as a performance enhancement might be better. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
SQLITE_MAX_VARS maybe hard-coded too low 688670158 | |
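If probing stays opt-in, the probed (or default) variable limit still has to drive the insert batch size so each statement stays under it. A hypothetical sketch of that arithmetic (names are illustrative, not sqlite-utils internals):

```python
# Assumed conservative default, not a value probed from the running SQLite.
SQLITE_MAX_VARS = 999

def batch_size_for(num_columns, max_vars=SQLITE_MAX_VARS):
    # Each row consumes one "?" placeholder per column, so this many
    # rows fit in a single INSERT without exceeding the variable limit.
    return max(1, max_vars // num_columns)

print(batch_size_for(10))    # narrow table: many rows per batch
print(batch_size_for(2000))  # very wide table: one row at a time
```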
779409770 | https://github.com/simonw/sqlite-utils/issues/147#issuecomment-779409770 | https://api.github.com/repos/simonw/sqlite-utils/issues/147 | MDEyOklzc3VlQ29tbWVudDc3OTQwOTc3MA== | simonw 9599 | 2021-02-15T19:23:11Z | 2021-02-15T19:23:11Z | OWNER | On my Mac right now I'm seeing a limit of 500,000:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
SQLITE_MAX_VARS maybe hard-coded too low 688670158 | |
778854808 | https://github.com/simonw/sqlite-utils/issues/227#issuecomment-778854808 | https://api.github.com/repos/simonw/sqlite-utils/issues/227 | MDEyOklzc3VlQ29tbWVudDc3ODg1NDgwOA== | simonw 9599 | 2021-02-14T22:46:54Z | 2021-02-14T22:46:54Z | OWNER | Fix is released in 3.5. |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Error reading csv files with large column data 807174161 | |
778851721 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778851721 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODg1MTcyMQ== | simonw 9599 | 2021-02-14T22:23:46Z | 2021-02-14T22:23:46Z | OWNER | I called this |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
778849394 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778849394 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODg0OTM5NA== | simonw 9599 | 2021-02-14T22:06:53Z | 2021-02-14T22:06:53Z | OWNER | For the moment I think just adding Users can import with that option, then use |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
778844016 | https://github.com/simonw/sqlite-utils/issues/229#issuecomment-778844016 | https://api.github.com/repos/simonw/sqlite-utils/issues/229 | MDEyOklzc3VlQ29tbWVudDc3ODg0NDAxNg== | simonw 9599 | 2021-02-14T21:22:45Z | 2021-02-14T21:22:45Z | OWNER | I'm going to use this pattern from https://stackoverflow.com/a/15063941

```python
import sys
import csv

maxInt = sys.maxsize

while True:
    # decrease the maxInt value by factor 10
    # as long as the OverflowError occurs.
    try:
        csv.field_size_limit(maxInt)
        break
    except OverflowError:
        maxInt = int(maxInt / 10)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hitting `_csv.Error: field larger than field limit (131072)` 807817197 | |
778843503 | https://github.com/simonw/sqlite-utils/issues/229#issuecomment-778843503 | https://api.github.com/repos/simonw/sqlite-utils/issues/229 | MDEyOklzc3VlQ29tbWVudDc3ODg0MzUwMw== | simonw 9599 | 2021-02-14T21:18:51Z | 2021-02-14T21:18:51Z | OWNER | I want to set this to the maximum allowed limit, which seems to be surprisingly hard! That StackOverflow thread is full of ideas for that, many of them involving |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hitting `_csv.Error: field larger than field limit (131072)` 807817197 | |
778843362 | https://github.com/simonw/sqlite-utils/issues/229#issuecomment-778843362 | https://api.github.com/repos/simonw/sqlite-utils/issues/229 | MDEyOklzc3VlQ29tbWVudDc3ODg0MzM2Mg== | simonw 9599 | 2021-02-14T21:17:53Z | 2021-02-14T21:17:53Z | OWNER | Same issue as #227. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hitting `_csv.Error: field larger than field limit (131072)` 807817197 | |
778811746 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778811746 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODgxMTc0Ng== | simonw 9599 | 2021-02-14T17:39:30Z | 2021-02-14T21:16:54Z | OWNER | I'm going to detach this from the #131 column types idea. The three things I need to handle here are:
Here's a potential design that covers the first two:
It doesn't cover the "give me unknown column names" case though. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
778843086 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778843086 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODg0MzA4Ng== | simonw 9599 | 2021-02-14T21:15:43Z | 2021-02-14T21:15:43Z | OWNER | I'm not convinced the
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
778842982 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778842982 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODg0Mjk4Mg== | simonw 9599 | 2021-02-14T21:15:11Z | 2021-02-14T21:15:11Z | OWNER | Implementation tip: I have code that reads the first row and uses it as headers here: https://github.com/simonw/sqlite-utils/blob/8f042ae1fd323995d966a94e8e6df85cc843b938/sqlite_utils/cli.py#L689-L691 So If I want to use |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
778841704 | https://github.com/simonw/sqlite-utils/issues/227#issuecomment-778841704 | https://api.github.com/repos/simonw/sqlite-utils/issues/227 | MDEyOklzc3VlQ29tbWVudDc3ODg0MTcwNA== | simonw 9599 | 2021-02-14T21:05:20Z | 2021-02-14T21:05:20Z | OWNER | This has also been reported in #229. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Error reading csv files with large column data 807174161 | |
778841547 | https://github.com/simonw/sqlite-utils/pull/225#issuecomment-778841547 | https://api.github.com/repos/simonw/sqlite-utils/issues/225 | MDEyOklzc3VlQ29tbWVudDc3ODg0MTU0Nw== | simonw 9599 | 2021-02-14T21:04:13Z | 2021-02-14T21:04:13Z | OWNER | I added a test and fixed this in #234 - thanks for the fix. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
fix for problem in Table.insert_all on search for columns per chunk of rows 797159961 | |
778841278 | https://github.com/simonw/sqlite-utils/issues/234#issuecomment-778841278 | https://api.github.com/repos/simonw/sqlite-utils/issues/234 | MDEyOklzc3VlQ29tbWVudDc3ODg0MTI3OA== | simonw 9599 | 2021-02-14T21:02:11Z | 2021-02-14T21:02:11Z | OWNER | I managed to replicate this in a test:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
.insert_all() fails if subsequent chunks contain additional columns 808046597 | |
778834504 | https://github.com/simonw/sqlite-utils/pull/225#issuecomment-778834504 | https://api.github.com/repos/simonw/sqlite-utils/issues/225 | MDEyOklzc3VlQ29tbWVudDc3ODgzNDUwNA== | simonw 9599 | 2021-02-14T20:09:30Z | 2021-02-14T20:09:30Z | OWNER | Thanks for this. I'm going to try and get the test suite to run in Windows on GitHub Actions. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
fix for problem in Table.insert_all on search for columns per chunk of rows 797159961 | |
778829456 | https://github.com/simonw/sqlite-utils/issues/231#issuecomment-778829456 | https://api.github.com/repos/simonw/sqlite-utils/issues/231 | MDEyOklzc3VlQ29tbWVudDc3ODgyOTQ1Ng== | simonw 9599 | 2021-02-14T19:37:52Z | 2021-02-14T19:37:52Z | OWNER | I'm going to add
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
limit=X, offset=Y parameters for more Python methods 808028757 | |
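A sketch of what `limit=`/`offset=` keyword arguments could look like when appended to a generated query. The `rows_where` function here is an illustration, not sqlite-utils' actual implementation; note that SQLite rejects `OFFSET` unless a `LIMIT` clause is present:

```python
import sqlite3

# Hypothetical sketch: optional limit/offset appended to a generated query.
def rows_where(db, table, where=None, limit=None, offset=None):
    sql = "select * from [{}]".format(table)
    if where:
        sql += " where {}".format(where)
    if limit is not None:
        sql += " limit {}".format(int(limit))
    elif offset is not None:
        # SQLite requires LIMIT before OFFSET; LIMIT -1 means "no limit".
        sql += " limit -1"
    if offset is not None:
        sql += " offset {}".format(int(offset))
    return db.execute(sql).fetchall()

db = sqlite3.connect(":memory:")
db.execute("create table t (id integer)")
db.executemany("insert into t values (?)", [(i,) for i in range(10)])
print(rows_where(db, "t", limit=3, offset=4))
```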
778828758 | https://github.com/simonw/sqlite-utils/issues/231#issuecomment-778828758 | https://api.github.com/repos/simonw/sqlite-utils/issues/231 | MDEyOklzc3VlQ29tbWVudDc3ODgyODc1OA== | simonw 9599 | 2021-02-14T19:33:14Z | 2021-02-14T19:33:14Z | OWNER | The |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
limit=X, offset=Y parameters for more Python methods 808028757 | |
778828495 | https://github.com/simonw/sqlite-utils/pull/224#issuecomment-778828495 | https://api.github.com/repos/simonw/sqlite-utils/issues/224 | MDEyOklzc3VlQ29tbWVudDc3ODgyODQ5NQ== | simonw 9599 | 2021-02-14T19:31:06Z | 2021-02-14T19:31:06Z | OWNER | I'm going to add a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add fts offset docs. 792297010 | |
778827570 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778827570 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgyNzU3MA== | simonw 9599 | 2021-02-14T19:24:20Z | 2021-02-14T19:24:20Z | OWNER | Here's the implementation in Python: https://github.com/python/cpython/blob/63298930fb531ba2bb4f23bc3b915dbf1e17e9e1/Lib/csv.py#L204-L225 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
778824361 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778824361 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgyNDM2MQ== | simonw 9599 | 2021-02-14T18:59:22Z | 2021-02-14T18:59:22Z | OWNER | I think I've got it. I can use
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
778821403 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778821403 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgyMTQwMw== | simonw 9599 | 2021-02-14T18:38:16Z | 2021-02-14T18:38:16Z | OWNER | There are two code paths here that matter:
I'm a bit stuck on the second one. Ideally I could use something like |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
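The `itertools.chain` idea mentioned above can be sketched like this: read a sample from a possibly non-seekable text stream, sniff the dialect, then stitch the consumed sample back in front of the remainder. This is an illustration, not sqlite-utils' actual implementation; `reader_with_sniff` and its seam handling are invented for the sketch.

```python
import csv
import io
import itertools


def reader_with_sniff(stream, sample_size=2048):
    # Sketch: sniff the dialect from a sample of a possibly non-seekable
    # stream, then use itertools.chain to "put the sample back".
    sample = stream.read(sample_size)
    dialect = csv.Sniffer().sniff(sample)
    head, sep, tail = sample.rpartition("\n")
    restored = itertools.chain(
        io.StringIO(head + sep),  # complete lines from the sample
        # Re-join a partial trailing line with the rest of its record so a
        # row spanning the sample boundary is not split in two:
        [tail + stream.readline()] if tail else [],
        stream,  # the untouched remainder of the stream
    )
    return csv.reader(restored, dialect)
```
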
778818639 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778818639 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgxODYzOQ== | simonw 9599 | 2021-02-14T18:22:38Z | 2021-02-14T18:22:38Z | OWNER | Maybe I shouldn't be using |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
778817494 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778817494 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgxNzQ5NA== | simonw 9599 | 2021-02-14T18:16:06Z | 2021-02-14T18:16:06Z | OWNER | Types involved:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
778816333 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778816333 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgxNjMzMw== | simonw 9599 | 2021-02-14T18:08:44Z | 2021-02-14T18:08:44Z | OWNER | No, you can't |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
778815740 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778815740 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgxNTc0MA== | simonw 9599 | 2021-02-14T18:05:03Z | 2021-02-14T18:05:03Z | OWNER | The challenge here is how to read the first 2048 bytes and then reset the incoming file. The Python docs example looks like this:
The challenge is going to be having the If |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
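The stdlib pattern referenced above (from the `csv` module documentation) is roughly the following, shown here against a throwaway file - the `seek(0)` step is exactly what a non-seekable stdin cannot do:

```python
import csv

# Create a small demo file, then apply the stdlib sniffing pattern:
# sniff a dialect from the first chunk, seek back to the start, read.
with open("example.csv", "w", newline="") as fp:
    fp.write("id\tname\n1\tCleo\n2\tPancakes\n")

with open("example.csv", newline="") as csvfile:
    dialect = csv.Sniffer().sniff(csvfile.read(2048))
    csvfile.seek(0)
    rows = list(csv.reader(csvfile, dialect))
```
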
778812684 | https://github.com/simonw/sqlite-utils/issues/230#issuecomment-778812684 | https://api.github.com/repos/simonw/sqlite-utils/issues/230 | MDEyOklzc3VlQ29tbWVudDc3ODgxMjY4NA== | simonw 9599 | 2021-02-14T17:45:16Z | 2021-02-14T17:45:16Z | OWNER | Running this could take any CSV (or TSV) file and automatically detect the delimiter. If no header row is detected it could add
(Using This could be called |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--sniff option for sniffing delimiters 808008305 | |
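A minimal sketch of that flow with the stdlib `csv` module: sniff the delimiter from a sample and guess whether a header row is present (here `"id"` sitting above numeric values votes for a header):

```python
import csv

sample = "id\tname\n1\tCleo\n2\tPancakes\n"
sniffer = csv.Sniffer()

# Detect the delimiter (and quoting) from the sample...
dialect = sniffer.sniff(sample)

# ...and guess whether the first row is a header row.
has_header = sniffer.has_header(sample)
```
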
778812050 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778812050 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODgxMjA1MA== | simonw 9599 | 2021-02-14T17:41:30Z | 2021-02-14T17:41:30Z | OWNER | I just spotted that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
778811934 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778811934 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODgxMTkzNA== | simonw 9599 | 2021-02-14T17:40:48Z | 2021-02-14T17:40:48Z | OWNER | Another pattern that might be useful is to generate a header that is just "unknown1,unknown2,unknown3" for each of the columns in the rest of the file. This makes it easy to e.g. facet-explore within Datasette to figure out the correct names, then use I needed to do that for the https://bl.iro.bl.uk/work/ns/3037474a-761c-456d-a00c-9ef3c6773f4c example. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
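The placeholder-header pattern described above can be sketched like so - `placeholder_reader` is an invented helper, not a sqlite-utils API:

```python
import csv
import io


def placeholder_reader(stream):
    # Sketch: synthesize "unknown1", "unknown2", ... column names for a
    # CSV with no header row, so the data can be explored in Datasette
    # and the real names applied later.
    rows = csv.reader(stream)
    first = next(rows)
    fieldnames = ["unknown{}".format(i + 1) for i in range(len(first))]
    yield dict(zip(fieldnames, first))
    for row in rows:
        yield dict(zip(fieldnames, row))
```
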
778511347 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778511347 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODUxMTM0Nw== | simonw 9599 | 2021-02-12T23:27:50Z | 2021-02-12T23:27:50Z | OWNER | For the moment, a workaround can be to
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
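The exact workaround command was truncated in this export; one plausible sketch is to prepend an invented header row yourself before importing (the column names and table name here are made up):

```shell
# Prepend a synthetic header row, then import as usual.
printf '1,Cleo\n2,Pancakes\n' > data.csv
(echo 'unknown1,unknown2'; cat data.csv) > with_header.csv
# sqlite-utils insert data.db mytable with_header.csv --csv
```
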
778510528 | https://github.com/simonw/sqlite-utils/issues/131#issuecomment-778510528 | https://api.github.com/repos/simonw/sqlite-utils/issues/131 | MDEyOklzc3VlQ29tbWVudDc3ODUxMDUyOA== | simonw 9599 | 2021-02-12T23:25:06Z | 2021-02-12T23:25:06Z | OWNER | If |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
sqlite-utils insert: options for column types 675753042 | |
778508887 | https://github.com/simonw/sqlite-utils/issues/131#issuecomment-778508887 | https://api.github.com/repos/simonw/sqlite-utils/issues/131 | MDEyOklzc3VlQ29tbWVudDc3ODUwODg4Nw== | simonw 9599 | 2021-02-12T23:20:11Z | 2021-02-12T23:20:11Z | OWNER | Annoyingly Particularly annoying because I attempted to remove the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
sqlite-utils insert: options for column types 675753042 | |
778349672 | https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778349672 | https://api.github.com/repos/simonw/sqlite-utils/issues/228 | MDEyOklzc3VlQ29tbWVudDc3ODM0OTY3Mg== | simonw 9599 | 2021-02-12T18:00:43Z | 2021-02-12T18:00:43Z | OWNER | I could combine this with #131 to allow types to be specified in addition to column names. Probably need an option that means "ignore the existing heading row and use this one instead". |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--no-headers option for CSV and TSV 807437089 | |
778349142 | https://github.com/simonw/sqlite-utils/issues/227#issuecomment-778349142 | https://api.github.com/repos/simonw/sqlite-utils/issues/227 | MDEyOklzc3VlQ29tbWVudDc3ODM0OTE0Mg== | simonw 9599 | 2021-02-12T17:59:35Z | 2021-02-12T17:59:35Z | OWNER | It looks like I can at least bump this size limit up to the maximum allowed by Python - I'll take a look at that. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Error reading csv files with large column data 807174161 | |
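Bumping the limit to the maximum allowed by Python usually looks like this - `csv.field_size_limit()` must fit in a C long, so `sys.maxsize` can raise `OverflowError` (notably on Windows) and needs a retry loop:

```python
import csv
import sys

# Raise the csv module's field size limit as high as the platform allows,
# halving until a value is accepted.
limit = sys.maxsize
while True:
    try:
        csv.field_size_limit(limit)
        break
    except OverflowError:
        limit //= 2
```
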
777901052 | https://github.com/simonw/datasette/issues/1221#issuecomment-777901052 | https://api.github.com/repos/simonw/datasette/issues/1221 | MDEyOklzc3VlQ29tbWVudDc3NzkwMTA1Mg== | simonw 9599 | 2021-02-12T01:09:54Z | 2021-02-12T01:09:54Z | OWNER | I also tested this manually. I generated certificate files like so:
This created the certificate files. Then I started Datasette like this:

And exercised it using `curl`:

Note that without the extra option, `curl` rejects the self-signed certificate:

```
/tmp % curl 'https://localhost:8001/_memory.json'
curl: (60) SSL certificate problem: Invalid certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support SSL/TLS directly 806849424 | |
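The certificate-generation commands did not survive in this export; a comparable recipe using `openssl` (the original may have used a different tool), followed by the `--ssl-keyfile`/`--ssl-certfile` options this issue added:

```shell
# Generate a throwaway self-signed certificate for localhost:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=localhost' -keyout key.pem -out cert.pem
# Then serve over TLS and exercise it, skipping verification with -k:
# datasette --ssl-keyfile=key.pem --ssl-certfile=cert.pem
# curl -k 'https://localhost:8001/_memory.json'
```
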
777887190 | https://github.com/simonw/datasette/issues/1221#issuecomment-777887190 | https://api.github.com/repos/simonw/datasette/issues/1221 | MDEyOklzc3VlQ29tbWVudDc3Nzg4NzE5MA== | simonw 9599 | 2021-02-12T00:29:18Z | 2021-02-12T00:29:18Z | OWNER | I can use this recipe to start a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support SSL/TLS directly 806849424 | |
777883452 | https://github.com/simonw/datasette/issues/1221#issuecomment-777883452 | https://api.github.com/repos/simonw/datasette/issues/1221 | MDEyOklzc3VlQ29tbWVudDc3Nzg4MzQ1Mg== | simonw 9599 | 2021-02-12T00:19:30Z | 2021-02-12T00:19:40Z | OWNER | Uvicorn supports these options: https://www.uvicorn.org/#command-line-options ``` --ssl-keyfile TEXT SSL key file --ssl-certfile TEXT SSL certificate file --ssl-keyfile-password TEXT SSL keyfile password --ssl-version INTEGER SSL version to use (see stdlib ssl module's) [default: 2] --ssl-cert-reqs INTEGER Whether client certificate is required (see stdlib ssl module's) [default: 0] --ssl-ca-certs TEXT CA certificates file
--ssl-ciphers TEXT Ciphers to use (see stdlib ssl module's)
[default: TLSv1]
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support SSL/TLS directly 806849424 | |
777178728 | https://github.com/simonw/datasette/issues/1200#issuecomment-777178728 | https://api.github.com/repos/simonw/datasette/issues/1200 | MDEyOklzc3VlQ29tbWVudDc3NzE3ODcyOA== | simonw 9599 | 2021-02-11T03:13:59Z | 2021-02-11T03:13:59Z | OWNER | I came up with the need for this while playing with this tool: https://calands.datasettes.com/calands?sql=select%0D%0A++AsGeoJSON(geometry)%2C+*%0D%0Afrom%0D%0A++CPAD_2020a_SuperUnits%0D%0Awhere%0D%0A++PARK_NAME+like+'%25mini%25'+and%0D%0A++Intersects(GeomFromGeoJSON(%3Afreedraw)%2C+geometry)+%3D+1%0D%0A++and+CPAD_2020a_SuperUnits.rowid+in+(%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++SpatialIndex%0D%0A++++where%0D%0A++++++f_table_name+%3D+'CPAD_2020a_SuperUnits'%0D%0A++++++and+search_frame+%3D+GeomFromGeoJSON(%3Afreedraw)%0D%0A++)&freedraw={"type"%3A"MultiPolygon"%2C"coordinates"%3A[[[[-122.42202758789064%2C37.82280243352759]%2C[-122.39868164062501%2C37.823887203271454]%2C[-122.38220214843751%2C37.81846319511331]%2C[-122.35061645507814%2C37.77071473849611]%2C[-122.34924316406251%2C37.74465712069939]%2C[-122.37258911132814%2C37.703380457832374]%2C[-122.39044189453125%2C37.690340943717715]%2C[-122.41241455078126%2C37.680559803205135]%2C[-122.44262695312501%2C37.67295135774715]%2C[-122.47283935546876%2C37.67295135774715]%2C[-122.52502441406251%2C37.68382032669382]%2C[-122.53463745117189%2C37.6892542140253]%2C[-122.54699707031251%2C37.690340943717715]%2C[-122.55798339843751%2C37.72945260537781]%2C[-122.54287719726564%2C37.77831314799672]%2C[-122.49893188476564%2C37.81303878836991]%2C[-122.46185302734376%2C37.82822612280363]%2C[-122.42889404296876%2C37.82822612280363]%2C[-122.42202758789064%2C37.82280243352759]]]]} - before I fixed https://github.com/simonw/datasette-leaflet-geojson/issues/16 it was loading a LOT of maps, which felt bad. I wanted to be able to link people to that page with a hard limit on the number of rows displayed on that page. 
It's mainly to guard against unexpected behaviour from limit-less queries though. It's not a very high priority feature! |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
?_size=10 option for the arbitrary query page would be useful 792890765 | |
775442039 | https://github.com/simonw/datasette/issues/1219#issuecomment-775442039 | https://api.github.com/repos/simonw/datasette/issues/1219 | MDEyOklzc3VlQ29tbWVudDc3NTQ0MjAzOQ== | simonw 9599 | 2021-02-08T20:39:52Z | 2021-02-08T22:13:00Z | OWNER | This comment helped me find a pattern for running Scalene against the Datasette test suite: https://github.com/emeryberger/scalene/issues/70#issuecomment-755245858
Current thread 0x0000000110c1edc0 (most recent call first): File "/Users/simon/Dropbox/Development/datasette/datasette/utils/init.py", line 553 in detect_json1 File "/Users/simon/Dropbox/Development/datasette/datasette/filters.py", line 168 in Filters File "/Users/simon/Dropbox/Development/datasette/datasette/filters.py", line 94 in <module> File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 783 in exec_module File "<frozen importlib._bootstrap>", line 671 in _load_unlocked File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 991 in _find_and_load File "/Users/simon/Dropbox/Development/datasette/datasette/views/table.py", line 27 in <module> File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 783 in exec_module File "<frozen importlib._bootstrap>", line 671 in _load_unlocked File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 991 in _find_and_load File "/Users/simon/Dropbox/Development/datasette/datasette/app.py", line 42 in <module> File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 783 in exec_module File "<frozen importlib._bootstrap>", line 671 in _load_unlocked File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 991 in _find_and_load File "/Users/simon/Dropbox/Development/datasette/tests/test_api.py", line 1 in <module> File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 170 in exec_module File "<frozen importlib._bootstrap>", line 671 in _load_unlocked File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 
991 in _find_and_load File "<frozen importlib._bootstrap>", line 1014 in _gcd_import File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/importlib/init.py", line 127 in import_module File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/pathlib.py", line 520 in import_path File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/python.py", line 552 in _importtestmodule File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/python.py", line 484 in _getobj File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/python.py", line 288 in obj File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/python.py", line 500 in _inject_setup_module_fixture File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/python.py", line 487 in collect File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/runner.py", line 324 in <lambda> File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/runner.py", line 294 in from_call File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/runner.py", line 324 in pytest_make_collect_report File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda> File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in call File 
"/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/runner.py", line 441 in collect_one_node File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 768 in genitems File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 771 in genitems File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 568 in _perform_collect File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 516 in perform_collect File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 306 in pytest_collection File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda> File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in call File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 295 in _main File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 240 in wrap_session File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/main.py", line 289 in pytest_cmdline_main File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda> 
File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in call File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/_pytest/config/init.py", line 157 in main File "run_tests.py", line 3 in <module> File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/scalene/scalene_profiler.py", line 1525 in main File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/scalene/main.py", line 7 in main File "/Users/simon/.local/share/virtualenvs/datasette-AWNrQs95/lib/python3.8/site-packages/scalene/main.py", line 14 in <module> File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87 in _run_code File "/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 194 in _run_module_as_main Scalene error: received signal SIGSEGV ``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Try profiling Datasette using scalene 803929694 | |
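The pattern visible in the traceback above: wrap pytest in a tiny script (`run_tests.py`, as seen in the stack frames), then hand that script to Scalene. Scalene must be installed for the final step, which is shown commented out:

```shell
# Write a minimal wrapper script that invokes pytest:
cat > run_tests.py <<'EOF'
import pytest

pytest.main()
EOF
# Then profile the whole test run under Scalene:
# scalene run_tests.py
```
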
775497449 | https://github.com/simonw/datasette/issues/1219#issuecomment-775497449 | https://api.github.com/repos/simonw/datasette/issues/1219 | MDEyOklzc3VlQ29tbWVudDc3NTQ5NzQ0OQ== | simonw 9599 | 2021-02-08T22:11:34Z | 2021-02-08T22:11:34Z | OWNER | https://github.com/emeryberger/scalene/issues/110 reports a "received signal SIGSEGV" error that was fixed by upgrading to the latest Scalene version, but I'm running that already. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Try profiling Datasette using scalene 803929694 | |
774373829 | https://github.com/simonw/sqlite-utils/issues/223#issuecomment-774373829 | https://api.github.com/repos/simonw/sqlite-utils/issues/223 | MDEyOklzc3VlQ29tbWVudDc3NDM3MzgyOQ== | simonw 9599 | 2021-02-06T01:39:47Z | 2021-02-06T01:39:47Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
--delimiter option for CSV import 788527932 | ||
772796111 | https://github.com/simonw/datasette/issues/1216#issuecomment-772796111 | https://api.github.com/repos/simonw/datasette/issues/1216 | MDEyOklzc3VlQ29tbWVudDc3Mjc5NjExMQ== | simonw 9599 | 2021-02-03T20:20:48Z | 2021-02-03T20:20:48Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
/-/databases should reflect connection order, not alphabetical order 800669347 | ||
772001787 | https://github.com/simonw/datasette/issues/1214#issuecomment-772001787 | https://api.github.com/repos/simonw/datasette/issues/1214 | MDEyOklzc3VlQ29tbWVudDc3MjAwMTc4Nw== | simonw 9599 | 2021-02-02T21:28:53Z | 2021-02-02T21:28:53Z | OWNER | Fix is now live on https://latest.datasette.io/fixtures/searchable?_search=terry - clearing "terry" and re-submitting the form now works as expected. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Re-submitting filter form duplicates _x querystring arguments 799693777 | |
771992628 | https://github.com/simonw/datasette/issues/1214#issuecomment-771992628 | https://api.github.com/repos/simonw/datasette/issues/1214 | MDEyOklzc3VlQ29tbWVudDc3MTk5MjYyOA== | simonw 9599 | 2021-02-02T21:15:18Z | 2021-02-02T21:15:18Z | OWNER | The cause of this bug is form fields which begin with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Re-submitting filter form duplicates _x querystring arguments 799693777 | |
771992025 | https://github.com/simonw/datasette/issues/1214#issuecomment-771992025 | https://api.github.com/repos/simonw/datasette/issues/1214 | MDEyOklzc3VlQ29tbWVudDc3MTk5MjAyNQ== | simonw 9599 | 2021-02-02T21:14:16Z | 2021-02-02T21:14:16Z | OWNER | As a result, navigating to https://github-to-sqlite.dogsheep.net/github/labels?_search=help and clearing out the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Re-submitting filter form duplicates _x querystring arguments 799693777 | |
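The shape of the fix can be sketched as follows - names here are invented for illustration and this is not Datasette's actual code: when rendering the filter form, only carry over querystring pairs that the form does not already submit through visible fields, otherwise re-submitting duplicates arguments like `?_search=`.

```python
def hidden_form_fields(querystring_pairs, visible_fields):
    # Keep only querystring arguments not already represented by a
    # visible form field, so re-submitting the form cannot duplicate them.
    return [
        (key, value)
        for key, value in querystring_pairs
        if key not in visible_fields
    ]
```
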
771976561 | https://github.com/simonw/datasette/issues/1212#issuecomment-771976561 | https://api.github.com/repos/simonw/datasette/issues/1212 | MDEyOklzc3VlQ29tbWVudDc3MTk3NjU2MQ== | simonw 9599 | 2021-02-02T20:53:27Z | 2021-02-02T20:53:27Z | OWNER | It would be great if we could get |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Tests are very slow. 797651831 | |
771975941 | https://github.com/simonw/datasette/issues/1212#issuecomment-771975941 | https://api.github.com/repos/simonw/datasette/issues/1212 | MDEyOklzc3VlQ29tbWVudDc3MTk3NTk0MQ== | simonw 9599 | 2021-02-02T20:52:36Z | 2021-02-02T20:52:36Z | OWNER | 37 minutes, wow! They're a little slow for me (4-5 minutes perhaps) but not nearly that bad. Thanks for running that profile. I think you're right: figuring out how to use more session scopes would definitely help. The Note that I'd be delighted if you explored this issue further, especially the option of using |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Tests are very slow. 797651831 | |
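The session-scope idea above looks roughly like this in pytest - `make_app_client` is a stand-in for whatever expensive setup the suite does, not a real Datasette helper:

```python
import pytest


# Expensive machinery (e.g. building the fixtures database) runs once per
# test session instead of once per test function.
@pytest.fixture(scope="session")
def app_client():
    client = make_app_client()  # hypothetical expensive constructor
    yield client
```
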
771968675 | https://github.com/simonw/datasette/issues/1213#issuecomment-771968675 | https://api.github.com/repos/simonw/datasette/issues/1213 | MDEyOklzc3VlQ29tbWVudDc3MTk2ODY3NQ== | simonw 9599 | 2021-02-02T20:41:55Z | 2021-02-02T20:41:55Z | OWNER | So maybe I could a special response header which ASGI middleware can pick up that means "Don't attempt to gzip this, just stream it through". |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
gzip support for HTML (and JSON) responses 799663959 | |
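A sketch of that marker-header idea as ASGI middleware - the `x-no-gzip` header name is invented for illustration, and real middleware would also handle pre-existing `content-length` headers:

```python
import asyncio
import gzip


def gzip_middleware(app):
    # Buffer and gzip complete responses, but stream a response through
    # untouched when the app set the (hypothetical) x-no-gzip header.
    async def wrapped(scope, receive, send):
        if scope["type"] != "http":
            return await app(scope, receive, send)
        start = None
        chunks = []
        passthrough = False

        async def buffered_send(message):
            nonlocal start, passthrough
            if message["type"] == "http.response.start":
                headers = dict(message.get("headers") or [])
                if b"x-no-gzip" in headers:
                    passthrough = True  # stream through untouched
                    await send(message)
                else:
                    start = message  # hold back until the body is complete
            elif passthrough:
                await send(message)
            else:
                chunks.append(message.get("body", b""))
                if not message.get("more_body"):
                    body = gzip.compress(b"".join(chunks))
                    start["headers"] = list(start.get("headers") or []) + [
                        (b"content-encoding", b"gzip"),
                        (b"content-length", str(len(body)).encode()),
                    ]
                    await send(start)
                    await send({"type": "http.response.body", "body": body})

        await app(scope, receive, buffered_send)

    return wrapped
```
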
771968177 | https://github.com/simonw/datasette/issues/1213#issuecomment-771968177 | https://api.github.com/repos/simonw/datasette/issues/1213 | MDEyOklzc3VlQ29tbWVudDc3MTk2ODE3Nw== | simonw 9599 | 2021-02-02T20:41:13Z | 2021-02-02T20:41:13Z | OWNER | Starlette accumulates the full response body in a
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
gzip support for HTML (and JSON) responses 799663959 | |
771965281 | https://github.com/simonw/datasette/issues/1213#issuecomment-771965281 | https://api.github.com/repos/simonw/datasette/issues/1213 | MDEyOklzc3VlQ29tbWVudDc3MTk2NTI4MQ== | simonw 9599 | 2021-02-02T20:37:08Z | 2021-02-02T20:39:24Z | OWNER | Starlette's gzip middleware implementation is here: https://github.com/encode/starlette/blob/0.14.2/starlette/middleware/gzip.py |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
gzip support for HTML (and JSON) responses 799663959 | |
769534187 | https://github.com/simonw/datasette/issues/1207#issuecomment-769534187 | https://api.github.com/repos/simonw/datasette/issues/1207 | MDEyOklzc3VlQ29tbWVudDc2OTUzNDE4Nw== | simonw 9599 | 2021-01-29T02:37:19Z | 2021-01-29T02:37:19Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Document the Datasette(..., pdb=True) testing pattern 793881756 | ||
769455370 | https://github.com/simonw/datasette/issues/1209#issuecomment-769455370 | https://api.github.com/repos/simonw/datasette/issues/1209 | MDEyOklzc3VlQ29tbWVudDc2OTQ1NTM3MA== | simonw 9599 | 2021-01-28T23:00:21Z | 2021-01-28T23:00:21Z | OWNER | Good catch on the workaround here. The root problem is that Is this a bug? I think it is - because the documented behaviour on https://docs.datasette.io/en/stable/internals.html#get-database-name is this:
Since the new behaviour differs from what was in the documentation I'm going to treat this as a bug and fix it. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
v0.54 500 error from sql query in custom template; code worked in v0.53; found a workaround 795367402 | |
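The documented contract can be sketched like this - a mock, not Datasette's actual implementation: `get_database()` with no argument should return the first connected database, skipping the private `_internal` database introduced in 0.54:

```python
class MockDatasette:
    def __init__(self, databases):
        self.databases = databases  # dict preserving connection order

    def get_database(self, name=None):
        # With no name, return the first database that isn't _internal.
        if name is None:
            name = next(key for key in self.databases if key != "_internal")
        return self.databases[name]
```
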
769453074 | https://github.com/simonw/datasette/issues/1205#issuecomment-769453074 | https://api.github.com/repos/simonw/datasette/issues/1205 | MDEyOklzc3VlQ29tbWVudDc2OTQ1MzA3NA== | simonw 9599 | 2021-01-28T22:54:49Z | 2021-01-28T22:55:02Z | OWNER | I also checked that the following works:
Sure enough, it results in the following Datasette homepage - thanks to #509 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Rename /:memory: to /_memory 793027837 | |
769452084 | https://github.com/simonw/datasette/issues/1205#issuecomment-769452084 | https://api.github.com/repos/simonw/datasette/issues/1205 | MDEyOklzc3VlQ29tbWVudDc2OTQ1MjA4NA== | simonw 9599 | 2021-01-28T22:52:23Z | 2021-01-28T22:52:23Z | OWNER | Here are the redirect tests: https://github.com/simonw/datasette/blob/1600d2a3ec3ada1f6fb5b1eb73bdaeccb5f80530/tests/test_api.py#L635-L648 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Rename /:memory: to /_memory 793027837 | |
769442165 | https://github.com/simonw/datasette/issues/1205#issuecomment-769442165 | https://api.github.com/repos/simonw/datasette/issues/1205 | MDEyOklzc3VlQ29tbWVudDc2OTQ0MjE2NQ== | simonw 9599 | 2021-01-28T22:30:16Z | 2021-01-28T22:30:27Z | OWNER | I'm going to do this, with redirects from |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Rename /:memory: to /_memory 793027837 | |
769274591 | https://github.com/simonw/datasette/issues/1210#issuecomment-769274591 | https://api.github.com/repos/simonw/datasette/issues/1210 | MDEyOklzc3VlQ29tbWVudDc2OTI3NDU5MQ== | simonw 9599 | 2021-01-28T18:10:02Z | 2021-01-28T18:10:02Z | OWNER | That definitely sounds like a bug! Can you provide a copy of your |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Immutable Database w/ Canned Queries 796234313 | |
767823684 | https://github.com/simonw/datasette/issues/1208#issuecomment-767823684 | https://api.github.com/repos/simonw/datasette/issues/1208 | MDEyOklzc3VlQ29tbWVudDc2NzgyMzY4NA== | simonw 9599 | 2021-01-26T20:58:51Z | 2021-01-26T20:58:51Z | OWNER | This is a good catch - I've been lazy about this, but you're right that it's an issue that needs cleaning up. Would be very happy to apply a PR, thanks! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper 794554881 | |
767762551 | https://github.com/simonw/datasette/issues/1151#issuecomment-767762551 | https://api.github.com/repos/simonw/datasette/issues/1151 | MDEyOklzc3VlQ29tbWVudDc2Nzc2MjU1MQ== | simonw 9599 | 2021-01-26T19:07:44Z | 2021-01-26T19:07:44Z | OWNER | Mentioned in https://simonwillison.net/2021/Jan/25/datasette/ |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Database class mechanism for cross-connection in-memory databases 770448622 | |
767761155 | https://github.com/simonw/datasette/issues/991#issuecomment-767761155 | https://api.github.com/repos/simonw/datasette/issues/991 | MDEyOklzc3VlQ29tbWVudDc2Nzc2MTE1NQ== | simonw 9599 | 2021-01-26T19:05:21Z | 2021-01-26T19:06:36Z | OWNER | Idea: implement this using the existing table view, with a custom template called |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Redesign application homepage 714377268 | |
766991680 | https://github.com/simonw/datasette/issues/1201#issuecomment-766991680 | https://api.github.com/repos/simonw/datasette/issues/1201 | MDEyOklzc3VlQ29tbWVudDc2Njk5MTY4MA== | simonw 9599 | 2021-01-25T17:42:21Z | 2021-01-25T17:42:21Z | OWNER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Release notes for Datasette 0.54 792904595 | ||
766588371 | https://github.com/simonw/datasette/pull/1206#issuecomment-766588371 | https://api.github.com/repos/simonw/datasette/issues/1206 | MDEyOklzc3VlQ29tbWVudDc2NjU4ODM3MQ== | simonw 9599 | 2021-01-25T06:49:06Z | 2021-01-25T06:49:06Z | OWNER | Last thing to do: write up the annotated version of these release notes, assign it a URL on my blog and link to it from the release notes here so I can publish them simultaneously. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Release 0.54 793086333 | |
766588020 | https://github.com/simonw/datasette/pull/1206#issuecomment-766588020 | https://api.github.com/repos/simonw/datasette/issues/1206 | MDEyOklzc3VlQ29tbWVudDc2NjU4ODAyMA== | simonw 9599 | 2021-01-25T06:48:20Z | 2021-01-25T06:48:20Z | OWNER | Issues to reference in the commit message: #509, #1091, #1150, #1151, #1166, #1167, #1178, #1181, #1182, #1184, #1185, #1186, #1187, #1194, #1198 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Release 0.54 793086333 | |
766586151 | https://github.com/simonw/datasette/issues/1201#issuecomment-766586151 | https://api.github.com/repos/simonw/datasette/issues/1201 | MDEyOklzc3VlQ29tbWVudDc2NjU4NjE1MQ== | simonw 9599 | 2021-01-25T06:44:43Z | 2021-01-25T06:44:43Z | OWNER | OK, release notes are ready to merge from that branch. I'll ship the release in the morning, to give me time to write the accompanying annotated release notes. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Release notes for Datasette 0.54 792904595 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);