(impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/simonw/sqlite-utils/pull/303?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [c7e8d72...4c3bf97](https://codecov.io/gh/simonw/sqlite-utils/pull/303?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957536983,sqlite-utils convert command and db[table].convert(...) method,
https://github.com/simonw/sqlite-utils/issues/304#issuecomment-890704624,https://api.github.com/repos/simonw/sqlite-utils/issues/304,890704624,IC_kwDOCGYnMM41FxLw,9599,simonw,2021-08-02T04:28:42Z,2021-08-02T04:28:42Z,OWNER,For the command-line version this can duplicate the `--param` option to allow named parameters in the where clause: https://github.com/simonw/sqlite-utils/blob/c7e8d72be9fe8fe0811f685a18eebc637662d41b/sqlite_utils/cli.py#L1096-L1102,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957731178,"`table.convert(..., where=)` and `sqlite-utils convert ... --where=`",
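A rough sketch of how the Python-library half of this proposal might look once `where=` and `where_args=` exist; the database, table and column names below are placeholders and the keyword arguments are the proposed (not yet confirmed) spelling:

```python
# Hypothetical usage of the proposed where=/where_args= parameters on convert();
# "my.db", "mytable", "title" and "min_id" are illustrative names only.
import sqlite_utils

db = sqlite_utils.Database("my.db")
db["mytable"].convert(
    "title",
    lambda value: value.upper(),
    where="id > :min_id",
    where_args={"min_id": 20},
)
```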
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890553783,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890553783,IC_kwDOCGYnMM41FMW3,9599,simonw,2021-08-01T16:59:09Z,2021-08-01T16:59:09Z,OWNER,I'm going with `recipes.jsonsplit()` rather than `recipe.jsonsplit()` because the Python module containing the recipes will be called `recipes`. I'll set up a `r.jsonsplit()` shortcut too as a convenience.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890552827,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890552827,IC_kwDOCGYnMM41FMH7,9599,simonw,2021-08-01T16:52:00Z,2021-08-01T16:52:00Z,OWNER,I'll finish the work on this in a PR.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/302#issuecomment-890548009,https://api.github.com/repos/simonw/sqlite-utils/issues/302,890548009,IC_kwDOCGYnMM41FK8p,9599,simonw,2021-08-01T16:18:13Z,2021-08-01T16:18:13Z,OWNER,"Basic API design:
db[table].convert(""headline"", lambda v: v.upper())
You can pass a list of columns instead of a single column name string to apply it to multiple columns.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957529248,Python library version of `sqlite-utils convert`,
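A minimal sketch of that proposed API, assuming `convert()` accepts either a single column name or a list of column names; the table and rows here are made up for illustration:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["news"].insert({"headline": "hello world", "summary": "  example row  "})

# Single column
db["news"].convert("headline", lambda value: value.upper())

# Several columns sharing one callable
db["news"].convert(["headline", "summary"], lambda value: value.strip())

print(list(db["news"].rows))
```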
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890448623,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890448623,IC_kwDOCGYnMM41Eyrv,9599,simonw,2021-08-01T04:33:30Z,2021-08-01T04:33:30Z,OWNER,"I've started an implementation in the `convert` branch - no documentation yet, and I've not implemented the recipes.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890448119,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890448119,IC_kwDOCGYnMM41Eyj3,9599,simonw,2021-08-01T04:28:05Z,2021-08-01T04:30:28Z,OWNER,"In which case I think `--code` should be a positional argument instead:
```
sqlite-utils convert mydb.db mytable col 'recipe.parsedatetime(value, dayfirst=True)'
sqlite-utils convert mydb.db mytable col 'recipe.jsonsplit(value, delimiter="":"")'
sqlite-utils convert mydb.db mytable col 'recipe.jsonsplit(value, delimiter="":"")'
sqlite-utils convert mydb.db mytable col '{""lower"": value.lower(), ""upper"": value.upper()}' --multi
```
One problem with this: we already accept one or more columns. I think that's OK though since the code is now a required argument, so it means we have to treat everything between the table and the final code argument as a column.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890447102,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890447102,IC_kwDOCGYnMM41EyT-,9599,simonw,2021-08-01T04:20:18Z,2021-08-01T04:29:26Z,OWNER,"I could stick them in a `recipe` namespace so you do this:
```
sqlite-utils convert mydb.db mytable col --code 'recipe.parsedatetime(value, dayfirst=True)'
sqlite-utils convert mydb.db mytable col --code 'recipe.jsonsplit(value, delimiter="":"")'
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890448190,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890448190,IC_kwDOCGYnMM41Eyk-,9599,simonw,2021-08-01T04:28:49Z,2021-08-01T04:28:49Z,OWNER,"Would make sense to accept code from standard input too:
echo 'value.upper()' | sqlite-utils convert my.db mytable col -","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446808,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446808,IC_kwDOCGYnMM41EyPY,9599,simonw,2021-08-01T04:18:18Z,2021-08-01T04:28:18Z,OWNER,"Or.... how about making the `parsedate()` and `parsedatetime()` and `jsonsplit()` functions available within the namespace that is configured for the `--code` block?
Then you could do something like this:
```
sqlite-utils convert mydb.db mytable col --code 'parsedatetime(value, dayfirst=True)'
sqlite-utils convert mydb.db mytable col --code 'jsonsplit(value, delimiter="":"")'
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446943,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446943,IC_kwDOCGYnMM41EyRf,9599,simonw,2021-08-01T04:19:09Z,2021-08-01T04:19:09Z,OWNER,"That's a pretty neat fix, though it's a bit more challenging on the documentation front - maybe the help text for `sqlite-utils convert --help` gets a fair bit longer?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446506,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446506,IC_kwDOCGYnMM41EyKq,9599,simonw,2021-08-01T04:16:36Z,2021-08-01T04:16:36Z,OWNER,"Back to the design board then. One way to handle this would be the long-form:
```
sqlite-utils convert jsonsplit mydb.db mytable mycolumn
sqlite-utils convert parsedatetime mydb.db mytable mycolumn
sqlite-utils convert parsedate mydb.db mytable mycolumn
sqlite-utils convert lambda mydb.db mytable mycolumn --code='str(value).upper()'
```
I like the idea that `lambda` is the default action, but in this form it's required that the first argument (the word after `convert`) be the name of the recipe that is being applied to avoid any potential confusion with the database filename.
An ugly solution would be to make all four of those options available on `sqlite-utils convert` - and return an error if you try and use one of those without specifying the accompanying recipe. That's a bit gross though.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446166,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446166,IC_kwDOCGYnMM41EyFW,9599,simonw,2021-08-01T04:14:26Z,2021-08-01T04:14:26Z,OWNER,"Problem with the `-r/--recipe` idea: the `parsedate` and `parsedatetime` and `jsonsplit` recipes in the current `sqlite-transform` tool all take additional options.
For `sqlite-transform parsedate` and `parsedatetime`:
```python
@click.option(
""--dayfirst"",
is_flag=True,
help=""Assume day comes first in ambiguous dates, e.g. 03/04/05"",
)
@click.option(
""--yearfirst"",
is_flag=True,
help=""Assume year comes first in ambiguous dates, e.g. 03/04/05"",
)
```
For `jsonsplit`:
```python
@click.option(""--delimiter"", default="","", help=""Delimiter to split on"")
@click.option(
""--type"",
type=click.Choice((""int"", ""float"")),
help=""Type to use for values - int or float (defaults to string)"",
)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890443079,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890443079,IC_kwDOCGYnMM41ExVH,9599,simonw,2021-08-01T03:46:43Z,2021-08-01T03:46:43Z,OWNER,"Note that there's already a concept of `conversions` which might be confused with `convert`? https://sqlite-utils.datasette.io/en/stable/python-api.html#converting-column-values-using-sql-functions
```python
db[""example""].insert({
""name"": ""The Bigfoot Discovery Museum""
}, conversions={""name"": ""upper(?)""})
```
I think that's OK though - that's a Python library feature, `sqlite-utils convert` is a CLI thing.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/datasette/issues/1411#issuecomment-890441844,https://api.github.com/repos/simonw/datasette/issues/1411,890441844,IC_kwDOBm6k_c41ExB0,9599,simonw,2021-08-01T03:27:30Z,2021-08-01T03:27:30Z,OWNER,Confirmed: https://latest.datasette.io/fixtures/neighborhood_search?text=cork&_hide_sql=1 no longer exhibits the bug.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957345476,Canned query ?sql= is pointlessly echoed in query string starting from hidden mode,
https://github.com/simonw/datasette/issues/1409#issuecomment-890400425,https://api.github.com/repos/simonw/datasette/issues/1409,890400425,IC_kwDOBm6k_c41Em6p,9599,simonw,2021-07-31T20:25:16Z,2021-07-31T20:26:25Z,OWNER,"If I was prone to over-thinking (which I am) I'd note that `allow_facet` and `allow_download` and `allow_csv_stream` are all settings that do NOT have an equivalent in the newer permissions system, which is itself a little weird and inconsistent.
So maybe there's a future task where I introduce those as both permissions and metadata `""allow_x""` blocks, then rename the settings themselves to be called `default_allow_facet` and `default_allow_download` and `default_allow_csv_stream`.
If I was going to do that I should get it in before Datasette 1.0.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890400121,https://api.github.com/repos/simonw/datasette/issues/1409,890400121,IC_kwDOBm6k_c41Em15,9599,simonw,2021-07-31T20:22:21Z,2021-07-31T20:23:34Z,OWNER,"I think `default_allow_sql` is more consistent with the current naming conventions, because both `allow` and `default` are used as prefixes at the moment but neither of them are ever used as a suffix.
Plus `default_allow_sql off` makes sense to me but `allow_default_sql off` does not - what is ""default SQL""?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890400059,https://api.github.com/repos/simonw/datasette/issues/1409,890400059,IC_kwDOBm6k_c41Em07,9599,simonw,2021-07-31T20:21:51Z,2021-07-31T20:21:51Z,OWNER,"One of these two options:
- `--setting default_allow_sql off`
- `--setting allow_sql_default off`
Existing settings from https://docs.datasette.io/en/0.58.1/settings.html with similar names that I need to be consistent with:
- `default_page_size`
- `allow_facet`
- `default_facet_size`
- `allow_download`
- `default_cache_ttl`
- `default_cache_ttl_hashed`
- `allow_csv_stream`
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890399806,https://api.github.com/repos/simonw/datasette/issues/1409,890399806,IC_kwDOBm6k_c41Emw-,9599,simonw,2021-07-31T20:18:46Z,2021-07-31T20:18:46Z,OWNER,"My rationale for removing it: https://github.com/simonw/datasette/issues/813#issuecomment-640916290
> Naming problem: Datasette already has a config option with this name:
>
> $ datasette serve data.db --config allow_sql:1
>
> https://datasette.readthedocs.io/en/stable/config.html#allow-sql
>
> It's confusing to have two things called `allow_sql` that do slightly different things.
>
> I could retire the `--config allow_sql:0` option entirely, since the new `metadata.json` mechanism can be used to achieve the exact same thing.
>
> I'm going to do that.
This is true. The `""allow_sql""` permissions block in `metadata.json` does indeed have a name that is easily confused with `--setting allow_sql off`.
So I definitely need to pick a different name from the setting. `--setting default_allow_sql off` is a good option here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397753,https://api.github.com/repos/simonw/datasette/issues/1409,890397753,IC_kwDOBm6k_c41EmQ5,9599,simonw,2021-07-31T19:57:56Z,2021-07-31T19:57:56Z,OWNER,"I think the correct solution is for the default permissions logic to take the `allow_sql` setting into account, and to return `False` if that setting is set to `off` AND the current actor fails the `actor_matches_allow` checks.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
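A loose sketch of that logic as a `permission_allowed` check, purely to illustrate the idea described above; the `default_allow_sql` setting name is the proposal from this thread (it would need to be registered as a setting) and this is not the code that shipped:

```python
from datasette import hookimpl
from datasette.utils import actor_matches_allow


@hookimpl
def permission_allowed(datasette, actor, action):
    # Only weigh in on the execute-sql permission; return None to defer otherwise.
    if action != "execute-sql":
        return None
    # Assumes a "default_allow_sql" setting has been registered (the proposed name).
    if datasette.setting("default_allow_sql"):
        return None  # setting is on: fall back to the normal default behaviour
    # Setting is off: only actors matched by an "allow_sql" metadata block get through.
    allow_block = datasette.metadata("allow_sql")
    if allow_block is None:
        return False
    return actor_matches_allow(actor, allow_block)
```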
https://github.com/simonw/datasette/issues/1409#issuecomment-890397652,https://api.github.com/repos/simonw/datasette/issues/1409,890397652,IC_kwDOBm6k_c41EmPU,9599,simonw,2021-07-31T19:56:48Z,2021-07-31T19:56:48Z,OWNER,"The other option would be to use the setting to pick the `default=` argument when calling `self.ds.permission_allowed( request.actor, ""execute-sql"", resource=database, default=True)`.
The problem with that is that there are actually a few different places which perform that check, so changing all of them raises the risk of missing one in the future:
https://github.com/simonw/datasette/blob/a6c8e7fa4cffdeff84e9e755dcff4788fd6154b8/datasette/views/table.py#L436-L444
https://github.com/simonw/datasette/blob/a6c8e7fa4cffdeff84e9e755dcff4788fd6154b8/datasette/views/table.py#L964-L966
https://github.com/simonw/datasette/blob/d23a2671386187f61872b9f6b58e0f80ac61f8fe/datasette/views/database.py#L220-L221
https://github.com/simonw/datasette/blob/d23a2671386187f61872b9f6b58e0f80ac61f8fe/datasette/views/database.py#L343-L345
https://github.com/simonw/datasette/blob/d23a2671386187f61872b9f6b58e0f80ac61f8fe/datasette/views/database.py#L134-L136
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397261,https://api.github.com/repos/simonw/datasette/issues/1409,890397261,IC_kwDOBm6k_c41EmJN,9599,simonw,2021-07-31T19:52:25Z,2021-07-31T19:52:25Z,OWNER,I think I can make this modification by teaching the default permissions code here to take the `allow_sql` setting into account: https://github.com/simonw/datasette/blob/ff253f5242e4b0b5d85d29d38b8461feb5ea997a/datasette/default_permissions.py#L38-L45,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397169,https://api.github.com/repos/simonw/datasette/issues/1409,890397169,IC_kwDOBm6k_c41EmHx,9599,simonw,2021-07-31T19:51:35Z,2021-07-31T19:51:35Z,OWNER,I'm going to stick with `--setting allow_sql off`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397124,https://api.github.com/repos/simonw/datasette/issues/1409,890397124,IC_kwDOBm6k_c41EmHE,9599,simonw,2021-07-31T19:51:10Z,2021-07-31T19:51:10Z,OWNER,"I think I may like `disable_sql` better. Some options:
- `--setting allow_sql off` (consistent with `allow_facet` and `allow_download` and `allow_csv_stream` - all which default to `on` already)
- `--setting disable_sql on`
- `--setting disable_custom_sql on`
The existence of three `allow_*` settings does make a strong argument for staying consistent with that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1408#issuecomment-890390845,https://api.github.com/repos/simonw/datasette/issues/1408,890390845,IC_kwDOBm6k_c41Ekk9,9599,simonw,2021-07-31T19:00:32Z,2021-07-31T19:00:32Z,OWNER,"When I revisit this I can also look at dropping the `@pytest.mark.serial` hack, and maybe the `restore_working_directory()` fixture hack too:
https://github.com/simonw/datasette/blob/ff253f5242e4b0b5d85d29d38b8461feb5ea997a/pytest.ini#L9-L10
https://github.com/simonw/datasette/blob/ff253f5242e4b0b5d85d29d38b8461feb5ea997a/tests/conftest.py#L62-L75","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957302085,"Review places in codebase that use os.chdir(), in particularly relating to tests",
https://github.com/simonw/datasette/issues/1408#issuecomment-890390495,https://api.github.com/repos/simonw/datasette/issues/1408,890390495,IC_kwDOBm6k_c41Ekff,9599,simonw,2021-07-31T18:57:39Z,2021-07-31T18:57:39Z,OWNER,Opening this issue as an optional follow-up to the work I did in #1406.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957302085,"Review places in codebase that use os.chdir(), in particularly relating to tests",
https://github.com/simonw/datasette/issues/1406#issuecomment-890390342,https://api.github.com/repos/simonw/datasette/issues/1406,890390342,IC_kwDOBm6k_c41EkdG,9599,simonw,2021-07-31T18:56:35Z,2021-07-31T18:56:35Z,OWNER,"But... I've lost enough time to this already, and removing `runner.isolated_filesystem()` has the tests passing again. So I'm not going to work on this any more.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-890390198,https://api.github.com/repos/simonw/datasette/issues/1406,890390198,IC_kwDOBm6k_c41Eka2,9599,simonw,2021-07-31T18:55:33Z,2021-07-31T18:55:33Z,OWNER,"To clarify: the core problem here is that an error is thrown any time you call `os.getcwd()` but the directory you are currently in has been deleted.
`runner.isolated_filesystem()` assumes that the current working directory has not been deleted. But the various temporary directory utilities in `pytest` work by creating directories and then deleting them.
Maybe there's a larger problem here that I play a bit fast and loose with `os.chdir()` in both the test suite and in various lines of code in Datasette itself (in particular in the publish commands)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
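A tiny repro of that failure mode, independent of Click or pytest: once the directory you are sitting in has been removed, `os.getcwd()` raises `FileNotFoundError` (on Linux and macOS at least):

```python
import os
import shutil
import tempfile

original = os.getcwd()
doomed = tempfile.mkdtemp()
os.chdir(doomed)
shutil.rmtree(doomed)  # delete the directory we are currently inside
try:
    os.getcwd()
except FileNotFoundError as ex:
    print("os.getcwd() failed:", ex)
finally:
    os.chdir(original)
```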
https://github.com/simonw/datasette/issues/1407#issuecomment-890388656,https://api.github.com/repos/simonw/datasette/issues/1407,890388656,IC_kwDOBm6k_c41EkCw,9599,simonw,2021-07-31T18:42:41Z,2021-07-31T18:42:41Z,OWNER,I'll try `tempfile.gettempdir()` - on macOS it returns something like `'/var/folders/wr/hn3206rs1yzgq3r49bz8nvnh0000gn/T'` which is still long but hopefully not too long.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957298475,OSError: AF_UNIX path too long in ds_unix_domain_socket_server,
https://github.com/simonw/datasette/issues/1407#issuecomment-890388200,https://api.github.com/repos/simonw/datasette/issues/1407,890388200,IC_kwDOBm6k_c41Ej7o,9599,simonw,2021-07-31T18:38:41Z,2021-07-31T18:38:41Z,OWNER,"The `path` variable there looked like this:
`/private/var/folders/wr/hn3206rs1yzgq3r49bz8nvnh0000gn/T/pytest-of-simon/pytest-696/popen-gw0/uds0/datasette.sock`
I think what's happening here is that `pytest-xdist` causes `tmp_path_factory.mktemp(""uds"")` to create significantly longer paths, which in this case exceed the AF_UNIX socket path length limit.
So for this code to work with `pytest-xdist` I need to make sure the random path to `datasette.sock` is shorter.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957298475,OSError: AF_UNIX path too long in ds_unix_domain_socket_server,
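One way the shorter-path idea from the comment above could look as a pytest fixture; the fixture name and layout are assumptions, not the fix that was actually committed:

```python
import pathlib
import tempfile
import uuid

import pytest


@pytest.fixture
def uds_path():
    # Build the socket path under the system temp dir rather than tmp_path_factory,
    # keeping it well under the AF_UNIX limit (~104 bytes on macOS, 108 on Linux).
    path = pathlib.Path(tempfile.gettempdir()) / "ds-{}.sock".format(uuid.uuid4().hex[:8])
    yield path
    if path.exists():
        path.unlink()
```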
https://github.com/simonw/datasette/issues/1406#issuecomment-890259755,https://api.github.com/repos/simonw/datasette/issues/1406,890259755,IC_kwDOBm6k_c41EEkr,9599,simonw,2021-07-31T00:04:54Z,2021-07-31T00:04:54Z,OWNER,"STILL failing. I'm going to try removing all instances of `isolated_filesystem()` in favour of a different pattern using pytest temporary files, then see if I can get that to work without the serial hack. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1398#issuecomment-889599513,https://api.github.com/repos/simonw/datasette/issues/1398,889599513,IC_kwDOBm6k_c41BjYZ,192984,aitoehigie,2021-07-30T03:21:49Z,2021-07-30T03:21:49Z,NONE,Does the library support this now?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",947044667,Documentation on using Datasette as a library,
https://github.com/simonw/datasette/issues/1406#issuecomment-889555977,https://api.github.com/repos/simonw/datasette/issues/1406,889555977,IC_kwDOBm6k_c41BYwJ,9599,simonw,2021-07-30T01:06:57Z,2021-07-30T01:06:57Z,OWNER,"Looking at the source code in Click for `isolated_filesystem()`: https://github.com/pallets/click/blob/9da166957f5848b641231d485467f6140bca2bc0/src/click/testing.py#L450-L468
```python
@contextlib.contextmanager
def isolated_filesystem(
self, temp_dir: t.Optional[t.Union[str, os.PathLike]] = None
) -> t.Iterator[str]:
""""""A context manager that creates a temporary directory and
changes the current working directory to it. This isolates tests
that affect the contents of the CWD to prevent them from
interfering with each other.
:param temp_dir: Create the temporary directory under this
directory. If given, the created directory is not removed
when exiting.
.. versionchanged:: 8.0
Added the ``temp_dir`` parameter.
""""""
cwd = os.getcwd()
t = tempfile.mkdtemp(dir=temp_dir)
os.chdir(t)
```
How about if I pass in that optional `temp_dir` as a temp directory created using the `pytest-xdist` aware pytest mechanisms: https://docs.pytest.org/en/6.2.x/tmpdir.html","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
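That would look something like this in a test; the test body is a placeholder, but `isolated_filesystem(temp_dir=...)` (Click 8.0+) and pytest's `tmp_path` fixture are both real APIs:

```python
from click.testing import CliRunner


def test_something_in_isolated_directory(tmp_path):
    runner = CliRunner()
    # temp_dir= makes Click create its working directory inside pytest's per-test
    # tmp_path, so parallel pytest-xdist workers each get their own location.
    with runner.isolated_filesystem(temp_dir=tmp_path):
        ...  # invoke the CLI under test here
```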
https://github.com/simonw/datasette/issues/1406#issuecomment-889553052,https://api.github.com/repos/simonw/datasette/issues/1406,889553052,IC_kwDOBm6k_c41BYCc,9599,simonw,2021-07-30T00:58:43Z,2021-07-30T00:58:43Z,OWNER,Tests are still failing in the job that calculates coverage.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-889550391,https://api.github.com/repos/simonw/datasette/issues/1406,889550391,IC_kwDOBm6k_c41BXY3,9599,simonw,2021-07-30T00:49:31Z,2021-07-30T00:49:31Z,OWNER,That fixed it. My hunch is that Click's `runner.isolated_filesystem()` mechanism doesn't play well with `pytest-xdist`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-889548536,https://api.github.com/repos/simonw/datasette/issues/1406,889548536,IC_kwDOBm6k_c41BW74,9599,simonw,2021-07-30T00:43:47Z,2021-07-30T00:43:47Z,OWNER,"Still couldn't replicate on my laptop. On a hunch, I'm going to add `@pytest.mark.serial` to every test that uses `runner.isolated_filesystem()`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-889547142,https://api.github.com/repos/simonw/datasette/issues/1406,889547142,IC_kwDOBm6k_c41BWmG,9599,simonw,2021-07-30T00:39:49Z,2021-07-30T00:39:49Z,OWNER,It happens in CI but not on my laptop. I think I need to run the tests on my laptop like this: https://github.com/simonw/datasette/blob/121e10c29c5b412fddf0326939f1fe46c3ad9d4a/.github/workflows/test.yml#L27-L30,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1241#issuecomment-889539227,https://api.github.com/repos/simonw/datasette/issues/1241,889539227,IC_kwDOBm6k_c41BUqb,9599,simonw,2021-07-30T00:15:26Z,2021-07-30T00:15:26Z,OWNER,"One possible treatment:
```html
{% if query.sql and allow_execute_sql %}
View and edit SQL
{% endif %}
Copy and share link
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",814595021,Share button for copying current URL,
https://github.com/simonw/datasette/issues/1405#issuecomment-889525741,https://api.github.com/repos/simonw/datasette/issues/1405,889525741,IC_kwDOBm6k_c41BRXt,9599,simonw,2021-07-29T23:33:30Z,2021-07-29T23:33:30Z,OWNER,New documentation section for `datasette.utils` is here: https://docs.datasette.io/en/latest/internals.html#the-datasette-utils-module,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",955316250,utils.parse_metadata() should be a documented internal function,
https://github.com/simonw/datasette/issues/1405#issuecomment-888694261,https://api.github.com/repos/simonw/datasette/issues/1405,888694261,IC_kwDOBm6k_c40-GX1,9599,simonw,2021-07-28T23:52:21Z,2021-07-28T23:52:21Z,OWNER,Document that it can raise a `BadMetadataError` exception.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",955316250,utils.parse_metadata() should be a documented internal function,
https://github.com/simonw/datasette/issues/1405#issuecomment-888694144,https://api.github.com/repos/simonw/datasette/issues/1405,888694144,IC_kwDOBm6k_c40-GWA,9599,simonw,2021-07-28T23:51:59Z,2021-07-28T23:51:59Z,OWNER,https://github.com/simonw/datasette/blob/eccfeb0871dd4bc27870faf64f80ac68e5b6bc0d/datasette/utils/__init__.py#L918-L926,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",955316250,utils.parse_metadata() should be a documented internal function,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-888075098,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,888075098,IC_kwDODFE5qs407vNa,28565,maxhawkins,2021-07-28T07:18:56Z,2021-07-28T07:18:56Z,NONE,"> I'm not sure why but my most recent import, when displayed in Datasette, looks like this:
>
>
I did some investigation into this issue and made a fix [here](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8ee555c2889a38ff42b95664ee074b4a01a82f06). The problem was that some messages (like gchat logs) don't have a `Message-Id` and we need to use `X-GM-THRID` as the pkey instead.
@simonw While looking into this I found something unexpected about how sqlite_utils handles upserts if the pkey column is `None`. When the pkey is NULL I'd expect the function to either use rowid or throw an exception. Instead, it seems upsert_all creates a row where all columns are NULL instead of using the values provided as parameters.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/simonw/datasette/issues/1404#issuecomment-887095569,https://api.github.com/repos/simonw/datasette/issues/1404,887095569,IC_kwDOBm6k_c404AER,9599,simonw,2021-07-26T23:27:07Z,2021-07-26T23:27:07Z,OWNER,Updated documentation: https://github.com/simonw/datasette/blob/eccfeb0871dd4bc27870faf64f80ac68e5b6bc0d/docs/plugin_hooks.rst#register-routes-datasette,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",953352015,`register_routes()` hook should take `datasette` argument,
https://github.com/simonw/datasette/issues/1402#issuecomment-886969541,https://api.github.com/repos/simonw/datasette/issues/1402,886969541,IC_kwDOBm6k_c403hTF,9599,simonw,2021-07-26T19:31:40Z,2021-07-26T19:31:40Z,OWNER,"Datasette could do a pretty good job of this by default, using `twitter:card` and `og:url` tags - like on https://til.simonwillison.net/jq/extracting-objects-recursively
I could also provide a mechanism to customize these - in particular to add images of some sort.
It feels like something that should tie in to the metadata mechanism.","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",951185411,feature request: social meta tags,
https://github.com/simonw/datasette/issues/1402#issuecomment-886968648,https://api.github.com/repos/simonw/datasette/issues/1402,886968648,IC_kwDOBm6k_c403hFI,9599,simonw,2021-07-26T19:30:14Z,2021-07-26T19:30:14Z,OWNER,"I really like this idea. I was thinking it might make a good plugin, but there's not a great mechanism for plugins to inject extra `<head>` content at the moment - plus this actually feels like a reasonable feature for Datasette core itself.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",951185411,feature request: social meta tags,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/3#issuecomment-886241674,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/3,886241674,IC_kwDODtX3eM400vmK,9599,simonw,2021-07-25T18:41:17Z,2021-07-25T18:41:17Z,MEMBER,Got a TIL out of this: https://til.simonwillison.net/jq/extracting-objects-recursively,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952189173,Use HN algolia endpoint to retrieve trees,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/3#issuecomment-886237834,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/3,886237834,IC_kwDODtX3eM400uqK,9599,simonw,2021-07-25T18:05:32Z,2021-07-25T18:05:32Z,MEMBER,"If you hit the endpoint for a comment that's part of a thread you get that comment and its recursive children: https://hn.algolia.com/api/v1/items/27941552
You can tell that it's not the top-level because the `parent_id` isn't `null`. You can use `story_id` to figure out what the top-level item is.
```json
{
""id"": 27941552,
""created_at"": ""2021-07-24T15:08:39.000Z"",
""created_at_i"": 1627139319,
""type"": ""comment"",
""author"": ""nine_k"",
""title"": null,
""url"": null,
""text"": ""I wish ..."",
""points"": null,
""parent_id"": 27941108,
""story_id"": 27941108
}
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952189173,Use HN algolia endpoint to retrieve trees,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/3#issuecomment-886142671,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/3,886142671,IC_kwDODtX3eM400XbP,9599,simonw,2021-07-25T03:51:05Z,2021-07-25T03:51:05Z,MEMBER,"Prototype:
curl 'https://hn.algolia.com/api/v1/items/27941108' \
| jq '[recurse(.children[]) | del(.children)]' \
| sqlite-utils insert hn.db items - --pk id
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952189173,Use HN algolia endpoint to retrieve trees,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886140431,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886140431,IC_kwDODtX3eM400W4P,9599,simonw,2021-07-25T03:12:57Z,2021-07-25T03:12:57Z,MEMBER,"I'm going to build a general-purpose `hacker-news-to-sqlite search ...` command, where one of the options is to search within the URL.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886136224,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886136224,IC_kwDODtX3eM400V2g,9599,simonw,2021-07-25T02:08:29Z,2021-07-25T02:08:29Z,MEMBER,"Prototype:
curl ""https://hn.algolia.com/api/v1/search_by_date?query=simonwillison.net&restrictSearchableAttributes=url&hitsPerPage=1000"" | \
jq .hits | sqlite-utils insert hn.db items - --pk objectID --alter","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886135922,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886135922,IC_kwDODtX3eM400Vxy,9599,simonw,2021-07-25T02:06:20Z,2021-07-25T02:06:20Z,MEMBER,"https://hn.algolia.com/api/v1/search_by_date?query=simonwillison.net&restrictSearchableAttributes=url looks like it does what I want.
https://hn.algolia.com/api/v1/search_by_date?query=simonwillison.net&restrictSearchableAttributes=url&hitsPerPage=1000 - returns 1000 at once.
Otherwise you have to paginate using `&page=2` etc - up to `nbPages` pages.
https://www.algolia.com/doc/api-reference/api-parameters/hitsPerPage/ says 1000 is the maximum.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
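For reference, the pagination loop described above might look roughly like this (not part of hacker-news-to-sqlite itself; the field names follow the documented Algolia response shape):

```python
import requests


def search_by_url(query):
    page = 0
    while True:
        data = requests.get(
            "https://hn.algolia.com/api/v1/search_by_date",
            params={
                "query": query,
                "restrictSearchableAttributes": "url",
                "hitsPerPage": 1000,
                "page": page,
            },
        ).json()
        yield from data["hits"]
        page += 1
        if page >= data["nbPages"]:
            break
```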
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886135562,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886135562,IC_kwDODtX3eM400VsK,9599,simonw,2021-07-25T02:01:11Z,2021-07-25T02:01:11Z,MEMBER,"That page doesn't have an API but does look easy to scrape.
The other option here is the HN Search API powered by Algolia, documented at https://hn.algolia.com/api","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-886122696,https://api.github.com/repos/simonw/sqlite-utils/issues/251,886122696,IC_kwDOCGYnMM400SjI,9599,simonw,2021-07-24T23:21:32Z,2021-07-24T23:21:32Z,OWNER,"> ```
> sqlite-utils convert jsonsplit mydb.db mytable mycolumn
> sqlite-utils convert parsedatetime mydb.db mytable mycolumn
> sqlite-utils convert parsedate mydb.db mytable mycolumn
> sqlite-utils convert lambda mydb.db mytable mycolumn --code='str(value).upper()'
> ```
This is a bit verbose - and having added `--multi` and `--output` the `lambda` command keeps getting more and more flexible compared to the others.
New idea: ditch the sub-sub-commands and move the `jsonsplit` and `parsedate` recipes to be options of `convert` - maybe like this:
sqlite-utils convert my.db mytable col1 --jsonsplit
or:
sqlite-utils convert my.db mytable col1 --recipe jsonsplit
or `-r jsonsplit` for short.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/299#issuecomment-886117120,https://api.github.com/repos/simonw/sqlite-utils/issues/299,886117120,IC_kwDOCGYnMM400RMA,9599,simonw,2021-07-24T22:12:01Z,2021-07-24T22:12:01Z,OWNER,Documentation here: https://sqlite-utils.datasette.io/en/latest/cli.html#showing-the-schema,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952154468,Ability to see just specific table schemas with `sqlite-utils schema`,
https://github.com/dogsheep/github-to-sqlite/pull/65#issuecomment-885964242,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65,885964242,IC_kwDODFdgUs40zr3S,231498,khimaros,2021-07-23T23:45:35Z,2021-07-23T23:45:35Z,NONE,@simonw is this PR of interest to you?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923270900,basic support for events,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885098025,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885098025,IC_kwDODFE5qs40wYYp,306240,UtahDave,2021-07-22T17:47:50Z,2021-07-22T17:47:50Z,NONE,"Hi @maxhawkins , I'm sorry, I haven't had any time to work on this. I'll have some time tomorrow to test your commits. I think they look great. I'm great with your commits superseding my initial attempt here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885094284,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885094284,IC_kwDODFE5qs40wXeM,28565,maxhawkins,2021-07-22T17:41:32Z,2021-07-22T17:41:32Z,NONE,I added a follow-up commit that deals with emails that don't have a `Date` header: https://github.com/maxhawkins/google-takeout-to-sqlite/commit/4bc70103582c10802c85a523ef1e99a8a2154aa9,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885022230,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885022230,IC_kwDODFE5qs40wF4W,28565,maxhawkins,2021-07-22T15:51:46Z,2021-07-22T15:51:46Z,NONE,One thing I noticed is this importer doesn't save attachments along with the body of the emails. It would be nice if those got stored as blobs in a separate attachments table so attachments can be included while fetching search results.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-884672647,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,884672647,IC_kwDODFE5qs40uwiH,28565,maxhawkins,2021-07-22T05:56:31Z,2021-07-22T14:03:08Z,NONE,"How does this commit look? https://github.com/maxhawkins/google-takeout-to-sqlite/commit/72802a83fee282eb5d02d388567731ba4301050d
It seems that Takeout's mbox format is pretty simple, so we can get away with just splitting the file on lines beginning with `From `. My commit just splits the file every time a line starts with `From ` and uses `email.message_from_bytes` to parse each chunk.
I was able to load a 12GB takeout mbox without the program using more than a couple hundred MB of memory during the import process. It does make us lose the progress bar, but maybe I can add that back in a later commit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
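A stripped-down illustration of that streaming approach (the real implementation is in the linked commit; this sketch simply drops each mbox `From ` envelope line and parses the bytes in between):

```python
import email


def iter_mbox_messages(path):
    buffer = []
    with open(path, "rb") as fp:
        for line in fp:
            if line.startswith(b"From "):
                if buffer:
                    yield email.message_from_bytes(b"".join(buffer))
                buffer = []  # start a new message, discarding the envelope line
                continue
            buffer.append(line)
    if buffer:
        yield email.message_from_bytes(b"".join(buffer))
```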
https://github.com/simonw/datasette/issues/1401#issuecomment-884910320,https://api.github.com/repos/simonw/datasette/issues/1401,884910320,IC_kwDOBm6k_c40vqjw,536941,fgregg,2021-07-22T13:26:01Z,2021-07-22T13:26:01Z,CONTRIBUTOR,"ordered lists didn't work either, btw","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",950664971,unordered list is not rendering bullet points in description_html on database page,
https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-884688833,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,884688833,IC_kwDOD079W840u0fB,10793464,aaronyih1,2021-07-22T06:40:25Z,2021-07-22T06:40:25Z,NONE,The solution here is to upload an image to the bucket first. The error is caused because the code does not properly handle the case when there are no images in the bucket.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload,
https://github.com/simonw/datasette/pull/1296#issuecomment-817403642,https://api.github.com/repos/simonw/datasette/issues/1296,817403642,MDEyOklzc3VlQ29tbWVudDgxNzQwMzY0Mg==,22429695,codecov[bot],2021-04-12T00:29:05Z,2021-07-20T08:52:12Z,NONE,"# [Codecov](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) Report
> Merging [#1296](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (527a056) into [main](https://codecov.io/gh/simonw/datasette/commit/c73af5dd72305f6a01ea94a2c76d52e5e26de38b?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (c73af5d) will **decrease** coverage by `0.11%`.
> The diff coverage is `n/a`.
> :exclamation: Current head 527a056 differs from pull request most recent head 8f00c31. Consider uploading reports for the commit 8f00c31 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/1296/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
```diff
@@ Coverage Diff @@
## main #1296 +/- ##
==========================================
- Coverage 91.62% 91.51% -0.12%
==========================================
Files 34 34
Lines 4371 4255 -116
==========================================
- Hits 4005 3894 -111
+ Misses 366 361 -5
```
| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) | Coverage Δ | |
|---|---|---|
| [datasette/tracer.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3RyYWNlci5weQ==) | `81.60% <0.00%> (-1.35%)` | :arrow_down: |
| [datasette/views/base.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2Jhc2UucHk=) | `95.01% <0.00%> (-0.42%)` | :arrow_down: |
| [datasette/facets.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL2ZhY2V0cy5weQ==) | `89.04% <0.00%> (-0.41%)` | :arrow_down: |
| [datasette/utils/\_\_init\_\_.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3V0aWxzL19faW5pdF9fLnB5) | `94.13% <0.00%> (-0.21%)` | :arrow_down: |
| [datasette/renderer.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3JlbmRlcmVyLnB5) | `94.02% <0.00%> (-0.18%)` | :arrow_down: |
| [datasette/views/database.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2RhdGFiYXNlLnB5) | `97.19% <0.00%> (-0.10%)` | :arrow_down: |
| [datasette/views/table.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL3RhYmxlLnB5) | `95.88% <0.00%> (-0.07%)` | :arrow_down: |
| [datasette/views/index.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2luZGV4LnB5) | `96.36% <0.00%> (-0.07%)` | :arrow_down: |
| [datasette/hookspecs.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL2hvb2tzcGVjcy5weQ==) | `100.00% <0.00%> (ø)` | |
| [datasette/utils/testing.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3V0aWxzL3Rlc3RpbmcucHk=) | `95.38% <0.00%> (ø)` | |
| ... and [5 more](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
> `Δ = absolute (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [c73af5d...8f00c31](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829,Dockerfile: use Ubuntu 20.10 as base,
https://github.com/simonw/datasette/pull/1400#issuecomment-882542519,https://api.github.com/repos/simonw/datasette/issues/1400,882542519,IC_kwDOBm6k_c40moe3,22429695,codecov[bot],2021-07-19T13:20:52Z,2021-07-19T13:20:52Z,NONE,"# [Codecov](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) Report
> Merging [#1400](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (e95c685) into [main](https://codecov.io/gh/simonw/datasette/commit/c73af5dd72305f6a01ea94a2c76d52e5e26de38b?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (c73af5d) will **not change** coverage.
> The diff coverage is `n/a`.
[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/1400/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
```diff
@@ Coverage Diff @@
## main #1400 +/- ##
=======================================
Coverage 91.62% 91.62%
=======================================
Files 34 34
Lines 4371 4371
=======================================
Hits 4005 4005
Misses 366 366
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
> `Δ = absolute (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [c73af5d...e95c685](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",947640902,Bump black from 21.6b0 to 21.7b0,
https://github.com/simonw/datasette/issues/123#issuecomment-882138084,https://api.github.com/repos/simonw/datasette/issues/123,882138084,IC_kwDOBm6k_c40lFvk,9599,simonw,2021-07-19T00:04:31Z,2021-07-19T00:04:31Z,OWNER,"I've been thinking more about this one today too. An extension of this (touched on in #417, Datasette Library) would be to support pointing Datasette at a directory and having it automatically load any CSV files it finds anywhere in that folder or its descendants - either loading them fully, or providing a UI that allows users to select a file to open it in Datasette.
For larger files I think the right thing to do is import them into an on-disk SQLite database, which is limited only by available disk space. For smaller files loading them into an in-memory database should work fine.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats,
https://github.com/simonw/datasette/issues/123#issuecomment-882096402,https://api.github.com/repos/simonw/datasette/issues/123,882096402,IC_kwDOBm6k_c40k7kS,921217,RayBB,2021-07-18T18:07:29Z,2021-07-18T18:07:29Z,NONE,"I also love the idea for this feature and wonder if it could work without having to download the whole database into memory at once if it's a rather large db. Obviously this could be slower but could have many use cases.
My comment is partially inspired by this post about streaming sqlite dbs from github pages or such
https://news.ycombinator.com/item?id=27016630
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats,
https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-882091516,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,882091516,IC_kwDOD079W840k6X8,10793464,aaronyih1,2021-07-18T17:29:39Z,2021-07-18T17:33:02Z,NONE,Same here for US West (N. California) us-west-1. Running on Catalina.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload,
https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052852,https://api.github.com/repos/simonw/sqlite-utils/issues/297,882052852,IC_kwDOCGYnMM40kw70,9599,simonw,2021-07-18T12:59:20Z,2021-07-18T12:59:20Z,OWNER,I'm not too worried about `sqlite-utils memory` because if your data is large enough that you can benefit from this optimization you probably should use a real file as opposed to a disposable memory database when analyzing it.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944846776,Option for importing CSV data using the SQLite .import mechanism,
https://github.com/simonw/datasette/issues/1199#issuecomment-881932880,https://api.github.com/repos/simonw/datasette/issues/1199,881932880,IC_kwDOBm6k_c40kTpQ,9599,simonw,2021-07-17T17:39:17Z,2021-07-17T17:39:17Z,OWNER,"I asked about optimizing performance on the SQLite forum and this came up as a suggestion: https://sqlite.org/forum/forumpost/9a6b9ae8e2048c8b?t=c
I can start by trying this:
PRAGMA mmap_size=268435456;","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",792652391,Experiment with PRAGMA mmap_size=N,
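For quick experimentation the pragma can also be set from a Python shell; `fixtures.db` below is just a placeholder filename:

```python
import sqlite3

conn = sqlite3.connect("fixtures.db")
conn.execute("PRAGMA mmap_size=268435456")  # 256MB
# Reading the pragma back shows the value SQLite actually accepted
print(conn.execute("PRAGMA mmap_size").fetchone())
```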
https://github.com/simonw/datasette/issues/1396#issuecomment-881686662,https://api.github.com/repos/simonw/datasette/issues/1396,881686662,IC_kwDOBm6k_c40jXiG,9599,simonw,2021-07-16T20:02:44Z,2021-07-16T20:02:44Z,OWNER,Confirmed fixed: 0.58.1 was successfully published to Docker Hub in https://github.com/simonw/datasette/runs/3089447346 and the `latest` tag on https://hub.docker.com/r/datasetteproject/datasette/tags was updated.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1231#issuecomment-881677620,https://api.github.com/repos/simonw/datasette/issues/1231,881677620,IC_kwDOBm6k_c40jVU0,9599,simonw,2021-07-16T19:44:12Z,2021-07-16T19:44:12Z,OWNER,"That fixed the race condition in the `datasette-graphql` tests, which is the only place that I've been able to successfully replicate this. I'm going to land this change.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881674857,https://api.github.com/repos/simonw/datasette/issues/1231,881674857,IC_kwDOBm6k_c40jUpp,9599,simonw,2021-07-16T19:38:39Z,2021-07-16T19:38:39Z,OWNER,I can't replicate the race condition locally with or without this patch. I'm going to push the commit and then test the CI run from `datasette-graphql` that was failing against it.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881671706,https://api.github.com/repos/simonw/datasette/issues/1231,881671706,IC_kwDOBm6k_c40jT4a,9599,simonw,2021-07-16T19:32:05Z,2021-07-16T19:32:05Z,OWNER,The test suite passes with that change.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881668759,https://api.github.com/repos/simonw/datasette/issues/1231,881668759,IC_kwDOBm6k_c40jTKX,9599,simonw,2021-07-16T19:27:46Z,2021-07-16T19:27:46Z,OWNER,"Second attempt at this:
```diff
diff --git a/datasette/app.py b/datasette/app.py
index 5976d8b..5f348cb 100644
--- a/datasette/app.py
+++ b/datasette/app.py
@@ -224,6 +224,7 @@ class Datasette:
self.inspect_data = inspect_data
self.immutables = set(immutables or [])
self.databases = collections.OrderedDict()
+ self._refresh_schemas_lock = asyncio.Lock()
self.crossdb = crossdb
if memory or crossdb or not self.files:
self.add_database(Database(self, is_memory=True), name=""_memory"")
@@ -332,6 +333,12 @@ class Datasette:
self.client = DatasetteClient(self)
async def refresh_schemas(self):
+ if self._refresh_schemas_lock.locked():
+ return
+ async with self._refresh_schemas_lock:
+ await self._refresh_schemas()
+
+ async def _refresh_schemas(self):
internal_db = self.databases[""_internal""]
if not self.internal_db_created:
await init_internal_db(internal_db)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881665383,https://api.github.com/repos/simonw/datasette/issues/1231,881665383,IC_kwDOBm6k_c40jSVn,9599,simonw,2021-07-16T19:21:35Z,2021-07-16T19:21:35Z,OWNER,"https://stackoverflow.com/a/25799871/6083 has a good example of using `asyncio.Lock()`:
```python
stuff_lock = asyncio.Lock()
async def get_stuff(url):
async with stuff_lock:
if url in cache:
return cache[url]
stuff = await aiohttp.request('GET', url)
cache[url] = stuff
return stuff
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881664408,https://api.github.com/repos/simonw/datasette/issues/1231,881664408,IC_kwDOBm6k_c40jSGY,9599,simonw,2021-07-16T19:19:35Z,2021-07-16T19:19:35Z,OWNER,"The only place that calls `refresh_schemas()` is here: https://github.com/simonw/datasette/blob/dd5ee8e66882c94343cd3f71920878c6cfd0da41/datasette/views/base.py#L120-L124
Ideally only one call to `refresh_schemas()` would be running at any one time.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881663968,https://api.github.com/repos/simonw/datasette/issues/1231,881663968,IC_kwDOBm6k_c40jR_g,9599,simonw,2021-07-16T19:18:42Z,2021-07-16T19:18:42Z,OWNER,The race condition happens inside this method - initially with the call to `await init_internal_db()`: https://github.com/simonw/datasette/blob/dd5ee8e66882c94343cd3f71920878c6cfd0da41/datasette/app.py#L334-L359,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,