(impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/simonw/sqlite-utils/pull/303?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [c7e8d72...4c3bf97](https://codecov.io/gh/simonw/sqlite-utils/pull/303?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957536983,sqlite-utils convert command and db[table].convert(...) method,
https://github.com/simonw/sqlite-utils/issues/304#issuecomment-890704624,https://api.github.com/repos/simonw/sqlite-utils/issues/304,890704624,IC_kwDOCGYnMM41FxLw,9599,simonw,2021-08-02T04:28:42Z,2021-08-02T04:28:42Z,OWNER,For the command-line version this can duplicate the `--param` option to allow named parameters in the where clause: https://github.com/simonw/sqlite-utils/blob/c7e8d72be9fe8fe0811f685a18eebc637662d41b/sqlite_utils/cli.py#L1096-L1102,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957731178,"`table.convert(..., where=)` and `sqlite-utils convert ... --where=`",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890553783,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890553783,IC_kwDOCGYnMM41FMW3,9599,simonw,2021-08-01T16:59:09Z,2021-08-01T16:59:09Z,OWNER,I'm going with `recipes.jsonsplit()` rather than `recipe.jsonsplit()` because the Python module containing the recipes will be called `recipes`. I'll set up a `r.jsonsplit()` shortcut too as a convenience.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890552827,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890552827,IC_kwDOCGYnMM41FMH7,9599,simonw,2021-08-01T16:52:00Z,2021-08-01T16:52:00Z,OWNER,I'll finish the work on this in a PR.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/302#issuecomment-890548009,https://api.github.com/repos/simonw/sqlite-utils/issues/302,890548009,IC_kwDOCGYnMM41FK8p,9599,simonw,2021-08-01T16:18:13Z,2021-08-01T16:18:13Z,OWNER,"Basic API design:
db[table].convert(""headline"", lambda v: v.upper())
You can pass a list of columns instead of a single column name string to apply it to multiple columns.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957529248,Python library version of `sqlite-utils convert`,
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890448623,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890448623,IC_kwDOCGYnMM41Eyrv,9599,simonw,2021-08-01T04:33:30Z,2021-08-01T04:33:30Z,OWNER,"I've started an implementation in the `convert` branch - no documentation yet, and I've not implemented the recipes.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890448119,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890448119,IC_kwDOCGYnMM41Eyj3,9599,simonw,2021-08-01T04:28:05Z,2021-08-01T04:30:28Z,OWNER,"In which case I think `--code` should be a positional argument instead:
```
sqlite-utils convert mydb.db mytable col 'recipe.parsedatetime(value, dayfirst=True)'
sqlite-utils convert mydb.db mytable col 'recipe.jsonsplit(value, delimiter="":"")'
sqlite-utils convert mydb.db mytable col '{""lower"": value.lower(), ""upper"": value.upper()}' --multi
```
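A rough sketch of the Click signature this implies - hypothetical, not necessarily the final implementation - using a variadic `columns` argument followed by a trailing required `code` argument (Click supports `nargs=-1` followed by one final argument):
```python
# Hypothetical sketch only - argument and option names here are assumptions
import click

@click.command()
@click.argument('db_path')
@click.argument('table')
@click.argument('columns', nargs=-1, required=True)
@click.argument('code')
@click.option('--multi', is_flag=True, help='Code returns a dict of new columns')
def convert(db_path, table, columns, code, multi):
    'Apply CODE to each value in COLUMNS of TABLE'
    ...
```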
One problem with this: we already accept one or more columns. I think that's OK though since the code is now a required argument, so it means we have to treat everything between the table and the final code argument as a column.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890447102,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890447102,IC_kwDOCGYnMM41EyT-,9599,simonw,2021-08-01T04:20:18Z,2021-08-01T04:29:26Z,OWNER,"I could stick them in a `recipe` namespace so you do this:
```
sqlite-utils convert mydb.db mytable col --code 'recipe.parsedatetime(value, dayfirst=True)'
sqlite-utils convert mydb.db mytable col --code 'recipe.jsonsplit(value, delimiter="":"")'
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890448190,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890448190,IC_kwDOCGYnMM41Eyk-,9599,simonw,2021-08-01T04:28:49Z,2021-08-01T04:28:49Z,OWNER,"Would make sense to accept code from standard input too:
echo 'value.upper()' | sqlite-utils convert my.db mytable col -","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446808,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446808,IC_kwDOCGYnMM41EyPY,9599,simonw,2021-08-01T04:18:18Z,2021-08-01T04:28:18Z,OWNER,"Or.... how about making the `parsedate()` and `parsedatetime()` and `jsonsplit()` functions available within the namespace that is configured for the `--code` block?
Then you could do something like this:
```
sqlite-utils convert mydb.db mytable col --code 'parsedatetime(value, dayfirst=True)'
sqlite-utils convert mydb.db mytable col --code 'jsonsplit(value, delimiter="":"")'
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446943,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446943,IC_kwDOCGYnMM41EyRf,9599,simonw,2021-08-01T04:19:09Z,2021-08-01T04:19:09Z,OWNER,"That's a pretty neat fix, though it's a bit more challenging on the documentation front - maybe the help text for `sqlite-utils convert --help` gets a fair bit longer?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446506,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446506,IC_kwDOCGYnMM41EyKq,9599,simonw,2021-08-01T04:16:36Z,2021-08-01T04:16:36Z,OWNER,"Back to the design board then. One way to handle this would be the long-form:
```
sqlite-utils convert jsonsplit mydb.db mytable mycolumn
sqlite-utils convert parsedatetime mydb.db mytable mycolumn
sqlite-utils convert parsedate mydb.db mytable mycolumn
sqlite-utils convert lambda mydb.db mytable mycolumn --code='str(value).upper()'
```
I like the idea that `lambda` is the default action, but in this form it's required that the second argument (the word after `convert`) be the name of the recipe that is being applied to avoid any potential confusion with the database filename.
An ugly solution would be to make all four of those options available on `sqlite-utils convert` - and return an error if you try and use one of those without specifying the accompanying recipe. That's a bit gross though.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890446166,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890446166,IC_kwDOCGYnMM41EyFW,9599,simonw,2021-08-01T04:14:26Z,2021-08-01T04:14:26Z,OWNER,"Problem with the `-r/--recipe` idea: the `parsedate` and `parsedatetime` and `jsonsplit` recipes in the current `sqlite-transform` tool all take additional options.
For `sqlite-transform parsedate` and `parsedatetime`:
```python
@click.option(
""--dayfirst"",
is_flag=True,
help=""Assume day comes first in ambiguous dates, e.g. 03/04/05"",
)
@click.option(
""--yearfirst"",
is_flag=True,
help=""Assume year comes first in ambiguous dates, e.g. 03/04/05"",
)
```
For `jsonsplit`:
```python
@click.option(""--delimiter"", default="","", help=""Delimiter to split on"")
@click.option(
""--type"",
type=click.Choice((""int"", ""float"")),
help=""Type to use for values - int or float (defaults to string)"",
)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-890443079,https://api.github.com/repos/simonw/sqlite-utils/issues/251,890443079,IC_kwDOCGYnMM41ExVH,9599,simonw,2021-08-01T03:46:43Z,2021-08-01T03:46:43Z,OWNER,"Note that there's already a concept of `conversions` which might be confused with `convert`? https://sqlite-utils.datasette.io/en/stable/python-api.html#converting-column-values-using-sql-functions
```python
db[""example""].insert({
""name"": ""The Bigfoot Discovery Museum""
}, conversions={""name"": ""upper(?)""})
```
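For comparison, the command being designed in this issue applies a Python expression to values already in the table rather than a SQL function at insert time - a sketch using the positional-code design discussed above:
```
sqlite-utils convert mydb.db example name 'value.upper()'
```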
I think that's OK though - that's a Python library feature, `sqlite-utils convert` is a CLI thing.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/datasette/issues/1411#issuecomment-890441844,https://api.github.com/repos/simonw/datasette/issues/1411,890441844,IC_kwDOBm6k_c41ExB0,9599,simonw,2021-08-01T03:27:30Z,2021-08-01T03:27:30Z,OWNER,Confirmed: https://latest.datasette.io/fixtures/neighborhood_search?text=cork&_hide_sql=1 no longer exhibits the bug.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957345476,Canned query ?sql= is pointlessly echoed in query string starting from hidden mode,
https://github.com/simonw/datasette/issues/1409#issuecomment-890400425,https://api.github.com/repos/simonw/datasette/issues/1409,890400425,IC_kwDOBm6k_c41Em6p,9599,simonw,2021-07-31T20:25:16Z,2021-07-31T20:26:25Z,OWNER,"If I was prone to over-thinking (which I am) I'd note that `allow_facet` and `allow_download` and `allow_csv_stream` are all settings that do NOT have an equivalent in the newer permissions system, which is itself a little weird and inconsistent.
So maybe there's a future task where I introduce those as both permissions and metadata `""allow_x""` blocks, then rename the settings themselves to be called `default_allow_facet` and `default_allow_download` and `default_allow_csv_stream`.
If I was going to do that I should get it in before Datasette 1.0.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890400121,https://api.github.com/repos/simonw/datasette/issues/1409,890400121,IC_kwDOBm6k_c41Em15,9599,simonw,2021-07-31T20:22:21Z,2021-07-31T20:23:34Z,OWNER,"I think `default_allow_sql` is more consistent with the current naming conventions, because both `allow` and `default` are used as prefixes at the moment but neither of them are ever used as a suffix.
Plus `default_allow_sql off` makes sense to me but `allow_default_sql off` does not - what is ""default SQL""?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890400059,https://api.github.com/repos/simonw/datasette/issues/1409,890400059,IC_kwDOBm6k_c41Em07,9599,simonw,2021-07-31T20:21:51Z,2021-07-31T20:21:51Z,OWNER,"One of these two options:
- `--setting default_allow_sql off`
- `--setting allow_sql_default off`
Existing settings from https://docs.datasette.io/en/0.58.1/settings.html with similar names that I need to be consistent with:
- `default_page_size`
- `allow_facet`
- `default_facet_size`
- `allow_download`
- `default_cache_ttl`
- `default_cache_ttl_hashed`
- `allow_csv_stream`
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890399806,https://api.github.com/repos/simonw/datasette/issues/1409,890399806,IC_kwDOBm6k_c41Emw-,9599,simonw,2021-07-31T20:18:46Z,2021-07-31T20:18:46Z,OWNER,"My rationale for removing it: https://github.com/simonw/datasette/issues/813#issuecomment-640916290
> Naming problem: Datasette already has a config option with this name:
>
> $ datasette serve data.db --config allow_sql:1
>
> https://datasette.readthedocs.io/en/stable/config.html#allow-sql
>
> It's confusing to have two things called `allow_sql` that do slightly different things.
>
> I could retire the `--config allow_sql:0` option entirely, since the new `metadata.json` mechanism can be used to achieve the exact same thing.
>
> I'm going to do that.
This is true. The `""allow_sql""` permissions block in `metadata.json` does indeed have a name that is easily confused with `--setting allow_sql off`.
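For reference, that permissions block looks something like this at the instance or database level of `metadata.json` (a sketch based on the permissions documentation):
```json
{
    ""allow_sql"": {
        ""id"": ""root""
    }
}
```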
So I definitely need to pick a different name for the setting. `--setting default_allow_sql off` is a good option here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397753,https://api.github.com/repos/simonw/datasette/issues/1409,890397753,IC_kwDOBm6k_c41EmQ5,9599,simonw,2021-07-31T19:57:56Z,2021-07-31T19:57:56Z,OWNER,"I think the correct solution is for the default permissions logic to take the `allow_sql` setting into account, and to return `False` if that setting is set to `off` AND the current actor fails the `actor_matches_allow` checks.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397652,https://api.github.com/repos/simonw/datasette/issues/1409,890397652,IC_kwDOBm6k_c41EmPU,9599,simonw,2021-07-31T19:56:48Z,2021-07-31T19:56:48Z,OWNER,"The other option would be to use the setting to pick the `default=` argument when calling `self.ds.permission_allowed( request.actor, ""execute-sql"", resource=database, default=True)`.
The problem with that is that there are actually a few different places which perform that check, so changing all of them raises the risk of missing one in the future:
https://github.com/simonw/datasette/blob/a6c8e7fa4cffdeff84e9e755dcff4788fd6154b8/datasette/views/table.py#L436-L444
https://github.com/simonw/datasette/blob/a6c8e7fa4cffdeff84e9e755dcff4788fd6154b8/datasette/views/table.py#L964-L966
https://github.com/simonw/datasette/blob/d23a2671386187f61872b9f6b58e0f80ac61f8fe/datasette/views/database.py#L220-L221
https://github.com/simonw/datasette/blob/d23a2671386187f61872b9f6b58e0f80ac61f8fe/datasette/views/database.py#L343-L345
https://github.com/simonw/datasette/blob/d23a2671386187f61872b9f6b58e0f80ac61f8fe/datasette/views/database.py#L134-L136
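For illustration, the change at each of those call sites would look roughly like this (a sketch - `default_allow_sql` is the proposed setting name, not something that exists yet):
```python
# Hypothetical: derive the default from the proposed setting instead of hard-coding True
default = self.ds.setting('default_allow_sql')
allowed = await self.ds.permission_allowed(
    request.actor, 'execute-sql', resource=database, default=default
)
```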
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397261,https://api.github.com/repos/simonw/datasette/issues/1409,890397261,IC_kwDOBm6k_c41EmJN,9599,simonw,2021-07-31T19:52:25Z,2021-07-31T19:52:25Z,OWNER,I think I can make this modification by teaching the default permissions code here to take the `allow_sql` setting into account: https://github.com/simonw/datasette/blob/ff253f5242e4b0b5d85d29d38b8461feb5ea997a/datasette/default_permissions.py#L38-L45,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397169,https://api.github.com/repos/simonw/datasette/issues/1409,890397169,IC_kwDOBm6k_c41EmHx,9599,simonw,2021-07-31T19:51:35Z,2021-07-31T19:51:35Z,OWNER,I'm going to stick with `--setting allow_sql off`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1409#issuecomment-890397124,https://api.github.com/repos/simonw/datasette/issues/1409,890397124,IC_kwDOBm6k_c41EmHE,9599,simonw,2021-07-31T19:51:10Z,2021-07-31T19:51:10Z,OWNER,"I think I may like `disable_sql` better. Some options:
- `--setting allow_sql off` (consistent with `allow_facet` and `allow_download` and `allow_csv_stream` - all of which default to `on` already)
- `--setting disable_sql on`
- `--setting disable_custom_sql on`
The existence of three `allow_*` settings does make a strong argument for staying consistent with that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957310278,`default_allow_sql` setting (a re-imagining of the old `allow_sql` setting),
https://github.com/simonw/datasette/issues/1408#issuecomment-890390845,https://api.github.com/repos/simonw/datasette/issues/1408,890390845,IC_kwDOBm6k_c41Ekk9,9599,simonw,2021-07-31T19:00:32Z,2021-07-31T19:00:32Z,OWNER,"When I revisit this I can also look at dropping the `@pytest.mark.serial` hack, and maybe the `restore_working_directory()` fixture hack too:
https://github.com/simonw/datasette/blob/ff253f5242e4b0b5d85d29d38b8461feb5ea997a/pytest.ini#L9-L10
https://github.com/simonw/datasette/blob/ff253f5242e4b0b5d85d29d38b8461feb5ea997a/tests/conftest.py#L62-L75","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957302085,"Review places in codebase that use os.chdir(), in particularly relating to tests",
https://github.com/simonw/datasette/issues/1408#issuecomment-890390495,https://api.github.com/repos/simonw/datasette/issues/1408,890390495,IC_kwDOBm6k_c41Ekff,9599,simonw,2021-07-31T18:57:39Z,2021-07-31T18:57:39Z,OWNER,Opening this issue as an optional follow-up to the work I did in #1406.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957302085,"Review places in codebase that use os.chdir(), in particularly relating to tests",
https://github.com/simonw/datasette/issues/1406#issuecomment-890390342,https://api.github.com/repos/simonw/datasette/issues/1406,890390342,IC_kwDOBm6k_c41EkdG,9599,simonw,2021-07-31T18:56:35Z,2021-07-31T18:56:35Z,OWNER,"But... I've lost enough time to this already, and removing `runner.isolated_filesystem()` has the tests passing again. So I'm not going to work on this any more.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-890390198,https://api.github.com/repos/simonw/datasette/issues/1406,890390198,IC_kwDOBm6k_c41Eka2,9599,simonw,2021-07-31T18:55:33Z,2021-07-31T18:55:33Z,OWNER,"To clarify: the core problem here is that an error is thrown any time you call `os.getcwd()` but the directory you are currently in has been deleted.
`runner.isolated_filesystem()` assumes that the current working directory has not been deleted. But the various temporary directory utilities in `pytest` work by creating directories and then deleting them.
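The failure is easy to reproduce in isolation (on macOS and Linux at least):
```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
os.chdir(d)
shutil.rmtree(d)
os.getcwd()  # FileNotFoundError: [Errno 2] No such file or directory
```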
Maybe there's a larger problem here that I play a bit fast and loose with `os.chdir()` in both the test suite and in various lines of code in Datasette itself (in particular in the publish commands)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1407#issuecomment-890388656,https://api.github.com/repos/simonw/datasette/issues/1407,890388656,IC_kwDOBm6k_c41EkCw,9599,simonw,2021-07-31T18:42:41Z,2021-07-31T18:42:41Z,OWNER,I'll try `tempfile.gettempdir()` - on macOS it returns something like `'/var/folders/wr/hn3206rs1yzgq3r49bz8nvnh0000gn/T'` which is still long but hopefully not too long.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957298475,OSError: AF_UNIX path too long in ds_unix_domain_socket_server,
https://github.com/simonw/datasette/issues/1407#issuecomment-890388200,https://api.github.com/repos/simonw/datasette/issues/1407,890388200,IC_kwDOBm6k_c41Ej7o,9599,simonw,2021-07-31T18:38:41Z,2021-07-31T18:38:41Z,OWNER,"The `path` variable there looked like this:
`/private/var/folders/wr/hn3206rs1yzgq3r49bz8nvnh0000gn/T/pytest-of-simon/pytest-696/popen-gw0/uds0/datasette.sock`
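That path is 113 characters long, which is over the roughly 104 byte limit macOS applies to AF_UNIX socket paths (Linux allows around 108):
```python
>>> len('/private/var/folders/wr/hn3206rs1yzgq3r49bz8nvnh0000gn/T/pytest-of-simon/pytest-696/popen-gw0/uds0/datasette.sock')
113
```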
I think what's happening here is that `pytest-xdist` causes `tmp_path_factory.mktemp(""uds"")` to create significantly longer paths, which in this case is breaking the AF_UNIX socket path length limit.
So for this code to work with `pytest-xdist` I need to make sure the random path to `datasette.sock` is shorter.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",957298475,OSError: AF_UNIX path too long in ds_unix_domain_socket_server,
https://github.com/simonw/datasette/issues/1406#issuecomment-890259755,https://api.github.com/repos/simonw/datasette/issues/1406,890259755,IC_kwDOBm6k_c41EEkr,9599,simonw,2021-07-31T00:04:54Z,2021-07-31T00:04:54Z,OWNER,"STILL failing. I'm going to try removing all instances of `isolated_filesystem()` in favour of a different pattern using pytest temporary files, then see if I can get that to work without the serial hack. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1398#issuecomment-889599513,https://api.github.com/repos/simonw/datasette/issues/1398,889599513,IC_kwDOBm6k_c41BjYZ,192984,aitoehigie,2021-07-30T03:21:49Z,2021-07-30T03:21:49Z,NONE,Does the library support this now?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",947044667,Documentation on using Datasette as a library,
https://github.com/simonw/datasette/issues/1406#issuecomment-889555977,https://api.github.com/repos/simonw/datasette/issues/1406,889555977,IC_kwDOBm6k_c41BYwJ,9599,simonw,2021-07-30T01:06:57Z,2021-07-30T01:06:57Z,OWNER,"Looking at the source code in Click for `isolated_filesystem()`: https://github.com/pallets/click/blob/9da166957f5848b641231d485467f6140bca2bc0/src/click/testing.py#L450-L468
```python
@contextlib.contextmanager
def isolated_filesystem(
self, temp_dir: t.Optional[t.Union[str, os.PathLike]] = None
) -> t.Iterator[str]:
""""""A context manager that creates a temporary directory and
changes the current working directory to it. This isolates tests
that affect the contents of the CWD to prevent them from
interfering with each other.
:param temp_dir: Create the temporary directory under this
directory. If given, the created directory is not removed
when exiting.
.. versionchanged:: 8.0
Added the ``temp_dir`` parameter.
""""""
cwd = os.getcwd()
t = tempfile.mkdtemp(dir=temp_dir)
os.chdir(t)
```
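Note the `temp_dir` parameter added in Click 8.0 - a rough sketch of a test using it with pytest's `tmp_path` fixture (hypothetical):
```python
from click.testing import CliRunner

# Hypothetical sketch - relies on Click 8.0's temp_dir= parameter
def test_example(tmp_path):
    runner = CliRunner()
    with runner.isolated_filesystem(temp_dir=tmp_path):
        ...  # cwd is now a directory created under pytest's per-test tmp_path
```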
How about if I pass in that optional `temp_dir` as a temp directory created using the `pytest-xdist` aware pytest mechanisms: https://docs.pytest.org/en/6.2.x/tmpdir.html","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-889553052,https://api.github.com/repos/simonw/datasette/issues/1406,889553052,IC_kwDOBm6k_c41BYCc,9599,simonw,2021-07-30T00:58:43Z,2021-07-30T00:58:43Z,OWNER,Tests are still failing in the job that calculates coverage.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-889550391,https://api.github.com/repos/simonw/datasette/issues/1406,889550391,IC_kwDOBm6k_c41BXY3,9599,simonw,2021-07-30T00:49:31Z,2021-07-30T00:49:31Z,OWNER,That fixed it. My hunch is that Click's `runner.isolated_filesystem()` mechanism doesn't play well with `pytest-xdist`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-889548536,https://api.github.com/repos/simonw/datasette/issues/1406,889548536,IC_kwDOBm6k_c41BW74,9599,simonw,2021-07-30T00:43:47Z,2021-07-30T00:43:47Z,OWNER,"Still couldn't replicate on my laptop. On a hunch, I'm going to add `@pytest.mark.serial` to every test that uses `runner.isolated_filesystem()`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1406#issuecomment-889547142,https://api.github.com/repos/simonw/datasette/issues/1406,889547142,IC_kwDOBm6k_c41BWmG,9599,simonw,2021-07-30T00:39:49Z,2021-07-30T00:39:49Z,OWNER,It happens in CI but not on my laptop. I think I need to run the tests on my laptop like this: https://github.com/simonw/datasette/blob/121e10c29c5b412fddf0326939f1fe46c3ad9d4a/.github/workflows/test.yml#L27-L30,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",956303470,Tests failing with FileNotFoundError in runner.isolated_filesystem,
https://github.com/simonw/datasette/issues/1241#issuecomment-889539227,https://api.github.com/repos/simonw/datasette/issues/1241,889539227,IC_kwDOBm6k_c41BUqb,9599,simonw,2021-07-30T00:15:26Z,2021-07-30T00:15:26Z,OWNER,"One possible treatment:
```html
{% if query.sql and allow_execute_sql %}
View and edit SQL
{% endif %}
Copy and share link
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",814595021,Share button for copying current URL,
https://github.com/simonw/datasette/issues/1405#issuecomment-889525741,https://api.github.com/repos/simonw/datasette/issues/1405,889525741,IC_kwDOBm6k_c41BRXt,9599,simonw,2021-07-29T23:33:30Z,2021-07-29T23:33:30Z,OWNER,New documentation section for `datasette.utils` is here: https://docs.datasette.io/en/latest/internals.html#the-datasette-utils-module,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",955316250,utils.parse_metadata() should be a documented internal function,
https://github.com/simonw/datasette/issues/1405#issuecomment-888694261,https://api.github.com/repos/simonw/datasette/issues/1405,888694261,IC_kwDOBm6k_c40-GX1,9599,simonw,2021-07-28T23:52:21Z,2021-07-28T23:52:21Z,OWNER,Document that it can raise a `BadMetadataError` exception.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",955316250,utils.parse_metadata() should be a documented internal function,
https://github.com/simonw/datasette/issues/1405#issuecomment-888694144,https://api.github.com/repos/simonw/datasette/issues/1405,888694144,IC_kwDOBm6k_c40-GWA,9599,simonw,2021-07-28T23:51:59Z,2021-07-28T23:51:59Z,OWNER,https://github.com/simonw/datasette/blob/eccfeb0871dd4bc27870faf64f80ac68e5b6bc0d/datasette/utils/__init__.py#L918-L926,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",955316250,utils.parse_metadata() should be a documented internal function,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-888075098,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,888075098,IC_kwDODFE5qs407vNa,28565,maxhawkins,2021-07-28T07:18:56Z,2021-07-28T07:18:56Z,NONE,"> I'm not sure why but my most recent import, when displayed in Datasette, looks like this:
>
>
I did some investigation into this issue and made a fix [here](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8ee555c2889a38ff42b95664ee074b4a01a82f06). The problem was that some messages (like gchat logs) don't have a `Message-Id` and we need to use `X-GM-THRID` as the pkey instead.
@simonw While looking into this I found something unexpected about how sqlite_utils handles upserts if the pkey column is `None`. When the pkey is NULL I'd expect the function to either use rowid or throw an exception. Instead, it seems upsert_all creates a row where all columns are NULL instead of using the values provided as parameters.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/simonw/datasette/issues/1404#issuecomment-887095569,https://api.github.com/repos/simonw/datasette/issues/1404,887095569,IC_kwDOBm6k_c404AER,9599,simonw,2021-07-26T23:27:07Z,2021-07-26T23:27:07Z,OWNER,Updated documentation: https://github.com/simonw/datasette/blob/eccfeb0871dd4bc27870faf64f80ac68e5b6bc0d/docs/plugin_hooks.rst#register-routes-datasette,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",953352015,`register_routes()` hook should take `datasette` argument,
https://github.com/simonw/datasette/issues/1402#issuecomment-886969541,https://api.github.com/repos/simonw/datasette/issues/1402,886969541,IC_kwDOBm6k_c403hTF,9599,simonw,2021-07-26T19:31:40Z,2021-07-26T19:31:40Z,OWNER,"Datasette could do a pretty good job of this by default, using `twitter:card` and `og:url` tags - like on https://til.simonwillison.net/jq/extracting-objects-recursively
I could also provide a mechanism to customize these - in particular to add images of some sort.
It feels like something that should tie in to the metadata mechanism.","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",951185411,feature request: social meta tags,
https://github.com/simonw/datasette/issues/1402#issuecomment-886968648,https://api.github.com/repos/simonw/datasette/issues/1402,886968648,IC_kwDOBm6k_c403hFI,9599,simonw,2021-07-26T19:30:14Z,2021-07-26T19:30:14Z,OWNER,"I really like this idea. I was thinking it might make a good plugin, but there's not a great mechanism for plugins to inject extra `<head>` content at the moment - plus this actually feels like a reasonable feature for Datasette core itself.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",951185411,feature request: social meta tags,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/3#issuecomment-886241674,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/3,886241674,IC_kwDODtX3eM400vmK,9599,simonw,2021-07-25T18:41:17Z,2021-07-25T18:41:17Z,MEMBER,Got a TIL out of this: https://til.simonwillison.net/jq/extracting-objects-recursively,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952189173,Use HN algolia endpoint to retrieve trees,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/3#issuecomment-886237834,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/3,886237834,IC_kwDODtX3eM400uqK,9599,simonw,2021-07-25T18:05:32Z,2021-07-25T18:05:32Z,MEMBER,"If you hit the endpoint for a comment that's part of a thread you get that comment and its recursive children: https://hn.algolia.com/api/v1/items/27941552
You can tell that it's not the top-level because the `parent_id` isn't `null`. You can use `story_id` to figure out what the top-level item is.
```json
{
""id"": 27941552,
""created_at"": ""2021-07-24T15:08:39.000Z"",
""created_at_i"": 1627139319,
""type"": ""comment"",
""author"": ""nine_k"",
""title"": null,
""url"": null,
""text"": ""I wish ..."",
""points"": null,
""parent_id"": 27941108,
""story_id"": 27941108
}
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952189173,Use HN algolia endpoint to retrieve trees,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/3#issuecomment-886142671,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/3,886142671,IC_kwDODtX3eM400XbP,9599,simonw,2021-07-25T03:51:05Z,2021-07-25T03:51:05Z,MEMBER,"Prototype:
curl 'https://hn.algolia.com/api/v1/items/27941108' \
| jq '[recurse(.children[]) | del(.children)]' \
| sqlite-utils insert hn.db items - --pk id
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952189173,Use HN algolia endpoint to retrieve trees,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886140431,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886140431,IC_kwDODtX3eM400W4P,9599,simonw,2021-07-25T03:12:57Z,2021-07-25T03:12:57Z,MEMBER,"I'm going to build a general-purpose `hacker-news-to-sqlite search ...` command, where one of the options is to search within the URL.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886136224,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886136224,IC_kwDODtX3eM400V2g,9599,simonw,2021-07-25T02:08:29Z,2021-07-25T02:08:29Z,MEMBER,"Prototype:
curl ""https://hn.algolia.com/api/v1/search_by_date?query=simonwillison.net&restrictSearchableAttributes=url&hitsPerPage=1000"" | \
jq .hits | sqlite-utils insert hn.db items - --pk objectID --alter","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886135922,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886135922,IC_kwDODtX3eM400Vxy,9599,simonw,2021-07-25T02:06:20Z,2021-07-25T02:06:20Z,MEMBER,"https://hn.algolia.com/api/v1/search_by_date?query=simonwillison.net&restrictSearchableAttributes=url looks like it does what I want.
https://hn.algolia.com/api/v1/search_by_date?query=simonwillison.net&restrictSearchableAttributes=url&hitsPerPage=1000 - returns 1000 at once.
Otherwise you have to paginate using `&page=2` etc - up to `nbPages` pages.
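A rough pagination sketch (hypothetical, using `requests`):
```python
import requests

def search_by_url(query):
    # Yield every Algolia hit for a URL search, following up to nbPages pages of 1000 hits each
    page = 0
    while True:
        data = requests.get(
            'https://hn.algolia.com/api/v1/search_by_date',
            params={
                'query': query,
                'restrictSearchableAttributes': 'url',
                'hitsPerPage': 1000,
                'page': page,
            },
        ).json()
        yield from data['hits']
        page += 1
        if page >= data['nbPages']:
            break
```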
https://www.algolia.com/doc/api-reference/api-parameters/hitsPerPage/ says 1000 is the maximum.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
https://github.com/dogsheep/hacker-news-to-sqlite/issues/2#issuecomment-886135562,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/2,886135562,IC_kwDODtX3eM400VsK,9599,simonw,2021-07-25T02:01:11Z,2021-07-25T02:01:11Z,MEMBER,"That page doesn't have an API but does look easy to scrape.
The other option here is the HN Search API powered by Algolia, documented at https://hn.algolia.com/api","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952179830,Command for fetching Hacker News threads from the search API,
https://github.com/simonw/sqlite-utils/issues/251#issuecomment-886122696,https://api.github.com/repos/simonw/sqlite-utils/issues/251,886122696,IC_kwDOCGYnMM400SjI,9599,simonw,2021-07-24T23:21:32Z,2021-07-24T23:21:32Z,OWNER,"> ```
> sqlite-utils convert jsonsplit mydb.db mytable mycolumn
> sqlite-utils convert parsedatetime mydb.db mytable mycolumn
> sqlite-utils convert parsedate mydb.db mytable mycolumn
> sqlite-utils convert lambda mydb.db mytable mycolumn --code='str(value).upper()'
> ```
This is a bit verbose - and having added `--multi` and `--output` the `lambda` command keeps getting more and more flexible compared to the others.
New idea: ditch the sub-sub-commands and move the `jsonsplit` and `parsedate` recipes to be options of `convert` - maybe like this:
sqlite-utils convert my.db mytable col1 --jsonsplit
or:
sqlite-utils convert my.db mytable col1 --recipe jsonsplit
or `-r jsonsplit` for short.
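The recipe-specific options that the current `sqlite-transform` recipes take (`--dayfirst`, `--delimiter`, `--type` and so on) would presumably need to become options on `convert` too - a hypothetical sketch of how that might look:

    sqlite-utils convert my.db mytable col1 -r jsonsplit --delimiter ';'
    sqlite-utils convert my.db mytable col1 -r parsedate --dayfirst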
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841377702,"""sqlite-utils convert"" command to replace the separate ""sqlite-transform"" tool",
https://github.com/simonw/sqlite-utils/issues/299#issuecomment-886117120,https://api.github.com/repos/simonw/sqlite-utils/issues/299,886117120,IC_kwDOCGYnMM400RMA,9599,simonw,2021-07-24T22:12:01Z,2021-07-24T22:12:01Z,OWNER,Documentation here: https://sqlite-utils.datasette.io/en/latest/cli.html#showing-the-schema,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",952154468,Ability to see just specific table schemas with `sqlite-utils schema`,
https://github.com/dogsheep/github-to-sqlite/pull/65#issuecomment-885964242,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65,885964242,IC_kwDODFdgUs40zr3S,231498,khimaros,2021-07-23T23:45:35Z,2021-07-23T23:45:35Z,NONE,@simonw is this PR of interest to you?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923270900,basic support for events,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885098025,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885098025,IC_kwDODFE5qs40wYYp,306240,UtahDave,2021-07-22T17:47:50Z,2021-07-22T17:47:50Z,NONE,"Hi @maxhawkins , I'm sorry, I haven't had any time to work on this. I'll have some time tomorrow to test your commits. I think they look great. I'm great with your commits superseding my initial attempt here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885094284,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885094284,IC_kwDODFE5qs40wXeM,28565,maxhawkins,2021-07-22T17:41:32Z,2021-07-22T17:41:32Z,NONE,I added a follow-up commit that deals with emails that don't have a `Date` header: https://github.com/maxhawkins/google-takeout-to-sqlite/commit/4bc70103582c10802c85a523ef1e99a8a2154aa9,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885022230,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885022230,IC_kwDODFE5qs40wF4W,28565,maxhawkins,2021-07-22T15:51:46Z,2021-07-22T15:51:46Z,NONE,One thing I noticed is this importer doesn't save attachments along with the body of the emails. It would be nice if those got stored as blobs in a separate attachments table so attachments can be included while fetching search results.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-884672647,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,884672647,IC_kwDODFE5qs40uwiH,28565,maxhawkins,2021-07-22T05:56:31Z,2021-07-22T14:03:08Z,NONE,"How does this commit look? https://github.com/maxhawkins/google-takeout-to-sqlite/commit/72802a83fee282eb5d02d388567731ba4301050d
It seems that Takeout's mbox format is pretty simple, so we can get away with just splitting the file on lines beginning with `From `. My commit just splits the file every time a line starts with `From ` and uses `email.message_from_bytes` to parse each chunk.
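A minimal sketch of that streaming approach (not the exact code from the commit):
```python
import email

def iter_messages(path):
    # Stream an mbox file, yielding one email.message.Message per message.
    # Naive split: a line starting with b'From ' begins a new message.
    chunk = []
    with open(path, 'rb') as fp:
        for line in fp:
            if line.startswith(b'From ') and chunk:
                yield email.message_from_bytes(b''.join(chunk))
                chunk = []
            chunk.append(line)
    if chunk:
        yield email.message_from_bytes(b''.join(chunk))
```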
I was able to load a 12GB takeout mbox without the program using more than a couple hundred MB of memory during the import process. It does make us lose the progress bar, but maybe I can add that back in a later commit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import,
https://github.com/simonw/datasette/issues/1401#issuecomment-884910320,https://api.github.com/repos/simonw/datasette/issues/1401,884910320,IC_kwDOBm6k_c40vqjw,536941,fgregg,2021-07-22T13:26:01Z,2021-07-22T13:26:01Z,CONTRIBUTOR,"ordered lists didn't work either, btw","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",950664971,unordered list is not rendering bullet points in description_html on database page,
https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-884688833,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,884688833,IC_kwDOD079W840u0fB,10793464,aaronyih1,2021-07-22T06:40:25Z,2021-07-22T06:40:25Z,NONE,The solution here is to upload an image to the bucket first. It is caused because it does not properly handle the case when there are no images in the bucket.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload,
https://github.com/simonw/datasette/pull/1296#issuecomment-817403642,https://api.github.com/repos/simonw/datasette/issues/1296,817403642,MDEyOklzc3VlQ29tbWVudDgxNzQwMzY0Mg==,22429695,codecov[bot],2021-04-12T00:29:05Z,2021-07-20T08:52:12Z,NONE,"# [Codecov](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) Report
> Merging [#1296](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (527a056) into [main](https://codecov.io/gh/simonw/datasette/commit/c73af5dd72305f6a01ea94a2c76d52e5e26de38b?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (c73af5d) will **decrease** coverage by `0.11%`.
> The diff coverage is `n/a`.
> :exclamation: Current head 527a056 differs from pull request most recent head 8f00c31. Consider uploading reports for the commit 8f00c31 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/1296/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
```diff
@@ Coverage Diff @@
## main #1296 +/- ##
==========================================
- Coverage 91.62% 91.51% -0.12%
==========================================
Files 34 34
Lines 4371 4255 -116
==========================================
- Hits 4005 3894 -111
+ Misses 366 361 -5
```
| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) | Coverage Δ | |
|---|---|---|
| [datasette/tracer.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3RyYWNlci5weQ==) | `81.60% <0.00%> (-1.35%)` | :arrow_down: |
| [datasette/views/base.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2Jhc2UucHk=) | `95.01% <0.00%> (-0.42%)` | :arrow_down: |
| [datasette/facets.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL2ZhY2V0cy5weQ==) | `89.04% <0.00%> (-0.41%)` | :arrow_down: |
| [datasette/utils/\_\_init\_\_.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3V0aWxzL19faW5pdF9fLnB5) | `94.13% <0.00%> (-0.21%)` | :arrow_down: |
| [datasette/renderer.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3JlbmRlcmVyLnB5) | `94.02% <0.00%> (-0.18%)` | :arrow_down: |
| [datasette/views/database.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2RhdGFiYXNlLnB5) | `97.19% <0.00%> (-0.10%)` | :arrow_down: |
| [datasette/views/table.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL3RhYmxlLnB5) | `95.88% <0.00%> (-0.07%)` | :arrow_down: |
| [datasette/views/index.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2luZGV4LnB5) | `96.36% <0.00%> (-0.07%)` | :arrow_down: |
| [datasette/hookspecs.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL2hvb2tzcGVjcy5weQ==) | `100.00% <0.00%> (ø)` | |
| [datasette/utils/testing.py](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3V0aWxzL3Rlc3RpbmcucHk=) | `95.38% <0.00%> (ø)` | |
| ... and [5 more](https://codecov.io/gh/simonw/datasette/pull/1296/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
> `Δ = absolute (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [c73af5d...8f00c31](https://codecov.io/gh/simonw/datasette/pull/1296?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829,Dockerfile: use Ubuntu 20.10 as base,
https://github.com/simonw/datasette/pull/1400#issuecomment-882542519,https://api.github.com/repos/simonw/datasette/issues/1400,882542519,IC_kwDOBm6k_c40moe3,22429695,codecov[bot],2021-07-19T13:20:52Z,2021-07-19T13:20:52Z,NONE,"# [Codecov](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) Report
> Merging [#1400](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (e95c685) into [main](https://codecov.io/gh/simonw/datasette/commit/c73af5dd72305f6a01ea94a2c76d52e5e26de38b?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (c73af5d) will **not change** coverage.
> The diff coverage is `n/a`.
[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/1400/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
```diff
@@ Coverage Diff @@
## main #1400 +/- ##
=======================================
Coverage 91.62% 91.62%
=======================================
Files 34 34
Lines 4371 4371
=======================================
Hits 4005 4005
Misses 366 366
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
> `Δ = absolute (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [c73af5d...e95c685](https://codecov.io/gh/simonw/datasette/pull/1400?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",947640902,Bump black from 21.6b0 to 21.7b0,
https://github.com/simonw/datasette/issues/123#issuecomment-882138084,https://api.github.com/repos/simonw/datasette/issues/123,882138084,IC_kwDOBm6k_c40lFvk,9599,simonw,2021-07-19T00:04:31Z,2021-07-19T00:04:31Z,OWNER,"I've been thinking more about this one today too. An extension of this (touched on in #417, Datasette Library) would be to support pointing Datasette at a directory and having it automatically load any CSV files it finds anywhere in that folder or its descendants - either loading them fully, or providing a UI that allows users to select a file to open it in Datasette.
For larger files I think the right thing to do is import them into an on-disk SQLite database, which is limited only by available disk space. For smaller files loading them into an in-memory database should work fine.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats,
https://github.com/simonw/datasette/issues/123#issuecomment-882096402,https://api.github.com/repos/simonw/datasette/issues/123,882096402,IC_kwDOBm6k_c40k7kS,921217,RayBB,2021-07-18T18:07:29Z,2021-07-18T18:07:29Z,NONE,"I also love the idea for this feature and wonder if it could work without having to download the whole database into memory at once when it's a rather large db. Obviously this could be slower but could have many use cases.
My comment is partially inspired by this post about streaming sqlite dbs from github pages or such
https://news.ycombinator.com/item?id=27016630
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats,
https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-882091516,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,882091516,IC_kwDOD079W840k6X8,10793464,aaronyih1,2021-07-18T17:29:39Z,2021-07-18T17:33:02Z,NONE,Same here for US West (N. California) us-west-1. Running on Catalina.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload,
https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052852,https://api.github.com/repos/simonw/sqlite-utils/issues/297,882052852,IC_kwDOCGYnMM40kw70,9599,simonw,2021-07-18T12:59:20Z,2021-07-18T12:59:20Z,OWNER,I'm not too worried about `sqlite-utils memory` because if your data is large enough that you can benefit from this optimization you probably should use a real file as opposed to a disposable memory database when analyzing it.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944846776,Option for importing CSV data using the SQLite .import mechanism,
https://github.com/simonw/datasette/issues/1199#issuecomment-881932880,https://api.github.com/repos/simonw/datasette/issues/1199,881932880,IC_kwDOBm6k_c40kTpQ,9599,simonw,2021-07-17T17:39:17Z,2021-07-17T17:39:17Z,OWNER,"I asked about optimizing performance on the SQLite forum and this came up as a suggestion: https://sqlite.org/forum/forumpost/9a6b9ae8e2048c8b?t=c
I can start by trying this:
PRAGMA mmap_size=268435456;","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",792652391,Experiment with PRAGMA mmap_size=N,
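A minimal sketch of trying that pragma on a plain `sqlite3` connection (the filename below is just a placeholder): 268435456 bytes is 256MB, and SQLite silently caps the value at its compile-time maximum, so asking for too much is harmless.

```python
import sqlite3

# Sketch: enable 256MB of memory-mapped I/O on one connection,
# then read the setting back to see what SQLite actually accepted.
conn = sqlite3.connect("fixtures.db")
conn.execute("PRAGMA mmap_size=268435456;")
print(conn.execute("PRAGMA mmap_size;").fetchone())
```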
https://github.com/simonw/datasette/issues/1396#issuecomment-881686662,https://api.github.com/repos/simonw/datasette/issues/1396,881686662,IC_kwDOBm6k_c40jXiG,9599,simonw,2021-07-16T20:02:44Z,2021-07-16T20:02:44Z,OWNER,Confirmed fixed: 0.58.1 was successfully published to Docker Hub in https://github.com/simonw/datasette/runs/3089447346 and the `latest` tag on https://hub.docker.com/r/datasetteproject/datasette/tags was updated.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1231#issuecomment-881677620,https://api.github.com/repos/simonw/datasette/issues/1231,881677620,IC_kwDOBm6k_c40jVU0,9599,simonw,2021-07-16T19:44:12Z,2021-07-16T19:44:12Z,OWNER,"That fixed the race condition in the `datasette-graphql` tests, which is the only place that I've been able to successfully replicate this. I'm going to land this change.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881674857,https://api.github.com/repos/simonw/datasette/issues/1231,881674857,IC_kwDOBm6k_c40jUpp,9599,simonw,2021-07-16T19:38:39Z,2021-07-16T19:38:39Z,OWNER,I can't replicate the race condition locally with or without this patch. I'm going to push the commit and then test the CI run from `datasette-graphql` that was failing against it.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881671706,https://api.github.com/repos/simonw/datasette/issues/1231,881671706,IC_kwDOBm6k_c40jT4a,9599,simonw,2021-07-16T19:32:05Z,2021-07-16T19:32:05Z,OWNER,The test suite passes with that change.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881668759,https://api.github.com/repos/simonw/datasette/issues/1231,881668759,IC_kwDOBm6k_c40jTKX,9599,simonw,2021-07-16T19:27:46Z,2021-07-16T19:27:46Z,OWNER,"Second attempt at this:
```diff
diff --git a/datasette/app.py b/datasette/app.py
index 5976d8b..5f348cb 100644
--- a/datasette/app.py
+++ b/datasette/app.py
@@ -224,6 +224,7 @@ class Datasette:
self.inspect_data = inspect_data
self.immutables = set(immutables or [])
self.databases = collections.OrderedDict()
+ self._refresh_schemas_lock = asyncio.Lock()
self.crossdb = crossdb
if memory or crossdb or not self.files:
self.add_database(Database(self, is_memory=True), name=""_memory"")
@@ -332,6 +333,12 @@ class Datasette:
self.client = DatasetteClient(self)
async def refresh_schemas(self):
+ if self._refresh_schemas_lock.locked():
+ return
+ async with self._refresh_schemas_lock:
+ await self._refresh_schemas()
+
+ async def _refresh_schemas(self):
internal_db = self.databases[""_internal""]
if not self.internal_db_created:
await init_internal_db(internal_db)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881665383,https://api.github.com/repos/simonw/datasette/issues/1231,881665383,IC_kwDOBm6k_c40jSVn,9599,simonw,2021-07-16T19:21:35Z,2021-07-16T19:21:35Z,OWNER,"https://stackoverflow.com/a/25799871/6083 has a good example of using `asyncio.Lock()`:
```python
import asyncio
import aiohttp

cache = {}
stuff_lock = asyncio.Lock()

async def get_stuff(url):
async with stuff_lock:
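        # while one coroutine holds the lock the others wait here, then find the URL already cached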
if url in cache:
return cache[url]
stuff = await aiohttp.request('GET', url)
cache[url] = stuff
return stuff
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881664408,https://api.github.com/repos/simonw/datasette/issues/1231,881664408,IC_kwDOBm6k_c40jSGY,9599,simonw,2021-07-16T19:19:35Z,2021-07-16T19:19:35Z,OWNER,"The only place that calls `refresh_schemas()` is here: https://github.com/simonw/datasette/blob/dd5ee8e66882c94343cd3f71920878c6cfd0da41/datasette/views/base.py#L120-L124
Ideally only one call to `refresh_schemas()` would be running at any one time.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881663968,https://api.github.com/repos/simonw/datasette/issues/1231,881663968,IC_kwDOBm6k_c40jR_g,9599,simonw,2021-07-16T19:18:42Z,2021-07-16T19:18:42Z,OWNER,The race condition happens inside this method - initially with the call to `await init_internal_db()`: https://github.com/simonw/datasette/blob/dd5ee8e66882c94343cd3f71920878c6cfd0da41/datasette/app.py#L334-L359,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881204782,https://api.github.com/repos/simonw/datasette/issues/1231,881204782,IC_kwDOBm6k_c40hh4u,9599,simonw,2021-07-16T06:14:12Z,2021-07-16T06:14:12Z,OWNER,"Here's the traceback I got from `datasette-graphql` (annoyingly only running the tests in GitHub Actions CI - I've not been able to replicate on my laptop yet):
```
tests/test_utils.py . [100%]
=================================== FAILURES ===================================
_________________________ test_graphql_examples[path0] _________________________
ds =
path = PosixPath('/home/runner/work/datasette-graphql/datasette-graphql/examples/filters.md')
@pytest.mark.asyncio
@pytest.mark.parametrize(
""path"", (pathlib.Path(__file__).parent.parent / ""examples"").glob(""*.md"")
)
async def test_graphql_examples(ds, path):
content = path.read_text()
query = graphql_re.search(content)[1]
try:
variables = variables_re.search(content)[1]
except TypeError:
variables = ""{}""
expected = json.loads(json_re.search(content)[1])
response = await ds.client.post(
""/graphql"",
json={
""query"": query,
""variables"": json.loads(variables),
},
)
> assert response.status_code == 200, response.json()
E AssertionError: {'data': {'repos_arraycontains': None, 'users_contains': None, 'users_date': None, 'users_endswith': None, ...}, 'erro..."", 'path': ['users_gt']}, {'locations': [{'column': 5, 'line': 34}], 'message': ""'rows'"", 'path': ['users_gte']}, ...]}
E assert 500 == 200
E + where 500 = .status_code
tests/test_graphql.py:142: AssertionError
----------------------------- Captured stderr call -----------------------------
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
table databases already exists
Traceback (most recent call last):
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/app.py"", line 1171, in route_path
response = await view(request, send)
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/views/base.py"", line 151, in view
request, **request.scope[""url_route""][""kwargs""]
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/views/base.py"", line 123, in dispatch_request
await self.ds.refresh_schemas()
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/app.py"", line 338, in refresh_schemas
await init_internal_db(internal_db)
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/utils/internal_db.py"", line 16, in init_internal_db
block=True,
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/database.py"", line 102, in execute_write
return await self.execute_write_fn(_inner, block=block)
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/database.py"", line 118, in execute_write_fn
raise result
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/database.py"", line 139, in _execute_writes
result = task.fn(conn)
File ""/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/site-packages/datasette/database.py"", line 100, in _inner
return conn.execute(sql, params or [])
sqlite3.OperationalError: table databases already exists
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1231#issuecomment-881204343,https://api.github.com/repos/simonw/datasette/issues/1231,881204343,IC_kwDOBm6k_c40hhx3,9599,simonw,2021-07-16T06:13:11Z,2021-07-16T06:13:11Z,OWNER,This just broke the `datasette-graphql` test suite: https://github.com/simonw/datasette-graphql/issues/77 - I need to figure out a solution here.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811367257,Race condition errors in new refresh_schemas() mechanism,
https://github.com/simonw/datasette/issues/1394#issuecomment-881129149,https://api.github.com/repos/simonw/datasette/issues/1394,881129149,IC_kwDOBm6k_c40hPa9,9599,simonw,2021-07-16T02:23:32Z,2021-07-16T02:23:32Z,OWNER,Wrote about this in the annotated release notes for 0.58: https://simonwillison.net/2021/Jul/16/datasette-058/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944870799,Big performance boost on faceting: skip the inner order by,
https://github.com/simonw/datasette/issues/759#issuecomment-881125124,https://api.github.com/repos/simonw/datasette/issues/759,881125124,IC_kwDOBm6k_c40hOcE,9599,simonw,2021-07-16T02:11:48Z,2021-07-16T02:11:54Z,OWNER,"I added `""searchmode"": ""raw""` as a supported option for table metadata in #1389 and released that in Datasette 0.58.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612673948,fts search on a column doesn't work anymore due to escape_fts,
https://github.com/simonw/datasette/issues/1396#issuecomment-880967052,https://api.github.com/repos/simonw/datasette/issues/1396,880967052,MDEyOklzc3VlQ29tbWVudDg4MDk2NzA1Mg==,9599,simonw,2021-07-15T19:47:25Z,2021-07-15T19:47:25Z,OWNER,Actually I'm going to close this now and re-open it if the problem occurs again in the future.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1394#issuecomment-880900534,https://api.github.com/repos/simonw/datasette/issues/1394,880900534,MDEyOklzc3VlQ29tbWVudDg4MDkwMDUzNA==,9599,simonw,2021-07-15T17:58:03Z,2021-07-15T17:58:03Z,OWNER,Started a conversation about this on the SQLite forum: https://sqlite.org/forum/forumpost/2d76f2bcf65d256a?t=h,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944870799,Big performance boost on faceting: skip the inner order by,
https://github.com/simonw/datasette/issues/1396#issuecomment-880374156,https://api.github.com/repos/simonw/datasette/issues/1396,880374156,MDEyOklzc3VlQ29tbWVudDg4MDM3NDE1Ng==,9599,simonw,2021-07-15T04:03:18Z,2021-07-15T04:03:18Z,OWNER,"I fixed `datasette:latest` by running the following on my laptop:
```
docker pull datasetteproject/datasette:0.58
docker tag datasetteproject/datasette:0.58 datasetteproject/datasette:latest
docker login -u datasetteproject -p ...
docker push datasetteproject/datasette:latest
```
Confirmed on https://hub.docker.com/r/datasetteproject/datasette/tags?page=1&ordering=last_updated that `datasette:latest` and `datasette:0.58` both now have the same digest of `3b5ba478040e`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1396#issuecomment-880372149,https://api.github.com/repos/simonw/datasette/issues/1396,880372149,MDEyOklzc3VlQ29tbWVudDg4MDM3MjE0OQ==,9599,simonw,2021-07-15T03:56:49Z,2021-07-15T03:56:49Z,OWNER,I'm going to leave this open until I next successfully publish a new version.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1396#issuecomment-880326049,https://api.github.com/repos/simonw/datasette/issues/1396,880326049,MDEyOklzc3VlQ29tbWVudDg4MDMyNjA0OQ==,9599,simonw,2021-07-15T01:50:05Z,2021-07-15T01:50:05Z,OWNER,"I think I made a mistake in this commit: https://github.com/simonw/datasette/commit/0486303b60ce2784fd2e2ecdbecf304b7d6e6659
It looks like I copied `$VERSION_TAG` from here - but it's not available in the `publish.yml` flow: https://github.com/simonw/datasette/blob/0486303b60ce2784fd2e2ecdbecf304b7d6e6659/.github/workflows/push_docker_tag.yml#L18-L25","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1396#issuecomment-880325362,https://api.github.com/repos/simonw/datasette/issues/1396,880325362,MDEyOklzc3VlQ29tbWVudDg4MDMyNTM2Mg==,9599,simonw,2021-07-15T01:48:11Z,2021-07-15T01:48:11Z,OWNER,In particular these three lines: https://github.com/simonw/datasette/blob/084cfe1e00e1a4c0515390a513aca286eeea20c2/.github/workflows/publish.yml#L117-L119,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1396#issuecomment-880325004,https://api.github.com/repos/simonw/datasette/issues/1396,880325004,MDEyOklzc3VlQ29tbWVudDg4MDMyNTAwNA==,9599,simonw,2021-07-15T01:47:17Z,2021-07-15T01:47:17Z,OWNER,"This is the part of the publish workflow that failed and threw the ""invalid reference format"" error: https://github.com/simonw/datasette/blob/084cfe1e00e1a4c0515390a513aca286eeea20c2/.github/workflows/publish.yml#L100-L119","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1396#issuecomment-880324637,https://api.github.com/repos/simonw/datasette/issues/1396,880324637,MDEyOklzc3VlQ29tbWVudDg4MDMyNDYzNw==,9599,simonw,2021-07-15T01:46:26Z,2021-07-15T01:46:26Z,OWNER,"I manually published the Docker image using https://github.com/simonw/datasette/actions/workflows/push_docker_tag.yml https://github.com/simonw/datasette/runs/3072505126
The 0.58 release shows up on https://hub.docker.com/r/datasetteproject/datasette/tags?page=1&ordering=last_updated now - BUT the `latest` tag still points to a version from a month ago.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image",
https://github.com/simonw/datasette/issues/1394#issuecomment-880287483,https://api.github.com/repos/simonw/datasette/issues/1394,880287483,MDEyOklzc3VlQ29tbWVudDg4MDI4NzQ4Mw==,9599,simonw,2021-07-15T00:01:47Z,2021-07-15T00:01:47Z,OWNER,"I wrote this code:
```python
import re

import pytest

_order_by_re = re.compile(r""(^.*) order by [a-zA-Z_][a-zA-Z0-9_]+( desc)?$"", re.DOTALL)
_order_by_braces_re = re.compile(r""(^.*) order by \[[^\]]+\]( desc)?$"", re.DOTALL)
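# (the two patterns above match a trailing bare column name or a [bracket-quoted] one, each with an optional ' desc')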
def strip_order_by(sql):
for regex in (_order_by_re, _order_by_braces_re):
match = regex.match(sql)
if match is not None:
return match.group(1)
    return sql


@pytest.mark.parametrize(
""sql,expected"",
[
(""blah"", ""blah""),
(""select * from foo"", ""select * from foo""),
(""select * from foo order by bah"", ""select * from foo""),
(""select * from foo order by bah desc"", ""select * from foo""),
(""select * from foo order by [select]"", ""select * from foo""),
(""select * from foo order by [select] desc"", ""select * from foo""),
],
)
def test_strip_order_by(sql, expected):
assert strip_order_by(sql) == expected
```
But it turns out I don't need it! The SQL that is passed to the facet class is created by this code: https://github.com/simonw/datasette/blob/ba11ef27edd6981eeb26d7ecf5aa236707f5f8ce/datasette/views/table.py#L677-L684
And the only place that uses that `sql_no_limit` variable is here: https://github.com/simonw/datasette/blob/ba11ef27edd6981eeb26d7ecf5aa236707f5f8ce/datasette/views/table.py#L733-L745
So I can change that to `sql_no_limit_no_order` and fix the bug that way instead.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944870799,Big performance boost on faceting: skip the inner order by,
https://github.com/simonw/datasette/issues/1394#issuecomment-880278256,https://api.github.com/repos/simonw/datasette/issues/1394,880278256,MDEyOklzc3VlQ29tbWVudDg4MDI3ODI1Ng==,9599,simonw,2021-07-14T23:35:18Z,2021-07-14T23:35:18Z,OWNER,"The challenge here is that faceting doesn't currently modify the inner SQL at all - it wraps it so that it can work against any SQL statement (though Datasette itself does not yet take advantage of that ability, only offering faceting on table pages).
So just removing the order by wouldn't be appropriate if the inner query looked something like this:
```sql
select * from items order by created desc limit 100
```
Since the intent there would be to return facet counts against only the most recent 100 items.
In SQLite the `limit` has to come after the `order by` though, so the fix here could be as easy as using a regular expression to identify queries that end with `order by COLUMN (desc)?` and stripping off that clause.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944870799,Big performance boost on faceting: skip the inner order by,
https://github.com/simonw/sqlite-utils/issues/297#issuecomment-880259255,https://api.github.com/repos/simonw/sqlite-utils/issues/297,880259255,MDEyOklzc3VlQ29tbWVudDg4MDI1OTI1NQ==,9599,simonw,2021-07-14T22:48:41Z,2021-07-14T22:48:41Z,OWNER,Should also take advantage of `.mode tabs` to support `sqlite-utils insert blah.db blah blah.csv --tsv --fast`,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944846776,Option for importing CSV data using the SQLite .import mechanism,
https://github.com/simonw/sqlite-utils/issues/297#issuecomment-880257587,https://api.github.com/repos/simonw/sqlite-utils/issues/297,880257587,MDEyOklzc3VlQ29tbWVudDg4MDI1NzU4Nw==,9599,simonw,2021-07-14T22:44:05Z,2021-07-14T22:44:05Z,OWNER,"https://unix.stackexchange.com/a/642364 suggests you can also use this to import from stdin, like so:
sqlite3 -csv $database_file_name "".import '|cat -' $table_name""
Here the `sqlite3 -csv` is an alternative to using `.mode csv`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944846776,Option for importing CSV data using the SQLite .import mechanism,
https://github.com/simonw/sqlite-utils/issues/297#issuecomment-880256865,https://api.github.com/repos/simonw/sqlite-utils/issues/297,880256865,MDEyOklzc3VlQ29tbWVudDg4MDI1Njg2NQ==,9599,simonw,2021-07-14T22:42:11Z,2021-07-14T22:42:11Z,OWNER,"Potential workaround for missing `--skip` implementation is that the filename can be a command instead, so maybe it could shell out to `tail -n +2 filename` to skip the header row:
> The source argument is the name of a file to be read or, if it begins with a ""|"" character, specifies a command which will be run to produce the input CSV data.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944846776,Option for importing CSV data using the SQLite .import mechanism,
https://github.com/simonw/sqlite-utils/issues/297#issuecomment-880256058,https://api.github.com/repos/simonw/sqlite-utils/issues/297,880256058,MDEyOklzc3VlQ29tbWVudDg4MDI1NjA1OA==,9599,simonw,2021-07-14T22:40:01Z,2021-07-14T22:40:47Z,OWNER,"Full docs here: https://www.sqlite.org/draft/cli.html#csv
One catch: how this works has changed in recent SQLite versions: https://www.sqlite.org/changes.html
- 2020-12-01 (3.34.0) - ""Table name quoting works correctly for the .import dot-command""
- 2020-05-22 (3.32.0) - ""Add options to the .import command: --csv, --ascii, --skip""
- 2017-08-01 (3.20.0) - ""The "".import"" command ignores an initial UTF-8 BOM.""
The ""skip"" feature is particularly important to understand. https://www.sqlite.org/draft/cli.html#csv says:
> There are two cases to consider: (1) Table ""tab1"" does not previously exist and (2) table ""tab1"" does already exist.
>
> In the first case, when the table does not previously exist, the table is automatically created and the content of the first row of the input CSV file is used to determine the name of all the columns in the table. In other words, if the table does not previously exist, the first row of the CSV file is interpreted to be column names and the actual data starts on the second row of the CSV file.
>
> For the second case, when the table already exists, every row of the CSV file, including the first row, is assumed to be actual content. If the CSV file contains an initial row of column labels, you can cause the .import command to skip that initial row using the ""--skip 1"" option.
But the `--skip 1` option is only available in 3.32.0 and higher.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944846776,Option for importing CSV data using the SQLite .import mechanism,
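A rough sketch of driving that `.import` mechanism from Python via the `sqlite3` CLI. This is only an illustration of the approach described above, not how `sqlite-utils` implements anything; the function name is made up.

```python
import subprocess

# Sketch: let the sqlite3 CLI do the CSV parsing and import.
# If the table does not exist it is created from the CSV header row;
# for a pre-existing table you would add --skip 1 (needs SQLite 3.32.0+).
# Paths containing spaces would need quoting inside the dot-command.
def fast_csv_import(db_path, table, csv_path):
    subprocess.run(
        ["sqlite3", "-csv", db_path, ".import {} {}".format(csv_path, table)],
        check=True,
    )
```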
https://github.com/simonw/datasette/issues/268#issuecomment-880153069,https://api.github.com/repos/simonw/datasette/issues/268,880153069,MDEyOklzc3VlQ29tbWVudDg4MDE1MzA2OQ==,9599,simonw,2021-07-14T19:31:00Z,2021-07-14T19:31:00Z,OWNER,"... though interestingly I can't replicate that error on `latest.datasette.io` - https://latest.datasette.io/fixtures/searchable?_search=park.&_searchmode=raw
That's running https://latest.datasette.io/-/versions SQLite 3.35.4 whereas https://www.niche-museums.com/-/versions is running 3.27.2 (the most recent version available with Vercel) - but there's nothing in the SQLite changelog between those two versions that suggests changes to how the FTS5 parser works. https://www.sqlite.org/changes.html","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323718842,Mechanism for ranking results from SQLite full-text search,
https://github.com/simonw/datasette/issues/268#issuecomment-880150755,https://api.github.com/repos/simonw/datasette/issues/268,880150755,MDEyOklzc3VlQ29tbWVudDg4MDE1MDc1NQ==,9599,simonw,2021-07-14T19:26:47Z,2021-07-14T19:29:08Z,OWNER,"> What are the side-effects of turning that on in the query string, or even by default as you suggested? I see that you stated in the docs... ""to ensure they do not cause any confusion for users who are not aware of them"", but I'm not sure what those could be.
Mainly that it's possible to generate SQL queries that crash with an error. This was the example that convinced me to default to escaping:
- https://www.niche-museums.com/browse/museums?_search=park.&_searchmode=raw (returns `fts5: syntax error near "".""`)
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323718842,Mechanism for ranking results from SQLite full-text search,
https://github.com/simonw/datasette/issues/651#issuecomment-579675357,https://api.github.com/repos/simonw/datasette/issues/651,579675357,MDEyOklzc3VlQ29tbWVudDU3OTY3NTM1Nw==,2181410,clausjuhl,2020-01-29T09:45:00Z,2021-07-14T19:26:06Z,NONE,"Hi Simon
Thank you for adding the escape_function, but it does not work on my datasette-installation (0.33). I've added the following file to my datasette-dir: `/plugins/sql_functions.py`:
```python
from datasette import hookimpl
def escape_fts_query(query):
bits = query.split()
return ' '.join('""{}""'.format(bit.replace('""', '')) for bit in bits)
@hookimpl
def prepare_connection(conn):
    conn.create_function(""escape_fts_query"", 1, escape_fts_query)
```
It has no effect on the standard queries to the tables though, as they still produce errors when including any characters like '-', '/', '+' or '?'
Does the function only work when using custom queries, where I can include the escape_fts-function explicitly in the sql-query?
PS. I'm calling datasette with --plugins=plugins, and my other plugins work just fine.
PPS. The fts5 virtual table is created with 'sqlite3' like so:
```sql
CREATE VIRTUAL TABLE ""cases_fts"" USING FTS5(
title,
subtitle,
resume,
suggestion,
presentation,
detail = full,
content_rowid = 'id',
content = 'cases',
tokenize='unicode61', 'remove_diacritics 2', 'tokenchars ""-_""'
);
```
Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",539590148,fts5 syntax error when using punctuation,
https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-879477586,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,879477586,MDEyOklzc3VlQ29tbWVudDg3OTQ3NzU4Ng==,9599,simonw,2021-07-13T23:50:06Z,2021-07-13T23:50:06Z,MEMBER,"Unfortunately I don't think updating the database is practical, because the export doesn't include unique identifiers which can be used to update existing records and create new ones. Recreating from scratch works around that limitation.
I've not explored workouts with SpatiaLite but that's a really good idea.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text",
https://github.com/simonw/datasette/pull/1393#issuecomment-879309636,https://api.github.com/repos/simonw/datasette/issues/1393,879309636,MDEyOklzc3VlQ29tbWVudDg3OTMwOTYzNg==,9599,simonw,2021-07-13T18:32:25Z,2021-07-13T18:32:25Z,OWNER,Thanks,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",941412189,Update deploying.rst,
https://github.com/simonw/datasette/pull/1392#issuecomment-879277953,https://api.github.com/repos/simonw/datasette/issues/1392,879277953,MDEyOklzc3VlQ29tbWVudDg3OTI3Nzk1Mw==,9599,simonw,2021-07-13T17:42:31Z,2021-07-13T17:42:31Z,OWNER,Thanks!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",941403676,Update deploying.rst,
https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-877874117,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,877874117,MDEyOklzc3VlQ29tbWVudDg3Nzg3NDExNw==,956433,Mjboothaus,2021-07-11T23:03:37Z,2021-07-11T23:03:37Z,NONE,P.s. wondering if you have explored using the spatialite functionality with the location data in workouts?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text",
https://github.com/simonw/datasette/issues/511#issuecomment-877835171,https://api.github.com/repos/simonw/datasette/issues/511,877835171,MDEyOklzc3VlQ29tbWVudDg3NzgzNTE3MQ==,9599,simonw,2021-07-11T17:23:05Z,2021-07-11T17:23:05Z,OWNER," == 87 failed, 819 passed, 7 skipped, 29 errors in 2584.85s (0:43:04) ==
https://github.com/simonw/datasette/runs/3038188870?check_suite_focus=true
Full copy of log here: https://gist.github.com/simonw/4b1fdd24496b989fca56bc757be345ad","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions,
https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-877805513,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,877805513,MDEyOklzc3VlQ29tbWVudDg3NzgwNTUxMw==,956433,Mjboothaus,2021-07-11T14:03:01Z,2021-07-11T14:03:01Z,NONE,"Hi Simon -- just experimenting with your excellent software!
Up to this point in time I have been using the (paid) [HealthFit App](https://apps.apple.com/au/app/healthfit/id1202650514) to export my workouts from my Apple Watch, one walk at a time, into either .GPX or .FIT format and then using another library to suck it into Python and eventually here to my ""Emmaus Walking"" app:
https://share.streamlit.io/mjboothaus/emmaus_walking/emmaus_walking/app.py
I just used `healthkit-to-sqlite` to convert my export.zip file and it all ""just worked"".
I did notice the issue with various numeric fields being stored in the SQLite db as TEXT for now and just thought I'd flag it - but you've already self-reported this issue.
Keep up the great work!
I was curious if you have any thoughts about periodically exporting ""export.zip"" and how to just update the SQLite file instead of re-creating it each time. Hopefully Apple will give some thought to managing this data in a more sensible fashion as it grows over time. Ideally one could pull it from iCloud (where it is allegedly being backed up).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text",
https://github.com/simonw/datasette/issues/511#issuecomment-877726495,https://api.github.com/repos/simonw/datasette/issues/511,877726495,MDEyOklzc3VlQ29tbWVudDg3NzcyNjQ5NQ==,9599,simonw,2021-07-11T01:32:27Z,2021-07-11T01:32:27Z,OWNER,"I'm using `pytest-xdist` and this:
pytest -n auto -m ""not serial""
I'll try not using the `-n auto` bit on Windows and see if that helps.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions,
https://github.com/simonw/datasette/issues/511#issuecomment-877726288,https://api.github.com/repos/simonw/datasette/issues/511,877726288,MDEyOklzc3VlQ29tbWVudDg3NzcyNjI4OA==,9599,simonw,2021-07-11T01:29:41Z,2021-07-11T01:29:41Z,OWNER,"Lots of errors that look like this:
```
2021-07-11T00:40:32.1189321Z E NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpdr41pgwg\\data.db'
2021-07-11T00:40:32.1190083Z
2021-07-11T00:40:32.1191128Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\shutil.py:596: NotADirectoryError
2021-07-11T00:40:32.1191999Z ___________________ ERROR at teardown of test_insert_error ____________________
2021-07-11T00:40:32.1192842Z [gw1] win32 -- Python 3.8.10 c:\hostedtoolcache\windows\python\3.8.10\x64\python.exe
2021-07-11T00:40:32.1193387Z
2021-07-11T00:40:32.1193930Z path = 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpry729pq_'
2021-07-11T00:40:32.1194876Z onerror = .onerror at 0x00000291FCEA93A0>
2021-07-11T00:40:32.1195480Z
2021-07-11T00:40:32.1195927Z def _rmtree_unsafe(path, onerror):
2021-07-11T00:40:32.1196435Z try:
2021-07-11T00:40:32.1196910Z with os.scandir(path) as scandir_it:
2021-07-11T00:40:32.1197504Z entries = list(scandir_it)
2021-07-11T00:40:32.1198002Z except OSError:
2021-07-11T00:40:32.1198607Z onerror(os.scandir, path, sys.exc_info())
2021-07-11T00:40:32.1199137Z entries = []
2021-07-11T00:40:32.1199637Z for entry in entries:
2021-07-11T00:40:32.1200184Z fullname = entry.path
2021-07-11T00:40:32.1200692Z if _rmtree_isdir(entry):
2021-07-11T00:40:32.1201198Z try:
2021-07-11T00:40:32.1201643Z if entry.is_symlink():
2021-07-11T00:40:32.1202280Z # This can only happen if someone replaces
2021-07-11T00:40:32.1202944Z # a directory with a symlink after the call to
2021-07-11T00:40:32.1203623Z # os.scandir or entry.is_dir above.
2021-07-11T00:40:32.1204303Z raise OSError(""Cannot call rmtree on a symbolic link"")
2021-07-11T00:40:32.1204942Z except OSError:
2021-07-11T00:40:32.1206416Z onerror(os.path.islink, fullname, sys.exc_info())
2021-07-11T00:40:32.1207022Z continue
2021-07-11T00:40:32.1207584Z _rmtree_unsafe(fullname, onerror)
2021-07-11T00:40:32.1208074Z else:
2021-07-11T00:40:32.1208496Z try:
2021-07-11T00:40:32.1208926Z > os.unlink(fullname)
2021-07-11T00:40:32.1210053Z E PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpry729pq_\\data.db'
2021-07-11T00:40:32.1210974Z
2021-07-11T00:40:32.1211638Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\shutil.py:616: PermissionError
2021-07-11T00:40:32.1212211Z
2021-07-11T00:40:32.1212846Z During handling of the above exception, another exception occurred:
2021-07-11T00:40:32.1213320Z
2021-07-11T00:40:32.1213797Z func =
2021-07-11T00:40:32.1214529Z path = 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpry729pq_\\data.db'
2021-07-11T00:40:32.1215763Z exc_info = (, PermissionError(13, 'The process cannot access the file because it is being used by another process'), )
2021-07-11T00:40:32.1217263Z
2021-07-11T00:40:32.1217777Z def onerror(func, path, exc_info):
2021-07-11T00:40:32.1218421Z if issubclass(exc_info[0], PermissionError):
2021-07-11T00:40:32.1219079Z def resetperms(path):
2021-07-11T00:40:32.1219518Z try:
2021-07-11T00:40:32.1219992Z _os.chflags(path, 0)
2021-07-11T00:40:32.1220535Z except AttributeError:
2021-07-11T00:40:32.1221110Z pass
2021-07-11T00:40:32.1221545Z _os.chmod(path, 0o700)
2021-07-11T00:40:32.1221984Z
2021-07-11T00:40:32.1222330Z try:
2021-07-11T00:40:32.1222768Z if path != name:
2021-07-11T00:40:32.1223332Z resetperms(_os.path.dirname(path))
2021-07-11T00:40:32.1223963Z resetperms(path)
2021-07-11T00:40:32.1224408Z
2021-07-11T00:40:32.1224749Z try:
2021-07-11T00:40:32.1225954Z > _os.unlink(path)
2021-07-11T00:40:32.1227032Z E PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpry729pq_\\data.db'
2021-07-11T00:40:32.1227927Z
2021-07-11T00:40:32.1228646Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\tempfile.py:802: PermissionError
2021-07-11T00:40:32.1229200Z
2021-07-11T00:40:32.1229842Z During handling of the above exception, another exception occurred:
2021-07-11T00:40:32.1230355Z
2021-07-11T00:40:32.1230783Z @pytest.fixture
2021-07-11T00:40:32.1231322Z def canned_write_client():
2021-07-11T00:40:32.1231805Z with make_app_client(
2021-07-11T00:40:32.1232467Z extra_databases={""data.db"": ""create table names (name text)""},
2021-07-11T00:40:32.1233104Z metadata={
2021-07-11T00:40:32.1233535Z ""databases"": {
2021-07-11T00:40:32.1233989Z ""data"": {
2021-07-11T00:40:32.1234416Z ""queries"": {
2021-07-11T00:40:32.1235001Z ""canned_read"": {""sql"": ""select * from names""},
2021-07-11T00:40:32.1235527Z ""add_name"": {
2021-07-11T00:40:32.1236117Z ""sql"": ""insert into names (name) values (:name)"",
2021-07-11T00:40:32.1236686Z ""write"": True,
2021-07-11T00:40:32.1237317Z ""on_success_redirect"": ""/data/add_name?success"",
2021-07-11T00:40:32.1237882Z },
2021-07-11T00:40:32.1238331Z ""add_name_specify_id"": {
2021-07-11T00:40:32.1239009Z ""sql"": ""insert into names (rowid, name) values (:rowid, :name)"",
2021-07-11T00:40:32.1239610Z ""write"": True,
2021-07-11T00:40:32.1240259Z ""on_error_redirect"": ""/data/add_name_specify_id?error"",
2021-07-11T00:40:32.1240839Z },
2021-07-11T00:40:32.1241320Z ""delete_name"": {
2021-07-11T00:40:32.1242504Z ""sql"": ""delete from names where rowid = :rowid"",
2021-07-11T00:40:32.1243127Z ""write"": True,
2021-07-11T00:40:32.1243721Z ""on_success_message"": ""Name deleted"",
2021-07-11T00:40:32.1244282Z ""allow"": {""id"": ""root""},
2021-07-11T00:40:32.1244749Z },
2021-07-11T00:40:32.1245959Z ""update_name"": {
2021-07-11T00:40:32.1246614Z ""sql"": ""update names set name = :name where rowid = :rowid"",
2021-07-11T00:40:32.1247267Z ""params"": [""rowid"", ""name"", ""extra""],
2021-07-11T00:40:32.1247828Z ""write"": True,
2021-07-11T00:40:32.1248247Z },
2021-07-11T00:40:32.1248653Z }
2021-07-11T00:40:32.1249166Z }
2021-07-11T00:40:32.1249577Z }
2021-07-11T00:40:32.1249962Z },
2021-07-11T00:40:32.1250333Z ) as client:
2021-07-11T00:40:32.1250822Z > yield client
2021-07-11T00:40:32.1251078Z
2021-07-11T00:40:32.1251678Z D:\a\datasette\datasette\tests\test_canned_queries.py:43:
2021-07-11T00:40:32.1252347Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2021-07-11T00:40:32.1253040Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\contextlib.py:120: in __exit__
2021-07-11T00:40:32.1253759Z next(self.gen)
2021-07-11T00:40:32.1254398Z D:\a\datasette\datasette\tests\fixtures.py:156: in make_app_client
2021-07-11T00:40:32.1255098Z yield TestClient(ds)
2021-07-11T00:40:32.1255796Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\tempfile.py:827: in __exit__
2021-07-11T00:40:32.1256510Z self.cleanup()
2021-07-11T00:40:32.1257200Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\tempfile.py:831: in cleanup
2021-07-11T00:40:32.1257961Z self._rmtree(self.name)
2021-07-11T00:40:32.1258712Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\tempfile.py:813: in _rmtree
2021-07-11T00:40:32.1259487Z _shutil.rmtree(name, onerror=onerror)
2021-07-11T00:40:32.1260280Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\shutil.py:740: in rmtree
2021-07-11T00:40:32.1261039Z return _rmtree_unsafe(path, onerror)
2021-07-11T00:40:32.1261843Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\shutil.py:618: in _rmtree_unsafe
2021-07-11T00:40:32.1262633Z onerror(os.unlink, fullname, sys.exc_info())
2021-07-11T00:40:32.1263456Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\tempfile.py:805: in onerror
2021-07-11T00:40:32.1264175Z cls._rmtree(path)
2021-07-11T00:40:32.1264848Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\tempfile.py:813: in _rmtree
2021-07-11T00:40:32.1266329Z _shutil.rmtree(name, onerror=onerror)
2021-07-11T00:40:32.1267082Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\shutil.py:740: in rmtree
2021-07-11T00:40:32.1267858Z return _rmtree_unsafe(path, onerror)
2021-07-11T00:40:32.1268615Z c:\hostedtoolcache\windows\python\3.8.10\x64\lib\shutil.py:599: in _rmtree_unsafe
2021-07-11T00:40:32.1269440Z onerror(os.scandir, path, sys.exc_info())
2021-07-11T00:40:32.1269979Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2021-07-11T00:40:32.1270287Z
2021-07-11T00:40:32.1270947Z path = 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpry729pq_\\data.db'
2021-07-11T00:40:32.1273356Z onerror = .onerror at 0x00000291FCF40E50>
2021-07-11T00:40:32.1273999Z
2021-07-11T00:40:32.1274493Z def _rmtree_unsafe(path, onerror):
2021-07-11T00:40:32.1274953Z try:
2021-07-11T00:40:32.1275461Z > with os.scandir(path) as scandir_it:
2021-07-11T00:40:32.1276459Z E NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpry729pq_\\data.db'
2021-07-11T00:40:32.1277220Z
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions,
https://github.com/simonw/datasette/issues/511#issuecomment-877725742,https://api.github.com/repos/simonw/datasette/issues/511,877725742,MDEyOklzc3VlQ29tbWVudDg3NzcyNTc0Mg==,9599,simonw,2021-07-11T01:25:01Z,2021-07-11T01:26:38Z,OWNER,"That's weird. https://github.com/simonw/datasette/runs/3037862798 finished running and came up green - but actually a TON of the tests failed on Windows. Not sure why that didn't fail the whole test suite:
Also the test suite took 50 minutes on Windows!
Here's a copy of the full log file for the tests on Python 3.8 on Windows: https://gist.github.com/simonw/2900ef33693c1bbda09188eb31c8212d","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions,
https://github.com/simonw/datasette/issues/1388#issuecomment-877725193,https://api.github.com/repos/simonw/datasette/issues/1388,877725193,MDEyOklzc3VlQ29tbWVudDg3NzcyNTE5Mw==,9599,simonw,2021-07-11T01:18:38Z,2021-07-11T01:18:38Z,OWNER,Wrote up a TIL: https://til.simonwillison.net/nginx/proxy-domain-sockets,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket,
https://github.com/simonw/datasette/issues/1388#issuecomment-877721003,https://api.github.com/repos/simonw/datasette/issues/1388,877721003,MDEyOklzc3VlQ29tbWVudDg3NzcyMTAwMw==,9599,simonw,2021-07-11T00:21:19Z,2021-07-11T00:21:19Z,OWNER,Documentation: https://docs.datasette.io/en/latest/deploying.html#nginx-proxy-configuration,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket,
https://github.com/simonw/datasette/issues/511#issuecomment-877718364,https://api.github.com/repos/simonw/datasette/issues/511,877718364,MDEyOklzc3VlQ29tbWVudDg3NzcxODM2NA==,9599,simonw,2021-07-10T23:54:37Z,2021-07-10T23:54:37Z,OWNER,"Looks like it's not even 10% of the way through, and already a bunch of errors:
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions,
https://github.com/simonw/datasette/issues/511#issuecomment-877718286,https://api.github.com/repos/simonw/datasette/issues/511,877718286,MDEyOklzc3VlQ29tbWVudDg3NzcxODI4Ng==,9599,simonw,2021-07-10T23:53:29Z,2021-07-10T23:53:29Z,OWNER,"Test suite on Windows seems to run a lot slower:
From https://github.com/simonw/datasette/actions/runs/1018938850 which is still going.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions,
https://github.com/simonw/datasette/issues/511#issuecomment-877717791,https://api.github.com/repos/simonw/datasette/issues/511,877717791,MDEyOklzc3VlQ29tbWVudDg3NzcxNzc5MQ==,9599,simonw,2021-07-10T23:45:35Z,2021-07-10T23:45:35Z,OWNER,"> Trying to run on Windows today, I get an error from the utils/asgi.py module.
>
> It's trying `from os import EX_CANTCREAT` which is Unix-only. I commented this line out, and (so far) it's working.
Good news: that line was removed in #1094.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions,
https://github.com/simonw/datasette/pull/868#issuecomment-650340914,https://api.github.com/repos/simonw/datasette/issues/868,650340914,MDEyOklzc3VlQ29tbWVudDY1MDM0MDkxNA==,22429695,codecov[bot],2020-06-26T18:53:02Z,2021-07-10T23:41:42Z,NONE,"# [Codecov](https://codecov.io/gh/simonw/datasette/pull/868?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) Report
> Merging [#868](https://codecov.io/gh/simonw/datasette/pull/868?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (b452fcb) into [master](https://codecov.io/gh/simonw/datasette/commit/000528192eaf891118932250141dabe7a1561ece?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (0005281) will **increase** coverage by `0.49%`.
> The diff coverage is `96.19%`.
> :exclamation: Current head b452fcb differs from pull request most recent head c99caba. Consider uploading reports for the commit c99caba to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/simonw/datasette/pull/868/graphs/tree.svg?width=650&height=150&src=pr&token=eSahVY7kw1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)](https://codecov.io/gh/simonw/datasette/pull/868?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
```diff
@@ Coverage Diff @@
## master #868 +/- ##
==========================================
+ Coverage 82.91% 83.40% +0.49%
==========================================
Files 26 27 +1
Lines 3547 3634 +87
==========================================
+ Hits 2941 3031 +90
+ Misses 606 603 -3
```
| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/868?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) | Coverage Δ | |
|---|---|---|
| [datasette/plugins.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3BsdWdpbnMucHk=) | `82.35% <ø> (ø)` | |
| [datasette/default\_magic\_parameters.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL2RlZmF1bHRfbWFnaWNfcGFyYW1ldGVycy5weQ==) | `91.17% <91.17%> (ø)` | |
| [datasette/app.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL2FwcC5weQ==) | `95.99% <97.91%> (+1.32%)` | :arrow_up: |
| [datasette/hookspecs.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL2hvb2tzcGVjcy5weQ==) | `100.00% <100.00%> (ø)` | |
| [datasette/utils/\_\_init\_\_.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3V0aWxzL19faW5pdF9fLnB5) | `93.93% <100.00%> (+0.08%)` | :arrow_up: |
| [datasette/utils/asgi.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3V0aWxzL2FzZ2kucHk=) | `91.32% <100.00%> (+0.41%)` | :arrow_up: |
| [datasette/views/base.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2Jhc2UucHk=) | `93.39% <100.00%> (-0.01%)` | :arrow_down: |
| [datasette/views/database.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL2RhdGFiYXNlLnB5) | `96.37% <100.00%> (-1.96%)` | :arrow_down: |
| [datasette/views/table.py](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3ZpZXdzL3RhYmxlLnB5) | `95.67% <0.00%> (-0.03%)` | :arrow_down: |
| ... and [6 more](https://codecov.io/gh/simonw/datasette/pull/868/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/868?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)
> `Δ = absolute (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/868?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [180c7a5...c99caba](https://codecov.io/gh/simonw/datasette/pull/868?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",646448486,initial windows ci setup,
https://github.com/simonw/datasette/pull/557#issuecomment-877717392,https://api.github.com/repos/simonw/datasette/issues/557,877717392,MDEyOklzc3VlQ29tbWVudDg3NzcxNzM5Mg==,9599,simonw,2021-07-10T23:39:48Z,2021-07-10T23:39:48Z,OWNER,Abandoning this - need to switch to using GitHub Actions for this instead.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",466996584,Get tests running on Windows using Travis CI,
https://github.com/simonw/datasette/issues/1388#issuecomment-877717262,https://api.github.com/repos/simonw/datasette/issues/1388,877717262,MDEyOklzc3VlQ29tbWVudDg3NzcxNzI2Mg==,9599,simonw,2021-07-10T23:37:54Z,2021-07-10T23:37:54Z,OWNER,"> I wonder if `--fd` is worth supporting too?
I'm going to hold off on implementing this until someone asks for it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket,
https://github.com/simonw/datasette/issues/1388#issuecomment-877716993,https://api.github.com/repos/simonw/datasette/issues/1388,877716993,MDEyOklzc3VlQ29tbWVudDg3NzcxNjk5Mw==,9599,simonw,2021-07-10T23:34:02Z,2021-07-10T23:34:02Z,OWNER,"Figured out an example nginx configuration. This in `nginx.conf`:
daemon off;
events {
worker_connections 1024;
}
http {
server {
listen 8092;
location / {
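# the datasette upstream is defined below and points at the Unix domain socket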
proxy_pass http://datasette;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
upstream datasette {
server unix:/tmp/datasette.sock;
}
}
Then run `datasette --uds /tmp/datasette.sock`
Then run nginx like this:
nginx -c ./nginx.conf
Then hits to `http://localhost:8092/` will be proxied to Datasette.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket,
https://github.com/simonw/datasette/issues/1388#issuecomment-877716359,https://api.github.com/repos/simonw/datasette/issues/1388,877716359,MDEyOklzc3VlQ29tbWVudDg3NzcxNjM1OQ==,9599,simonw,2021-07-10T23:24:58Z,2021-07-10T23:24:58Z,OWNER,"Apparently Windows 10 has Unix domain socket support: https://bugs.python.org/issue33408
> Unix socket (AF_UNIX) is now avalible in Windows 10 (April 2018 Update). Please add Python support for it.
> More details about it on https://blogs.msdn.microsoft.com/commandline/2017/12/19/af_unix-comes-to-windows/
But it's not clear if this is going to work. That same issue thread (the issue is still open) suggests using `hasattr(socket, 'AF_UNIX')` to detect support in tests.
https://github.com/simonw/datasette/issues/1388#issuecomment-877716156,https://api.github.com/repos/simonw/datasette/issues/1388,877716156,MDEyOklzc3VlQ29tbWVudDg3NzcxNjE1Ng==,9599,simonw,2021-07-10T23:22:21Z,2021-07-10T23:22:21Z,OWNER,"I don't have the Datasette test suite running on Windows yet, but I'd like it to run there some day - so ideally this test would be skipped if Unix domain sockets are not supported by the underlying operating system.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket,
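One way that skip could look with pytest (a sketch only, using a made-up test name):

```python
import socket

import pytest

# Hypothetical test name; skip on platforms whose socket module lacks AF_UNIX.
@pytest.mark.skipif(
    not hasattr(socket, "AF_UNIX"),
    reason="Unix domain sockets are not supported on this platform",
)
def test_serve_uds():
    # hypothetical test body, elided
    ...
```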
https://github.com/simonw/datasette/issues/1388#issuecomment-877715654,https://api.github.com/repos/simonw/datasette/issues/1388,877715654,MDEyOklzc3VlQ29tbWVudDg3NzcxNTY1NA==,9599,simonw,2021-07-10T23:15:06Z,2021-07-10T23:15:06Z,OWNER,"I can run tests against it using `httpx`: https://www.python-httpx.org/advanced/#usage_1
> ```pycon
> >>> import httpx
> >>> # Connect to the Docker API via a Unix Socket.
> >>> transport = httpx.HTTPTransport(uds=""/var/run/docker.sock"")
> >>> client = httpx.Client(transport=transport)
> >>> response = client.get(""http://docker/info"")
> ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket,
https://github.com/simonw/datasette/issues/1388#issuecomment-877714698,https://api.github.com/repos/simonw/datasette/issues/1388,877714698,MDEyOklzc3VlQ29tbWVudDg3NzcxNDY5OA==,9599,simonw,2021-07-10T23:01:37Z,2021-07-10T23:01:37Z,OWNER,"Can test this with:
```
curl --unix-socket ${socket} -i ""http://localhost/""
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket,
https://github.com/simonw/datasette/issues/1391#issuecomment-877691558,https://api.github.com/repos/simonw/datasette/issues/1391,877691558,MDEyOklzc3VlQ29tbWVudDg3NzY5MTU1OA==,9599,simonw,2021-07-10T19:26:57Z,2021-07-10T19:26:57Z,OWNER,"The `https://latest.datasette.io/fixtures.db` file no longer includes generated columns, which will help avoid confusion such as seen in #1376.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",941300946,Stop using generated columns in fixtures.db,
https://github.com/simonw/datasette/issues/1391#issuecomment-877691427,https://api.github.com/repos/simonw/datasette/issues/1391,877691427,MDEyOklzc3VlQ29tbWVudDg3NzY5MTQyNw==,9599,simonw,2021-07-10T19:26:00Z,2021-07-10T19:26:00Z,OWNER,I had to run the tests locally on my macOS laptop using `pysqlite3` to get a version that supported generated columns - wrote up a TIL about that here: https://til.simonwillison.net/sqlite/pysqlite3-on-macos,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",941300946,Stop using generated columns in fixtures.db,