finish actor_from_request --> [] [hook]
```",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/789/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
628572716,MDU6SXNzdWU2Mjg1NzI3MTY=,791,Tutorial: building a something-interesting with writable canned queries,9599,simonw,open,0,,,,,2,2020-06-01T16:32:05Z,2020-10-10T23:34:42Z,,OWNER,,"Initial idea: TODO list, as a tutorial for #698 writable canned queries.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/791/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
649429772,MDU6SXNzdWU2NDk0Mjk3NzI=,886,Reconsider how _actor_X magic parameter deals with missing values,9599,simonw,open,0,,,,,2,2020-07-02T00:00:38Z,2020-09-11T21:35:26Z,,OWNER,,"I had to build a custom `_actorornull` prefix for [datasette-saved-queries](https://github.com/simonw/datasette-saved-queries/blob/37c00e56ac398e1f9aa342d30357de013a9b37b4/datasette_saved_queries/__init__.py):
```python
from datasette import hookimpl


def actorornull(key, request):
    if request.actor is None:
        return None
    return request.actor.get(key)


@hookimpl
def register_magic_parameters():
    return [
        (""actorornull"", actorornull),
    ]
```
Maybe the `actor` magic in Datasette core should do that out of the box?
https://github.com/simonw/datasette/blob/f1f581b7ffcd5d8f3ae6c1c654d813a6641410eb/datasette/default_magic_parameters.py#L14-L17
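A rough sketch of what that could look like in `default_magic_parameters.py` (just the idea, not the actual implementation):

```python
def actor(key, request):
    # Return None instead of raising KeyError when there is no actor,
    # so the magic parameter resolves to NULL in the query.
    if request.actor is None:
        return None
    return request.actor.get(key)
```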
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/886/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
659873662,MDU6SXNzdWU2NTk4NzM2NjI=,898,datasette.utils.testing module,9599,simonw,open,0,,,,,2,2020-07-18T03:53:24Z,2020-07-18T03:57:46Z,,OWNER,,"The unit tests for plugins could benefit from reusing code from Datasette's own testing fixtures, e.g.:
> I may need to borrow this function from Datasette for the tests:
> https://github.com/simonw/datasette/blob/1f6a134369e6a7efaae9db469f15b1dd2b7f3709/tests/fixtures.py#L836-L851
>
> It's not importable (it lives in `fixtures.py` and not in the `datasette` package that gets packaged for PyPI) - maybe I should fix that in Datasette by adding a `from datasette.utils.testing` module.
_Originally posted by @simonw in https://github.com/simonw/datasette-update-api/issues/4#issuecomment-660419182_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/898/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
695441530,MDU6SXNzdWU2OTU0NDE1MzA=,154,OperationalError: cannot change into wal mode from within a transaction,9599,simonw,open,0,,,,,2,2020-09-07T23:42:44Z,2020-09-07T23:47:10Z,,OWNER,,"I'm getting this error when running:
sqlite-utils enable-wal beta.db
`OperationalError: cannot change into wal mode from within a transaction`
I'm worried that maybe that's because of this new code from #152:
https://github.com/simonw/sqlite-utils/blob/deb2eb013ff85bbc828ebc244a9654f0d9c3139e/sqlite_utils/db.py#L128-L129",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/154/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
695553522,MDU6SXNzdWU2OTU1NTM1MjI=,18,Deleted records stay in the search index,9599,simonw,open,0,,,,,2,2020-09-08T05:14:23Z,2020-09-08T05:15:51Z,,MEMBER,,"Here's why: https://github.com/dogsheep/dogsheep-beta/blob/24f7898d41a39218058f174c75ba62f7c0fcfff6/dogsheep_beta/utils.py#L44-L53
That should probably do `DELETE FROM index1.search_index WHERE [table] = ?` first.",197431109,dogsheep-beta,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/18/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
695556681,MDU6SXNzdWU2OTU1NTY2ODE=,19,Figure out incremental re-indexing,9599,simonw,open,0,,,,,2,2020-09-08T05:23:31Z,2020-09-08T05:27:07Z,,MEMBER,,As tables get bigger reindexing everything on a schedule (essentially recreating the entire index from scratch) will start to become a performance bottleneck.,197431109,dogsheep-beta,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/19/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
696908389,MDU6SXNzdWU2OTY5MDgzODk=,961,Verification checks for metadata.json on startup,9599,simonw,open,0,,,,,2,2020-09-09T15:21:53Z,2020-09-09T15:24:31Z,,OWNER,,"I lost a bunch of time yesterday trying to figure out why a Datasette instance wasn't starting up - it turned out it was because I had a `facets:` reference that mentioned a column that did not exist.
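Something along these lines at startup could have caught it (a rough sketch, with the metadata layout simplified and names hypothetical):

```python
import sqlite3

def check_facet_columns(table_metadata, db_path):
    # Return a list of facet configuration problems for one database.
    conn = sqlite3.connect(db_path)
    problems = []
    for table, config in table_metadata.items():
        columns = {row[1] for row in conn.execute(f'PRAGMA table_info([{table}])')}
        for facet in config.get('facets', []):
            if isinstance(facet, str) and facet not in columns:
                problems.append(f'{table}: facet column {facet} does not exist')
    return problems
```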
Catching these on startup would be good.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/961/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
698791218,MDU6SXNzdWU2OTg3OTEyMTg=,50,"favorites --stop_after=N stops after min(N, 200)",370930,mikepqr,open,0,,,,,2,2020-09-11T03:38:14Z,2020-09-13T05:11:14Z,,CONTRIBUTOR,,"For any number greater than 200, `favorites --stop_after` stops after getting 200 tweets, e.g.
```
$ twitter-to-sqlite favorites tweets.db --stop_after=300
Importing favorites [####################################] 199
$
```
I don't _think_ this is a limitation of the API (if you omit `--stop_after` you get some very large number, possibly all of them), so I _think_ this is a bug.",206156866,twitter-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
712202333,MDU6SXNzdWU3MTIyMDIzMzM=,982,"SQL editor should allow execution of write queries, if you have permission",9599,simonw,open,0,,,,,2,2020-09-30T19:04:35Z,2022-01-13T22:21:29Z,,OWNER,,"The `datasette-write` plugin provides this at the moment https://github.com/simonw/datasette-write - but it feels like it should be a built-in capability, protected by a default permission.
UI concept: if you have write permission then the existing SQL editor gets an ""execute write"" checkbox underneath it.
JavaScript can spot if you appear to be trying to execute an UPDATE or INSERT or DELETE query and check that checkbox for you.
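The detection itself could be fairly simple - sketched here in Python for a server-side check, with the JavaScript version doing the equivalent (a rough heuristic, not the real implementation):

```python
import re

WRITE_KEYWORDS = ('insert', 'update', 'delete', 'replace', 'create', 'drop', 'alter')

def looks_like_write_query(sql):
    # Very rough heuristic: look at the first SQL keyword.
    # A real version would also need to handle leading comments and CTEs.
    words = re.split(r'\s+', sql.strip(), maxsplit=1)
    return words[0].lower() in WRITE_KEYWORDS
```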
If you link to a query page with a non-SELECT then that query will be displayed in the box ready for you to POST submit it. The page will also then get ""cannot be embedded"" headers to protect against clickjacking.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/982/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
736365306,MDU6SXNzdWU3MzYzNjUzMDY=,1083,Advanced CSV export for arbitrary queries,9599,simonw,open,0,,,,,2,2020-11-04T19:23:05Z,2021-06-17T18:12:31Z,,OWNER,,"There's no link to download the CSV file - the table page has that as an advanced export option, but this is missing from the query page.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1083/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
769376447,MDU6SXNzdWU3NjkzNzY0NDc=,2,killed by oomkiller on large location-history,231498,khimaros,open,0,,,,,2,2020-12-17T00:32:24Z,2020-12-17T00:48:32Z,,NONE,,"memory seems to grow unbounded and is oom-killed after about 20GB memory usage.
this is happening while loading a ~1GB uncompressed location history.",206649770,google-takeout-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/2/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
780767542,MDU6SXNzdWU3ODA3Njc1NDI=,1180,Lazily evaluated arguments for call_with_supported_arguments,9599,simonw,open,0,,,,,2,2021-01-06T18:43:34Z,2021-01-07T18:56:24Z,,OWNER,,"While building https://github.com/simonw/datasette-export-notebook I thought it would be nice to be able to show a count of exported records on the page ""This will stream 10,422 records to your notebook"".
None of the documented arguments on https://docs.datasette.io/en/0.53/plugin_hooks.html#register-output-renderer-datasette expose the count. The closest is `sql` which could be executed as `select count(*) from ({sql})` but that's a bit inelegant.
So, idea: if your defined render function takes a `total` argument, the caller runs a `count(*)` query and passes you that total.
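A rough sketch of the caller side (hypothetical helper, assuming an async render function):

```python
import inspect

async def call_renderer(render, datasette, database, sql, **kwargs):
    # Only pay for the count(*) query if the render function asks for it.
    if 'total' in inspect.signature(render).parameters:
        result = await datasette.execute(
            database, 'select count(*) from ({})'.format(sql)
        )
        kwargs['total'] = result.rows[0][0]
    return await render(**kwargs)
```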
To implement this I would need to teach the `call_with_supported_arguments` that some arguments are lazy - they should execute a function (or an async function) but only if they are needed.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1180/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
782708469,MDU6SXNzdWU3ODI3MDg0Njk=,1183,"Take advantage of sqlite-utils cached table counts, if available",9599,simonw,open,0,,,,,2,2021-01-09T23:51:48Z,2021-01-12T02:42:08Z,,OWNER,,"sqlite-utils 3.2 now has a mechanism for creating a `_counts` table with triggers to maintain counts for individual tables. Datasette could look for this table and use it for counts, dramatically speeding up places that currently suffer from slow counts. Refs #859.
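The lookup itself would be something like this (a sketch, assuming the `_counts` schema sqlite-utils documents):

```python
async def cached_table_count(datasette, database, table):
    # Returns the cached count if a _counts table exists, otherwise None.
    db = datasette.get_database(database)
    if '_counts' not in await db.table_names():
        return None
    result = await db.execute(
        'select count from [_counts] where [table] = ?', [table]
    )
    row = result.first()
    return row['count'] if row else None
```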
https://sqlite-utils.datasette.io/en/stable/python-api.html#cached-table-counts-using-triggers",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1183/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
791237799,MDU6SXNzdWU3OTEyMzc3OTk=,1196,Access Denied Error in Windows,2826376,QAInsights,open,0,,,,,2,2021-01-21T15:40:40Z,2021-04-14T19:28:38Z,,NONE,,"I am trying to publish a db to Vercel, but issuing the below command throws an `Access Denied` error which leads to `RecursionError: maximum recursion depth exceeded while calling a Python object`.
I am using PyCharm and Python 3.9. I have reinstalled both and launched PyCharm as Admin in Windows 10. But still the issue persists.
Issued command `datasette publish vercel jmeter.db --project jmeter --install datasette-vega`
PS: localhost is working fine.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1196/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
792652391,MDU6SXNzdWU3OTI2NTIzOTE=,1199,Experiment with PRAGMA mmap_size=N,9599,simonw,open,0,,,,,2,2021-01-23T21:24:09Z,2021-07-17T17:39:17Z,,OWNER,,"https://sqlite.org/mmap.html - SQLite supports memory-mapped I/O but it's disabled by default. The `PRAGMA mmap_size=N` option can be used to enable it.
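A quick way to experiment with it outside of Datasette (a sketch; the file and table names are made up):

```python
import sqlite3, time

conn = sqlite3.connect('my-large-database.db')
# Ask for a 256MB memory map - SQLite silently caps this at its compile-time maximum.
conn.execute('PRAGMA mmap_size = 268435456')

start = time.perf_counter()
conn.execute('select count(*) from [my_table]').fetchone()
print('query took', time.perf_counter() - start, 'seconds')
```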
It would be very interesting to understand the impact this could have on Datasette performance for various different shapes of data.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1199/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
792890765,MDU6SXNzdWU3OTI4OTA3NjU=,1200,?_size=10 option for the arbitrary query page would be useful,9599,simonw,open,0,,,,,2,2021-01-24T20:55:35Z,2021-02-11T03:13:59Z,,OWNER,,"https://latest.datasette.io/fixtures?sql=select+*+from+compound_three_primary_keys&_size=10 - `_size=10` does not do anything at the moment. It would be useful if it did.
Would also be good if it persisted in a hidden form field.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1200/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
801780625,MDU6SXNzdWU4MDE3ODA2MjU=,9,SSL Error,12669260,jfeiwell,open,0,,,,,2,2021-02-05T02:12:56Z,2021-02-07T18:45:04Z,,NONE,,"Here's the error I get when running `pip install pocket-to-sqlite`:
```
Could not fetch URL https://pypi.python.org/simple/pocket-to-sqlite/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:661) - skipping
Could not find a version that satisfies the requirement pocket-to-sqlite (from versions: )
No matching distribution found for pocket-to-sqlite
```
Does this require python 3? ",213286752,pocket-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
803929694,MDU6SXNzdWU4MDM5Mjk2OTQ=,1219,Try profiling Datasette using scalene,9599,simonw,open,0,,,,,2,2021-02-08T20:37:06Z,2021-02-08T22:13:00Z,,OWNER,,https://github.com/emeryberger/scalene looks like an interesting profiling tool.,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1219/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
826700095,MDU6SXNzdWU4MjY3MDAwOTU=,1255,Facets timing out but work when filtering,1219001,robroc,open,0,,,,,2,2021-03-09T22:01:39Z,2021-04-02T20:50:08Z,,NONE,,"System info:
Windows 10
Datasette 0.55 installed via pip
Python 3.8.5 in a conda environment
I'm getting the message `These facets timed out` on any faceting operation. However, when I apply a filter, the facets appear in the filtered view. The error returns when the filter is removed. My data only has 38,450 rows.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1255/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
842862708,MDU6SXNzdWU4NDI4NjI3MDg=,1280,Ability to run CI against multiple SQLite versions,9599,simonw,open,0,,,,,2,2021-03-28T23:54:50Z,2021-05-10T19:07:46Z,,OWNER,,"Issue #1276 happened because I didn't run tests against a SQLite version prior to 3.16.0 (released 2017-01-02).
Glitch is a deployment target and runs SQLite 3.11.0 from 2016-02-15.
If CI ran against that version of SQLite this bug could have been avoided.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1280/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
855296937,MDU6SXNzdWU4NTUyOTY5Mzc=,1295,Errors should have links to further information,9599,simonw,open,0,,,,,2,2021-04-11T12:39:12Z,2022-12-14T23:28:49Z,,OWNER,,"Inspired by this tweet:
https://twitter.com/willmcgugan/status/1381186384510255104
> While I am thinking about faqs. I’d also like to add short URLs to Rich exceptions.
>
> I loath cryptic error messages, and I’ve created a fair few myself. In Rich I’ve tried to make them as plain English as possible. But...
>
> would be great if every error message linked to a page that explains the error in detail and offers fixes.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1295/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
904598267,MDExOlB1bGxSZXF1ZXN0NjU1NzQxNDI4,1348,DRAFT: add test and scan for docker images,10801138,blairdrummond,open,0,,,,,2,2021-05-28T03:02:12Z,2021-05-28T03:06:16Z,,CONTRIBUTOR,simonw/datasette/pulls/1348,"**NOTE: I don't think this PR is ready, since the arm/v6 and arm/v7 images are failing pytest due to missing dependencies (gcc and friends). But it's pretty close.**
Closes https://github.com/simonw/datasette/issues/1344 . Using a build-matrix for the platforms and [this test](https://github.com/simonw/datasette/issues/1344#issuecomment-849820019), we test all the platforms in parallel. I also threw in container scanning.
### Switch `pip install` to use either tags or commit shas
Notably! This also [changes the Dockerfile](https://github.com/blairdrummond/datasette/blob/7fe5315d68e04fce64b5bebf4e2d7feec44f8546/Dockerfile#L20) so that it accepts tags or commit-shas.
```
# It's backwards compatible with tags, but also lets you use shas
root@712071df17af:/# pip install git+git://github.com/simonw/datasette.git@0.56
Collecting git+git://github.com/simonw/datasette.git@0.56
Cloning git://github.com/simonw/datasette.git (to revision 0.56) to /tmp/pip-req-build-u6dhm945
Running command git clone -q git://github.com/simonw/datasette.git /tmp/pip-req-build-u6dhm945
Running command git checkout -q af5a7f1c09f6a902bb2a25e8edf39c7034d2e5de
Collecting Jinja2<2.12.0,>=2.10.3
Downloading Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)
```
This lets you build the containers in CI every push for testing, which maybe resolves [this problem](https://github.com/simonw/datasette/issues/1272#issuecomment-808648974)?
# Workflow run example
You can see the results in my workflow [here](https://github.com/blairdrummond/datasette/pull/2/checks?check_run_id=2690570717). The commit history is different because I squashed this branch, also in the testing branch I had to change `github.com/simonw` to `github.com/blairdrummond` for the CI to pick up my git_sha.
## Why did the builds fail?
**NOTE:** The results of all the tests fail, but for different reasons! A few fail to install Rust, the amd64 passes the tests (phew!) but has critical CVEs which fail the container scan, the Arm/v6 and Arm/v7 seem to fail to install the test dependencies due to missing programs like `gcc`. (`gcc` is not sufficient though, as [this run](https://github.com/blairdrummond/datasette/pull/3/checks?check_run_id=2690672982) indicates) ",107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1348/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
910088936,MDU6SXNzdWU5MTAwODg5MzY=,1355,datasette --get should efficiently handle streaming CSV,9599,simonw,open,0,,,,,2,2021-06-03T04:40:40Z,2022-03-20T22:38:53Z,,OWNER,,"It would be great if you could use `datasette --get` to run queries that return streaming CSV data without running out of RAM.
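For contrast with the current implementation linked below, a streaming version might look roughly like this (a hypothetical sketch using httpx's ASGI transport):

```python
import sys
import httpx

async def get_streamed(datasette, path):
    # Write the response body to stdout chunk by chunk instead of
    # collecting it all into one bytes object first.
    transport = httpx.ASGITransport(app=datasette.app())
    async with httpx.AsyncClient(transport=transport, base_url='http://localhost') as client:
        async with client.stream('GET', path) as response:
            async for chunk in response.aiter_bytes():
                sys.stdout.buffer.write(chunk)
```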
Current implementation looks like it loads the entire result into memory first: https://github.com/simonw/datasette/blob/f78ebdc04537a6102316d6dbbf6c887565806078/datasette/cli.py#L546-L552",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1355/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
913809802,MDU6SXNzdWU5MTM4MDk4MDI=,1366,Get rid of this `restore_working_directory` hack entirely,9599,simonw,open,0,,,,,2,2021-06-07T18:01:21Z,2021-06-07T18:03:03Z,,OWNER,,"> That seems to have fixed it. I'd love to get rid of this `restore_working_directory` hack entirely.
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1361#issuecomment-855308811_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1366/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
923270900,MDExOlB1bGxSZXF1ZXN0NjcyMDUzODEx,65,basic support for events,231498,khimaros,open,0,,,,,2,2021-06-17T00:51:30Z,2022-10-03T22:35:03Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/github-to-sqlite/pulls/65,"a quick first pass at implementing the feature requested in https://github.com/dogsheep/github-to-sqlite/issues/64
testing instructions:
```
$ github-to-sqlite events events.db user/khimaros
```
if the specified user is the authenticated user, it will also include private events.
caveat: pagination appears to be broken (i don't see `next` in the response JSON from GitHub)",207052882,github-to-sqlite,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
950664971,MDU6SXNzdWU5NTA2NjQ5NzE=,1401,unordered list is not rendering bullet points in description_html on database page,536941,fgregg,open,0,,,,,2,2021-07-22T13:24:18Z,2021-10-23T13:09:10Z,,CONTRIBUTOR,,"Thanks for this tremendous package, @simonw!
In the `description_html` for a database, I [have an unordered list](https://github.com/labordata/warehouse/blob/fcea4502e5b615b0eb3e0bdcb45ec634abe20bb6/warehouse_metadata.yml#L19-L22).
However, on the database page on the deployed site, it is not rendering this as a bulleted list.
![Screenshot 2021-07-22 at 09-21-51 nlrb](https://user-images.githubusercontent.com/536941/126645923-2777b7f1-fd4c-4d2d-af70-a35e49a07675.png)
Page here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5
The documentation gives an [example of using an unordered list](https://docs.datasette.io/en/stable/metadata.html#using-yaml-for-metadata) in a `description_html`, so I expected this will work.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1401/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
951185411,MDU6SXNzdWU5NTExODU0MTE=,1402,feature request: social meta tags,536941,fgregg,open,0,,,,,2,2021-07-23T01:57:23Z,2021-07-26T19:31:41Z,,CONTRIBUTOR,,"it would be very nice if the twitter, slack, and other social media could make rich cards when people post a link to a datasette instance
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1402/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
957302085,MDU6SXNzdWU5NTczMDIwODU=,1408,"Review places in codebase that use os.chdir(), in particularly relating to tests",9599,simonw,open,0,,,,,2,2021-07-31T18:57:06Z,2021-07-31T19:00:32Z,,OWNER,,"> To clarify: the core problem here is that an error is thrown any time you call `os.getcwd()` but the directory you are currently in has been deleted.
>
> `runner.isolated_filesystem()` assumes that the current directory in has not been deleted. But the various temporary directory utilities in `pytest` work by creating directories and then deleting them.
>
> Maybe there's a larger problem here that I play a bit fast and loose with `os.chdir()` in both the test suite and in various lines of code in Datasette itself (in particular in the publish commands)?
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1406#issuecomment-890390198_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1408/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
961008507,MDU6SXNzdWU5NjEwMDg1MDc=,308,Add an interactive tutorial as a Jupyter notebook,9599,simonw,open,0,,,,,2,2021-08-04T20:34:22Z,2021-08-04T21:30:59Z,,OWNER,,Can show people how to open this up in Binder.,140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/308/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
969548935,MDU6SXNzdWU5Njk1NDg5MzU=,1429,UI for setting `?_size=max` on table page,9599,simonw,open,0,,,,,2,2021-08-12T20:52:09Z,2021-08-13T04:37:41Z,,OWNER,,"It defaults to 100 per page, but you can increase that to 1000 per page using `?_size=max` (or higher if `max_returned_rows` is set higher than that).
But... that's only available to people who know how to hack URLs.
Solution: add a link that sets that option to the pagination block at the bottom of the table:
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1429/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
974067156,MDU6SXNzdWU5NzQwNjcxNTY=,318,Research: handle gzipped CSV directly,9599,simonw,open,0,,,,,2,2021-08-18T21:23:04Z,2021-08-18T21:25:30Z,,OWNER,,"Would it be worthwhile for the `sqlite-utils` command-line tool to grow features to efficiently directly interact with gzipped CSV data?
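The Python API can already do this by hand, which hints at what the CLI feature would wrap (a sketch; file and table names are made up):

```python
import csv, gzip
import sqlite_utils

db = sqlite_utils.Database('data.db')
with gzip.open('rows.csv.gz', 'rt', newline='') as fp:
    # DictReader streams rows, so the file never has to be fully decompressed into memory.
    db['rows'].insert_all(csv.DictReader(fp))
```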
Maybe add `--gz` options to both `insert` and to the various commands that output query results.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/318/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
975166271,MDU6SXNzdWU5NzUxNjYyNzE=,20,Add index on workout_points.date,9599,simonw,open,0,,,,,2,2021-08-20T01:08:04Z,2021-08-20T01:12:48Z,,MEMBER,,"Sorting that by date makes sense for seeing most recent points, and my DB has 2.5m points in so it's an expensive sort!",197882382,healthkit-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/20/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
999902754,I_kwDOBm6k_c47mU4i,1473,base logo link visits `undefined` rather than href url,192568,mroswell,open,0,,,,,2,2021-09-18T04:17:04Z,2021-09-19T00:45:32Z,,CONTRIBUTOR,,"I have two connected sites:
http://www.SaferOrToxic.org
(a Hugo website)
and:
http://disinfectants.SaferOrToxic.org/disinfectants/listN
(a datasette table page)
The latter is linked as ""The List"" in the former's menu.
(I'd love a prettier URL, but that's what I've got.)
On:
http://disinfectants.SaferOrToxic.org/disinfectants/listN
... all the other menu links should point back to:
https://www.SaferOrToxic.org
And they do!
But the logo, for some reason--though it has an href pointing to:
https://www.SaferOrToxic.org
Keeps going to this instead:
https://disinfectants.saferortoxic.org/disinfectants/undefined
What is causing that? How can I fix it?
In #1284 back in March, I was doing battle with the index.html template, in a still unresolved issue. (I wanted only a single table page at the root.)
But I thought, well, if I can't resolve that, at least I could just point the main website to the datasette page (""The List,"") and then have the List point back to the home website.
The menu hrefs to https://www.SaferOrToxic.org work just fine, exactly as they should, from the datasette page. Even the Home link works properly.
But the logo link keeps rewriting to: https://disinfectants.saferortoxic.org/disinfectants/undefined
This is the HTML:
```
```
Is this somehow related to cloudflare?
Or something in the datasette code?
I'm starting to think it's a cloudflare issue.
Can I at least rule out it being a datasette issue?
My repository is here:
https://github.com/mroswell/list-N
(BTW, I couldn't figure out how to reference a local image, either, on the datasette side, which is why I'm using the image from the www home page.)
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1473/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1077628073,I_kwDOBm6k_c5AO0yp,1550,Research option for returning all rows from arbitrary query,9599,simonw,open,0,,,,,2,2021-12-11T19:31:11Z,2021-12-11T23:43:24Z,,OWNER,,"Inspired by thinking about #1549 - returning ALL rows from an arbitrary query is a lot easier if you just run that query and keep iterating over the cursor.
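The core of it is just iterating the cursor rather than fetching everything (a sketch against a raw connection):

```python
def stream_all_rows(conn, sql, params=None):
    # Rows are yielded as SQLite produces them, instead of fetchall()-ing
    # the entire result set into memory first.
    cursor = conn.execute(sql, params or [])
    columns = [d[0] for d in cursor.description]
    for row in cursor:
        yield dict(zip(columns, row))
```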
I've avoided doing that in the past because it could tie up a connection for a long time - but in private instances this wouldn't be such a problem.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1550/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1122427321,I_kwDOBm6k_c5C5uG5,1624,Index page `/` has no CORS headers,9599,simonw,open,0,,,,,2,2022-02-02T21:56:10Z,2022-09-28T16:54:22Z,,OWNER,,"Compare the following:
```
% curl -I 'https://latest.datasette.io/fixtures'
HTTP/1.1 200 OK
link: https://latest.datasette.io/fixtures.json; rel=""alternate""; type=""application/json+datasette""
cache-control: max-age=5
referrer-policy: no-referrer
access-control-allow-origin: *
access-control-allow-headers: Authorization
access-control-expose-headers: Link
content-type: text/html; charset=utf-8
x-databases: _memory, _internal, fixtures, extra_database
Date: Wed, 02 Feb 2022 21:55:49 GMT
Server: Google Frontend
Transfer-Encoding: chunked
% curl -I 'https://latest.datasette.io/'
HTTP/1.1 200 OK
link: https://latest.datasette.io/.json; rel=""alternate""; type=""application/json+datasette""
content-type: text/html; charset=utf-8
x-databases: _memory, _internal, fixtures, extra_database
Date: Wed, 02 Feb 2022 21:55:52 GMT
Server: Google Frontend
Transfer-Encoding: chunked
```",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1624/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1129052172,I_kwDOBm6k_c5DS_gM,1633,base_url or prefix does not work with _exact match,6613091,henrikek,open,0,,,,,2,2022-02-09T21:45:07Z,2022-04-28T09:12:56Z,,NONE,,"When I hit the ""Apply"" button to search a column with the ""_exact"" syntax, the URL prefix is removed from the URL.
![image](https://user-images.githubusercontent.com/6613091/153293758-0b757d55-5757-4987-992e-9426e69a7956.png)
And the result is:
![image](https://user-images.githubusercontent.com/6613091/153294672-87be7809-bb7b-455d-bf1a-41e90bbfa4ae.png)
If I add the marked row to url_builder.py it seems to work:
![image](https://user-images.githubusercontent.com/6613091/153295231-bdd52e37-efcf-4b21-9d37-69f182a922f4.png)
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1633/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1148725876,I_kwDOBm6k_c5EeCp0,1640,"Support static assets where file length may change, e.g. logs",57859326,broccolihighkicks,open,0,,,,,2,2022-02-24T00:34:42Z,2022-03-05T01:19:25Z,,NONE,,"This is a bit of an oxymoron.
I am serving a log.txt file for a background process using the Datasette --static CLI. This is useful as I can observe a background process from the web UI to see any errors that occur (instead of spelunking the logs via docker exec/ssh etc).
I get this error, which I think is because Datasette assumes that the size of the content does not change (but appending new log lines means the content length changes).
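A minimal illustration of the pattern that causes this (not Datasette's actual code): the size is measured once, then the file is streamed, so any lines appended in between overrun the declared Content-Length.

```python
import os

async def send_file(send, path, chunk_size=4096):
    # Content-Length is fixed here, based on the file size at this instant...
    content_length = os.path.getsize(path)
    await send({
        'type': 'http.response.start',
        'status': 200,
        'headers': [(b'content-length', str(content_length).encode())],
    })
    with open(path, 'rb') as fp:
        while True:
            chunk = fp.read(chunk_size)
            # ...but if the file grew since getsize(), more bytes get sent
            # than were declared and h11 raises LocalProtocolError.
            await send({'type': 'http.response.body', 'body': chunk,
                        'more_body': bool(chunk)})
            if not chunk:
                break
```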
```python
Traceback (most recent call last):
File ""/usr/local/lib/python3.9/site-packages/datasette/app.py"", line 1181, in route_path
response = await view(request, send)
File ""/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py"", line 305, in inner_static
await asgi_send_file(send, full_path, chunk_size=chunk_size)
File ""/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py"", line 280, in asgi_send_file
await send(
File ""/usr/local/lib/python3.9/site-packages/asgi_csrf.py"", line 104, in wrapped_send
await send(event)
File ""/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py"", line 460, in send
output = self.conn.send(event)
File ""/usr/local/lib/python3.9/site-packages/h11/_connection.py"", line 468, in send
data_list = self.send_with_data_passthrough(event)
File ""/usr/local/lib/python3.9/site-packages/h11/_connection.py"", line 501, in send_with_data_passthrough
writer(event, data_list.append)
File ""/usr/local/lib/python3.9/site-packages/h11/_writers.py"", line 58, in __call__
self.send_data(event.data, write)
File ""/usr/local/lib/python3.9/site-packages/h11/_writers.py"", line 78, in send_data
raise LocalProtocolError(""Too much data for declared Content-Length"")
h11._util.LocalProtocolError: Too much data for declared Content-Length
ERROR: Exception in ASGI application
Traceback (most recent call last):
File ""/usr/local/lib/python3.9/site-packages/datasette/app.py"", line 1181, in route_path
response = await view(request, send)
File ""/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py"", line 305, in inner_static
await asgi_send_file(send, full_path, chunk_size=chunk_size)
File ""/usr/local/lib/python3.9/site-packages/datasette/utils/asgi.py"", line 280, in asgi_send_file
await send(
File ""/usr/local/lib/python3.9/site-packages/asgi_csrf.py"", line 104, in wrapped_send
await send(event)
File ""/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py"", line 460, in send
output = self.conn.send(event)
File ""/usr/local/lib/python3.9/site-packages/h11/_connection.py"", line 468, in send
data_list = self.send_with_data_passthrough(event)
File ""/usr/local/lib/python3.9/site-packages/h11/_connection.py"", line 501, in send_with_data_passthrough
writer(event, data_list.append)
File ""/usr/local/lib/python3.9/site-packages/h11/_writers.py"", line 58, in __call__
self.send_data(event.data, write)
File ""/usr/local/lib/python3.9/site-packages/h11/_writers.py"", line 78, in send_data
raise LocalProtocolError(""Too much data for declared Content-Length"")
h11._util.LocalProtocolError: Too much data for declared Content-Length
```
Thanks, I am finding Datasette very useful.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1640/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1161937073,I_kwDOBm6k_c5FQcCx,1653,Mechanism to default a table to sorting by multiple columns,9599,simonw,open,0,,,,,2,2022-03-07T21:20:11Z,2022-03-07T21:23:39Z,,OWNER,,"### Discussed in https://github.com/simonw/datasette/discussions/1652
Originally posted by **zaneselvans** March 7, 2022
It's easy to tell datasette to sort tables using a single column, as [described in the docs](https://docs.datasette.io/en/stable/metadata.html#setting-a-default-sort-order):
```yaml
databases:
ferc1:
tables:
f1_edcfu_epda:
sort: created_time
```
But is there some way to tell it to sort using a composite key, like you would in an `ORDER BY` clause instead? For example, the way it's being done **[in this query](https://data.catalyst.coop/ferc1?sql=select%0D%0A++rowid%2C%0D%0A++respondent_id%2C%0D%0A++report_year%2C%0D%0A++spplmnt_num%2C%0D%0A++row_number%2C%0D%0A++row_seq%2C%0D%0A++row_prvlg%2C%0D%0A++acct_num%2C%0D%0A++depr_plnt_base%2C%0D%0A++est_avg_srvce_lf%2C%0D%0A++net_salvage%2C%0D%0A++apply_depr_rate%2C%0D%0A++mrtlty_crv_typ%2C%0D%0A++avg_remaining_lf%2C%0D%0A++report_prd%0D%0Afrom%0D%0A++f1_edcfu_epda%0D%0Awhere%0D%0A++respondent_id+%3D+210%0D%0A++AND+report_year+%3D+2020%0D%0Aorder+by%0D%0A++report_year%2C+report_prd%2C+respondent_id%2C+spplmnt_num%2C+row_number%0D%0Alimit%0D%0A++1000)** on our Datasette?
```sql
SELECT
respondent_id,
report_year,
spplmnt_num,
row_number,
row_seq,
row_prvlg,
acct_num,
depr_plnt_base,
est_avg_srvce_lf,
net_salvage,
apply_depr_rate,
mrtlty_crv_typ,
avg_remaining_lf,
report_prd
FROM
f1_edcfu_epda
WHERE
respondent_id = 210
AND report_year = 2020
ORDER BY
report_year, report_prd, respondent_id, spplmnt_num, row_number
LIMIT
1000
```
The problem here is that by default it's using `rowid` (the SQLite assigned autoincrementing integer key) to order the records, but the table **should** have a natural composite primary key. The original database this data is being migrated from doesn't enforce unique primary keys, so there are dupes and we don't want to drop those rows. The records are somehow getting jumbled in the database (the `rowid` ordering isn't lined up with the expected ordering based on the composite primary key, though it's close), and this jumbling is confusing to users who expect to see the data ordered by the natural primary key.
I've tried setting the `sort` metadata parameter to a list of column names, a tuple of column names, a quoted string of comma-separated column names, a quoted string of a tuple of column names...
```yaml
databases:
ferc1:
tables:
f1_edcfu_epda:
sort: ""(report_year, report_prd, respondent_id, spplmnt_num, row_number)""
```
and they all give me server errors like:
```
Cannot sort table by (report_year, report_prd, respondent_id, spplmnt_num, row_number)
```
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1653/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1177101697,I_kwDOBm6k_c5GKSWB,1681,Potential bug in numeric handling where_clause for filters,9599,simonw,open,0,,,,,2,2022-03-22T17:43:50Z,2022-03-22T17:49:09Z,,OWNER,,"> Note that Datasette does already have special logic to convert parameters to integers for numeric comparisons like `>`:
>
> https://github.com/simonw/datasette/blob/c4c9dbd0386e46d2bf199f0ed34e4895c98cb78c/datasette/filters.py#L203-L212
>
> Though... it looks like there's a bug in that? It doesn't account for `float` values - `""3.5"".isdigit()` return `False` - probably for the best, because `int(3.5)` would break that value anyway.
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1671#issuecomment-1075432283_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1681/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1182141761,I_kwDOBm6k_c5Gdg1B,1690,"Idea: `datasette.set_actor_cookie(response, actor)`",9599,simonw,open,0,,,,,2,2022-03-26T22:41:52Z,2022-03-26T22:43:00Z,,OWNER,,"I just wrote this code in a plugin and it felt like it could benefit from an abstraction: https://github.com/simonw/datasette-auth0/blob/152e6eb21e96e9b73bd9c205f9749a1297d0ef0b/datasette_auth0/__init__.py#L79-L92
```python
redirect_response = Response.redirect(""/"")
expires_at = int(time.time()) + (24 * 60 * 60)
redirect_response.set_cookie(
""ds_actor"",
datasette.sign(
{
""a"": profile_response.json(),
""e"": baseconv.base62.encode(expires_at),
},
""actor"",
),
)
return redirect_response
```
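The abstraction might look something like this (a sketch of the idea, not a finalized API):

```python
import time
import baseconv

def set_actor_cookie(datasette, response, actor, expire_after=24 * 60 * 60):
    # Sign the actor dictionary and set it as the ds_actor cookie,
    # with the expiry timestamp encoded the same way as above.
    expires_at = int(time.time()) + expire_after
    response.set_cookie(
        'ds_actor',
        datasette.sign(
            {'a': actor, 'e': baseconv.base62.encode(expires_at)},
            'actor',
        ),
    )
```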
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1690/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1182227211,I_kwDOBm6k_c5Gd1sL,1692,[plugins][feature request]: Support additional script tag attributes when loading custom JS,9020979,hydrosquall,open,0,,,,,2,2022-03-27T01:16:03Z,2022-03-30T06:14:51Z,,CONTRIBUTOR,,"## Motivation
- The build system for my new [plugin](https://github.com/hydrosquall/datasette-nteract-data-explorer) has two output JS files, one for browsers that support ES modules, one for browsers that don't. At present, I'm only passing one of them into Datasette.
- I'd like to specify the non-es-module script as a fallback for older browsers. I don't want to load it by default, because browsers will only need one, and it's heavy, so for now I'm only supporting modern browsers.
To be able to support legacy browsers without slowing down users with modern browsers, I would like to be able to set additional HTML attributes on the fallback script tag: `nomodule` and `defer`. My injected scripts should look something like this:
```html
<script type=""module"" src=""/index.my-es-module-bundle.js""></script>
<script nomodule defer src=""/index.my-legacy-fallback-bundle.js""></script>
```
## Proposal
To achieve this, I propose additional optional properties to the API accepted by the `extra_js_urls` hook and custom JS field the `metadata.json` [described here](https://docs.datasette.io/en/stable/custom_templates.html#custom-css-and-javascript).
Under this API, I'd write something like this to get the above HTML rendered in Datasette.
```json
{
""extra_js_urls"": [
{
""url"": ""/index.my-es-module-bundle.js"",
""module"": true,
},
{
""url"": ""/index.my-legacy-fallback-bundle.js"",
""nomodule"": """",
""defer"": true
}
]
}
```
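The `extra_js_urls()` plugin hook would presumably accept the same shape (a sketch, assuming the hook passes the extra keys through to the template unchanged):

```python
from datasette import hookimpl

@hookimpl
def extra_js_urls():
    return [
        {'url': '/index.my-es-module-bundle.js', 'module': True},
        {'url': '/index.my-legacy-fallback-bundle.js', 'nomodule': '', 'defer': True},
    ]
```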
## Resources
- [MDN on the script tag](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script)
- There may be other properties that could be added that are potentially valuable, like `async` or `referrerpolicy`, but I don't have an immediate need for those.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1692/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1185868354,I_kwDOBm6k_c5GrupC,1695,Option to un-filter facet not shown for `?col__exact=value`,9599,simonw,open,0,,,,,2,2022-03-30T04:44:02Z,2022-03-30T04:46:18Z,,OWNER,,"Spotted this on a page with `COUNTY__exact=Lee` in the URL:
![CleanShot 2022-03-29 at 21 41 46@2x](https://user-images.githubusercontent.com/9599/160752849-a9039343-3770-4655-920b-f19e25687a57.png)
With `COUNTY=Lee` you get this instead:
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1695/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1186696202,I_kwDOBm6k_c5Gu4wK,1696,Show foreign key label when filtering,9599,simonw,open,0,,,,,2,2022-03-30T16:18:54Z,2023-01-29T20:56:20Z,,OWNER,,"For example here:
3 corresponds to ""Human Related: Other"" - it would be neat to display this in this area of the page somehow.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1696/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1200649124,I_kwDOBm6k_c5HkHOk,1708,Datasette 1.0 alpha upcoming release notes,9599,simonw,open,0,,,8755003,Datasette 1.0a-next,2,2022-04-11T22:57:12Z,2022-12-13T05:29:06Z,,OWNER,,"I'm going to try writing the release notes first, to see if that helps unblock me.
# ⚠️ Any release notes in this issue are a draft, and should not be treated as the real thing ⚠️ ",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1708/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1221849746,I_kwDOBm6k_c5I0_KS,1732,Custom page variables aren't decoded,52649,tannewt,open,0,,,,,2,2022-04-30T14:55:46Z,2022-05-03T01:50:45Z,,NONE,,"I have a page `templates/filer/{filer_id}.html`. It uses `filer_id` in a `sql()` call to fetch data. With 0.61.1 this no longer works because the spaces in IDs isn't preserved. Instead, the escaped version is passed into the template and the id isn't present in my db.
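Presumably the fix is something along these lines before the value reaches the template (a sketch, not Datasette's actual code):

```python
from urllib.parse import unquote

def decode_route_value(value):
    # Turn the URL-escaped path component back into the original string,
    # e.g. 'John%20Smith' becomes 'John Smith'.
    return unquote(value)
```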
Datasette should unescape the url component before passing them into the template.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1732/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1224112817,I_kwDOCGYnMM5I9nqx,430,Document how to use `PRAGMA temp_store` to avoid errors when running VACUUM against huge databases,9308268,rayvoelker,open,0,,,,,2,2022-05-03T13:33:58Z,2022-06-14T23:26:37Z,,NONE,,"I'm trying to figure out a way to get the `table.extract()` method to complete successfully -- I'm not sure if maybe the cause (and a possible solution) of this on Ubuntu Server 22.04 is to adjust some of the PRAGMA values within SQLite itself ... on another Linux system (PopOS), using this method on this same database appears to work just fine.
Here's the bit that's causing the error, and the resulting error output:
```python
# combine these columns into 1 table ""bib_properties"" :
# best_title
# bib_level_code
# mat_type
# material_code
# best_author
db[""circ_trans""].extract(
[""best_title"", ""bib_level_code"", ""mat_type"", ""material_code"", ""best_author""],
table=""bib_properties"",
fk_column=""bib_properties_id""
)
db[""circ_trans""].extract(
[""call_number""],
table=""call_number"",
fk_column=""call_number_id"",
rename={""call_number"": ""value""}
)
```
```python
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
Input In [17], in ()
1 # combine these columns into 1 table ""bib_properties"" :
2 # best_title
3 # bib_level_code
4 # mat_type
5 # material_code
6 # best_author
----> 7 db[""circ_trans""].extract(
8 [""best_title"", ""bib_level_code"", ""mat_type"", ""material_code"", ""best_author""],
9 table=""bib_properties"",
10 fk_column=""bib_properties_id""
11 )
13 db[""circ_trans""].extract(
14 [""call_number""],
15 table=""call_number"",
16 fk_column=""call_number_id"",
17 rename={""call_number"": ""value""}
18 )
File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1764, in Table.extract(self, columns, table, fk_column, rename)
1761 column_order.append(c.name)
1763 # Drop the unnecessary columns and rename lookup column
-> 1764 self.transform(
1765 drop=set(columns),
1766 rename={magic_lookup_column: fk_column},
1767 column_order=column_order,
1768 )
1770 # And add the foreign key constraint
1771 self.add_foreign_key(fk_column, table, ""id"")
File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1526, in Table.transform(self, types, rename, drop, pk, not_null, defaults, drop_foreign_keys, column_order)
1524 with self.db.conn:
1525 for sql in sqls:
-> 1526 self.db.execute(sql)
1527 # Run the foreign_key_check before we commit
1528 if pragma_foreign_keys_was_on:
File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:465, in Database.execute(self, sql, parameters)
463 return self.conn.execute(sql, parameters)
464 else:
--> 465 return self.conn.execute(sql)
OperationalError: database or disk is full
```
This database is about 17G in total size, so I'm assuming the error is coming from the VACUUM ... presumably it's trying to do the temp storage in a location that doesn't have sufficient room. The disk space is more than ample on the host in question (1.8T is free in the directory where the SQLite db resides), but the `/tmp` directory is limited to a smaller disk associated with the OS.
I'm trying to work out if there's a way to set `PRAGMA temp_store` (or maybe it's `temp_store_directory` that I'm after) ... to use the same local directory where the file is located (maybe this is a property of the version of SQLite on the system?)
```python
# SET the temp file store to be a file ...
print(db.execute('PRAGMA temp_store').fetchall())
print(db.execute('PRAGMA temp_store=FILE').fetchall())
print(db.execute('PRAGMA temp_store').fetchall())
# the users home directory ...
print(db.execute(""PRAGMA temp_store_directory='/home/plchuser/'"").fetchall())
print(db.execute(""PRAGMA sqlite3_temp_directory='/home/plchuser/'"").fetchall())
print(db.execute(""PRAGMA temp_store_directory"").fetchall())
print(db.execute(""PRAGMA sqlite3_temp_directory"").fetchall())
```
```text
[(1,)]
[]
[(1,)]
[]
[]
[('/home/plchuser/',)]
[]
```
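One other thing that may help (an untested sketch): on Linux SQLite also honours the `SQLITE_TMPDIR` environment variable, so pointing it at the big disk before opening the database should move the temporary files used by VACUUM off of `/tmp`. The paths and filename here are made up:

```python
import os
import sqlite_utils

# Must be set before the connection that runs the VACUUM is opened.
os.environ['SQLITE_TMPDIR'] = '/path/on/the/large/disk/sqlite-tmp'
db = sqlite_utils.Database('my-17gb-database.db')
```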
Here's the docs on the Temporary File Storage Locations
https://www.sqlite.org/tempfiles.html",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/430/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1339444565,I_kwDOBm6k_c5P1k1V,1783,Better guidance as to what to do after you've installed Datasette,9599,simonw,open,0,,,,,2,2022-08-15T20:11:06Z,2022-08-15T20:14:01Z,,OWNER,,"Feedback [from Discord](https://discord.com/channels/823971286308356157/823971286941302908/1008822978793984060):
> hello, love the project and came for help and to point out a possible gap in the docs. starting with ""getting started"" and ""installation"" every thing looks great, but then there's a giant leap after you have it installed and running. from the user perspective of ""i have a csv of set of csvs that i want to turn into a table(s), what do i do next?"" --- so something like maybe a page for creating your first project should go after ""installation"".
- https://docs.datasette.io/en/0.62/getting_started.html
- https://docs.datasette.io/en/0.62/installation.html",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1783/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1405196044,PR_kwDOCGYnMM5AmYzy,499,feat: recreate fts triggers after table transform,7908073,chapmanjacobd,open,0,,,,,2,2022-10-11T20:35:39Z,2022-10-26T17:54:51Z,,CONTRIBUTOR,simonw/sqlite-utils/pulls/499,"https://github.com/simonw/sqlite-utils/pull/498
----
:books: Documentation preview :books:: https://sqlite-utils--499.org.readthedocs.build/en/499/
alternatively, `self.disable_fts()`",140912432,sqlite-utils,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/499/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
1455928469,I_kwDOBm6k_c5Wx7SV,1903,Refactor all error classes into a datasette.exceptions module,9599,simonw,open,0,,,3268330,Datasette 1.0,2,2022-11-18T22:44:45Z,2022-11-20T22:35:01Z,,OWNER,,"While working on this issue:
- #1896
I realized that Datasette has error classes scattered around a fair bit, including some in the `datasette.utils.asgi` module for some reason.
I should clean these up.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1903/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1483250004,I_kwDOBm6k_c5YaJlU,1936,Fix /db/table/-/upsert in the API explorer,9599,simonw,open,0,,,3268330,Datasette 1.0,2,2022-12-08T00:59:34Z,2022-12-08T01:36:02Z,,OWNER,,"Split from:
- #1931
- #1878
This is a bit tricky because the code needs to figure out what the primary keys are for an item, and whether or not `rowid` should be included.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1936/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1515883470,I_kwDOC8tyDs5aWovO,24,DOC: xml.etree.ElementTree.ParseError due to healthkit version 12 ,6231413,mmngreco,open,0,,,,,2,2023-01-01T23:00:38Z,2023-03-30T10:17:31Z,,NONE,,"Hi @simonw
I hope you find this issue OK; the idea is to provide some documentation for other users like me about how to solve this problem and save some time.
Following the instructions from the `README.md` I've faced this error:
```bash
(venv) mgreco@pop-os apple-health master* (23:44|0s)
$ healthkit-to-sqlite apple_health_export/export.xml healthkit.db --xml
Importing from HealthKit [------------------------------------] 0%
Traceback (most recent call last):
File ""/home/mgreco/github/mmngreco/apple-health/venv/bin/healthkit-to-sqlite"", line 33, in
sys.exit(load_entry_point('healthkit-to-sqlite', 'console_scripts', 'healthkit-to-sqlite')())
File ""/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py"", line 1130, in __call__
return self.main(*args, **kwargs)
File ""/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py"", line 1055, in main
rv = self.invoke(ctx)
File ""/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py"", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File ""/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/site-packages/click/core.py"", line 760, in invoke
return __callback(*args, **kwargs)
File ""/home/mgreco/github/mmngreco/apple-health/.deps/healthkit-to-sqlite/healthkit_to_sqlite/cli.py"", line 57, in cli
convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf)
File ""/home/mgreco/github/mmngreco/apple-health/.deps/healthkit-to-sqlite/healthkit_to_sqlite/utils.py"", line 25, in convert_xml_to_sqlite
for tag, el in find_all_tags(
File ""/home/mgreco/github/mmngreco/apple-health/.deps/healthkit-to-sqlite/healthkit_to_sqlite/utils.py"", line 12, in find_all_tags
for event, el in parser.read_events():
File ""/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/xml/etree/ElementTree.py"", line 1324, in read_events
raise event
File ""/home/mgreco/github/mmngreco/apple-health/venv/lib/python3.10/xml/etree/ElementTree.py"", line 1296, in feed
self._parser.feed(data)
xml.etree.ElementTree.ParseError: syntax error: line 156, column 0
```
So, after debugging and searching on the internet I found this useful link: https://discussions.apple.com/thread/254202523 (etresoft, the real hero), which basically says that the XML given by the Health app (HealthKit version 12) has some bugs, but fortunately they can be solved with a couple of commands:
1. Uncompress the zip and move the new folder where `export.xml` is.
1. Create a `patch.txt` with the following content
```diff
--- export.xml 2022-09-18 15:17:09.000000000 -0400
+++ export-fixed.xml 2022-09-18 16:37:08.000000000 -0400
@@ -15,6 +15,7 @@
HKCharacteristicTypeIdentifierBiologicalSex CDATA #REQUIRED
HKCharacteristicTypeIdentifierBloodType CDATA #REQUIRED
HKCharacteristicTypeIdentifierFitzpatrickSkinType CDATA #REQUIRED
+ HKCharacteristicTypeIdentifierCardioFitnessMedicationsUse CDATA #IMPLIED
>
-
+
-
+
- device CDATA #IMPLIED
-
-
->
]>
```
1. Apply the patch with the command: `patch < patch.txt`
1. Fix endDates with the command `sed 's/startDate/endDate/2' export.xml > export-fixed.xml`
1. Try again `healthkit-to-sqlite export-fixed.xml healthkit.db --xml`",197882382,healthkit-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/24/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1524867951,I_kwDOBm6k_c5a46Nv,1980,"""Cannot sort table by id"" when sortable_columns is used",9599,simonw,open,0,,,,,2,2023-01-09T03:21:33Z,2023-01-09T03:23:53Z,,OWNER,,"I had an instance with this in `metadata.yml`:
```yaml
databases:
timezones:
tables:
timezones:
sortable_columns:
- tzid
```
When I clicked on the ""Apply"" button here:
It sent me to `/timezones/timezones?_sort=id&id__exact=133` with the error message:
> 500: Cannot sort table by id",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1980/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1551113681,I_kwDOBm6k_c5cdB3R,1998,`datasette --version` should also show the SQLite version,9599,simonw,open,0,,,,,2,2023-01-20T16:11:30Z,2023-01-20T18:19:06Z,,OWNER,,Idea came up here: https://discord.com/channels/823971286308356157/823971286941302908/1066026473003159783,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1998/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1558644003,I_kwDOBm6k_c5c5wUj,2006,Teach `datasette publish` to pin to `datasette<1.0` in a 0.x release,9599,simonw,open,0,,,3268330,Datasette 1.0,2,2023-01-26T19:17:40Z,2023-01-26T19:20:53Z,,OWNER,,"I just realized that when I ship Datasette 1.0 there may be automated deployments out there which could deploy the 1.0 version by accident, potentially breaking any customizations that aren't compatible with the 1.0 changes.
I can hopefully help avoid that by shipping one last entry in the `0.x` series that ensures `datasette publish` pins to `<1.0` when it installs Datasette itself.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2006/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1570375808,I_kwDODFdgUs5dmgiA,79,Deploy demo job is failing due to rate limit,9599,simonw,open,0,,,,,2,2023-02-03T20:05:01Z,2023-12-08T14:50:15Z,,MEMBER,,https://github.com/dogsheep/github-to-sqlite/actions/runs/4080058087/jobs/7032116511,207052882,github-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1595340692,I_kwDOCGYnMM5fFveU,530,"add ability to configure ""on delete"" and ""on update"" attributes of foreign keys:",536941,fgregg,open,0,,,,,2,2023-02-22T15:44:14Z,2023-05-08T20:39:01Z,,CONTRIBUTOR,,"sqlite supports these, and it would be quite nice to be able to add them with sqlite-utils.
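For reference, this is what those actions look like in raw SQLite DDL (illustrative table names, not taken from the original report):
```sql
CREATE TABLE authors (
    id INTEGER PRIMARY KEY
);
CREATE TABLE books (
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES authors(id)
        ON DELETE CASCADE
        ON UPDATE RESTRICT
);
```
Presumably sqlite-utils could grow `on_delete=` / `on_update=` style options on its foreign key helpers to emit these clauses.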
https://www.sqlite.org/foreignkeys.html#fk_actions",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/530/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1613974869,PR_kwDOBm6k_c5LgPS-,2034,remove an unused `app` var in cli.py,4370201,wenhoujx,open,0,,,,,2,2023-03-07T18:19:05Z,2023-03-29T20:56:20Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2034,"this var `app` isn't actually used? Unless initializing it has some side effect outside of the event loop, I don't think it's necessary.
Feel free to ignore this PR if the deleted line actually does something.
----
:books: Documentation preview :books:: https://datasette--2034.org.readthedocs.build/en/2034/
",107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2034/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
1616429236,I_kwDOJHON9s5gWMC0,4,Support incremental updates,9599,simonw,open,0,,,,,2,2023-03-09T05:14:00Z,2023-03-09T18:20:56Z,,MEMBER,,"Running this script can take several hours against a large notes database.
Would be neat if it could run against just the notes that have been modified since it last ran. Could pull the max `updated` date and then keep on looping until it finds one modified before then.
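A rough sketch of that idea (the `notes` table and `updated` column names are assumptions here, and see the caveat below about iteration order):
```python
# Sketch only: stop importing once we reach a note older than what we already have.
# `db` is assumed to be a sqlite_utils.Database instance.
last_seen = db.execute('select max(updated) from notes').fetchone()[0]
for note in extract_notes():  # hypothetical iterator over exported notes, newest first
    if last_seen and note['updated'] <= last_seen:
        break
    save_note(db, note)  # hypothetical helper that upserts a single note
```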
Problem is I don't actually know what order it iterates over the notes in.",611552758,apple-notes-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/4/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1617602868,I_kwDOJHON9s5gaqk0,6,Character encoding problem,9599,simonw,open,0,,,,,2,2023-03-09T16:44:34Z,2023-04-14T15:22:09Z,,MEMBER,,"I ran against a recent note with this in it:
> Or just ""Actions ⚙️ ""
And got back:
> `Actions ‚öôÔ∏è`
Pasting that into https://ftfy.vercel.app/?s=Actions+%E2%80%9A%C3%B6%C3%B4%C3%94%E2%88%8F%C3%A8+ gives this:
```python
s = 'Actions â\x80\x9aöôÃ\x94â\x88\x8fè'
s = s.encode('latin-1')
s = s.decode('utf-8')
s = s.encode('macroman')
s = s.decode('utf-8')
print(s)
```
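If the double-encoding is consistent, the `ftfy` library (which powers the site linked above) may be able to repair it in a single call - a sketch, assuming `ftfy` is installed, not something this tool currently does:
```python
import ftfy

# fix_text attempts to detect and undo mojibake like the decode chain above.
print(ftfy.fix_text('Actions â\x80\x9aöôÃ\x94â\x88\x8fè'))
```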
",611552758,apple-notes-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/6/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1618249044,I_kwDOBm6k_c5gdIVU,2038,Consider a `strict_templates` setting,9599,simonw,open,0,,,,,2,2023-03-10T02:09:13Z,2023-03-10T02:11:06Z,,OWNER,,"A setting which turns on Jinja strict mode, so any templates that access undefined variables raise a hard error.
Prototype here:
```diff
diff --git a/datasette/app.py b/datasette/app.py
index 40416713..1428a3f0 100644
--- a/datasette/app.py
+++ b/datasette/app.py
@@ -200,6 +200,7 @@ SETTINGS = (
""Allow display of SQL trace debug information with ?_trace=1"",
),
Setting(""base_url"", ""/"", ""Datasette URLs should use this base path""),
+ Setting(""strict_templates"", False, ""Raise errors for undefined template variables""),
)
_HASH_URLS_REMOVED = ""The hash_urls setting has been removed, try the datasette-hashed-urls plugin instead""
OBSOLETE_SETTINGS = {
@@ -399,11 +400,14 @@ class Datasette:
),
]
)
+ env_extras = {}
+ if self.setting(""strict_templates""):
+ env_extras[""undefined""] = StrictUndefined
self.jinja_env = Environment(
loader=template_loader,
autoescape=True,
enable_async=True,
- undefined=StrictUndefined,
+ **env_extras,
)
self.jinja_env.filters[""escape_css_string""] = escape_css_string
self.jinja_env.filters[""quote_plus""] = urllib.parse.quote_plus
```
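If this lands, it would presumably be switched on like any other setting (the exact value syntax here is an assumption):
```
datasette mydatabase.db --setting strict_templates on
```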
Explored this idea a bit in:
- #1999",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2038/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1708030220,I_kwDOBm6k_c5lznkM,2073,Faceting doesn't work against integer columns in views,9599,simonw,open,0,,,,,2,2023-05-12T18:20:10Z,2023-05-12T18:24:07Z,,OWNER,,"Spotted this issue here: https://til.simonwillison.net/datasette/baseline
I had to do this workaround:
```sql
create view baseline as select
_key,
spec,
'' || json_extract(status, '$.is_baseline') as is_baseline,
json_extract(status, '$.since') as baseline_since,
json_extract(status, '$.support.chrome') as baseline_chrome,
json_extract(status, '$.support.edge') as baseline_edge,
json_extract(status, '$.support.firefox') as baseline_firefox,
json_extract(status, '$.support.safari') as baseline_safari,
compat_features,
caniuse,
usage_stats,
status
from
[index]
```
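The `'' || ...` concatenation is just a way of coercing the extracted value to TEXT; a more explicit way to write the same workaround would be:
```sql
cast(json_extract(status, '$.is_baseline') as text) as is_baseline
```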
I think the core issue here is that, against a table, `select * from x where integer_column = '1'` works correctly, due to some kind of column type conversion mechanism... but this mechanism doesn't work against views.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2073/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1765870617,I_kwDOBm6k_c5pQQwZ,2087,`--settings settings.json` option,9599,simonw,open,0,,,,,2,2023-06-20T17:48:45Z,2023-07-14T17:02:03Z,,OWNER,,"https://discord.com/channels/823971286308356157/823971286941302908/1120705940728066080
> May I add a request to the whole metadata / settings ? Allow to pass `--settings path/to/settings.json` instead of having to rely exclusively on directory mode to centralize settings (this would reflect the behavior of providing metadata)",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2087/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1781005740,I_kwDOBm6k_c5qJ_2s,2090,Adopt ruff for linting,9599,simonw,open,0,,,,,2,2023-06-29T14:56:43Z,2023-06-29T15:05:04Z,,OWNER,,https://beta.ruff.rs/docs/,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2090/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1795219865,I_kwDOCGYnMM5rAOGZ,566,`--no-headers` doesn't work on most formats,33625,zellyn,open,0,,,,,2,2023-07-09T03:43:36Z,2023-07-09T04:13:35Z,,NONE,,"Version 3.33
```
sqlite-utils query library.db 'select asin from audible' --fmt plain --no-headers | head -3
asin
0062804006
0062891421
```",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/566/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1808215339,I_kwDOBm6k_c5rxy0r,2104,Tables starting with an underscore should be treated as hidden,9599,simonw,open,0,,,,,2,2023-07-17T17:13:53Z,2023-07-18T22:41:37Z,,OWNER,,"Plugins can then take advantage of this pattern, for example:
- https://github.com/simonw/datasette-auth-tokens/pull/8",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2104/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1825007061,I_kwDOBm6k_c5sx2XV,2123,datasette serve when invoked with --reload interprets the serve command as a file,79087,cadeef,open,0,,,,,2,2023-07-27T19:07:22Z,2023-09-18T13:02:46Z,,NONE,,"When running `datasette serve` with the `--reload` flag, the serve command is picked up as a file argument:
```
$ datasette serve --reload test_db
Starting monitor for PID 13574.
Error: Invalid value for '[FILES]...': Path 'serve' does not exist.
Press ENTER or change a file to reload.
```
If a 'serve' file is created it launches properly (albeit with an empty database called serve):
```
$ touch serve; datasette serve --reload test_db
Starting monitor for PID 13628.
INFO: Started server process [13628]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
```
Version (running from HEAD on main):
```
$ datasette --version
datasette, version 1.0a2
```
This issue appears to have existed for a while, as https://github.com/simonw/datasette/issues/1380#issuecomment-953366110 mentions the error in a different context.
I'm happy to debug and land a patch if it's welcome.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2123/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1865572575,PR_kwDOBm6k_c5Yt2eO,2155,Fix hupper.start_reloader entry point,79087,cadeef,open,0,,,,,2,2023-08-24T17:14:08Z,2023-09-27T18:44:02Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2155,"Update hupper's entry point so that click commands are processed properly.
Fixes #2123
----
:books: Documentation preview :books:: https://datasette--2155.org.readthedocs.build/en/2155/
",107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2155/reactions"", ""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 2, ""eyes"": 0}",0,
1891614971,I_kwDOCGYnMM5wv8D7,594,Represent compound foreign keys in table.foreign_keys output,9599,simonw,open,0,,,,,2,2023-09-12T03:48:24Z,2023-09-12T03:51:13Z,,OWNER,,"Given this schema:
```sql
CREATE TABLE departments (
campus_name TEXT NOT NULL,
dept_code TEXT NOT NULL,
dept_name TEXT,
PRIMARY KEY (campus_name, dept_code)
);
CREATE TABLE courses (
course_code TEXT PRIMARY KEY,
course_name TEXT,
campus_name TEXT NOT NULL,
dept_code TEXT NOT NULL,
FOREIGN KEY (campus_name, dept_code) REFERENCES departments(campus_name, dept_code)
);
```
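SQLite itself records this as a single foreign key: `PRAGMA foreign_key_list(courses)` returns two rows that share the same `id`, which is the signal that they belong to one compound key (output shown approximately):
```sql
PRAGMA foreign_key_list(courses);
-- id  seq  table        from         to           on_update  on_delete  match
-- 0   0    departments  campus_name  campus_name  NO ACTION  NO ACTION  NONE
-- 0   1    departments  dept_code    dept_code    NO ACTION  NO ACTION  NONE
```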
The output of `db[""courses""].foreign_keys` right now is:
```
[ForeignKey(table='courses', column='campus_name', other_table='departments', other_column='campus_name'),
ForeignKey(table='courses', column='dept_code', other_table='departments', other_column='dept_code')]
```
Which suggests two normal foreign keys, not one compound foreign key.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/594/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1898927976,I_kwDOBm6k_c5xL1do,2186,Mechanism for register_output_renderer hooks to access full count,9599,simonw,open,0,,,3268330,Datasette 1.0,2,2023-09-15T18:57:54Z,2023-09-15T19:27:59Z,,OWNER,,"The cause of this bug:
- https://github.com/simonw/datasette-export-notebook/issues/17
Is that `datasette-export-notebook` was consulting `data[""filtered_table_rows_count""]` in the render output plugin function in order to show the total number of rows that would be exported.
That field is no longer available by default - the `""count""` field is only available if `?_extra=count` was passed.
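For reference, the opt-in currently looks like this against the JSON API (illustrative database and table names):
```
/database/table.json?_extra=count
```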
It would be useful if plugins like this could access the total count on demand, should they need to.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2186/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1900026059,I_kwDOBm6k_c5xQBjL,2188,"Plugin Hooks for ""compile to SQL"" languages",15178711,asg017,open,0,,,,,2,2023-09-18T01:37:15Z,2023-09-18T06:58:53Z,,CONTRIBUTOR,,"There's a ton of tools/languages that compile to SQL, which may be nice in Datasette. Some examples:
- Logica https://logica.dev
- PRQL https://prql-lang.org
- Malloy, but not sure if it works with SQLite? https://github.com/malloydata/malloy
It would be cool if plugins could extend Datasette to use these languages, in both the code editor and API usage.
A few things I'd imagine a `datasette-prql` or `datasette-logica` plugin would do:
- `prql=` instead of `sql=`
- Code editor support (syntax highlighting, autocomplete)
- Hide/show SQL",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2188/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
327395270,MDU6SXNzdWUzMjczOTUyNzA=,296,Per-database and per-table /-/ URL namespace,9599,simonw,open,0,,,,,3,2018-05-29T16:23:13Z,2019-06-28T16:46:34Z,,OWNER,,"Initially this will be for subsets of `/-/inspect` and `/-/metadata` but it will also give us a URL namespace for future features like `/-/facet` (expanded list of a specific facet, linked to from `...`) and `/-/graph`
To start:
* `/dbname/-/inspect`
* `/dbname/-/metadata`
* `/dbname/tablename/-/inspect`
* `/dbname/tablename/-/metadata`
This means we will no longer allow databases or tables to have the name `""-""` - I think that's OK
We will continue to support rows with a primary key of `""-""` at the following URL:
* `/dbname/tablename/-`",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/296/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
400340905,MDU6SXNzdWU0MDAzNDA5MDU=,402,Use SQLITE_DBCONFIG_DEFENSIVE plus other recommendations from SQLite security docs,9599,simonw,open,0,,,,,3,2019-01-17T15:52:28Z,2019-01-17T16:15:21Z,,OWNER,,"> Was just having a skim through the datasette source. Given that the vuln impacts shadow tables, wasn't sure whether these are also covered by the immutable flag. Latest release introduced a SQLITE_DBCONFIG_DEFENSIVE flag that they recommend setting: https://sqlite.org/security.html
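Not something Datasette does today, but as a sketch: recent Python versions (3.12+) expose this flag directly on `sqlite3` connections, assuming the underlying SQLite build supports it:
```python
import sqlite3

conn = sqlite3.connect('data.db')
# SQLITE_DBCONFIG_DEFENSIVE disallows writes to shadow tables, among other protections.
conn.setconfig(sqlite3.SQLITE_DBCONFIG_DEFENSIVE, True)  # requires Python 3.12+
```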
https://twitter.com/ignoredambience/status/1085926961413869568",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/402/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
451585764,MDU6SXNzdWU0NTE1ODU3NjQ=,499,Accessibility for non-techie newsies? ,7936571,chrismp,open,0,,,,,3,2019-06-03T16:49:37Z,2019-06-05T21:22:55Z,,NONE,,"Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out-of-the-box, or as a plugin? ",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/499/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
457147936,MDU6SXNzdWU0NTcxNDc5MzY=,512,"""about"" parameter in metadata does not appear when alone",7936571,chrismp,open,0,,,,,3,2019-06-17T21:04:20Z,2019-10-11T15:49:13Z,,NONE,,"Here's an example of metadata I have for one database on datasette.
```
""Records-requests"": {
""tables"": {
""Some table"": {
""about"": ""This table has data.""
}
}
}
```
The text in `about` does not show up when I publish the data. But it shows up after I add a `""source""` parameter in the metadata.
Is this intended?",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/512/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
465327844,MDU6SXNzdWU0NjUzMjc4NDQ=,553,Potential improvements to facet-by-date,9599,simonw,open,0,,,,,3,2019-07-08T15:37:53Z,2019-07-08T15:41:55Z,,OWNER,,"In addition to #483 Tobias had some useful suggestions on Twitter:
https://twitter.com/rixxtr/status/1148253926476701696
> I think for date facets, it might be more meaningful to order them by date, rather than by size? Or offer both? I'm *definitely* often interested in size-over-time, so https://data.rixx.de/django_tickets/tickets?_facet_date=created#facet-created … isn't all that helpful!
Screenshot of that link:
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/553/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
530491074,MDU6SXNzdWU1MzA0OTEwNzQ=,14,Command for importing events,9599,simonw,open,0,,,,,3,2019-11-29T21:28:58Z,2020-04-14T19:38:34Z,,MEMBER,,"Eg from https://api.github.com/users/simonw/events
Docs here: https://developer.github.com/v3/activity/events/#list-events-performed-by-a-user",207052882,github-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/14/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
531502365,MDU6SXNzdWU1MzE1MDIzNjU=,646,Make database level information from metadata.json available in the index.html template,18017473,lagolucas,open,0,,,3268330,Datasette 1.0,3,2019-12-02T19:55:10Z,2022-03-15T20:50:34Z,,NONE,,"Did a search on the issues here and didn't find anything related to what I want.
I want to take information that lives at the database level of the JSON, such as title, source and source_url, and use it on the index page.
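For reference, the database-level keys in question are set like this in the metadata file (shown here in YAML form with illustrative names):
```yaml
databases:
  mydatabase:
    title: My database title
    source: Source name
    source_url: https://example.com/source
```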
I tried some small tweaks on the python and html files, but failed to get that result.
Is there a way? Thanks!",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/646/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
534629631,MDU6SXNzdWU1MzQ2Mjk2MzE=,650,Add a glossary to the documentation,9599,simonw,open,0,,,,,3,2019-12-09T00:23:45Z,2022-01-13T22:04:56Z,,OWNER,,"Call it `glossary.rst` - it can use a definition list something like this:
```rst
.. _glossary:
Glossary
========
Term
    A definition of the term.

Another term
    Another definition.
```",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/650/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
546073980,MDU6SXNzdWU1NDYwNzM5ODA=,74,Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column,15092,jayvdb,open,0,,,,,3,2020-01-07T04:35:50Z,2020-01-12T07:21:17Z,,CONTRIBUTOR,,"openSUSE 15.1 is using python 3.6.5 and click-7.0 , however it has test failures while openSUSE Tumbleweed on py37 passes.
Most fail on the cli exit code like
```py
[ 74s] =================================== FAILURES ===================================
[ 74s] _________________________________ test_tables __________________________________
[ 74s]
[ 74s] db_path = '/tmp/pytest-of-abuild/pytest-0/test_tables0/test.db'
[ 74s]
[ 74s] def test_tables(db_path):
[ 74s] result = CliRunner().invoke(cli.cli, [""tables"", db_path])
[ 74s] > assert '[{""table"": ""Gosh""},\n {""table"": ""Gosh2""}]' == result.output.strip()
[ 74s] E assert '[{""table"": ""...e"": ""Gosh2""}]' == ''
[ 74s] E - [{""table"": ""Gosh""},
[ 74s] E - {""table"": ""Gosh2""}]
[ 74s]
[ 74s] tests/test_cli.py:28: AssertionError
```
packaging project at https://build.opensuse.org/package/show/home:jayvdb:py-new/python-sqlite-utils
I'll keep digging into this after I have github-to-sqlite working on Tumbleweed, as I'll need openSUSE Leap 15.1 working before I can submit this into the main python repo.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/74/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
573578548,MDU6SXNzdWU1NzM1Nzg1NDg=,89,Ability to customize columns used by extracts= feature,9599,simonw,open,0,,,,,3,2020-03-01T16:54:48Z,2020-10-16T19:17:50Z,,OWNER,,"@simonw any thoughts on allowing extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the ""value"" column (the documentation uses name), but when you use the `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named ""value"". I have an existing lookup table that I've populated with columns ""id"" and ""name"" as opposed to ""id"" and ""value"", and it seems I can't use `extracts=`, unless I'm missing something...
Initial thought on how to do this would be to allow the dictionary value to be a (table name, column name) tuple... so:
```
table = db.table(""trees"", extracts={""species_id"": (""Species"", ""name"")})
```
I haven't dug too much into the existing code yet, but does this make sense? Worth doing?
_Originally posted by @chrishas35 in https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503_",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/89/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
613422636,MDU6SXNzdWU2MTM0MjI2MzY=,760,Way of seeing full schema for a database,9599,simonw,open,0,,,,,3,2020-05-06T15:46:08Z,2020-05-06T23:49:06Z,,OWNER,,"I find myself wanting to quickly figure out all of the BLOB columns in a database.
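As a stopgap, that specific question can be answered by joining `sqlite_master` to the `pragma_table_info()` table-valued function (available in SQLite 3.16+) - a sketch:
```sql
select m.name as table_name, p.name as column_name
from sqlite_master m, pragma_table_info(m.name) p
where m.type = 'table' and upper(p.type) = 'BLOB';
```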
A `/-/schema` page showing the full schema (actually since it's per-database probably `/dbname/-/schema` or `/-/schema/dbname`) would be really handy.
It would need to be carefully constructed from various queries against `sqlite_master` - just doing `select * from sqlite_master where type='table'` isn't quite enough because I also want to show indexes, triggers etc.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/760/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
636511683,MDU6SXNzdWU2MzY1MTE2ODM=,830,Redesign register_facet_classes plugin hook,9599,simonw,open,0,,,3268330,Datasette 1.0,3,2020-06-10T20:03:27Z,2021-12-16T19:58:22Z,,OWNER,,"Nothing uses this plugin hook yet, so the design is not yet proven.
I'm going to build a real plugin against it and use that process to inform any design changes that may need to be made.
I'll add a warning about this to the documentation.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/830/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
639542974,MDU6SXNzdWU2Mzk1NDI5NzQ=,47,Fall back to FTS4 if FTS5 is not available,73579,hpk42,open,0,,,,,3,2020-06-16T10:11:23Z,2020-06-17T20:13:48Z,,NONE,,"got this with version 0.21.1 from pypi. twitter-to-sqlite auth worked but then ""twitter-to-sqlite user-timeline USER.db"" produced a traceback ending in ""no such module: FTS5"". ",206156866,twitter-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
642296989,MDU6SXNzdWU2NDIyOTY5ODk=,856,Consider pagination of canned queries,9599,simonw,open,0,,,,,3,2020-06-20T03:15:59Z,2021-05-21T14:22:41Z,,OWNER,,The new `canned_queries()` plugin hook from #852 combined with plugins like https://github.com/simonw/datasette-saved-queries could mean that some installations end up with hundreds or even thousands of canned queries. I should consider pagination or some other way of ensuring that this doesn't cause performance problems for Datasette.,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/856/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
644161221,MDU6SXNzdWU2NDQxNjEyMjE=,117,Support for compound (composite) foreign keys,9599,simonw,open,0,,,,,3,2020-06-23T21:33:42Z,2020-06-23T21:40:31Z,,OWNER,,"It turns out SQLite supports composite foreign keys: https://www.sqlite.org/foreignkeys.html#fk_composite
Their example looks like this:
```sql
CREATE TABLE album(
albumartist TEXT,
albumname TEXT,
albumcover BINARY,
PRIMARY KEY(albumartist, albumname)
);
CREATE TABLE song(
songid INTEGER,
songartist TEXT,
songalbum TEXT,
songname TEXT,
FOREIGN KEY(songartist, songalbum) REFERENCES album(albumartist, albumname)
);
```
Here's what that looks like in sqlite-utils:
```
In [1]: import sqlite_utils
In [2]: import sqlite3
In [3]: conn = sqlite3.connect("":memory:"")
In [4]: conn
Out[4]: <sqlite3.Connection object at 0x...>
In [5]: conn.executescript(""""""
...: CREATE TABLE album(
...: albumartist TEXT,
...: albumname TEXT,
...: albumcover BINARY,
...: PRIMARY KEY(albumartist, albumname)
...: );
...:
...: CREATE TABLE song(
...: songid INTEGER,
...: songartist TEXT,
...: songalbum TEXT,
...: songname TEXT,
...: FOREIGN KEY(songartist, songalbum) REFERENCES album(albumartist, albumname)
...: );
...: """""")
Out[5]: <sqlite3.Cursor object at 0x...>
In [6]: db = sqlite_utils.Database(conn)
In [7]: db.tables
Out[7]:
[<Table album (albumartist, albumname, albumcover)>,
 <Table song (songid, songartist, songalbum, songname)>]
In [8]: db.tables[0].foreign_keys
Out[8]: []
In [9]: db.tables[1].foreign_keys
Out[9]:
[ForeignKey(table='song', column='songartist', other_table='album', other_column='albumartist'),
ForeignKey(table='song', column='songalbum', other_table='album', other_column='albumname')]
```
The table appears to have two separate foreign keys, when actually it has a single compound (composite) foreign key.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/117/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
646448486,MDExOlB1bGxSZXF1ZXN0NDQwNzM1ODE0,868,initial windows ci setup,702729,joshmgrant,open,0,,,,,3,2020-06-26T18:49:13Z,2021-07-10T23:41:43Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/868,Picking up the work done on #557 with a new PR. Seeing if I can get this working.,107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/868/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
652961907,MDU6SXNzdWU2NTI5NjE5MDc=,121,Improved (and better documented) support for transactions,9599,simonw,open,0,,,,,3,2020-07-08T04:56:51Z,2020-09-24T20:36:46Z,,OWNER,,"_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655283393_
We should put some thought into how this library supports and encourages smart use of transactions.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/121/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
694493566,MDU6SXNzdWU2OTQ0OTM1NjY=,16,Timeline view,9599,simonw,open,0,,,,,3,2020-09-06T19:13:58Z,2020-09-21T02:42:29Z,,MEMBER,,Ability to browse (and facet) by date.,197431109,dogsheep-beta,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/16/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
735852274,MDU6SXNzdWU3MzU4NTIyNzQ=,1082,DigitalOcean buildpack memory errors for large sqlite db?,39538958,justmars,open,0,,,,,3,2020-11-04T06:35:32Z,2020-11-04T19:35:44Z,,NONE,,"1. Have a sqlite db stored in Dropbox
2. Previously tried the Digital Ocean build pack minimal approach (e.g. Procfile, requirements.txt, bin/post_compile)
3. bin/post_compile with wget from Dropbox
4. download of large sqlite db is successful
5. log reveals that when building Docker container, Digital Ocean runs out of memory for 5gb+ sqlite db but works fine for 2gb+ sqlite db",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1082/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
741862364,MDU6SXNzdWU3NDE4NjIzNjQ=,1090,Custom widgets for canned query forms,9599,simonw,open,0,,,,,3,2020-11-12T19:21:07Z,2021-03-27T16:25:25Z,,OWNER,,"This is an idea that was cut from the first version of writable canned queries:
> I really want the option to use a ` |