html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app https://github.com/simonw/datasette/issues/1168#issuecomment-869076254,https://api.github.com/repos/simonw/datasette/issues/1168,869076254,MDEyOklzc3VlQ29tbWVudDg2OTA3NjI1NA==,2670795,2021-06-27T00:03:16Z,2021-06-27T00:05:51Z,CONTRIBUTOR,"> Related: Here's an implementation of a `get_metadata()` plugin hook by @brandonrobertz [next-LI@3fd8ce9](https://github.com/next-LI/datasette/commit/3fd8ce91f3108c82227bf65ff033929426c60437) Here's a plugin that implements metadata-within-DBs: [next-LI/datasette-live-config](https://github.com/next-LI/datasette-live-config) How it works: If a database has a `__metadata` table, then it gets parsed and included in the global metadata. It also implements a database-action hook with a UI for managing config. More context: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L109-L140","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388, https://github.com/simonw/datasette/issues/1384#issuecomment-869074701,https://api.github.com/repos/simonw/datasette/issues/1384,869074701,MDEyOklzc3VlQ29tbWVudDg2OTA3NDcwMQ==,2670795,2021-06-26T23:45:18Z,2021-06-26T23:45:37Z,CONTRIBUTOR,"> Here's where the plugin hook is called, demonstrating the `fallback=` argument: > > https://github.com/simonw/datasette/blob/05a312caf3debb51aa1069939923a49e21cd2bd1/datasette/app.py#L426-L472 > > I'm not convinced of the use-case for passing `fallback=` to the hook here - is there a reason a plugin might care whether fallback is `True` or `False`, seeing as the `metadata()` method already respects that fallback logic on line 459? I think you're right. I can't think of a reason why the plugin would care about the `fallback` parameter since plugins are currently mandated to return a full, global metadata dict.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135, https://github.com/simonw/datasette/issues/1384#issuecomment-869074182,https://api.github.com/repos/simonw/datasette/issues/1384,869074182,MDEyOklzc3VlQ29tbWVudDg2OTA3NDE4Mg==,2670795,2021-06-26T23:37:42Z,2021-06-26T23:37:42Z,CONTRIBUTOR,"> > Hmmm... that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries. > > @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try? > > Answering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160 > > That's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it. > > This is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem. > > But... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead. 
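(For concreteness, the blocking pattern under discussion looks roughly like this - a sketch, not the exact plugin code:)

```python
# sketch of the synchronous pattern being discussed - not the plugin's exact code
import sqlite3

_conn = sqlite3.connect('live_config.db', check_same_thread=False)  # persistent connection

def get_metadata(datasette, key, database, table):
    # a sync plugin hook: there is no event loop here to await anything
    rows = _conn.execute('select key, value from __metadata').fetchall()
    return {k: v for k, v in rows}
```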
_Ideally_ this hook would be asynchronous, but when I started down that path I quickly realized how large of a change this would be, since metadata gets used synchronously across the entire Datasette codebase. (And calling async code from sync is non-trivial.) In my live-configuration implementation I use synchronous reads using a persistent sqlite connection. This works pretty well in practice, but I agree it's limiting. My thinking around this was to go with the path of least change as `Datasette.metadata()` is a critical core function.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135, https://github.com/simonw/datasette/pull/1368#issuecomment-865204472,https://api.github.com/repos/simonw/datasette/issues/1368,865204472,MDEyOklzc3VlQ29tbWVudDg2NTIwNDQ3Mg==,2670795,2021-06-21T17:11:37Z,2021-06-21T17:11:37Z,CONTRIBUTOR,If this is a concept ACK then I will move onto fixing the tests (adding new ones) and updating the documentation for the new plugin hook.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",913865304, https://github.com/simonw/sqlite-utils/issues/278#issuecomment-864621099,https://api.github.com/repos/simonw/sqlite-utils/issues/278,864621099,MDEyOklzc3VlQ29tbWVudDg2NDYyMTA5OQ==,601708,2021-06-20T22:39:57Z,2021-06-20T22:39:57Z,CONTRIBUTOR,"Fair. I looked into it, it looks like it could be done, but it would be _a bit ugly_. I can upload and link a gist of my exploration. **Click** can parse a first argument while still recognizing it as a sub-command keyword. From there, the program could: 1. ignore it preemptively if it matches a sub-command 2. and/or check if a (db) file exists at the path. It would then also need to set a shared db argument variable. Click also makes it easy to parse arguments from environment variables. If you're amenable, I may submit a patch for only that, which would update each sub-command to check for a DB/SQLITE_UTILS_DB environment variable. The goal would be usage that looks like: `DB=./convenient.db sqlite-utils [operation] [args]`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923697888, https://github.com/simonw/sqlite-utils/issues/272#issuecomment-861944202,https://api.github.com/repos/simonw/sqlite-utils/issues/272,861944202,MDEyOklzc3VlQ29tbWVudDg2MTk0NDIwMg==,25778,2021-06-16T01:41:03Z,2021-06-16T01:41:03Z,CONTRIBUTOR,"So, I do things like this a lot, too. I like the idea of piping in from stdin. Something like this would be nice to do in a makefile: ```sh cat file.csv | sqlite-utils --csv --table data - 'SELECT * FROM data WHERE col=""whatever""' > filtered.csv ``` If you assumed that you're always piping out the same format you're piping in, the option names don't have to change. 
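Under the hood that could be roughly the following, using sqlite-utils' existing Python API (a sketch, assuming CSV on stdin and at least one matching row):

```python
# rough sketch of the piping behaviour described above
import csv
import sys

import sqlite_utils

db = sqlite_utils.Database(memory=True)            # throwaway in-memory database
db['data'].insert_all(csv.DictReader(sys.stdin))   # load the piped-in CSV

rows = list(db.query('select * from data where col = :v', {'v': 'whatever'}))
writer = csv.DictWriter(sys.stdout, fieldnames=rows[0].keys())  # assumes >= 1 row
writer.writeheader()
writer.writerows(rows)
```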
Depends how much you want to change formats.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",921878733, https://github.com/simonw/datasette/pull/1130#issuecomment-861497548,https://api.github.com/repos/simonw/datasette/issues/1130,861497548,MDEyOklzc3VlQ29tbWVudDg2MTQ5NzU0OA==,3243482,2021-06-15T13:27:48Z,2021-06-15T13:27:48Z,CONTRIBUTOR,"There's a workaround: https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ and a future fix: https://css-tricks.com/safari-15-new-ui-theme-colors-and-a-css-tricks-cameo/","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",756876238, https://github.com/simonw/datasette/pull/1370#issuecomment-857298526,https://api.github.com/repos/simonw/datasette/issues/1370,857298526,MDEyOklzc3VlQ29tbWVudDg1NzI5ODUyNg==,25778,2021-06-09T01:18:59Z,2021-06-09T01:18:59Z,CONTRIBUTOR,"I'm happy to grab some or all of these in this PR, if you want. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",914130834, https://github.com/simonw/datasette/pull/1368#issuecomment-856182547,https://api.github.com/repos/simonw/datasette/issues/1368,856182547,MDEyOklzc3VlQ29tbWVudDg1NjE4MjU0Nw==,2670795,2021-06-07T18:59:47Z,2021-06-07T23:04:25Z,CONTRIBUTOR,"Note that if we went with a ""update_metadata"" hook, the hook signature would look something like this (it would return nothing): ``` update_metadata( datasette=self, metadata=metadata, key=key, database=database, table=table, fallback=fallback ) ``` The Datasette function `_metadata_recursive_update(self, orig, updated)` would disappear into the plugins. Doing this, though, we'd lose the easy ability to make the local metadata.yaml immutable (since we'd no longer have the recursive update).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",913865304, https://github.com/simonw/datasette/issues/1356#issuecomment-853895159,https://api.github.com/repos/simonw/datasette/issues/1356,853895159,MDEyOklzc3VlQ29tbWVudDg1Mzg5NTE1OQ==,25778,2021-06-03T14:03:59Z,2021-06-03T14:03:59Z,CONTRIBUTOR,"(Putting thoughts here to keep the conversation in one place.) I think using datasette for this use-case is the right approach. I usually have both datasette and sqlite-utils installed in the same project, and that's where I'm trying out queries, so it probably makes the most sense to have datasette also manage the output (and maybe the input, too). It seems like both `--get` and `--query` could work better as subcommands, rather than options, if you're looking at building out a full CLI experience in datasette. It would give a cleaner separation in what you're trying to do and let each have its own dedicated options. 
So something like this: ```sh # run an arbitrary query datasette query covid.db ""select * from ny_times_us_counties limit 1"" --format yaml # run a canned query datasette get covid.db some-canned-query --format yaml ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",910092577, https://github.com/simonw/datasette/pull/1348#issuecomment-850077261,https://api.github.com/repos/simonw/datasette/issues/1348,850077261,MDEyOklzc3VlQ29tbWVudDg1MDA3NzI2MQ==,10801138,2021-05-28T03:05:38Z,2021-05-28T03:05:38Z,CONTRIBUTOR,"Note, the CVEs are probably resolvable with this https://github.com/simonw/datasette/pull/1296 . My experience is that Ubuntu seems to manage these better? Though that is surprising :/ ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",904598267, https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-846413174,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59,846413174,MDEyOklzc3VlQ29tbWVudDg0NjQxMzE3NA==,631242,2021-05-22T14:06:19Z,2021-05-22T14:06:19Z,CONTRIBUTOR,Thanks Simon!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771872303, https://github.com/simonw/datasette/issues/1236#issuecomment-842798043,https://api.github.com/repos/simonw/datasette/issues/1236,842798043,MDEyOklzc3VlQ29tbWVudDg0Mjc5ODA0Mw==,192568,2021-05-18T03:28:25Z,2021-05-18T03:28:25Z,CONTRIBUTOR,That corner handle looks like a hamburger menu to me. Note that the default resize handle is not limited to two-way resize: http://jsfiddle.net/LLrh7Lte/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",812228314, https://github.com/simonw/datasette/pull/1318#issuecomment-838449572,https://api.github.com/repos/simonw/datasette/issues/1318,838449572,MDEyOklzc3VlQ29tbWVudDgzODQ0OTU3Mg==,49699333,2021-05-11T13:12:30Z,2021-05-11T13:12:30Z,CONTRIBUTOR,Superseded by #1321.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",876431852, https://github.com/simonw/datasette/issues/1280#issuecomment-837166862,https://api.github.com/repos/simonw/datasette/issues/1280,837166862,MDEyOklzc3VlQ29tbWVudDgzNzE2Njg2Mg==,10801138,2021-05-10T19:07:46Z,2021-05-10T19:07:46Z,CONTRIBUTOR,"Do you have a list of sqlite versions you want to test against? One cool thing I saw recently (that we started using) was using `import docker` within python, and then writing pytest functions which executed against the container [setup](https://github.com/StatCan/kubeflow-containers/blob/3c7dcfb5e7188982fb8ebcded82e84292720f720/conftest.py#L85) [example](https://github.com/StatCan/kubeflow-containers/blob/master/tests/jupyterlab-cpu/test_julia.py#L8-L18) The inspiration for this came from the [jupyter docker-stacks](https://github.com/jupyter/docker-stacks/blob/09fb66007615ea68d9bce8f8e1a2cf9402f1e432/test/test_packages.py#L107) So off the top of my head, could look at building the container with different sqlite versions as a build-arg, then run tests against the containers. 
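Something like this sketch is the shape I mean (image tags, versions and the build process are hypothetical):

```python
# sketch: run a version check against containers built with different SQLite
# versions - the image tags here are hypothetical pre-built images
import docker
import pytest

client = docker.from_env()

@pytest.fixture(params=['3.27.2', '3.34.1'])
def reported_sqlite_version(request):
    image = f'datasette-test:sqlite-{request.param}'
    logs = client.containers.run(
        image,
        ['python', '-c', 'import sqlite3; print(sqlite3.sqlite_version)'],
        remove=True,
    )
    return logs.decode().strip()

def test_sqlite_version_is_reported(reported_sqlite_version):
    assert reported_sqlite_version.startswith('3.')
```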
Just brainstorming though","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",842862708, https://github.com/simonw/datasette/pull/1296#issuecomment-835491318,https://api.github.com/repos/simonw/datasette/issues/1296,835491318,MDEyOklzc3VlQ29tbWVudDgzNTQ5MTMxOA==,10801138,2021-05-08T19:59:01Z,2021-05-08T19:59:01Z,CONTRIBUTOR,"I have also found that ubuntu has fewer vulnerabilities than the buster based images. ``` ➜ ~ docker pull python:3-buster ➜ ~ trivy image python:3-buster | head 2021-04-28T17:14:29.313-0400 INFO Detecting Debian vulnerabilities... 2021-04-28T17:14:29.393-0400 INFO Trivy skips scanning programming language libraries because no supported file was detected python:3-buster (debian 10.9) ============================= Total: 1621 (UNKNOWN: 13, LOW: 1106, MEDIUM: 343, HIGH: 145, CRITICAL: 14) +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ | LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION | TITLE | +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829, https://github.com/simonw/datasette/issues/1300#issuecomment-833132571,https://api.github.com/repos/simonw/datasette/issues/1300,833132571,MDEyOklzc3VlQ29tbWVudDgzMzEzMjU3MQ==,3243482,2021-05-06T00:16:50Z,2021-05-06T00:18:05Z,CONTRIBUTOR,"I ended up using some JS as a workaround. First, add a JS file in `metadata.yaml`: ```yaml extra_js_urls: - '/static/app.js' ``` then inside the script, find the blob download links and replace `.blob` extension in the url with `.jpg` and replace the links with `` elements. You need to add an output formatter to serve `BLOB` columns as JPG. You can find the code in the first post. ~~Replacing `.blob` -> `.jpg` might not even be necessary, because browsers only care about the mime type, so you only need to serve the binary content with the right `content-type` header.~~. You need to replace the extension, otherwise the output renderer will not run. ```js window.addEventListener('DOMContentLoaded', () => { function renderBlobImages() { document.querySelectorAll('a[href*="".blob""]').forEach(el => { const img = document.createElement('img'); img.className = 'blob-image'; img.loading = 'lazy'; img.src = el.href.replace('.blob', '.jpg'); el.parentElement.replaceChild(img, el); }); } renderBlobImages(); }); ``` while this does the job, I'd prefer handling this in Python where it belongs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833, https://github.com/simonw/datasette/pull/1313#issuecomment-829352402,https://api.github.com/repos/simonw/datasette/issues/1313,829352402,MDEyOklzc3VlQ29tbWVudDgyOTM1MjQwMg==,27856297,2021-04-29T15:47:23Z,2021-04-29T15:47:23Z,CONTRIBUTOR,This pull request will no longer be automatically closed when a new version is found as this pull request was created by Dependabot Preview and this repo is using a `version: 2` config file. 
You can close this pull request and let Dependabot re-create it the next time it checks for updates.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",871046111, https://github.com/simonw/datasette/pull/1311#issuecomment-829260725,https://api.github.com/repos/simonw/datasette/issues/1311,829260725,MDEyOklzc3VlQ29tbWVudDgyOTI2MDcyNQ==,27856297,2021-04-29T13:58:08Z,2021-04-29T13:58:08Z,CONTRIBUTOR,Superseded by #1313.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",870227815, https://github.com/simonw/datasette/pull/1309#issuecomment-828679943,https://api.github.com/repos/simonw/datasette/issues/1309,828679943,MDEyOklzc3VlQ29tbWVudDgyODY3OTk0Mw==,27856297,2021-04-28T18:26:03Z,2021-04-28T18:26:03Z,CONTRIBUTOR,Superseded by #1311.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",869237023, https://github.com/simonw/datasette/issues/1298#issuecomment-823093669,https://api.github.com/repos/simonw/datasette/issues/1298,823093669,MDEyOklzc3VlQ29tbWVudDgyMzA5MzY2OQ==,192568,2021-04-20T08:38:10Z,2021-04-20T08:40:22Z,CONTRIBUTOR,"@dracos I appreciate your ideas! 1. Ooh, I like this: https://codepen.io/astro87/pen/LYRQNbd?editors=1100 (That's the codepen from your linked stackoverflow.) 2. I worry that a max height will be a problem when my facets are open. (I've got 35 active ingredients, and so I've set the default_facet_size to 35.) 3. I don't understand this one. I'm observing the screenshot... very helpful! (Ah, okay, TR = Top Right and BR = Bottom Right. Absolute grid refers to position style.) All the scroll bars look a little wonky to me. I've also got a lot of facets, and prefer the extra horizontal space so that not as many facets disappear below the fold. My site also has end users... some will be on mobile... not sure what the absolute grid would do there... 4. (I still think a hover-arrow that scrolls upon click would help, too...) But meanwhile, I'm going to go ahead and see if I can apply that shadow. (Never would've thought of that.) Hmmm... I'm not an SCSS person. This looks helpful! https://jsonformatter.org/scss-to-css","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501, https://github.com/simonw/datasette/issues/1300#issuecomment-821971059,https://api.github.com/repos/simonw/datasette/issues/1300,821971059,MDEyOklzc3VlQ29tbWVudDgyMTk3MTA1OQ==,3243482,2021-04-18T10:42:19Z,2021-04-18T10:42:19Z,CONTRIBUTOR,"If there's a simpler way to generate a URL for a specific row, I'm all ears","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833, https://github.com/simonw/datasette/issues/1300#issuecomment-821970965,https://api.github.com/repos/simonw/datasette/issues/1300,821970965,MDEyOklzc3VlQ29tbWVudDgyMTk3MDk2NQ==,3243482,2021-04-18T10:41:15Z,2021-04-18T10:41:15Z,CONTRIBUTOR,"If I change the hookspec and add a row parameter, it works https://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/hookspecs.py#L58 ``` def render_cell(value, column, row, table, database, datasette): ``` But to generate a URL, I need the primary keys, but I can't call `pks = await db.primary_keys(table)` inside a sync function. 
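To illustrate the dead end (sketch only):

```python
# sketch of the problem: render_cell is a sync hook, but the database APIs are async
def render_cell(value, column, row, table, database, datasette):
    db = datasette.get_database(database)
    pks = db.primary_keys(table)  # returns a coroutine object - nothing here can await it
```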
I can't call `datasette.utils.detect_primary_keys` either, because the db connection is not publicly exposed (AFAICT). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833, https://github.com/simonw/datasette/pull/1296#issuecomment-819467759,https://api.github.com/repos/simonw/datasette/issues/1296,819467759,MDEyOklzc3VlQ29tbWVudDgxOTQ2Nzc1OQ==,295329,2021-04-14T12:07:37Z,2021-04-14T12:11:36Z,CONTRIBUTOR,"> Removing /var/lib/apt and /var/lib/dpkg makes apt and dpkg unusable in images based on this one. Running `apt-get clean` and removing /var/lib/apt/lists achieves similar size savings. This PR helps me, as removing the /var/lib/apt and /var/lib/dpkg directories breaks my ability to add packages when using `datasetteproject/datasette:0.56` as a base image. ---- Short-term workaround for me was to use this in my Dockerfile: ``` FROM datasetteproject/datasette:0.56 RUN mkdir -p /var/lib/apt RUN mkdir -p /var/lib/dpkg RUN mkdir -p /var/lib/dpkg/updates RUN mkdir -p /var/lib/dpkg/info RUN touch /var/lib/dpkg/status RUN apt-get update # and install your packages etc ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829, https://github.com/simonw/datasette/issues/830#issuecomment-817414881,https://api.github.com/repos/simonw/datasette/issues/830,817414881,MDEyOklzc3VlQ29tbWVudDgxNzQxNDg4MQ==,192568,2021-04-12T01:06:34Z,2021-04-12T01:07:27Z,CONTRIBUTOR,"Related: #1285, including arguments for natural breaks, equal interval, etc. modeled after choropleth map legends.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",636511683, https://github.com/simonw/datasette/issues/1286#issuecomment-815978405,https://api.github.com/repos/simonw/datasette/issues/1286,815978405,MDEyOklzc3VlQ29tbWVudDgxNTk3ODQwNQ==,192568,2021-04-08T16:47:29Z,2021-04-10T03:59:00Z,CONTRIBUTOR,"This worked for me: `{{ cell.value | replace('"", ""','; ') | replace('[\""','') | replace('\""]','')}}` I'm sure there is a prettier (and more flexible) way, but for now, this is ever-so-much more pleasant to look at. ------ AFTER: ------ BEFORE: Note: I didn't figure out how to have one item have no semicolon, while multi-items close with a semicolon, but this is good enough for now. I also didn't figure out how to set up a new jinja filter. I don't want to add to /datasette/utils/__init__.py as I assume that would get overwritten when upgrading datasette. Having a starter guide on creating jinja filters in datasette would be helpful. (The jinja documentation isn't datasette-specific enough for me to quite nail it.) ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",849220154, https://github.com/simonw/datasette/issues/502#issuecomment-812813732,https://api.github.com/repos/simonw/datasette/issues/502,812813732,MDEyOklzc3VlQ29tbWVudDgxMjgxMzczMg==,5413548,2021-04-03T05:16:54Z,2021-04-03T05:16:54Z,CONTRIBUTOR,"For what it's worth, if anyone finds this in the future, I was having the same issue. 
After digging through the code, it turned out that the database download is only available if the db is served in immutable mode, so `datasette serve -i xyz.db` rather than the doc's quickstart recommendation of `datasette serve xyz.db`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",453131917, https://github.com/simonw/datasette/issues/1286#issuecomment-812679221,https://api.github.com/repos/simonw/datasette/issues/1286,812679221,MDEyOklzc3VlQ29tbWVudDgxMjY3OTIyMQ==,192568,2021-04-02T19:34:01Z,2021-04-02T19:34:01Z,CONTRIBUTOR,"This shows the city in a different color (and not the comma), but I get the idea, and I like it. (Ooh, could be nice to have the gear have an option in array fields to show as bullets or commas or semicolons...)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",849220154, https://github.com/simonw/datasette/issues/1284#issuecomment-810779928,https://api.github.com/repos/simonw/datasette/issues/1284,810779928,MDEyOklzc3VlQ29tbWVudDgxMDc3OTkyOA==,192568,2021-03-31T05:40:12Z,2021-03-31T05:40:12Z,CONTRIBUTOR,"Maybe the addition of two template files: 'one_database_index.html' and 'one_table_index.html' would be a better idea than the documentation diff idea. (They could include commented instructions to rename the preferred template 'index.html', along with any other necessary guidance.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",845794436, https://github.com/simonw/datasette/issues/1274#issuecomment-805214307,https://api.github.com/repos/simonw/datasette/issues/1274,805214307,MDEyOklzc3VlQ29tbWVudDgwNTIxNDMwNw==,7476523,2021-03-23T20:12:29Z,2021-03-23T20:12:29Z,CONTRIBUTOR,"One issue I could see with adding first-class support for metadata in hjson format is that this would require adding an additional dependency to handle this, for a feature that would be unused by many users. I wonder if this could fit in as a plugin instead; if a hook existed for loading metadata (maybe as part of https://github.com/simonw/datasette/issues/860) the metadata could then come from any source, as specified by plugins, e.g. hjson, toml, XML, a database table etc. Until/unless this exists, a few ideas for how you could add comments: - Using YAML as you suggest. - A common pattern is adding a `""comment""` key for comments to any object in JSON - I don't think including an unnecessary key like this would break anything in Datasette, but not certain. - You could use another tool as a preprocessor for your JSON metadata - e.g. hjson or Jsonnet. You'd write the metadata in that format, and then convert that into JSON to actually use as your final metadata.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",839008371, https://github.com/simonw/datasette/issues/1153#issuecomment-804640440,https://api.github.com/repos/simonw/datasette/issues/1153,804640440,MDEyOklzc3VlQ29tbWVudDgwNDY0MDQ0MA==,192568,2021-03-23T05:58:20Z,2021-03-23T05:58:20Z,CONTRIBUTOR,Could there be a little widget that offers conversion from one to the other? 
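The conversion itself is tiny - something like this sketch, in either direction:

```python
# sketch: convert metadata.json to metadata.yaml (the reverse is symmetrical)
import json

import yaml  # PyYAML

with open('metadata.json') as fp:
    metadata = json.load(fp)

with open('metadata.yaml', 'w') as fp:
    yaml.safe_dump(metadata, fp, sort_keys=False)
```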
,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771202454, https://github.com/simonw/datasette/pull/1159#issuecomment-804639427,https://api.github.com/repos/simonw/datasette/issues/1159,804639427,MDEyOklzc3VlQ29tbWVudDgwNDYzOTQyNw==,192568,2021-03-23T05:56:02Z,2021-03-23T05:56:02Z,CONTRIBUTOR,"With just three facets, I like it, but it does take more horizontal space. Would be nice to have a switch somewhere, enabling either original compact option or this proposed more-readable option. Also some control over word wrap (width setting) and facet spacing. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",774332247, https://github.com/simonw/datasette/issues/164#issuecomment-804541064,https://api.github.com/repos/simonw/datasette/issues/164,804541064,MDEyOklzc3VlQ29tbWVudDgwNDU0MTA2NA==,192568,2021-03-23T02:45:12Z,2021-03-23T02:45:12Z,CONTRIBUTOR,"""datasette skeleton"" feature removed #476","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",280013907, https://github.com/simonw/datasette/issues/163#issuecomment-804539729,https://api.github.com/repos/simonw/datasette/issues/163,804539729,MDEyOklzc3VlQ29tbWVudDgwNDUzOTcyOQ==,192568,2021-03-23T02:41:14Z,2021-03-23T02:41:14Z,CONTRIBUTOR,"I'm visiting old issues for context while learning datasette. Let me know if okay to make the occasional comment like this one. querystring argument now located at: https://docs.datasette.io/en/latest/settings.html#sql-time-limit-ms","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",279547886, https://github.com/simonw/datasette/issues/88#issuecomment-804471733,https://api.github.com/repos/simonw/datasette/issues/88,804471733,MDEyOklzc3VlQ29tbWVudDgwNDQ3MTczMw==,192568,2021-03-22T23:46:36Z,2021-03-22T23:46:36Z,CONTRIBUTOR,Google Map API limits seem to prevent https://nhs-england-map.netlify.com from being a working demo.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273775212, https://github.com/simonw/datasette/issues/1149#issuecomment-804415619,https://api.github.com/repos/simonw/datasette/issues/1149,804415619,MDEyOklzc3VlQ29tbWVudDgwNDQxNTYxOQ==,192568,2021-03-22T21:43:16Z,2021-03-22T21:43:16Z,CONTRIBUTOR,Sounds like a good idea.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",769520939, https://github.com/simonw/datasette/issues/942#issuecomment-803631102,https://api.github.com/repos/simonw/datasette/issues/942,803631102,MDEyOklzc3VlQ29tbWVudDgwMzYzMTEwMg==,192568,2021-03-21T17:48:42Z,2021-03-21T17:48:42Z,CONTRIBUTOR,"I like this idea. 
Though it might be nice to have some kind of automated system from database to file, so that developers could easily track diffs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681334912, https://github.com/simonw/datasette/issues/1265#issuecomment-802923254,https://api.github.com/repos/simonw/datasette/issues/1265,802923254,MDEyOklzc3VlQ29tbWVudDgwMjkyMzI1NA==,7476523,2021-03-19T15:39:15Z,2021-03-19T15:39:15Z,CONTRIBUTOR,"It doesn't use basic auth, but you can put a whole datasette instance, or parts of this, behind a username/password prompt using https://github.com/simonw/datasette-auth-passwords","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",836123030, https://github.com/simonw/datasette/issues/1262#issuecomment-802095132,https://api.github.com/repos/simonw/datasette/issues/1262,802095132,MDEyOklzc3VlQ29tbWVudDgwMjA5NTEzMg==,7476523,2021-03-18T16:37:45Z,2021-03-18T16:37:45Z,CONTRIBUTOR,"This sounds like a good use case for a plugin, since this will only be useful for a subset of Datasette users. It shouldn't be too difficult to add a button to do this with the available plugin hooks - have you taken a look at https://docs.datasette.io/en/latest/writing_plugins.html?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",834602299, https://github.com/simonw/datasette/issues/236#issuecomment-799003172,https://api.github.com/repos/simonw/datasette/issues/236,799003172,MDEyOklzc3VlQ29tbWVudDc5OTAwMzE3Mg==,21148,2021-03-14T23:42:57Z,2021-03-14T23:42:57Z,CONTRIBUTOR,"Oh, and the container image can be up to 10GB, so the EFS step might not be needed except for pretty big stuff.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500, https://github.com/simonw/datasette/issues/236#issuecomment-799002993,https://api.github.com/repos/simonw/datasette/issues/236,799002993,MDEyOklzc3VlQ29tbWVudDc5OTAwMjk5Mw==,21148,2021-03-14T23:41:51Z,2021-03-14T23:41:51Z,CONTRIBUTOR,"Now that [Lambda supports Docker](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/), this probably is a bit easier and may be able to build on top of the existing package command. There are weirdnesses in how the command actually gets invoked; the [aws-lambda-python image](https://hub.docker.com/r/amazon/aws-lambda-python) shows a bit of that. 
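One possible shape for that entry point (a sketch assuming the third-party Mangum ASGI adapter - not something Datasette ships):

```python
# sketch: exposing Datasette's ASGI app as a Lambda handler via Mangum (third-party)
from datasette.app import Datasette
from mangum import Mangum

datasette = Datasette(['fixtures.db'])
handler = Mangum(datasette.app())  # configure the Lambda entry point as module.handler
```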
So Datasette would probably need some sort of Lambda-specific entry point to make this work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500, https://github.com/simonw/datasette/pull/1256#issuecomment-795112935,https://api.github.com/repos/simonw/datasette/issues/1256,795112935,MDEyOklzc3VlQ29tbWVudDc5NTExMjkzNQ==,6371750,2021-03-10T08:59:45Z,2021-03-10T08:59:45Z,CONTRIBUTOR,"Sorry, I meant ""minor typo"" not ""minor type"".","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",827341657, https://github.com/simonw/datasette/issues/766#issuecomment-791509910,https://api.github.com/repos/simonw/datasette/issues/766,791509910,MDEyOklzc3VlQ29tbWVudDc5MTUwOTkxMA==,6371750,2021-03-05T15:57:35Z,2021-03-05T16:35:21Z,CONTRIBUTOR,"Hello, I have the same wildcards search problems with an instance of Datasette. http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwerz&_sort=rowid is OK but http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwe* is not (FTS is activated on ""Reference"" ""IntituleAnalyse"" ""NomDuProducteur"" ""PresentationDuContenu"" ""Notes""). Notice that a SQL query as below launched directly from SQLite in the server's shell, retrieves results. `select * from fonds_auguste_dupouy_1872_1967_fts where IntituleAnalyse MATCH ""gwe*"";` Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",617323873, https://github.com/simonw/datasette/issues/1238#issuecomment-789186458,https://api.github.com/repos/simonw/datasette/issues/1238,789186458,MDEyOklzc3VlQ29tbWVudDc4OTE4NjQ1OA==,198537,2021-03-02T20:19:30Z,2021-03-02T20:19:30Z,CONTRIBUTOR,A custom `templates/index.html` seems to work and custom `pages` as a workaround with moving them to `pages/base_url_dir`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813899472, https://github.com/simonw/sqlite-utils/issues/242#issuecomment-787121933,https://api.github.com/repos/simonw/sqlite-utils/issues/242,787121933,MDEyOklzc3VlQ29tbWVudDc4NzEyMTkzMw==,25778,2021-02-27T19:18:57Z,2021-02-27T19:18:57Z,CONTRIBUTOR,"I think HTTPX gets it exactly right, with a clear separation between sync and async clients, each with a basically identical API. 
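For sqlite-utils that might eventually look something like this (entirely hypothetical - there is no `AsyncDatabase` today):

```python
# hypothetical sketch mirroring httpx's Client / AsyncClient split;
# sqlite_utils.AsyncDatabase does not exist - this only illustrates the idea
import sqlite_utils

def sync_usage():
    db = sqlite_utils.Database('data.db')  # the sync API as it exists today
    return list(db.query('select * from entries'))

async def async_usage():
    adb = sqlite_utils.AsyncDatabase('data.db')  # hypothetical async twin
    return [row async for row in adb.query('select * from entries')]
```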
(I'm about to switch [feed-to-sqlite](https://github.com/eyeseast/feed-to-sqlite) over to it, from Requests, to eventually make way for async support.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",817989436, https://github.com/simonw/datasette/issues/1212#issuecomment-782430028,https://api.github.com/repos/simonw/datasette/issues/1212,782430028,MDEyOklzc3VlQ29tbWVudDc4MjQzMDAyOA==,4488943,2021-02-19T22:54:13Z,2021-02-19T22:54:13Z,CONTRIBUTOR,I will close this issue since it appears only in my particular setup.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797651831, https://github.com/simonw/datasette/pull/1229#issuecomment-782053455,https://api.github.com/repos/simonw/datasette/issues/1229,782053455,MDEyOklzc3VlQ29tbWVudDc4MjA1MzQ1NQ==,295329,2021-02-19T12:47:19Z,2021-02-19T12:47:19Z,CONTRIBUTOR,I believe this PR and #1031 are related and fix the same issue.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810507413, https://github.com/simonw/datasette/issues/1220#issuecomment-778439617,https://api.github.com/repos/simonw/datasette/issues/1220,778439617,MDEyOklzc3VlQ29tbWVudDc3ODQzOTYxNw==,7476523,2021-02-12T20:33:27Z,2021-02-12T20:33:27Z,CONTRIBUTOR,"That Docker command will mount your current directory inside the Docker container at `/mnt` - so you shouldn't need to change anything locally, just run ``` docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db ``` and it will use the `fixtures.db` file within your current directory","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116, https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60,770069864,MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA==,22578954,2021-01-29T21:52:05Z,2021-02-12T18:29:43Z,CONTRIBUTOR,"For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called `gh-organization`. The GitHub owner id of gh-organization is `123456789`. ```bash github-to-sqlite repos github.db gh-organization ``` I'm on a Windows computer running Git Bash to be able to use the `|` command. This works for me: ```bash sqlite3 github.db ""SELECT full_name FROM repos WHERE owner = '123456789';"" | tr '\n\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; } ``` On a pure Linux system I think this would work because the new line character is normally `\n`: ```bash sqlite3 github.db ""SELECT full_name FROM repos WHERE owner = '123456789';"" | tr '\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; } ``` As expected I ran into rate limit issues #51 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797097140, https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778246347,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,778246347,MDEyOklzc3VlQ29tbWVudDc3ODI0NjM0Nw==,41546558,2021-02-12T15:00:43Z,2021-02-12T15:00:43Z,CONTRIBUTOR,"Yes, Big Sur Photos database doesn't have `ZGENERICASSET` table. 
PR #31 will fix this.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729, https://github.com/simonw/datasette/issues/1220#issuecomment-777927946,https://api.github.com/repos/simonw/datasette/issues/1220,777927946,MDEyOklzc3VlQ29tbWVudDc3NzkyNzk0Ng==,7476523,2021-02-12T02:29:54Z,2021-02-12T02:29:54Z,CONTRIBUTOR,"According to https://github.com/simonw/datasette/blob/master/docs/installation.rst#using-docker it should be ``` docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db ``` This uses `/mnt/fixtures.db` whereas you're using `fixtures.db` - did you try using this path instead?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116, https://github.com/simonw/datasette/issues/1200#issuecomment-777132761,https://api.github.com/repos/simonw/datasette/issues/1200,777132761,MDEyOklzc3VlQ29tbWVudDc3NzEzMjc2MQ==,7476523,2021-02-11T00:29:52Z,2021-02-11T00:29:52Z,CONTRIBUTOR,I'm probably missing something but what's the use case here - what would this offer over adding `limit 10` to the query?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",792890765, https://github.com/simonw/datasette/issues/1208#issuecomment-774286962,https://api.github.com/repos/simonw/datasette/issues/1208,774286962,MDEyOklzc3VlQ29tbWVudDc3NDI4Njk2Mg==,4488943,2021-02-05T21:02:39Z,2021-02-05T21:02:39Z,CONTRIBUTOR,@simonw could you please take a look at the PR 1211 that fixes this issue?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",794554881, https://github.com/simonw/datasette/issues/1212#issuecomment-772007663,https://api.github.com/repos/simonw/datasette/issues/1212,772007663,MDEyOklzc3VlQ29tbWVudDc3MjAwNzY2Mw==,4488943,2021-02-02T21:36:56Z,2021-02-02T21:36:56Z,CONTRIBUTOR,"How do you get 4-5 minutes? I run my tests in WSL 2, so may be i need to try a real linux VM.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797651831, https://github.com/simonw/datasette/pull/1211#issuecomment-771127458,https://api.github.com/repos/simonw/datasette/issues/1211,771127458,MDEyOklzc3VlQ29tbWVudDc3MTEyNzQ1OA==,4488943,2021-02-01T20:13:39Z,2021-02-01T20:13:39Z,CONTRIBUTOR,Ping @simonw ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797649915, https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-770150526,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51,770150526,MDEyOklzc3VlQ29tbWVudDc3MDE1MDUyNg==,22578954,2021-01-30T03:44:19Z,2021-01-30T03:47:24Z,CONTRIBUTOR,I don't have much experience with github's rate limiting. 
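The generic retry-with-backoff pattern is pleasantly small, though - a sketch (the wait/stop values are illustrative, not tuned to GitHub's limits):

```python
# sketch: retrying HTTP calls with exponential backoff via tenacity
import requests
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    retry=retry_if_exception_type(requests.HTTPError),
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(5),
)
def fetch_json(url):
    response = requests.get(url)
    response.raise_for_status()  # 4xx/5xx raises HTTPError, which triggers a retry
    return response.json()
```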
In my day job we use the [tenacity library](https://github.com/jd/tenacity) to handle HTTP errors we get.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",703246031, https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770112248,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60,770112248,MDEyOklzc3VlQ29tbWVudDc3MDExMjI0OA==,22578954,2021-01-30T00:01:03Z,2021-01-30T01:14:42Z,CONTRIBUTOR,"Yes that would be cool! I wouldn't mind helping. Is this the meat of it? https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/utils.py#L512 It looks like the CLI option is added with this decorator: https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/cli.py#L14 I looked a bit at utils.py in the GitHub repository. I was surprised at the amount of manual mapping of the API response you had to do to get this to work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797097140, https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-760950128,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55,760950128,MDEyOklzc3VlQ29tbWVudDc2MDk1MDEyOA==,21148,2021-01-15T13:44:52Z,2021-01-15T13:44:52Z,CONTRIBUTOR,"I found and fixed another bug, this one around importing the tweets table. @simonw let me know if you'd prefer this broken out into multiple PRs, happy to do that if it makes review/merging easier.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779211940, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754729035,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,754729035,MDEyOklzc3VlQ29tbWVudDc1NDcyOTAzNQ==,21148,2021-01-05T16:03:29Z,2021-01-05T16:03:29Z,CONTRIBUTOR,"I was able to fix this, at least enough to get _my_ archive to import. Not sure if there's more work to be done here or not.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071, https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-754728696,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55,754728696,MDEyOklzc3VlQ29tbWVudDc1NDcyODY5Ng==,21148,2021-01-05T16:02:55Z,2021-01-05T16:02:55Z,CONTRIBUTOR,"This now works for me, though I'm entirely unsure if it's a just-my-export thing or a wider issue. Also, this doesn't contain any tests. 
So I'm not sure if there's more work to be done here, or if this is good enough.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779211940, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,754721153,MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw==,21148,2021-01-05T15:51:09Z,2021-01-05T15:51:09Z,CONTRIBUTOR,"Correction: the failure is on `lists-member.js` (I was thrown by the `block` variable name, but that's just a coincidence)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071, https://github.com/simonw/datasette/issues/1167#issuecomment-754619930,https://api.github.com/repos/simonw/datasette/issues/1167,754619930,MDEyOklzc3VlQ29tbWVudDc1NDYxOTkzMA==,3637,2021-01-05T12:57:57Z,2021-01-05T12:57:57Z,CONTRIBUTOR,"Not sure where exactly to put the actual docs (presumably somewhere in [docs/contributing.rst](https://github.com/simonw/datasette/blob/main/docs/contributing.rst)) but I've made a slight change to make it easier to run locally (copying [the approach in excalidraw](https://github.com/excalidraw/excalidraw/blob/ade2565f497243a5e428f4906d8ed80c872fd981/package.json#L90-L94)): https://github.com/simonw/datasette/compare/main...benpickles:prettier-docs ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777145954, https://github.com/simonw/datasette/issues/859#issuecomment-647922203,https://api.github.com/repos/simonw/datasette/issues/859,647922203,MDEyOklzc3VlQ29tbWVudDY0NzkyMjIwMw==,3243482,2020-06-23T05:44:58Z,2021-01-05T08:22:43Z,CONTRIBUTOR,"I'm seeing the problem on database page. Index page and table page runs quite fast. - Tables have <10 columns (`id`, `url`, `title`, `body_html`, `date`, `author`, `meta` (for keeping unstructured json)). I've added index on `date` columns (using `sqlite-utils`) in addition to the index present on `id` columns. - All tables have FTS enabled on `text` and `varchar` columns (`title`, `body_html` etc) to speed up searching. - There are couple of tables related with foreign keys (think a thread in a forum and posts in that thread, related with `thread_id`) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841, https://github.com/simonw/datasette/issues/1169#issuecomment-754007242,https://api.github.com/repos/simonw/datasette/issues/1169,754007242,MDEyOklzc3VlQ29tbWVudDc1NDAwNzI0Mg==,3637,2021-01-04T14:29:57Z,2021-01-04T14:29:57Z,CONTRIBUTOR,I somewhat share your reluctance to add a package.json to seemingly every project out there but ultimately if they're project dependencies it's important they're managed within the codebase.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777677671, https://github.com/simonw/datasette/pull/1170#issuecomment-754004715,https://api.github.com/repos/simonw/datasette/issues/1170,754004715,MDEyOklzc3VlQ29tbWVudDc1NDAwNDcxNQ==,3637,2021-01-04T14:25:44Z,2021-01-04T14:25:44Z,CONTRIBUTOR,I was going to re-add the filter to only run Prettier when there have been changes in `datasette/static` but that would mean it wouldn't run when the package is updated. 
That plus the fact that [the last run of the job took only 8 seconds](https://github.com/benpickles/datasette/runs/1640121514) is why I decided not to re-add the filter.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778126516, https://github.com/simonw/datasette/issues/1012#issuecomment-753531657,https://api.github.com/repos/simonw/datasette/issues/1012,753531657,MDEyOklzc3VlQ29tbWVudDc1MzUzMTY1Nw==,45380,2021-01-02T21:25:36Z,2021-01-02T21:25:36Z,CONTRIBUTOR,"Actually, on more research, I found out this is handled by the [trove-classifiers package](https://github.com/pypa/trove-classifiers/blob/master/src/trove_classifiers/__init__.py#L2) now, so it's just a one-liner pr instead of fire-up-a-docker-container-and-do-some-migrations","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",718540751, https://github.com/simonw/datasette/issues/417#issuecomment-752098906,https://api.github.com/repos/simonw/datasette/issues/417,752098906,MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg==,82988,2020-12-29T14:34:30Z,2020-12-29T14:34:50Z,CONTRIBUTOR,"FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py Not a production thing, just an experiment trying to explore what might be possible...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",421546944, https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59,751375487,MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw==,631242,2020-12-26T17:08:44Z,2020-12-26T17:08:44Z,CONTRIBUTOR,"Hi @simonw, do I need to do anything else for this PR to be considered to be included? I've tried using this project and it is quite nice to be able to explore a repository, but noticed that a couple commands don't allow you to use authorization from the environment variable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771872303, https://github.com/simonw/datasette/pull/1158#issuecomment-750389683,https://api.github.com/repos/simonw/datasette/issues/1158,750389683,MDEyOklzc3VlQ29tbWVudDc1MDM4OTY4Mw==,6774676,2020-12-23T17:02:50Z,2020-12-23T17:02:50Z,CONTRIBUTOR,"The dict/set suggestion comes from `pyupgrade --py36-plus`, but then had to `black` the change. The rest comes from PyCharm's Inspect code function. I reviewed all the suggestions and fixed a thing or two, such as leading/trailing spaces in the docstrings or turned around the chained conditions. 
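For example, the kind of mechanical rewrite `pyupgrade --py36-plus` proposes (illustrative):

```python
# illustrative before/after of the pyupgrade --py36-plus rewrites mentioned above
pairs = [('a', 1), ('b', 2)]

names = set([k for k, v in pairs])         # before
names = {k for k, v in pairs}              # after: set comprehension

lookup = dict([(k, v) for k, v in pairs])  # before
lookup = {k: v for k, v in pairs}          # after: dict comprehension
```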
Then I tried to convert all `os.path/glob/open` to `Path`, but there were some local test issues, so I'll have to start over in smaller chunks if you want to have that too.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",773913793, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-748562330,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,748562330,MDEyOklzc3VlQ29tbWVudDc0ODU2MjMzMA==,41546558,2020-12-20T04:45:08Z,2020-12-20T04:45:08Z,CONTRIBUTOR,Fixes the issue mentioned here: https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344, https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748562288,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15,748562288,MDEyOklzc3VlQ29tbWVudDc0ODU2MjI4OA==,41546558,2020-12-20T04:44:22Z,2020-12-20T04:44:22Z,CONTRIBUTOR,"@nickvazz @simonw I opened a [PR](https://github.com/dogsheep/dogsheep-photos/pull/31) that replaces the SQL for `ZCOMPUTEDASSETATTRIBUTES` to use osxphotos which now exposes all this data and has been updated for Big Sur. I did regression tests to confirm the extracted data is identical, with one exception which should not affect operation: the old code pulled data from `ZCOMPUTEDASSETATTRIBUTES` for missing photos while the main loop ignores missing photos and does not add them to `apple_photos`. The new code does not add rows to the `apple_photos_scores` table for missing photos.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612151767, https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436779,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15,748436779,MDEyOklzc3VlQ29tbWVudDc0ODQzNjc3OQ==,41546558,2020-12-19T07:49:00Z,2020-12-19T07:49:00Z,CONTRIBUTOR,@nickvazz ZGENERICASSET changed to ZASSET in Big Sur. Here's a list of other changes to the schema in Big Sur: https://github.com/RhetTbull/osxphotos/wiki/Changes-in-Photos-6---Big-Sur,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612151767, https://github.com/simonw/datasette/issues/493#issuecomment-748305976,https://api.github.com/repos/simonw/datasette/issues/493,748305976,MDEyOklzc3VlQ29tbWVudDc0ODMwNTk3Ng==,50527,2020-12-18T20:34:39Z,2020-12-18T20:34:39Z,CONTRIBUTOR,"I can't keep up with the renaming contexts, but I like having the ability to run datasette + datasette-ripgrep against different configs: ```shell datasette serve --metadata=./metadata.json ``` I have one for all of my code and one per client who has lots of code. So as long as I can point datasette to something, it's easy to work with. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",449886319, https://github.com/simonw/datasette/issues/998#issuecomment-743080047,https://api.github.com/repos/simonw/datasette/issues/998,743080047,MDEyOklzc3VlQ29tbWVudDc0MzA4MDA0Nw==,6371750,2020-12-11T09:25:09Z,2020-12-11T09:25:09Z,CONTRIBUTOR,"Hello Simon, I have a similar problem with horizontal scrollbar display with Datasette version 0.51 and later for a table with more than 30 rows. 
With Datasette 0.50 the horizontal scrollbar is displayed; if I upgrade Datasette to 0.51 or later, the horizontal scrollbar disappears. Datasette 0.50: horizontal scrollbar ![2020-12-11 10_23_28-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885620-a5f17800-3b9a-11eb-8870-654e7d4372ca.png) Datasette 0.51 and later: no horizontal scrollbar ![2020-12-11 10_24_55-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885782-dfc27e80-3b9a-11eb-9d55-6c9a56227bf2.png) Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",717699884, https://github.com/simonw/datasette/pull/1130#issuecomment-738907852,https://api.github.com/repos/simonw/datasette/issues/1130,738907852,MDEyOklzc3VlQ29tbWVudDczODkwNzg1Mg==,3243482,2020-12-04T17:22:29Z,2020-12-04T17:31:25Z,CONTRIBUTOR,"EDIT: I misunderstood the problem. This seems like a fix better suited for Safari. But I don't have any Apple device to test it. ```css body { min-height: 100vh; min-height: -webkit-fill-available; } html { height: -webkit-fill-available; } ``` https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ --- It's actually not that difficult to fix. Well, this is actually a workaround to keep the viewport in place. I usually put a transition (forgot to do it here) that keeps the page from resizing. ```css .container { min-height: 100vh; transition: height 10000s steps(0); } ``` The `steps()` function prevents excessive layout calculations, and lets the page snap back into place (10000s ~= 3h later) in a single step. This fix also prevents the page from jumping around when the keyboard pops up and down.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",756876238, https://github.com/simonw/datasette/issues/1111#issuecomment-736322290,https://api.github.com/repos/simonw/datasette/issues/1111,736322290,MDEyOklzc3VlQ29tbWVudDczNjMyMjI5MA==,3243482,2020-12-01T08:54:47Z,2020-12-01T08:54:47Z,CONTRIBUTOR,"Somewhat related: https://github.com/simonw/datasette/issues/859 I fixed the issue with forking and disabling the counts for hidden tables.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",751195017, https://github.com/simonw/datasette/issues/1114#issuecomment-735436014,https://api.github.com/repos/simonw/datasette/issues/1114,735436014,MDEyOklzc3VlQ29tbWVudDczNTQzNjAxNA==,2182,2020-11-29T18:33:30Z,2020-11-29T18:33:30Z,CONTRIBUTOR,Thank you!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",752966476, https://github.com/simonw/datasette/issues/493#issuecomment-735281577,https://api.github.com/repos/simonw/datasette/issues/493,735281577,MDEyOklzc3VlQ29tbWVudDczNTI4MTU3Nw==,50527,2020-11-28T19:39:53Z,2020-11-28T19:39:53Z,CONTRIBUTOR,"I was confused by `--config` and I tried passing the json from datasette-ripgrep into `config.json` just as a wild guess. A short-term solution might be pointing out in plugin docs that their snippet of json can go in `metadata.json` - that at least makes it easier to search for config options or to know where to start if someone is new. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",449886319, https://github.com/simonw/datasette/pull/1112#issuecomment-735279355,https://api.github.com/repos/simonw/datasette/issues/1112,735279355,MDEyOklzc3VlQ29tbWVudDczNTI3OTM1NQ==,50527,2020-11-28T19:21:09Z,2020-11-28T19:21:09Z,CONTRIBUTOR,"(Even more annoying is that I see my editor leaked an extra delete space at the end of the line. I'm happy to rebuild this to be less annoying, but you probably don't want the changelog update either way)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",752749485, https://github.com/simonw/datasette/issues/838#issuecomment-720354227,https://api.github.com/repos/simonw/datasette/issues/838,720354227,MDEyOklzc3VlQ29tbWVudDcyMDM1NDIyNw==,82988,2020-11-02T09:33:58Z,2020-11-02T09:33:58Z,CONTRIBUTOR,"Thanks; just a note that the `datasette.urls.static(path)` and `datasette.urls.static_plugins(plugin_name, path)` items both seem to be repeated and appear in the docs twice?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097, https://github.com/simonw/datasette/pull/1049#issuecomment-718528252,https://api.github.com/repos/simonw/datasette/issues/1049,718528252,MDEyOklzc3VlQ29tbWVudDcxODUyODI1Mg==,82988,2020-10-29T09:20:34Z,2020-10-29T09:20:34Z,CONTRIBUTOR,That workaround is probably fine. I was trying to work out whether there might be other situations where a pre-external package load might be useful but couldn't offhand bring any other examples to mind. The static plugins option also looks interesting.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",729017519, https://github.com/simonw/sqlite-utils/pull/189#issuecomment-717359145,https://api.github.com/repos/simonw/sqlite-utils/issues/189,717359145,MDEyOklzc3VlQ29tbWVudDcxNzM1OTE0NQ==,35681,2020-10-27T16:20:32Z,2020-10-27T16:20:32Z,CONTRIBUTOR,"No problem. I added a test. Let me know if it looks sufficient or if you want me to to tweak something! If you don't mind, would you tag this PR as ""hacktoberfest-accepted""? If you do mind, no problem and I'm sorry for asking :) My kiddos like the shirts.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",729818242, https://github.com/simonw/datasette/pull/1043#issuecomment-716237524,https://api.github.com/repos/simonw/datasette/issues/1043,716237524,MDEyOklzc3VlQ29tbWVudDcxNjIzNzUyNA==,45380,2020-10-26T00:14:57Z,2020-10-26T00:14:57Z,CONTRIBUTOR,"Sorry, I was out of the loop this weekend. The missing sdists were in some the `datasette-*` plugins... i'll capture my findings more concretely in one spot when i have a chance...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727915394, https://github.com/simonw/datasette/issues/838#issuecomment-716123598,https://api.github.com/repos/simonw/datasette/issues/838,716123598,MDEyOklzc3VlQ29tbWVudDcxNjEyMzU5OA==,82988,2020-10-25T10:20:12Z,2020-10-25T10:53:24Z,CONTRIBUTOR,"I'm trying to [run something behind a MyBinder proxy](https://github.com/ouseful-testing/nbsearch), but seem to have something set up incorrectly and not sure what the fix is? 
I'm starting datasette with jupyter-server-proxy setup: ``` # __init__.py def setup_nbsearch(): return { ""command"": [ ""datasette"", ""serve"", f""{_NBSEARCH_DB_PATH}"", ""-p"", ""{port}"", ""--config"", ""base_url:{base_url}nbsearch/"" ], ""absolute_url"": True, # The following needs the labextension installed. # eg in postBuild: jupyter labextension install jupyterlab-server-proxy ""launcher_entry"": { ""enabled"": True, ""title"": ""nbsearch"", }, } ``` where the `base_url` gets automatically populated by the server-proxy. I define the loaders as: ``` # __init__.py from datasette import hookimpl @hookimpl def extra_css_urls(database, table, columns, view_name, datasette): return [ ""/-/static-plugins/nbsearch/prism.css"", ""/-/static-plugins/nbsearch/nbsearch.css"", ] ``` but these seem to also need a base_url prefix set somehow? Currently, the generated HTML loads properly but internal links are incorrect; eg they take the form `` which resolves to eg `https://notebooks.gesis.org/hub/-/static-plugins/nbsearch/prism.css` rather than the required URL of the form `https://notebooks.gesis.org/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static-plugins/nbsearch/prism.css`. The main css is loaded correctly: ``","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097, https://github.com/simonw/datasette/issues/1033#issuecomment-716066000,https://api.github.com/repos/simonw/datasette/issues/1033,716066000,MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA==,82988,2020-10-24T22:58:33Z,2020-10-24T22:58:33Z,CONTRIBUTOR,"From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note: ``` datasette.urls.instance() Returns the URL to the Datasette instance root page. This is usually ""/"" ``` What about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to? - [ ] `https://example.com:PORT/weirdpath/datasette` - [ ] `https://example.com:PORT/weirdpath/` - [ ] `https://example.com:PORT/` - [ ] `https://example.com` - [ ] something else?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",725099777, https://github.com/simonw/datasette/issues/1012#issuecomment-714908859,https://api.github.com/repos/simonw/datasette/issues/1012,714908859,MDEyOklzc3VlQ29tbWVudDcxNDkwODg1OQ==,45380,2020-10-23T04:49:20Z,2020-10-23T04:49:20Z,CONTRIBUTOR,"Good luck on 1.0! It may also be worth lobbying for a `Framework::Datasette::1.0` classifier. This would be a nice way to allow the ecosystem to self-document a bit more [discoverably](https://pypi.org/search/?q=&o=&c=Framework+%3A%3A+Datasette%3A%3A+1.0). I was surprised to see the [PR for `Framework::Jupyter`](https://github.com/pypa/warehouse/pull/1905/files) is a... database migration!
Of course, there may be more workflow to it!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",718540751, https://github.com/simonw/datasette/issues/1033#issuecomment-714657366,https://api.github.com/repos/simonw/datasette/issues/1033,714657366,MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng==,82988,2020-10-22T17:51:29Z,2020-10-22T17:51:29Z,CONTRIBUTOR,"How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static` option and metadata formulations such as `""extra_js_urls"": [ ""/static/app.js""]`? (I've not managed to get this to work in a Jupyter-server-proxied setup; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",725099777, https://github.com/simonw/datasette/issues/1019#issuecomment-708520800,https://api.github.com/repos/simonw/datasette/issues/1019,708520800,MDEyOklzc3VlQ29tbWVudDcwODUyMDgwMA==,639012,2020-10-14T16:37:19Z,2020-10-14T16:37:19Z,CONTRIBUTOR,🎉 Thanks so much @simonw ! 🎉 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",721050815, https://github.com/dogsheep/swarm-to-sqlite/pull/10#issuecomment-707326192,https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/10,707326192,MDEyOklzc3VlQ29tbWVudDcwNzMyNjE5Mg==,29426418,2020-10-12T20:20:02Z,2020-10-12T20:20:02Z,CONTRIBUTOR,This closes issue #8 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",719637258, https://github.com/dogsheep/github-to-sqlite/pull/48#issuecomment-704503719,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/48,704503719,MDEyOklzc3VlQ29tbWVudDcwNDUwMzcxOQ==,755825,2020-10-06T19:26:59Z,2020-10-06T19:26:59Z,CONTRIBUTOR,ref #46 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681228542, https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-690860653,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50,690860653,MDEyOklzc3VlQ29tbWVudDY5MDg2MDY1Mw==,370930,2020-09-11T04:04:08Z,2020-09-11T04:04:08Z,CONTRIBUTOR,"There's probably a nicer way of doing this (hence this is a comment rather than a PR), but this appears to fix it: ```diff --- a/twitter_to_sqlite/utils.py +++ b/twitter_to_sqlite/utils.py @@ -181,6 +181,7 @@ def fetch_timeline( args[""tweet_mode""] = ""extended"" min_seen_id = None num_rate_limit_errors = 0 + seen_count = 0 while True: if min_seen_id is not None: args[""max_id""] = min_seen_id - 1 @@ -208,6 +209,7 @@ def fetch_timeline( yield tweet min_seen_id = min(t[""id""] for t in tweets) max_seen_id = max(t[""id""] for t in tweets) + seen_count += len(tweets) if last_since_id is not None: max_seen_id = max((last_since_id, max_seen_id)) last_since_id = max_seen_id @@ -217,7 +219,9 @@ def fetch_timeline( replace=True, ) if stop_after is not None: - break + if seen_count >= stop_after: + break + args[""count""] = min(args[""count""], stop_after - seen_count) time.sleep(sleep) ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",698791218, https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688573964,https://api.github.com/repos/simonw/sqlite-utils/issues/146,688573964,MDEyOklzc3VlQ29tbWVudDY4ODU3Mzk2NA==,96218,2020-09-08T01:55:07Z,2020-09-08T01:55:07Z,CONTRIBUTOR,"Okay, I've rewritten this PR to preserve the batching behaviour but still fix #145, and rebased the branch to account for the `db.execute()` API change. It's not terribly sophisticated -- if it attempts to insert a batch which has too many variables, the exception is caught, the batch is split in two, and each half is inserted separately; it then carries on as before with the same `batch_size`. In the edge case where this gets triggered, subsequent batches will all be inserted in two groups too if they continue to have the same number of columns (which is presumably reasonably likely). Do you reckon this is acceptable when set against the awkwardness of recalculating the `batch_size` on the fly?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680, https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688481317,https://api.github.com/repos/simonw/sqlite-utils/issues/146,688481317,MDEyOklzc3VlQ29tbWVudDY4ODQ4MTMxNw==,96218,2020-09-07T19:18:55Z,2020-09-07T19:18:55Z,CONTRIBUTOR,"Just force-pushed to update d042f9c with more formatting changes to satisfy `black==20.8b1` and pass the GitHub Actions ""Test"" workflow.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680, https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688479163,https://api.github.com/repos/simonw/sqlite-utils/issues/146,688479163,MDEyOklzc3VlQ29tbWVudDY4ODQ3OTE2Mw==,96218,2020-09-07T19:10:33Z,2020-09-07T19:11:57Z,CONTRIBUTOR,"@simonw -- I've gone ahead and updated the documentation to reflect the changes introduced in this PR. IMO it's ready to merge now. In writing the documentation changes, I began to wonder about the value and role of `batch_size` at all, tbh. May I assume it was originally intended to prevent using the entire row set to determine columns and column types, and that this was a performance consideration? If so, this PR entirely undermines its purpose. I've been passing in excess of 500,000 rows at a time to `insert_all()` with these changes, and although I'm sure the performance difference is measurable it's not really noticeable; given #145, I don't know that any performance advantages outweigh the problems this approach removes. What do you think about just dropping the argument and defaulting to the maximum `batch_size` permissible given `SQLITE_MAX_VARS`? Are there other reasons one might want to restrict `batch_size` that I've overlooked? I could open a new issue to discuss/implement this.
Of course the documentation will need to change again too if/when something is done about #147.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680, https://github.com/simonw/datasette/pull/952#issuecomment-686061028,https://api.github.com/repos/simonw/datasette/issues/952,686061028,MDEyOklzc3VlQ29tbWVudDY4NjA2MTAyOA==,27856297,2020-09-02T22:26:14Z,2020-09-02T22:26:14Z,CONTRIBUTOR,"Looks like black is up-to-date now, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",687245650, https://github.com/simonw/sqlite-utils/issues/145#issuecomment-683382252,https://api.github.com/repos/simonw/sqlite-utils/issues/145,683382252,MDEyOklzc3VlQ29tbWVudDY4MzM4MjI1Mg==,96218,2020-08-30T06:27:25Z,2020-08-30T06:27:52Z,CONTRIBUTOR,"Note: had to adjust the test above because trying to exhaust a `SQLITE_MAX_VARIABLE_NUMBER` of 250000 in 99 records requires 2526 columns, and trips the `""Rows can have a maximum of {} columns"".format(SQLITE_MAX_VARS)` check even before it trips the default `SQLITE_MAX_COLUMN` value (2000).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688659182, https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682815377,https://api.github.com/repos/simonw/sqlite-utils/issues/139,682815377,MDEyOklzc3VlQ29tbWVudDY4MjgxNTM3Nw==,96218,2020-08-28T16:14:58Z,2020-08-28T16:14:58Z,CONTRIBUTOR,"Thanks! And yeah, I had updating the docs on my list too :) Will try to get to it this afternoon (budgeting time is fraught with uncertainty at the moment!).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",686978131, https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682182178,https://api.github.com/repos/simonw/sqlite-utils/issues/139,682182178,MDEyOklzc3VlQ29tbWVudDY4MjE4MjE3OA==,96218,2020-08-27T20:46:18Z,2020-08-27T20:46:18Z,CONTRIBUTOR,"> I tried changing the batch_size argument to the total number of records, but it seems only to effect the number of rows that are committed at a time, and has no influence on this problem. So the reason for this is that the `batch_size` for import is limited (of necessity) here: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L1048 With regard to the issue of ignoring columns, however, I made a fork and hacked together a temporary fix that looks like this: https://github.com/simonwiles/sqlite-utils/commit/3901f43c6a712a1a3efc340b5b8d8fd0cbe8ee63 It doesn't seem to affect performance enormously (but I've not tested it thoroughly), and it now does what I need (and would expect, tbh), but it now fails the test here: https://github.com/simonw/sqlite-utils/blob/main/tests/test_create.py#L710-L716 The existence of this test suggests that `insert_all()` is behaving as intended, of course. It seems odd to me that this would be a desirable default behaviour (let alone the only behaviour), and it's not very prominently flagged up, either. @simonw is this something you'd be willing to look at a PR for? I assume you wouldn't want to change the default behaviour at this point, but perhaps an option could be provided, or at least a bit more of a warning in the docs. Are there oversights in my implementation? Would be grateful for your thoughts! Thanks!
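(For anyone reading along: the gist of the hack, as a simplified sketch rather than the actual linked commit, is to check each incoming batch for columns the table doesn't have yet and add them before inserting. Note this sketch skips identifier quoting, which real code would need.)

```python
# Simplified sketch of the idea, not the linked commit:
# grow the table's schema to match whatever columns show up.
def ensure_columns(conn, table, batch):
    existing = {row[1] for row in conn.execute(f'PRAGMA table_info({table})')}
    for record in batch:
        for col in record:
            if col not in existing:
                # A new column appeared mid-import: add it instead of
                # silently dropping the value (no quoting here!).
                conn.execute(f'ALTER TABLE {table} ADD COLUMN {col}')
                existing.add(col)
```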
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",686978131, https://github.com/simonw/datasette/issues/456#issuecomment-661524006,https://api.github.com/repos/simonw/datasette/issues/456,661524006,MDEyOklzc3VlQ29tbWVudDY2MTUyNDAwNg==,32467826,2020-07-21T01:15:07Z,2020-07-21T01:15:07Z,CONTRIBUTOR,"Bumping this, as the previous fix is passing the wrong type, and not actually addressing the issue... The `exclude` argument needs an iterable of packages instead of a single string (but since `str` is iterable, it's currently excluding packages `t`, `e`, and `s`.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",442327592, https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655898722,https://api.github.com/repos/simonw/sqlite-utils/issues/121,655898722,MDEyOklzc3VlQ29tbWVudDY1NTg5ODcyMg==,79913,2020-07-09T04:53:08Z,2020-07-09T04:53:08Z,CONTRIBUTOR,"Yep, I agree that makes more sense for backwards compat and more casual use cases. I think it should be possible for the Database/Queryable methods to DTRT based on seeing if it's within a context-manager-managed transaction.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",652961907, https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655652679,https://api.github.com/repos/simonw/sqlite-utils/issues/121,655652679,MDEyOklzc3VlQ29tbWVudDY1NTY1MjY3OQ==,79913,2020-07-08T17:24:46Z,2020-07-08T17:24:46Z,CONTRIBUTOR,"Better transaction handling would be really great. Some of my thoughts on implementing better transaction discipline are in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728. My preferences: - Each CLI command should operate in a single transaction so that either the whole thing succeeds or the whole thing is rolled back. This avoids partially completed operations when an error occurs part way through processing. Partially completed operations are typically much harder to recovery from gracefully and may cause inconsistent data states. - The Python API should be transaction-agnostic and rely on the caller to coordinate transactions. Only the caller knows how individual insert, create, update, etc operations/methods should be bundled conceptually into transactions. When the caller is the CLI, for example, that bundling would be at the CLI command-level. Other callers might want to break up operations into multiple transactions. Transactions are usually most useful when controlled at the application-level (like logging configuration) instead of the library level. The library needs to provide an API that's conducive to transaction use, though. - The Python API should provide a context manager to provide consistent transactions handling with more useful defaults than Python's `sqlite3` module. The latter issues implicit `BEGIN` statements by default for most DML (`INSERT`, `UPDATE`, `DELETE`, … but not `SELECT`, I believe), but **not** DDL (`CREATE TABLE`, `DROP TABLE`, `CREATE VIEW`, …). Notably, the `sqlite3` module doesn't issue the implicit `BEGIN` until the first DML statement. It _does not_ issue it when entering the `with conn` block, like other DBAPI2-compatible modules do. The `with conn` block for `sqlite3` only arranges to commit or rollback an existing transaction when exiting. 
Including DDL and `SELECT`s in transactions is important for operation consistency, though. There are several existing bugs.python.org tickets about this and future changes are in the works, but sqlite-utils can provide its own API sooner. sqlite-utils's `Database` class could itself be a context manager (built on the `sqlite3` connection context manager) which additionally issues an explicit `BEGIN` when entering. This would then let Python API callers do something like: ```python db = sqlite_utils.Database(path) with db: # ← BEGIN issued here by Database.__enter__ db.insert(…) db.create_view(…) # ← COMMIT/ROLLBACK issued here by sqlite3.Connection.__exit__ ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",652961907,