html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app https://github.com/simonw/datasette/pull/1318#issuecomment-838449572,https://api.github.com/repos/simonw/datasette/issues/1318,838449572,MDEyOklzc3VlQ29tbWVudDgzODQ0OTU3Mg==,49699333,dependabot[bot],2021-05-11T13:12:30Z,2021-05-11T13:12:30Z,CONTRIBUTOR,Superseded by #1321.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",876431852,Bump black from 21.4b2 to 21.5b0, https://github.com/simonw/datasette/issues/1280#issuecomment-837166862,https://api.github.com/repos/simonw/datasette/issues/1280,837166862,MDEyOklzc3VlQ29tbWVudDgzNzE2Njg2Mg==,10801138,blairdrummond,2021-05-10T19:07:46Z,2021-05-10T19:07:46Z,CONTRIBUTOR,"Do you have a list of sqlite versions you want to test against? One cool thing I saw recently (that we started using) was using `import docker` within python, and then writing pytest functions which executed against the container [setup](https://github.com/StatCan/kubeflow-containers/blob/3c7dcfb5e7188982fb8ebcded82e84292720f720/conftest.py#L85) [example](https://github.com/StatCan/kubeflow-containers/blob/master/tests/jupyterlab-cpu/test_julia.py#L8-L18) The inspiration for this came from the [jupyter docker-stacks](https://github.com/jupyter/docker-stacks/blob/09fb66007615ea68d9bce8f8e1a2cf9402f1e432/test/test_packages.py#L107) So off the top of my head, could look at building the container with different sqlite versions as a build-arg, then run tests against the containers. Just brainstorming though","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",842862708,Ability to run CI against multiple SQLite versions, https://github.com/simonw/datasette/pull/1296#issuecomment-835491318,https://api.github.com/repos/simonw/datasette/issues/1296,835491318,MDEyOklzc3VlQ29tbWVudDgzNTQ5MTMxOA==,10801138,blairdrummond,2021-05-08T19:59:01Z,2021-05-08T19:59:01Z,CONTRIBUTOR,"I have also found that ubuntu has fewer vulnerabilities than the buster based images. ``` ➜ ~ docker pull python:3-buster ➜ ~ trivy image python:3-buster | head 2021-04-28T17:14:29.313-0400 INFO Detecting Debian vulnerabilities... 2021-04-28T17:14:29.393-0400 INFO Trivy skips scanning programming language libraries because no supported file was detected python:3-buster (debian 10.9) ============================= Total: 1621 (UNKNOWN: 13, LOW: 1106, MEDIUM: 343, HIGH: 145, CRITICAL: 14) +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ | LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION | TITLE | +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829,Dockerfile: use Ubuntu 20.10 as base, https://github.com/simonw/datasette/issues/1300#issuecomment-833132571,https://api.github.com/repos/simonw/datasette/issues/1300,833132571,MDEyOklzc3VlQ29tbWVudDgzMzEzMjU3MQ==,3243482,abdusco,2021-05-06T00:16:50Z,2021-05-06T00:18:05Z,CONTRIBUTOR,"I ended up using some JS as a workaround. 
First, add a JS file in `metadata.yaml`: ```yaml extra_js_urls: - '/static/app.js' ``` then inside the script, find the blob download links and replace `.blob` extension in the url with `.jpg` and replace the links with `` elements. You need to add an output formatter to serve `BLOB` columns as JPG. You can find the code in the first post. ~~Replacing `.blob` -> `.jpg` might not even be necessary, because browsers only care about the mime type, so you only need to serve the binary content with the right `content-type` header.~~. You need to replace the extension, otherwise the output renderer will not run. ```js window.addEventListener('DOMContentLoaded', () => { function renderBlobImages() { document.querySelectorAll('a[href*="".blob""]').forEach(el => { const img = document.createElement('img'); img.className = 'blob-image'; img.loading = 'lazy'; img.src = el.href.replace('.blob', '.jpg'); el.parentElement.replaceChild(img, el); }); } renderBlobImages(); }); ``` while this does the job, I'd prefer handling this in Python where it belongs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833,Make row available to `render_cell` plugin hook, https://github.com/simonw/datasette/pull/1313#issuecomment-829352402,https://api.github.com/repos/simonw/datasette/issues/1313,829352402,MDEyOklzc3VlQ29tbWVudDgyOTM1MjQwMg==,27856297,dependabot-preview[bot],2021-04-29T15:47:23Z,2021-04-29T15:47:23Z,CONTRIBUTOR,This pull request will no longer be automatically closed when a new version is found as this pull request was created by Dependabot Preview and this repo is using a `version: 2` config file. You can close this pull request and let Dependabot re-create it the next time it checks for updates.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",871046111,Bump black from 20.8b1 to 21.4b2, https://github.com/simonw/datasette/pull/1311#issuecomment-829260725,https://api.github.com/repos/simonw/datasette/issues/1311,829260725,MDEyOklzc3VlQ29tbWVudDgyOTI2MDcyNQ==,27856297,dependabot-preview[bot],2021-04-29T13:58:08Z,2021-04-29T13:58:08Z,CONTRIBUTOR,Superseded by #1313.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",870227815,Bump black from 20.8b1 to 21.4b1, https://github.com/simonw/datasette/pull/1309#issuecomment-828679943,https://api.github.com/repos/simonw/datasette/issues/1309,828679943,MDEyOklzc3VlQ29tbWVudDgyODY3OTk0Mw==,27856297,dependabot-preview[bot],2021-04-28T18:26:03Z,2021-04-28T18:26:03Z,CONTRIBUTOR,Superseded by #1311.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",869237023,Bump black from 20.8b1 to 21.4b0, https://github.com/simonw/datasette/issues/1298#issuecomment-823093669,https://api.github.com/repos/simonw/datasette/issues/1298,823093669,MDEyOklzc3VlQ29tbWVudDgyMzA5MzY2OQ==,192568,mroswell,2021-04-20T08:38:10Z,2021-04-20T08:40:22Z,CONTRIBUTOR,"@dracos I appreciate your ideas! 1. Ooh, I like this: https://codepen.io/astro87/pen/LYRQNbd?editors=1100 (That's the codepen from your linked stackoverflow.) 2. I worry that a max height will be a problem when my facets are open. (I've got 35 active ingredients, and so I've set the default_facet_size to 35.) 3. I don't understand this one. I'm observing the screenshot... very helpful! 
(Ah, okay, TR = Top Right and BR = Bottom Right. Absolute grid refers to position style.) All the scroll bars look a little wonky to me. I've also got a lot of facets, and prefer the extra horizontal space so that not as many facets disappear below the fold. My site also has end users... some will be on mobile... not sure what the absolute grid would do there... 4. (I still think a hover-arrow that scrolls upon click would help, too...) But meanwhile, I'm going to go ahead and see if I can apply that shadow. (Never would've thought of that.) Hmmm... I'm not an SCSS person. This looks helpful! https://jsonformatter.org/scss-to-css","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501,improve table horizontal scroll experience, https://github.com/simonw/datasette/issues/1300#issuecomment-821971059,https://api.github.com/repos/simonw/datasette/issues/1300,821971059,MDEyOklzc3VlQ29tbWVudDgyMTk3MTA1OQ==,3243482,abdusco,2021-04-18T10:42:19Z,2021-04-18T10:42:19Z,CONTRIBUTOR,"If there's a simpler way to generate a URL for a specific row, I'm all ears","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833,Make row available to `render_cell` plugin hook, https://github.com/simonw/datasette/issues/1300#issuecomment-821970965,https://api.github.com/repos/simonw/datasette/issues/1300,821970965,MDEyOklzc3VlQ29tbWVudDgyMTk3MDk2NQ==,3243482,abdusco,2021-04-18T10:41:15Z,2021-04-18T10:41:15Z,CONTRIBUTOR,"If I change the hookspec and add a row parameter, it works https://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/hookspecs.py#L58 ``` def render_cell(value, column, row, table, database, datasette): ``` But to generate a URL, I need the primary keys, but I can't call `pks = await db.primary_keys(table)` inside a sync function. I can't call `datasette.utils.detect_primary_keys` either, because the db connection is not publicly exposed (AFAICT). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833,Make row available to `render_cell` plugin hook, https://github.com/simonw/datasette/pull/1296#issuecomment-819467759,https://api.github.com/repos/simonw/datasette/issues/1296,819467759,MDEyOklzc3VlQ29tbWVudDgxOTQ2Nzc1OQ==,295329,camallen,2021-04-14T12:07:37Z,2021-04-14T12:11:36Z,CONTRIBUTOR,"> Removing /var/lib/apt and /var/lib/dpkg makes apt and dpkg unusable in images based on this one. Running `apt-get clean` and removing /var/lib/apt/lists achieves similar size savings. this PR helps me as removing the /var/lib/apt and /var/lib/dpkg directories breaks my ability to add packages when using `datasetteproject/datasette:0.56` as a base image. 
---- Shorterm workaround for me was to use this in my Dockerfile ``` FROM datasetteproject/datasette:0.56 RUN mkdir -p /var/lib/apt RUN mkdir -p /var/lib/dpkg RUN mkdir -p /var/lib/dpkg/updates RUN mkdir -p /var/lib/dpkg/info RUN touch /var/lib/dpkg/status RUN apt-get update # and install your packages etc ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829,Dockerfile: use Ubuntu 20.10 as base, https://github.com/simonw/datasette/issues/830#issuecomment-817414881,https://api.github.com/repos/simonw/datasette/issues/830,817414881,MDEyOklzc3VlQ29tbWVudDgxNzQxNDg4MQ==,192568,mroswell,2021-04-12T01:06:34Z,2021-04-12T01:07:27Z,CONTRIBUTOR,"Related: #1285, including arguments for natural breaks, equal interval, etc. modeled after choropleth map legends.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",636511683,Redesign register_facet_classes plugin hook, https://github.com/simonw/datasette/issues/1286#issuecomment-815978405,https://api.github.com/repos/simonw/datasette/issues/1286,815978405,MDEyOklzc3VlQ29tbWVudDgxNTk3ODQwNQ==,192568,mroswell,2021-04-08T16:47:29Z,2021-04-10T03:59:00Z,CONTRIBUTOR,"This worked for me: `{{ cell.value | replace('"", ""','; ') | replace('[\""','') | replace('\""]','')}}` I'm sure there is a prettier (and more flexible) way, but for now, this is ever-so-much more pleasant to look at. ------ AFTER: ------ BEFORE: (Note: I didn't figure out how to have one item have no semicolon, while multi-items close with a semicolon, but this is good enough for now. I also didn't figure out how to set up a new jinja filter. I don't want to add to /datasette/utils/__init__.py as I assume that would get overwritten when upgrading datasette. Having a starter guide on creating jinja filters in datasette would be helpful. (The jinja documentation isn't datasette-specific enough for me to quite nail it.) ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",849220154,Better default display of arrays of items, https://github.com/simonw/datasette/issues/502#issuecomment-812813732,https://api.github.com/repos/simonw/datasette/issues/502,812813732,MDEyOklzc3VlQ29tbWVudDgxMjgxMzczMg==,5413548,louispotok,2021-04-03T05:16:54Z,2021-04-03T05:16:54Z,CONTRIBUTOR,"For what it's worth, if anyone finds this in the future, I was having the same issue. After digging through the code, it turned out that the database download is only available if it the db served in immutable mode, so `datasette serve -i xyz.db` rather than the doc's quickstart recommendation of `datasette serve xyz.db`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",453131917,Exporting sqlite database(s)?, https://github.com/simonw/datasette/issues/1286#issuecomment-812679221,https://api.github.com/repos/simonw/datasette/issues/1286,812679221,MDEyOklzc3VlQ29tbWVudDgxMjY3OTIyMQ==,192568,mroswell,2021-04-02T19:34:01Z,2021-04-02T19:34:01Z,CONTRIBUTOR,"This shows the city in a different color (and not the comma), but I get the idea, and I like it. 
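Also, in case it helps anyone later: a rough sketch of adding a custom Jinja filter via a one-off plugin, using the `prepare_jinja2_environment` hook. The `semicolon_list` name is made up for illustration, and it assumes the cell holds a JSON array string.

```python
# Hedged sketch of a tiny one-off plugin registering a custom Jinja filter.
import json

from datasette import hookimpl


@hookimpl
def prepare_jinja2_environment(env):
    # Render a JSON array cell as 'item1; item2; item3'
    env.filters['semicolon_list'] = lambda value: '; '.join(json.loads(value))
```

Saved as e.g. `plugins/semicolon_list.py` and loaded with `--plugins-dir=plugins/`, it can be used as `{{ cell.value | semicolon_list }}` and should survive Datasette upgrades, unlike edits to `/datasette/utils/__init__.py`.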
(Ooh, could be nice to have the gear have an option in array fields to show as bullets or commas or semicolons...)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",849220154,Better default display of arrays of items, https://github.com/simonw/datasette/issues/1284#issuecomment-810779928,https://api.github.com/repos/simonw/datasette/issues/1284,810779928,MDEyOklzc3VlQ29tbWVudDgxMDc3OTkyOA==,192568,mroswell,2021-03-31T05:40:12Z,2021-03-31T05:40:12Z,CONTRIBUTOR,"Maybe the addition of two template files: 'one_database_index.html' and 'one_table_index.html' would be a better idea than the documentation diff idea. (They could include commented instructions to rename the preferred template 'index.html', along with any other necessary guidance.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",845794436,Feature or Documentation Request: Individual table as home page template, https://github.com/simonw/datasette/issues/1274#issuecomment-805214307,https://api.github.com/repos/simonw/datasette/issues/1274,805214307,MDEyOklzc3VlQ29tbWVudDgwNTIxNDMwNw==,7476523,bobwhitelock,2021-03-23T20:12:29Z,2021-03-23T20:12:29Z,CONTRIBUTOR,"One issue I could see with adding first class support for metadata in hjson format is that this would require adding an additional dependency to handle this, for a feature that would be unused by many users. I wonder if this could fit in as a plugin instead; if a hook existed for loading metadata (maybe as part of https://github.com/simonw/datasette/issues/860) the metadata could then come from any source, as specified by plugins, e.g. hjson, toml, XML, a database table etc. Until/unless this exists, a few ideas for how you could add comments: - Using YAML as you suggest. - A common pattern is adding a `""comment""` key for comments to any object in JSON - I don't think including an unnecessary key like this would break anything in Datasette, but not certain. - You could use another tool as a preprocessor for your JSON metadata - e.g. hjson or Jsonnet. You'd write the metadata in that format, and then convert that into JSON to actually use as your final metadata.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",839008371,Might there be some way to comment metadata.json?, https://github.com/simonw/datasette/pull/1159#issuecomment-804639427,https://api.github.com/repos/simonw/datasette/issues/1159,804639427,MDEyOklzc3VlQ29tbWVudDgwNDYzOTQyNw==,192568,mroswell,2021-03-23T05:56:02Z,2021-03-23T05:56:02Z,CONTRIBUTOR,"With just three facets, I like it, but it does take more horizontal space. Would be nice to have a switch somewhere, enabling either original compact option or this proposed more-readable option. Also some control over word wrap (width setting) and facet spacing. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",774332247,Improve the display of facets information, https://github.com/simonw/datasette/issues/1153#issuecomment-804640440,https://api.github.com/repos/simonw/datasette/issues/1153,804640440,MDEyOklzc3VlQ29tbWVudDgwNDY0MDQ0MA==,192568,mroswell,2021-03-23T05:58:20Z,2021-03-23T05:58:20Z,CONTRIBUTOR,Could there be a little widget that offers conversion from one to the other? 
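In the meantime, a rough sketch of doing the conversion by hand, assuming PyYAML is installed and `metadata.json` is the file in question:

```python
# Hedged sketch: print an existing metadata.json as YAML.
import json

import yaml  # pip install pyyaml

with open('metadata.json') as f:
    print(yaml.safe_dump(json.load(f), sort_keys=False))
```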
,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771202454,"Use YAML examples in documentation by default, not JSON", https://github.com/simonw/datasette/issues/163#issuecomment-804539729,https://api.github.com/repos/simonw/datasette/issues/163,804539729,MDEyOklzc3VlQ29tbWVudDgwNDUzOTcyOQ==,192568,mroswell,2021-03-23T02:41:14Z,2021-03-23T02:41:14Z,CONTRIBUTOR,"I'm visiting old issues for context while learning datasette. Let me know if okay to make the occasional comment like this one. querystring argument now located at: https://docs.datasette.io/en/latest/settings.html#sql-time-limit-ms","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",279547886,Document the querystring argument for setting a different time limit, https://github.com/simonw/datasette/issues/164#issuecomment-804541064,https://api.github.com/repos/simonw/datasette/issues/164,804541064,MDEyOklzc3VlQ29tbWVudDgwNDU0MTA2NA==,192568,mroswell,2021-03-23T02:45:12Z,2021-03-23T02:45:12Z,CONTRIBUTOR,"""datasette skeleton"" feature removed #476","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",280013907,datasette skeleton command for kick-starting database and table metadata, https://github.com/simonw/datasette/issues/1149#issuecomment-804415619,https://api.github.com/repos/simonw/datasette/issues/1149,804415619,MDEyOklzc3VlQ29tbWVudDgwNDQxNTYxOQ==,192568,mroswell,2021-03-22T21:43:16Z,2021-03-22T21:43:16Z,CONTRIBUTOR,Sounds like a good idea.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",769520939,Make it easier to theme Datasette with CSS, https://github.com/simonw/datasette/issues/88#issuecomment-804471733,https://api.github.com/repos/simonw/datasette/issues/88,804471733,MDEyOklzc3VlQ29tbWVudDgwNDQ3MTczMw==,192568,mroswell,2021-03-22T23:46:36Z,2021-03-22T23:46:36Z,CONTRIBUTOR,Google Map API limits seem to prevent https://nhs-england-map.netlify.com from being a working demo.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273775212,Add NHS England Hospitals example to wiki, https://github.com/simonw/datasette/issues/942#issuecomment-803631102,https://api.github.com/repos/simonw/datasette/issues/942,803631102,MDEyOklzc3VlQ29tbWVudDgwMzYzMTEwMg==,192568,mroswell,2021-03-21T17:48:42Z,2021-03-21T17:48:42Z,CONTRIBUTOR,"I like this idea. 
Though it might be nice to have some kind of automated system from database to file, so that developers could easily track diffs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681334912,Support column descriptions in metadata.json, https://github.com/simonw/datasette/issues/1265#issuecomment-802923254,https://api.github.com/repos/simonw/datasette/issues/1265,802923254,MDEyOklzc3VlQ29tbWVudDgwMjkyMzI1NA==,7476523,bobwhitelock,2021-03-19T15:39:15Z,2021-03-19T15:39:15Z,CONTRIBUTOR,"It doesn't use basic auth, but you can put a whole datasette instance, or parts of this, behind a username/password prompt using https://github.com/simonw/datasette-auth-passwords","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",836123030,Support for HTTP Basic Authentication, https://github.com/simonw/datasette/issues/1262#issuecomment-802095132,https://api.github.com/repos/simonw/datasette/issues/1262,802095132,MDEyOklzc3VlQ29tbWVudDgwMjA5NTEzMg==,7476523,bobwhitelock,2021-03-18T16:37:45Z,2021-03-18T16:37:45Z,CONTRIBUTOR,"This sounds like a good use case for a plugin, since this will only be useful for a subset of Datasette users. It shouldn't be too difficult to add a button to do this with the available plugin hooks - have you taken a look at https://docs.datasette.io/en/latest/writing_plugins.html?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",834602299,Plugin hook that could support 'order by random()' for table view, https://github.com/simonw/datasette/issues/1388#issuecomment-876213177,https://api.github.com/repos/simonw/datasette/issues/1388,876213177,MDEyOklzc3VlQ29tbWVudDg3NjIxMzE3Nw==,80737,aslakr,2021-07-08T07:47:17Z,2021-07-08T07:47:17Z,CONTRIBUTOR,"> This sounds like a valuable feature for people running Datasette behind a proxy. Yes, in some cases it is easer to use e.g. Apache's [ProxyPass Directive](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypass) with Unix Domain Socket like `unix:/home/www.socket|http://localhost/whatever/`. 
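On the Datasette side, a hedged sketch of what listening on a socket could look like by running the ASGI app under uvicorn directly (the socket path matches the Apache example above; this is not a built-in Datasette option):

```python
# Rough sketch: serve Datasette's ASGI app over a UNIX domain socket via
# uvicorn, so the ProxyPass directive above can reach it without a TCP port.
import uvicorn

from datasette.app import Datasette

ds = Datasette(['fixtures.db'])
uvicorn.run(ds.app(), uds='/home/www.socket')
```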
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",939051549,Serve using UNIX domain socket, https://github.com/simonw/datasette/issues/1101#issuecomment-869191854,https://api.github.com/repos/simonw/datasette/issues/1101,869191854,MDEyOklzc3VlQ29tbWVudDg2OTE5MTg1NA==,25778,eyeseast,2021-06-27T16:42:14Z,2021-06-27T16:42:14Z,CONTRIBUTOR,This would really help with this issue: https://github.com/eyeseast/datasette-geojson/issues/7,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",749283032,register_output_renderer() should support streaming data, https://github.com/simonw/datasette/issues/1168#issuecomment-869076254,https://api.github.com/repos/simonw/datasette/issues/1168,869076254,MDEyOklzc3VlQ29tbWVudDg2OTA3NjI1NA==,2670795,brandonrobertz,2021-06-27T00:03:16Z,2021-06-27T00:05:51Z,CONTRIBUTOR,"> Related: Here's an implementation of a `get_metadata()` plugin hook by @brandonrobertz [next-LI@3fd8ce9](https://github.com/next-LI/datasette/commit/3fd8ce91f3108c82227bf65ff033929426c60437) Here's a plugin that implements metadata-within-DBs: [next-LI/datasette-live-config](https://github.com/next-LI/datasette-live-config) How it works: If a database has a `__metadata` table, then it gets parsed and included in the global metadata. It also implements a database-action hook with a UI for managing config. More context: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L109-L140","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388,Mechanism for storing metadata in _metadata tables, https://github.com/simonw/datasette/issues/1384#issuecomment-869074701,https://api.github.com/repos/simonw/datasette/issues/1384,869074701,MDEyOklzc3VlQ29tbWVudDg2OTA3NDcwMQ==,2670795,brandonrobertz,2021-06-26T23:45:18Z,2021-06-26T23:45:37Z,CONTRIBUTOR,"> Here's where the plugin hook is called, demonstrating the `fallback=` argument: > > https://github.com/simonw/datasette/blob/05a312caf3debb51aa1069939923a49e21cd2bd1/datasette/app.py#L426-L472 > > I'm not convinced of the use-case for passing `fallback=` to the hook here - is there a reason a plugin might care whether fallback is `True` or `False`, seeing as the `metadata()` method already respects that fallback logic on line 459? I think you're right. I can't think of a reason why the plugin would care about the `fallback` parameter since plugins are currently mandated to return a full, global metadata dict.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-869074182,https://api.github.com/repos/simonw/datasette/issues/1384,869074182,MDEyOklzc3VlQ29tbWVudDg2OTA3NDE4Mg==,2670795,brandonrobertz,2021-06-26T23:37:42Z,2021-06-26T23:37:42Z,CONTRIBUTOR,"> > Hmmm... that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries. > > @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try? 
> > Answering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160 > > That's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it. > > This is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem. > > But... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead. _Ideally_ this hook would be asynchronous, but when I started down that path I quickly realized how large of a change this would be, since metadata gets used synchronously across the entire Datasette codebase. (And calling async code from sync is non-trivial.) In my live-configuration implementation I use synchronous reads using a persistent sqlite connection. This works pretty well in practice, but I agree it's limiting. My thinking around this was to go with the path of least change as `Datasette.metadata()` is a critical core function.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/pull/1368#issuecomment-865204472,https://api.github.com/repos/simonw/datasette/issues/1368,865204472,MDEyOklzc3VlQ29tbWVudDg2NTIwNDQ3Mg==,2670795,brandonrobertz,2021-06-21T17:11:37Z,2021-06-21T17:11:37Z,CONTRIBUTOR,If this is a concept ACK then I will move onto fixing the tests (adding new ones) and updating the documentation for the new plugin hook.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",913865304,DRAFT: A new plugin hook for dynamic metadata, https://github.com/simonw/sqlite-utils/issues/278#issuecomment-864621099,https://api.github.com/repos/simonw/sqlite-utils/issues/278,864621099,MDEyOklzc3VlQ29tbWVudDg2NDYyMTA5OQ==,601708,mcint,2021-06-20T22:39:57Z,2021-06-20T22:39:57Z,CONTRIBUTOR,"Fair. I looked into it, it looks like it could be done, but it would be _a bit ugly_. I can upload and link a gist of my exploration. **Click** can parse a first argument while still recognizing it as a sub-command keyword. From there, the program could: 1. ignore it preemptively if it matches a sub-command 2. and/or check if a (db) file exists at the path. It would then also need to set a shared db argument variable. Click also makes it easy to parse arguments from environment variables. If you're amenable, I may submit a patch for only that, which would update each sub-command to check for a DB/SQLITE_UTILS_DB environment variable. The goal would be usage that looks like: `DB=./convenient.db sqlite-utils [operation] [args]`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923697888,"Support db as first parameter before subcommand, or as environment variable", https://github.com/simonw/sqlite-utils/issues/272#issuecomment-861944202,https://api.github.com/repos/simonw/sqlite-utils/issues/272,861944202,MDEyOklzc3VlQ29tbWVudDg2MTk0NDIwMg==,25778,eyeseast,2021-06-16T01:41:03Z,2021-06-16T01:41:03Z,CONTRIBUTOR,"So, I do things like this a lot, too. 
I like the idea of piping in from stdin. Something like this would be nice to do in a makefile: ```sh cat file.csv | sqlite-utils --csv --table data - 'SELECT * FROM data WHERE col=""whatever""' > filtered.csv ``` If you assumed that you're always piping out the same format you're piping in, the option names don't have to change. Depends how much you want to change formats.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",921878733,"Idea: import CSV to memory, run SQL, export in a single command", https://github.com/simonw/datasette/pull/1130#issuecomment-861497548,https://api.github.com/repos/simonw/datasette/issues/1130,861497548,MDEyOklzc3VlQ29tbWVudDg2MTQ5NzU0OA==,3243482,abdusco,2021-06-15T13:27:48Z,2021-06-15T13:27:48Z,CONTRIBUTOR,"There's a workaround: https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ and a future fix: https://css-tricks.com/safari-15-new-ui-theme-colors-and-a-css-tricks-cameo/","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",756876238,Fix footer not sticking to bottom in short pages, https://github.com/simonw/datasette/pull/1370#issuecomment-857298526,https://api.github.com/repos/simonw/datasette/issues/1370,857298526,MDEyOklzc3VlQ29tbWVudDg1NzI5ODUyNg==,25778,eyeseast,2021-06-09T01:18:59Z,2021-06-09T01:18:59Z,CONTRIBUTOR,"I'm happy to grab some or all of these in this PR, if you want. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",914130834,Ensure db.path is a string before trying to insert into internal database, https://github.com/simonw/datasette/pull/1368#issuecomment-856182547,https://api.github.com/repos/simonw/datasette/issues/1368,856182547,MDEyOklzc3VlQ29tbWVudDg1NjE4MjU0Nw==,2670795,brandonrobertz,2021-06-07T18:59:47Z,2021-06-07T23:04:25Z,CONTRIBUTOR,"Note that if we went with a ""update_metadata"" hook, the hook signature would look something like this (it would return nothing): ``` update_metadata( datasette=self, metadata=metadata, key=key, database=database, table=table, fallback=fallback ) ``` The Datasette function `_metadata_recursive_update(self, orig, updated)` would disappear into the plugins. Doing this, though, we'd lose the easy ability to make the local metadata.yaml immutable (since we'd no longer have the recursive update).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",913865304,DRAFT: A new plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1356#issuecomment-853895159,https://api.github.com/repos/simonw/datasette/issues/1356,853895159,MDEyOklzc3VlQ29tbWVudDg1Mzg5NTE1OQ==,25778,eyeseast,2021-06-03T14:03:59Z,2021-06-03T14:03:59Z,CONTRIBUTOR,"(Putting thoughts here to keep the conversation in one place.) I think using datasette for this use-case is the right approach. I usually have both datasette and sqlite-utils installed in the same project, and that's where I'm trying out queries, so it probably makes the most sense to have datasette also manage the output (and maybe the input, too). It seems like both `--get` and `--query` could work better as subcommands, rather than options, if you're looking at building out a full CLI experience in datasette. It would give a cleaner separation in what you're trying to do and let each have its own dedicated options. 
So something like this: ```sh # run an arbitrary query datasette query covid.db ""select * from ny_times_us_counties limit 1"" --format yaml # run a canned query datasette get covid.db some-canned-query --format yaml ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",910092577,"Research: syntactic sugar for using --get with SQL queries, maybe ""datasette query""", https://github.com/simonw/datasette/issues/1284#issuecomment-851567204,https://api.github.com/repos/simonw/datasette/issues/1284,851567204,MDEyOklzc3VlQ29tbWVudDg1MTU2NzIwNA==,192568,mroswell,2021-05-31T15:42:10Z,2021-11-04T03:15:01Z,CONTRIBUTOR,"I very much want to make: https://list.SaferDisinfectants.org/disinfectants/listN have this URL: https://list.SaferDisinfectants.org/ I'm using only one table page on the site, with no pagination. I'm not using the home page, though when I tried to move my table to the home page as mentioned above, I failed to figure out how. I am using cloudflare, but I haven't figured out a forwarding or HTML re-write method of doing this, either. Is there any way I can get a prettier list URL? I'm on Vercel. (I have a wordpress site on the main domain on a separate host.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",845794436,Feature or Documentation Request: Individual table as home page template, https://github.com/simonw/datasette/pull/1348#issuecomment-850077261,https://api.github.com/repos/simonw/datasette/issues/1348,850077261,MDEyOklzc3VlQ29tbWVudDg1MDA3NzI2MQ==,10801138,blairdrummond,2021-05-28T03:05:38Z,2021-05-28T03:05:38Z,CONTRIBUTOR,"Note, the CVEs are probably resolvable with this https://github.com/simonw/datasette/pull/1296 . My experience is that Ubuntu seems to manage these better? Though that is surprising :/ ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",904598267,DRAFT: add test and scan for docker images, https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-846413174,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59,846413174,MDEyOklzc3VlQ29tbWVudDg0NjQxMzE3NA==,631242,frosencrantz,2021-05-22T14:06:19Z,2021-05-22T14:06:19Z,CONTRIBUTOR,Thanks Simon!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771872303,Remove unneeded exists=True for -a/--auth flag., https://github.com/simonw/datasette/issues/1236#issuecomment-842798043,https://api.github.com/repos/simonw/datasette/issues/1236,842798043,MDEyOklzc3VlQ29tbWVudDg0Mjc5ODA0Mw==,192568,mroswell,2021-05-18T03:28:25Z,2021-05-18T03:28:25Z,CONTRIBUTOR,That corner handle looks like a hamburger menu to me. Note that the default resize handle is not limited to two-way resize: http://jsfiddle.net/LLrh7Lte/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",812228314,Ability to increase size of the SQL editor window, https://github.com/simonw/datasette/pull/1130#issuecomment-738907852,https://api.github.com/repos/simonw/datasette/issues/1130,738907852,MDEyOklzc3VlQ29tbWVudDczODkwNzg1Mg==,3243482,abdusco,2020-12-04T17:22:29Z,2020-12-04T17:31:25Z,CONTRIBUTOR,"EDIT: I misunderstood the problem. This seems like a fix better suited for Safari. But I don't have any Apple device to test it. 
```css body { min-height: 100vh; min-height: -webkit-fill-available; } html { height: -webkit-fill-available; } ``` https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ --- It's actually not that difficult to fix. Well, this is actually a workaround to keep viewport in place. I usually put a transition (forgot to do it here) that keeps page from resizing. ```css .container { min-height: 100vh; transition: height 10000s steps(0); } ``` `steps()` function prevents excessive layout calculations, and lets the page snap back into place (10000s ~= 3h later) in a single step. This fix also prevents page from jumping around when the keyboard pops up and down.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",756876238,Fix footer not sticking to bottom in short pages, https://github.com/simonw/datasette/issues/1111#issuecomment-736322290,https://api.github.com/repos/simonw/datasette/issues/1111,736322290,MDEyOklzc3VlQ29tbWVudDczNjMyMjI5MA==,3243482,abdusco,2020-12-01T08:54:47Z,2020-12-01T08:54:47Z,CONTRIBUTOR,"Somewhat related: https://github.com/simonw/datasette/issues/859 I fixed the issue with forking and disabling the counts for hidden tables.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",751195017,Accessing a database's `.json` is slow for very large SQLite files, https://github.com/simonw/datasette/issues/1114#issuecomment-735436014,https://api.github.com/repos/simonw/datasette/issues/1114,735436014,MDEyOklzc3VlQ29tbWVudDczNTQzNjAxNA==,2182,danp,2020-11-29T18:33:30Z,2020-11-29T18:33:30Z,CONTRIBUTOR,Thank you!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",752966476,--load-extension=spatialite not working with datasetteproject/datasette docker image, https://github.com/simonw/datasette/issues/493#issuecomment-735281577,https://api.github.com/repos/simonw/datasette/issues/493,735281577,MDEyOklzc3VlQ29tbWVudDczNTI4MTU3Nw==,50527,jefftriplett,2020-11-28T19:39:53Z,2020-11-28T19:39:53Z,CONTRIBUTOR,"I was confused by `--config` and I tried passing the json from datasette-ripgrep into `config.json` just as a wild guess. A short term solution might be pointing out in plugins that their snippet json can go in `metadata.json` at least makes it easier to search for config options or to know where to start if someone is new. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",449886319,Rename metadata.json to config.json, https://github.com/simonw/datasette/pull/1112#issuecomment-735279355,https://api.github.com/repos/simonw/datasette/issues/1112,735279355,MDEyOklzc3VlQ29tbWVudDczNTI3OTM1NQ==,50527,jefftriplett,2020-11-28T19:21:09Z,2020-11-28T19:21:09Z,CONTRIBUTOR,"(Even more annoying is that I see my editor leaked an extra delete space at the end of the line. 
I'm happy to rebuild this to be less annoying, but you probably don't want the changelog update either way)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",752749485,Fix --metadata doc usage, https://github.com/simonw/datasette/issues/838#issuecomment-720354227,https://api.github.com/repos/simonw/datasette/issues/838,720354227,MDEyOklzc3VlQ29tbWVudDcyMDM1NDIyNw==,82988,psychemedia,2020-11-02T09:33:58Z,2020-11-02T09:33:58Z,CONTRIBUTOR,"Thanks; just a note that the `datasette.urls.static(path)` and `datasette.urls.static_plugins(plugin_name, path)` items both seem to be repeated and appear in the docs twice?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097,Incorrect URLs when served behind a proxy with base_url set, https://github.com/simonw/datasette/pull/1049#issuecomment-718528252,https://api.github.com/repos/simonw/datasette/issues/1049,718528252,MDEyOklzc3VlQ29tbWVudDcxODUyODI1Mg==,82988,psychemedia,2020-10-29T09:20:34Z,2020-10-29T09:20:34Z,CONTRIBUTOR,That workaround is probably fine. I was trying to work out whether there might be other situations where a pre-external package load might be useful but couldn't offhand bring any other examples to mind. The static plugins option also looks interesting.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",729017519,Add template block prior to extra URL loaders, https://github.com/simonw/sqlite-utils/pull/189#issuecomment-717359145,https://api.github.com/repos/simonw/sqlite-utils/issues/189,717359145,MDEyOklzc3VlQ29tbWVudDcxNzM1OTE0NQ==,35681,adamwolf,2020-10-27T16:20:32Z,2020-10-27T16:20:32Z,CONTRIBUTOR,"No problem. I added a test. Let me know if it looks sufficient or if you want me to to tweak something! If you don't mind, would you tag this PR as ""hacktoberfest-accepted""? If you do mind, no problem and I'm sorry for asking :) My kiddos like the shirts.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",729818242,Allow iterables other than Lists in m2m records, https://github.com/simonw/datasette/pull/1043#issuecomment-716237524,https://api.github.com/repos/simonw/datasette/issues/1043,716237524,MDEyOklzc3VlQ29tbWVudDcxNjIzNzUyNA==,45380,bollwyvl,2020-10-26T00:14:57Z,2020-10-26T00:14:57Z,CONTRIBUTOR,"Sorry, I was out of the loop this weekend. The missing sdists were in some the `datasette-*` plugins... i'll capture my findings more concretely in one spot when i have a chance...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727915394,Include LICENSE in sdist, https://github.com/simonw/datasette/issues/838#issuecomment-716123598,https://api.github.com/repos/simonw/datasette/issues/838,716123598,MDEyOklzc3VlQ29tbWVudDcxNjEyMzU5OA==,82988,psychemedia,2020-10-25T10:20:12Z,2020-10-25T10:53:24Z,CONTRIBUTOR,"I'm trying to [run something behind a MyBinder proxy](https://github.com/ouseful-testing/nbsearch), but seem to have something set up incorrectly and not sure what the fix is? 
I'm starting datasette with jupyter-server-proxy setup: ``` # __init__.py def setup_nbsearch(): return { ""command"": [ ""datasette"", ""serve"", f""{_NBSEARCH_DB_PATH}"", ""-p"", ""{port}"", ""--config"", ""base_url:{base_url}nbsearch/"" ], ""absolute_url"": True, # The following needs a the labextension installing. # eg in postBuild: jupyter labextension install jupyterlab-server-proxy ""launcher_entry"": { ""enabled"": True, ""title"": ""nbsearch"", }, } ``` where the `base_url` gets automatically populated by the server-proxy. I define the loaders as: ``` # __init__.py from datasette import hookimpl @hookimpl def extra_css_urls(database, table, columns, view_name, datasette): return [ ""/-/static-plugins/nbsearch/prism.css"", ""/-/static-plugins/nbsearch/nbsearch.css"", ] ``` but these seem to also need a base_url prefix set somehow? Currently, the generated HTML loads properly but internal links are incorrect; eg they take the form `` which resolves to eg `https://notebooks.gesis.org/hub/-/static-plugins/nbsearch/prism.css` rather than required URL of form `https://notebooks.gesis.org/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static-plugins/nbsearch/prism.css`. The main css is loaded correctly: ``","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",637395097,Incorrect URLs when served behind a proxy with base_url set, https://github.com/simonw/datasette/issues/1033#issuecomment-716066000,https://api.github.com/repos/simonw/datasette/issues/1033,716066000,MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA==,82988,psychemedia,2020-10-24T22:58:33Z,2020-10-24T22:58:33Z,CONTRIBUTOR,"From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note: ``` datasette.urls.instance() Returns the URL to the Datasette instance root page. This is usually ""/"" ``` What about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to? - [ ] `https://example.com:PORT/weirdpath/datasette` - [ ] `https://example.com:PORT/weirdpath/` - [ ] `https://example.com:PORT/` - [ ] `https://example.com` - [ ] something else?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",725099777,datasette.urls.static_plugins(...) method, https://github.com/simonw/datasette/issues/1012#issuecomment-714908859,https://api.github.com/repos/simonw/datasette/issues/1012,714908859,MDEyOklzc3VlQ29tbWVudDcxNDkwODg1OQ==,45380,bollwyvl,2020-10-23T04:49:20Z,2020-10-23T04:49:20Z,CONTRIBUTOR,"Good luck on 1.0! It may also be worth lobbying for a `Framework::Datasette::1.0` classifier. This would be a nice way to allow the ecosystem to self-document a bit more [discoverably](https://pypi.org/search/?q=&o=&c=Framework+%3A%3A+Datasette%3A%3A+1.0). I was surprised to see the [PR for `Framework::Jupyter`](https://github.com/pypa/warehouse/pull/1905/files) is a... database migration! 
Of course, there may be more workflow to it!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",718540751,For 1.0 update trove classifier in setup.py, https://github.com/simonw/datasette/issues/1033#issuecomment-714657366,https://api.github.com/repos/simonw/datasette/issues/1033,714657366,MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng==,82988,psychemedia,2020-10-22T17:51:29Z,2020-10-22T17:51:29Z,CONTRIBUTOR,"How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static option` and metadata formulations such as `""extra_js_urls"": [ ""/static/app.js""]` (I've not managed to get this to work in a Jupyter server proxied set up; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",725099777,datasette.urls.static_plugins(...) method, https://github.com/simonw/datasette/issues/1019#issuecomment-708520800,https://api.github.com/repos/simonw/datasette/issues/1019,708520800,MDEyOklzc3VlQ29tbWVudDcwODUyMDgwMA==,639012,jsfenfen,2020-10-14T16:37:19Z,2020-10-14T16:37:19Z,CONTRIBUTOR,🎉 Thanks so much @simonw ! 🎉 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",721050815,"""Edit SQL"" button on canned queries", https://github.com/dogsheep/swarm-to-sqlite/pull/10#issuecomment-707326192,https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/10,707326192,MDEyOklzc3VlQ29tbWVudDcwNzMyNjE5Mg==,29426418,mattiaborsoi,2020-10-12T20:20:02Z,2020-10-12T20:20:02Z,CONTRIBUTOR,This closes issue #8 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",719637258,Update utils.py to fix sqlite3.OperationalError, https://github.com/dogsheep/github-to-sqlite/pull/48#issuecomment-704503719,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/48,704503719,MDEyOklzc3VlQ29tbWVudDcwNDUwMzcxOQ==,755825,adamjonas,2020-10-06T19:26:59Z,2020-10-06T19:26:59Z,CONTRIBUTOR,ref #46 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681228542,Add pull requests, https://github.com/simonw/datasette/issues/236#issuecomment-799003172,https://api.github.com/repos/simonw/datasette/issues/236,799003172,MDEyOklzc3VlQ29tbWVudDc5OTAwMzE3Mg==,21148,jacobian,2021-03-14T23:42:57Z,2021-03-14T23:42:57Z,CONTRIBUTOR,"Oh, and the container image can be up to 10GB, so the EFS step might not be needed except for pretty big stuff.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500,datasette publish lambda plugin, https://github.com/simonw/datasette/issues/236#issuecomment-799002993,https://api.github.com/repos/simonw/datasette/issues/236,799002993,MDEyOklzc3VlQ29tbWVudDc5OTAwMjk5Mw==,21148,jacobian,2021-03-14T23:41:51Z,2021-03-14T23:41:51Z,CONTRIBUTOR,"Now that [Lambda supports Docker](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/), this probably is a bit easier and may be able to build on top of the existing package command. 
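For example, a hedged sketch of the glue that could go in the container image, using the third-party Mangum ASGI-to-Lambda adapter (an assumption on my part, not something Datasette ships with):

```python
# Hypothetical Lambda entry point: wrap Datasette's ASGI app with Mangum
# so API Gateway / Lambda events are translated into ASGI requests.
from datasette.app import Datasette
from mangum import Mangum

ds = Datasette(['fixtures.db'])
handler = Mangum(ds.app())  # point the image's Lambda handler at this
```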
There are weirdnesses in how the command actually gets invoked; the [aws-lambda-python image](https://hub.docker.com/r/amazon/aws-lambda-python) shows a bit of that. So Datasette would probably need some sort of Lambda-specific entry point to make this work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500,datasette publish lambda plugin, https://github.com/simonw/datasette/pull/1256#issuecomment-795112935,https://api.github.com/repos/simonw/datasette/issues/1256,795112935,MDEyOklzc3VlQ29tbWVudDc5NTExMjkzNQ==,6371750,JBPressac,2021-03-10T08:59:45Z,2021-03-10T08:59:45Z,CONTRIBUTOR,"Sorry, I meant ""minor typo"" not ""minor type"".","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",827341657,Minor type in IP adress, https://github.com/simonw/datasette/issues/766#issuecomment-791509910,https://api.github.com/repos/simonw/datasette/issues/766,791509910,MDEyOklzc3VlQ29tbWVudDc5MTUwOTkxMA==,6371750,JBPressac,2021-03-05T15:57:35Z,2021-03-05T16:35:21Z,CONTRIBUTOR,"Hello, I have the same wildcards search problems with an instance of Datasette. http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwerz&_sort=rowid is OK but http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwe* is not (FTS is activated on ""Reference"" ""IntituleAnalyse"" ""NomDuProducteur"" ""PresentationDuContenu"" ""Notes""). Notice that a SQL query as below launched directly from SQLite in the server's shell, retrieves results. `select * from fonds_auguste_dupouy_1872_1967_fts where IntituleAnalyse MATCH ""gwe*"";` Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",617323873,Enable wildcard-searches by default, https://github.com/simonw/datasette/issues/1238#issuecomment-789186458,https://api.github.com/repos/simonw/datasette/issues/1238,789186458,MDEyOklzc3VlQ29tbWVudDc4OTE4NjQ1OA==,198537,rgieseke,2021-03-02T20:19:30Z,2021-03-02T20:19:30Z,CONTRIBUTOR,A custom `templates/index.html` seems to work and custom `pages` as a workaround with moving them to `pages/base_url_dir`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813899472,Custom pages don't work with base_url setting, https://github.com/simonw/sqlite-utils/issues/242#issuecomment-787121933,https://api.github.com/repos/simonw/sqlite-utils/issues/242,787121933,MDEyOklzc3VlQ29tbWVudDc4NzEyMTkzMw==,25778,eyeseast,2021-02-27T19:18:57Z,2021-02-27T19:18:57Z,CONTRIBUTOR,"I think HTTPX gets it exactly right, with a clear separation between sync and async clients, each with a basically identical API. 
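E.g. the sync and async versions read almost line-for-line the same:

```python
import asyncio

import httpx

# Sync client
with httpx.Client() as client:
    print(client.get('https://example.com/').status_code)

# Async client -- same method names, same arguments
async def main():
    async with httpx.AsyncClient() as client:
        print((await client.get('https://example.com/')).status_code)

asyncio.run(main())
```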
(I'm about to switch [feed-to-sqlite](https://github.com/eyeseast/feed-to-sqlite) over to it, from Requests, to eventually make way for async support.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",817989436,Async support, https://github.com/simonw/datasette/issues/1212#issuecomment-782430028,https://api.github.com/repos/simonw/datasette/issues/1212,782430028,MDEyOklzc3VlQ29tbWVudDc4MjQzMDAyOA==,4488943,kbaikov,2021-02-19T22:54:13Z,2021-02-19T22:54:13Z,CONTRIBUTOR,I will close this issue since it appears only in my particular setup.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797651831,Tests are very slow. , https://github.com/simonw/datasette/pull/1229#issuecomment-782053455,https://api.github.com/repos/simonw/datasette/issues/1229,782053455,MDEyOklzc3VlQ29tbWVudDc4MjA1MzQ1NQ==,295329,camallen,2021-02-19T12:47:19Z,2021-02-19T12:47:19Z,CONTRIBUTOR,I believe this pr and #1031 are related and fix the same issue.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810507413,ensure immutable databses when starting in configuration directory mode with, https://github.com/simonw/datasette/issues/1220#issuecomment-778439617,https://api.github.com/repos/simonw/datasette/issues/1220,778439617,MDEyOklzc3VlQ29tbWVudDc3ODQzOTYxNw==,7476523,bobwhitelock,2021-02-12T20:33:27Z,2021-02-12T20:33:27Z,CONTRIBUTOR,"That Docker command will mount your current directory inside the Docker container at `/mnt` - so you shouldn't need to change anything locally, just run ``` docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db ``` and it will use the `fixtures.db` file within your current directory","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778246347,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,778246347,MDEyOklzc3VlQ29tbWVudDc3ODI0NjM0Nw==,41546558,RhetTbull,2021-02-12T15:00:43Z,2021-02-12T15:00:43Z,CONTRIBUTOR,"Yes, Big Sur Photos database doesn't have `ZGENERICASSET` table. 
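A quick way to check which schema a given Photos.sqlite uses (sketch only; on Big Sur the table is apparently `ZASSET` instead, but treat the names as assumptions):

```python
# Sketch: see which asset table this Photos.sqlite schema provides.
import sqlite3

conn = sqlite3.connect('Photos.sqlite')
tables = {row[0] for row in conn.execute(
    'select name from sqlite_master where type = ?', ('table',))}
print('ZGENERICASSET' in tables, 'ZASSET' in tables)
```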
PR #31 will fix this.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729,photo-to-sqlite: command not found, https://github.com/simonw/datasette/issues/1220#issuecomment-777927946,https://api.github.com/repos/simonw/datasette/issues/1220,777927946,MDEyOklzc3VlQ29tbWVudDc3NzkyNzk0Ng==,7476523,bobwhitelock,2021-02-12T02:29:54Z,2021-02-12T02:29:54Z,CONTRIBUTOR,"According to https://github.com/simonw/datasette/blob/master/docs/installation.rst#using-docker it should be ``` docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db ``` This uses `/mnt/fixtures.db` whereas you're using `fixtures.db` - did you try using this path instead?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/simonw/datasette/issues/1200#issuecomment-777132761,https://api.github.com/repos/simonw/datasette/issues/1200,777132761,MDEyOklzc3VlQ29tbWVudDc3NzEzMjc2MQ==,7476523,bobwhitelock,2021-02-11T00:29:52Z,2021-02-11T00:29:52Z,CONTRIBUTOR,I'm probably missing something but what's the use case here - what would this offer over adding `limit 10` to the query?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",792890765,?_size=10 option for the arbitrary query page would be useful, https://github.com/simonw/datasette/issues/1208#issuecomment-774286962,https://api.github.com/repos/simonw/datasette/issues/1208,774286962,MDEyOklzc3VlQ29tbWVudDc3NDI4Njk2Mg==,4488943,kbaikov,2021-02-05T21:02:39Z,2021-02-05T21:02:39Z,CONTRIBUTOR,@simonw could you please take a look at the PR 1211 that fixes this issue?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",794554881,A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper, https://github.com/simonw/datasette/issues/1212#issuecomment-772007663,https://api.github.com/repos/simonw/datasette/issues/1212,772007663,MDEyOklzc3VlQ29tbWVudDc3MjAwNzY2Mw==,4488943,kbaikov,2021-02-02T21:36:56Z,2021-02-02T21:36:56Z,CONTRIBUTOR,"How do you get 4-5 minutes? I run my tests in WSL 2, so may be i need to try a real linux VM.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797651831,Tests are very slow. , https://github.com/simonw/datasette/pull/1211#issuecomment-771127458,https://api.github.com/repos/simonw/datasette/issues/1211,771127458,MDEyOklzc3VlQ29tbWVudDc3MTEyNzQ1OA==,4488943,kbaikov,2021-02-01T20:13:39Z,2021-02-01T20:13:39Z,CONTRIBUTOR,Ping @simonw ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797649915,Use context manager instead of plain open, https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770112248,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60,770112248,MDEyOklzc3VlQ29tbWVudDc3MDExMjI0OA==,22578954,daniel-butler,2021-01-30T00:01:03Z,2021-01-30T01:14:42Z,CONTRIBUTOR,"Yes that would be cool! I wouldn't mind helping. Is this the meat of it? 
https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/utils.py#L512 It looks like the cli option is added with this decorator : https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/cli.py#L14 I looked a bit at utils.py in the GitHub repository. I was surprised at the amount of manual mapping of the API response you had to do to get this to work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797097140,Use Data from SQLite in other commands, https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-770150526,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51,770150526,MDEyOklzc3VlQ29tbWVudDc3MDE1MDUyNg==,22578954,daniel-butler,2021-01-30T03:44:19Z,2021-01-30T03:47:24Z,CONTRIBUTOR,I don't have much experience with github's rate limiting. In my day job we use the [tenacity library](https://github.com/jd/tenacity) to handle http errors we get.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",703246031,github-to-sqlite should handle rate limits better, https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60,770069864,MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA==,22578954,daniel-butler,2021-01-29T21:52:05Z,2021-02-12T18:29:43Z,CONTRIBUTOR,"For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called `gh-organization`. The github's owner id of gh-orgnization is `123456789`. ```bash github-to-sqlite repos github.db gh-organization ``` I'm on a windows computer running git bash to be able to use the `|` command. This works for me ```bash sqlite3 github.db ""SELECT full_name FROM repos WHERE owner = '123456789';"" | tr '\n\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; } ``` On a pure linux system I think this would work because the new line character is normally `\n` ```bash sqlite3 github.db ""SELECT full_name FROM repos WHERE owner = '123456789';"" | tr '\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }` ``` As expected I ran into rate limit issues #51 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797097140,Use Data from SQLite in other commands, https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-760950128,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55,760950128,MDEyOklzc3VlQ29tbWVudDc2MDk1MDEyOA==,21148,jacobian,2021-01-15T13:44:52Z,2021-01-15T13:44:52Z,CONTRIBUTOR,"I found and fixed another bug, this one around importing the tweets table. @simonw let me know if you'd prefer this broken out into multiple PRs, happy to do that if it makes review/merging easier.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779211940,Fix archive imports, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754729035,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,754729035,MDEyOklzc3VlQ29tbWVudDc1NDcyOTAzNQ==,21148,jacobian,2021-01-05T16:03:29Z,2021-01-05T16:03:29Z,CONTRIBUTOR,"I was able to fix this, at least enough to get _my_ archive to import. 
Not sure if there's more work to be done here or not.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071,Archive import appears to be broken on recent exports, https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-754728696,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55,754728696,MDEyOklzc3VlQ29tbWVudDc1NDcyODY5Ng==,21148,jacobian,2021-01-05T16:02:55Z,2021-01-05T16:02:55Z,CONTRIBUTOR,"This now works for me, though I'm entirely unsure whether it's a just-my-export thing or a wider issue. Also, this doesn't contain any tests. So I'm not sure if there's more work to be done here, or if this is good enough.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779211940,Fix archive imports, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,754721153,MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw==,21148,jacobian,2021-01-05T15:51:09Z,2021-01-05T15:51:09Z,CONTRIBUTOR,"Correction: the failure is on `lists-member.js` (I was thrown by the `block` variable name, but that's just a coincidence)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071,Archive import appears to be broken on recent exports, https://github.com/simonw/datasette/issues/1167#issuecomment-754619930,https://api.github.com/repos/simonw/datasette/issues/1167,754619930,MDEyOklzc3VlQ29tbWVudDc1NDYxOTkzMA==,3637,benpickles,2021-01-05T12:57:57Z,2021-01-05T12:57:57Z,CONTRIBUTOR,"Not sure where exactly to put the actual docs (presumably somewhere in [docs/contributing.rst](https://github.com/simonw/datasette/blob/main/docs/contributing.rst)) but I've made a slight change to make it easier to run locally (copying [the approach in excalidraw](https://github.com/excalidraw/excalidraw/blob/ade2565f497243a5e428f4906d8ed80c872fd981/package.json#L90-L94)): https://github.com/simonw/datasette/compare/main...benpickles:prettier-docs ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777145954,Add Prettier to contributing documentation, https://github.com/simonw/datasette/issues/1169#issuecomment-754007242,https://api.github.com/repos/simonw/datasette/issues/1169,754007242,MDEyOklzc3VlQ29tbWVudDc1NDAwNzI0Mg==,3637,benpickles,2021-01-04T14:29:57Z,2021-01-04T14:29:57Z,CONTRIBUTOR,I somewhat share your reluctance to add a package.json to seemingly every project out there but ultimately if they're project dependencies it's important they're managed within the codebase.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777677671,Prettier package not actually being cached, https://github.com/simonw/datasette/pull/1170#issuecomment-754004715,https://api.github.com/repos/simonw/datasette/issues/1170,754004715,MDEyOklzc3VlQ29tbWVudDc1NDAwNDcxNQ==,3637,benpickles,2021-01-04T14:25:44Z,2021-01-04T14:25:44Z,CONTRIBUTOR,I was going to re-add the filter to only run Prettier when there have been changes in `datasette/static` but that would mean it wouldn't run when the package is updated. 
That plus the fact that [the last run of the job took only 8 seconds](https://github.com/benpickles/datasette/runs/1640121514) is why I decided not to re-add the filter.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778126516,Install Prettier via package.json, https://github.com/simonw/datasette/issues/1012#issuecomment-753531657,https://api.github.com/repos/simonw/datasette/issues/1012,753531657,MDEyOklzc3VlQ29tbWVudDc1MzUzMTY1Nw==,45380,bollwyvl,2021-01-02T21:25:36Z,2021-01-02T21:25:36Z,CONTRIBUTOR,"Actually, on more research, I found out this is handled by the [trove-classifiers package](https://github.com/pypa/trove-classifiers/blob/master/src/trove_classifiers/__init__.py#L2) now, so it's just a one-liner pr instead of fire-up-a-docker-container-and-do-some-migrations","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",718540751,For 1.0 update trove classifier in setup.py, https://github.com/simonw/datasette/issues/417#issuecomment-752098906,https://api.github.com/repos/simonw/datasette/issues/417,752098906,MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg==,82988,psychemedia,2020-12-29T14:34:30Z,2020-12-29T14:34:50Z,CONTRIBUTOR,"FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py Not a production thing, just an experiment trying to explore what might be possible...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",421546944,Datasette Library, https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59,751375487,MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw==,631242,frosencrantz,2020-12-26T17:08:44Z,2020-12-26T17:08:44Z,CONTRIBUTOR,"Hi @simonw, do I need to do anything else for this PR to be considered to be included? I've tried using this project and it is quite nice to be able to explore a repository, but noticed that a couple commands don't allow you to use authorization from the environment variable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771872303,Remove unneeded exists=True for -a/--auth flag., https://github.com/simonw/datasette/pull/1158#issuecomment-750389683,https://api.github.com/repos/simonw/datasette/issues/1158,750389683,MDEyOklzc3VlQ29tbWVudDc1MDM4OTY4Mw==,6774676,eumiro,2020-12-23T17:02:50Z,2020-12-23T17:02:50Z,CONTRIBUTOR,"The dict/set suggestion comes from `pyupgrade --py36-plus`, but then had to `black` the change. The rest comes from PyCharm's Inspect code function. I reviewed all the suggestions and fixed a thing or two, such as leading/trailing spaces in the docstrings or turned around the chained conditions. 
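(For anyone unfamiliar with that rewrite, this is the shape of it; an illustrative sketch, not the actual diff from this PR:)

```python
# Illustrative only -- not the actual changes from this PR.
paths = ['a.txt', 'b.txt', 'a.txt']
items = [('Content-Type', 'text/html'), ('X-Test', '1')]

# Before: lists/generators passed to the set() and dict() constructors
seen_old = set([p for p in paths])
headers_old = dict((k.lower(), v) for k, v in items)

# After: the literal comprehensions pyupgrade --py36-plus rewrites them to
seen = {p for p in paths}
headers = {k.lower(): v for k, v in items}

assert seen == seen_old and headers == headers_old
```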
Then I tried to convert all `os.path/glob/open` to `Path`, but there were some local test issues, so I'll have to start over in smaller chunks if you want to have that too.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",773913793,Modernize code to Python 3.6+, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-748562330,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,748562330,MDEyOklzc3VlQ29tbWVudDc0ODU2MjMzMA==,41546558,RhetTbull,2020-12-20T04:45:08Z,2020-12-20T04:45:08Z,CONTRIBUTOR,Fixes the issue mentioned here: https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748562288,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15,748562288,MDEyOklzc3VlQ29tbWVudDc0ODU2MjI4OA==,41546558,RhetTbull,2020-12-20T04:44:22Z,2020-12-20T04:44:22Z,CONTRIBUTOR,"@nickvazz @simonw I opened a [PR](https://github.com/dogsheep/dogsheep-photos/pull/31) that replaces the SQL for `ZCOMPUTEDASSETATTRIBUTES` to use osxphotos which now exposes all this data and has been updated for Big Sur. I did regression tests to confirm the extracted data is identical, with one exception which should not affect operation: the old code pulled data from `ZCOMPUTEDASSETATTRIBUTES` for missing photos while the main loop ignores missing photos and does not add them to `apple_photos`. The new code does not add rows to the `apple_photos_scores` table for missing photos.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612151767,Expose scores from ZCOMPUTEDASSETATTRIBUTES, https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436779,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15,748436779,MDEyOklzc3VlQ29tbWVudDc0ODQzNjc3OQ==,41546558,RhetTbull,2020-12-19T07:49:00Z,2020-12-19T07:49:00Z,CONTRIBUTOR,@nickvazz ZGENERICASSET changed to ZASSET in Big Sur. Here's a list of other changes to the schema in Big Sur: https://github.com/RhetTbull/osxphotos/wiki/Changes-in-Photos-6---Big-Sur,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612151767,Expose scores from ZCOMPUTEDASSETATTRIBUTES, https://github.com/simonw/datasette/issues/493#issuecomment-748305976,https://api.github.com/repos/simonw/datasette/issues/493,748305976,MDEyOklzc3VlQ29tbWVudDc0ODMwNTk3Ng==,50527,jefftriplett,2020-12-18T20:34:39Z,2020-12-18T20:34:39Z,CONTRIBUTOR,"I can't keep up with the renaming contexts, but I like having the ability to run datasette + datasette-ripgrep against different configs: ```shell datasette serve --metadata=./metadata.json ``` I have one for all of my code and one per client who has lots of code. So as long as I can point datasette to something, it's easy to work with. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",449886319,Rename metadata.json to config.json, https://github.com/simonw/datasette/issues/998#issuecomment-743080047,https://api.github.com/repos/simonw/datasette/issues/998,743080047,MDEyOklzc3VlQ29tbWVudDc0MzA4MDA0Nw==,6371750,JBPressac,2020-12-11T09:25:09Z,2020-12-11T09:25:09Z,CONTRIBUTOR,"Hello Simon, I have a similar problem with horizontal scrollbar display with Datasette version 0.51 and later for a table with more than 30 rows. With Datasette 0.50, the horizontal scrollbar is displayed; if I upgrade Datasette to 0.51 or later, the horizontal scrollbar disappears. Datasette 0.50: horizontal scrollbar ![2020-12-11 10_23_28-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885620-a5f17800-3b9a-11eb-8870-654e7d4372ca.png) Datasette 0.51 and later: no horizontal scrollbar ![2020-12-11 10_24_55-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885782-dfc27e80-3b9a-11eb-9d55-6c9a56227bf2.png) Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",717699884,Wide tables should scroll horizontally within the page, https://github.com/simonw/datasette/issues/767#issuecomment-632555800,https://api.github.com/repos/simonw/datasette/issues/767,632555800,MDEyOklzc3VlQ29tbWVudDYzMjU1NTgwMA==,2657547,rixx,2020-05-22T08:00:23Z,2020-05-22T08:00:23Z,CONTRIBUTOR,That would be perfect!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",620969465,Allow to specify a URL fragment for canned queries, https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-628405453,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22,628405453,MDEyOklzc3VlQ29tbWVudDYyODQwNTQ1Mw==,41546558,RhetTbull,2020-05-14T05:59:53Z,2020-05-14T05:59:53Z,CONTRIBUTOR,"I've added support for the above exif data to [v0.28.17](https://github.com/RhetTbull/osxphotos/releases/tag/v0.28.17) of osxphotos. `PhotoInfo.exif_info` will return an `ExifInfo` [dataclass](https://docs.python.org/3/library/dataclasses.html) object with the following properties: ```python flash_fired: bool iso: int metering_mode: int sample_rate: int track_format: int white_balance: int aperture: float bit_rate: float duration: float exposure_bias: float focal_length: float fps: float latitude: float longitude: float shutter_speed: float camera_make: str camera_model: str codec: str lens_model: str ``` It's not all the EXIF data available in most files but is the data Photos deems important to save. Of course, you can get all the EXIF data directly from the original file, e.g. with `exiftool`. Note: this only works in Photos 5. As best as I can tell, EXIF data is not stored in the database for earlier versions. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615626118,Try out ExifReader, https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-627007458,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22,627007458,MDEyOklzc3VlQ29tbWVudDYyNzAwNzQ1OA==,41546558,RhetTbull,2020-05-11T22:51:52Z,2020-05-11T22:52:26Z,CONTRIBUTOR,"I'm not familiar with `ExifReader`. 
I wrote my own wrapper around `exiftool` because I wanted a simple way to write EXIF data when exporting photos (e.g. writing out to PersonInImage and keywords to IPTC:Keywords) and the existing python packages like [pyexiftool](https://github.com/smarnach/pyexiftool) didn't do quite what I wanted. If all you're after is the camera and shot info, that's available in `ZEXTENDEDATTRIBUTES` table. I've got an open issue [#11](https://github.com/RhetTbull/osxphotos/issues/11) to add this to osxphotos but it hasn't bubbled to the top of my backlog yet. osxphotos will give you the location info: `PhotoInfo.location` returns a tuple of (lat, lon) though this info is in ZEXTENDEDATTRIBUTES too (though it might not be correct as I believe Photos creates this table at import and the user might have changed the location of a photo, e.g. if camera didn't have GPS). ```sql CREATE TABLE ZEXTENDEDATTRIBUTES ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZFLASHFIRED INTEGER, ZISO INTEGER, ZMETERINGMODE INTEGER, ZSAMPLERATE INTEGER, ZTRACKFORMAT INTEGER, ZWHITEBALANCE INTEGER, ZASSET INTEGER, ZAPERTURE FLOAT, ZBITRATE FLOAT, ZDURATION FLOAT, ZEXPOSUREBIAS FLOAT, ZFOCALLENGTH FLOAT, ZFPS FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT, ZSHUTTERSPEED FLOAT, ZCAMERAMAKE VARCHAR, ZCAMERAMODEL VARCHAR, ZCODEC VARCHAR, ZLENSMODEL VARCHAR ); ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615626118,Try out ExifReader, https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22,626667235,MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ==,41546558,RhetTbull,2020-05-11T12:20:34Z,2020-05-11T12:20:34Z,CONTRIBUTOR,"@simonw FYI, osxphotos includes a built in ExifTool class that uses [exiftool](https://exiftool.org/) to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osphotos command line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example: ```python import osxphotos photos = osxphotos.PhotosDB().photos() exiftool = osxphotos.exiftool.ExifTool(photos[0].path) exifdata = exiftool.as_dict() tags = exifdata[""IPTC:Keywords""] ``` Not as elegant perhaps as a python only implementation because ExifTool has to make subprocess calls to an external tool but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC. As for implementation, ExifTool uses a singleton pattern so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615626118,Try out ExifReader, https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626396379,MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ==,41546558,RhetTbull,2020-05-10T22:01:48Z,2020-05-10T22:01:48Z,CONTRIBUTOR,"Frustrates me when package authors create a ""drop in"" replacement with the same import name...this kind of thing has bitten me more than once! 
Would've been nicer I think for bpylist2 to do ""import bpylist2 as bpylist""","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626395641,MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ==,41546558,RhetTbull,2020-05-10T21:55:54Z,2020-05-10T21:55:54Z,CONTRIBUTOR,Did removing old bpylist solve the original problem or do you still have a photo that throws circular reference?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395507,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626395507,MDEyOklzc3VlQ29tbWVudDYyNjM5NTUwNw==,41546558,RhetTbull,2020-05-10T21:54:45Z,2020-05-10T21:54:45Z,CONTRIBUTOR,"@simonw does Photos show valid reverse geolocation info? Are you sure you're using [bpylist2](https://github.com/xa4a/bpylist2) and not bpylist? They're both unfortunately imported as ""bpylist"" so if you somehow got the wrong (original bpylist) version installed, it could be the issue. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626390317,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626390317,MDEyOklzc3VlQ29tbWVudDYyNjM5MDMxNw==,41546558,RhetTbull,2020-05-10T21:11:24Z,2020-05-10T21:50:58Z,CONTRIBUTOR,"Ugh....Yeah, I think easiest is to catch the exception and return no place as you suggest. This particular bit of code involves un-archiving a serialized NSKeyedArchiver which uses an object table and it is certainly possible to create a circular reference that way. Because this is happening in the decode, the circular reference must be in the original data. Does Photos show valid reverse geolocation info for the photo in question? If so, Photos may be doing something beyond a simple decode of the binary plist. For now, I'll push a patch to catch the exception.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/17#issuecomment-624284539,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/17,624284539,MDEyOklzc3VlQ29tbWVudDYyNDI4NDUzOQ==,41546558,RhetTbull,2020-05-05T20:20:05Z,2020-05-05T20:20:05Z,CONTRIBUTOR,"FYI, I've got an [issue](https://github.com/RhetTbull/osxphotos/issues/25) to make osxphotos cross-platform but it's low on my priority list. About 90% of the functionality could be done cross-platform but right now the MacOS specific stuff is embedded throughout and would take some work. 
Though I try to minimize it, there are sprinklings of ObjC & AppleScript throughout osxphotos.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612860531,Only install osxphotos if running on macOS, https://github.com/dogsheep/dogsheep-photos/issues/16#issuecomment-623845014,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16,623845014,MDEyOklzc3VlQ29tbWVudDYyMzg0NTAxNA==,41546558,RhetTbull,2020-05-05T03:55:14Z,2020-05-05T03:56:24Z,CONTRIBUTOR,"I'm traveling w/o access to my Mac so can't help with any code right now. I suspected ZSCENEIDENTIFIER was a foreign key into one of these psi.sqlite tables. But looks like you're on to something connecting groups to assets. As for the UUID, I think there are two ints because each is 64-bits but UUIDs are 128-bits. Thus they need to be combined to get the 128-bit UUID. You might be able to use Apple's [NSUUID](https://developer.apple.com/documentation/foundation/nsuuid?language=objc), for example, by wrapping with pyObjC. Here's one [example](https://github.com/ronaldoussoren/pyobjc/blob/881c82a7ba90f193934b52b44143360c80dce5e5/pyobjc-framework-Cocoa/PyObjCTest/test_nsuuid.py) of using this in PyObjC's test suite. Interesting it's stored this way instead of a UUIDString as in Photos.sqlite. Perhaps it's for faster indexing. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612287234,"Import machine-learning detected labels (dog, llama etc) from Apple Photos", https://github.com/simonw/datasette/pull/730#issuecomment-623463200,https://api.github.com/repos/simonw/datasette/issues/730,623463200,MDEyOklzc3VlQ29tbWVudDYyMzQ2MzIwMA==,27856297,dependabot-preview[bot],2020-05-04T13:27:22Z,2020-05-04T13:27:22Z,CONTRIBUTOR,Superseded by #753.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",604001627,"Update pytest-asyncio requirement from ~=0.10.0 to >=0.10,<0.12", https://github.com/simonw/sqlite-utils/issues/103#issuecomment-622599528,https://api.github.com/repos/simonw/sqlite-utils/issues/103,622599528,MDEyOklzc3VlQ29tbWVudDYyMjU5OTUyOA==,32605365,b0b5h4rp13,2020-05-01T22:49:12Z,2020-05-02T11:15:44Z,CONTRIBUTOR,"With SQLITE_MAX_VARS = 999, or even 899, this hits the problem with the batch rows causing an overflow (works fine if SQLITE_MAX_VARS = 799). p.s. 
I have tried a few list of dicts to sqlite modules and this was the easiest to use/understand ------------- file begins ------------------ import sqlite_utils as su data = [ {'tickerId': 913324382, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CONSTELLATION B', 'symbol': 'STZ B', 'disSymbol': 'STZ-B', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '163.13', 'change': '6.46', 'changeRatio': '0.0412', 'marketValue': '31180699895.63', 'volume': '417', 'turnoverRate': '0.0000'}, {'tickerId': 913323791, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Molina Health', 'symbol': 'MOH', 'disSymbol': 'MOH', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '173.25', 'change': '9.28', 'changeRatio': '0.0566', 'pPrice': '173.25', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '10520341695.50', 'volume': '1281557', 'turnoverRate': '0.0202'}, {'tickerId': 913257501, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Seattle Genetics', 'symbol': 'SGEN', 'disSymbol': 'SGEN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '145.64', 'change': '8.41', 'changeRatio': '0.0613', 'pPrice': '146.45', 'pChange': '0.8100', 'pChRatio': '0.0056', 'marketValue': '25117961347.60', 'volume': '2791411', 'turnoverRate': '0.0162'}, {'tickerId': 925381971, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bandwidth', 'symbol': 'BAND', 'disSymbol': 'BAND', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '89.22', 'change': '7.66', 'changeRatio': '0.0939', 'pPrice': '89.00', 'pChange': '-0.2200', 'pChRatio': '-0.0025', 'marketValue': '2100025474.98', 'volume': '1508629', 'turnoverRate': '0.0641'}, {'tickerId': 913323935, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Magellan Health', 'symbol': 'MGLN', 'disSymbol': 'MGLN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '68.00', 'change': '7.27', 'changeRatio': '0.1197', 'pPrice': '68.00', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '1697894040.00', 'volume': '448919', 'turnoverRate': '0.0180'}, {'tickerId': 913254854, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'On Assignment', 'symbol': 'ASGN', 'disSymbol': 'ASGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '53.04', 'change': '6.59', 'changeRatio': '0.1419', 'pPrice': '53.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '2811120000.00', 'volume': '1339771', 'turnoverRate': '0.0253'}, {'tickerId': 913255732, 'exchangeId': 95, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Arcturus', 'symbol': 'ARCT', 'disSymbol': 'ARCT', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '40.86', 'change': '6.36', 
'changeRatio': '0.1843', 'pPrice': '42.60', 'pChange': '1.740', 'pChRatio': '0.0426', 'marketValue': '812021444.46', 'volume': '1577508', 'turnoverRate': '0.0794'}, {'tickerId': 913256616, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'DexCom', 'symbol': 'DXCM', 'disSymbol': 'DXCM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '341.52', 'change': '6.32', 'changeRatio': '0.0189', 'pPrice': '340.00', 'pChange': '-1.5200', 'pChRatio': '-0.0045', 'marketValue': '31522296000.00', 'volume': '1008849', 'turnoverRate': '0.0109'}, {'tickerId': 913255108, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Clorox', 'symbol': 'CLX', 'disSymbol': 'CLX', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '192.71', 'change': '6.27', 'changeRatio': '0.0336', 'pPrice': '192.95', 'pChange': '0.2400', 'pChRatio': '0.0012', 'marketValue': '24185773318.28', 'volume': '4996414', 'turnoverRate': '0.0398'}, {'tickerId': 925314627, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'FRANCO NEVADA', 'symbol': 'FNV', 'disSymbol': 'FNV', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '137.85', 'change': '5.64', 'changeRatio': '0.0427', 'pPrice': '138.50', 'pChange': '0.6500', 'pChRatio': '0.0047', 'marketValue': '26110405326.30', 'volume': '1047688', 'turnoverRate': '0.0055'}, {'tickerId': 913254955, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Aon Plc', 'symbol': 'AON', 'disSymbol': 'AON', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '178.21', 'change': '5.54', 'changeRatio': '0.0321', 'pPrice': '178.21', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '41181209117.22', 'volume': '2026234', 'turnoverRate': '0.0088'}, {'tickerId': 913324105, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Willis Towers', 'symbol': 'WLTW', 'disSymbol': 'WLTW', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '183.34', 'change': '5.05', 'changeRatio': '0.0283', 'pPrice': '183.34', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23597461124.96', 'volume': '968943', 'turnoverRate': '0.0075'}, {'tickerId': 913254759, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'TELADOC HEALTH', 'symbol': 'TDOC', 'disSymbol': 'TDOC', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '169.43', 'change': '4.84', 'changeRatio': '0.0294', 'pPrice': '168.88', 'pChange': '-0.5500', 'pChRatio': '-0.0032', 'marketValue': '12614616858.38', 'volume': '2628946', 'turnoverRate': '0.0353'}, {'tickerId': 913255222, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Emergent Bio', 'symbol': 'EBS', 'disSymbol': 'EBS', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 
'derivativeSupport': 1, 'status': 'D', 'close': '78.70', 'change': '4.75', 'changeRatio': '0.0642', 'pPrice': '78.40', 'pChange': '-0.3000', 'pChRatio': '-0.0038', 'marketValue': '4113368277.10', 'volume': '783804', 'turnoverRate': '0.0150'}, {'tickerId': 913323443, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pool', 'symbol': 'POOL', 'disSymbol': 'POOL', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '216.02', 'change': '4.36', 'changeRatio': '0.0206', 'pPrice': '216.02', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8696077573.82', 'volume': '310837', 'turnoverRate': '0.0077'}, {'tickerId': 913257075, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Masimo', 'symbol': 'MASI', 'disSymbol': 'MASI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '218.00', 'change': '4.09', 'changeRatio': '0.0191', 'pPrice': '217.00', 'pChange': '-1.0000', 'pChRatio': '-0.0046', 'marketValue': '11797070000.00', 'volume': '542131', 'turnoverRate': '0.0100'}, {'tickerId': 913253761, 'exchangeId': 10, 'type': 2, 'secType': [62], 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pope Resources', 'symbol': 'POPE', 'disSymbol': 'POPE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NAS', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '101.05', 'change': '3.95', 'changeRatio': '0.0407', 'pPrice': '99.90', 'pChange': '2.800', 'pChRatio': '0.0288', 'marketValue': '447370075.75', 'volume': '33138', 'turnoverRate': '0.0075'}, {'tickerId': 913323560, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Seneca Foods', 'symbol': 'SENEB', 'disSymbol': 'SENEB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '40.04', 'change': '3.84', 'changeRatio': '0.1061', 'marketValue': '347950039.71', 'volume': '501'}, {'tickerId': 913324274, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Resmed', 'symbol': 'RMD', 'disSymbol': 'RMD', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '159.07', 'change': '3.75', 'changeRatio': '0.0241', 'pPrice': '159.07', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23004217759.29', 'volume': '1267075', 'turnoverRate': '0.0088'}, {'tickerId': 913323736, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Vertex Pharms', 'symbol': 'VRTX', 'disSymbol': 'VRTX', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '254.90', 'change': '3.70', 'changeRatio': '0.0147', 'pPrice': '255.00', 'pChange': '0.1000', 'pChRatio': '0.0004', 'marketValue': '66062980780.10', 'volume': '1939843', 'turnoverRate': '0.0075'}, {'tickerId': 913323767, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MCCORMICK VTG', 'symbol': 'MKC V', 'disSymbol': 'MKC-V', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '159.99', 'change': '3.42', 'changeRatio': '0.0218', 
'marketValue': '21262671000.00', 'volume': '432', 'turnoverRate': '0.0000'}, {'tickerId': 950118595, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'ZOOM VIDEO', 'symbol': 'ZM', 'disSymbol': 'ZM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '138.56', 'change': '3.39', 'changeRatio': '0.0251', 'pPrice': '138.99', 'pChange': '0.4300', 'pChRatio': '0.0031', 'marketValue': '38620532420.16', 'volume': '13786017', 'turnoverRate': '0.0495'}, {'tickerId': 916040738, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'WHEATON PRECIOUS', 'symbol': 'WPM', 'disSymbol': 'WPM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '41.10', 'change': '3.34', 'changeRatio': '0.0885', 'pPrice': '41.09', 'pChange': '-0.0100', 'pChRatio': '-0.0002', 'marketValue': '18404536146.30', 'volume': '5019137', 'turnoverRate': '0.0112'}, {'tickerId': 913257174, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Royal Gold', 'symbol': 'RGLD', 'disSymbol': 'RGLD', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '125.86', 'change': '3.33', 'changeRatio': '0.0272', 'pPrice': '125.86', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8253015011.08', 'volume': '853473', 'turnoverRate': '0.0130'}, {'tickerId': 913254394, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Fortune Brand', 'symbol': 'FBHS', 'disSymbol': 'FBHS', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '51.50', 'change': '3.30', 'changeRatio': '0.0685', 'pPrice': '51.50', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '7194870278.50', 'volume': '3004021', 'turnoverRate': '0.0214'}, {'tickerId': 913323312, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYK', 'disSymbol': 'LBTYK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '21.49', 'change': '3.18', 'changeRatio': '0.1737', 'pPrice': '21.48', 'pChange': '-0.0100', 'pChRatio': '-0.0005', 'marketValue': '13594662302.41', 'volume': '19980228', 'turnoverRate': '0.0315'}, {'tickerId': 913323882, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Preformed Line', 'symbol': 'PLPC', 'disSymbol': 'PLPC', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '52.82', 'change': '3.14', 'changeRatio': '0.0632', 'pPrice': '52.10', 'pChange': '-0.7200', 'pChRatio': '-0.0136', 'marketValue': '264979981.20', 'volume': '9305', 'turnoverRate': '0.0018'}, {'tickerId': 913323248, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Discovery', 'symbol': 'DISCB', 'disSymbol': 'DISCB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'A', 'close': '57.95', 'change': '23.63', 'changeRatio': '0.6884', 'pPrice': '54.26', 'pChange': 
'-3.6900', 'pChRatio': '-0.0637', 'marketValue': '29362894177.95', 'volume': '218305', 'turnoverRate': '0.0004'}, {'tickerId': 913323930, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MercadoLibre', 'symbol': 'MELI', 'disSymbol': 'MELI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '605.52', 'change': '22.01', 'changeRatio': '0.0377', 'pPrice': '603.69', 'pChange': '-1.8300', 'pChRatio': '-0.0030', 'marketValue': '30226598045.28', 'volume': '699008', 'turnoverRate': '0.0140'}, {'tickerId': 913257170, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYA', 'disSymbol': 'LBTYA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '22.28', 'change': '2.86', 'changeRatio': '0.1473', 'pPrice': '22.29', 'pChange': '0.0100', 'pChRatio': '0.0004', 'marketValue': '14094419548.52', 'volume': '10534672', 'turnoverRate': '0.0167'}, {'tickerId': 913303991, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Brodband', 'symbol': 'LBRDK', 'disSymbol': 'LBRDK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '125.44', 'change': '2.76', 'changeRatio': '0.0225', 'pPrice': '125.44', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '22817900904.96', 'volume': '926177', 'turnoverRate': '0.0042'}, {'tickerId': 913257082, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Helen of Troy', 'symbol': 'HELE', 'disSymbol': 'HELE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '167.04', 'change': '2.76', 'changeRatio': '0.0168', 'pPrice': '167.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '4216707982.08', 'volume': '341465', 'turnoverRate': '0.0135'}, {'tickerId': 913256458, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Forrester', 'symbol': 'FORR', 'disSymbol': 'FORR', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '33.88', 'change': '2.58', 'changeRatio': '0.0824', 'marketValue': '635419400.00', 'volume': '85115', 'turnoverRate': '0.0045'}, {'tickerId': 950158952, 'exchangeId': 95, 'type': 2, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'LYRA THERAPEUTICS, INC.', 'symbol': 'LYRA', 'disSymbol': 'LYRA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'ipo', 'status': 'A', 'close': '18.56', 'change': '2.56', 'changeRatio': '0.1600', 'pPrice': '18.96', 'pChange': '0.4000', 'pChRatio': '0.0216', 'marketValue': '229705575.68', 'volume': '1738472', 'turnoverRate': '0.1405'}, {'tickerId': 913257570, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bio-Techne', 'symbol': 'TECH', 'disSymbol': 'TECH', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '227.54', 'change': '2.54', 'changeRatio': '0.0113', 'pPrice': '227.54', 'pChange': '0.0000', 
'pChRatio': '0.0000', 'marketValue': '8726538309.18', 'volume': '497006', 'turnoverRate': '0.0130'}, {'tickerId': 913323246, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bel Fuse', 'symbol': 'BELFB', 'disSymbol': 'BELFB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '9.99', 'change': '2.53', 'changeRatio': '0.3391', 'pPrice': '9.75', 'pChange': '-0.2400', 'pChRatio': '-0.0240', 'marketValue': '122562454.86', 'volume': '177634', 'turnoverRate': '0.0145'}, {'tickerId': 916040647, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Agnico Eagle', 'symbol': 'AEM', 'disSymbol': 'AEM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '61.20', 'change': '2.52', 'changeRatio': '0.0429', 'pPrice': '61.10', 'pChange': '-0.1000', 'pChRatio': '-0.0016', 'marketValue': '14739911553.60', 'volume': '2820765', 'turnoverRate': '0.0117'}, {'tickerId': 913303768, 'exchangeId': 12, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CHASE CORP', 'symbol': 'CCF', 'disSymbol': 'CCF', 'disExchangeCode': 'AMEX', 'exchangeCode': 'ASE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '96.71', 'change': '2.45', 'changeRatio': '0.0260', 'marketValue': '916799598.60', 'volume': '29229', 'turnoverRate': '0.0031'}, {'tickerId': 913324557, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Allergan', 'symbol': 'AGN', 'disSymbol': 'AGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '189.74', 'change': '2.40', 'changeRatio': '0.0128', 'pPrice': '189.76', 'pChange': '0.0200', 'pChRatio': '0.0001', 'marketValue': '62424842326.10', 'volume': '5787032', 'turnoverRate': '0.0176'}, {'tickerId': 913324566, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'West Pharm Svc', 'symbol': 'WST', 'disSymbol': 'WST', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '191.64', 'change': '2.38', 'changeRatio': '0.0126', 'pPrice': '191.64', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '14078267117.08', 'volume': '352460', 'turnoverRate': '0.0042'} ] db = su.Database(f""overnight hold.db"" ) db['active'].insert_all(data) --------------- file ends ----------------------","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",610517472,sqlite3.OperationalError: too many SQL variables in insert_all when using rows with varying numbers of columns, https://github.com/simonw/datasette/issues/731#issuecomment-618758326,https://api.github.com/repos/simonw/datasette/issues/731,618758326,MDEyOklzc3VlQ29tbWVudDYxODc1ODMyNg==,25778,eyeseast,2020-04-24T01:55:00Z,2020-04-24T01:55:00Z,CONTRIBUTOR,Mounting `./static` at `/static` seems the simplest way. 
Saves you the trouble of deciding what else (`img` for example) gets special treatment.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",605110015,Option to automatically configure based on directory layout, https://github.com/simonw/datasette/issues/731#issuecomment-618126449,https://api.github.com/repos/simonw/datasette/issues/731,618126449,MDEyOklzc3VlQ29tbWVudDYxODEyNjQ0OQ==,25778,eyeseast,2020-04-23T01:38:55Z,2020-04-23T01:38:55Z,CONTRIBUTOR,"I've almost suggested this same thing a couple times. I tend to have Makefile (because I'm doing other `make` stuff anyway to get data prepped), and I end up putting all those CLI options in something like `make run`. But it would be way easier to just have all those typical options -- plugins, templates, metadata -- be defaults.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",605110015,Option to automatically configure based on directory layout, https://github.com/simonw/datasette/issues/236#issuecomment-612216820,https://api.github.com/repos/simonw/datasette/issues/236,612216820,MDEyOklzc3VlQ29tbWVudDYxMjIxNjgyMA==,193185,cldellow,2020-04-10T21:03:38Z,2020-04-10T21:03:38Z,CONTRIBUTOR,"I made a repo at https://github.com/code402/datasette-lambda to demonstrate the idea, and scratch my personal itch for this. The demo relies on some central authority having already published a public, reusable Lambda layer with Datasette & its dependencies. I think that differs from the other publish plugins which seem to mainly publish Dockerfiles that the host will interpret to install deps from a requirements.txt file. I chose that approach because `uvloop` appears to be a dependency with native code that needs to be compiled for the target runtime environment. In this case, that's Amazon Linux 2. I'm not 100% clear on whether that's still required, because: - maybe `uvloop` is only needed for `uvicorn`, which the demo doesn't actually use since HTTP routing is handled by API Gateway - it seems like `uvloop` may be an optional, drop-in optimization for `asyncio` in any case (but I may be misreading this; I'm very much a Python noob) If it's the case that `uvloop` is truly optional, then I think the publish plugin could do the packaging on the user's machine, regardless of what flavour of operating system they're on. That'd be a bit slower for the user, but would provide the most long-term flexibility in terms of supporting plugins.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500,datasette publish lambda plugin, https://github.com/simonw/datasette/issues/236#issuecomment-608716819,https://api.github.com/repos/simonw/datasette/issues/236,608716819,MDEyOklzc3VlQ29tbWVudDYwODcxNjgxOQ==,193185,cldellow,2020-04-03T22:19:00Z,2020-04-03T22:19:00Z,CONTRIBUTOR,"Hi Simon, I'm thinking of attempting this. Can you clarify some questions I have? 1) I assume the goal is to have a CORS-friendly HTTPS endpoint that hosts the datasette service + user's db. 2) If that's the goal, I think Lambda alone is insufficient. Lambda provides the compute fabric, but not the HTTP routing. You'd also need to add Application Load Balancer or API Gateway to provide an HTTP endpoint that routes to the lambda function. Do you have a preference between ALB or API GW? ALB has better economics at scale, but has a minimum monthly cost. 
API GW has worse per-request economics, but scales to zero when no requests are happening. 3) Does Datasette have any native components, or is it all pure python? If it has native bits, they'll likely need to be recompiled to work on Amazon Linux 2. 4) There are a few disparate services that need to be wired together to expose a Python service securely to the web. If I was doing this outside of the datasette publish system, I'd use an AWS CloudFormation template. Even within datasette, I think it still makes sense to use a CloudFormation template and just have the publish plugin invoke it (via the standard `aws` cli) with user-specified parameters. Does that sound reasonable to you? Thanks for your help!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317001500,datasette publish lambda plugin, https://github.com/simonw/datasette/issues/573#issuecomment-604328163,https://api.github.com/repos/simonw/datasette/issues/573,604328163,MDEyOklzc3VlQ29tbWVudDYwNDMyODE2Mw==,82988,psychemedia,2020-03-26T09:41:30Z,2020-03-26T09:41:30Z,CONTRIBUTOR,Fixed by @simonw; example here: https://github.com/simonw/jupyterserverproxy-datasette-demo,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",492153532,Exposing Datasette via Jupyter-server-proxy, https://github.com/simonw/datasette/issues/712#issuecomment-604225034,https://api.github.com/repos/simonw/datasette/issues/712,604225034,MDEyOklzc3VlQ29tbWVudDYwNDIyNTAzNA==,127565,wragge,2020-03-26T04:40:08Z,2020-03-26T04:40:08Z,CONTRIBUTOR,"Great! Yes, can confirm that this works on Binder. However, when I try to run the same code locally, I get an Internal Server Error when I try to access Datasette. 
``` ERROR: Exception in ASGI application Traceback (most recent call last): File ""/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py"", line 385, in run_asgi result = await app(self.scope, self.receive, self.send) File ""/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py"", line 45, in __call__ return await self.app(scope, receive, send) File ""/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette_debug_asgi.py"", line 24, in wrapped_app await app(scope, recieve, send) File ""/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/utils/asgi.py"", line 174, in __call__ await self.app(scope, receive, send) File ""/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/tracer.py"", line 75, in __call__ await self.app(scope, receive, send) File ""/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/app.py"", line 746, in __call__ raw_path = dict(scope[""headers""])[path_from_header.encode(""utf8"")].split(b""?"")[0] KeyError: b'x-original-uri' INFO: 127.0.0.1:49320 - ""GET / HTTP/1.1"" 500 Internal Server Error ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",588108428,base_url doesn't entirely work for running Datasette inside Binder, https://github.com/simonw/datasette/issues/712#issuecomment-604249402,https://api.github.com/repos/simonw/datasette/issues/712,604249402,MDEyOklzc3VlQ29tbWVudDYwNDI0OTQwMg==,127565,wragge,2020-03-26T06:11:44Z,2020-03-26T06:11:44Z,CONTRIBUTOR,"Following on from @betatim's suggestion on Twitter, I've changed the proxy url to include 'absolute'. ``` python proxy_url = f'{base_url}proxy/absolute/8001/' ``` This works both on Binder and locally, without using the `path_from_header` option. I've updated the demo repository. Sorry @simonw if I've led you down the wrong path!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",588108428,base_url doesn't entirely work for running Datasette inside Binder, https://github.com/simonw/datasette/issues/394#issuecomment-604166918,https://api.github.com/repos/simonw/datasette/issues/394,604166918,MDEyOklzc3VlQ29tbWVudDYwNDE2NjkxOA==,127565,wragge,2020-03-26T00:56:30Z,2020-03-26T00:56:30Z,CONTRIBUTOR,"Thanks! I'm trying to launch Datasette from *within* a notebook using the jupyter-server-proxy and the new `base_url` parameter. While the assets load ok, and the breadcrumb navigation works, the facet links don't seem to use the `base_url`. Or have I missed something? My test repository is here: https://github.com/wragge/datasette-test","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-602907207,https://api.github.com/repos/simonw/datasette/issues/394,602907207,MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw==,127565,wragge,2020-03-23T23:12:18Z,2020-03-23T23:12:18Z,CONTRIBUTOR,"This would also be useful for running Datasette in Jupyter notebooks on [Binder](https://mybinder.org/). While you can use [Jupyter-server-proxy](https://github.com/jupyterhub/jupyter-server-proxy) to access Datasette on Binder, the links are broken. Why run Datasette on Binder? 
I'm developing a [range of Jupyter notebooks](https://glam-workbench.github.io/) that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic. For example, there are a [series of notebooks](https://glam-workbench.github.io/trove-harvester/) that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data. But it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-690860653,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50,690860653,MDEyOklzc3VlQ29tbWVudDY5MDg2MDY1Mw==,370930,mikepqr,2020-09-11T04:04:08Z,2020-09-11T04:04:08Z,CONTRIBUTOR,"There's probably a nicer way of doing (hence this is a comment rather than a PR), but this appears to fix it: ```diff --- a/twitter_to_sqlite/utils.py +++ b/twitter_to_sqlite/utils.py @@ -181,6 +181,7 @@ def fetch_timeline( args[""tweet_mode""] = ""extended"" min_seen_id = None num_rate_limit_errors = 0 + seen_count = 0 while True: if min_seen_id is not None: args[""max_id""] = min_seen_id - 1 @@ -208,6 +209,7 @@ def fetch_timeline( yield tweet min_seen_id = min(t[""id""] for t in tweets) max_seen_id = max(t[""id""] for t in tweets) + seen_count += len(tweets) if last_since_id is not None: max_seen_id = max((last_since_id, max_seen_id)) last_since_id = max_seen_id @@ -217,7 +219,9 @@ def fetch_timeline( replace=True, ) if stop_after is not None: - break + if seen_count >= stop_after: + break + args[""count""] = min(args[""count""], stop_after - seen_count) time.sleep(sleep) ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",698791218,"favorites --stop_after=N stops after min(N, 200)", https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688573964,https://api.github.com/repos/simonw/sqlite-utils/issues/146,688573964,MDEyOklzc3VlQ29tbWVudDY4ODU3Mzk2NA==,96218,simonwiles,2020-09-08T01:55:07Z,2020-09-08T01:55:07Z,CONTRIBUTOR,"Okay, I've rewritten this PR to preserve the batching behaviour but still fix #145, and rebased the branch to account for the `db.execute()` api change. It's not terribly sophisticated -- if it attempts to insert a batch which has too many variables, the exception is caught, the batch is split in two and each half is inserted separately, and then it carries on as before with the same `batch_size`. In the edge case where this gets triggered, subsequent batches will all be inserted in two groups too if they continue to have the same number of columns (which is presumably reasonably likely). 
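(Here's a standalone sketch of that catch-and-split idea; the names and structure are illustrative, not the PR's actual code:)

```python
import sqlite3

def insert_rows(db, table, columns, rows):
    # Build a single multi-row INSERT; parameters are flattened row by row.
    row_placeholder = '(' + ', '.join('?' * len(columns)) + ')'
    sql = 'INSERT INTO {} ({}) VALUES {}'.format(
        table,
        ', '.join(columns),
        ', '.join(row_placeholder for _ in rows),
    )
    params = [row[col] for row in rows for col in columns]
    try:
        db.execute(sql, params)
    except sqlite3.OperationalError as e:
        # Too many ? variables for this SQLite build: split the batch in
        # two and insert each half separately, as described above.
        if 'too many SQL variables' not in str(e) or len(rows) < 2:
            raise
        mid = len(rows) // 2
        insert_rows(db, table, columns, rows[:mid])
        insert_rows(db, table, columns, rows[mid:])
```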
Do you reckon this is acceptable when set against the awkwardness of recalculating the `batch_size` on the fly?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680,Handle case where subsequent records (after first batch) include extra columns, https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688481317,https://api.github.com/repos/simonw/sqlite-utils/issues/146,688481317,MDEyOklzc3VlQ29tbWVudDY4ODQ4MTMxNw==,96218,simonwiles,2020-09-07T19:18:55Z,2020-09-07T19:18:55Z,CONTRIBUTOR,"Just force-pushed to update d042f9c with more formatting changes to satisfy `black==20.8b1` and pass the GitHub Actions ""Test"" workflow.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680,Handle case where subsequent records (after first batch) include extra columns, https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688479163,https://api.github.com/repos/simonw/sqlite-utils/issues/146,688479163,MDEyOklzc3VlQ29tbWVudDY4ODQ3OTE2Mw==,96218,simonwiles,2020-09-07T19:10:33Z,2020-09-07T19:11:57Z,CONTRIBUTOR,"@simonw -- I've gone ahead updated the documentation to reflect the changes introduced in this PR. IMO it's ready to merge now. In writing the documentation changes, I begin to wonder about the value and role of `batch_size` at all, tbh. May I assume it was originally intended to prevent using the entire row set to determine columns and column types, and that this was a performance consideration? If so, this PR entirely undermines its purpose. I've been passing in excess of 500,000 rows at a time to `insert_all()` with these changes and although I'm sure the performance difference is measurable it's not really noticeable; given #145, I don't know that any performance advantages outweigh the problems doing it this way removes. What do you think about just dropping the argument and defaulting to the maximum `batch_size` permissible given `SQLITE_MAX_VARS`? Are there other reasons one might want to restrict `batch_size` that I've overlooked? I could open a new issue to discuss/implement this. 
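For reference, the ceiling I have in mind is just the per-batch arithmetic the library already does (sketch):

```python
SQLITE_MAX_VARS = 999  # SQLite's default SQLITE_MAX_VARIABLE_NUMBER

def max_batch_size(num_columns):
    # the largest number of rows whose placeholders fit in one INSERT
    return max(1, SQLITE_MAX_VARS // num_columns)
```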
Of course the documentation will need to change again too if/when something is done about #147.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680,Handle case where subsequent records (after first batch) include extra columns, https://github.com/simonw/datasette/pull/952#issuecomment-686061028,https://api.github.com/repos/simonw/datasette/issues/952,686061028,MDEyOklzc3VlQ29tbWVudDY4NjA2MTAyOA==,27856297,dependabot-preview[bot],2020-09-02T22:26:14Z,2020-09-02T22:26:14Z,CONTRIBUTOR,"Looks like black is up-to-date now, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",687245650,"Update black requirement from ~=19.10b0 to >=19.10,<21.0", https://github.com/simonw/sqlite-utils/issues/145#issuecomment-683382252,https://api.github.com/repos/simonw/sqlite-utils/issues/145,683382252,MDEyOklzc3VlQ29tbWVudDY4MzM4MjI1Mg==,96218,simonwiles,2020-08-30T06:27:25Z,2020-08-30T06:27:52Z,CONTRIBUTOR,"Note: had to adjust the test above because trying to exhaust a `SQLITE_MAX_VARIABLE_NUMBER` of 250000 in 99 records requires 2526 columns, and trips the ` ""Rows can have a maximum of {} columns"".format(SQLITE_MAX_VARS)` check even before it trips the default `SQLITE_MAX_COLUMN` value (2000).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688659182,Bug when first record contains fewer columns than subsequent records, https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682815377,https://api.github.com/repos/simonw/sqlite-utils/issues/139,682815377,MDEyOklzc3VlQ29tbWVudDY4MjgxNTM3Nw==,96218,simonwiles,2020-08-28T16:14:58Z,2020-08-28T16:14:58Z,CONTRIBUTOR,"Thanks! And yeah, I had updating the docs on my list too :) Will try to get to it this afternoon (budgeting time is fraught with uncertainty at the moment!).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",686978131,"insert_all(..., alter=True) should work for new columns introduced after the first 100 records", https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682182178,https://api.github.com/repos/simonw/sqlite-utils/issues/139,682182178,MDEyOklzc3VlQ29tbWVudDY4MjE4MjE3OA==,96218,simonwiles,2020-08-27T20:46:18Z,2020-08-27T20:46:18Z,CONTRIBUTOR,"> I tried changing the batch_size argument to the total number of records, but it seems only to effect the number of rows that are committed at a time, and has no influence on this problem. So the reason for this is that the `batch_size` for import is limited (of necessity) here: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L1048 With regard to the issue of ignoring columns, however, I made a fork and hacked a temporary fix that looks like this: https://github.com/simonwiles/sqlite-utils/commit/3901f43c6a712a1a3efc340b5b8d8fd0cbe8ee63 It doesn't seem to affect performance enormously (but I've not tested it thoroughly), and it now does what I need (and would expect, tbh), but it now fails the test here: https://github.com/simonw/sqlite-utils/blob/main/tests/test_create.py#L710-L716 The existence of this test suggests that `insert_all()` is behaving as intended, of course. 
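To make the current behaviour concrete, this is roughly how it plays out (illustrative, based on my understanding of the current release):

```python
from sqlite_utils import Database

db = Database(memory=True)
records = [{'id': i} for i in range(101)]
records[100]['extra'] = 'only present after the first batch of 100'
db['demo'].insert_all(records, alter=True)
# Currently prints ['id'] -- the `extra` column introduced in the
# second batch is silently dropped.
print([c.name for c in db['demo'].columns])
```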
It seems odd to me that this would be a desirable default behaviour (let alone the only behaviour), and it's not very prominently flagged up, either. @simonw is this something you'd be willing to look at a PR for? I assume you wouldn't want to change the default behaviour at this point, but perhaps an option could be provided, or at least a bit more of a warning in the docs. Are there oversights in the implementation that I've made? Would be grateful for your thoughts! Thanks! ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",686978131,"insert_all(..., alter=True) should work for new columns introduced after the first 100 records", https://github.com/simonw/datasette/issues/456#issuecomment-661524006,https://api.github.com/repos/simonw/datasette/issues/456,661524006,MDEyOklzc3VlQ29tbWVudDY2MTUyNDAwNg==,32467826,abeyerpath,2020-07-21T01:15:07Z,2020-07-21T01:15:07Z,CONTRIBUTOR,"Bumping this, as the previous fix is passing the wrong type, and not actually addressing the issue... The `exclude` argument needs an iterable of packages instead of a single string (but since `str` is iterable, it's currently excluding packages `t`, `e`, and `s`.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",442327592,Installing installs the tests package, https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655898722,https://api.github.com/repos/simonw/sqlite-utils/issues/121,655898722,MDEyOklzc3VlQ29tbWVudDY1NTg5ODcyMg==,79913,tsibley,2020-07-09T04:53:08Z,2020-07-09T04:53:08Z,CONTRIBUTOR,"Yep, I agree that makes more sense for backwards compat and more casual use cases. I think it should be possible for the Database/Queryable methods to DTRT based on seeing if it's within a context-manager-managed transaction.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",652961907,Improved (and better documented) support for transactions, https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655652679,https://api.github.com/repos/simonw/sqlite-utils/issues/121,655652679,MDEyOklzc3VlQ29tbWVudDY1NTY1MjY3OQ==,79913,tsibley,2020-07-08T17:24:46Z,2020-07-08T17:24:46Z,CONTRIBUTOR,"Better transaction handling would be really great. Some of my thoughts on implementing better transaction discipline are in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728. My preferences: - Each CLI command should operate in a single transaction so that either the whole thing succeeds or the whole thing is rolled back. This avoids partially completed operations when an error occurs part way through processing. Partially completed operations are typically much harder to recover from gracefully and may cause inconsistent data states. - The Python API should be transaction-agnostic and rely on the caller to coordinate transactions. Only the caller knows how individual insert, create, update, etc operations/methods should be bundled conceptually into transactions. When the caller is the CLI, for example, that bundling would be at the CLI command-level. Other callers might want to break up operations into multiple transactions. Transactions are usually most useful when controlled at the application-level (like logging configuration) instead of the library level. The library needs to provide an API that's conducive to transaction use, though. 
- The Python API should provide a context manager to provide consistent transaction handling with more useful defaults than Python's `sqlite3` module. The latter issues implicit `BEGIN` statements by default for most DML (`INSERT`, `UPDATE`, `DELETE`, … but not `SELECT`, I believe), but **not** DDL (`CREATE TABLE`, `DROP TABLE`, `CREATE VIEW`, …). Notably, the `sqlite3` module doesn't issue the implicit `BEGIN` until the first DML statement. It _does not_ issue it when entering the `with conn` block, like other DBAPI2-compatible modules do. The `with conn` block for `sqlite3` only arranges to commit or rollback an existing transaction when exiting. Including DDL and `SELECT`s in transactions is important for operation consistency, though. There are several existing bugs.python.org tickets about this and future changes are in the works, but sqlite-utils can provide its own API sooner. sqlite-utils's `Database` class could itself be a context manager (built on the `sqlite3` connection context manager) which additionally issues an explicit `BEGIN` when entering. This would then let Python API callers do something like: ```python db = sqlite_utils.Database(path) with db: # ← BEGIN issued here by Database.__enter__ db.insert(…) db.create_view(…) # ← COMMIT/ROLLBACK issued here by sqlite3.connection.__exit__ ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",652961907,Improved (and better documented) support for transactions, https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655643078,https://api.github.com/repos/simonw/sqlite-utils/issues/118,655643078,MDEyOklzc3VlQ29tbWVudDY1NTY0MzA3OA==,79913,tsibley,2020-07-08T17:05:59Z,2020-07-08T17:05:59Z,CONTRIBUTOR,"> The only thing missing from this PR is updates to the documentation. Ah, yes, thanks for this reminder! I've repushed with doc bits added.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",651844316,Add insert --truncate option, https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728,https://api.github.com/repos/simonw/sqlite-utils/issues/118,655239728,MDEyOklzc3VlQ29tbWVudDY1NTIzOTcyOA==,79913,tsibley,2020-07-08T02:16:42Z,2020-07-08T02:16:42Z,CONTRIBUTOR,"I fixed my original oops by moving the `DELETE FROM $table` out of the chunking loop and repushed. I think this change can be considered in isolation from issues around transactions, which I discuss next. I wanted to make the DELETE + INSERT happen all in the same transaction so it was robust, but that was more complicated than I expected. The transaction handling in the Database/Table classes isn't systematic, and this poses big hurdles to making `Table.insert_all` (or other operations) consistent and robust in the face of errors. For example, I wanted to do this (whitespace ignored in diff, so indentation change not highlighted): ```diff diff --git a/sqlite_utils/db.py b/sqlite_utils/db.py index d6b9ecf..4107ceb 100644 --- a/sqlite_utils/db.py +++ b/sqlite_utils/db.py @@ -1028,6 +1028,11 @@ class Table(Queryable): batch_size = max(1, min(batch_size, SQLITE_MAX_VARS // num_columns)) self.last_rowid = None self.last_pk = None + with self.db.conn: + # Explicit BEGIN is necessary because Python's sqlite3 doesn't + # issue implicit BEGINs for DDL, only DML. We mix DDL and DML + # below and might execute DDL first, e.g. for table creation. 
+ self.db.conn.execute(""BEGIN"") if truncate and self.exists(): self.db.conn.execute(""DELETE FROM [{}];"".format(self.name)) for chunk in chunks(itertools.chain([first_record], records), batch_size): @@ -1038,7 +1043,11 @@ class Table(Queryable): # Use the first batch to derive the table names column_types = suggest_column_types(chunk) column_types.update(columns or {}) - self.create( + # Not self.create() because that is wrapped in its own + # transaction and Python's sqlite3 doesn't support + # nested transactions. + self.db.create_table( + self.name, column_types, pk, foreign_keys, @@ -1139,7 +1148,6 @@ class Table(Queryable): flat_values = list(itertools.chain(*values)) queries_and_params = [(sql, flat_values)] - with self.db.conn: for query, params in queries_and_params: try: result = self.db.conn.execute(query, params) ``` but that fails in tests because other methods call `insert/upsert/insert_all/upsert_all` in the middle of their transactions, so the BEGIN statement throws an error (no nested transactions allowed). Stepping back, it would be nice to make the transaction handling systematic and predictable. One way to do this is to make the `sqlite_utils/db.py` code generally not begin or commit any transactions, and require the caller to do that instead. This lets the caller mix and match the Python API calls into transactions as appropriate (which is impossible for the API methods themselves to fully determine). Then, make `sqlite_utils/cli.py` begin and commit a transaction in each `@cli.command` function, making each command robust and consistent in the face of errors. The big change here, and why I didn't just submit a patch, is that it dramatically changes the Python API to _require_ callers to begin a transaction rather than just immediately calling methods. There is also the caveat that for each transaction, an explicit `BEGIN` is also necessary so that DDL as well as DML (as well as `SELECT`s) are consistent and rolled back on error. There are several bugs.python.org discussions around this particular problem of DDL and some plans to make it better and consistent with DBAPI2, eventually. In the meantime, the sqlite-utils Database class could be a context manager which supports the incantations necessary to do proper transactions. This would still be a Python API change for callers but wouldn't expose them to the weirdness of the sqlite3's default transaction handling.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",651844316,Add insert --truncate option, https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655018966,https://api.github.com/repos/simonw/sqlite-utils/issues/118,655018966,MDEyOklzc3VlQ29tbWVudDY1NTAxODk2Ng==,79913,tsibley,2020-07-07T17:41:06Z,2020-07-07T17:41:06Z,CONTRIBUTOR,"Hmm, while tests pass, this may not work as intended on larger datasets. Looking into it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",651844316,Add insert --truncate option, https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655052451,https://api.github.com/repos/simonw/sqlite-utils/issues/118,655052451,MDEyOklzc3VlQ29tbWVudDY1NTA1MjQ1MQ==,79913,tsibley,2020-07-07T18:45:23Z,2020-07-07T18:45:23Z,CONTRIBUTOR,"Ah, I see the problem. 
The truncate is inside a loop I didn't realize was there.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",651844316,Add insert --truncate option, https://github.com/simonw/datasette/issues/889#issuecomment-653002499,https://api.github.com/repos/simonw/datasette/issues/889,653002499,MDEyOklzc3VlQ29tbWVudDY1MzAwMjQ5OQ==,49260,amjith,2020-07-02T13:22:13Z,2020-07-02T13:22:13Z,CONTRIBUTOR,"I was able to narrow this down to the fact that lifespan protocol is turned on. I see the workaround you've used here: https://github.com/simonw/datasette-debug-asgi/commit/72d568d32a3159c763ce908c0b269736935c6987 If so, maybe it's time to update some of the asg_wrapper [plugins](https://datasette.readthedocs.io/en/stable/plugin_hooks.html#asgi-wrapper-datasette). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",649907676,asgi_wrapper plugin hook is crashing at startup, https://github.com/simonw/datasette/issues/889#issuecomment-652990131,https://api.github.com/repos/simonw/datasette/issues/889,652990131,MDEyOklzc3VlQ29tbWVudDY1Mjk5MDEzMQ==,49260,amjith,2020-07-02T12:58:11Z,2020-07-02T13:00:18Z,CONTRIBUTOR,"FWIW, this error does NOT happen in datasette 0.45a4. It only started on 0.45a5","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",649907676,asgi_wrapper plugin hook is crashing at startup, https://github.com/simonw/datasette/pull/883#issuecomment-652394742,https://api.github.com/repos/simonw/datasette/issues/883,652394742,MDEyOklzc3VlQ29tbWVudDY1MjM5NDc0Mg==,3243482,abdusco,2020-07-01T12:41:13Z,2020-07-01T12:41:13Z,CONTRIBUTOR,"Well tests need to be updated. I need to get tests working on Windows.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648749062,Skip counting hidden tables, https://github.com/simonw/datasette/pull/883#issuecomment-652297139,https://api.github.com/repos/simonw/datasette/issues/883,652297139,MDEyOklzc3VlQ29tbWVudDY1MjI5NzEzOQ==,3243482,abdusco,2020-07-01T09:11:29Z,2020-07-01T09:11:29Z,CONTRIBUTOR,"Turns out we should include hidden tables in the result dict, or we're breaking tests. I've committed a refactor https://github.com/simonw/datasette/pull/883/commits/4f06e1bf6fbe4b73be770b87f610bf7c0e6e3ea7","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648749062,Skip counting hidden tables, https://github.com/simonw/datasette/issues/877#issuecomment-652261382,https://api.github.com/repos/simonw/datasette/issues/877,652261382,MDEyOklzc3VlQ29tbWVudDY1MjI2MTM4Mg==,3243482,abdusco,2020-07-01T08:03:17Z,2020-07-01T08:03:23Z,CONTRIBUTOR,Bearer tokens sound interesting. Where do tokens come from? An auth provider of my choosing? 
How do they get verified?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648421105,Consider dropping explicit CSRF protection entirely?, https://github.com/simonw/datasette/issues/877#issuecomment-652255960,https://api.github.com/repos/simonw/datasette/issues/877,652255960,MDEyOklzc3VlQ29tbWVudDY1MjI1NTk2MA==,3243482,abdusco,2020-07-01T07:52:25Z,2020-07-01T08:10:00Z,CONTRIBUTOR,"I am calling the API from another origin, so injecting a CSRF token into templates wouldn't work. EDIT: I'll try the new version, it sounds promising","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648421105,Consider dropping explicit CSRF protection entirely?, https://github.com/simonw/datasette/issues/877#issuecomment-652166115,https://api.github.com/repos/simonw/datasette/issues/877,652166115,MDEyOklzc3VlQ29tbWVudDY1MjE2NjExNQ==,3243482,abdusco,2020-07-01T03:28:07Z,2020-07-01T03:28:07Z,CONTRIBUTOR,"Does this mean custom routes get to expose endpoints accepting POST requests? I tried earlier to add some POST endpoints, but requests were being rejected by Datasette due to CSRF","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648421105,Consider dropping explicit CSRF protection entirely?, https://github.com/simonw/datasette/issues/859#issuecomment-652160909,https://api.github.com/repos/simonw/datasette/issues/859,652160909,MDEyOklzc3VlQ29tbWVudDY1MjE2MDkwOQ==,3243482,abdusco,2020-07-01T03:09:32Z,2020-07-01T03:10:21Z,CONTRIBUTOR,"I've just realized Datasette tries to count hidden tables too. There are 5 visible tables and 25 hidden tables, which I hadn't realized earlier, so I didn't consider their effect. I've turned off counting for hidden tables to see if it has any effect. What's the point of counting FTS tables?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-648669523,https://api.github.com/repos/simonw/datasette/issues/859,648669523,MDEyOklzc3VlQ29tbWVudDY0ODY2OTUyMw==,3243482,abdusco,2020-06-24T08:13:23Z,2020-06-24T10:30:36Z,CONTRIBUTOR,"I tried setting `cache_size_kb=0` then `cache_size_kb=100000`, still getting this behavior. I even changed `Database::table_counts` and lowered the time limit to 1 ```py table_count = ( await self.execute( ""select count(*) from [{}]"".format(table), custom_time_limit=1, ) ).rows[0][0] counts[table] = table_count ``` I feel like 10 seconds is a magic number, like a processing timeout where datasette gives up and returns the page. The index page loads instantly; table and query pages as well. But when I return to the database page after some time, it loads in 10s. 
EDIT: It's always like 10 + 0.3s, like a 10s wait and timeout then 300ms to render the page","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-648232645,https://api.github.com/repos/simonw/datasette/issues/859,648232645,MDEyOklzc3VlQ29tbWVudDY0ODIzMjY0NQ==,3243482,abdusco,2020-06-23T15:19:53Z,2020-06-23T15:19:53Z,CONTRIBUTOR,"The issue seems to appear sporadically, like when I return to the database page after a while, during which some records have been added to the database. I've just visited the database page; the first visit took ~10s, consecutive visits took 0.3s.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647936117,https://api.github.com/repos/simonw/datasette/issues/859,647936117,MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw==,3243482,abdusco,2020-06-23T06:25:17Z,2020-06-23T06:25:17Z,CONTRIBUTOR,"> > > ``` > sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50 > ``` > > Looks like that will take 35 minutes to run (it's not a particularly fast tool). Try chunking write operations into batches every 1000 records or so.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647935300,https://api.github.com/repos/simonw/datasette/issues/859,647935300,MDEyOklzc3VlQ29tbWVudDY0NzkzNTMwMA==,3243482,abdusco,2020-06-23T06:23:01Z,2020-06-23T06:23:01Z,CONTRIBUTOR,"> You said ""200k+, 50+ rows in a couple of tables"" - does that mean 50+ columns? I'll try with larger numbers of columns and see what difference that makes. Ah, that was a typo, I meant 50k.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647925594,https://api.github.com/repos/simonw/datasette/issues/859,647925594,MDEyOklzc3VlQ29tbWVudDY0NzkyNTU5NA==,3243482,abdusco,2020-06-23T05:55:21Z,2020-06-23T06:28:29Z,CONTRIBUTOR,"Hmm, not seeing the problem now. I've removed the commented out sections in `database.py` and restarted the process. The database page now loads in <250ms. I have a couple of workers that check some pages regularly, scrape new content and save it to the DB. Could it be that datasette tries to recount tables every time the database size changes? Normally it keeps a count cache, but as the DB gets updated so often (new content every 5 min or so) it's practically recounting every time I go to the database page? EDIT: It turns out it doesn't hold the cache with mutable databases. 
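What I would naively expect is something staleness-keyed, along these lines (just a toy illustration, not datasette's actual code):

```python
import os

_counts = {}  # (path, mtime_ns, table) -> cached row count

def table_count(conn, path, table):
    # Recount only when the database file has changed on disk.
    key = (path, os.stat(path).st_mtime_ns, table)
    if key not in _counts:
        _counts[key] = conn.execute(
            'select count(*) from [{}]'.format(table)
        ).fetchone()[0]
    return _counts[key]
```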
I'll update the issue with more findings and a better way to reproduce the problem if I encounter it again.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647923666,https://api.github.com/repos/simonw/datasette/issues/859,647923666,MDEyOklzc3VlQ29tbWVudDY0NzkyMzY2Ng==,3243482,abdusco,2020-06-23T05:49:31Z,2020-06-23T05:49:31Z,CONTRIBUTOR,"I think I should mention that having FTS on all tables means I have 5 visible and 25 hidden (FTS) tables displayed on the database page.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647922203,https://api.github.com/repos/simonw/datasette/issues/859,647922203,MDEyOklzc3VlQ29tbWVudDY0NzkyMjIwMw==,3243482,abdusco,2020-06-23T05:44:58Z,2021-01-05T08:22:43Z,CONTRIBUTOR,"I'm seeing the problem on the database page. The index page and table pages run quite fast. - Tables have <10 columns (`id`, `url`, `title`, `body_html`, `date`, `author`, `meta` (for keeping unstructured json)). I've added an index on the `date` columns (using `sqlite-utils`) in addition to the index present on `id` columns. - All tables have FTS enabled on `text` and `varchar` columns (`title`, `body_html` etc) to speed up searching. - There are a couple of tables related by foreign keys (think a thread in a forum and posts in that thread, related by `thread_id`) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647135713,https://api.github.com/repos/simonw/datasette/issues/859,647135713,MDEyOklzc3VlQ29tbWVudDY0NzEzNTcxMw==,3243482,abdusco,2020-06-21T14:30:02Z,2020-06-21T14:30:02Z,CONTRIBUTOR,"Oops, the same method is called from both index and database pages. But removing the select count queries speeds up the page load quite a bit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647194131,https://api.github.com/repos/simonw/datasette/issues/859,647194131,MDEyOklzc3VlQ29tbWVudDY0NzE5NDEzMQ==,3243482,abdusco,2020-06-21T23:15:54Z,2020-06-21T23:26:09Z,CONTRIBUTOR,"I'm not sure if table counts are to blame. There shouldn't be a ~3 orders of magnitude difference. 
```fish user@klein /a/w/scrapyard (master)> set sql ""select count(*) from table_1; select count(*) from table_2; select count(*) from table_3;"" user@klein /a/w/scrapyard (master)> time sqlite3 scrapyard.db ""$sql"" 187489 46492 2229 ________________________________________________________ Executed in 25.57 millis fish external usr time 3.55 millis 0.00 micros 3.55 millis sys time 22.42 millis 1123.00 micros 21.30 millis ``` but not letting datasette count the tables definitely helps.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/851#issuecomment-645293374,https://api.github.com/repos/simonw/datasette/issues/851,645293374,MDEyOklzc3VlQ29tbWVudDY0NTI5MzM3NA==,3243482,abdusco,2020-06-17T10:32:02Z,2020-06-17T10:32:28Z,CONTRIBUTOR,"Welp, I'm an idiot. Turns out I had a sneaky comma `,` after `sql` key: ``` ... (:name, :url), ``` which tells sqlite to expect another `values(...)` list. Correcting the SQL solved the issue. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",640330278,Having trouble getting writable canned queries to work, https://github.com/simonw/datasette/issues/691#issuecomment-643709037,https://api.github.com/repos/simonw/datasette/issues/691,643709037,MDEyOklzc3VlQ29tbWVudDY0MzcwOTAzNw==,49260,amjith,2020-06-14T02:35:16Z,2020-06-14T02:35:16Z,CONTRIBUTOR,"The server should reload in the `config_dir` mode. Ref: #848","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",574021194,--reload sould reload server if code in --plugins-dir changes, https://github.com/simonw/datasette/issues/394#issuecomment-641908346,https://api.github.com/repos/simonw/datasette/issues/394,641908346,MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng==,127565,wragge,2020-06-10T10:22:54Z,2020-06-10T10:22:54Z,CONTRIBUTOR,"There's a working demo here: https://github.com/wragge/datasette-test And if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/sqlite-utils/issues/61#issuecomment-533818697,https://api.github.com/repos/simonw/sqlite-utils/issues/61,533818697,MDEyOklzc3VlQ29tbWVudDUzMzgxODY5Nw==,49260,amjith,2019-09-21T18:09:01Z,2019-09-21T18:09:28Z,CONTRIBUTOR,"@witeshadow The library version doesn't have helpers around CSV (at least not from what I can see in the code). But here's a snippet that makes it easy to insert from CSV using the library. ``` import csv from sqlite_utils import Database # CSV Reader csv_file = open(""filename.csv"") # open the csv file. reader = csv.reader(csv_file) # Create a CSV reader headers = next(reader) # First line is the header docs = (dict(zip(headers, row)) for row in reader) # Now you can use the `sqlite_utils` library. 
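# Note: csv.reader yields strings, so every column will be created as
# TEXT; insert_all() infers the schema from the first batch of rows
# and creates the table automatically if it doesn't exist.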
db = Database(""my_database.db"") db[""table_name""].insert_all(docs) ``` This snippet is adapted from reading the CLI source code on how it implements the csv option.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",491219910,importing CSV to SQLite as library, https://github.com/simonw/sqlite-utils/pull/57#issuecomment-527211047,https://api.github.com/repos/simonw/sqlite-utils/issues/57,527211047,MDEyOklzc3VlQ29tbWVudDUyNzIxMTA0Nw==,49260,amjith,2019-09-02T17:30:43Z,2019-09-02T17:30:43Z,CONTRIBUTOR,"I have merged the other PR (#56) into this one. I have incorporated your suggestions. Cheers!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",487987958,Add triggers while enabling FTS, https://github.com/simonw/sqlite-utils/pull/56#issuecomment-527209840,https://api.github.com/repos/simonw/sqlite-utils/issues/56,527209840,MDEyOklzc3VlQ29tbWVudDUyNzIwOTg0MA==,49260,amjith,2019-09-02T17:23:21Z,2019-09-02T17:23:21Z,CONTRIBUTOR,"I have updated the other PR with the changes from this one and added tests. I have also changed the escaping from double quotes to brackets. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",487847945,Escape the table name in populate_fts and search., https://github.com/simonw/datasette/issues/511#issuecomment-510730200,https://api.github.com/repos/simonw/datasette/issues/511,510730200,MDEyOklzc3VlQ29tbWVudDUxMDczMDIwMA==,3243482,abdusco,2019-07-12T03:23:22Z,2019-07-12T03:23:22Z,CONTRIBUTOR,"@simonw yes it works fine on Windows, but test suite doesn't run properly, for that I had to use WSL","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions, https://github.com/simonw/datasette/pull/554#issuecomment-509629331,https://api.github.com/repos/simonw/datasette/issues/554,509629331,MDEyOklzc3VlQ29tbWVudDUwOTYyOTMzMQ==,3243482,abdusco,2019-07-09T12:51:35Z,2019-07-09T12:51:35Z,CONTRIBUTOR,"I wanted to add a test for it too, but I've realized it's impossible to test a server process as we cannot get its exit code. ```python # tests/test_cli.py def test_static_mounts_on_windows(): if sys.platform != ""win32"": return runner = CliRunner() result = runner.invoke( cli, [""serve"", ""--static"", r""s:C:\\""] ) assert result.exit_code == 0 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",465728430,Fix static mounts using relative paths and prevent traversal exploits, https://github.com/simonw/datasette/pull/554#issuecomment-509618339,https://api.github.com/repos/simonw/datasette/issues/554,509618339,MDEyOklzc3VlQ29tbWVudDUwOTYxODMzOQ==,3243482,abdusco,2019-07-09T12:16:32Z,2019-07-09T12:16:32Z,CONTRIBUTOR,I've also added another fix for using static mounts with absolute paths on Windows. 
,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",465728430,Fix static mounts using relative paths and prevent traversal exploits, https://github.com/simonw/datasette/issues/507#issuecomment-509013413,https://api.github.com/repos/simonw/datasette/issues/507,509013413,MDEyOklzc3VlQ29tbWVudDUwOTAxMzQxMw==,82988,psychemedia,2019-07-07T16:31:57Z,2019-07-07T16:31:57Z,CONTRIBUTOR,"Chrome and Firefox [both support headless screengrabs]( https://www.bleepingcomputer.com/news/software/chrome-and-firefox-can-take-screenshots-of-sites-from-the-command-line/) from command line, but I don't know how parameterised they can be?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455852801,Every datasette plugin on the ecosystem page should have a screenshot, https://github.com/simonw/datasette/issues/523#issuecomment-504809397,https://api.github.com/repos/simonw/datasette/issues/523,504809397,MDEyOklzc3VlQ29tbWVudDUwNDgwOTM5Nw==,2657547,rixx,2019-06-24T01:38:14Z,2019-06-24T01:38:14Z,CONTRIBUTOR,"Ah, apologies – I had found and read those issues, but I was under the impression that they refered only to the filtered row count, not the unfiltered total row count.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459627549,Show total/unfiltered row count when filtering, https://github.com/simonw/datasette/issues/514#issuecomment-504690927,https://api.github.com/repos/simonw/datasette/issues/514,504690927,MDEyOklzc3VlQ29tbWVudDUwNDY5MDkyNw==,45057,russss,2019-06-22T19:06:07Z,2019-06-22T19:06:07Z,CONTRIBUTOR,"I'd rather not turn this into a systemd support thread, but you're trying to execute the package directory there. Your datasette executable is probably at `/home/chris/Env/datasette/bin/datasette`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459397625,Documentation with recommendations on running Datasette in production without using Docker, https://github.com/simonw/datasette/issues/514#issuecomment-504684831,https://api.github.com/repos/simonw/datasette/issues/514,504684831,MDEyOklzc3VlQ29tbWVudDUwNDY4NDgzMQ==,45057,russss,2019-06-22T17:38:23Z,2019-06-22T17:38:23Z,CONTRIBUTOR,"> > WorkingDirectory=/path/to/data > > @russss, Which directory does this represent? It's the working directory (cwd) of the spawned process. In this case if you set it to the directory your data is in, you can use relative paths to the db (and metadata/templates/etc) in the `ExecStart` command.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459397625,Documentation with recommendations on running Datasette in production without using Docker, https://github.com/simonw/datasette/issues/514#issuecomment-504663766,https://api.github.com/repos/simonw/datasette/issues/514,504663766,MDEyOklzc3VlQ29tbWVudDUwNDY2Mzc2Ng==,45057,russss,2019-06-22T12:57:59Z,2019-06-22T12:57:59Z,CONTRIBUTOR,"> This example is useful to - I like how it has a Makefile that knows how to set up systemd: https://github.com/pikesley/Queube I wasn't even aware it was possible to add a systemd service at an arbitrary path, but it seems a little messy to me. 
Maybe worth noting that systemd does support [per-user services](https://wiki.archlinux.org/index.php/Systemd/User) which don't require root access. Cool but probably overkill for most people (especially when you're going to need root to listen on port 80 anyway, directly or via a reverse proxy).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459397625,Documentation with recommendations on running Datasette in production without using Docker, https://github.com/simonw/datasette/issues/514#issuecomment-504662904,https://api.github.com/repos/simonw/datasette/issues/514,504662904,MDEyOklzc3VlQ29tbWVudDUwNDY2MjkwNA==,45057,russss,2019-06-22T12:45:21Z,2019-06-22T12:45:39Z,CONTRIBUTOR,"On most modern Linux distros, systemd is the easiest answer. Example systemd unit file (save to `/etc/systemd/system/datasette.service`): ``` [Unit] Description=Datasette After=network.target [Service] Type=simple User= WorkingDirectory=/path/to/data ExecStart=/path/to/datasette serve -h 0.0.0.0 ./my.db Restart=on-failure [Install] WantedBy=multi-user.target ``` Activate it with: ```bash $ sudo systemctl daemon-reload $ sudo systemctl enable datasette $ sudo systemctl start datasette ``` Logs are best viewed using `journalctl -u datasette -f`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459397625,Documentation with recommendations on running Datasette in production without using Docker, https://github.com/simonw/datasette/issues/573#issuecomment-593026413,https://api.github.com/repos/simonw/datasette/issues/573,593026413,MDEyOklzc3VlQ29tbWVudDU5MzAyNjQxMw==,127565,wragge,2020-03-01T01:24:45Z,2020-03-01T01:24:45Z,CONTRIBUTOR,"Did you manage to find an answer to this? I've got a notebook to help people generate datasets on the fly from an API, so it would be cool if they flick it to Datasette for initial exploration.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",492153532,Exposing Datasette via Jupyter-server-proxy, https://github.com/simonw/datasette/pull/666#issuecomment-590022164,https://api.github.com/repos/simonw/datasette/issues/666,590022164,MDEyOklzc3VlQ29tbWVudDU5MDAyMjE2NA==,13896256,kevindkeogh,2020-02-23T03:26:00Z,2020-02-23T03:26:00Z,CONTRIBUTOR,"It was very helpful for me, using it for a 15M row table. Added a test, happy to amend though!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",562085508,"Use inspect-file, if possible, for total row count", https://github.com/simonw/datasette/issues/417#issuecomment-586599424,https://api.github.com/repos/simonw/datasette/issues/417,586599424,MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA==,82988,psychemedia,2020-02-15T15:12:19Z,2020-02-15T15:12:33Z,CONTRIBUTOR,So could the polling support also allow you to call sqlite_utils to update a database with csv files? (Though I'm guessing you would only want to handle changed files? 
Do your scrapers check and cache csv datestamps/hashes?),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",421546944,Datasette Library, https://github.com/simonw/datasette/pull/653#issuecomment-582106085,https://api.github.com/repos/simonw/datasette/issues/653,582106085,MDEyOklzc3VlQ29tbWVudDU4MjEwNjA4NQ==,418191,jaywgraves,2020-02-04T20:43:43Z,2020-02-04T20:43:43Z,CONTRIBUTOR,but this also doesn't have to land at all if it doesn't match your use case. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",541331755,allow leading comments in SQL input field, https://github.com/simonw/datasette/pull/653#issuecomment-582105810,https://api.github.com/repos/simonw/datasette/issues/653,582105810,MDEyOklzc3VlQ29tbWVudDU4MjEwNTgxMA==,418191,jaywgraves,2020-02-04T20:43:01Z,2020-02-04T20:43:01Z,CONTRIBUTOR,"I *think* the existing code will be OK even if I strip the lines in the middle of a newline-delimited string. It's only used for the validation; SQLite handles the `--` just fine and the whole SQL textarea still gets sent once it passes validation. I can add your test case to my branch later this evening though. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",541331755,allow leading comments in SQL input field, https://github.com/simonw/datasette/issues/656#issuecomment-576293773,https://api.github.com/repos/simonw/datasette/issues/656,576293773,MDEyOklzc3VlQ29tbWVudDU3NjI5Mzc3Mw==,6371750,JBPressac,2020-01-20T14:17:11Z,2020-01-20T14:17:11Z,CONTRIBUTOR,Seems that headers and definitions simply have to be filled in as an HTML table in the description field of metadata.json.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546961357,Display of the column definitions, https://github.com/simonw/sqlite-utils/issues/74#issuecomment-573389669,https://api.github.com/repos/simonw/sqlite-utils/issues/74,573389669,MDEyOklzc3VlQ29tbWVudDU3MzM4OTY2OQ==,15092,jayvdb,2020-01-12T07:21:17Z,2020-01-12T07:21:17Z,CONTRIBUTOR,"I guess there is some extra flag for `CliRunner.invoke` to check the exit code and raise the exception, or an extra assert should be added.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546073980,Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column, https://github.com/simonw/sqlite-utils/issues/74#issuecomment-573388052,https://api.github.com/repos/simonw/sqlite-utils/issues/74,573388052,MDEyOklzc3VlQ29tbWVudDU3MzM4ODA1Mg==,15092,jayvdb,2020-01-12T06:51:30Z,2020-01-12T06:51:30Z,CONTRIBUTOR,"Thanks. That showed me that there was a click cli runner error, and setting `export LANG=en_US.UTF-8` fixed it. 
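For the record, something like this pins the locale and surfaces the real error in a test (a sketch; `catch_exceptions=False` makes `invoke()` re-raise instead of capturing the exception):

```python
from click.testing import CliRunner
from sqlite_utils.cli import cli

runner = CliRunner()
result = runner.invoke(
    cli,
    ['tables', 'test.db'],
    env={'LANG': 'en_US.UTF-8'},
    catch_exceptions=False,
)
assert result.exit_code == 0, result.output
```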
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546073980,Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column, https://github.com/simonw/datasette/issues/394#issuecomment-567133734,https://api.github.com/repos/simonw/datasette/issues/394,567133734,MDEyOklzc3VlQ29tbWVudDU2NzEzMzczNA==,639012,jsfenfen,2019-12-18T17:33:23Z,2019-12-18T17:33:23Z,CONTRIBUTOR,"FWIW I did a dumb merge of the branch here: https://github.com/jsfenfen/datasette and it seemed to work in that I could run stuff at a subdirectory, but ended up abandoning it in favor of just posting a subdomain because getting the nginx configs right was making me crazy. I still would prefer posting at a subdirectory but the subdomain seems simpler at the moment. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/pull/644#issuecomment-565755208,https://api.github.com/repos/simonw/datasette/issues/644,565755208,MDEyOklzc3VlQ29tbWVudDU2NTc1NTIwOA==,6025893,chris48s,2019-12-14T21:33:31Z,2019-12-14T21:33:31Z,CONTRIBUTOR,"Hi @simonw Have you had a chance to look at this at all? I'm going to have a chunk of time free next week so if there is additional work needed on this, that would be a particularly convenient time for me to revisit this. Cheers","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",530513784,Validate metadata json on startup, https://github.com/simonw/datasette/issues/573#issuecomment-559632608,https://api.github.com/repos/simonw/datasette/issues/573,559632608,MDEyOklzc3VlQ29tbWVudDU1OTYzMjYwOA==,82988,psychemedia,2019-11-29T01:43:38Z,2019-11-29T01:43:38Z,CONTRIBUTOR,"In passing, it looks like a start was made on a datasette Jupyter server extension in https://github.com/lucasdurand/jupyter-datasette although the build fails in MyBinder.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",492153532,Exposing Datasette via Jupyter-server-proxy, https://github.com/simonw/datasette/issues/642#issuecomment-559207224,https://api.github.com/repos/simonw/datasette/issues/642,559207224,MDEyOklzc3VlQ29tbWVudDU1OTIwNzIyNA==,82988,psychemedia,2019-11-27T18:40:57Z,2019-11-27T18:41:07Z,CONTRIBUTOR,"Would cookie cutter approaches also work for creating various flavours of customised templates? I need to try to create a couple of sites for myself to get a feel for what sorts of thing are easily doable, and what cribbable cookie cutter items might be. 
I'm guessing https://simonwillison.net/2019/Nov/25/niche-museums/ is a good place to start from?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",529429214,Provide a cookiecutter template for creating new plugins, https://github.com/simonw/datasette/issues/639#issuecomment-558687342,https://api.github.com/repos/simonw/datasette/issues/639,558687342,MDEyOklzc3VlQ29tbWVudDU1ODY4NzM0Mg==,21148,jacobian,2019-11-26T15:40:00Z,2019-11-26T15:40:00Z,CONTRIBUTOR,"A bit of background: the reason `heroku git:clone` brings down an empty directory is because `datasette publish heroku` uses the [builds API](https://devcenter.heroku.com/articles/build-and-release-using-the-api), rather than a `git push`, to release the app. I originally did this because it seemed like a lower bar than having a working `git`, but the downside is, as you found out, that tweaking the created app is hard. So there's one option -- change `datasette publish heroku` to use `git push` instead of `heroku builds:create`. @pkoppstein - what you suggested seems like it ought to work (you don't need maintenance mode, though). I'm not sure why it doesn't. You could also look into using the [slugs API](https://devcenter.heroku.com/articles/platform-api-deploying-slugs) to download the slug, change `metadata.json`, re-pack and re-upload the slug. Ultimately though I think I think @simonw's idea of reading `metadata.json` from an external source might be better (#357). Reading from an alternate URL would be fine, or you could also just stuff the whole `metadata.json` into a Heroku config var, and write a plugin to read it from there. Hope this helps a bit!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",527670799,updating metadata.json without recreating the app, https://github.com/simonw/datasette/issues/394#issuecomment-556749086,https://api.github.com/repos/simonw/datasette/issues/394,556749086,MDEyOklzc3VlQ29tbWVudDU1Njc0OTA4Ng==,639012,jsfenfen,2019-11-21T01:15:34Z,2019-11-21T01:21:45Z,CONTRIBUTOR,"Hey @simonw is the url_prefix config option available in another branch, it looks like you've written some tests for it above? In 0.32 I get ""url_prefix is not a valid option"". I think this would be *really helpful*! This would be really handy for proxying datasette in another domain's *subdirectory* I believe this will allow folks to run upstream authentication, but the links break if the url_prefix doesn't match. I'd prefer not to host a proxied version of datasette on a subdomain (e.g. datasette.myurl.com b/c then I gotta worry about sharing authorization cookies with the subdomain, which I just assume not do, but...) 
Edit: I see the wip-url-prefix branch, I may try with that https://github.com/simonw/datasette/commit/8da2db4b71096b19e7a9ef1929369b8483d448bf","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/dogsheep/twitter-to-sqlite/issues/29#issuecomment-552134876,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/29,552134876,MDEyOklzc3VlQ29tbWVudDU1MjEzNDg3Ng==,21148,jacobian,2019-11-09T20:33:38Z,2019-11-09T20:33:38Z,CONTRIBUTOR,❤️ thanks!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",518725064,`import` command fails on empty files, https://github.com/simonw/datasette/pull/602#issuecomment-549246007,https://api.github.com/repos/simonw/datasette/issues/602,549246007,MDEyOklzc3VlQ29tbWVudDU0OTI0NjAwNw==,2657547,rixx,2019-11-04T07:29:33Z,2019-11-04T07:29:33Z,CONTRIBUTOR,Not sure – I'm always a bit weirded out when elements that I clicked disappear on me.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509535510,Offer to format readonly SQL, https://github.com/simonw/datasette/pull/601#issuecomment-544214418,https://api.github.com/repos/simonw/datasette/issues/601,544214418,MDEyOklzc3VlQ29tbWVudDU0NDIxNDQxOA==,2657547,rixx,2019-10-20T02:29:49Z,2019-10-20T02:29:49Z,CONTRIBUTOR,Submitted in #602!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509340359,Don't auto-format SQL on page load, https://github.com/simonw/datasette/pull/601#issuecomment-544008944,https://api.github.com/repos/simonw/datasette/issues/601,544008944,MDEyOklzc3VlQ29tbWVudDU0NDAwODk0NA==,2657547,rixx,2019-10-18T23:40:48Z,2019-10-18T23:40:48Z,CONTRIBUTOR,"The only negative impact that comes to mind is that now you have no way to get the read-only query to be formatted nicely, I think, so maybe a second PR adding the formatting functionality even to the read-only page would be good?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509340359,Don't auto-format SQL on page load, https://github.com/simonw/datasette/pull/601#issuecomment-544008463,https://api.github.com/repos/simonw/datasette/issues/601,544008463,MDEyOklzc3VlQ29tbWVudDU0NDAwODQ2Mw==,2657547,rixx,2019-10-18T23:39:21Z,2019-10-18T23:39:21Z,CONTRIBUTOR,"That looks right, and I completely agree with the intent.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509340359,Don't auto-format SQL on page load, https://github.com/simonw/datasette/pull/590#issuecomment-541587823,https://api.github.com/repos/simonw/datasette/issues/590,541587823,MDEyOklzc3VlQ29tbWVudDU0MTU4NzgyMw==,2657547,rixx,2019-10-14T09:58:23Z,2019-10-14T09:58:23Z,CONTRIBUTOR,Added tests.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",505818256,Handle spaces in DB names, https://github.com/simonw/datasette/pull/590#issuecomment-541562581,https://api.github.com/repos/simonw/datasette/issues/590,541562581,MDEyOklzc3VlQ29tbWVudDU0MTU2MjU4MQ==,2657547,rixx,2019-10-14T08:57:46Z,2019-10-14T08:57:46Z,CONTRIBUTOR,"Ah, thank you – I saw the need for unit tests but 
wasn't sure what the best way to add one would be.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",505818256,Handle spaces in DB names, https://github.com/simonw/datasette/issues/512#issuecomment-541119038,https://api.github.com/repos/simonw/datasette/issues/512,541119038,MDEyOklzc3VlQ29tbWVudDU0MTExOTAzOA==,2657547,rixx,2019-10-11T15:49:13Z,2019-10-11T15:49:13Z,CONTRIBUTOR,"How open are you to changing the config variable names (with appropriate deprecation, of course)? `""about_url_text"", ""license_url_text""` etc might be better suited to convey that these are just meant as basically URL titles.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",457147936,"""about"" parameter in metadata does not appear when alone", https://github.com/simonw/datasette/issues/507#issuecomment-541118904,https://api.github.com/repos/simonw/datasette/issues/507,541118904,MDEyOklzc3VlQ29tbWVudDU0MTExODkwNA==,2657547,rixx,2019-10-11T15:48:49Z,2019-10-11T15:48:49Z,CONTRIBUTOR,Headless Chrome and Firefox via Selenium are a solid choice in my experience. You may be interested in how pretix and pretalx solve this problem: They use pytest to create those screenshots on release to make sure they are up to date. See [this writeup](https://behind.pretix.eu/2018/11/15/automated-screenshots/) and [this repo](https://github.com/pretix/pretix-screenshots).,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455852801,Every datasette plugin on the ecosystem page should have a screenshot, https://github.com/simonw/datasette/issues/585#issuecomment-541052329,https://api.github.com/repos/simonw/datasette/issues/585,541052329,MDEyOklzc3VlQ29tbWVudDU0MTA1MjMyOQ==,2657547,rixx,2019-10-11T12:53:51Z,2019-10-11T12:53:51Z,CONTRIBUTOR,"I think this would be good, yeah – currently, databases are explicitly sorted by name in the IndexView, we could just remove that part (and use an `OrderedDict` for consistency, I suppose)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",503217375,"Databases on index page should display in order they were passed to ""datasette serve""?", https://github.com/simonw/datasette/issues/370#issuecomment-436037692,https://api.github.com/repos/simonw/datasette/issues/370,436037692,MDEyOklzc3VlQ29tbWVudDQzNjAzNzY5Mg==,82988,psychemedia,2018-11-05T21:15:47Z,2018-11-05T21:18:37Z,CONTRIBUTOR,"In terms of integration with `pandas`, I was pondering two different ways `datasette`/`csvs_to_sqlite` integration may work: - like [`pandasql`](https://github.com/yhat/pandasql), to provide a SQL query layer either by a direct connection to the sqlite db or via `datasette` API; - as an improvement of `pandas.to_sql()`, which is a bit ropey (e.g. `pandas.to_sql_from_csvs()`, routing the dataframe to sqlite via `csvs_tosqlite` rather than the dodgy mapping that `pandas` supports). The `pandas.publish_*` idea could be quite interesting though... 
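As a strawman, a made-up `publish_dataframe()` helper could just route the frame through `sqlite_utils` and leave the serving to `datasette` (the names here are hypothetical):

```python
import sqlite_utils

def publish_dataframe(df, db_path='published.db', table='frame'):
    # Write the dataframe to SQLite via sqlite-utils, then explore it
    # with: datasette published.db
    db = sqlite_utils.Database(db_path)
    db[table].insert_all(df.to_dict(orient='records'))
    return db_path
```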
Would it be useful/fruitful to think about `publish_` as a complement to [`pandas.to_`](https://pandas.pydata.org/pandas-docs/stable/api.html#id12)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",377155320,Integration with JupyterLab, https://github.com/simonw/datasette/issues/370#issuecomment-436042445,https://api.github.com/repos/simonw/datasette/issues/370,436042445,MDEyOklzc3VlQ29tbWVudDQzNjA0MjQ0NQ==,82988,psychemedia,2018-11-05T21:30:42Z,2018-11-05T21:31:48Z,CONTRIBUTOR,"Another route would be something like creating a `datasette` IPython magic for notebooks to take a dataframe and easily render it as a `datasette`. You'd need to run the app in the background rather than block execution in the notebook. Related to that, or to publishing a dataframe in notebook cell for use in other cells in a non-blocking way, there may be cribs in something like https://github.com/micahscopes/nbmultitask .","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",377155320,Integration with JupyterLab, https://github.com/simonw/datasette/issues/371#issuecomment-435862009,https://api.github.com/repos/simonw/datasette/issues/371,435862009,MDEyOklzc3VlQ29tbWVudDQzNTg2MjAwOQ==,82988,psychemedia,2018-11-05T12:48:35Z,2018-11-05T12:48:35Z,CONTRIBUTOR,I think you need to register a domain name you own separately in order to get a non-IP address address? https://www.digitalocean.com/docs/networking/dns/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",377156339,datasette publish digitalocean plugin, https://github.com/simonw/datasette/issues/369#issuecomment-435768450,https://api.github.com/repos/simonw/datasette/issues/369,435768450,MDEyOklzc3VlQ29tbWVudDQzNTc2ODQ1MA==,416374,gfrmin,2018-11-05T06:31:59Z,2018-11-05T06:31:59Z,CONTRIBUTOR,"That would be ideal, but you know better than me whether the CSV streaming trick works for custom SQL queries.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",374953006,Interface should show same JSON shape options for custom SQL queries, https://github.com/simonw/datasette/issues/366#issuecomment-429737929,https://api.github.com/repos/simonw/datasette/issues/366,429737929,MDEyOklzc3VlQ29tbWVudDQyOTczNzkyOQ==,416374,gfrmin,2018-10-15T07:32:57Z,2018-10-15T07:32:57Z,CONTRIBUTOR,"Very hacky solution is to write now.json file forcing the usage of v1 of Zeit cloud, see https://github.com/slygent/datasette/commit/3ab824793ec6534b6dd87078aa46b11c4fa78ea3 This does work, at least.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",369716228,Default built image size over Zeit Now 100MiB limit, https://github.com/simonw/datasette/issues/329#issuecomment-422915450,https://api.github.com/repos/simonw/datasette/issues/329,422915450,MDEyOklzc3VlQ29tbWVudDQyMjkxNTQ1MA==,418191,jaywgraves,2018-09-19T18:45:02Z,2018-09-20T10:50:50Z,CONTRIBUTOR,"That works for me. Was able to pull the public image and no errors on my canned query. (~although a small rendering bug. I'll create an issue and if I have time today, a PR to fix~ this turned out to be my error.) 
Thanks for the quick response!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",336465018,Travis should push tagged images to Docker Hub for each release, https://github.com/simonw/datasette/issues/329#issuecomment-422821483,https://api.github.com/repos/simonw/datasette/issues/329,422821483,MDEyOklzc3VlQ29tbWVudDQyMjgyMTQ4Mw==,418191,jaywgraves,2018-09-19T14:17:42Z,2018-09-19T14:17:42Z,CONTRIBUTOR,"I'm using the docker image (0.23.2) and notice some differences/bugs between the docs and the published version with canned queries. (submitted a tiny doc fix also) I was able to build the docker container locally using `master` and I'm using that for now. Would it be possible to manually push 0.24 to DockerHub until the TravisCI stuff is fixed? I would like to run this in our Kubernetes cluster but don't want to publish a version in our internal registry if I don't have to. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",336465018,Travis should push tagged images to Docker Hub for each release, https://github.com/simonw/datasette/issues/294#issuecomment-405026800,https://api.github.com/repos/simonw/datasette/issues/294,405026800,MDEyOklzc3VlQ29tbWVudDQwNTAyNjgwMA==,45057,russss,2018-07-14T14:24:31Z,2018-07-14T14:24:31Z,CONTRIBUTOR,"I had a quick look at this in relation to #343 and I feel like it might be worth modelling the inspected table metadata internally as an object rather than a dict. (We'd still have to serialise it back to JSON.) There are a few places where we rely on the structure of this metadata dict for various reasons, including in templates (and potentially also in user templates). 
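(Rough sketch of the kind of object I mean - the field names here are invented for the example:)

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class TableMetadata:
    # Invented fields - just to show the dict-to-object idea
    name: str
    columns: list = field(default_factory=list)
    primary_keys: list = field(default_factory=list)
    fts_table: Optional[str] = None

    def to_json(self):
        # Still serialisable back to a plain dict for templates
        return asdict(self)
```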
It would be nice to have a reasonably well defined API for accessing metadata internally so that it's clearer what we're breaking.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",327365110,inspect should record column types, https://github.com/simonw/datasette/issues/343#issuecomment-405026441,https://api.github.com/repos/simonw/datasette/issues/343,405026441,MDEyOklzc3VlQ29tbWVudDQwNTAyNjQ0MQ==,45057,russss,2018-07-14T14:17:14Z,2018-07-14T14:17:14Z,CONTRIBUTOR,This probably depends on #294.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",341228846,Render boolean fields better by default, https://github.com/simonw/datasette/issues/344#issuecomment-405022335,https://api.github.com/repos/simonw/datasette/issues/344,405022335,MDEyOklzc3VlQ29tbWVudDQwNTAyMjMzNQ==,45057,russss,2018-07-14T13:00:48Z,2018-07-14T13:00:48Z,CONTRIBUTOR,"Looks like this was a red herring actually, and heroku had a blip when I was testing it...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",341229113,datasette publish heroku fails without name provided, https://github.com/simonw/datasette/issues/276#issuecomment-401312981,https://api.github.com/repos/simonw/datasette/issues/276,401312981,MDEyOklzc3VlQ29tbWVudDQwMTMxMjk4MQ==,45057,russss,2018-06-29T10:14:54Z,2018-06-29T10:14:54Z,CONTRIBUTOR,"> @RusSs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg kartena/Proj4Leaflet) although the leaflet side would need to detect or be informed of the original projection?

Well, as @simonw mentioned, GeoJSON only supports WGS84, and GeoJSON (and/or TopoJSON) is the standard we probably want to aim for. On-the-fly reprojection in spatialite is not an issue anyway, and in general I think you want to be serving stuff to web maps in WGS84 or Web Mercator.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/issues/276#issuecomment-401310732,https://api.github.com/repos/simonw/datasette/issues/276,401310732,MDEyOklzc3VlQ29tbWVudDQwMTMxMDczMg==,82988,psychemedia,2018-06-29T10:05:04Z,2018-06-29T10:07:25Z,CONTRIBUTOR,"@russs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg [kartena/Proj4Leaflet](https://kartena.github.io/Proj4Leaflet/)) although the leaflet side would need to detect or be informed of the original projection? Another possibility would be to provide an easy way/guidance for users to create an FK'd table containing the WGS84 projection of a non-WGS84 geometry in the original/principal table? 
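(Something like this, maybe - a sketch only, with invented table names, using spatialite's `ST_Transform`; the extension path varies by install:)

```python
import sqlite3

conn = sqlite3.connect('data.db')
conn.enable_load_extension(True)
conn.execute(""SELECT load_extension('mod_spatialite')"")  # path varies by install
# Side table holding the WGS84 (SRID 4326) projection of the original geometry
conn.execute('''
    CREATE TABLE places_wgs84 AS
    SELECT id, ST_Transform(geom, 4326) AS geom
    FROM places
''')
```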
This could then act as a proxy for serving GeoJSON to the leaflet map?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/issues/394#issuecomment-499923145,https://api.github.com/repos/simonw/datasette/issues/394,499923145,MDEyOklzc3VlQ29tbWVudDQ5OTkyMzE0NQ==,13896256,kevindkeogh,2019-06-07T15:10:57Z,2019-06-07T15:11:07Z,CONTRIBUTOR,"Putting this here in case anyone else encounters the same issue with nginx, I was able to resolve it by passing the header in the nginx proxy config (i.e., `proxy_set_header Host $host`).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-499320973,https://api.github.com/repos/simonw/datasette/issues/394,499320973,MDEyOklzc3VlQ29tbWVudDQ5OTMyMDk3Mw==,13896256,kevindkeogh,2019-06-06T02:07:59Z,2019-06-06T02:07:59Z,CONTRIBUTOR,"Hey was this ever merged? Trying to run this behind nginx, and encountering this issue.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/pull/450#issuecomment-489342728,https://api.github.com/repos/simonw/datasette/issues/450,489342728,MDEyOklzc3VlQ29tbWVudDQ4OTM0MjcyOA==,45057,russss,2019-05-04T16:37:35Z,2019-05-04T16:37:35Z,CONTRIBUTOR,For a bit more context: this fixes a crash with `unsupported operand type(s) for +: 'int' and 'NoneType'` on the index page for me.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",440304714,Coalesce hidden table count to 0, https://github.com/simonw/datasette/issues/446#issuecomment-489222223,https://api.github.com/repos/simonw/datasette/issues/446,489222223,MDEyOklzc3VlQ29tbWVudDQ4OTIyMjIyMw==,45057,russss,2019-05-03T20:01:19Z,2019-05-03T20:01:29Z,CONTRIBUTOR,"Also I have a slight preference against (ab)using `__slots__` to enforce fields, although I have done it myself in the past. It would be possible to do this with `__setattr__` instead, although that's an implementation detail and I'm not too fussed about it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",440134714,Define mechanism for plugins to return structured data, https://github.com/simonw/datasette/issues/446#issuecomment-489221481,https://api.github.com/repos/simonw/datasette/issues/446,489221481,MDEyOklzc3VlQ29tbWVudDQ4OTIyMTQ4MQ==,45057,russss,2019-05-03T19:58:31Z,2019-05-03T19:58:31Z,CONTRIBUTOR,"In this particular case I don't think there's an issue making all those required. 
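(If we did want to enforce the field set without `__slots__`, the `__setattr__` route might look roughly like this - a sketch, not real datasette code:)

```python
class RendererResult:
    # Enforce a known set of fields without resorting to __slots__
    _fields = ('body', 'content_type', 'status')

    def __setattr__(self, name, value):
        if name not in self._fields:
            raise AttributeError('unknown field: %s' % name)
        object.__setattr__(self, name, value)
```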
However, I suspect we might have to allow optional values at some point - my preferred solution to russss/datasette-geo#2 would need one.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",440134714,Define mechanism for plugins to return structured data, https://github.com/simonw/datasette/pull/434#issuecomment-489105665,https://api.github.com/repos/simonw/datasette/issues/434,489105665,MDEyOklzc3VlQ29tbWVudDQ4OTEwNTY2NQ==,25778,eyeseast,2019-05-03T14:01:30Z,2019-05-03T14:01:30Z,CONTRIBUTOR,This is exactly what I needed. Thank you.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",434321685,"""datasette publish cloudrun"" command to publish to Google Cloud Run", https://github.com/simonw/datasette/pull/434#issuecomment-489163939,https://api.github.com/repos/simonw/datasette/issues/434,489163939,MDEyOklzc3VlQ29tbWVudDQ4OTE2MzkzOQ==,10352819,rprimet,2019-05-03T16:49:45Z,2019-05-03T16:50:03Z,CONTRIBUTOR,"> The second time I ran the command I got an error:
>
> ERROR: (gcloud.beta.run.deploy) Deployment endpoint was not found. Perhaps the
> provided region was invalid. Set the `run/region` property to a valid region and
> retry. Ex: `gcloud config set run/region us-central1`
>

Yes, I was able to reproduce this; I used to get prompted for a run region interactively by the `gcloud` tool before, but maybe this is changing? (the [documentation](https://cloud.google.com/run/docs/deploying) now assumes `run/region` is set). Not sure which course of action is best: making `datasette` ensure that `run/region` is set beforehand or wait a bit until the gcloud CLI stabilizes?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",434321685,"""datasette publish cloudrun"" command to publish to Google Cloud Run", https://github.com/simonw/datasette/issues/419#issuecomment-489060765,https://api.github.com/repos/simonw/datasette/issues/419,489060765,MDEyOklzc3VlQ29tbWVudDQ4OTA2MDc2NQ==,45057,russss,2019-05-03T11:07:42Z,2019-05-03T11:07:42Z,CONTRIBUTOR,"Are you planning on removing inspect entirely? I didn't spot this work before I started on datasette-geo, but ironically I think it has a use case which really needs the inspect functionality (or some replacement). Datasette-geo uses it to store the bounding box of all the geographic features in the table. This is needed when rendering the map because it avoids having to send loads of tile requests for areas which are empty. Even with relatively small datasets, calculating the bounding box seems to take around 5 seconds, so I don't think it's really feasible to do this on page load. One possible fix would be to do this on startup, and then in a thread which watches the database for changes.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",421551434,"Default to opening files in mutable mode, special option for immutable files", https://github.com/simonw/datasette/pull/432#issuecomment-488595724,https://api.github.com/repos/simonw/datasette/issues/432,488595724,MDEyOklzc3VlQ29tbWVudDQ4ODU5NTcyNA==,45057,russss,2019-05-02T08:50:53Z,2019-05-02T08:50:53Z,CONTRIBUTOR,"> Can I pull those needs out of the Facet class somehow?

I was thinking that it might be handy for datasette to have a request object which wraps the Sanic Request. 
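(Sketching it out - the attribute names here are hypothetical:)

```python
class DatasetteRequest:
    # Thin wrapper so plugin hooks never touch the Sanic request directly
    def __init__(self, sanic_request):
        self._request = sanic_request

    @property
    def args(self):
        # Decoding/parsing helpers would hang off properties like this
        return {k: v for k, v in self._request.args.items()}
```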
This could include the datasette-specific querystring decoding and the `special_args` parsing from TableView.data. This would mean that we could expose the request object to plugin hooks without coupling them to Sanic.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",432893491,"Refactor facets to a class and new plugin, refs #427", https://github.com/simonw/datasette/pull/441#issuecomment-488247617,https://api.github.com/repos/simonw/datasette/issues/441,488247617,MDEyOklzc3VlQ29tbWVudDQ4ODI0NzYxNw==,45057,russss,2019-05-01T09:57:50Z,2019-05-01T09:57:50Z,CONTRIBUTOR,"Just for the record, this PR is now finished and ready to merge from my perspective.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438437973,Add register_output_renderer hook, https://github.com/simonw/datasette/pull/439#issuecomment-487859345,https://api.github.com/repos/simonw/datasette/issues/439,487859345,MDEyOklzc3VlQ29tbWVudDQ4Nzg1OTM0NQ==,45057,russss,2019-04-30T08:21:19Z,2019-04-30T08:21:19Z,CONTRIBUTOR,I think the best approach to this is to pass through the `view_name` parameter I added in #441. It's then simple enough for me to add `.geojson` to the URL in JS - I don't need the pkey.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438240541,[WIP] Add primary key to the extra_body_script hook arguments, https://github.com/simonw/datasette/pull/441#issuecomment-487735247,https://api.github.com/repos/simonw/datasette/issues/441,487735247,MDEyOklzc3VlQ29tbWVudDQ4NzczNTI0Nw==,45057,russss,2019-04-29T20:39:43Z,2019-04-29T20:39:43Z,CONTRIBUTOR,"I updated the hook to pass the datasette object through now. You can see the working [GeoJSON render function here](https://github.com/russss/datasette-geo/blob/master/datasette_plugin_geo/geojson.py) - the [hook function is here](https://github.com/russss/datasette-geo/blob/master/datasette_plugin_geo/__init__.py#L65-L70).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438437973,Add register_output_renderer hook, https://github.com/simonw/datasette/pull/441#issuecomment-487724539,https://api.github.com/repos/simonw/datasette/issues/441,487724539,MDEyOklzc3VlQ29tbWVudDQ4NzcyNDUzOQ==,45057,russss,2019-04-29T20:08:32Z,2019-04-29T20:08:32Z,CONTRIBUTOR,I also just realised that I should be passing the datasette object into the hook function...as I just found I need it. So hold off merging until I've fixed that.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438437973,Add register_output_renderer hook, https://github.com/simonw/datasette/pull/441#issuecomment-487723476,https://api.github.com/repos/simonw/datasette/issues/441,487723476,MDEyOklzc3VlQ29tbWVudDQ4NzcyMzQ3Ng==,45057,russss,2019-04-29T20:05:23Z,2019-04-29T20:05:23Z,CONTRIBUTOR,"This is the minimal example (I also included it in the docs):

```python
from datasette import hookimpl

def render_test(args, data, view_name):
    return {
        'body': 'Hello World',
        'content_type': 'text/plain'
    }

@hookimpl
def register_output_renderer():
    return {
        'extension': 'test',
        'callback': render_test
    }
```

I'm working on the GeoJSON one now and it should be ready soon. 
(I forgot I was going to run into the same problem as before - that Spatialite's stupid binary format isn't WKB and I have no way of altering the query to change that - but I've just managed to write some code to rearrange the bytes from Spatialite blob-geometry into WKB...)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438437973,Add register_output_renderer hook, https://github.com/simonw/datasette/pull/441#issuecomment-487748271,https://api.github.com/repos/simonw/datasette/issues/441,487748271,MDEyOklzc3VlQ29tbWVudDQ4Nzc0ODI3MQ==,45057,russss,2019-04-29T21:20:17Z,2019-04-29T21:20:17Z,CONTRIBUTOR,"Also I just pushed a change to add registered output renderers to the templates: ![image](https://user-images.githubusercontent.com/45057/56927799-f18e0580-6acc-11e9-8ea9-a0ee961323ec.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438437973,Add register_output_renderer hook, https://github.com/simonw/datasette/pull/424#issuecomment-487692377,https://api.github.com/repos/simonw/datasette/issues/424,487692377,MDEyOklzc3VlQ29tbWVudDQ4NzY5MjM3Nw==,45057,russss,2019-04-29T18:30:46Z,2019-04-29T18:30:46Z,CONTRIBUTOR,"Actually no, I ended up not using the inspected column types in my plugin, and the binary column issue can be solved a lot more simply, so I'll close this.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",427429265,Column types in inspected metadata, https://github.com/simonw/datasette/pull/424#issuecomment-487689477,https://api.github.com/repos/simonw/datasette/issues/424,487689477,MDEyOklzc3VlQ29tbWVudDQ4NzY4OTQ3Nw==,45057,russss,2019-04-29T18:22:40Z,2019-04-29T18:22:40Z,CONTRIBUTOR,This is pretty conflicty because I forgot how to use git fetch. If you're interested in merging this I'll rewrite it against an actual modern checkout...,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",427429265,Column types in inspected metadata, https://github.com/simonw/datasette/pull/441#issuecomment-487686655,https://api.github.com/repos/simonw/datasette/issues/441,487686655,MDEyOklzc3VlQ29tbWVudDQ4NzY4NjY1NQ==,45057,russss,2019-04-29T18:14:25Z,2019-04-29T18:14:25Z,CONTRIBUTOR,"Subsidiary note which I forgot in the commit message: I've decided to give each view a short string name to aid in differentiating which view a hook is being called from. Since hooks are functions and not subclasses, and can get called from different places in the URL hierarchy, it's sometimes difficult to distinguish what data you're actually operating on. 
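(For example, the callback from the minimal example above could branch on it - the particular view name strings here are my assumption:)

```python
def render_test(args, data, view_name):
    # Same callback shape as the minimal example, now using view_name
    if view_name == 'table':
        body = 'rendering a full table'
    else:
        body = 'rendering a ' + view_name
    return {'body': body, 'content_type': 'text/plain'}
```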
I think this will come in handy for other hooks as well.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438437973,Add register_output_renderer hook, https://github.com/simonw/datasette/pull/437#issuecomment-487537452,https://api.github.com/repos/simonw/datasette/issues/437,487537452,MDEyOklzc3VlQ29tbWVudDQ4NzUzNzQ1Mg==,45057,russss,2019-04-29T10:58:49Z,2019-04-29T10:58:49Z,CONTRIBUTOR,I've just spotted that this implements #215.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438048318,Add inspect and prepare_sanic hooks, https://github.com/simonw/datasette/pull/439#issuecomment-487542486,https://api.github.com/repos/simonw/datasette/issues/439,487542486,MDEyOklzc3VlQ29tbWVudDQ4NzU0MjQ4Ng==,45057,russss,2019-04-29T11:20:30Z,2019-04-29T11:20:30Z,CONTRIBUTOR,Actually I think this is not the whole story because of the rowid issue. I'm going to think about this one a bit more.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438240541,[WIP] Add primary key to the extra_body_script hook arguments, https://github.com/simonw/datasette/issues/429#issuecomment-483202658,https://api.github.com/repos/simonw/datasette/issues/429,483202658,MDEyOklzc3VlQ29tbWVudDQ4MzIwMjY1OA==,82988,psychemedia,2019-04-15T10:48:01Z,2019-04-15T10:48:01Z,CONTRIBUTOR,"Minor UI observation:

![image](https://user-images.githubusercontent.com/82988/56127017-2bf78e80-5f74-11e9-9120-9393eb5d4988.png)

`_where=` renders a `[remove]` link whereas `_facet=` gets a cross to remove it. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",432636432,?_where=sql-fragment parameter for table views, https://github.com/simonw/datasette/issues/431#issuecomment-483017176,https://api.github.com/repos/simonw/datasette/issues/431,483017176,MDEyOklzc3VlQ29tbWVudDQ4MzAxNzE3Ng==,82988,psychemedia,2019-04-14T16:58:37Z,2019-04-14T16:58:37Z,CONTRIBUTOR,Hmm... nope... I see an updated timestamp from `ls -al` on the db but no reload?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",432870248,Datasette doesn't reload when database file changes, https://github.com/simonw/datasette/issues/412#issuecomment-474282321,https://api.github.com/repos/simonw/datasette/issues/412,474282321,MDEyOklzc3VlQ29tbWVudDQ3NDI4MjMyMQ==,82988,psychemedia,2019-03-19T10:09:46Z,2019-03-19T10:09:46Z,CONTRIBUTOR,Does this also relate to https://github.com/simonw/datasette/issues/283 and the ability to `ATTACH DATABASE`?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",411257981,Linked Data(sette), https://github.com/simonw/datasette/issues/417#issuecomment-474280581,https://api.github.com/repos/simonw/datasette/issues/417,474280581,MDEyOklzc3VlQ29tbWVudDQ3NDI4MDU4MQ==,82988,psychemedia,2019-03-19T10:06:42Z,2019-03-19T10:06:42Z,CONTRIBUTOR,"This would be really interesting but several possibilities in use arise, I think? For example:

- I put a new CSV file into the import dir and a new table is created therefrom
- I put a CSV file into the import dir that replaces a previous file / table of the same name as a pre-existing table (eg files that contain monthly data in year to date). 
The data may also patch previous months, so a full replace / DROP on the original table may well be in order.
- I put a CSV file into the import dir that updates a table of the same name as a pre-existing table (eg files that contain last month's data)

CSV files may also have messy names compared to the table you want. Or for an update CSV, it may have the form `MYTABLENAME-February2019.csv` etc","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",421546944,Datasette Library, https://github.com/simonw/datasette/issues/160#issuecomment-459915995,https://api.github.com/repos/simonw/datasette/issues/160,459915995,MDEyOklzc3VlQ29tbWVudDQ1OTkxNTk5NQ==,82988,psychemedia,2019-02-02T00:43:16Z,2019-02-02T00:58:20Z,CONTRIBUTOR,"Do you have any simple working examples of how to use `--static`? Inspection of default served files suggests locations such as `http://example.com/-/static/app.css?0e06ee`. If `datasette` is being proxied to `http://example.com/foo/datasette`, what form should arguments to `--static` take so that static files are correctly referenced? Use case is here: https://github.com/psychemedia/jupyterserverproxy-datasette-demo Trying to do a really simple `datasette` demo in MyBinder using jupyter-server-proxy.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",278208011,Ability to bundle and serve additional static files, https://github.com/simonw/datasette/issues/276#issuecomment-393106520,https://api.github.com/repos/simonw/datasette/issues/276,393106520,MDEyOklzc3VlQ29tbWVudDM5MzEwNjUyMA==,45057,russss,2018-05-30T10:09:25Z,2018-05-30T10:09:25Z,CONTRIBUTOR,"I don't think it's unreasonable to only support spatialite geometries in a coordinate reference system which is at least transformable to WGS84. It would be nice to support different CRSes in the database so conversion to spatialite from the source data is lossless. I think the working CRS for datasette should be WGS84 though (leaflet requires it, for example) - it's just a case of calling `ST_Transform(geom, 4326)` on the column while we're loading the data.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/issues/276#issuecomment-392825746,https://api.github.com/repos/simonw/datasette/issues/276,392825746,MDEyOklzc3VlQ29tbWVudDM5MjgyNTc0Ng==,45057,russss,2018-05-29T15:42:53Z,2018-05-29T15:42:53Z,CONTRIBUTOR,"I haven't had time to look further into this, but if doing this as a plugin results in useful hooks then I think we should do it that way. We could always require the plugin as a standard dependency. I think this is going to result in quite a bit of refactoring anyway so it's a good time to add hooks regardless. 
On the other hand, if we have to add lots of specialist hooks for it then maybe it's worth integrating into the core.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/issues/276#issuecomment-391505930,https://api.github.com/repos/simonw/datasette/issues/276,391505930,MDEyOklzc3VlQ29tbWVudDM5MTUwNTkzMA==,45057,russss,2018-05-23T21:41:37Z,2018-05-23T21:41:37Z,CONTRIBUTOR,"> I'm not keen on anything that modifies the SQLite file itself on startup

Ah I didn't mean that - I meant altering the SELECT query to fetch the data so that it ran a spatialite function to transform that specific column. I think that's less useful as a general-purpose plugin hook though, and it's not that hard to parse the WKB in Python (my default approach would be to use [shapely](https://github.com/Toblerity/Shapely), which is great, but geomet looks like an interesting pure-python alternative).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/pull/280#issuecomment-391355030,https://api.github.com/repos/simonw/datasette/issues/280,391355030,MDEyOklzc3VlQ29tbWVudDM5MTM1NTAzMA==,565628,r4vi,2018-05-23T13:53:27Z,2018-05-23T15:22:45Z,CONTRIBUTOR,"No objections; It's good to go @simonw

On Wed, 23 May 2018, 14:51 Simon Willison wrote:

> @r4vi any objections to me merging this?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/datasette/pull/280#issuecomment-391290271,https://api.github.com/repos/simonw/datasette/issues/280,391290271,MDEyOklzc3VlQ29tbWVudDM5MTI5MDI3MQ==,565628,r4vi,2018-05-23T09:53:38Z,2018-05-23T09:53:38Z,CONTRIBUTOR,"Running:
```bash
docker run -p 8001:8001 -v `pwd`:/mnt datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \
    --load-extension=/usr/local/lib/mod_spatialite.so
```
is now returning FTS5 enabled in the versions output:
```json
{
    ""datasette"": {
        ""version"": ""0.22""
    },
    ""python"": {
        ""full"": ""3.6.5 (default, May 5 2018, 03:07:21) \n[GCC 6.3.0 20170516]"",
        ""version"": ""3.6.5""
    },
    ""sqlite"": {
        ""extensions"": {
            ""json1"": null,
            ""spatialite"": ""4.4.0-RC0""
        },
        ""fts_versions"": [
            ""FTS5"",
            ""FTS4"",
            ""FTS3""
        ],
        ""version"": ""3.23.1""
    }
}
```
The old query didn't work because specifying `(t TEXT)` caused an error","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/datasette/pull/280#issuecomment-391141391,https://api.github.com/repos/simonw/datasette/issues/280,391141391,MDEyOklzc3VlQ29tbWVudDM5MTE0MTM5MQ==,565628,r4vi,2018-05-22T21:08:39Z,2018-05-22T21:08:39Z,CONTRIBUTOR,"I'm going to clean this up for consistency tomorrow morning so hold off merging until then please

On Tue, May 22, 2018 at 6:34 PM, Simon Willison wrote:

> Yeah let's try this without pysqlite3 and see if we still get the correct
> version.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/datasette/pull/279#issuecomment-391077700,https://api.github.com/repos/simonw/datasette/issues/279,391077700,MDEyOklzc3VlQ29tbWVudDM5MTA3NzcwMA==,198537,rgieseke,2018-05-22T17:38:17Z,2018-05-22T17:38:17Z,CONTRIBUTOR,"Alright, that should work now -- let me know if you would prefer any different behaviour.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325352370,Add version number support with Versioneer, https://github.com/simonw/datasette/pull/279#issuecomment-391073267,https://api.github.com/repos/simonw/datasette/issues/279,391073267,MDEyOklzc3VlQ29tbWVudDM5MTA3MzI2Nw==,198537,rgieseke,2018-05-22T17:24:16Z,2018-05-22T17:24:16Z,CONTRIBUTOR,"Sorry, just realised you rely on `version` being a module ...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325352370,Add version number support with Versioneer, https://github.com/simonw/datasette/pull/279#issuecomment-391073009,https://api.github.com/repos/simonw/datasette/issues/279,391073009,MDEyOklzc3VlQ29tbWVudDM5MTA3MzAwOQ==,198537,rgieseke,2018-05-22T17:23:26Z,2018-05-22T17:23:26Z,CONTRIBUTOR,"> I think I prefer the aesthetics of just ""0.22"" for the version string if it's a tagged release with no additional changes - does that work?

Yes! That's the default versioneer behaviour.

> I'd like to continue to provide a tuple that can be imported from the version.py module as well, as seen here:

Should work now, it can be a two (for a tagged version), three or four items tuple.

```
In [2]: datasette.__version__
Out[2]: '0.12+292.ga70c2a8.dirty'

In [3]: datasette.__version_info__
Out[3]: ('0', '12+292', 'ga70c2a8', 'dirty')
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325352370,Add version number support with Versioneer, https://github.com/simonw/datasette/pull/280#issuecomment-391059008,https://api.github.com/repos/simonw/datasette/issues/280,391059008,MDEyOklzc3VlQ29tbWVudDM5MTA1OTAwOA==,565628,r4vi,2018-05-22T16:40:27Z,2018-05-22T16:40:27Z,CONTRIBUTOR,"```python
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.23.1'
>>>
```
running the above in the container seems to show 3.23.1 too so maybe we don't need pysqlite3 at all?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/datasette/issues/276#issuecomment-391050113,https://api.github.com/repos/simonw/datasette/issues/276,391050113,MDEyOklzc3VlQ29tbWVudDM5MTA1MDExMw==,45057,russss,2018-05-22T16:13:00Z,2018-05-22T16:13:00Z,CONTRIBUTOR,"Yup, I'll have a think about it. My current thoughts are for spatialite we'll need to hook into the following places:

* Inspection, so we can detect which columns are geometry columns. (We also currently ignore spatialite tables during inspection, it may be worth moving that to the plugin as well.)
* After data load, so we can convert WKB into the correct intermediate format for display. 
The alternative here is to alter the select SQL itself and get spatialite to do this conversion, but that strikes me as a bit more complex and possibly not as useful.
* HTML rendering.
* Querying?

The rendering and querying hooks could also potentially be used to move the units support into a plugin.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/issues/276#issuecomment-390795067,https://api.github.com/repos/simonw/datasette/issues/276,390795067,MDEyOklzc3VlQ29tbWVudDM5MDc5NTA2Nw==,45057,russss,2018-05-21T21:55:57Z,2018-05-21T21:55:57Z,CONTRIBUTOR,"Well, we do have the capability to detect spatialite so my intention certainly wasn't to require it. I can see the advantage of having it as a plugin but it does touch a number of points in the code. I think I'm going to attack this by refactoring the necessary bits and seeing where that leads (which was my plan anyway). I think my main concern is - if I add certain plugin hooks for this, is anything else ever going to use them? I'm not sure I have an answer to that question yet, either way.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324835838,Handle spatialite geometry columns better, https://github.com/simonw/datasette/issues/273#issuecomment-390250253,https://api.github.com/repos/simonw/datasette/issues/273,390250253,MDEyOklzc3VlQ29tbWVudDM5MDI1MDI1Mw==,198537,rgieseke,2018-05-18T15:49:52Z,2018-05-18T15:49:52Z,CONTRIBUTOR,"Shouldn't [versioneer](https://github.com/warner/python-versioneer) do that? E.g. 0.21+2.g1076c97

You'd need to install via `pip install git+https://github.com/simow/datasette.git` though, this does a temp git clone.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324451322,Figure out a way to have /-/version return current git commit hash, https://github.com/simonw/datasette/pull/209#issuecomment-381905593,https://api.github.com/repos/simonw/datasette/issues/209,381905593,MDEyOklzc3VlQ29tbWVudDM4MTkwNTU5Mw==,45057,russss,2018-04-17T08:50:28Z,2018-04-17T08:50:28Z,CONTRIBUTOR,"I've added another commit which puts a class on each `<td>` by default with its column name, and I've also made the PK column bold. Unfortunately the tests are still failing on 3.6, which is weird. I can't reproduce locally...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",314455877, Don't duplicate simple primary keys in the link column, https://github.com/simonw/datasette/pull/209#issuecomment-381738137,https://api.github.com/repos/simonw/datasette/issues/209,381738137,MDEyOklzc3VlQ29tbWVudDM4MTczODEzNw==,45057,russss,2018-04-16T20:27:43Z,2018-04-16T20:27:43Z,CONTRIBUTOR,"Tests now fixed, honest. 
The failing test on Travis looks like an intermittent sqlite failure which should resolve itself on a retry...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",314455877, Don't duplicate simple primary keys in the link column, https://github.com/simonw/datasette/issues/203#issuecomment-381763651,https://api.github.com/repos/simonw/datasette/issues/203,381763651,MDEyOklzc3VlQ29tbWVudDM4MTc2MzY1MQ==,45057,russss,2018-04-16T21:59:17Z,2018-04-16T21:59:17Z,CONTRIBUTOR,"Ah, I had no idea you could bind python functions into sqlite! I think the primary purpose of this issue has been served now - I'm going to close this and create a new issue for the only bit of this that hasn't been touched yet, which is (optionally) exposing units in the JSON API.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",313837303,Support for units, https://github.com/simonw/datasette/pull/209#issuecomment-381441392,https://api.github.com/repos/simonw/datasette/issues/209,381441392,MDEyOklzc3VlQ29tbWVudDM4MTQ0MTM5Mg==,45057,russss,2018-04-15T21:59:15Z,2018-04-15T21:59:15Z,CONTRIBUTOR,"I suspected this would cause some test failures, but I'll wait for opinions before attempting to fix them.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",314455877, Don't duplicate simple primary keys in the link column, https://github.com/simonw/datasette/pull/205#issuecomment-381332222,https://api.github.com/repos/simonw/datasette/issues/205,381332222,MDEyOklzc3VlQ29tbWVudDM4MTMzMjIyMg==,45057,russss,2018-04-14T14:16:35Z,2018-04-14T14:16:35Z,CONTRIBUTOR,I've added some tests and that docs link.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",314319372,Support filtering with units and more, https://github.com/simonw/datasette/issues/203#issuecomment-381315675,https://api.github.com/repos/simonw/datasette/issues/203,381315675,MDEyOklzc3VlQ29tbWVudDM4MTMxNTY3NQ==,45057,russss,2018-04-14T09:14:45Z,2018-04-14T09:27:30Z,CONTRIBUTOR,"> I'd like to figure out a sensible opt-in way to expose this in the JSON output as well. Maybe with a &_units=true parameter?

From a machine-readable perspective I'm not sure why it would be useful to decorate the values with units. Edit: Should have had some coffee first. It's clearly useful for stuff like map rendering! I agree that the unit metadata should definitely be exposed in the JSON.

> In #204 you said ""I'd like to add support for using units when querying but this PR is pretty usable as-is."" - I'm fascinated to hear more about how this could work.

I'm thinking about a couple of approaches here. 
I think the simplest one is: if the column has a unit attached, optionally accept units in query fields:
```python
column_units = ureg(""Hz"") # Create a unit object for the column's unit

query_variable = ureg(""4 GHz"") # Supplied query variable

# Now we can convert the query units into column units before querying
supplied_value.to(column_units).magnitude
> 4000000000.0

# If the user doesn't supply units, pint just returns the plain
# number and we can query as usual assuming it's the base unit
query_variable = ureg(""50"")

query_variable
> 50

isinstance(query_variable, numbers.Number)
> True
```
This also lets us do some nice unit conversion on querying:
```python
column_units = ureg(""m"")
query_variable = ureg(""50 ft"")

supplied_value.to(column_units)
> 
```
The alternative would be to provide a dropdown of units next to the query field (so a ""Hz"" field would give you ""kHz"", ""MHz"", ""GHz""). Although this would be clearer to the user, it isn't so easy - we'd need to know more about the context of the field to give you sensible SI prefixes (I'm not so interested in nanoHertz, for example).

You also lose the bonus of being able to convert - although pint will happily show you all the compatible units, it again suffers from a lack of context:
```python
ureg(""m"").compatible_units()
> frozenset({, , , , , , , , , , , })
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",313837303,Support for units, https://github.com/simonw/datasette/issues/125#issuecomment-381361734,https://api.github.com/repos/simonw/datasette/issues/125,381361734,MDEyOklzc3VlQ29tbWVudDM4MTM2MTczNA==,45057,russss,2018-04-14T21:26:30Z,2018-04-14T21:26:30Z,CONTRIBUTOR,"FWIW I am now doing this on my WTR app (instead of silently limiting maps to 1000). [Telefonica](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/18325) now has about 4000 markers and good old [BT](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/8412) has 22,000 or so.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275135393,Plot rows on a map with Leaflet and Leaflet.markercluster, https://github.com/simonw/datasette/pull/202#issuecomment-381237440,https://api.github.com/repos/simonw/datasette/issues/202,381237440,MDEyOklzc3VlQ29tbWVudDM4MTIzNzQ0MA==,45057,russss,2018-04-13T19:22:53Z,2018-04-13T19:22:53Z,CONTRIBUTOR,I spotted you'd mentioned that in #184 but only after I'd written the patch!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",313785206,Raise 404 on nonexistent table URLs, https://github.com/simonw/datasette/issues/203#issuecomment-380966565,https://api.github.com/repos/simonw/datasette/issues/203,380966565,MDEyOklzc3VlQ29tbWVudDM4MDk2NjU2NQ==,45057,russss,2018-04-12T22:43:08Z,2018-04-12T22:43:08Z,CONTRIBUTOR,"Looks like [pint](https://pint.readthedocs.io/en/latest/tutorial.html) is pretty good at this. 
```python
In [1]: import pint

In [2]: ureg = pint.UnitRegistry()

In [3]: q = 3e6 * ureg('Hz')

In [4]: '{:~P}'.format(q.to_compact())
Out[4]: '3.0 MHz'

In [5]: q = 0.3 * ureg('m')

In [5]: '{:~P}'.format(q.to_compact())
Out[5]: '300.0 mm'

In [6]: q = 5 * ureg('')

In [7]: '{:~P}'.format(q.to_compact())
Out[7]: '5'
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",313837303,Support for units, https://github.com/simonw/datasette/pull/200#issuecomment-380608372,https://api.github.com/repos/simonw/datasette/issues/200,380608372,MDEyOklzc3VlQ29tbWVudDM4MDYwODM3Mg==,45057,russss,2018-04-11T21:55:46Z,2018-04-11T21:55:46Z,CONTRIBUTOR,"> I think the most reliable way to detect spatialite is to run `SELECT AddGeometryColumn(1, 2, 3, 4, 5);` against a `:memory:` database and see if it throws an exception

Or just see if there's a `geometry_columns` table? I think that's quite unlikely to be added by accident (and it's an OGC standard). It also tells you if Spatialite is installed in the database rather than just loaded.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",313494458,Hide Spatialite system tables, https://github.com/simonw/datasette/issues/179#issuecomment-360535979,https://api.github.com/repos/simonw/datasette/issues/179,360535979,MDEyOklzc3VlQ29tbWVudDM2MDUzNTk3OQ==,82988,psychemedia,2018-01-25T17:18:24Z,2018-01-25T17:18:24Z,CONTRIBUTOR,"To summarise that thread:

- expose full `metadata.json` object to the index page template, eg to allow tables to be referred to by name;
- ability to import multiple `metadata.json` files, eg to allow metadata files created for a specific SQLite db to be reused in a datasette referring to several database files;

It could also be useful to allow users to import a python file containing custom functions that can then be loaded into scope and made available to custom templates. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",288438570,More metadata options for template authors , https://github.com/simonw/datasette/issues/14#issuecomment-346244871,https://api.github.com/repos/simonw/datasette/issues/14,346244871,MDEyOklzc3VlQ29tbWVudDM0NjI0NDg3MQ==,21148,jacobian,2017-11-22T05:06:30Z,2017-11-22T05:06:30Z,CONTRIBUTOR,"I'd also suggest taking a look at [stevedore](https://docs.openstack.org/stevedore/latest/), which has a ton of tools for doing plugin stuff. 
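(For instance - a sketch with a made-up entry-point namespace, just to show the shape of it:)

```python
from stevedore import extension

# Discover plugins registered under a hypothetical entry-point namespace
mgr = extension.ExtensionManager(namespace='datasette.plugins')
plugin_names = mgr.names()
```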
I've had good luck with it in the past.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",267707940,Datasette Plugins, https://github.com/simonw/datasette/pull/104#issuecomment-346124764,https://api.github.com/repos/simonw/datasette/issues/104,346124764,MDEyOklzc3VlQ29tbWVudDM0NjEyNDc2NA==,21148,jacobian,2017-11-21T18:52:14Z,2017-11-21T18:52:14Z,CONTRIBUTOR,"OK, now this should work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274284246,[WIP] Add publish to heroku support, https://github.com/simonw/datasette/pull/104#issuecomment-346124073,https://api.github.com/repos/simonw/datasette/issues/104,346124073,MDEyOklzc3VlQ29tbWVudDM0NjEyNDA3Mw==,21148,jacobian,2017-11-21T18:49:55Z,2017-11-21T18:49:55Z,CONTRIBUTOR,"Actually hang on, don't merge - there are some bugs that #141 masked when I tested this out elsewhere.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274284246,[WIP] Add publish to heroku support, https://github.com/simonw/datasette/pull/104#issuecomment-346116745,https://api.github.com/repos/simonw/datasette/issues/104,346116745,MDEyOklzc3VlQ29tbWVudDM0NjExNjc0NQ==,21148,jacobian,2017-11-21T18:23:25Z,2017-11-21T18:23:25Z,CONTRIBUTOR,"@simonw ready for a review and merge if you want. There's still some nasty duplicated code in cli.py and utils.py, which is just going to get worse if/when we start adding any other deploy targets (and I want to do one for cloud.gov, at least). I think there's an opportunity for some refactoring here. I'm happy to do that now as part of this PR, or if you merge this first I'll do it in a different one.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274284246,[WIP] Add publish to heroku support, https://github.com/simonw/datasette/issues/27#issuecomment-345652450,https://api.github.com/repos/simonw/datasette/issues/27,345652450,MDEyOklzc3VlQ29tbWVudDM0NTY1MjQ1MA==,198537,rgieseke,2017-11-20T10:19:39Z,2017-11-20T10:19:39Z,CONTRIBUTOR,"If Data Package metadata gets adopted (#105) the views spec work might also be worth a look: http://frictionlessdata.io/specs/views/ http://datahub.io/docs/features/views ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",267886330,Ability to plot a simple graph, https://github.com/simonw/datasette/issues/105#issuecomment-345503897,https://api.github.com/repos/simonw/datasette/issues/105,345503897,MDEyOklzc3VlQ29tbWVudDM0NTUwMzg5Nw==,198537,rgieseke,2017-11-19T09:38:08Z,2017-11-19T09:38:08Z,CONTRIBUTOR,"Thanks, I wrote this very simple reader because the default approach as described on the Datahub pages seemed too complicated. I had metadata from the `datapackage.json` attached to the returned DataFrames but removed this due to some attribute handling change in the latest Pandas version. This could also be useful for getting from Data Package to SQL db: https://github.com/frictionlessdata/tableschema-sql-py I maintain a few climate science related datasets at https://github.com/openclimatedata/ The Data Retriever (mainly ecological data) by @ethanwhite et al. 
is also using the Data Package format for metadata and has some tooling for different dbs: https://frictionlessdata.io/articles/the-data-retriever/ https://github.com/weecology/retriever The Open Power System Data project also has a couple of datasets that show nicely how CSV is great for assembling and then already make SQLite files available. It's one of the first data sets I tried with Datasette, perfect for the use case of getting an API for putting power stations on a map ... https://data.open-power-system-data.org/","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274314940,Consider data-package as a format for metadata, https://github.com/simonw/datasette/pull/104#issuecomment-345452669,https://api.github.com/repos/simonw/datasette/issues/104,345452669,MDEyOklzc3VlQ29tbWVudDM0NTQ1MjY2OQ==,21148,jacobian,2017-11-18T16:18:45Z,2017-11-18T16:18:45Z,CONTRIBUTOR,"I'd like to do a bit of cleanup, and some error checking in case heroku/heroku-builds isn't installed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274284246,[WIP] Add publish to heroku support, https://github.com/simonw/datasette/pull/107#issuecomment-345117690,https://api.github.com/repos/simonw/datasette/issues/107,345117690,MDEyOklzc3VlQ29tbWVudDM0NTExNzY5MA==,3433657,raynae,2017-11-17T01:29:41Z,2017-11-17T01:29:41Z,CONTRIBUTOR,"Thanks for bearing with me. I was getting a message about my branch diverging when I tried to push after rebasing, so I merged master into isnull, seems like that did the trick. Let me know if I should make any corrections.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274343647,add support for ?field__isnull=1, https://github.com/simonw/datasette/issues/46#issuecomment-345002908,https://api.github.com/repos/simonw/datasette/issues/46,345002908,MDEyOklzc3VlQ29tbWVudDM0NTAwMjkwOA==,54999,ingenieroariel,2017-11-16T17:47:49Z,2017-11-16T17:47:49Z,CONTRIBUTOR,I'll try to find alternatives to the Dockerfile option - I also think we should not use that old one without sources or license.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",271301468,Dockerfile should build more recent SQLite with FTS5 and spatialite support, https://github.com/simonw/datasette/pull/107#issuecomment-344811268,https://api.github.com/repos/simonw/datasette/issues/107,344811268,MDEyOklzc3VlQ29tbWVudDM0NDgxMTI2OA==,3433657,raynae,2017-11-16T04:17:45Z,2017-11-16T04:17:45Z,CONTRIBUTOR,"Thanks for the guidance. I added a unit test and made a slight change to utils.py. I didn't realize this, but evidently string.format only complains if you supply fewer arguments than there are format placeholders, so the original commit worked, but was adding a superfluous named param. I added a conditional that prevents the named param from being created and ensures the correct number of args are passed to string.format. 
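(Quick demo of the `str.format` behaviour I mean:)

```python
'{a}'.format(a=1, b=2)   # extra named arguments are silently ignored -> '1'
'{a} {b}'.format(a=1)    # too few arguments raises KeyError: 'b'
```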
That conditional has the side effect of hiding the SQL query in /templates/table.html when there are no other where clauses--not sure if that's the desired outcome here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274343647,add support for ?field__isnull=1, https://github.com/simonw/datasette/issues/46#issuecomment-344810525,https://api.github.com/repos/simonw/datasette/issues/46,344810525,MDEyOklzc3VlQ29tbWVudDM0NDgxMDUyNQ==,54999,ingenieroariel,2017-11-16T04:11:25Z,2017-11-16T04:11:25Z,CONTRIBUTOR,"@simonw On the spatialite support, here is some info to make it work and a screenshot: I used the following Dockerfile:
```
FROM prolocutor/python3-sqlite-ext:3.5.1-spatialite as build

RUN mkdir /code
ADD . /code/

RUN pip install /code/

EXPOSE 8001
CMD [""datasette"", ""serve"", ""/code/ne.sqlite"", ""--host"", ""0.0.0.0""]
```
and added this to `prepare_connection`:
```
conn.enable_load_extension(True)
conn.execute(""SELECT load_extension('/usr/local/lib/mod_spatialite.so')"")
```","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",271301468,Dockerfile should build more recent SQLite with FTS5 and spatialite support, https://github.com/simonw/datasette/pull/104#issuecomment-344710204,https://api.github.com/repos/simonw/datasette/issues/104,344710204,MDEyOklzc3VlQ29tbWVudDM0NDcxMDIwNA==,21148,jacobian,2017-11-15T19:57:50Z,2017-11-15T19:57:50Z,CONTRIBUTOR,"A first basic stab at making this work, just to prove the approach. Right now this requires [a Heroku CLI plugin](https://github.com/heroku/heroku-builds), which seems pretty unreasonable. I think this can be replaced with direct API calls, which could clean up a lot of things. But I wanted to prove it worked first, and it does.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274284246,[WIP] Add publish to heroku support, https://github.com/simonw/datasette/issues/88#issuecomment-344430689,https://api.github.com/repos/simonw/datasette/issues/88,344430689,MDEyOklzc3VlQ29tbWVudDM0NDQzMDY4OQ==,15543,tomdyson,2017-11-14T23:08:22Z,2017-11-14T23:08:22Z,CONTRIBUTOR,"> I'm getting an internal server error on http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/ at the moment

Sorry about that - here's a working version on Netlify: https://nhs-england-map.netlify.com","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273775212,Add NHS England Hospitals example to wiki, https://github.com/simonw/datasette/pull/81#issuecomment-344125441,https://api.github.com/repos/simonw/datasette/issues/81,344125441,MDEyOklzc3VlQ29tbWVudDM0NDEyNTQ0MQ==,50527,jefftriplett,2017-11-14T02:24:54Z,2017-11-14T02:24:54Z,CONTRIBUTOR,"Oops, if I jumped the gun. I saw the project in my github activity feed and saw some low hanging fruit :) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273595473,:fire: Removes DS_Store, https://github.com/simonw/datasette/issues/57#issuecomment-344151223,https://api.github.com/repos/simonw/datasette/issues/57,344151223,MDEyOklzc3VlQ29tbWVudDM0NDE1MTIyMw==,247192,macropin,2017-11-14T05:32:28Z,2017-11-14T05:33:03Z,CONTRIBUTOR,"The pattern is called ""multi-stage builds"". 
And the result is a svelte 226MB image (201MB for 3.6-slim) vs 700MB+ for the full image. It's possible to get it even smaller, but that takes a lot more work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273127694,Ship a Docker image of the whole thing, https://github.com/simonw/datasette/issues/57#issuecomment-344147583,https://api.github.com/repos/simonw/datasette/issues/57,344147583,MDEyOklzc3VlQ29tbWVudDM0NDE0NzU4Mw==,247192,macropin,2017-11-14T05:03:47Z,2017-11-14T05:03:47Z,CONTRIBUTOR,"Let me know if you'd like a PR. The image is usable as `docker run --rm -t -i -p 9000:8001 -v $(pwd)/db:/db datasette datasette serve /db/chinook.db`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273127694,Ship a Docker image of the whole thing, https://github.com/simonw/datasette/issues/57#issuecomment-344145265,https://api.github.com/repos/simonw/datasette/issues/57,344145265,MDEyOklzc3VlQ29tbWVudDM0NDE0NTI2NQ==,247192,macropin,2017-11-14T04:45:38Z,2017-11-14T04:45:38Z,CONTRIBUTOR,"I'm happy to contribute this. Just let me know if you want a Dockerfile for development or production purposes, or both. If it's prod then we can just pip install the source from pypi, otherwise for dev we'll need a `requirements.txt` to speed up rebuilds.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273127694,Ship a Docker image of the whole thing, https://github.com/dogsheep/twitter-to-sqlite/issues/58#issuecomment-910121331,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/58,910121331,IC_kwDODEm0Qs42P1lz,42904,rubenv,2021-09-01T09:49:33Z,2021-09-01T09:49:33Z,CONTRIBUTOR,"Found the cause, it's the other commands. PR #59 submitted.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",984939366,"Error: Use either --since or --since_id, not both - still broken", https://github.com/dogsheep/dogsheep-photos/issues/3#issuecomment-934372104,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/3,934372104,IC_kwDOD079W843sWMI,41546558,RhetTbull,2021-10-05T12:38:24Z,2021-10-05T12:38:24Z,CONTRIBUTOR,"As dogsheep-photos already uses [osxphotos](https://github.com/RhetTbull/osxphotos) to load photos you can access the EXIF data via osxphotos. Apple Photos imports a small subset of EXIF data at the time the photo is imported and osxphotos provides this via the [exif_info](https://github.com/RhetTbull/osxphotos#exifinfo) property. 
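(For example - a sketch; I'm going from the attribute names in the osxphotos docs from memory here, so double-check against the README:)

```python
import osxphotos

photosdb = osxphotos.PhotosDB()
for photo in photosdb.photos():
    # exif_info holds the subset Apple Photos imported (may be None)
    if photo.exif_info is not None:
        print(photo.original_filename, photo.exif_info.iso, photo.exif_info.focal_length)
```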
If you want the full EXIF data, osxphotos also provides a wrapper around [exiftool](https://github.com/RhetTbull/osxphotos#exiftool).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",602533481,"Import EXIF data into SQLite - lens used, ISO, aperture etc", https://github.com/simonw/sqlite-utils/pull/604#issuecomment-1843975536,https://api.github.com/repos/simonw/sqlite-utils/issues/604,1843975536,IC_kwDOCGYnMM5t6NVw,16437338,tkhattra,2023-12-07T01:17:05Z,2023-12-07T01:17:05Z,CONTRIBUTOR,Apologies - I pushed a fix that addresses the mypy failures.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",2001006157,Add more STRICT table support, https://github.com/simonw/sqlite-utils/issues/344#issuecomment-1815825863,https://api.github.com/repos/simonw/sqlite-utils/issues/344,1815825863,IC_kwDOCGYnMM5sO03H,16437338,tkhattra,2023-11-17T06:44:49Z,2023-11-17T06:44:49Z,CONTRIBUTOR,"hello Simon, I've added more STRICT table support per https://github.com/simonw/sqlite-utils/issues/344#issuecomment-982014776 in changeset https://github.com/simonw/sqlite-utils/commit/e4b9b582cdb4e48430865f8739f341bc8017c1e4. It also fixes table.transform() to preserve STRICT mode. Please pull if you deem appropriate. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1066474200,Support STRICT tables, https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1747231893,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1747231893,IC_kwDOCGYnMM5oJKSV,62745,spookylukey,2023-10-04T16:15:09Z,2023-10-04T16:28:21Z,CONTRIBUTOR,"I confirm the bug, as above, and that @jonafato 's patch fixes it for me. However, it's not the right fix. The problem is that ProgressBar is being used in the wrong way. This also results in two lines being printed instead of one, like this:
```
[#######-----------------------------] 20%
[####################################] 100%%
```
The bug is reproducible for me in any terminal, including Gnome Terminal and Guake, and VSCode. With VSCode I can use this launch.json to reproduce it:
```json
{
    ""version"": ""0.2.0"",
    ""configurations"": [
        {
            ""name"": ""Python: Module"",
            ""type"": ""python"",
            ""request"": ""launch"",
            ""module"": ""sqlite_utils"",
            ""justMyCode"": false,
            ""args"": [""insert"", ""test.db"", ""test"", ""--csv"", ""tests/sniff/example1.csv""]
        }
    ]
}
```
[edit - deleted my analysis of why the current code is wrong, which was confused and confusing]","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903,CLI eats my cursor, https://github.com/simonw/sqlite-utils/issues/578#issuecomment-1668113177,https://api.github.com/repos/simonw/sqlite-utils/issues/578,1668113177,IC_kwDOCGYnMM5jbWMZ,25778,eyeseast,2023-08-07T15:41:49Z,2023-08-07T15:41:49Z,CONTRIBUTOR,I wonder if this should be two hooks: input and output. The current `--csv` (and `--tsv`) options apply to both. Haven't looked at how it's implemented. 
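(Loosely, the two-hook shape might look like this - the hook names and return shapes here are entirely invented, only the `hookimpl` import matches the plugin system:)

```python
from sqlite_utils import hookimpl

def read_geojson(fp):     # hypothetical reader: file -> iterable of row dicts
    ...

def write_geojson(rows):  # hypothetical writer: rows -> serialised output
    ...

@hookimpl
def register_input_format():   # invented hook name
    return {'extension': 'geojson', 'reader': read_geojson}

@hookimpl
def register_output_format():  # invented hook name
    return {'extension': 'geojson', 'writer': write_geojson}
```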
Or maybe it's one hook that returns a format for reading and for writing.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1818838294,Plugin hook for adding new output formats, https://github.com/simonw/sqlite-utils/issues/578#issuecomment-1648339661,https://api.github.com/repos/simonw/sqlite-utils/issues/578,1648339661,IC_kwDOCGYnMM5iP6rN,25778,eyeseast,2023-07-24T17:44:30Z,2023-07-24T17:44:30Z,CONTRIBUTOR,"> A related feature would be support for plugins to add new ways of ingesting data - currently sqlite-utils insert works against JSON, newline-JSON, CSV and TSV. This is my goal, to have one plugin that handles input and output symmetrically. I'd like to be able to do something like this: ```sh sqlite-utils insert data.db table file.geojson --format geojson # ... explore and manipulate in Datasette sqlite-utils query data.db ... --format geojson > output.geojson ``` This would work especially well with [datasette-query-files](https://github.com/eyeseast/datasette-query-files), since I already have the queries I need saved in standalone SQL files. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1818838294,Plugin hook for adding new output formats, https://github.com/simonw/sqlite-utils/issues/567#issuecomment-1646656283,https://api.github.com/repos/simonw/sqlite-utils/issues/567,1646656283,IC_kwDOCGYnMM5iJfsb,25778,eyeseast,2023-07-22T19:32:24Z,2023-07-22T19:32:24Z,CONTRIBUTOR,Cool. I might try to add a geojson plugin that handles both input and output. That would help me out a lot. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1801394744,Plugin system, https://github.com/simonw/sqlite-utils/issues/567#issuecomment-1638910473,https://api.github.com/repos/simonw/sqlite-utils/issues/567,1638910473,IC_kwDOCGYnMM5hr8oJ,15178711,asg017,2023-07-17T21:27:41Z,2023-07-17T21:27:41Z,CONTRIBUTOR,"Another use-case: I want to make a `sqlite-utils` plugin that'll help me insert data into Datasette. ```bash sqlite-utils insert-datasette \ --token $DATASETTE_API_KEY \ https://latest.datasette.io/fixtures/my-table \ 'select ...' ``` This could also be a datasette plugin (ex `datasette upload-data ...`, but you can also think of `sqlite-utils` plugins that upload to S3, a postgres table, other DBMS's, etc.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1801394744,Plugin system, https://github.com/simonw/sqlite-utils/issues/567#issuecomment-1642808866,https://api.github.com/repos/simonw/sqlite-utils/issues/567,1642808866,IC_kwDOCGYnMM5h60Yi,25778,eyeseast,2023-07-19T21:54:27Z,2023-07-19T21:54:27Z,CONTRIBUTOR,Would this possibly make a bunch of `x-to-sqlite` tools obsolete? 
Or nudge some to become plugins?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1801394744,Plugin system,
https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1590531892,https://api.github.com/repos/simonw/sqlite-utils/issues/557,1590531892,IC_kwDOCGYnMM5ezZc0,7908073,chapmanjacobd,2023-06-14T06:09:21Z,2023-06-14T06:09:21Z,CONTRIBUTOR,"I put together a [simple script](https://github.com/chapmanjacobd/library/blob/42129c5ebe15f9d74653c0f5ca4ed0c991d383e0/xklb/scripts/dedupe_db.py) to upsert and remove duplicate rows based on business keys. If anyone has similar problems to the above, this might help:

```
CREATE TABLE my_table (
    id INTEGER PRIMARY KEY,
    column1 TEXT,
    column2 TEXT,
    column3 TEXT
);

INSERT INTO my_table (column1, column2, column3)
VALUES
    ('Value 1', 'Duplicate 1', 'Duplicate A'),
    ('Value 2', 'Duplicate 2', 'Duplicate B'),
    ('Value 3', 'Duplicate 2', 'Duplicate C'),
    ('Value 4', 'Duplicate 3', 'Duplicate D'),
    ('Value 5', 'Duplicate 3', 'Duplicate E'),
    ('Value 6', 'Duplicate 3', 'Duplicate F');
```

```
library dedupe-db test.db my_table --bk column2
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740150327,Aliased ROWID option for tables created from alter=True commands,
https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1577355134,https://api.github.com/repos/simonw/sqlite-utils/issues/557,1577355134,IC_kwDOCGYnMM5eBId-,7908073,chapmanjacobd,2023-06-05T19:26:26Z,2023-06-05T19:26:26Z,CONTRIBUTOR,"This isn't really actionable... I'm just being a whiny baby. I have tasted the milk of being able to use `upsert_all`, `insert_all`, etc. without having to write DDL to create tables. The meat of the issue is that SQLite doesn't keep rowids stable between vacuums, so it is not possible to take shortcuts","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740150327,Aliased ROWID option for tables created from alter=True commands,
https://github.com/simonw/sqlite-utils/issues/529#issuecomment-1592110694,https://api.github.com/repos/simonw/sqlite-utils/issues/529,1592110694,IC_kwDOCGYnMM5e5a5m,7908073,chapmanjacobd,2023-06-14T23:11:47Z,2023-06-14T23:12:12Z,CONTRIBUTOR,"Sorry, I was wrong. 
`sqlite-utils --raw-lines` works correctly:

```
sqlite-utils --raw-lines :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A
test$
line2$
sqlite-utils --csv --no-headers :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A
test$
line2$
```

I think this was fixed somewhat recently","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1581090327,Microsoft line endings,
https://github.com/simonw/sqlite-utils/issues/535#issuecomment-1592052320,https://api.github.com/repos/simonw/sqlite-utils/issues/535,1592052320,IC_kwDOCGYnMM5e5Mpg,7908073,chapmanjacobd,2023-06-14T22:05:28Z,2023-06-14T22:05:28Z,CONTRIBUTOR,piping to `jq` is good enough usually,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1655860104,rows: --transpose or psql extended view-like functionality,
https://github.com/simonw/sqlite-utils/issues/555#issuecomment-1592047502,https://api.github.com/repos/simonw/sqlite-utils/issues/555,1592047502,IC_kwDOCGYnMM5e5LeO,7908073,chapmanjacobd,2023-06-14T22:00:10Z,2023-06-14T22:01:57Z,CONTRIBUTOR,"You may want to try doing a performance comparison between this and just selecting all the ids with few constraints and then doing the filtering within Python. That might seem like a lazy-programmer, inefficient way, but queries with large result sets are a different profile than what databases like SQLite are designed for. That is not to say that SQLite is slow or that Python is always faster, but when you start reading more than ~20% of an index an equilibrium point is reached. Especially when adding in writing extra temp tables and stuff to memory/disk. And especially given the `NOT IN` style of query...

You may also try chunking like this:

```py
from typing import Generator


def chunks(lst, n) -> Generator:
    for i in range(0, len(lst), n):
        yield lst[i : i + n]


SQLITE_PARAM_LIMIT = 32765

data = []
chunked = chunks(video_ids, SQLITE_PARAM_LIMIT)
for ids in chunked:
    data.extend(
        list(
            db.query(
                f""""""SELECT * from videos WHERE id in ("""""" + "","".join([""?""] * len(ids)) + "")"",
                (*ids,),
            )
        )
    )
```

but that actually won't work with your `NOT IN` requirements. You need to query the full resultset to check any row. 
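If it is mostly id lookups, a rough sketch of the fetch-then-filter-in-Python approach (table and column names assumed from the snippet above):

```python
from sqlite_utils import Database

db = Database('video.db')

# read the id column once, then do the NOT IN filtering in Python
existing = {row['id'] for row in db.query('SELECT id FROM videos')}
missing = [v for v in video_ids if v not in existing]
```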
Since you are doing stuff with files/videos in SQLite, you might be interested in my side project: https://github.com/chapmanjacobd/library","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1733198948,Filter table by a large bunch of ids,
https://github.com/simonw/sqlite-utils/issues/556#issuecomment-1575310378,https://api.github.com/repos/simonw/sqlite-utils/issues/556,1575310378,IC_kwDOCGYnMM5d5VQq,601708,mcint,2023-06-04T01:21:15Z,2023-06-04T01:21:15Z,CONTRIBUTOR,"I've resolved my use case with line-buffered output and a while-read loop for line-buffered input, but I leave this here so the incremental-saving or line-buffered use case can be explicitly handled or rejected (or deferred).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740026046,Support storing incrementally piped values,
https://github.com/simonw/sqlite-utils/issues/527#issuecomment-1540900733,https://api.github.com/repos/simonw/sqlite-utils/issues/527,1540900733,IC_kwDOCGYnMM5b2Ed9,167893,mcarpenter,2023-05-09T21:15:05Z,2023-05-09T21:15:05Z,CONTRIBUTOR,"Sorry, I completely missed your first comment whilst on Easter break. This looks like a good practical compromise before v4. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1578790070,`Table.convert()` skips falsey values,
https://github.com/simonw/sqlite-utils/pull/531#issuecomment-1501017004,https://api.github.com/repos/simonw/sqlite-utils/issues/531,1501017004,IC_kwDOCGYnMM5Zd7Os,25778,eyeseast,2023-04-09T01:49:43Z,2023-04-09T01:49:43Z,CONTRIBUTOR,I'm going to close this in favor of #536. Will try a cleaner approach to custom paths once that one is merged.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1620164673,Add paths for homebrew on Apple silicon,
https://github.com/simonw/sqlite-utils/pull/531#issuecomment-1465315726,https://api.github.com/repos/simonw/sqlite-utils/issues/531,1465315726,IC_kwDOCGYnMM5XVvGO,25778,eyeseast,2023-03-12T22:21:56Z,2023-03-12T22:21:56Z,CONTRIBUTOR,"Exactly, that's what I was running into. On my M2 MacBook, SpatiaLite ends up in what is -- for the moment -- a non-standard location, so even when I passed in the location with `--load-extension`, I still hit an error on `create-spatial-index`.

What I learned doing this originally is that SQLite needs to load the extension for each connection, even if all the SpatiaLite stuff is already in the database. So that's why `init_spatialite()` gets called again.

Here's the code where I hit the error: https://github.com/eyeseast/boston-parcels/blob/main/Makefile#L30 It works using this branch.

I'm not attached to this solution if you can think of something better. And I'm not sure, TBH, my test would actually catch what I'm after here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1620164673,Add paths for homebrew on Apple silicon,
https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1444474487,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1444474487,IC_kwDOCGYnMM5WGO53,167893,mcarpenter,2023-02-24T20:57:43Z,2023-02-24T22:22:18Z,CONTRIBUTOR,"I think I see what is happening here, although I haven't quite worked out a fix yet. 
Usually:

* `click.progressbar.render_progress()` renders the cursor invisible on each invocation (update of the bar)
* When the progress bar goes out of scope, the `__exit__()` method is invoked, which calls `render_finish()` to make the cursor re-appear.

(See terminal escape sequences `BEFORE_BAR` and `AFTER_BAR` in click).

However the sqlite-utils `utils.file_progress` context manager wraps `click.progressbar` and yields an instance of a helper class:

``` python
@contextlib.contextmanager
def file_progress(file, silent=False, **kwargs):
    ...
    with click.progressbar(length=file_length, **kwargs) as bar:
        yield UpdateWrapper(file, bar.update)
```

The yielded `UpdateWrapper` goes out of scope quickly and `click.progressbar.__exit__()` is called. The cursor is made visible again. However `bar` is still live, and so when the caller iterates on the yielded wrapper this invokes the bar's update method, calling `render_progress()`, each time printing the ""make cursor invisible"" escape code. The `progressbar.__exit__` function is not called again, so the cursor doesn't re-appear. ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903,CLI eats my cursor,
https://github.com/simonw/sqlite-utils/issues/525#issuecomment-1435318713,https://api.github.com/repos/simonw/sqlite-utils/issues/525,1435318713,IC_kwDOCGYnMM5VjTm5,167893,mcarpenter,2023-02-17T21:55:01Z,2023-02-17T21:55:01Z,CONTRIBUTOR,"Meanwhile, a cheap workaround is to invalidate the registered function cache:

``` python
table.convert(...)
db._registered_functions = set()
table.convert(...)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1575131737,Repeated calls to `Table.convert()` fail,
https://github.com/simonw/sqlite-utils/issues/520#issuecomment-1421571810,https://api.github.com/repos/simonw/sqlite-utils/issues/520,1421571810,IC_kwDOCGYnMM5Uu3bi,167893,mcarpenter,2023-02-07T22:43:09Z,2023-02-07T22:43:09Z,CONTRIBUTOR,"Hey, isn't this essentially the same issue as #448?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1516644980,rows_from_file() raises confusing error if file-like object is not in binary mode,
https://github.com/simonw/sqlite-utils/issues/524#issuecomment-1419357290,https://api.github.com/repos/simonw/sqlite-utils/issues/524,1419357290,IC_kwDOCGYnMM5Umaxq,25778,eyeseast,2023-02-06T16:21:44Z,2023-02-06T16:21:44Z,CONTRIBUTOR,SQLite doesn't have a native `DATETIME` type. It stores dates internally as strings and then has [functions](https://www.sqlite.org/lang_datefunc.html) to work with date-like strings. 
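For example, all of these work on plain text values:

```sql
select date('2023-02-06', '+7 days');          -- 2023-02-13
select datetime('now');                        -- current UTC timestamp as text
select strftime('%Y', '2023-02-06 16:21:44');  -- 2023
```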
Yes it's weird.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1572766460,Transformation type `--type DATETIME`,
https://github.com/simonw/sqlite-utils/issues/525#issuecomment-1423387341,https://api.github.com/repos/simonw/sqlite-utils/issues/525,1423387341,IC_kwDOCGYnMM5U1yrN,167893,mcarpenter,2023-02-08T23:48:52Z,2023-02-09T00:17:30Z,CONTRIBUTOR,PR below,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1575131737,Repeated calls to `Table.convert()` fail,
https://github.com/simonw/sqlite-utils/pull/203#issuecomment-1404070841,https://api.github.com/repos/simonw/sqlite-utils/issues/203,1404070841,IC_kwDOCGYnMM5TsGu5,536941,fgregg,2023-01-25T18:47:18Z,2023-01-25T18:47:18Z,CONTRIBUTOR,I'll adopt this PR to make the changes @simonw suggested https://github.com/simonw/sqlite-utils/pull/203#issuecomment-753567932,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743384829,changes to allow for compound foreign keys,
https://github.com/simonw/sqlite-utils/issues/523#issuecomment-1407264466,https://api.github.com/repos/simonw/sqlite-utils/issues/523,1407264466,IC_kwDOCGYnMM5T4SbS,536941,fgregg,2023-01-28T02:41:14Z,2023-01-28T02:41:14Z,CONTRIBUTOR,"I also often run another little script to cast all empty strings to null, but I'll save that for another issue if this gets accepted.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1560651350,Feature request: trim all leading and trailing white space for all columns for all tables in a database,
https://github.com/simonw/sqlite-utils/issues/510#issuecomment-1318777114,https://api.github.com/repos/simonw/sqlite-utils/issues/510,1318777114,IC_kwDOCGYnMM5OmvEa,7908073,chapmanjacobd,2022-11-17T15:09:47Z,2022-11-17T15:09:47Z,CONTRIBUTOR,"Why close? Is the only problem that the `_config` table incorrectly says 4 for fts5? If so, that's still something that should be fixed","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1434911255,Cannot enable FTS5 despite it being available,
https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304320521,https://api.github.com/repos/simonw/sqlite-utils/issues/511,1304320521,IC_kwDOCGYnMM5NvloJ,7908073,chapmanjacobd,2022-11-04T22:54:09Z,2022-11-04T22:59:54Z,CONTRIBUTOR,I ran `PRAGMA integrity_check` and it returned `ok`. But then I tried restoring from a backup and I didn't get this `IntegrityError: constraint failed` error. So I think it was just something wrong with my database. 
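For reference, both checks are plain SQLite and can be run from any client:

```sql
PRAGMA integrity_check;  -- scans the whole file, returns 'ok' when clean
REINDEX;                 -- rebuilds every index, in case one is out of sync
```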
If it happens again I will first try to reindex and see if that fixes the issue,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1436539554,"[insert_all, upsert_all] IntegrityError: constraint failed", https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304078945,https://api.github.com/repos/simonw/sqlite-utils/issues/511,1304078945,IC_kwDOCGYnMM5Nuqph,7908073,chapmanjacobd,2022-11-04T19:38:36Z,2022-11-04T20:13:17Z,CONTRIBUTOR,"Even more bizarre, the source db only has one record and the target table has no conflicting record: ``` 875 0.3s lb:/ (main|✚2) [0|0]🌺 sqlite-utils tube_71.db 'select * from media where path = ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz""' | jq [ { ""size"": null, ""time_created"": null, ""play_count"": 1, ""language"": null, ""view_count"": null, ""width"": null, ""height"": null, ""fps"": null, ""average_rating"": null, ""live_status"": null, ""age_limit"": null, ""uploader"": null, ""time_played"": 0, ""path"": ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"", ""id"": ""088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz/074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv"", ""ie_key"": ""ArchiveOrg"", ""playlist_path"": ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"", ""duration"": 1424.05, ""tags"": null, ""title"": ""074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv"" } ] 875 0.3s lb:/ (main|✚2) [0|0]🥧 sqlite-utils video.db 'select * from media where path = ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz""' | jq [] ``` I've been able to use this code successfully several times before so not sure what's causing the issue. I guess the way that I'm handling multiple databases is an issue, though it hasn't ever inserted into the source db, not sure what's different. The only reasonable explanation is that it is trying to insert into the source db from the source db for some reason? Or maybe sqlite3 is checking the source db for primary key violation because the table name is the same","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1436539554,"[insert_all, upsert_all] IntegrityError: constraint failed", https://github.com/simonw/sqlite-utils/issues/50#issuecomment-1303660293,https://api.github.com/repos/simonw/sqlite-utils/issues/50,1303660293,IC_kwDOCGYnMM5NtEcF,7908073,chapmanjacobd,2022-11-04T14:38:36Z,2022-11-04T14:38:36Z,CONTRIBUTOR,where did you see the limit as 999? I believe the limit has been 32766 for quite some time. If you could detect which one this could speed up batch insert of some types of data significantly,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",473083260,"""Too many SQL variables"" on large inserts", https://github.com/simonw/sqlite-utils/pull/508#issuecomment-1297788531,https://api.github.com/repos/simonw/sqlite-utils/issues/508,1297788531,IC_kwDOCGYnMM5NWq5z,7908073,chapmanjacobd,2022-10-31T22:54:33Z,2022-11-17T15:11:16Z,CONTRIBUTOR,"Maybe this is actually a problem in the python sqlite bindings. 
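The difference is easy to demonstrate in plain Python: lone surrogates (for example from `surrogateescape` decoding of undecodable filenames) round-trip with `surrogatepass` but not with strict UTF-8:

```python
s = '\udcff'  # lone surrogate, e.g. produced by surrogateescape on a bad byte
s.encode('utf-8', 'surrogatepass')  # b'\xed\xb3\xbf'
s.encode('utf-8')  # raises UnicodeEncodeError
```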
Given [SQLITE's stance on this](https://www.sqlite.org/invalidutf.html) they should probably use `encode('utf-8', 'surrogatepass')`. As far as I understand the error here won't actually be resolved by this PR as-is. We would need to modify the data with `surrogateescape`... :/ or modify the sqlite3 module to use `surrogatepass`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1430563092,Allow surrogates in parameters, https://github.com/simonw/sqlite-utils/issues/448#issuecomment-1297703307,https://api.github.com/repos/simonw/sqlite-utils/issues/448,1297703307,IC_kwDOCGYnMM5NWWGL,167893,mcarpenter,2022-10-31T21:23:51Z,2022-10-31T21:27:32Z,CONTRIBUTOR,"The Windows aspect is a red herring: OP's sample above produces the same error on Linux. (Though I don't know what's going on with the CI). The same error can also be obtained by passing an `io` from a file opened in non-binary mode (`'r'` as opposed to `'rb'`) to `rows_from_file()`. This is how I got here. The fix for my case is easy: open the file in mode `'rb'`. The analagous fix for OP's problem also works: use `BytesIO` in place of `StringIO`. Minimal test case (derived from [utils.py](https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/utils.py#L304)): ``` python import io from typing import cast #fp = io.StringIO(""id,name\n1,Cleo"") # error fp = io.BytesIO(bytes(""id,name\n1,Cleo"", encoding='utf-8')) # okay reader = io.BufferedReader(cast(io.RawIOBase, fp)) reader.peek(1) # exception thrown here ``` I see the signature of `rows_from_file()` correctly has `fp: BinaryIO` but I guess you'd need either a runtime type check for that (not all `io`s have `mode()`), or to catch the `AttributeError` on `peek()` to produce a better error for users. Neither option is ideal. Some thoughts on testing binary-ness of `io`s in this SO question: https://stackoverflow.com/questions/44584829/how-to-determine-if-file-is-opened-in-binary-or-text-mode","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279144769,Reading rows from a file => AttributeError: '_io.StringIO' object has no attribute 'readinto', https://github.com/simonw/sqlite-utils/issues/507#issuecomment-1297859539,https://api.github.com/repos/simonw/sqlite-utils/issues/507,1297859539,IC_kwDOCGYnMM5NW8PT,7908073,chapmanjacobd,2022-11-01T00:40:16Z,2022-11-01T00:40:16Z,CONTRIBUTOR,"Ideally people could fix their data if they run into this issue. If you are using filenames try [convmv](https://linux.die.net/man/1/convmv) ``` convmv --preserve-mtimes -f utf8 -t utf8 --notest -i -r . ``` maybe this script will also help: ```py import argparse, shutil from pathlib import Path import ftfy from xklb import utils from xklb.utils import log def parse_args() -> argparse.Namespace: parser = argparse.ArgumentParser() parser.add_argument(""paths"", nargs='*') parser.add_argument(""--verbose"", ""-v"", action=""count"", default=0) args = parser.parse_args() log.info(utils.dict_filter_bool(args.__dict__)) return args def rename_invalid_paths() -> None: args = parse_args() for path in args.paths: log.info(path) for p in sorted([str(p) for p in Path(path).rglob(""*"")], key=len): fixed = ftfy.fix_text(p, uncurl_quotes=False).replace(""\r\n"", ""\n"").replace(""\r"", ""\n"").replace(""\n"", """") if p != fixed: try: shutil.move(p, fixed) except FileNotFoundError: log.warning(""FileNotFound. 
%s"", p) else: log.info(fixed) if __name__ == ""__main__"": rename_invalid_paths() ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1430325103,conn.execute: UnicodeEncodeError: 'utf-8' codec can't encode character, https://github.com/simonw/sqlite-utils/pull/499#issuecomment-1292401308,https://api.github.com/repos/simonw/sqlite-utils/issues/499,1292401308,IC_kwDOCGYnMM5NCHqc,7908073,chapmanjacobd,2022-10-26T17:54:26Z,2022-10-26T17:54:51Z,CONTRIBUTOR,"The problem with how it is currently is that the transformed fts table _will_ return incorrect results (unless the table was only 1 row or something), even if create_triggers was enabled previously. Maybe the simplest solution is to disable fts on a transformed table rather than try to recreate it? Thoughts?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1405196044,feat: recreate fts triggers after table transform, https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223554,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223554,IC_kwDOCGYnMM5LWoVC,7908073,chapmanjacobd,2022-10-01T03:42:50Z,2022-10-01T03:42:50Z,CONTRIBUTOR,oh weird. it inserts into db2,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489,`with db:` for transactions, https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223363,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223363,IC_kwDOCGYnMM5LWoSD,7908073,chapmanjacobd,2022-10-01T03:41:45Z,2022-10-01T03:41:45Z,CONTRIBUTOR,"``` pytest xklb/check.py --pdb xklb/check.py:11: in test_transaction assert list(db2[""t""].rows) == [] E AssertionError: assert [{'foo': 1}] == [] E + where [{'foo': 1}] = list() E + where = .rows >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > /home/xk/github/xk/lb/xklb/check.py(11)test_transaction() 9 with db1.conn: 10 db1[""t""].insert({""foo"": 1}) ---> 11 assert list(db2[""t""].rows) == [] 12 assert list(db2[""t""].rows) == [{""foo"": 1}] ``` It fails because it is already inserted. 
btw if you put these two lines in you pyproject.toml you can get `ipdb` in pytest ``` [tool.pytest.ini_options] addopts = ""--pdbcls=IPython.terminal.debugger:TerminalPdb --ignore=tests/data --capture=tee-sys --log-cli-level=ERROR"" ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489,`with db:` for transactions, https://github.com/simonw/sqlite-utils/issues/493#issuecomment-1264219650,https://api.github.com/repos/simonw/sqlite-utils/issues/493,1264219650,IC_kwDOCGYnMM5LWnYC,7908073,chapmanjacobd,2022-10-01T03:22:50Z,2022-10-01T03:23:58Z,CONTRIBUTOR,"this is likely what you are looking for: https://stackoverflow.com/a/51076749/697964 but yeah I would say just disable smart quotes","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1386562662,Tiny typographical error in install/uninstall docs, https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1264218914,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1264218914,IC_kwDOCGYnMM5LWnMi,7908073,chapmanjacobd,2022-10-01T03:18:36Z,2023-06-14T22:14:24Z,CONTRIBUTOR,"> some good concrete use-cases in mind I actually found myself wanting something like this the past couple days. The use-case was databases with slightly different schema but same table names. here is a full script: ``` import argparse from pathlib import Path from sqlite_utils import Database def connect(args, conn=None, **kwargs) -> Database: db = Database(conn or args.database, **kwargs) with db.conn: db.conn.execute(""PRAGMA main.cache_size = 8000"") return db def parse_args() -> argparse.Namespace: parser = argparse.ArgumentParser() parser.add_argument(""database"") parser.add_argument(""dbs_folder"") parser.add_argument(""--db"", ""-db"", help=argparse.SUPPRESS) parser.add_argument(""--verbose"", ""-v"", action=""count"", default=0) args = parser.parse_args() if args.db: args.database = args.db Path(args.database).touch() args.db = connect(args) return args def merge_db(args, source_db): source_db = str(Path(source_db).resolve()) s_db = connect(argparse.Namespace(database=source_db, verbose = args.verbose)) for table in s_db.table_names(): data = s_db[table].rows args.db[table].insert_all(data, alter=True, replace=True) args.db.conn.commit() def merge_directory(): args = parse_args() source_dbs = list(Path(args.dbs_folder).glob('*.db')) for s_db in source_dbs: merge_db(args, s_db) if __name__ == '__main__': merge_directory() ``` edit: I've made some improvements to this and put it on PyPI: ``` $ pip install xklb $ lb merge-db -h usage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...] Merge-DBs will insert new rows from source dbs to target db, table by table. If primary key(s) are provided, and there is an existing row with the same PK, the default action is to delete the existing row and insert the new row replacing all existing fields. Upsert mode will update matching PK rows such that if a source row has a NULL field and the destination row has a value then the value will be preserved instead of changed to the source row's NULL value. Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db Test first by using temp databases as the destination db. 
Try out different modes / flags until you are satisfied with the behavior of the program library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db Merge database data and tables library merge-dbs --upsert --pk path video.db tv.db movies.db library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path audio-fts.db audio.db library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db library merge-dbs --only-new-rows --pk subreddit,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v positional arguments: database source_dbs ``` Also if you want to dedupe a table based on a ""business key"" which isn't explicitly your primary key(s) you can run this: ``` $ lb dedupe-db -h usage: library dedupe-dbs DATABASE TABLE --bk BUSINESS_KEYS [--pk PRIMARY_KEYS] [--only-columns COLUMNS] Dedupe your database (not to be confused with the dedupe subcommand) It should not need to be said but *backup* your database before trying this tool! Dedupe-DB will help remove duplicate rows based on non-primary-key business keys library dedupe-db ./video.db media --bk path If --primary-keys is not provided table metadata primary keys will be used If --only-columns is not provided all non-primary and non-business key columns will be upserted positional arguments: database table options: -h, --help show this help message and exit --skip-0 --only-columns ONLY_COLUMNS Comma separated column names to upsert --primary-keys PRIMARY_KEYS, --pk PRIMARY_KEYS Comma separated primary keys --business-keys BUSINESS_KEYS, --bk BUSINESS_KEYS Comma separated business keys ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables, https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1258712931,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1258712931,IC_kwDOCGYnMM5LBm9j,25778,eyeseast,2022-09-26T22:31:58Z,2022-09-26T22:31:58Z,CONTRIBUTOR,"Right. The backup command will copy tables completely, but in the case of conflicting table names, the destination gets overwritten silently. That might not be what you want here. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables, https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1258508215,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1258508215,IC_kwDOCGYnMM5LA0-3,25778,eyeseast,2022-09-26T19:22:14Z,2022-09-26T19:22:14Z,CONTRIBUTOR,"This might be fairly straightforward using SQLite's backup utility: https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.backup ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables, https://github.com/simonw/sqlite-utils/pull/498#issuecomment-1274153135,https://api.github.com/repos/simonw/sqlite-utils/issues/498,1274153135,IC_kwDOCGYnMM5L8giv,7908073,chapmanjacobd,2022-10-11T06:34:31Z,2022-10-11T06:34:31Z,CONTRIBUTOR,nevermind it was because I was running `db[table].transform`. 
The fts tables would still be there but the triggers would be dropped,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1404013495,fix: enable-fts permanently save triggers, https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1252898131,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1252898131,IC_kwDOCGYnMM5KrbVT,7908073,chapmanjacobd,2022-09-20T20:51:21Z,2022-09-20T20:56:07Z,CONTRIBUTOR,"When I run `reset` it fixes my terminal. I suspect it is related to the progress bar https://linux.die.net/man/1/reset ``` 950 1s /m/d/03_Downloads 🐑 echo $TERM xterm-kitty ▓░▒░ /m/d/03_Downloads 🌏 kitty -v kitty 0.26.2 created by Kovid Goyal $ sqlite-utils insert test.db facility facility-boundary-us-all.csv --csv blah blah blah (no offense) $ $ reset $ ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903,CLI eats my cursor, https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1256858763,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1256858763,IC_kwDOCGYnMM5K6iSL,7908073,chapmanjacobd,2022-09-24T04:50:59Z,2022-09-24T04:52:08Z,CONTRIBUTOR,"Instead of outputting binary data to stdout the interface might be better like this ``` sqlite-utils merge animals.db cats.db dogs.db ``` similar to `zip`, `ogr2ogr`, etc Actually I think this might already be possible within `ogr2ogr`. I don't believe spatial data is a requirement though it might add an `ogc_id` column or something ``` cp cats.db animals.db ogr2ogr -append animals.db dogs.db ogr2ogr -append animals.db another.db ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables, https://github.com/simonw/sqlite-utils/pull/480#issuecomment-1232356302,https://api.github.com/repos/simonw/sqlite-utils/issues/480,1232356302,IC_kwDOCGYnMM5JdEPO,7908073,chapmanjacobd,2022-08-31T01:51:49Z,2022-08-31T01:51:49Z,CONTRIBUTOR,Thanks for pointing me to the right place,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1355433619,search_sql add include_rank option, https://github.com/simonw/sqlite-utils/issues/467#issuecomment-1224382336,https://api.github.com/repos/simonw/sqlite-utils/issues/467,1224382336,IC_kwDOCGYnMM5I-peA,50527,jefftriplett,2022-08-23T17:16:13Z,2022-08-23T17:16:13Z,CONTRIBUTOR,"> Should passing `alter=True` also drop any columns that aren't included in the new table structure? > > It could even spot column types that aren't correct and fix those. > > Is that consistent with the expectations set by how `alter=True` works elsewhere? I would lean towards not dropping them (or making a `drop=True` or `drop_columns=True`or `drop_missing_columns=True`) to work with existing tables easier. I do like that sqlite-utils mostly just works with existing tables but it's also nice to add to existing fields in a few cases. 
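For what it's worth, the current additive behavior is quick to demo:

```python
from sqlite_utils import Database

db = Database(memory=True)
db['t'].insert({'a': 1})
db['t'].insert({'a': 2, 'b': 'hi'}, alter=True)  # adds the missing column
print(db['t'].columns_dict)  # {'a': <class 'int'>, 'b': <class 'str'>}
```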
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1348169997,Mechanism for ensuring a table has all the columns, https://github.com/simonw/sqlite-utils/issues/449#issuecomment-1179579878,https://api.github.com/repos/simonw/sqlite-utils/issues/449,1179579878,IC_kwDOCGYnMM5GTvXm,1690072,davidleejy,2022-07-09T17:41:32Z,2022-07-09T17:41:50Z,CONTRIBUTOR,Learnt that the types in Sqlite-utils differ somewhat from those in Sqlite. I've changed my test to account for this difference and the test has passed successfully. I will submit a PR.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279863844,Utilities for duplicating tables and creating a table with the results of a query, https://github.com/simonw/sqlite-utils/issues/456#issuecomment-1190277829,https://api.github.com/repos/simonw/sqlite-utils/issues/456,1190277829,IC_kwDOCGYnMM5G8jLF,536941,fgregg,2022-07-20T13:19:15Z,2022-07-20T13:19:15Z,CONTRIBUTOR,hadley wickham's melt and reshape could be good inspo: http://had.co.nz/reshape/introduction.pdf,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1310243385,feature request: pivot command, https://github.com/simonw/sqlite-utils/issues/456#issuecomment-1190272780,https://api.github.com/repos/simonw/sqlite-utils/issues/456,1190272780,IC_kwDOCGYnMM5G8h8M,536941,fgregg,2022-07-20T13:14:54Z,2022-07-20T13:14:54Z,CONTRIBUTOR,"for example, i have data on votes that look like this: | ballot_id | option_id | choice | |-|-|-| | 1 | 1 | 0 | | 1 | 2 | 1 | | 1 | 3 | 0 | | 1 | 4 | 1 | | 2 | 1 | 1 | | 2 | 2 | 0 | | 2 | 3 | 1 | | 2 | 4 | 0 | and i want to reshape from this long form to this wide form: | ballot_id | option_id_1 | option_id_2 | option_id_3 | option_id_ 4| |-|-|-|-| -| | 1 | 0 | 1 | 0 | 1 | | 2 | 1 | 0 | 1| 0 | i could do such a think like this. ```sql select ballot_id, sum(choice) filter (where option_id = 1) as option_id_1, sum(choice) filter (where option_id = 2) as option_id_2, sum(choice) filter (where option_id = 3) as option_id_3, sum(choice) filter (where option_id = 4) as option_id_4 from vote group by ballot_id ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1310243385,feature request: pivot command, https://github.com/simonw/sqlite-utils/issues/423#issuecomment-1189010812,https://api.github.com/repos/simonw/sqlite-utils/issues/423,1189010812,IC_kwDOCGYnMM5G3t18,536941,fgregg,2022-07-19T12:47:39Z,2022-07-19T12:47:39Z,CONTRIBUTOR,just ran into this!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1199158210,.extract() doesn't set foreign key when extracted columns contain NULL value, https://github.com/simonw/sqlite-utils/issues/449#issuecomment-1174027079,https://api.github.com/repos/simonw/sqlite-utils/issues/449,1174027079,IC_kwDOCGYnMM5F-jtH,1690072,davidleejy,2022-07-04T17:33:04Z,2022-07-04T17:48:43Z,CONTRIBUTOR,"I've written the code and test. Would you be able to advise how to compare table columns in a pytest function properly? Experiencing a challenge when comparing columns. 
Test: ```python def test_duplicate(fresh_db): table = fresh_db.create_table( ""table1"", { ""text_col"": str, ""float_col"": float, ""int_col"": int, ""bool_col"": bool, ""bytes_col"": bytes, ""datetime_col"": datetime.datetime, }, ) dt = datetime.datetime.now() b = bytes('hello world', 'utf-8') data = {""text_col"": ""Cleo"", ""float_col"": 3.14, ""int_col"": -2, ""bool_col"": True, ""bytes_col"": b, ""datetime_col"": str(dt)} table1 = fresh_db[""table1""] row_id = table1.insert(data).last_rowid table1.duplicate('table2') table2 = fresh_db[""table2""] assert data == table2.get(row_id) assert table1.columns == table2.columns # FAILS HERE ``` Result: ![Screenshot 2022-07-05 at 1 31 55 AM](https://user-images.githubusercontent.com/1690072/177198814-daac48c9-5746-49d0-a14a-14fe181c5a2f.png) Failure is due to column types being named differently -- e.g. 'FLOAT' vs 'REAL', 'INTEGER' vs 'INT'. How should I go about comparing columns while accounting for equivalent types? Or did I miss out something in my duplication code correctly? Here's how I did it: in `db.py`, I've added the following code: ```python class Table(Queryable): [...] def duplicate( self, name_new: str ) -> ""Table"": """""" Duplicate this table in this database. :param name_new: Name of new table. """""" assert self.exists() with self.db.conn: sql = ""CREATE TABLE [{new_table}] AS SELECT * FROM [{table}];"".format( new_table = name_new, table = self.name, ) self.db.execute(sql) return self.db[name_new] ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279863844,Utilities for duplicating tables and creating a table with the results of a query, https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1077671779,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1077671779,IC_kwDOCGYnMM5AO_dj,25778,eyeseast,2022-03-24T14:11:33Z,2022-03-24T14:11:43Z,CONTRIBUTOR,"Coming back to this. I was about to add a utility function to [datasette-geojson]() to convert lat/lng columns to geometries. Thankfully I googled first. There's a SpatiaLite function for this: [MakePoint](https://www.gaia-gis.it/gaia-sins/spatialite-sql-latest.html#p0). ```sql select MakePoint(longitude, latitude) as geometry from places; ``` I'm not sure if that would work with `conversions`, since it needs two columns, but it's an option for tables that already have latitude, longitude columns.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,"Make it easier to insert geometries, with documentation and maybe code", https://github.com/simonw/sqlite-utils/issues/131#issuecomment-1067981656,https://api.github.com/repos/simonw/sqlite-utils/issues/131,1067981656,IC_kwDOCGYnMM4_qBtY,25778,eyeseast,2022-03-15T13:21:42Z,2022-03-15T13:21:42Z,CONTRIBUTOR,"Just ran into this issue last night. I have a big table that's _mostly_ numbers, but also a zip code column in a state where ZIP codes start with 0. 
Would be great to run something like this: ```sh sqlite-utils insert data.db places file.csv --csv --detect-types --type zipcode text ``` Maybe I'll take a crack at this one.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",675753042,sqlite-utils insert: options for column types, https://github.com/simonw/sqlite-utils/issues/411#issuecomment-1065477258,https://api.github.com/repos/simonw/sqlite-utils/issues/411,1065477258,IC_kwDOCGYnMM4_geSK,25778,eyeseast,2022-03-11T20:14:59Z,2022-03-11T20:14:59Z,CONTRIBUTOR,"Good call on adding this to `create-table`, especially for stored columns. Having the stored/virtual split might make this tricky to implement, but I haven't gone any farther than thinking about what the CLI looks like. I'm going to try making the SQL side work first and figure that'll tell me more about what it needs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1160034488,Support for generated columns, https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059647114,https://api.github.com/repos/simonw/sqlite-utils/issues/412,1059647114,IC_kwDOCGYnMM4_KO6K,25778,eyeseast,2022-03-05T01:54:24Z,2022-03-05T01:54:24Z,CONTRIBUTOR,"I haven't tried this, but it looks like Pandas has a method for this: https://pandas.pydata.org/docs/reference/api/pandas.read_sql_query.html ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1160182768,Optional Pandas integration, https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1035057014,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1035057014,IC_kwDOCGYnMM49sbd2,25778,eyeseast,2022-02-10T15:30:28Z,2022-02-10T15:30:40Z,CONTRIBUTOR,"Yeah, the CLI experience is probably where any kind of multi-column, configured setup is going to fall apart. 
Sticking with GIS examples, one way I might think about this is using the [fiona CLI](https://fiona.readthedocs.io/en/latest/cli.html):

```sh
# assuming a database is already created and has SpatiaLite
fio cat boundary.shp | sqlite-utils insert boundaries --conversion geometry GeometryGeoJSON -
```

Anyway, very interested to see where you land here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,Advanced class-based `conversions=` mechanism,
https://github.com/simonw/sqlite-utils/issues/403#issuecomment-1033332570,https://api.github.com/repos/simonw/sqlite-utils/issues/403,1033332570,IC_kwDOCGYnMM49l2da,536941,fgregg,2022-02-09T04:22:43Z,2022-02-09T04:22:43Z,CONTRIBUTOR,dddoooope,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1126692066,Document how to add a primary key to a rowid table using `sqlite-utils transform --pk`,
https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1032732242,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1032732242,IC_kwDOCGYnMM49jj5S,25778,eyeseast,2022-02-08T15:26:59Z,2022-02-08T15:26:59Z,CONTRIBUTOR,"What if you did something like this:

```python
class Conversion:
    def __init__(self, *args, **kwargs):
        ""Put whatever settings you need here""

    def python(self, row, column, value):  # not sure on args here
        ""Python step to transform value""
        return value

    def sql(self, row, column, value):
        ""Return the actual sql that goes in the insert/update step, and maybe params""
        # value is the return of self.python()
        return value, []
```

This way, you're always passing an instance, which has methods that do the conversion. (Or you're passing a SQL string, as you would now.) The `__init__` could take column names, or SRID, or whatever other setup state you need per row, but the row is getting processed with the `python` and `sql` methods (or whatever you want to call them). This is pretty rough, so do what you will with names and args and such.

You'd then use it like this:

```python
# subclass might be unneeded here, if methods are present
class LngLatConversion(Conversion):
    def __init__(self, x=""longitude"", y=""latitude""):
        self.x = x
        self.y = y

    def python(self, row, column, value):
        x = row[self.x]
        y = row[self.y]
        return x, y

    def sql(self, row, column, value):
        # value is now a tuple, returned above
        s = ""GeomFromText(POINT(? ?))""
        return s, value


table.insert_all(rows, conversions={""point"": LngLatConversion(""lng"", ""lat"")})
```

I haven't thought through all the implementation details here, and it'll probably break in ways I haven't foreseen, but wanted to get this idea out of my head. 
Hope it helps.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,Advanced class-based `conversions=` mechanism, https://github.com/simonw/sqlite-utils/issues/403#issuecomment-1032126353,https://api.github.com/repos/simonw/sqlite-utils/issues/403,1032126353,IC_kwDOCGYnMM49hP-R,536941,fgregg,2022-02-08T01:45:15Z,2022-02-08T01:45:31Z,CONTRIBUTOR,"you can hack something like this to achieve this result: `sqlite-utils convert my_database my_table rowid ""{'id': value}"" --multi`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1126692066,Document how to add a primary key to a rowid table using `sqlite-utils transform --pk`, https://github.com/simonw/sqlite-utils/issues/26#issuecomment-1032120014,https://api.github.com/repos/simonw/sqlite-utils/issues/26,1032120014,IC_kwDOCGYnMM49hObO,536941,fgregg,2022-02-08T01:32:34Z,2022-02-08T01:32:34Z,CONTRIBUTOR,"if you are curious about prior art, https://github.com/jsnell/json-to-multicsv is really good!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455486286,Mechanism for turning nested JSON into foreign keys / many-to-many, https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1031779460,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1031779460,IC_kwDOCGYnMM49f7SE,25778,eyeseast,2022-02-07T18:24:56Z,2022-02-07T18:24:56Z,CONTRIBUTOR,"I wonder if there's any overlap with the goals here and the `sqlite3` module's concept of adapters and converters: https://docs.python.org/3/library/sqlite3.html#sqlite-and-python-types I'm not sure that's _exactly_ what we're talking about here, but it might be a parallel with some useful ideas to borrow.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,Advanced class-based `conversions=` mechanism, https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1031791783,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1031791783,IC_kwDOCGYnMM49f-Sn,25778,eyeseast,2022-02-07T18:37:40Z,2022-02-07T18:37:40Z,CONTRIBUTOR,"I've never used it either, but it's interesting, right? Feel like I should try it for something. I'm trying to get my head around how this conversions feature might work, because I really like the idea of it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,Advanced class-based `conversions=` mechanism, https://github.com/simonw/sqlite-utils/issues/398#issuecomment-1030629879,https://api.github.com/repos/simonw/sqlite-utils/issues/398,1030629879,IC_kwDOCGYnMM49bin3,25778,eyeseast,2022-02-05T13:57:33Z,2022-02-05T19:49:38Z,CONTRIBUTOR,"I'm mostly using [geojson-to-sqlite](https://github.com/simonw/geojson-to-sqlite) at the moment. Even with shapefiles, I'm usually converting to GeoJSON and projecting to EPSG:4326 (with [ogr2ogr](https://gdal.org/programs/ogr2ogr.html)) first. I think an open question here is how much you want to leave to external libraries and how much you want here. My thinking has been that adding Spatialite helpers here would make external stuff easier, but it would be nice to have some standard way to insert geometries. 
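For now, the closest thing I have to a standard way is WKT plus the existing `conversions=` mechanism, something like this (sketch; assumes SpatiaLite is loaded and the geometry column already exists):

```python
db['places'].insert(
    {'name': 'London', 'geom': 'POINT(-0.118092 51.509865)'},
    conversions={'geom': 'GeomFromText(?, 4326)'},
)
```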
I'm in the middle of adding GeoJSON and Spatialite support to [geocode-sqlite](https://github.com/eyeseast/geocode-sqlite), and that will probably use WKT. Since that's all points, I think I can just make the string inline. But for polygons, I'd generally use Shapely, which probably isn't a dependency you want to add to sqlite-utils. I've also been trying to get some of the approaches [here](https://www.gaia-gis.it/fossil/libspatialite/wiki?name=Supporting+GeoJSON) to work, but haven't had any success so far.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124237013,Add SpatiaLite helpers to CLI, https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030740826,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1030740826,IC_kwDOCGYnMM49b9ta,25778,eyeseast,2022-02-06T02:59:10Z,2022-02-06T02:59:10Z,CONTRIBUTOR,"All this said, I don't think it's unreasonable to point people to dedicated tools like `geojson-to-sqlite`. If I'm dealing with a bunch of GeoJSON or Shapefiles, I need to something to read those anyway (or I need to figure out virtual tables). But something like this might make it easier to build those libraries, or standardize the underlying parts.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,"Make it easier to insert geometries, with documentation and maybe code", https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030740653,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1030740653,IC_kwDOCGYnMM49b9qt,25778,eyeseast,2022-02-06T02:57:17Z,2022-02-06T02:57:17Z,CONTRIBUTOR,"I like the idea of having stock conversions you could import. I'd actually move them to a dedicated module (call it `sqlite_utils.conversions` or something), because it's different from other utilities. Maybe they even take configuration, or they're composable. ```python from sqlite_utils.conversions import LongitudeLatitude db[""places""].insert( { ""name"": ""London"", ""lng"": -0.118092, ""lat"": 51.509865, }, conversions={""point"": LongitudeLatitude(""lng"", ""lat"")}, ) ``` I would definitely use that for every CSV I get with lat/lng columns where I actually need GeoJSON.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,"Make it easier to insert geometries, with documentation and maybe code", https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030741289,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1030741289,IC_kwDOCGYnMM49b90p,25778,eyeseast,2022-02-06T03:03:43Z,2022-02-06T03:03:43Z,CONTRIBUTOR,"> I wonder if there are any interesting non-geospatial canned conversions that it would be worth including? Off the top of my head: - Un-nesting JSON objects into columns - Splitting arrays - Normalizing dates and times - URL munging with `urlparse` - Converting strings to numbers Some of this is easy enough with SQL functions, some is easier in Python. 
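For instance, the strings-to-numbers case already fits the SQL-fragment style of `conversions=` (hypothetical column names):

```python
db['products'].insert(
    {'name': 'widget', 'price': '3.50'},
    conversions={'price': 'CAST(? AS REAL)'},
)
```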
Maybe that's where having pre-built classes gets really handy, because it saves you from thinking about which way it's implemented.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,"Make it easier to insert geometries, with documentation and maybe code", https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1030002502,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1030002502,IC_kwDOCGYnMM49ZJdG,25778,eyeseast,2022-02-04T13:50:19Z,2022-02-04T13:50:19Z,CONTRIBUTOR,Awesome. Thanks for your help getting it in. Will now look at adding CLI versions of this. It's going to be super helpful on a bunch of my projects.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,Add new spatialite helper methods, https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029370537,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029370537,IC_kwDOCGYnMM49WvKp,25778,eyeseast,2022-02-03T20:25:58Z,2022-02-03T20:25:58Z,CONTRIBUTOR,"OK, I moved all the GIS helpers into `db.py` as methods on `Database` and `Table`, and I put `find_spatialite` back in `utils.py`. I deleted `gis.py`, since there's nothing left it. Docs and tests are updated and passing. I think this is better.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,Add new spatialite helper methods, https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029338360,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029338360,IC_kwDOCGYnMM49WnT4,25778,eyeseast,2022-02-03T19:43:56Z,2022-02-03T19:43:56Z,CONTRIBUTOR,"Works for me. I was just looking at how the FTS extensions work and they're just methods, too. So this can be consistent with that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,Add new spatialite helper methods, https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029326568,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029326568,IC_kwDOCGYnMM49Wkbo,25778,eyeseast,2022-02-03T19:28:26Z,2022-02-03T19:28:26Z,CONTRIBUTOR,"> `from sqlite_utils.utils import find_spatialite` is part of the documented API already: > > https://sqlite-utils.datasette.io/en/3.22.1/python-api.html#finding-spatialite > > To avoid needing to bump the major version number to 4 to indicate a backwards incompatible change, we should keep a `from .gis import find_spatialite` line at the top of `utils.py` such that any existing code with that documented import continues to work. This is fixed now. I had to take out the type annotations for `Database` and `Table` to avoid a circular import, but that's fine and may be moot if these become class methods.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,Add new spatialite helper methods, https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1029317527,https://api.github.com/repos/simonw/sqlite-utils/issues/79,1029317527,IC_kwDOCGYnMM49WiOX,25778,eyeseast,2022-02-03T19:18:02Z,2022-02-03T19:18:02Z,CONTRIBUTOR,"Taking part of the conversation from #385 here. > Would sqlite-utils add-geometry-column ... be a good CLI enhancement. for example? Yes. 
And also `sqlite-utils create-spatial-index` would be great to have. My plan would be to add those once the Python API is settled.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",557842245,Helper methods for working with SpatiaLite, https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029306428,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029306428,IC_kwDOCGYnMM49Wfg8,25778,eyeseast,2022-02-03T19:03:43Z,2022-02-03T19:03:43Z,CONTRIBUTOR,"I thought about adding these as methods on `Database` and `Table`, and I'm back and forth on it for the same reasons you are. It's certainly cleaner, and it's clearer what you're operating on. I could go either way. I do sort of like having all the Spatialite stuff in its own module, just because it's built around an extension you might not have or want, but I don't know if that's a good reason to have a different API. You could have `init_spatialite` add methods to `Database` and `Table`, so they're only there if you have Spatialite set up. Is that too clever? It feels too clever. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,Add new spatialite helper methods, https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029180984,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029180984,IC_kwDOCGYnMM49WA44,25778,eyeseast,2022-02-03T16:42:04Z,2022-02-03T16:42:04Z,CONTRIBUTOR,Fixed my spelling. That's a useful thing.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,Add new spatialite helper methods, https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029175907,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029175907,IC_kwDOCGYnMM49V_pj,25778,eyeseast,2022-02-03T16:36:54Z,2022-02-03T16:36:54Z,CONTRIBUTOR,"@simonw Not sure if you've seen this, but any chance you can run the tests?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,Add new spatialite helper methods, https://github.com/simonw/sqlite-utils/issues/398#issuecomment-1038336591,https://api.github.com/repos/simonw/sqlite-utils/issues/398,1038336591,IC_kwDOCGYnMM4948JP,25778,eyeseast,2022-02-13T18:48:21Z,2022-02-13T18:49:49Z,CONTRIBUTOR,"Been chipping away at this between other things and realized `sqlite-utils init-spatialite` is probably unnecessary. Any of the other commands requires running `db.init_spatialite` to have the extension functions available, and that will do everything `init-spatialite` would do. I think it's probably worth keeping a SpatiaLite flag on `create-database` in case you wanted to create all the spatial metadata up front. 
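For reference, a quick sketch of how the Python helpers from that PR chain together (method names as proposed in #385; exact signatures may differ):

```python
from sqlite_utils import Database

db = Database(""spatial.db"")
db.init_spatialite()  # load the extension and create the spatial metadata

db[""places""].insert({""name"": ""London""})
db[""places""].add_geometry_column(""geometry"", ""POINT"")
db[""places""].create_spatial_index(""geometry"")
```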
Otherwise, it's going to get added the first time you run `add-geometry-column` or `create-spatial-index`, which is probably fine in most cases.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124237013,Add SpatiaLite helpers to CLI, https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1013698557,https://api.github.com/repos/simonw/sqlite-utils/issues/79,1013698557,IC_kwDOCGYnMM48a8_9,25778,eyeseast,2022-01-15T15:15:22Z,2022-01-15T15:15:22Z,CONTRIBUTOR,@simonw I have a PR here https://github.com/simonw/sqlite-utils/pull/385 that adds Spatialite helpers on the Python side. Please let me know how it looks.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",557842245,Helper methods for working with SpatiaLite, https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1012413729,https://api.github.com/repos/simonw/sqlite-utils/issues/79,1012413729,IC_kwDOCGYnMM48WDUh,25778,eyeseast,2022-01-13T18:50:00Z,2022-01-13T18:50:00Z,CONTRIBUTOR,"One more thing I'm going to add: A method to add a geometry column, which I'll need to do to create a spatial index on a table.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",557842245,Helper methods for working with SpatiaLite, https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1012253198,https://api.github.com/repos/simonw/sqlite-utils/issues/79,1012253198,IC_kwDOCGYnMM48VcIO,25778,eyeseast,2022-01-13T15:39:14Z,2022-01-13T15:39:14Z,CONTRIBUTOR,"Other thing: If there get to be enough utils, I think it's worth moving all the spatialite stuff into its own file (`gis.py` or something) just so it's easier to find later.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",557842245,Helper methods for working with SpatiaLite, https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1012230212,https://api.github.com/repos/simonw/sqlite-utils/issues/79,1012230212,IC_kwDOCGYnMM48VWhE,25778,eyeseast,2022-01-13T15:15:13Z,2022-01-13T15:15:13Z,CONTRIBUTOR,"Some proposals I'd add to sqlite-utils: Some version of this, from [geojson-to-sqlite](https://github.com/simonw/geojson-to-sqlite/blob/main/geojson_to_sqlite/utils.py#L124-L130): ```python def init_spatialite(db, lib): db.conn.enable_load_extension(True) db.conn.load_extension(lib) # Initialize SpatiaLite if not yet initialized if ""spatial_ref_sys"" in db.table_names(): return db.conn.execute(""select InitSpatialMetadata(1)"") ``` Also a function for creating a spatial index: ```python db.conn.execute(""select CreateSpatialIndex(?, ?)"", [table, ""geometry""]) ``` I don't know the nuances of updating a spatial index, or checking if one already exists. This could be a CLI method like: ```sh sqlite-utils spatial-index spatial.db table-name column-name ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",557842245,Helper methods for working with SpatiaLite, https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1012158895,https://api.github.com/repos/simonw/sqlite-utils/issues/79,1012158895,IC_kwDOCGYnMM48VFGv,25778,eyeseast,2022-01-13T13:55:59Z,2022-01-13T13:55:59Z,CONTRIBUTOR,"Came here to add this. I might pick it up. 
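On the ""checking if one already exists"" nuance mentioned above, one possible approach is to look for the `idx_<table>_<column>` table that SpatiaLite creates to back each spatial index - a sketch, not tested against every SpatiaLite version:

```python
def ensure_spatial_index(db, table, column=""geometry""):
    # SpatiaLite backs each spatial index with an idx_<table>_<column> table
    index_name = f""idx_{table}_{column}""
    count = db.execute(
        ""select count(*) from sqlite_master where name = ?"", [index_name]
    ).fetchone()[0]
    if not count:
        db.execute(""select CreateSpatialIndex(?, ?)"", [table, column])
```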
Would also add a utility to create (and update and delete?) a spatial index. It's not much code but I have to look it up every time.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",557842245,Helper methods for working with SpatiaLite, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1009548580,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1009548580,IC_kwDOCGYnMM48LH0k,536941,fgregg,2022-01-11T02:43:34Z,2022-01-11T02:43:34Z,CONTRIBUTOR,thanks so much! always a pleasure to see how you work through these things,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008275546,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008275546,IC_kwDOCGYnMM48GRBa,536941,fgregg,2022-01-09T11:01:15Z,2022-01-09T13:37:51Z,CONTRIBUTOR,"i don’t want to be such a partisan for analyze, but the query planner deciding *not* to use an index based on information collected by analyze is not necessarily a bug, but could be the correct choice. the original poster in that stack overflow doesn’t say there’s a performance regression ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008166084,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008166084,IC_kwDOCGYnMM48F2TE,536941,fgregg,2022-01-08T22:32:47Z,2022-01-08T22:32:47Z,CONTRIBUTOR,or using “pragma optimize”,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008161965,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008161965,IC_kwDOCGYnMM48F1St,536941,fgregg,2022-01-08T22:02:56Z,2022-01-08T22:02:56Z,CONTRIBUTOR,"for options 2 and 3, i would worry about discoverability. in other db’s it is not necessary to explicitly call analyze for most indices. ie for postgres > The system regularly collects statistics on all of a table's columns. Newly-created non-expression indexes can immediately use these statistics to determine an index's usefulness. i suppose i would propose raising a warning if the stats table is created that explains what is going on and informs users about a --no-analyze argument.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008164116,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008164116,IC_kwDOCGYnMM48F10U,536941,fgregg,2022-01-08T22:18:57Z,2022-01-08T22:18:57Z,CONTRIBUTOR,"the table with the query that ran so badly was about 50k. i think the scenario should not be worse than no stats.
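For reference, the sequence under discussion is small - a sketch with the standard library, where `rows(value)` is a stand-in table:

```python
import sqlite3

conn = sqlite3.connect(""data.db"")
conn.execute(""CREATE INDEX IF NOT EXISTS idx_rows_value ON rows(value)"")
conn.execute(""ANALYZE"")  # refresh planner statistics; ""PRAGMA optimize"" also works
conn.commit()
```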
i also did not know that sqlite was so different from postgres and needed an explicit analyze call.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008164786,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008164786,IC_kwDOCGYnMM48F1-y,536941,fgregg,2022-01-08T22:24:19Z,2022-01-08T22:24:19Z,CONTRIBUTOR,the out-of-date scenario you describe could be addressed by automatically adding an analyze to the insert or convert commands if they implicate an index,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1007636709,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1007636709,IC_kwDOCGYnMM48D1Dl,536941,fgregg,2022-01-07T18:28:33Z,2022-01-07T18:29:43Z,CONTRIBUTOR,"i added an index to one table with sqlite-utils, and then a query that used to take about 1 second started taking hundreds of seconds. running analyze got me back to sub second speed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/353#issuecomment-991405755,https://api.github.com/repos/simonw/sqlite-utils/issues/353,991405755,IC_kwDOCGYnMM47F6a7,536941,fgregg,2021-12-11T01:38:29Z,2021-12-11T01:38:29Z,CONTRIBUTOR,"wow! that's awesome! thanks so much, @simonw!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1077102934,"Allow passing a file of code to ""sqlite-utils convert""", https://github.com/simonw/sqlite-utils/issues/348#issuecomment-983155079,https://api.github.com/repos/simonw/sqlite-utils/issues/348,983155079,IC_kwDOCGYnMM46mcGH,25778,eyeseast,2021-12-01T00:28:40Z,2021-12-01T00:28:40Z,CONTRIBUTOR,"I'd use this. Right now, I tend to do `touch my.db` and then `enable-wal` or whatever else, but I'm never sure if that's a bad idea.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1067771698,Command for creating an empty database, https://github.com/simonw/sqlite-utils/issues/26#issuecomment-964205475,https://api.github.com/repos/simonw/sqlite-utils/issues/26,964205475,IC_kwDOCGYnMM45eJuj,536941,fgregg,2021-11-09T14:31:29Z,2021-11-09T14:31:29Z,CONTRIBUTOR,i was just reaching for a tool to do this this morning,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455486286,Mechanism for turning nested JSON into foreign keys / many-to-many, https://github.com/simonw/sqlite-utils/issues/242#issuecomment-953911245,https://api.github.com/repos/simonw/sqlite-utils/issues/242,953911245,IC_kwDOCGYnMM4424fN,25778,eyeseast,2021-10-28T14:37:55Z,2021-10-28T14:37:55Z,CONTRIBUTOR,"I've been thinking about this a bit lately, doing a project that involves moving a lot of data in and out of SQLite files, datasette and GeoJSON. 
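(For context, a fully async caller can already wrap synchronous sqlite-utils calls - a sketch using `asyncio.to_thread`:)

```python
import asyncio
from sqlite_utils import Database

def run_query(path, sql):
    # a plain, synchronous sqlite-utils call
    return list(Database(path).query(sql))

async def main():
    rows = await asyncio.to_thread(run_query, ""data.db"", ""select 1 as n"")
    print(rows)

asyncio.run(main())
```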
This has me leaning toward the idea that something like [`datasette query`](https://github.com/simonw/datasette/issues/1356) would be a better place to do async queries. I know there's a lot of overlap in sqlite-utils and datasette, and maybe keeping sqlite-utils synchronous would let datasette be entirely async and give a cleaner separation of implementations. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",817989436,Async support, https://github.com/simonw/sqlite-utils/pull/326#issuecomment-916119657,https://api.github.com/repos/simonw/sqlite-utils/issues/326,916119657,IC_kwDOCGYnMM42muBp,191622,meatcar,2021-09-09T13:54:10Z,2021-09-09T13:54:10Z,CONTRIBUTOR,dupe of #293?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",991237645,Test against 3.10-dev, https://github.com/simonw/sqlite-utils/pull/407#issuecomment-1040998433,https://api.github.com/repos/simonw/sqlite-utils/issues/407,1040998433,IC_kwDOCGYnMM4-DGAh,25778,eyeseast,2022-02-16T01:29:39Z,2022-02-16T01:29:39Z,CONTRIBUTOR,Happy to do it and have it in the library. Going to use it a bunch. This whole SpatiaLite toolchain became a huge part of my work in the past year.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1138948786,Add SpatiaLite helpers to CLI, https://github.com/simonw/sqlite-utils/pull/407#issuecomment-1040580250,https://api.github.com/repos/simonw/sqlite-utils/issues/407,1040580250,IC_kwDOCGYnMM4-Bf6a,25778,eyeseast,2022-02-15T17:40:00Z,2022-02-15T17:40:00Z,CONTRIBUTOR,@simonw I think this is ready for a look.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1138948786,Add SpatiaLite helpers to CLI, https://github.com/simonw/datasette/issues/2213#issuecomment-1843072926,https://api.github.com/repos/simonw/datasette/issues/2213,1843072926,IC_kwDOBm6k_c5t2w-e,536941,fgregg,2023-12-06T15:05:44Z,2023-12-06T15:05:44Z,CONTRIBUTOR,"it probably does not make sense to gzip large sqlite database files on the fly. it can take many seconds to gzip a large file and you either have to have this big thing in memory, or write it to disk, which some deployment environments will not like. i wonder if it would make sense to gzip the databases as part of the datasette publish process. it would be very cool to statically serve those as if they were dynamically zipped (i.e. serve the filename example.db, not example.db.zip, and rely on the browser to expand).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",2028698018,feature request: gzip compression of database downloads, https://github.com/simonw/datasette/pull/2209#issuecomment-1812750369,https://api.github.com/repos/simonw/datasette/issues/2209,1812750369,IC_kwDOBm6k_c5sDGAh,198537,rgieseke,2023-11-15T15:29:37Z,2023-11-15T15:29:37Z,CONTRIBUTOR,"Looks like tests are passing now but there is an issue with yaml loading and/or cog. 
https://github.com/simonw/datasette/actions/runs/6879299298/job/18710911166?pr=2209","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1994861266,Fix query for suggested facets with column named value, https://github.com/simonw/datasette/pull/2209#issuecomment-1812623778,https://api.github.com/repos/simonw/datasette/issues/2209,1812623778,IC_kwDOBm6k_c5sCnGi,198537,rgieseke,2023-11-15T14:22:42Z,2023-11-15T15:24:09Z,CONTRIBUTOR,"Whoops, looks like I forgot to check for other places where the 'facetable' table is used in the tests.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1994861266,Fix query for suggested facets with column named value, https://github.com/simonw/datasette/issues/2208#issuecomment-1812617851,https://api.github.com/repos/simonw/datasette/issues/2208,1812617851,IC_kwDOBm6k_c5sClp7,198537,rgieseke,2023-11-15T14:18:58Z,2023-11-15T14:18:58Z,CONTRIBUTOR,"Without aliases: ![image](https://github.com/simonw/datasette/assets/198537/d9703d3b-9733-4e87-9954-4fc60a07784a) The proposed fix in #2209 also works when the 'value' column is actually facetable (just added another value in the 'value' column). ![image](https://github.com/simonw/datasette/assets/198537/a37a0a1a-c36a-4c78-bdce-01b582637cc6) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1994857251,No suggested facets when a column named 'value' is included, https://github.com/simonw/datasette/pull/2202#issuecomment-1801876943,https://api.github.com/repos/simonw/datasette/issues/2202,1801876943,IC_kwDOBm6k_c5rZnXP,49699333,dependabot[bot],2023-11-08T13:19:00Z,2023-11-08T13:19:00Z,CONTRIBUTOR,Superseded by #2206.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1959278971,Bump the python-packages group with 1 update,"{""id"": 29110, ""slug"": ""dependabot"", ""node_id"": ""MDM6QXBwMjkxMTA="", ""owner"": {""login"": ""github"", ""id"": 9919, ""node_id"": ""MDEyOk9yZ2FuaXphdGlvbjk5MTk="", ""avatar_url"": ""https://avatars.githubusercontent.com/u/9919?v=4"", ""gravatar_id"": """", ""url"": ""https://api.github.com/users/github"", ""html_url"": ""https://github.com/github"", ""followers_url"": ""https://api.github.com/users/github/followers"", ""following_url"": ""https://api.github.com/users/github/following{/other_user}"", ""gists_url"": ""https://api.github.com/users/github/gists{/gist_id}"", ""starred_url"": ""https://api.github.com/users/github/starred{/owner}{/repo}"", ""subscriptions_url"": ""https://api.github.com/users/github/subscriptions"", ""organizations_url"": ""https://api.github.com/users/github/orgs"", ""repos_url"": ""https://api.github.com/users/github/repos"", ""events_url"": ""https://api.github.com/users/github/events{/privacy}"", ""received_events_url"": ""https://api.github.com/users/github/received_events"", ""type"": ""Organization"", ""site_admin"": false}, ""name"": ""Dependabot"", ""description"": """", ""external_url"": ""https://dependabot-api.githubapp.com"", ""html_url"": ""https://github.com/apps/dependabot"", ""created_at"": ""2019-04-16T22:34:25Z"", ""updated_at"": ""2023-10-12T13:35:09Z"", ""permissions"": {""checks"": ""write"", ""contents"": ""write"", ""issues"": ""write"", ""members"": ""read"", ""metadata"": ""read"", ""pull_requests"": ""write"", 
""statuses"": ""read"", ""vulnerability_alerts"": ""read"", ""workflows"": ""write""}, ""events"": [""check_suite"", ""issues"", ""issue_comment"", ""label"", ""pull_request"", ""pull_request_review"", ""pull_request_review_comment"", ""repository""]}" https://github.com/simonw/datasette/issues/1655#issuecomment-1767219901,https://api.github.com/repos/simonw/datasette/issues/1655,1767219901,IC_kwDOBm6k_c5pVaK9,536941,fgregg,2023-10-17T21:29:03Z,2023-10-17T21:29:03Z,CONTRIBUTOR,@yejiyang why don’t you move this discussion to my fork to spare simon’s notifications ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1163369515,query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data, https://github.com/simonw/datasette/issues/1655#issuecomment-1766994810,https://api.github.com/repos/simonw/datasette/issues/1655,1766994810,IC_kwDOBm6k_c5pUjN6,536941,fgregg,2023-10-17T19:01:59Z,2023-10-17T19:01:59Z,CONTRIBUTOR,"hi @yejiyang, have your tried using my fork of datasette: https://github.com/fgregg/datasette/tree/no_limit_csv_publish ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1163369515,query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data, https://github.com/simonw/datasette/pull/2200#issuecomment-1777228352,https://api.github.com/repos/simonw/datasette/issues/2200,1777228352,IC_kwDOBm6k_c5p7lpA,49699333,dependabot[bot],2023-10-24T13:40:25Z,2023-10-24T13:40:25Z,CONTRIBUTOR,Superseded by #2202.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1949756141,Bump the python-packages group with 1 update, https://github.com/simonw/datasette/issues/2199#issuecomment-1760401731,https://api.github.com/repos/simonw/datasette/issues/2199,1760401731,IC_kwDOBm6k_c5o7ZlD,15178711,asg017,2023-10-12T21:41:42Z,2023-10-12T21:41:42Z,CONTRIBUTOR,"I dig it - I was thinking an Observable notebook where you paste your `metadata.json`/`metadata.yaml` and it would generate the new metadata + datasette.yaml files, but an extensible `datasette upgrade` plugin would be nice for future plugins. One thing to think about: If someone has comments in their original `metadata.yaml`, could we preserve them in the new files? 
tbh maybe not too important bc if people cared that much they could just copy + paste, and it might be too distracting ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1940346034,Detailed upgrade instructions for metadata.yaml -> datasette.yaml, https://github.com/simonw/datasette/pull/2155#issuecomment-1737363182,https://api.github.com/repos/simonw/datasette/issues/2155,1737363182,IC_kwDOBm6k_c5njg7u,418191,jaywgraves,2023-09-27T13:05:41Z,2023-09-27T13:05:41Z,CONTRIBUTOR,I'm hitting the #2123 issue and I just patched my local version with this and it seems to work fine.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1865572575,Fix hupper.start_reloader entry point, https://github.com/simonw/datasette/pull/2190#issuecomment-1729961503,https://api.github.com/repos/simonw/datasette/issues/2190,1729961503,IC_kwDOBm6k_c5nHR4f,15178711,asg017,2023-09-21T16:56:57Z,2023-09-21T16:56:57Z,CONTRIBUTOR,TODO: add similar checks for permissions/allow/canned queries,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1901483874,"Raise an exception if a ""plugins"" block exists in metadata.json", https://github.com/simonw/datasette/issues/2188#issuecomment-1722848454,https://api.github.com/repos/simonw/datasette/issues/2188,1722848454,IC_kwDOBm6k_c5msJTG,15178711,asg017,2023-09-18T06:58:53Z,2023-09-18T06:58:53Z,CONTRIBUTOR,"Thinking about this more, here's a list of things I imagine a ""compile-to-sql"" plugin would want to do: 1. Attach itself to the SQL code editor (switch from SQL -> PRQL/Logica, additional syntax highlighting) 2. Add ""Query using PRQL"" buttons in various parts of Datasette's UI, like `/dbname` page 3. Use `$LANGUAGE=` instead of `sql=` in the JSON API and the SQL results pages 4. Have their own dedicated code editor page 1) and 2) would be difficult to do with current plugin hooks, unless we add the concept of ""slots"" and get the JS plugin support in. 3) could maybe be done with the [`asgi_wrapper(datasette)`](https://docs.datasette.io/en/stable/plugin_hooks.html#asgi-wrapper-datasette) hook? And 4) can be done easily with the `register_routes()` hooks. So it really only sounds like extending the SQL editor will be the hard part. In #2094 I want to add JavaScript plugin hooks for extending the SQL editor, which may work here. If I get the time/motivation, I might try out a `datasette-prql` extension, just because I like playing with it. It'd be really cool if I can get the `asgi_wrapper()` hook to work right there...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1900026059,"Plugin Hooks for ""compile to SQL"" languages", https://github.com/simonw/datasette/issues/1191#issuecomment-1722845490,https://api.github.com/repos/simonw/datasette/issues/1191,1722845490,IC_kwDOBm6k_c5msIky,15178711,asg017,2023-09-18T06:55:52Z,2023-09-18T06:55:52Z,CONTRIBUTOR,"One note here: this feature could be called ""slots"", similar to [Layout Slots](https://vitepress.dev/guide/extending-default-theme#layout-slots) in Vitepress. In Vitepress, you can add custom components/widget/gadgets into determined named ""slots"", like so: ``` doc-top doc-bottom doc-footer-before doc-before doc-after ... 
``` Would be great to do in both Python and Javascript, with the upcoming JavaScript API #2052. In `datasette-write-ui`, all we do is add a few ""Insert row"" and ""edit this row"" buttons and that required completely capturing the `table.html` template, which isn't great for other plugins. But having ""slots"" like `table-footer-before` or `table-row-id` or something would be great to work with. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",787098345,Ability for plugins to collaborate when adding extra HTML to blocks in default templates, https://github.com/simonw/datasette/pull/2182#issuecomment-1719451803,https://api.github.com/repos/simonw/datasette/issues/2182,1719451803,IC_kwDOBm6k_c5mfMCb,49699333,dependabot[bot],2023-09-14T13:27:26Z,2023-09-14T13:27:26Z,CONTRIBUTOR,"Looks like these dependencies are updatable in another way, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1890593563,Bump the python-packages group with 2 updates, https://github.com/simonw/datasette/pull/2183#issuecomment-1716801971,https://api.github.com/repos/simonw/datasette/issues/2183,1716801971,IC_kwDOBm6k_c5mVFGz,15178711,asg017,2023-09-13T01:34:01Z,2023-09-13T01:34:01Z,CONTRIBUTOR,"@simonw docs are finished, this is ready for review! One thing: I added ""Configuration"" as a top-level item in the documentation site, at the very bottom. Not sure if this is the best, maybe it can be named ""datasette.yaml Configuration"" or something similar? Mostly because ""Configuration"" by itself can mean many things, but adding ""datasette.yaml"" would make it pretty clear it's about that specific file, and is easier to scan. I'd also be fine with using ""datasette.yaml"" instead of ""datasette.json"", since writing in YAML is much more forgiving (and advanced users will know JSON is also supported) Also, maybe this is a chance to consolidate the docs a bit? I think ""Settings"", ""Configuration"", ""Metadata"", and ""Authentication and permissions"" should possibly be under the same section. Maybe even consolidate the different Plugin pages that exist? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1891212159,`datasette.yaml` plugin support, https://github.com/simonw/datasette/issues/2157#issuecomment-1700291967,https://api.github.com/repos/simonw/datasette/issues/2157,1700291967,IC_kwDOBm6k_c5lWGV_,15178711,asg017,2023-08-31T02:45:56Z,2023-08-31T02:45:56Z,CONTRIBUTOR,"@simonw what do you think about adding a `DATASETTE_INTERNAL_DB_PATH` env variable, which, when defined, is the default location of the internal DB? This means when the `--internal` flag is NOT provided, Datasette would check to see if `DATASETTE_INTERNAL_DB_PATH` exists, and if so, use that as the internal database (and would fall back to an ephemeral memory database) My rationale: some plugins may require, or strongly encourage, a persistent internal database (`datasette-comments`, `datasette-bookmarks`, `datasette-link-shortener`, etc.). However, for users that have a global installation of Datasette (say from `brew install` or a global `pip install`), it would be annoying having to specify `--internal` every time. 
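A rough sketch of that resolution order (`DATASETTE_INTERNAL_DB_PATH` is the name proposed here, not an existing Datasette setting):

```python
import os

def resolve_internal_db(internal_flag=None):
    if internal_flag:  # an explicit --internal always wins
        return internal_flag
    env_path = os.environ.get(""DATASETTE_INTERNAL_DB_PATH"")
    if env_path:
        return env_path
    return "":memory:""  # fall back to an ephemeral in-memory database
```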
So instead, they can just add `export DATASETTE_INTERNAL_DB_PATH=""/path/to/internal.db""` to their bashrc/zshrc/wherever to not have to worry about `--internal`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1865869205,"Proposal: Make the `_internal` database persistent, customizable, and hidden", https://github.com/simonw/datasette/pull/2148#issuecomment-1696591957,https://api.github.com/repos/simonw/datasette/issues/2148,1696591957,IC_kwDOBm6k_c5lH_BV,49699333,dependabot[bot],2023-08-29T00:15:29Z,2023-08-29T00:15:29Z,CONTRIBUTOR,This pull request was built based on a group rule. Closing it will not ignore any of these versions in future pull requests.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1859415334,"Bump sphinx, furo, blacken-docs dependencies", https://github.com/simonw/datasette/pull/2152#issuecomment-1695736691,https://api.github.com/repos/simonw/datasette/issues/2152,1695736691,IC_kwDOBm6k_c5lEuNz,49699333,dependabot[bot],2023-08-28T13:49:35Z,2023-08-28T13:49:35Z,CONTRIBUTOR,Superseded by #2160.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1865174661,Bump the python-packages group with 3 updates, https://github.com/simonw/datasette/pull/2148#issuecomment-1689198413,https://api.github.com/repos/simonw/datasette/issues/2148,1689198413,IC_kwDOBm6k_c5krx9N,49699333,dependabot[bot],2023-08-23T02:57:55Z,2023-08-23T02:57:55Z,CONTRIBUTOR,"Looks like this PR has been edited by someone other than Dependabot. That means Dependabot can't rebase it - sorry! If you're happy for Dependabot to recreate it from scratch, overwriting any edits, you can request `@dependabot recreate`. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1859415334,"Bump sphinx, furo, blacken-docs dependencies", https://github.com/simonw/datasette/issues/2093#issuecomment-1688532012,https://api.github.com/repos/simonw/datasette/issues/2093,1688532012,IC_kwDOBm6k_c5kpPQs,15178711,asg017,2023-08-22T16:21:40Z,2023-08-22T16:21:40Z,CONTRIBUTOR,"OK Here's the gameplan for this, which is closely tied to #2143 : - We will add a new `datasette.json`/`datasette.yaml` configuration file to datasette, which combines settings/plugin config/permissions/canned queries into a new file format - Metadata will NOT be a part of this file - TOML support is not planned, but maybe we can create a separate issue for supporting TOML with JSON/YAML - The `settings.json` file will be deprecated, and the `--config` arg will be brought back. - Command line arguments can still be used to overwrite values (ex `--setting` will overwrite settings in `datasette.yaml`) The format of `datasette.json` will follow what Simon listed here: https://github.com/simonw/datasette/issues/2143#issuecomment-1684484426 Here's the current implementation plan: 1. Add a new `--config` flag and port over `""settings""` into a new datasette.json config file, remove settings.json 2. Add top-level plugin config support to `datasette.json` 3. Figure out database/table structure of config `datasette.json` 4. Port over database/table level plugin config support `datasette.json` 5. Port over permissions/auth settings to `datasette.json` 6. 
Deprecate non-metadata values in `metadata.json`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File", https://github.com/simonw/datasette/issues/2145#issuecomment-1686745094,https://api.github.com/repos/simonw/datasette/issues/2145,1686745094,IC_kwDOBm6k_c5kibAG,15178711,asg017,2023-08-21T17:30:01Z,2023-08-21T17:30:01Z,CONTRIBUTOR,"Another point: The new Datasette write API should refuse to insert a row with a NULL primary key. That will likely decrease the likelihood someone finds themselves with NULLs in their primary keys, at least with Datasette users. Especially buggy code that uses the write API, like our `datasette-write-ui` bug that led to this issue. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1857234285,If a row has a primary key of `null` various things break, https://github.com/simonw/datasette/pull/2144#issuecomment-1686366557,https://api.github.com/repos/simonw/datasette/issues/2144,1686366557,IC_kwDOBm6k_c5kg-ld,49699333,dependabot[bot],2023-08-21T13:48:15Z,2023-08-21T13:48:15Z,CONTRIBUTOR,Superseded by #2148.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1856760386,Bump the python-packages group with 3 updates, https://github.com/simonw/datasette/issues/2143#issuecomment-1684496274,https://api.github.com/repos/simonw/datasette/issues/2143,1684496274,IC_kwDOBm6k_c5kZ1-S,15178711,asg017,2023-08-18T22:30:45Z,2023-08-18T22:30:45Z,CONTRIBUTOR,"> That said, I do really like a bias towards settings that can be changed at runtime Does this include things like `--settings` values or plugin config? I can totally see being able to update metadata without restarting, but not sure if that would work well with `--setting`, plugin config, or auth/permissions stuff. Well it could work with `--setting` and auth/permissions, with a lot of core changes. But changing plugin config on the fly could be challenging, for plugin authors. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1855885427,De-tangling Metadata before Datasette 1.0, https://github.com/simonw/datasette/issues/2143#issuecomment-1684205563,https://api.github.com/repos/simonw/datasette/issues/2143,1684205563,IC_kwDOBm6k_c5kYu_7,15178711,asg017,2023-08-18T17:12:54Z,2023-08-18T17:12:54Z,CONTRIBUTOR,"Another option would be, instead of flat `datasette.json`/`datasette.yaml` files, to use a Python file, like `datasette_config.py`. That way one could dynamically generate config (ex dev vs prod, auto-discover credentials, etc.). Kinda like Django settings. Though I imagine Python imports might make this complex to do, and json/yaml is already supported and pretty easy to write ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1855885427,De-tangling Metadata before Datasette 1.0, https://github.com/simonw/datasette/issues/2143#issuecomment-1684202932,https://api.github.com/repos/simonw/datasette/issues/2143,1684202932,IC_kwDOBm6k_c5kYuW0,15178711,asg017,2023-08-18T17:10:21Z,2023-08-18T17:10:21Z,CONTRIBUTOR,"I agree with all your points! 
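(Side note: the `datasette_config.py` idea above could be as small as this sketch - everything in it is hypothetical, loosely in the style of Django settings:)

```python
import os

DEBUG = os.environ.get(""DATASETTE_ENV"", ""dev"") != ""prod""

SETTINGS = {
    ""sql_time_limit_ms"": 10000 if DEBUG else 1000,
}

PLUGINS = {
    ""datasette-auth-github"": {
        # auto-discover credentials from the environment
        ""client_id"": os.environ.get(""GITHUB_CLIENT_ID""),
    },
}
```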
I think the best solution would be having a `datasette.json` config file, where you ""configure"" your datasette instances, with settings, permissions/auth, plugin configuration, and table settings (sortable column, label columns, etc.). Which #2093 would do. Then optionally, you have a `metadata.json`, or use `datasette_metadata`, or some other plugin to define metadata (ex the future [sqlite-docs](https://github.com/asg017/sqlite-docs) plugin). Everything in `datasette.json` could also be overwritten by CLI flags, like `--setting key value`, `--plugin xxxx key value`. We could even completely remove `settings.json` in favor of just `datasette.json`. Mostly because I think the fewer files the better, especially if they have generic names like `settings.json` or `config.json`. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1855885427,De-tangling Metadata before Datasette 1.0, https://github.com/simonw/datasette/pull/2142#issuecomment-1683950031,https://api.github.com/repos/simonw/datasette/issues/2142,1683950031,IC_kwDOBm6k_c5kXwnP,49699333,dependabot[bot],2023-08-18T13:49:24Z,2023-08-18T13:49:24Z,CONTRIBUTOR,"Looks like these dependencies are updatable in another way, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1854970601,Bump the python-packages group with 2 updates, https://github.com/simonw/datasette/pull/2141#issuecomment-1682256251,https://api.github.com/repos/simonw/datasette/issues/2141,1682256251,IC_kwDOBm6k_c5kRTF7,49699333,dependabot[bot],2023-08-17T13:07:43Z,2023-08-17T13:07:43Z,CONTRIBUTOR,"Looks like blacken-docs is updatable in another way, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1853289039,Bump the python-packages group with 1 update, https://github.com/simonw/datasette/pull/2125#issuecomment-1668187546,https://api.github.com/repos/simonw/datasette/issues/2125,1668187546,IC_kwDOBm6k_c5jboWa,49699333,dependabot[bot],2023-08-07T16:20:26Z,2023-08-07T16:20:26Z,CONTRIBUTOR,"Looks like sphinx is up-to-date now, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1833193570,Bump sphinx from 6.1.3 to 7.1.2, https://github.com/simonw/datasette/pull/2121#issuecomment-1668186872,https://api.github.com/repos/simonw/datasette/issues/2121,1668186872,IC_kwDOBm6k_c5jboL4,49699333,dependabot[bot],2023-08-07T16:20:19Z,2023-08-07T16:20:19Z,CONTRIBUTOR,"Looks like furo is up-to-date now, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1824399610,Bump furo from 2023.3.27 to 2023.7.26, https://github.com/simonw/datasette/pull/2098#issuecomment-1668186815,https://api.github.com/repos/simonw/datasette/issues/2098,1668186815,IC_kwDOBm6k_c5jboK_,49699333,dependabot[bot],2023-08-07T16:20:18Z,2023-08-07T16:20:18Z,CONTRIBUTOR,"Looks like blacken-docs is up-to-date now, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1796830110,Bump blacken-docs from 1.14.0 to 1.15.0, 
https://github.com/simonw/datasette/pull/2124#issuecomment-1662215579,https://api.github.com/repos/simonw/datasette/issues/2124,1662215579,IC_kwDOBm6k_c5jE2Wb,49699333,dependabot[bot],2023-08-02T13:28:43Z,2023-08-02T13:28:43Z,CONTRIBUTOR,Superseded by #2125.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1826424151,Bump sphinx from 6.1.3 to 7.1.1, https://github.com/simonw/datasette/pull/2107#issuecomment-1655678215,https://api.github.com/repos/simonw/datasette/issues/2107,1655678215,IC_kwDOBm6k_c5ir6UH,49699333,dependabot[bot],2023-07-28T13:23:16Z,2023-07-28T13:23:16Z,CONTRIBUTOR,Superseded by #2124.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1820346348,Bump sphinx from 6.1.3 to 7.1.0, https://github.com/simonw/datasette/pull/2077#issuecomment-1653652665,https://api.github.com/repos/simonw/datasette/issues/2077,1653652665,IC_kwDOBm6k_c5ikLy5,49699333,dependabot[bot],2023-07-27T13:40:52Z,2023-07-27T13:40:52Z,CONTRIBUTOR,Superseded by #2121.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1719759468,Bump furo from 2023.3.27 to 2023.5.20, https://github.com/simonw/datasette/pull/2075#issuecomment-1649849249,https://api.github.com/repos/simonw/datasette/issues/2075,1649849249,IC_kwDOBm6k_c5iVrOh,49699333,dependabot[bot],2023-07-25T13:28:35Z,2023-07-25T13:28:35Z,CONTRIBUTOR,Superseded by #2107.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1710164693,Bump sphinx from 6.1.3 to 7.0.1, https://github.com/simonw/datasette/pull/2052#issuecomment-1630776144,https://api.github.com/repos/simonw/datasette/issues/2052,1630776144,IC_kwDOBm6k_c5hM6tQ,9020979,hydrosquall,2023-07-11T12:54:03Z,2023-07-11T12:54:03Z,CONTRIBUTOR,"Thanks for the review and the code pointers @simonw - I've made the suggested edits, fixed the renamed variable, and confirmed that the panels still render on the `table` and `database` views. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/issues/2104#issuecomment-1641082395,https://api.github.com/repos/simonw/datasette/issues/2104,1641082395,IC_kwDOBm6k_c5h0O4b,15178711,asg017,2023-07-18T22:41:37Z,2023-07-18T22:41:37Z,CONTRIBUTOR,"For filtering virtual table's ""shadow tables"" (ex the FTS5 _content and most the spatialite tables), you can use `pragma_table_list` (first appeared in SQLite 3.37 (2021-11-27), which has a `type` column that calls out `type=""shadow""` tables https://www.sqlite.org/pragma.html#pragma_table_list","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 1}",1808215339,Tables starting with an underscore should be treated as hidden, https://github.com/simonw/datasette/issues/2087#issuecomment-1616853644,https://api.github.com/repos/simonw/datasette/issues/2087,1616853644,IC_kwDOBm6k_c5gXzqM,15178711,asg017,2023-07-02T22:00:48Z,2023-07-02T22:00:48Z,CONTRIBUTOR,"I just saw in the docs that Dasette auto-detects `settings.json`: > settings.json - settings that would normally be passed using --setting - here they should be stored as a JSON object of key/value pairs > [*Source*](https://docs.datasette.io/en/stable/settings.html#:~:text=settings.json%20%2D%20settings%20that%20would%20normally%20be%20passed%20using%20%2D%2Dsetting%20%2D%20here%20they%20should%20be%20stored%20as%20a%20JSON%20object%20of%20key/value%20pairs)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1765870617,`--settings settings.json` option, https://github.com/simonw/datasette/issues/2093#issuecomment-1616286848,https://api.github.com/repos/simonw/datasette/issues/2093,1616286848,IC_kwDOBm6k_c5gVpSA,15178711,asg017,2023-07-02T02:17:46Z,2023-07-02T02:17:46Z,CONTRIBUTOR,"Storing metadata in the database won't be required. I imagine there'll be many different ways to store metadata, including any possible `datasette_metadata` or sqlite-docs, or the older metadata.json way. The next question will be how precedence should work - i'd imagine metadata.json > plugins > datasette_metadata > sqlite-docs","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File", https://github.com/simonw/datasette/pull/2052#issuecomment-1615997736,https://api.github.com/repos/simonw/datasette/issues/2052,1615997736,IC_kwDOBm6k_c5gUiso,9020979,hydrosquall,2023-07-01T16:55:24Z,2023-07-01T16:55:24Z,CONTRIBUTOR,"> Ok @hydrosquall a couple things before this PR should be good to go: Thank you @asg017 ! I've pushed both suggested changes onto this branch. > Not sure how difficult it'll be to inject it server-side If we are OK with having a build system, it would free me up to do do many things! We could make datasette-manager.js a server-side rendered file as a ""template"" instead of having it as a static JS file, but I'm not sure it's worth the extra jump in complexity / loss of syntax highlighting in the JS file. 
In the short-term, I could see an intermediary solution where a unit test in the preferred language was able to read both `version.py` and `datasette-manager.js`, and make sure that the version strings are in sync. (This assumes that we want the manager and datasette's versions to be synced, and not decoupled). Since the version is not changing very often, a ""manual sync"" might be good enough. > In terms of how to integrate this into Datasette, a few options I can see working: This sounds good to me. I'm not sure how to add a settings flag, but will be interested to see the PR that adds support for it. > I'm also curious to see how ""plugins for a plugin"" would work I'm comfortable waiting until we have a realistic use case for this. In the short term, I think we could give plugins a way to grant access to a ""public API of other plugins"", and also ask to be notified when plugins with other names have loaded, but don't picture the datasette manager getting more involved than that. > here's a list of Simon's Datasette plugins that use ""extra_js_urls()"" Neat, thanks for compiling this list! Just curious, is there a query that can be used to compile this programmatically, or did you identify these through memory? > I want to make a javascript plugin on top of the code-mirror editor to make a few things nicer (function auto-complete, table/column descriptions, etc.) I look forward to trying this out 👍 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2052#issuecomment-1616095810,https://api.github.com/repos/simonw/datasette/issues/2052,1616095810,IC_kwDOBm6k_c5gU6pC,15178711,asg017,2023-07-01T20:31:31Z,2023-07-01T20:31:31Z,CONTRIBUTOR,"> Just curious, is there a query that can be used to compile this programmatically, or did you identify these through memory? I just did a github search for `user:simonw ""def extra_js_urls(""` ! Though I'm sure other plugins made by people other than Simon also exist out there https://github.com/search?q=user%3Asimonw+%22def+extra_js_urls%28%22&type=code","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/issues/2093#issuecomment-1613896210,https://api.github.com/repos/simonw/datasette/issues/2093,1613896210,IC_kwDOBm6k_c5gMhoS,15178711,asg017,2023-06-29T22:53:33Z,2023-06-29T22:53:33Z,CONTRIBUTOR,"Maybe we can have a separate issue for revamping `metadata.json`? A `datasette_metadata` table or the `sqlite-docs` extension seem like two reasonable additions that we can work through. Storing metadata inside a SQLite database makes sense, but I don't think storing `datasette.*` style config (ex ports, settings, etc.) inside a SQLite DB makes sense, since it's very environment-dependent","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. 
into a single `datasette.yaml` File", https://github.com/simonw/datasette/issues/2093#issuecomment-1613895188,https://api.github.com/repos/simonw/datasette/issues/2093,1613895188,IC_kwDOBm6k_c5gMhYU,15178711,asg017,2023-06-29T22:51:53Z,2023-06-29T22:51:53Z,CONTRIBUTOR,"I agree with not liking `metadata.json` stuff in a `datasette.*` config file. Editing description of a table/column in a file like `datasette.*` seems odd to me. Though since plugin configuration currently lives in `metadata.json`, I think it should be removed from there and placed in `datasette.*`, at least for top-level config like `datasette-auth-github`'s config. Keeping `metadata.json` strictly for documentation/licensing/column units makes sense to me, but anything plugin related should be in some config file, like `datasette.*`. And ya, supporting both `datasette.*` and CLI flags makes a lot of sense to me. Any `--setting` flag should override anything in `datasette.*` for easier debugging, with possibly a warning message so people don't get confused. Same with `--port` and a port defined in `datasette.*`","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File", https://github.com/simonw/datasette/pull/2052#issuecomment-1613778296,https://api.github.com/repos/simonw/datasette/issues/2052,1613778296,IC_kwDOBm6k_c5gME14,15178711,asg017,2023-06-29T20:36:09Z,2023-06-29T20:36:09Z,CONTRIBUTOR,"Ok @hydrosquall a couple things before this PR should be good to go: - Can we move `datasette/static/table-example-plugins.js` into `demos/plugins/static`? - For `datasetteManager.VERSION`, can we fill that in or just comment it out for now? Not sure how difficult it'll be to inject it server-side. I imagine we could also have a small build process with esbuild/rollup that just injects a version string into `manager.js` directly, so we don't have to worry about server-rendering (but that can be a future PR) In terms of how to integrate this into Datasette, a few options I can see working: - Push this as-is and figure it out before the next release - Hide this feature behind a settings flag (`--setting unstable-js-plugins on`) and use that setting to hide/show `` in `base.html` I'll let @simonw decide which one to work with. I kind of like the idea of having an ""unstable"" opt-in process to enable JS plugins, to give us time to try it out with a wide variety of plugins until we feel it's ready. I'm also curious to see how ""plugins for a plugin"" would work, like #1542. For example, if the leaflet plugin showed default markers, but also included its own hook for other plugins to add more markers/styling. I imagine that the individual plugin would re-create their own plugin system compared to this, since handling ""plugins of plugins"" at the top with Datasette seems really convoluted. 
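Going back to the override idea a few comments up (any `--setting` flag beats the config file, with a warning) - a sketch; `resolve_setting` is hypothetical, not Datasette code:

```python
import json
import warnings

def resolve_setting(name, cli_settings, config_path=""datasette.json""):
    with open(config_path) as f:
        file_value = json.load(f).get(""settings"", {}).get(name)
    if name in cli_settings:  # --setting name value takes precedence
        if file_value is not None and cli_settings[name] != file_value:
            warnings.warn(f""--setting {name} overrides {config_path}"")
        return cli_settings[name]
    return file_value
```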
Also for posterity, here's a list of Simon's Datasette plugins that use ""extra_js_urls()"", which probably means they can be ported/re-written to use this new plugin system: - [`datasette-vega`](https://github.com/simonw/datasette-vega/blob/00de059ab1ef77394ba9f9547abfacf966c479c4/datasette_vega/__init__.py#L25) - [`datasette-cluster-map`](https://github.com/simonw/datasette-cluster-map/blob/795d25ad9ff6cba0307191f44fecc8f8070bef5c/datasette_cluster_map/__init__.py#L14) - [`datasette-leaflet-geojson`](https://github.com/simonw/datasette-leaflet-geojson/blob/64713aa497750400b9ac2c12e8bb6ffab8eb77f3/datasette_leaflet_geojson/__init__.py#L47) - [`datasette-pretty-traces`](https://github.com/simonw/datasette-pretty-traces/blob/5219d65eca3d7d7a73bb9d3120df42fe046a1315/datasette_pretty_traces/__init__.py#L5) - [`datasette-youtube-embed`](https://github.com/simonw/datasette-youtube-embed/blob/4b4a0d7e58ebe15f47e9baf68beb9908c1d899da/datasette_youtube_embed/__init__.py#L55) - [`datasette-leaflet-freedraw`](https://github.com/simonw/datasette-leaflet-freedraw/blob/8f28c2c2080ec9d29f18386cc6a2573a1c8fbde7/datasette_leaflet_freedraw/__init__.py#L66) - [`datasette-hovercards`](https://github.com/simonw/datasette-hovercards/blob/9439ba46b7140fb03223faff0d21aeba5615a287/datasette_hovercards/__init__.py#L5) - [`datasette-mp3-audio`](https://github.com/simonw/datasette-mp3-audio/blob/4402168792f452a46ab7b488e40ec49cd4b12185/datasette_mp3_audio/__init__.py#L6) - [`datasette-geojson-map`](https://github.com/simonw/datasette-geojson-map/blob/32af5f1fd1a07278bbf8071fbb20a61e0f613246/datasette_geojson_map/__init__.py#L30)","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 1}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2052#issuecomment-1606352600,https://api.github.com/repos/simonw/datasette/issues/2052,1606352600,IC_kwDOBm6k_c5fvv7Y,15178711,asg017,2023-06-26T00:17:04Z,2023-06-26T00:17:04Z,CONTRIBUTOR,":wave: would love to see this get merged soon! I want to make a javascript plugin on top of the code-mirror editor to make a few things nicer (function auto-complete, table/column descriptions, etc.), and this would help out a bunch","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2052#issuecomment-1585149909,https://api.github.com/repos/simonw/datasette/issues/2052,1585149909,IC_kwDOBm6k_c5ee3fV,9020979,hydrosquall,2023-06-09T21:35:00Z,2023-06-09T21:35:00Z,CONTRIBUTOR,"Thanks @cldellow for the thoughtful comments! These are all things that I'll keep in mind as we figure out how/if this API is actually used by plugin authors once it's actually out in the world. > Yes, this would work - but it requires me to continue to communicate the column names out of band (in order to fetch the facet data per-column before registering my plugin), vs being able to re-use them from the plugin implementation. Ah, I understand now! Thanks for explaining. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2052#issuecomment-1548617257,https://api.github.com/repos/simonw/datasette/issues/2052,1548617257,IC_kwDOBm6k_c5cTgYp,193185,cldellow,2023-05-15T21:32:20Z,2023-05-15T21:32:20Z,CONTRIBUTOR,"> Were you picturing that the whole plugin config object could be returned as a promise, or that the individual hooks (like makeColumnActions or makeAboveTablePanelConfigs supported returning a promise of arrays instead only returning plain arrays? The latter - that you could return a promise of arrays, so it parallels the [""await me maybe"" pattern in Datasette](https://simonwillison.net/2020/Sep/2/await-me-maybe/), where you can return either a value, a callable or an awaitable. > I have a hunch that what you're describing might be achievable without adding Promises to the API with something Oops, I did a poor job explaining. Yes, this would work - but it requires me to continue to communicate the column names out of band (in order to fetch the facet data per-column before registering my plugin), vs being able to re-use them from the plugin implementation. This isn't that big of a deal - it'd be a nice ergonomic improvement, but nowhere near as a big of an improvement as having an officially sanctioned way to add stuff to the column menus in the first place. This could also be layered on in a future commit without breaking v1 users, too, so it's not at all urgent. > especially if those lines are encapsulated by a function we provide (maybe something that's available on the window provided by Datasette as an inline script tag Ah, this is maybe the the key point. Since it's all hosted inside Datasette, Datasette can provide some arbitrary sugar to make it easier to work with. My experience with async scripts in JS is that people sometimes don't understand the race conditions inherent to them. If they copy/paste from a tutorial, it does just work. But then they'll delete half the code, and by chance it still works on their machine/Datasette templates, and now someone's headed for an annoying debugging session -- maybe them, maybe someone else who tries to re-use their plugin. Again, a fairly minor thing, though.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2068#issuecomment-1547911570,https://api.github.com/repos/simonw/datasette/issues/2068,1547911570,IC_kwDOBm6k_c5cQ0GS,49699333,dependabot[bot],2023-05-15T13:59:35Z,2023-05-15T13:59:35Z,CONTRIBUTOR,Superseded by #2075.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1690842199,Bump sphinx from 6.1.3 to 7.0.0, https://github.com/simonw/datasette/pull/2052#issuecomment-1546362374,https://api.github.com/repos/simonw/datasette/issues/2052,1546362374,IC_kwDOBm6k_c5cK54G,9020979,hydrosquall,2023-05-12T22:09:03Z,2023-05-12T22:09:03Z,CONTRIBUTOR,"Hey @cldellow , thanks for the thoughtful feedback and describing the ""lazy facets"" feature! 
It sounds like the [postTask](https://developer.mozilla.org/en-US/docs/Web/API/Scheduler/postTask) API might be relevant for the types of network request scheduling you have in mind. Addressing your points inline below: > It might also be nice if the plugins could return Promises. Were you picturing that the whole plugin config object could be returned as a promise, or that the individual hooks (like `makeColumnActions` or `makeAboveTablePanelConfigs`) supported returning a promise of arrays instead of only returning plain arrays? I think what you're describing is achievable, but I want to make sure I do so in a way that addresses your need / keeps the complexity of the plugin core system at a level that is approachable. I have a hunch that what you're describing might be achievable without adding Promises to the API with something like ``` fetch('/api/with-custom-facets').then(myFacets => { // reusing the go() idiom go(manager, myFacets); }) ``` but I'd like to confirm if that's the case before investigating adding support. > bulletproof plugin registration code that is robust against the order in which the script tags load Yes, I think what you wrote looks right to me! While it looks a little bit verbose compared to the second example, I'm hoping we can mitigate the cost of that during this API incubation phase by making it an easy-to-copy paste code snippet. I haven't heard of the GA queuing pattern before, thanks for the example. I won't have time to implement a proof of concept in the next few weeks, but I took some time to think through the pros/cons to decide whether we may want to add this in a future release: I can see that this approach brings advantages - Plugin developers don't need to know the name of the datasette initialization event to start their plugin - Pushing a function to an array probably is easier (definitely more concise) than adding a document event listener - One less event listener sitting in memory It also has some minor costs - A malicious plugin could choose to (or accidentally) mess with the order of the queue if multiple scripts are lined up - Some risk in encouraging people to mutate global state - (not a cost, more a moot point): changing this API may not make a meaningful difference if we're discussing whether people enter 2 vs 5 lines of code, especially if those lines are encapsulated by a function we provide (maybe something that's available on the `window` provided by Datasette as an inline script tag). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2052#issuecomment-1530822437,https://api.github.com/repos/simonw/datasette/issues/2052,1530822437,IC_kwDOBm6k_c5bPn8l,193185,cldellow,2023-05-02T03:35:30Z,2023-05-02T16:02:38Z,CONTRIBUTOR,"Also, just checking - is this how I'd write bulletproof plugin registration code that is robust against the order in which the script tags load (eg if both my code and the Datasette code are loaded via a `