421 rows where author_association = "NONE" sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
785485597 https://github.com/simonw/datasette/pull/1243#issuecomment-785485597 https://api.github.com/repos/simonw/datasette/issues/1243 MDEyOklzc3VlQ29tbWVudDc4NTQ4NTU5Nw== codecov[bot] 22429695 2021-02-25T00:28:30Z 2021-02-25T00:28:30Z NONE

Codecov Report

Merging #1243 (887bfd2) into main (726f781) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1243   +/-   ##
=======================================
  Coverage   91.56%   91.56%           
=======================================
  Files          34       34           
  Lines        4242     4242           
=======================================
  Hits         3884     3884           
  Misses        358      358           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 726f781...32652d9. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
fix small typo 815955014  
784638394 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-784638394 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc4NDYzODM5NA== UtahDave 306240 2021-02-24T00:36:18Z 2021-02-24T00:36:18Z NONE

I noticed that @simonw is using black for formatting. I ran black on my additions in this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
784347646 https://github.com/simonw/datasette/issues/1241#issuecomment-784347646 https://api.github.com/repos/simonw/datasette/issues/1241 MDEyOklzc3VlQ29tbWVudDc4NDM0NzY0Ng== Kabouik 7107523 2021-02-23T16:55:26Z 2021-02-23T16:57:39Z NONE

I think it's possible that many users these days no longer assume they can paste a URL from the browser address bar (if they ever understood that at all) because too many apps are SPAs with broken URLs.

Absolutely, that's why I thought my corner case with iframe preventing access to the datasette URL could actually be relevant in more general situations.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[Feature request] Button to copy URL 814595021  
784312460 https://github.com/simonw/datasette/issues/1240#issuecomment-784312460 https://api.github.com/repos/simonw/datasette/issues/1240 MDEyOklzc3VlQ29tbWVudDc4NDMxMjQ2MA== Kabouik 7107523 2021-02-23T16:07:10Z 2021-02-23T16:08:28Z NONE

Likewise, while answering another issue regarding the Vega plugin, I realized that there is no way of linking rows after a custom query; I only get this "Link" column with individual URLs for the default SQL view:

Or is it there and I am just missing the option in my custom queries?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow facetting on custom queries 814591962  
784157345 https://github.com/simonw/datasette/issues/1218#issuecomment-784157345 https://api.github.com/repos/simonw/datasette/issues/1218 MDEyOklzc3VlQ29tbWVudDc4NDE1NzM0NQ== soobrosa 1244799 2021-02-23T12:12:17Z 2021-02-23T12:12:17Z NONE

Topline this fixed the same problem for me.

brew install python@3.7
ln -s /usr/local/opt/python@3.7/bin/python3.7 /usr/local/opt/python/bin/python3.7
pip3 uninstall -y numpy
pip3 uninstall -y setuptools
pip3 install setuptools
pip3 install numpy
pip3 install datasette-publish-fly
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
/usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory 803356942  
783794520 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-783794520 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc4Mzc5NDUyMA== UtahDave 306240 2021-02-23T01:13:54Z 2021-02-23T01:13:54Z NONE

Also, @simonw I created a test based off the existing tests. I think it's working correctly

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
783688547 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-783688547 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc4MzY4ODU0Nw== UtahDave 306240 2021-02-22T21:31:28Z 2021-02-22T21:31:28Z NONE

@Btibert3 I've opened a PR with my initial attempt at this. Would you be willing to give this a try?

https://github.com/dogsheep/google-takeout-to-sqlite/pull/5

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
783662968 https://github.com/simonw/sqlite-utils/issues/220#issuecomment-783662968 https://api.github.com/repos/simonw/sqlite-utils/issues/220 MDEyOklzc3VlQ29tbWVudDc4MzY2Mjk2OA== mhalle 649467 2021-02-22T20:44:51Z 2021-02-22T20:44:51Z NONE

Actually, coming back to this, I have a clearer use case for enabling fts generation for views: making it easier to bring in text from lookup tables and other joins.

The datasette documentation describes populating an fts table like so:

INSERT INTO "items_fts" (rowid, name, description, category_name)
    SELECT items.rowid,
    items.name,
    items.description,
    categories.name
    FROM items JOIN categories ON items.category_id=categories.id;

Alternatively if you have fts support in sqlite_utils for views (which sqlite and fts5 support), you can do the same thing just by creating a view that captures the above joins as columns, then creating an fts table from that view. Such an fts table can be created using sqlite_utils, where one created with your method can't.

The resulting fts table can then be used by a whole family of related tables and views in the manner you described earlier in this issue.
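The view-based approach described above can be sketched with plain sqlite3 (hypothetical table and column names; FTS5's content= option accepts a view as the external content source):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript(
    """
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE items (
        id INTEGER PRIMARY KEY, name TEXT, description TEXT, category_id INTEGER
    );
    -- A view that captures the join as plain columns
    CREATE VIEW items_with_category AS
        SELECT items.id AS rowid, items.name, items.description,
               categories.name AS category_name
        FROM items JOIN categories ON items.category_id = categories.id;
    -- An external-content FTS5 table backed by the view
    CREATE VIRTUAL TABLE items_fts USING fts5(
        name, description, category_name, content='items_with_category'
    );
    """
)
db.execute("INSERT INTO categories VALUES (1, 'tools')")
db.execute("INSERT INTO items VALUES (1, 'hammer', 'drives nails', 1)")
# Populate the index from the view, much like the INSERT ... SELECT above
db.execute(
    "INSERT INTO items_fts (rowid, name, description, category_name) "
    "SELECT rowid, name, description, category_name FROM items_with_category"
)
rows = db.execute(
    "SELECT name FROM items_fts WHERE items_fts MATCH 'tools'"
).fetchall()
```

Here a search on the category name (which lives in a lookup table) still finds the item row, without duplicating the join in the FTS population query.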

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Better error message for *_fts methods against views 783778672  
783560017 https://github.com/simonw/datasette/issues/1166#issuecomment-783560017 https://api.github.com/repos/simonw/datasette/issues/1166 MDEyOklzc3VlQ29tbWVudDc4MzU2MDAxNw== thorn0 94334 2021-02-22T18:00:57Z 2021-02-22T18:13:11Z NONE

Hi! I don't think Prettier supports this syntax for globs: datasette/static/*[!.min].js. Are you sure that works?
Prettier uses https://github.com/mrmlnc/fast-glob, which in turn uses https://github.com/micromatch/micromatch, and the docs for these packages don't mention this syntax. As per the docs, square brackets should work as in regexes (foo-[1-5].js).

Tested it. Apparently, it works as a negated character class in regexes (like [^.min]). I wonder where this syntax comes from. Micromatch doesn't support that:

micromatch(['static/table.js', 'static/n.js'], ['static/*[!.min].js']);
// result: ["static/n.js"] -- brackets are treated like [!.min] in regexes, without negation
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Adopt Prettier for JavaScript code formatting 777140799  
783265830 https://github.com/simonw/datasette/issues/782#issuecomment-783265830 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4MzI2NTgzMA== frankieroberto 30665 2021-02-22T10:21:14Z 2021-02-22T10:21:14Z NONE

@simonw:

The problem there is that ?_size=x isn't actually doing the same thing as the SQL limit keyword.

Interesting! Although I don't think it matters too much what the underlying implementation is - I more meant that limit is familiar to developers conceptually as "up to and including this number, if they exist", whereas "size" is potentially more ambiguous. However, it's probably no big deal either way.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
782756398 https://github.com/simonw/datasette/issues/782#issuecomment-782756398 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4Mjc1NjM5OA== simonrjones 601316 2021-02-20T22:05:48Z 2021-02-20T22:05:48Z NONE

I think it’s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default.

I agree it is more predictable if the top level item is an object with a rows or data object that contains an array of data, which then allows for other top-level meta data.

I can see the argument for removing this and just using an array for convenience - but I think that's OK as an option (as you have now).

Rather than have lots of top-level keys you could have a "meta" object to contain non-data stuff. You could use something like "links" for API endpoint URLs (or use a standard like HAL). Which would then leave the top level a bit cleaner - if that's what you want.

Have you had much feedback from users who use the Datasette API a lot?
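A hypothetical response shape along those lines (key names invented for illustration, not Datasette's actual format) might look like:

```json
{
    "rows": [
        {"id": 1, "title": "First row"}
    ],
    "meta": {
        "total": 421,
        "truncated": false
    },
    "links": {
        "next": "https://example.com/db/table.json?_next=1"
    }
}
```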

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
782746755 https://github.com/simonw/datasette/issues/782#issuecomment-782746755 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4Mjc0Njc1NQ== frankieroberto 30665 2021-02-20T20:44:05Z 2021-02-20T20:44:05Z NONE

Minor suggestion: rename size query param to limit, to better reflect that it’s a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword?

I like the idea of specifying a limit of 0 if you don’t want any rows data - and returning an empty array under the rows key seems fine.

Have you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don’t parse the JSON. Could be default (can’t be much bigger with gzip?) or opt-in.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
782745199 https://github.com/simonw/datasette/issues/782#issuecomment-782745199 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4Mjc0NTE5OQ== frankieroberto 30665 2021-02-20T20:32:03Z 2021-02-20T20:32:03Z NONE

I think it’s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default. Mainly because it allows you to add extra keys in a backwards-compatible way. Also just seems more expected somehow.

The API design guidance for the UK government also recommends this: https://www.gov.uk/guidance/gds-api-technical-and-data-standards#use-json

I also strongly dislike having versioned APIs (e.g. with a /v1/ path prefix), as it invariably means that old versions stop working at some point, even though the bit of the API you’re using might not have changed at all.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
782430028 https://github.com/simonw/datasette/issues/1212#issuecomment-782430028 https://api.github.com/repos/simonw/datasette/issues/1212 MDEyOklzc3VlQ29tbWVudDc4MjQzMDAyOA== kbaikov 4488943 2021-02-19T22:54:13Z 2021-02-19T22:54:13Z NONE

I will close this issue since it appears only in my particular setup.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Tests are very slow.  797651831  
782053455 https://github.com/simonw/datasette/pull/1229#issuecomment-782053455 https://api.github.com/repos/simonw/datasette/issues/1229 MDEyOklzc3VlQ29tbWVudDc4MjA1MzQ1NQ== camallen 295329 2021-02-19T12:47:19Z 2021-02-19T12:47:19Z NONE

I believe this pr and #1031 are related and fix the same issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
ensure immutable databses when starting in configuration directory mode with 810507413  
781599929 https://github.com/simonw/datasette/pull/1232#issuecomment-781599929 https://api.github.com/repos/simonw/datasette/issues/1232 MDEyOklzc3VlQ29tbWVudDc4MTU5OTkyOQ== codecov[bot] 22429695 2021-02-18T19:59:54Z 2021-02-18T22:06:42Z NONE

Codecov Report

Merging #1232 (8876499) into main (4df548e) will increase coverage by 0.03%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main    #1232      +/-   ##
==========================================
+ Coverage   91.42%   91.46%   +0.03%     
==========================================
  Files          32       32              
  Lines        3955     3970      +15     
==========================================
+ Hits         3616     3631      +15     
  Misses        339      339              
Impacted Files               Coverage Δ
datasette/app.py             95.68% <100.00%> (+0.06%) :arrow_up:
datasette/cli.py             76.62% <100.00%> (+0.36%) :arrow_up:
datasette/views/database.py  97.19% <100.00%> (+0.01%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4df548e...8876499. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
--crossdb option for joining across databases 811407131  
781451701 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-781451701 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc4MTQ1MTcwMQ== Btibert3 203343 2021-02-18T16:06:21Z 2021-02-18T16:06:21Z NONE

Awesome!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
781330466 https://github.com/simonw/datasette/issues/1230#issuecomment-781330466 https://api.github.com/repos/simonw/datasette/issues/1230 MDEyOklzc3VlQ29tbWVudDc4MTMzMDQ2Ng== Kabouik 7107523 2021-02-18T13:06:22Z 2021-02-18T15:22:15Z NONE

[Edit] Oh, I just saw the "Load all" button under the cluster map, as well as the setting to alter the max number of results. So I guess this issue is only about the Vega charts.

Note that datasette-cluster-map also seems to be limited to 998 displayed points: ![ss-2021-02-18_140548](https://user-images.githubusercontent.com/7107523/108361225-15fb2a80-71ea-11eb-9a19-d885e8513f55.png)
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages 811054000  
780991910 https://github.com/simonw/datasette/issues/283#issuecomment-780991910 https://api.github.com/repos/simonw/datasette/issues/283 MDEyOklzc3VlQ29tbWVudDc4MDk5MTkxMA== rayvoelker 9308268 2021-02-18T02:13:56Z 2021-02-18T02:13:56Z NONE

I was going to ask you about this issue when we talk during your office-hours schedule this Friday, but was there any support ever added for doing this cross-database joining?

I have a use case where it could be pretty neat to do analysis using this tool on time-specific databases from snapshots

https://ilsweb.cincinnatilibrary.org/collection-analysis/

and thanks again for such an amazing tool!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Support cross-database joins 325958506  
780830464 https://github.com/simonw/datasette/pull/1229#issuecomment-780830464 https://api.github.com/repos/simonw/datasette/issues/1229 MDEyOklzc3VlQ29tbWVudDc4MDgzMDQ2NA== codecov[bot] 22429695 2021-02-17T20:24:30Z 2021-02-17T20:24:30Z NONE

Codecov Report

Merging #1229 (ef1948c) into main (9603d89) will not change coverage.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1229   +/-   ##
=======================================
  Coverage   91.42%   91.42%           
=======================================
  Files          32       32           
  Lines        3955     3955           
=======================================
  Hits         3616     3616           
  Misses        339      339           
Impacted Files    Coverage Δ
datasette/app.py  95.61% <100.00%> (ø)

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9603d89...ef1948c. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
ensure immutable databses when starting in configuration directory mode with 810507413  
780817596 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-780817596 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc4MDgxNzU5Ng== UtahDave 306240 2021-02-17T20:01:35Z 2021-02-17T20:01:35Z NONE

I've got this almost working. Just needs some polish

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
779785638 https://github.com/simonw/sqlite-utils/issues/227#issuecomment-779785638 https://api.github.com/repos/simonw/sqlite-utils/issues/227 MDEyOklzc3VlQ29tbWVudDc3OTc4NTYzOA== camallen 295329 2021-02-16T11:48:03Z 2021-02-16T11:48:03Z NONE

Thank you @simonw

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Error reading csv files with large column data 807174161  
778467759 https://github.com/simonw/datasette/issues/1220#issuecomment-778467759 https://api.github.com/repos/simonw/datasette/issues/1220 MDEyOklzc3VlQ29tbWVudDc3ODQ2Nzc1OQ== aborruso 30607 2021-02-12T21:35:17Z 2021-02-12T21:35:17Z NONE

Thank you

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Installing datasette via docker: Path 'fixtures.db' does not exist 806743116  
778439617 https://github.com/simonw/datasette/issues/1220#issuecomment-778439617 https://api.github.com/repos/simonw/datasette/issues/1220 MDEyOklzc3VlQ29tbWVudDc3ODQzOTYxNw== bobwhitelock 7476523 2021-02-12T20:33:27Z 2021-02-12T20:33:27Z NONE

That Docker command will mount your current directory inside the Docker container at /mnt - so you shouldn't need to change anything locally, just run

docker run -p 8001:8001 -v `pwd`:/mnt \
    datasetteproject/datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db

and it will use the fixtures.db file within your current directory

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Installing datasette via docker: Path 'fixtures.db' does not exist 806743116  
770069864 https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60 MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA== daniel-butler 22578954 2021-01-29T21:52:05Z 2021-02-12T18:29:43Z NONE

For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called gh-organization. The GitHub owner id of gh-organization is 123456789.

github-to-sqlite  repos github.db gh-organization

I'm on a windows computer running git bash to be able to use the | command. This works for me

sqlite3 github.db "SELECT full_name FROM repos WHERE owner = '123456789';" | tr '\n\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }

On a pure linux system I think this would work because the new line character is normally \n

sqlite3 github.db "SELECT full_name FROM repos WHERE owner = '123456789';" | tr '\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }

As expected I ran into rate limit issues #51

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use Data from SQLite in other commands 797097140  
778014990 https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778014990 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33 MDEyOklzc3VlQ29tbWVudDc3ODAxNDk5MA== leafgarland 675335 2021-02-12T06:54:14Z 2021-02-12T06:54:14Z NONE

Ahh, that might be because macOS Big Sur has changed the structure of the photos db. Might need to wait for a later release, there is a PR which adds support for Big Sur.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
photo-to-sqlite: command not found 803338729  
778008752 https://github.com/simonw/datasette/issues/1220#issuecomment-778008752 https://api.github.com/repos/simonw/datasette/issues/1220 MDEyOklzc3VlQ29tbWVudDc3ODAwODc1Mg== aborruso 30607 2021-02-12T06:37:34Z 2021-02-12T06:37:34Z NONE

I have used my path; I'm running it from the folder in which I have the db.

Must I use an absolute path?

Must I create exactly that folder?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Installing datasette via docker: Path 'fixtures.db' does not exist 806743116  
778002092 https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778002092 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33 MDEyOklzc3VlQ29tbWVudDc3ODAwMjA5Mg== robmarkcole 11855322 2021-02-12T06:19:32Z 2021-02-12T06:19:32Z NONE

hi @leafgarland that results in a new error:

(venv) (base) Robins-MacBook:datasette robin$ dogsheep-photos apple-photos photos.db
Traceback (most recent call last):
  File "/Users/robin/datasette/venv/bin/dogsheep-photos", line 8, in <module>
    sys.exit(cli())
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/dogsheep_photos/cli.py", line 206, in apple_photos
    db.conn.execute(
sqlite3.OperationalError: no such table: attached.ZGENERICASSET
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
photo-to-sqlite: command not found 803338729  
777951854 https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-777951854 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33 MDEyOklzc3VlQ29tbWVudDc3Nzk1MTg1NA== leafgarland 675335 2021-02-12T03:54:39Z 2021-02-12T03:54:39Z NONE

I think that is a typo in the docs, you can use

> dogsheep-photos apple-photos photos.db
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
photo-to-sqlite: command not found 803338729  
777949755 https://github.com/simonw/datasette/pull/1223#issuecomment-777949755 https://api.github.com/repos/simonw/datasette/issues/1223 MDEyOklzc3VlQ29tbWVudDc3Nzk0OTc1NQ== codecov[bot] 22429695 2021-02-12T03:45:31Z 2021-02-12T03:45:31Z NONE

Codecov Report

Merging #1223 (d1cd1f2) into main (9603d89) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1223   +/-   ##
=======================================
  Coverage   91.42%   91.42%           
=======================================
  Files          32       32           
  Lines        3955     3955           
=======================================
  Hits         3616     3616           
  Misses        339      339           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9603d89...d1cd1f2. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add compile option to Dockerfile to fix failing test (fixes #696) 806918878  
777927946 https://github.com/simonw/datasette/issues/1220#issuecomment-777927946 https://api.github.com/repos/simonw/datasette/issues/1220 MDEyOklzc3VlQ29tbWVudDc3NzkyNzk0Ng== bobwhitelock 7476523 2021-02-12T02:29:54Z 2021-02-12T02:29:54Z NONE

According to https://github.com/simonw/datasette/blob/master/docs/installation.rst#using-docker it should be

docker run -p 8001:8001 -v `pwd`:/mnt \
    datasetteproject/datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db

This uses /mnt/fixtures.db whereas you're using fixtures.db - did you try using this path instead?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Installing datasette via docker: Path 'fixtures.db' does not exist 806743116  
777690332 https://github.com/dogsheep/evernote-to-sqlite/issues/11#issuecomment-777690332 https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/11 MDEyOklzc3VlQ29tbWVudDc3NzY5MDMzMg== dskrad 3613583 2021-02-11T18:16:01Z 2021-02-11T18:16:01Z NONE

I solved this issue by modifying line 31 of utils.py

content = ET.tostring(ET.fromstring(content_xml.strip())).decode("utf-8")
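The fix works because Python's XML parser rejects leading whitespace before an XML declaration; .strip() removes it before re-parsing. A minimal illustration (sample note content invented):

```python
import xml.etree.ElementTree as ET

# Leading whitespace before the XML declaration triggers the parse error
content_xml = '\n<?xml version="1.0"?><en-note>Hello</en-note>'

try:
    ET.fromstring(content_xml)
    failed = False
except ET.ParseError:
    failed = True  # "XML or text declaration not at start of entity"

# Stripping the whitespace first, as in the modified utils.py line, parses cleanly
content = ET.tostring(ET.fromstring(content_xml.strip())).decode("utf-8")
```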
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
XML parse error 792851444  
777132761 https://github.com/simonw/datasette/issues/1200#issuecomment-777132761 https://api.github.com/repos/simonw/datasette/issues/1200 MDEyOklzc3VlQ29tbWVudDc3NzEzMjc2MQ== bobwhitelock 7476523 2021-02-11T00:29:52Z 2021-02-11T00:29:52Z NONE

I'm probably missing something but what's the use case here - what would this offer over adding limit 10 to the query?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
?_size=10 option for the arbitrary query page would be useful 792890765  
774730656 https://github.com/dogsheep/pocket-to-sqlite/issues/9#issuecomment-774730656 https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9 MDEyOklzc3VlQ29tbWVudDc3NDczMDY1Ng== merwok 635179 2021-02-07T18:45:04Z 2021-02-07T18:45:04Z NONE

That URL uses TLS 1.3, but maybe only if the client supports it.
It could be your Python version or your SSL library that’s not recent enough.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
SSL Error 801780625  
774726123 https://github.com/dogsheep/pocket-to-sqlite/issues/9#issuecomment-774726123 https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9 MDEyOklzc3VlQ29tbWVudDc3NDcyNjEyMw== jfeiwell 12669260 2021-02-07T18:21:08Z 2021-02-07T18:21:08Z NONE

@simonw any ideas here?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
SSL Error 801780625  
774528913 https://github.com/simonw/datasette/issues/1217#issuecomment-774528913 https://api.github.com/repos/simonw/datasette/issues/1217 MDEyOklzc3VlQ29tbWVudDc3NDUyODkxMw== virtadpt 639730 2021-02-06T19:23:41Z 2021-02-06T19:23:41Z NONE

I've had a lot of success running it as an OpenFaaS lambda.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Possible to deploy as a python app (for Rstudio connect server)? 802513359  
774385092 https://github.com/simonw/datasette/issues/1217#issuecomment-774385092 https://api.github.com/repos/simonw/datasette/issues/1217 MDEyOklzc3VlQ29tbWVudDc3NDM4NTA5Mg== pavopax 6165713 2021-02-06T02:49:11Z 2021-02-06T02:49:11Z NONE

A good reference seems to be the note to run datasette as a module in https://github.com/simonw/datasette/pull/556

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Possible to deploy as a python app (for Rstudio connect server)? 802513359  
774286962 https://github.com/simonw/datasette/issues/1208#issuecomment-774286962 https://api.github.com/repos/simonw/datasette/issues/1208 MDEyOklzc3VlQ29tbWVudDc3NDI4Njk2Mg== kbaikov 4488943 2021-02-05T21:02:39Z 2021-02-05T21:02:39Z NONE

@simonw could you please take a look at the PR 1211 that fixes this issue?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper 794554881  
774217792 https://github.com/simonw/sqlite-utils/pull/203#issuecomment-774217792 https://api.github.com/repos/simonw/sqlite-utils/issues/203 MDEyOklzc3VlQ29tbWVudDc3NDIxNzc5Mg== drkane 1049910 2021-02-05T18:44:13Z 2021-02-05T18:44:13Z NONE

Thanks for looking at this - home schooling kids has prevented me from replying.

I'd struggled with how to adapt the API for the foreign keys too - I definitely tried the String/Tuple approach. I hadn't considered the breaking changes that would introduce though. I can take a look at this and try and make the change - see which of your options works best.

I've got a workaround for the use-case I was looking at this for, so it wouldn't be a problem for me if it was put on the back burner until a hypothetical v4.0 anyway.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
changes to allow for compound foreign keys 743384829  
773977128 https://github.com/simonw/datasette/issues/1210#issuecomment-773977128 https://api.github.com/repos/simonw/datasette/issues/1210 MDEyOklzc3VlQ29tbWVudDc3Mzk3NzEyOA== heyarne 525780 2021-02-05T11:30:34Z 2021-02-05T11:30:34Z NONE

Thanks for your quick reply! Having changed my metadata.yml, queries AND database, I can't really reproduce it anymore, sorry. But at least I'm happy to say that it works now! :) Thanks again for the super nifty tool, much appreciated.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Immutable Database w/ Canned Queries 796234313  
772408273 https://github.com/dogsheep/twitter-to-sqlite/issues/56#issuecomment-772408273 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/56 MDEyOklzc3VlQ29tbWVudDc3MjQwODI3Mw== gsajko 42315895 2021-02-03T10:36:36Z 2021-02-03T10:36:36Z NONE

I figured it out.
Those tweets are in the database because somebody quote tweeted them, or retweeted them.
And if you grab a quoted or retweeted tweet from another tweet's JSON, it doesn't include all of the details.

So if someone quote tweeted a quote tweet, the second quote tweet won't have quoted_status.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Not all quoted statuses get fetched? 796736607  
772007663 https://github.com/simonw/datasette/issues/1212#issuecomment-772007663 https://api.github.com/repos/simonw/datasette/issues/1212 MDEyOklzc3VlQ29tbWVudDc3MjAwNzY2Mw== kbaikov 4488943 2021-02-02T21:36:56Z 2021-02-02T21:36:56Z NONE

How do you get 4-5 minutes?
I run my tests in WSL 2, so maybe I need to try a real Linux VM.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Tests are very slow.  797651831  
771127458 https://github.com/simonw/datasette/pull/1211#issuecomment-771127458 https://api.github.com/repos/simonw/datasette/issues/1211 MDEyOklzc3VlQ29tbWVudDc3MTEyNzQ1OA== kbaikov 4488943 2021-02-01T20:13:39Z 2021-02-01T20:13:39Z NONE

Ping @simonw

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use context manager instead of plain open 797649915  
770865698 https://github.com/simonw/datasette/pull/1159#issuecomment-770865698 https://api.github.com/repos/simonw/datasette/issues/1159 MDEyOklzc3VlQ29tbWVudDc3MDg2NTY5OA== lovasoa 552629 2021-02-01T13:42:29Z 2021-02-01T13:42:29Z NONE

@simonw: Could you have a look at this? I think this really improves readability.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Improve the display of facets information 774332247  
770343684 https://github.com/simonw/datasette/pull/1211#issuecomment-770343684 https://api.github.com/repos/simonw/datasette/issues/1211 MDEyOklzc3VlQ29tbWVudDc3MDM0MzY4NA== codecov[bot] 22429695 2021-01-31T08:03:40Z 2021-01-31T08:03:40Z NONE

Codecov Report

Merging #1211 (e33ccaa) into main (dde3c50) will decrease coverage by 0.00%.
The diff coverage is 92.85%.

@@            Coverage Diff             @@
##             main    #1211      +/-   ##
==========================================
- Coverage   91.54%   91.53%   -0.01%     
==========================================
  Files          32       32              
  Lines        3948     3959      +11     
==========================================
+ Hits         3614     3624      +10     
- Misses        334      335       +1     
Impacted Files                  Coverage Δ
datasette/cli.py                77.29% <66.66%> (-0.31%) :arrow_down:
datasette/app.py                95.62% <100.00%> (+<0.01%) :arrow_up:
datasette/publish/cloudrun.py   96.96% <100.00%> (+0.09%) :arrow_up:
datasette/publish/heroku.py     87.73% <100.00%> (+0.60%) :arrow_up:
datasette/utils/__init__.py     94.13% <100.00%> (+0.02%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update dde3c50...e33ccaa. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use context manager instead of plain open 797649915  
770150526 https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-770150526 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51 MDEyOklzc3VlQ29tbWVudDc3MDE1MDUyNg== daniel-butler 22578954 2021-01-30T03:44:19Z 2021-01-30T03:47:24Z NONE

I don't have much experience with GitHub's rate limiting. In my day job we use the tenacity library to handle the HTTP errors we get.
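tenacity wraps this retry-with-backoff pattern in decorators; a minimal stdlib sketch of the same idea (the fetch function and error type here are hypothetical stand-ins, not actual GitHub API calls):

```python
import time

def fetch_with_retry(fetch, max_attempts=5, base_delay=1.0):
    """Call fetch(), retrying with exponential backoff on a rate-limit error."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except RuntimeError:  # stand-in for an HTTP 403/429 rate-limit response
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Simulated flaky endpoint: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(fetch_with_retry(flaky, base_delay=0.01))  # prints "ok" after two retries
```

tenacity expresses the same thing declaratively, e.g. with `@retry(wait=wait_exponential(), stop=stop_after_attempt(5))`.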

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
github-to-sqlite should handle rate limits better 703246031  
770112248 https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770112248 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60 MDEyOklzc3VlQ29tbWVudDc3MDExMjI0OA== daniel-butler 22578954 2021-01-30T00:01:03Z 2021-01-30T01:14:42Z NONE

Yes that would be cool! I wouldn't mind helping. Is this the meat of it? https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/utils.py#L512

It looks like the cli option is added with this decorator : https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/cli.py#L14

I looked a bit at utils.py in the GitHub repository. I was surprised at the amount of manual mapping of the API response you had to do to get this to work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use Data from SQLite in other commands 797097140  
769973212 https://github.com/dogsheep/twitter-to-sqlite/issues/56#issuecomment-769973212 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/56 MDEyOklzc3VlQ29tbWVudDc2OTk3MzIxMg== gsajko 42315895 2021-01-29T18:29:02Z 2021-01-29T18:31:55Z NONE

I think it was with twitter-to-sqlite home-timeline home.db -a auth.json --since
and I'm using only this command to grab tweets

from cron tab
2,7,12,17,22,27,32,37,42,47,52,57 * * * * run-one /home/gsajko/miniconda3/bin/twitter-to-sqlite home-timeline /home/gsajko/work/custom_twitter_feed/home.db -a /home/gsajko/work/custom_twitter_feed/auth/auth.json --since

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Not all quoted statuses get fetched? 796736607  
767888743 https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-767888743 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54 MDEyOklzc3VlQ29tbWVudDc2Nzg4ODc0Mw== henry501 19328961 2021-01-26T23:07:41Z 2021-01-26T23:07:41Z NONE

My import got much further with the applied fixes than 0.21.3, but not 100%. I do appear to have all of the tweets imported at least.
Not sure when I'll have a chance to look further to try to fix or see what didn't make it into the import.

Here's my output:

direct-messages-group: not yet implemented
branch-links: not yet implemented
periscope-expired-broadcasts: not yet implemented
direct-messages: not yet implemented
mute: not yet implemented
periscope-comments-made-by-user: not yet implemented
periscope-ban-information: not yet implemented
periscope-profile-description: not yet implemented
screen-name-change: not yet implemented
manifest: not yet implemented
fleet: not yet implemented
user-link-clicks: not yet implemented
periscope-broadcast-metadata: not yet implemented
contact: not yet implemented
fleet-mute: not yet implemented
device-token: not yet implemented
protected-history: not yet implemented
direct-message-mute: not yet implemented
Traceback (most recent call last):
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/bin/twitter-to-sqlite", line 33, in <module>
    sys.exit(load_entry_point('twitter-to-sqlite==0.21.3', 'console_scripts', 'twitter-to-sqlite')())
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/cli.py", line 772, in import_
    archive.import_from_file(db, filepath.name, open(filepath, "rb").read())
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py", line 233, in import_from_file
    to_insert = transformer(data)
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py", line 21, in callback
    return {filename: [fn(item) for item in data]}
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py", line 21, in <listcomp>
    return {filename: [fn(item) for item in data]}
  File "/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py", line 88, in ageinfo
    return item["ageMeta"]["ageInfo"]
KeyError: 'ageInfo'
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Archive import appears to be broken on recent exports 779088071  
766589070 https://github.com/simonw/datasette/pull/1206#issuecomment-766589070 https://api.github.com/repos/simonw/datasette/issues/1206 MDEyOklzc3VlQ29tbWVudDc2NjU4OTA3MA== codecov[bot] 22429695 2021-01-25T06:50:30Z 2021-01-25T17:31:11Z NONE

Codecov Report

Merging #1206 (06480e1) into main (a5ede3c) will not change coverage.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1206   +/-   ##
=======================================
  Coverage   91.53%   91.53%           
=======================================
  Files          32       32           
  Lines        3947     3947           
=======================================
  Hits         3613     3613           
  Misses        334      334           
Impacted Files         Coverage Δ
datasette/version.py   100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a5ede3c...571476d. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Release 0.54 793086333  
765678057 https://github.com/simonw/sqlite-utils/pull/224#issuecomment-765678057 https://api.github.com/repos/simonw/sqlite-utils/issues/224 MDEyOklzc3VlQ29tbWVudDc2NTY3ODA1Nw== polyrand 37962604 2021-01-22T20:53:06Z 2021-01-23T20:13:27Z NONE

I'm using the FTS methods in sqlite-utils for this website: drwn.io. I wanted to get pagination for some kind of infinite scrolling on the landing page, and I ended up using that.
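For context, offset-based paging over an FTS match — the pattern this PR documents — can be sketched with plain sqlite3 and FTS5 (the table and column names here are made up for the demo, not drwn.io's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE posts USING fts5(title, body)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?)",
    [(f"post {i}", "a drawing about pagination") for i in range(25)],
)

def search_page(conn, query, page, per_page=10):
    # LIMIT/OFFSET over an FTS5 MATCH, ordered by relevance rank —
    # each "infinite scroll" request asks for the next page number.
    return conn.execute(
        "SELECT title FROM posts WHERE posts MATCH ? "
        "ORDER BY rank LIMIT ? OFFSET ?",
        (query, per_page, page * per_page),
    ).fetchall()

print(len(search_page(conn, "pagination", 0)))  # first page: 10 rows
print(len(search_page(conn, "pagination", 2)))  # last page: the remaining 5
```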

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add fts offset docs. 792297010  
765639968 https://github.com/simonw/datasette/issues/1196#issuecomment-765639968 https://api.github.com/repos/simonw/datasette/issues/1196 MDEyOklzc3VlQ29tbWVudDc2NTYzOTk2OA== QAInsights 2826376 2021-01-22T19:37:15Z 2021-01-22T19:37:15Z NONE

I tried deployment in WSL. It is working fine https://jmeter.vercel.app/

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Access Denied Error in Windows 791237799  
765525338 https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765525338 https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1 MDEyOklzc3VlQ29tbWVudDc2NTUyNTMzOA== cobiadigital 25372415 2021-01-22T16:22:44Z 2021-01-22T16:22:44Z NONE

rs1333049 associated with coronary artery disease
https://www.snpedia.com/index.php/Rs1333049

select rsid, genotype, case genotype
  when 'CC' then '1.9x increased risk for coronary artery disease'
  when 'CG' then '1.5x increased risk for CAD'
  when 'GG' then 'normal'
end as interpretation from genome where rsid = 'rs1333049'
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Figure out some interesting example SQL queries 496415321  
765523517 https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765523517 https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1 MDEyOklzc3VlQ29tbWVudDc2NTUyMzUxNw== cobiadigital 25372415 2021-01-22T16:20:25Z 2021-01-22T16:20:25Z NONE

rs53576: the oxytocin receptor (OXTR) gene

select rsid, genotype, case genotype
  when 'AA' then 'Lack of empathy?'
  when 'AG' then 'Lack of empathy?'
  when 'GG' then 'Optimistic and empathetic; handle stress well'
end as interpretation from genome where rsid = 'rs53576'
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Figure out some interesting example SQL queries 496415321  
765506901 https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765506901 https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1 MDEyOklzc3VlQ29tbWVudDc2NTUwNjkwMQ== cobiadigital 25372415 2021-01-22T15:58:41Z 2021-01-22T15:58:58Z NONE

Both rs10757274 and rs2383206 can both indicate higher risks of heart disease
https://www.snpedia.com/index.php/Rs2383206

select rsid, genotype, case genotype
  when 'AA' then 'Normal'
  when 'AG' then '~1.2x increased risk for heart disease'
  when 'GG' then '~1.3x increased risk for heart disease'
end as interpretation from genome where rsid = 'rs10757274'
select rsid, genotype, case genotype
  when 'AA' then 'Normal'
  when 'AG' then '1.4x increased risk for heart disease'
  when 'GG' then '1.7x increased risk for heart disease'
end as interpretation from genome where rsid = 'rs2383206'
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Figure out some interesting example SQL queries 496415321  
765502845 https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765502845 https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1 MDEyOklzc3VlQ29tbWVudDc2NTUwMjg0NQ== cobiadigital 25372415 2021-01-22T15:53:19Z 2021-01-22T15:53:19Z NONE

rs7903146 Influences risk of Type-2 diabetes
https://www.snpedia.com/index.php/Rs7903146

select rsid, genotype, case genotype
  when 'CC' then 'Normal (lower) risk of Type 2 Diabetes and Gestational Diabetes.'
  when 'CT' then '1.4x increased risk for diabetes (and perhaps colon cancer).'
  when 'TT' then '2x increased risk for Type-2 diabetes'
end as interpretation from genome where rsid = 'rs7903146'
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Figure out some interesting example SQL queries 496415321  
765498984 https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765498984 https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1 MDEyOklzc3VlQ29tbWVudDc2NTQ5ODk4NA== cobiadigital 25372415 2021-01-22T15:48:25Z 2021-01-22T15:49:33Z NONE

The "Warrior Gene" https://www.snpedia.com/index.php/Rs4680

select rsid, genotype, case genotype
  when 'AA' then '(worrier) advantage in memory and attention tasks'
  when 'AG' then 'Intermediate dopamine levels, other effects'
  when 'GG' then '(warrior) multiple associations, see details'
end as interpretation from genome where rsid = 'rs4680'
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Figure out some interesting example SQL queries 496415321  
765495861 https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765495861 https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1 MDEyOklzc3VlQ29tbWVudDc2NTQ5NTg2MQ== cobiadigital 25372415 2021-01-22T15:44:00Z 2021-01-22T15:44:00Z NONE

Risk of autoimmune disorders: https://www.snpedia.com/index.php/Genotype

select rsid, genotype, case genotype
  when 'AA' then '2x risk of rheumatoid arthritis and other autoimmune diseases'
  when 'GG' then 'Normal risk for autoimmune disorders'
end as interpretation from genome where rsid = 'rs2476601'
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Figure out some interesting example SQL queries 496415321  
762488336 https://github.com/simonw/datasette/issues/1175#issuecomment-762488336 https://api.github.com/repos/simonw/datasette/issues/1175 MDEyOklzc3VlQ29tbWVudDc2MjQ4ODMzNg== hannseman 758858 2021-01-18T21:59:28Z 2021-01-18T22:00:31Z NONE

I encountered your issue while trying to find a solution, and came up with the following; maybe it can help.

import logging.config
from typing import Tuple

import structlog
import uvicorn

from example.config import settings

shared_processors: Tuple[structlog.types.Processor, ...] = (
    structlog.contextvars.merge_contextvars,
    structlog.stdlib.add_logger_name,
    structlog.stdlib.add_log_level,
    structlog.processors.TimeStamper(fmt="iso"),
)

logging_config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "json": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.processors.JSONRenderer(),
            "foreign_pre_chain": shared_processors,
        },
        "console": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.dev.ConsoleRenderer(),
            "foreign_pre_chain": shared_processors,
        },
        **uvicorn.config.LOGGING_CONFIG["formatters"],
    },
    "handlers": {
        "default": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "json" if not settings.debug else "console",
        },
        "uvicorn.access": {
            "level": "INFO",
            "class": "logging.StreamHandler",
            "formatter": "access",
        },
        "uvicorn.default": {
            "level": "INFO",
            "class": "logging.StreamHandler",
            "formatter": "default",
        },
    },
    "loggers": {
        "": {"handlers": ["default"], "level": "INFO"},
        "uvicorn.error": {
            "handlers": ["default" if not settings.debug else "uvicorn.default"],
            "level": "INFO",
            "propagate": False,
        },
        "uvicorn.access": {
            "handlers": ["default" if not settings.debug else "uvicorn.access"],
            "level": "INFO",
            "propagate": False,
        },
    },
}


def setup_logging() -> None:
    structlog.configure(
        processors=[
            structlog.stdlib.filter_by_level,
            *shared_processors,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.AsyncBoundLogger,
        cache_logger_on_first_use=True,
    )
    logging.config.dictConfig(logging_config)

And then it'll be run on the startup event:

@app.on_event("startup")
async def startup_event() -> None:
    setup_logging()

It depends on a setting called debug which controls whether it should output the regular uvicorn logging or JSON.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use structlog for logging 779156520  
762391426 https://github.com/simonw/datasette/issues/1036#issuecomment-762391426 https://api.github.com/repos/simonw/datasette/issues/1036 MDEyOklzc3VlQ29tbWVudDc2MjM5MTQyNg== philshem 4997607 2021-01-18T17:45:00Z 2021-01-18T17:45:00Z NONE

It might be possible with this library: https://docs.python.org/3/library/imghdr.html

quick test of the downloaded blob:

>>> import imghdr
>>> imghdr.what('material_culture-1-image.blob')
'jpeg'

The output plugin would be cool. I'll look into making my first datasette plugin. I'm also imagining displaying the image in the browser -- but that would be a step 2.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Make it possible to download BLOB data from the Datasette UI 725996507  
762385981 https://github.com/simonw/datasette/issues/1036#issuecomment-762385981 https://api.github.com/repos/simonw/datasette/issues/1036 MDEyOklzc3VlQ29tbWVudDc2MjM4NTk4MQ== philshem 4997607 2021-01-18T17:32:13Z 2021-01-18T17:34:50Z NONE

Hi Simon

Just finding this old issue regarding downloading blobs. Nice work!

https://user-images.githubusercontent.com/4997607/104946741-df4bad80-59ba-11eb-96e3-727c85cc4dc6.png

As a feature request, maybe it would be possible to assign a blob column as a certain data type (e.g. .jpg) and then each blob could be downloaded as that type of file (perhaps if the file types were constrained to normal blobs that people store in sqlite databases, this could avoid the execution stuff mentioned above).

I guess the column blob-type definition could fit into this dropdown selection:

https://user-images.githubusercontent.com/4997607/104947000-479a8f00-59bb-11eb-87d9-1644e5940894.png

Let me know if I should open a new issue with a feature request. (This could slowly go in the direction of displaying image blob-types in the browser.)

Thanks for the great tool!


edit: just reading the rest of the twitter thread: https://twitter.com/simonw/status/1318685933256855552

perhaps this is already possible in some form with the plugin datasette-media: https://github.com/simonw/datasette-media

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Make it possible to download BLOB data from the Datasette UI 725996507  
761015218 https://github.com/simonw/sqlite-utils/issues/220#issuecomment-761015218 https://api.github.com/repos/simonw/sqlite-utils/issues/220 MDEyOklzc3VlQ29tbWVudDc2MTAxNTIxOA== mhalle 649467 2021-01-15T15:40:08Z 2021-01-15T15:40:08Z NONE

Makes sense. If you're coming from the sqlite3 side of things, rather than the datasette side, wanting the fts methods to work for views makes more sense. sqlite3 allows fts5 tables on views, so I was looking for CLI functionality to build the fts virtual tables. Ultimately, though, sharing fts virtual tables across tables and derivative views is likely more efficient.

Maybe an explicit error message like "fts is not supported for views", rather than just throwing an exception that the method doesn't exist, might be helpful. Not critical though.

Thanks.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Better error message for *_fts methods against views 783778672  
760950128 https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-760950128 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55 MDEyOklzc3VlQ29tbWVudDc2MDk1MDEyOA== jacobian 21148 2021-01-15T13:44:52Z 2021-01-15T13:44:52Z NONE

I found and fixed another bug, this one around importing the tweets table. @simonw let me know if you'd prefer this broken out into multiple PRs, happy to do that if it makes review/merging easier.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fix archive imports 779211940  
759306228 https://github.com/simonw/datasette/pull/1159#issuecomment-759306228 https://api.github.com/repos/simonw/datasette/issues/1159 MDEyOklzc3VlQ29tbWVudDc1OTMwNjIyOA== lovasoa 552629 2021-01-13T08:59:31Z 2021-01-13T08:59:31Z NONE

@simonw: Did you have time to take a look at this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Improve the display of facets information 774332247  
758668359 https://github.com/simonw/datasette/issues/1091#issuecomment-758668359 https://api.github.com/repos/simonw/datasette/issues/1091 MDEyOklzc3VlQ29tbWVudDc1ODY2ODM1OQ== tballison 6739646 2021-01-12T13:52:29Z 2021-01-12T13:52:29Z NONE

Y, thank you to both of you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
.json and .csv exports fail to apply base_url 742011049  
758448525 https://github.com/simonw/datasette/issues/1091#issuecomment-758448525 https://api.github.com/repos/simonw/datasette/issues/1091 MDEyOklzc3VlQ29tbWVudDc1ODQ0ODUyNQ== henry501 19328961 2021-01-12T06:55:08Z 2021-01-12T06:55:08Z NONE

Great, really happy I could help! Reverse proxies get tricky.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
.json and .csv exports fail to apply base_url 742011049  
758280611 https://github.com/simonw/datasette/issues/1091#issuecomment-758280611 https://api.github.com/repos/simonw/datasette/issues/1091 MDEyOklzc3VlQ29tbWVudDc1ODI4MDYxMQ== tballison 6739646 2021-01-11T23:06:10Z 2021-01-11T23:06:10Z NONE

+1

Yep! Fixes it. If I navigate to https://corpora.tika.apache.org/datasette, I get a 404 (database not found: datasette), but if I navigate to https://corpora.tika.apache.org/datasette/file_profiles/, everything WORKS!

Thank you!

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
.json and .csv exports fail to apply base_url 742011049  
756425587 https://github.com/simonw/datasette/issues/1091#issuecomment-756425587 https://api.github.com/repos/simonw/datasette/issues/1091 MDEyOklzc3VlQ29tbWVudDc1NjQyNTU4Nw== henry501 19328961 2021-01-07T22:27:19Z 2021-01-07T22:27:19Z NONE

I found this issue while troubleshooting the same behavior with an nginx reverse proxy. The solution was to make sure I set:

proxy_pass http://server:8001/baseurl/
instead of just:

proxy_pass http://server:8001
The custom SQL query and header links are now correct.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
.json and .csv exports fail to apply base_url 742011049  
754911290 https://github.com/simonw/datasette/issues/1171#issuecomment-754911290 https://api.github.com/repos/simonw/datasette/issues/1171 MDEyOklzc3VlQ29tbWVudDc1NDkxMTI5MA== rcoup 59874 2021-01-05T21:31:15Z 2021-01-05T21:31:15Z NONE

We did this for Sno under macOS — it's a PyInstaller binary/setup which uses Packages for packaging.

FYI (if you ever get to it) for Windows you need to get a code signing certificate. And if you want automated CI, you'll want to get an "EV CodeSigning for HSM" certificate from GlobalSign, which then lets you put the certificate into Azure Key Vault, which you can use with azuresigntool to sign your code & installer. (Non-EV certificates are a waste of time; the user still gets big warnings at install time.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
GitHub Actions workflow to build and sign macOS binary executables 778450486  
754729035 https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754729035 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54 MDEyOklzc3VlQ29tbWVudDc1NDcyOTAzNQ== jacobian 21148 2021-01-05T16:03:29Z 2021-01-05T16:03:29Z NONE

I was able to fix this, at least enough to get my archive to import. Not sure if there's more work to be done here or not.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Archive import appears to be broken on recent exports 779088071  
754728696 https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-754728696 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55 MDEyOklzc3VlQ29tbWVudDc1NDcyODY5Ng== jacobian 21148 2021-01-05T16:02:55Z 2021-01-05T16:02:55Z NONE

This now works for me, though I'm entirely unsure if it's a just-my-export thing or a wider issue. Also, this doesn't contain any tests, so I'm not sure if there's more work to be done here or if this is good enough.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fix archive imports 779211940  
754721153 https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54 MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw== jacobian 21148 2021-01-05T15:51:09Z 2021-01-05T15:51:09Z NONE

Correction: the failure is on lists-member.js (I was thrown by the block variable name, but that's just a coincidence)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Archive import appears to be broken on recent exports 779088071  
754210356 https://github.com/simonw/datasette/issues/983#issuecomment-754210356 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1NDIxMDM1Ng== carlmjohnson 222245 2021-01-04T20:49:05Z 2021-01-04T20:49:05Z NONE

For reasons I've written about elsewhere, I'm in favor of modules. They have several beneficial effects. One, old browsers just ignore them altogether. Two, if you include the same plain script on the page more than once, it will be executed twice, but if you include the same module script on a page twice, it will only execute once. Three, you get a module-local namespace, instead of having to use the global window namespace or a function-private namespace.

OTOH, if you are going to use an old-style script, the code from before isn't ideal, because you wipe out your registry if the script is included more than once. Also, you may as well use object methods and splat arguments.

The event-based architecture probably makes more sense though. Just make up some event names prefixed with datasette: and listen for them on the root. The only concern with that approach is it can sometimes be tricky to make sure your plugins run after datasette has run. Maybe:

function mycallback(){
  // whatever
}

if (window.datasette) {
  window.datasette.init(mycallback);
} else {
  document.addEventListener('datasette:init', mycallback);
}
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
754181647 https://github.com/simonw/datasette/issues/983#issuecomment-754181647 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1NDE4MTY0Nw== jussiarpalahti 11941245 2021-01-04T19:52:40Z 2021-01-04T19:52:40Z NONE

I was thinking JavaScript plugins would go with server-side template extensions' custom HTML. Attach my own widgets there and listen for Datasette events to refresh when the user interacts with the main UI. Like a map view or a table that updates according to the selected column. There are certainly other ways to look at this. Perhaps you could list possible hooks or a high-level design doc on what would be possible with the plugin system?

Re: modules. I would like to see modules supported at least in development. The developer experience is so much better than what JavaScript coding has been in the past. With large parts of NPM at your disposal, I'd imagine even a less experienced coder can whip up a custom plugin in no time. A proper production build system (like the one you get with Pika or Parcel) could package everything up into bundles that older browsers can understand. Though that does come with performance and size penalties alongside the added complexity.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
754002859 https://github.com/simonw/datasette/pull/1170#issuecomment-754002859 https://api.github.com/repos/simonw/datasette/issues/1170 MDEyOklzc3VlQ29tbWVudDc1NDAwMjg1OQ== codecov[bot] 22429695 2021-01-04T14:22:52Z 2021-01-04T14:22:52Z NONE

Codecov Report

Merging #1170 (a5761cc) into main (1e8fa3a) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1170   +/-   ##
=======================================
  Coverage   91.55%   91.55%           
=======================================
  Files          32       32           
  Lines        3932     3932           
=======================================
  Hits         3600     3600           
  Misses        332      332           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1e8fa3a...a5761cc. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Install Prettier via package.json 778126516  
753600999 https://github.com/simonw/datasette/issues/983#issuecomment-753600999 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1MzYwMDk5OQ== MarkusH 475613 2021-01-03T11:11:21Z 2021-01-03T11:11:21Z NONE

With regards to JS/Browser events, given your example of menu items that plugins could add, I could imagine this code to work:

// as part of datasette
datasette.events.AddMenuItem = 'DatasetteAddMenuItemEvent';
document.addEventListener(datasette.events.AddMenuItem, (e) => {
  // do whatever is needed to add the menu item. Data comes from `e.detail`
  alert(e.detail.title + ' ' + e.detail.link);
});

// as part of a plugin (a CustomEvent is needed to carry custom data)
const event = new CustomEvent(datasette.events.AddMenuItem, {detail: {link: '/foo/bar', title: 'Go somewhere'}});
document.dispatchEvent(event);
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
753587963 https://github.com/simonw/datasette/issues/983#issuecomment-753587963 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1MzU4Nzk2Mw== dracos 154364 2021-01-03T09:02:50Z 2021-01-03T10:00:05Z NONE

but I'm already commited to requiring support for () => {} arrow functions

Don't think you are :) (e.g. gzipped, using arrow functions in my example saves 2 bytes over spelling out function). On FMS, past month, looking at popular browsers, looks like we'd have 95.41% arrow support, 94.19% module support, and 4.58% (mostly IE9/IE11/Safari 9) supporting neither.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
753224999 https://github.com/simonw/datasette/issues/983#issuecomment-753224999 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1MzIyNDk5OQ== jussiarpalahti 11941245 2020-12-31T23:29:36Z 2020-12-31T23:29:36Z NONE

I have yet to build a Datasette plugin and am unfamiliar with Pluggy. Since browsers have event handling built in, Datasette could communicate with plugins through it. Handlers register as listeners for custom Datasette events and Datasette's JS can then trigger said events.

I was also wondering if you had looked at JavaScript Modules for JS plugins? With services like Skypack (https://www.skypack.dev) NPM libraries can be loaded directly into the browser, no build step needed. The same goes for local JS if you adhere to the ES Module spec.

If minification is required then tools such as Snowpack (https://www.snowpack.dev) could fit better. It uses https://github.com/evanw/esbuild for bundling and minification.

In a plugin you'd simply:

import {register} from '/assets/js/datasette'
register.on({'click' : my_func})

In Datasette HTML pages' head you'd merely import these files as modules one by one.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
753218817 https://github.com/simonw/datasette/issues/983#issuecomment-753218817 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1MzIxODgxNw== yozlet 173848 2020-12-31T22:32:25Z 2020-12-31T22:32:25Z NONE

Amazing work! And you've put in far more work than I'd expect to reduce the payload (which is admirable).

So, to add a plugin with the current design, it goes in (a) the template or (b) a bookmarklet, right?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
753033121 https://github.com/simonw/datasette/issues/1165#issuecomment-753033121 https://api.github.com/repos/simonw/datasette/issues/1165 MDEyOklzc3VlQ29tbWVudDc1MzAzMzEyMQ== dracos 154364 2020-12-31T19:33:47Z 2020-12-31T19:33:47Z NONE

Sorry to go on about it, but it's my only example ;) And thought it might be of interest/use. Here is FixMyStreet's Cypress workflow https://github.com/mysociety/fixmystreet/blob/master/.github/workflows/cypress.yml with the master script that sets up server etc at https://github.com/mysociety/fixmystreet/blob/master/bin/browser-tests (that has features such as working inside/outside Vagrant, and can do JS code coverage) and then the tests are at https://github.com/mysociety/fixmystreet/tree/master/.cypress/cypress/integration

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Mechanism for executing JavaScript unit tests 776635426  
752882797 https://github.com/simonw/datasette/issues/983#issuecomment-752882797 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1Mjg4Mjc5Nw== dracos 154364 2020-12-31T08:07:59Z 2020-12-31T15:04:32Z NONE

If you're using arrow functions, you can presumably use default parameters, not much difference in support. That would save you 9 bytes. But OTOH you need "use strict"; to use arrow functions etc, and that's 13 bytes.

Your latest 250-byte one, with use strict, gzips to 199 bytes. The following might be 292 bytes, but compresses to 204, basically the same, and works in any browser (well, IE9+) at all:

var datasette=datasette||{};datasette.plugins=function(){var d={};return{register:function(b,c,e){d[b]||(d[b]=[]);d[b].push([c,e])},call:function(b,c){c=c||{};var e=[];(d[b]||[]).forEach(function(a){a=a[0].apply(a[0],a[1].map(function(a){return c[a]}));void 0!==a&&e.push(a)});return e}}}();

Source for that is below; I replaced the [fn,parameters] because closure-compiler includes a polyfill for that, and I ran closure-compiler --language_out ECMASCRIPT3:

var datasette = datasette || {};
datasette.plugins = (() => {
    var registry = {};
    return {
        register: (hook, fn, parameters) => {
            if (!registry[hook]) {
                registry[hook] = [];
            }
            registry[hook].push([fn, parameters]);
        },
        call: (hook, args) => {
            args = args || {};
            var results = [];
            (registry[hook] || []).forEach((data) => {
                /* Call with the correct arguments */
                var result = data[0].apply(data[0], data[1].map(parameter => args[parameter]));
                if (result !== undefined) {
                    results.push(result);
                }
            });
            return results;
        }
    };
})();
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
752888552 https://github.com/simonw/datasette/issues/983#issuecomment-752888552 https://api.github.com/repos/simonw/datasette/issues/983 MDEyOklzc3VlQ29tbWVudDc1Mjg4ODU1Mg== dracos 154364 2020-12-31T08:33:11Z 2020-12-31T08:34:27Z NONE

If you could say that all hook functions had to accept one options parameter (and could use object destructuring if they wished to only see a subset), you could have this, which minifies (to all-browser-JS) to 200 bytes, gzips to 146, and works practically the same:

var datasette = datasette || {};
datasette.plugins = (() => {
    var registry = {};
    return {
        register: (hook, fn) => {
            registry[hook] = registry[hook] || [];
            registry[hook].push(fn);
        },
        call: (hook, args) => {
            var results = (registry[hook] || []).map(fn => fn(args||{}));
            return results;
        }
    };
})();

var datasette=datasette||{};datasette.plugins=function(){var b={};return{register:function(a,c){b[a]=b[a]||[];b[a].push(c)},call:function(a,c){return(b[a]||[]).map(function(a){return a(c||{})})}}}();

Called the same, definitions tiny bit different:

datasette.plugins.register('numbers', ({a, b}) => a + b)
datasette.plugins.register('numbers', o => o.a * o.b)
datasette.plugins.call('numbers', {a: 4, b: 6})
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript plugin hooks mechanism similar to pluggy 712260429  
751504136 https://github.com/simonw/datasette/issues/417#issuecomment-751504136 https://api.github.com/repos/simonw/datasette/issues/417 MDEyOklzc3VlQ29tbWVudDc1MTUwNDEzNg== drewda 212369 2020-12-27T19:02:06Z 2020-12-27T19:02:06Z NONE

Very much looking forward to seeing this functionality come together. This is probably out-of-scope for an initial release, but in the future it could be useful to also think about how to run this in a containerized context. For example, an immutable datasette container that points to an S3 bucket of SQLite DBs or CSVs. Or an immutable datasette container pointing to an NFS volume elsewhere on a Kubernetes cluster.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette Library 421546944  
751476406 https://github.com/simonw/datasette/issues/1150#issuecomment-751476406 https://api.github.com/repos/simonw/datasette/issues/1150 MDEyOklzc3VlQ29tbWVudDc1MTQ3NjQwNg== noklam 18221871 2020-12-27T14:51:39Z 2020-12-27T14:51:39Z NONE

I like the idea of _internal; it's a nice way to get a data catalog quickly. I wonder if this trick applies to databases other than SQLite.
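For anyone curious, the core of the trick can at least be sketched in plain Python with sqlite3: attach each database file to a single in-memory connection and record its tables in a catalog table. The catalog schema below is made up for illustration and is not Datasette's actual _internal schema.

```python
import sqlite3

def build_catalog(db_paths):
    # isolation_level=None keeps the connection in autocommit mode,
    # since ATTACH cannot run inside an open transaction
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE catalog (database TEXT, table_name TEXT)")
    for i, path in enumerate(db_paths):
        alias = "db{}".format(i)
        conn.execute("ATTACH DATABASE ? AS {}".format(alias), (path,))
        # Each attached database exposes its own sqlite_master
        for (name,) in conn.execute(
            "SELECT name FROM {}.sqlite_master WHERE type = 'table'".format(alias)
        ):
            conn.execute("INSERT INTO catalog VALUES (?, ?)", (path, name))
    return conn
```

Once built, the catalog can itself be queried with SQL, which is what makes the in-memory approach appealing.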

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Maintain an in-memory SQLite table of connected databases and their tables 770436876  
751375487 https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59 MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw== frosencrantz 631242 2020-12-26T17:08:44Z 2020-12-26T17:08:44Z NONE

Hi @simonw, do I need to do anything else for this PR to be considered for inclusion? I've tried using this project and it is quite nice to be able to explore a repository, but I noticed that a couple of commands don't allow you to use authorization from the environment variable.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Remove unneeded exists=True for -a/--auth flag. 771872303  
751127384 https://github.com/simonw/datasette/issues/417#issuecomment-751127384 https://api.github.com/repos/simonw/datasette/issues/417 MDEyOklzc3VlQ29tbWVudDc1MTEyNzM4NA== dyllan-to-yu 1279360 2020-12-24T22:56:48Z 2020-12-24T22:56:48Z NONE

Instead of scanning the directory every 10s, have you considered listening for native filesystem events to notify you of updates?

Python has a nice module to do this for you, called watchdog.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette Library 421546944  
751125270 https://github.com/dogsheep/dogsheep-photos/issues/28#issuecomment-751125270 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/28 MDEyOklzc3VlQ29tbWVudDc1MTEyNTI3MA== jmelloy 129786 2020-12-24T22:26:22Z 2020-12-24T22:26:22Z NONE

This comes up if you've run the photo export without running an S3 upload.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Invalid SQL no such table: main.uploads 624490929  
750849460 https://github.com/simonw/datasette/pull/1159#issuecomment-750849460 https://api.github.com/repos/simonw/datasette/issues/1159 MDEyOklzc3VlQ29tbWVudDc1MDg0OTQ2MA== codecov[bot] 22429695 2020-12-24T11:07:35Z 2020-12-24T11:29:21Z NONE

Codecov Report

Merging #1159 (c820abd) into main (a882d67) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1159   +/-   ##
=======================================
  Coverage   91.55%   91.55%           
=======================================
  Files          32       32           
  Lines        3930     3930           
=======================================
  Hits         3598     3598           
  Misses        332      332           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a882d67...c820abd. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Improve the display of facets information 774332247  
750373496 https://github.com/simonw/datasette/pull/1158#issuecomment-750373496 https://api.github.com/repos/simonw/datasette/issues/1158 MDEyOklzc3VlQ29tbWVudDc1MDM3MzQ5Ng== codecov[bot] 22429695 2020-12-23T16:26:06Z 2020-12-23T16:26:06Z NONE

Codecov Report

Merging #1158 (37ce72f) into main (90eba4c) will not change coverage.
The diff coverage is 87.50%.

@@           Coverage Diff           @@
##             main    #1158   +/-   ##
=======================================
  Coverage   91.55%   91.55%           
=======================================
  Files          32       32           
  Lines        3930     3930           
=======================================
  Hits         3598     3598           
  Misses        332      332           
<table> <thead> <tr> <th>Impacted Files</th> <th>Coverage Δ</th> <th></th> </tr> </thead> <tbody> <tr> <td>datasette/cli.py</td> <td>77.41% <ø> (ø)</td> <td></td> </tr> <tr> <td>datasette/facets.py</td> <td>89.04% <ø> (ø)</td> <td></td> </tr> <tr> <td>datasette/filters.py</td> <td>94.35% <ø> (ø)</td> <td></td> </tr> <tr> <td>datasette/hookspecs.py</td> <td>100.00% <ø> (ø)</td> <td></td> </tr> <tr> <td>datasette/inspect.py</td> <td>36.11% <ø> (ø)</td> <td></td> </tr> <tr> <td>datasette/renderer.py</td> <td>94.02% <ø> (ø)</td> <td></td> </tr> <tr> <td>datasette/views/base.py</td> <td>95.01% <50.00%> (ø)</td> <td></td> </tr> <tr> <td>datasette/app.py</td> <td>95.85% <100.00%> (ø)</td> <td></td> </tr> <tr> <td>datasette/utils/__init__.py</td> <td>94.11% <100.00%> (ø)</td> <td></td> </tr> <tr> <td>datasette/utils/asgi.py</td> <td>92.13% <100.00%> (ø)</td> <td></td> </tr> <tr> <td>... and 1 more</td> <td></td> <td></td> </tr> </tbody> </table>

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 90eba4c...37ce72f. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Modernize code to Python 3.6+ 773913793  
748436115 https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15 MDEyOklzc3VlQ29tbWVudDc0ODQzNjExNQ== nickvazz 8573886 2020-12-19T07:43:38Z 2020-12-19T07:47:36Z NONE

Hey Simon! I really enjoy datasette so far, just started trying it out today following your iPhone photos example.

I am not sure if you had run into this or not, but it seems like they might have changed one of the column names from
ZGENERICASSET to ZASSET. Should I open a PR?

Would change:
- here
- here
- here

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Expose scores from ZCOMPUTEDASSETATTRIBUTES 612151767  
748436453 https://github.com/dogsheep/twitter-to-sqlite/issues/53#issuecomment-748436453 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53 MDEyOklzc3VlQ29tbWVudDc0ODQzNjQ1Mw== anotherjesse 27 2020-12-19T07:47:01Z 2020-12-19T07:47:01Z NONE

I think this should probably be closed as won't fix.

Attempting to make a patch for this, I realized that since_id would limit to tweets posted since that since_id, not to when they were favorited. So favoriting something older would be missed if you used --since with a cron script.

Better to just use --stop_after set to a couple hundred.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
--since support for favorites 771324837  
748436195 https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-748436195 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21 MDEyOklzc3VlQ29tbWVudDc0ODQzNjE5NQ== nickvazz 8573886 2020-12-19T07:44:32Z 2020-12-19T07:44:49Z NONE

I have also run into this a bit. Would it be possible to post your requirements.txt so I can try to reproduce your blog post?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
bpylist.archiver.CircularReference: archive has a cycle with uid(13) 615474990  
747130908 https://github.com/dogsheep/google-takeout-to-sqlite/issues/2#issuecomment-747130908 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/2 MDEyOklzc3VlQ29tbWVudDc0NzEzMDkwOA== khimaros 231498 2020-12-17T00:47:04Z 2020-12-17T00:47:43Z NONE

it looks like almost all of the memory consumption is coming from json.load().

another direction here may be to use the new "Semantic Location History" data which is already broken down by year and month.

it also provides much more interesting data, such as estimated address, form of travel, etc.
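As a sketch of that second direction: since the Semantic Location History files are split per month, they can be loaded one at a time so peak memory is bounded by a single month's file. The directory layout and the timelineObjects key below are assumptions based on a typical Takeout export, so treat this as illustrative.

```python
import json
from pathlib import Path

def iter_semantic_history(takeout_dir):
    # Walk the per-year directories and json.load() one month at a time,
    # instead of loading the entire Location History file in one go
    base = Path(takeout_dir) / "Semantic Location History"
    for month_file in sorted(base.glob("*/*.json")):
        with open(month_file, encoding="utf-8") as f:
            month = json.load(f)
        for obj in month.get("timelineObjects", []):
            yield obj
```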

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
killed by oomkiller on large location-history 769376447  
745162571 https://github.com/simonw/datasette/issues/1142#issuecomment-745162571 https://api.github.com/repos/simonw/datasette/issues/1142 MDEyOklzc3VlQ29tbWVudDc0NTE2MjU3MQ== nitinpaul 6622733 2020-12-15T09:22:58Z 2020-12-15T09:22:58Z NONE

You're right, it's probably more straightforward to have the links for JSON. I was imagining toggling the href for the 'Export JSON' link (button) to the selected shape, but it'll probably be needlessly complex in the end.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Stream all rows" is not at all obvious 763361458  
744618787 https://github.com/simonw/datasette/issues/1143#issuecomment-744618787 https://api.github.com/repos/simonw/datasette/issues/1143 MDEyOklzc3VlQ29tbWVudDc0NDYxODc4Nw== yurivish 114388 2020-12-14T18:15:00Z 2020-12-15T02:21:53Z NONE

From a quick look at the README, it does seem to do everything I need, thanks!

I think the argument for inclusion in core is to lower the chances of unwanted data access. A local server can be accessed by anybody who can make an HTTP request to your computer regardless of CORS rules, but the default * rule additionally opens up access to the local instance to any website you visit while it is running.

That's probably not what people typically intend, particularly when the data is of a sensitive nature. A default of requiring the user to specify the origin (allowing * but encouraging a narrower scope) would solve this problem entirely, I think.
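A narrower policy is simple to express. Here is a hypothetical helper (the function name and shape are illustrative, not Datasette's API) that only echoes back origins the user explicitly allowed, while still supporting an opt-in *:

```python
def allow_origin_header(request_origin, allowed_origins):
    # Return the Access-Control-Allow-Origin value to send,
    # or None to omit the header entirely
    if "*" in allowed_origins:
        return "*"
    if request_origin in allowed_origins:
        # Echo back only an origin the user explicitly opted in to
        return request_origin
    return None
```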

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
More flexible CORS support in core, to encourage good security practices 764059235  
744522099 https://github.com/simonw/datasette/issues/1142#issuecomment-744522099 https://api.github.com/repos/simonw/datasette/issues/1142 MDEyOklzc3VlQ29tbWVudDc0NDUyMjA5OQ== nitinpaul 6622733 2020-12-14T15:37:47Z 2020-12-14T15:37:47Z NONE

Alright, I could give it a try! This might be a stupid question: can you tell me how to run the server from my fork, so that I can test the changes?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Stream all rows" is not at all obvious 763361458  
744489028 https://github.com/simonw/datasette/issues/1144#issuecomment-744489028 https://api.github.com/repos/simonw/datasette/issues/1144 MDEyOklzc3VlQ29tbWVudDc0NDQ4OTAyOA== MarkusH 475613 2020-12-14T14:47:11Z 2020-12-14T14:47:11Z NONE

Thanks for opening the issue, @simonw. Let me elaborate on my Tweets.

datasette-chartjs provides drop down lists to pick the chart visualization (e.g. bar, line, doughnut, pie, ...) as well as the column used for the "x axis" (e.g. time).

A user can change the values on-demand. The chart will be redrawn w/o querying the database again.

However, if a user wants to change the underlying query, they will use the SQL field provided by datasette or any of the other datasette built-in features to amend a query. In order to maintain a user's selections for the plugin, datasette-chartjs copies some parts of datasette-vega which persist the chosen visualization and column in the hash part of a URL (the stuff behind the #). The plugin loads the config from the hash upon initialization on the next page and uses it accordingly.

Additionally, datasette-vega and datasette-chartjs need to make sure to include the hash in all links and forms that cause a reload of the page, so that the config persists between clicks.

This ticket is about moving these parts, which provide the functionality to do so, into datasette. This includes:

  1. a way to load config options with a given prefix from the current URL hash
  2. a way to update the current URL hash with a new config value or a bunch of config options
  3. updating all necessary links and forms on the current page to include the URL hash whenever it's updated
  4. to prevent leaking config options to external pages, only "internal" links should be updated

There's another, optional, feature that we might want to think about during the design phase: the scope of the config. Links within a datasette instance have 1 of 3 scopes:

  1. global, for the whole datasette project
  2. database, for all tables in a database
  3. table, only for a table within a database

When updating the links and forms as pointed out in 3. above, it might be worth considering which links need to be updated. I could imagine a plugin that wants to persist some setting across all tables within a database but another setting only within a table.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
JavaScript to help plugins interact with the fragment part of the URL 765637324  
744475543 https://github.com/simonw/datasette/pull/1145#issuecomment-744475543 https://api.github.com/repos/simonw/datasette/issues/1145 MDEyOklzc3VlQ29tbWVudDc0NDQ3NTU0Mw== codecov[bot] 22429695 2020-12-14T14:26:25Z 2020-12-14T14:26:25Z NONE

Codecov Report

Merging #1145 (a8588f9) into main (0c616f7) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1145   +/-   ##
=======================================
  Coverage   91.41%   91.41%           
=======================================
  Files          31       31           
  Lines        3881     3881           
=======================================
  Hits         3548     3548           
  Misses        333      333           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0c616f7...a8588f9. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update pytest requirement from <6.2.0,>=5.2.2 to >=5.2.2,<6.3.0 766494367  
744461856 https://github.com/simonw/datasette/issues/276#issuecomment-744461856 https://api.github.com/repos/simonw/datasette/issues/276 MDEyOklzc3VlQ29tbWVudDc0NDQ2MTg1Ng== robintw 296686 2020-12-14T14:04:57Z 2020-12-14T14:04:57Z NONE

I'm looking into using datasette with a database with spatialite geometry columns, and came across this issue. Has there been any progress on this since 2018?

In one of my tables I'm just storing lat/lon points in a spatialite point geometry, and I've managed to make datasette-cluster-map display the points by extracting the lat and lon in SQL - using something like select ... ST_X(location) as longitude, ST_Y(location) as latitude from Blah. Something more 'built-in' would be great though - particularly for the tables I have that store more complex geometries.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Handle spatialite geometry columns better 324835838  
744003454 https://github.com/simonw/datasette/pull/1031#issuecomment-744003454 https://api.github.com/repos/simonw/datasette/issues/1031 MDEyOklzc3VlQ29tbWVudDc0NDAwMzQ1NA== frankier 299380 2020-12-13T12:52:56Z 2020-12-13T12:52:56Z NONE

Please let me know if there's anything I can do to help get this merged.

This is causing problems for me because it means when I build my Docker image my databases aren't considered immutable, which I would like them to be so that a download link is produced.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fallback to databases in inspect-data.json when no -i options are passed 724369025  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);