479 rows where author_association = "NONE" sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
837166862 https://github.com/simonw/datasette/issues/1280#issuecomment-837166862 https://api.github.com/repos/simonw/datasette/issues/1280 MDEyOklzc3VlQ29tbWVudDgzNzE2Njg2Mg== blairdrummond 10801138 2021-05-10T19:07:46Z 2021-05-10T19:07:46Z NONE

Do you have a list of sqlite versions you want to test against?

One cool thing I saw recently (that we started using) was importing docker within Python, and then writing pytest functions which execute against the container

setup

example

The inspiration for this came from the jupyter docker-stacks

So off the top of my head, you could look at building the container with different SQLite versions as a build-arg, then running the tests against each container. Just brainstorming, though
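A minimal sketch of what such a test might assert, independent of Docker itself (the helper names and minimum version below are hypothetical, not from Datasette's actual CI):

```python
# Sketch only: the kind of check a container-based test could run after
# reading `sqlite3 --version` (or sqlite3.sqlite_version) inside each
# container built with a different SQLite build-arg.

def sqlite_version_tuple(version: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple, e.g. '3.31.1' -> (3, 31, 1)."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(version: str, minimum: str = "3.8.0") -> bool:
    """Would this container's SQLite satisfy a hypothetical oldest-supported version?"""
    return sqlite_version_tuple(version) >= sqlite_version_tuple(minimum)
```

A docker-based pytest suite could then exec the version command in each container and feed the result through `meets_minimum()`.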

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Ability to run CI against multiple SQLite versions 842862708  
835491318 https://github.com/simonw/datasette/pull/1296#issuecomment-835491318 https://api.github.com/repos/simonw/datasette/issues/1296 MDEyOklzc3VlQ29tbWVudDgzNTQ5MTMxOA== blairdrummond 10801138 2021-05-08T19:59:01Z 2021-05-08T19:59:01Z NONE

I have also found that the Ubuntu-based images have fewer vulnerabilities than the Buster-based ones.

➜  ~ docker pull python:3-buster
➜  ~ trivy image python:3-buster | head                             
2021-04-28T17:14:29.313-0400    INFO    Detecting Debian vulnerabilities...
2021-04-28T17:14:29.393-0400    INFO    Trivy skips scanning programming language libraries because no supported file was detected
python:3-buster (debian 10.9)
=============================
Total: 1621 (UNKNOWN: 13, LOW: 1106, MEDIUM: 343, HIGH: 145, CRITICAL: 14)
+------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+
|           LIBRARY            |  VULNERABILITY ID   | SEVERITY |      INSTALLED VERSION       | FIXED VERSION |                            TITLE                             |
+------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Dockerfile: use Ubuntu 20.10 as base 855446829  
832676649 https://github.com/simonw/datasette/pull/1318#issuecomment-832676649 https://api.github.com/repos/simonw/datasette/issues/1318 MDEyOklzc3VlQ29tbWVudDgzMjY3NjY0OQ== codecov[bot] 22429695 2021-05-05T13:13:45Z 2021-05-05T13:13:45Z NONE

Codecov Report

Merging #1318 (e06c099) into main (1b69753) will increase coverage by 0.02%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##             main    #1318      +/-   ##
==========================================
+ Coverage   91.51%   91.53%   +0.02%     
==========================================
  Files          34       34              
  Lines        4255     4255              
==========================================
+ Hits         3894     3895       +1     
+ Misses        361      360       -1     
Impacted Files                 Coverage Δ
datasette/utils/__init__.py    94.31% <0.00%> (+0.17%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1b69753...e06c099. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Bump black from 21.4b2 to 21.5b0 876431852  
831004775 https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-831004775 https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1 MDEyOklzc3VlQ29tbWVudDgzMTAwNDc3NQ== cobiadigital 25372415 2021-05-03T03:46:23Z 2021-05-03T03:46:23Z NONE

RS1800955 is related to novelty seeking and ADHD
https://www.snpedia.com/index.php/Rs1800955

select rsid, genotype,
  case genotype
    when 'CC' then 'increased susceptibility to novelty seeking'
    when 'CT' then 'increased susceptibility to novelty seeking'
    when 'TT' then 'normal'
  end as interpretation
from genome
where rsid = 'rs1800955'
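The query can be tried without a real genome export by loading a single row into an in-memory SQLite database (the sample genotype below is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE genome (rsid TEXT, genotype TEXT)")
conn.execute("INSERT INTO genome VALUES ('rs1800955', 'CT')")  # made-up genotype

row = conn.execute("""
    select rsid, genotype, case genotype
        when 'CC' then 'increased susceptibility to novelty seeking'
        when 'CT' then 'increased susceptibility to novelty seeking'
        when 'TT' then 'normal'
    end as interpretation
    from genome where rsid = 'rs1800955'
""").fetchone()

print(row)  # ('rs1800955', 'CT', 'increased susceptibility to novelty seeking')
```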

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Figure out some interesting example SQL queries 496415321  
829885904 https://github.com/simonw/datasette/issues/1310#issuecomment-829885904 https://api.github.com/repos/simonw/datasette/issues/1310 MDEyOklzc3VlQ29tbWVudDgyOTg4NTkwNA== ColinMaudry 3747136 2021-04-30T06:58:46Z 2021-04-30T07:26:11Z NONE

I made it work with openpyxl. I'm not sure all the code under @hookimpl is necessary... but it works :)

from datasette import hookimpl
from datasette.utils.asgi import Response
from openpyxl import Workbook
from openpyxl.cell import WriteOnlyCell
from openpyxl.styles import Alignment, PatternFill
from tempfile import NamedTemporaryFile

def render_spreadsheet(rows):
    wb = Workbook(write_only=True)
    ws = wb.create_sheet()
    ws.title = "decp"

    # Header row with a light blue fill
    columns = rows[0].keys()
    headers = []
    for col in columns:
        c = WriteOnlyCell(ws, col)
        c.fill = PatternFill("solid", fgColor="DDEFFF")
        headers.append(c)
    ws.append(headers)

    for row in rows:
        wsRow = []
        for col in columns:
            c = WriteOnlyCell(ws, row[col])
            if col == "objet":
                c.alignment = Alignment(wrapText=True)
            wsRow.append(c)
        ws.append(wsRow)

    with NamedTemporaryFile() as tmp:
        wb.save(tmp.name)
        tmp.seek(0)
        return Response(
            tmp.read(),
            headers={
                'Content-Disposition': 'attachment; filename=decp.xlsx',
                'Content-type': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
            }
        )

@hookimpl
def register_output_renderer():
    return {
        "extension": "xlsx",
        "render": render_spreadsheet,
        "can_render": lambda: False,
    }

The key part was to find the right function to wrap the spreadsheet object wb. NamedTemporaryFile() did it!

I'll update this issue when the plugin is packaged and ready for broader use.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
I'm creating a plugin to export a spreadsheet file (.ods or .xlsx) 870125126  
829349118 https://github.com/simonw/datasette/pull/1314#issuecomment-829349118 https://api.github.com/repos/simonw/datasette/issues/1314 MDEyOklzc3VlQ29tbWVudDgyOTM0OTExOA== codecov[bot] 22429695 2021-04-29T15:43:32Z 2021-04-29T15:43:32Z NONE

Codecov Report

Merging #1314 (98eea0b) into main (a4bb2ab) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1314   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a4bb2ab...98eea0b. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Upgrade to GitHub-native Dependabot 871157602  
829265979 https://github.com/simonw/datasette/pull/1313#issuecomment-829265979 https://api.github.com/repos/simonw/datasette/issues/1313 MDEyOklzc3VlQ29tbWVudDgyOTI2NTk3OQ== codecov[bot] 22429695 2021-04-29T14:04:13Z 2021-04-29T14:04:13Z NONE

Codecov Report

Merging #1313 (3cd7ad4) into main (a4bb2ab) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1313   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a4bb2ab...3cd7ad4. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Bump black from 20.8b1 to 21.4b2 871046111  
828683322 https://github.com/simonw/datasette/pull/1311#issuecomment-828683322 https://api.github.com/repos/simonw/datasette/issues/1311 MDEyOklzc3VlQ29tbWVudDgyODY4MzMyMg== codecov[bot] 22429695 2021-04-28T18:30:49Z 2021-04-28T18:30:49Z NONE

Codecov Report

Merging #1311 (baf3030) into main (a4bb2ab) will increase coverage by 0.07%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##             main    #1311      +/-   ##
==========================================
+ Coverage   91.51%   91.58%   +0.07%     
==========================================
  Files          34       34              
  Lines        4255     4255              
==========================================
+ Hits         3894     3897       +3     
+ Misses        361      358       -3     
Impacted Files             Coverage Δ
datasette/database.py      93.68% <0.00%> (+0.74%) :arrow_up:
datasette/views/index.py   98.18% <0.00%> (+1.81%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a4bb2ab...baf3030. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Bump black from 20.8b1 to 21.4b1 870227815  
828670621 https://github.com/simonw/datasette/issues/1310#issuecomment-828670621 https://api.github.com/repos/simonw/datasette/issues/1310 MDEyOklzc3VlQ29tbWVudDgyODY3MDYyMQ== ColinMaudry 3747136 2021-04-28T18:12:08Z 2021-04-28T18:12:08Z NONE

Apparently, besides a string, Response can also work with bytes.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
I'm creating a plugin to export a spreadsheet file (.ods or .xlsx) 870125126  
827911909 https://github.com/simonw/datasette/pull/1309#issuecomment-827911909 https://api.github.com/repos/simonw/datasette/issues/1309 MDEyOklzc3VlQ29tbWVudDgyNzkxMTkwOQ== codecov[bot] 22429695 2021-04-27T20:35:15Z 2021-04-27T20:35:15Z NONE

Codecov Report

Merging #1309 (20fc3fe) into main (a4bb2ab) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1309   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a4bb2ab...20fc3fe. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Bump black from 20.8b1 to 21.4b0 869237023  
826784306 https://github.com/simonw/datasette/issues/173#issuecomment-826784306 https://api.github.com/repos/simonw/datasette/issues/173 MDEyOklzc3VlQ29tbWVudDgyNjc4NDMwNg== ColinMaudry 3747136 2021-04-26T12:10:01Z 2021-04-26T12:10:01Z NONE

I found a neat tutorial to set up gettext with jinja2: http://siongui.github.io/2016/01/17/i18n-python-web-application-by-gettext-jinja2/

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
I18n and L10n support 281110295  
824866566 https://github.com/simonw/datasette/pull/1306#issuecomment-824866566 https://api.github.com/repos/simonw/datasette/issues/1306 MDEyOklzc3VlQ29tbWVudDgyNDg2NjU2Ng== codecov[bot] 22429695 2021-04-22T13:59:04Z 2021-04-22T13:59:04Z NONE

Codecov Report

Merging #1306 (115332c) into main (6ed9238) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1306   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4256    +1     
=======================================
+ Hits         3894     3895    +1     
  Misses        361      361           
Impacted Files             Coverage Δ
datasette/views/index.py   96.42% <100.00%> (+0.06%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6ed9238...115332c. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Possible fix for issue #1305 864979486  
823961091 https://github.com/simonw/datasette/issues/173#issuecomment-823961091 https://api.github.com/repos/simonw/datasette/issues/173 MDEyOklzc3VlQ29tbWVudDgyMzk2MTA5MQ== ColinMaudry 3747136 2021-04-21T10:37:05Z 2021-04-21T10:37:36Z NONE

I have the feeling that 95% of the text visible to users lives in the template files (datasette/templates). The Python code mainly contains error messages.

In the current situation, the best way to provide a localized frontend is to translate the templates and configure Datasette to use them. I think I'm going to do it for French.

If we want localization to be better integrated, then for the Python code I think gettext is the way to go. The .po files can be translated in user-friendly tools such as Transifex and Crowdin.

For the templates, I'm not sure how we could do it cleanly and keep it easy to maintain. Maybe the tools above could parse the HTML and detect the strings to be translated.

In any case, implementing l10n is just the first step: a continuous process must be set up to maintain the translations and produce new ones while Datasette keeps getting new features.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
I18n and L10n support 281110295  
823102978 https://github.com/simonw/datasette/issues/1298#issuecomment-823102978 https://api.github.com/repos/simonw/datasette/issues/1298 MDEyOklzc3VlQ29tbWVudDgyMzEwMjk3OA== dracos 154364 2021-04-20T08:51:23Z 2021-04-20T08:51:23Z NONE
  1. Max height would still let you scroll the page to underneath the facets to the table, but would mean the table would never take up more than your window size, so the horizontal scrollbar would be visible as soon as the table took up the size of the window.
  2. Yes, this wouldn't be for mobile :) It'd be desktop-only styling. On mobile you can scroll much more easily with touch, anyway. In your case, perhaps better would be the whole top half would be facets, bottom left quadrant chart, bottom right table. Depends upon the particular use case, as you say.
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
improve table horizontal scroll experience 855476501  
823064725 https://github.com/simonw/datasette/issues/1298#issuecomment-823064725 https://api.github.com/repos/simonw/datasette/issues/1298 MDEyOklzc3VlQ29tbWVudDgyMzA2NDcyNQ== dracos 154364 2021-04-20T07:57:14Z 2021-04-20T07:57:14Z NONE

My suggestions, originally made on Twitter, but probably better kept here now:

  1. Could have a CSS shadow (one of the comments on https://stackoverflow.com/questions/44793453/how-do-i-add-a-top-and-bottom-shadow-while-scrolling-but-only-when-needed is a codepen for horizontal instead of vertical);

  2. Could give the table a max-height (either the window or work out the available space) so that it is both vertically/horizontally scrollable and you don't have to scroll to the bottom in order to see this;

  3. On a desktop browser, what I think I'd want is an absolute grid to work with - left query/filters, TR chart (or map), BR table. No problem with scrolling then. Here is a mockup I made when this was about the map plugin:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
improve table horizontal scroll experience 855476501  
822486113 https://github.com/simonw/datasette/pull/1303#issuecomment-822486113 https://api.github.com/repos/simonw/datasette/issues/1303 MDEyOklzc3VlQ29tbWVudDgyMjQ4NjExMw== codecov[bot] 22429695 2021-04-19T13:55:24Z 2021-04-19T13:55:24Z NONE

Codecov Report

Merging #1303 (c348ff1) into main (0a7621f) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1303   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0a7621f...c348ff1. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update pytest-asyncio requirement from <0.15,>=0.10 to >=0.10,<0.16 861331159  
819775388 https://github.com/simonw/datasette/issues/1196#issuecomment-819775388 https://api.github.com/repos/simonw/datasette/issues/1196 MDEyOklzc3VlQ29tbWVudDgxOTc3NTM4OA== robroc 1219001 2021-04-14T19:28:38Z 2021-04-14T19:28:38Z NONE

@QAInsights I'm having a similar problem when publishing to Cloud Run on Windows. It's not able to access certain packages in my conda environment where Datasette is installed. Can you explain how you got it to work in WSL? Were you able to access the .db file in the Windows file system? Thank you.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Access Denied Error in Windows 791237799  
817403642 https://github.com/simonw/datasette/pull/1296#issuecomment-817403642 https://api.github.com/repos/simonw/datasette/issues/1296 MDEyOklzc3VlQ29tbWVudDgxNzQwMzY0Mg== codecov[bot] 22429695 2021-04-12T00:29:05Z 2021-04-12T00:29:05Z NONE

Codecov Report

Merging #1296 (527a056) into main (0a7621f) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1296   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0a7621f...527a056. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Dockerfile: use Ubuntu 20.10 as base 855446829  
813249000 https://github.com/dogsheep/dogsheep-photos/issues/35#issuecomment-813249000 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/35 MDEyOklzc3VlQ29tbWVudDgxMzI0OTAwMA== ligurio 1151557 2021-04-05T07:37:57Z 2021-04-05T07:37:57Z NONE
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Support to annotate photos on other than macOS OSes 842695374  
812815358 https://github.com/simonw/datasette/pull/1291#issuecomment-812815358 https://api.github.com/repos/simonw/datasette/issues/1291 MDEyOklzc3VlQ29tbWVudDgxMjgxNTM1OA== codecov[bot] 22429695 2021-04-03T05:32:50Z 2021-04-03T13:31:20Z NONE

Codecov Report

Merging #1291 (73342b6) into main (0a7621f) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1291   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0a7621f...73342b6. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update docs: explain allow_download setting 849582643  
812813732 https://github.com/simonw/datasette/issues/502#issuecomment-812813732 https://api.github.com/repos/simonw/datasette/issues/502 MDEyOklzc3VlQ29tbWVudDgxMjgxMzczMg== louispotok 5413548 2021-04-03T05:16:54Z 2021-04-03T05:16:54Z NONE

For what it's worth, if anyone finds this in the future, I was having the same issue.

After digging through the code, it turned out that the database download is only available if the db is served in immutable mode, so datasette serve -i xyz.db rather than the docs' quickstart recommendation of datasette serve xyz.db.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exporting sqlite database(s)? 453131917  
812804178 https://github.com/simonw/datasette/pull/1290#issuecomment-812804178 https://api.github.com/repos/simonw/datasette/issues/1290 MDEyOklzc3VlQ29tbWVudDgxMjgwNDE3OA== codecov[bot] 22429695 2021-04-03T03:39:16Z 2021-04-03T03:41:29Z NONE

Codecov Report

Merging #1290 (2fb1e42) into main (87b583a) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1290   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 87b583a...2fb1e42. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use pytest-xdist to speed up tests 849568079  
812742462 https://github.com/simonw/datasette/issues/916#issuecomment-812742462 https://api.github.com/repos/simonw/datasette/issues/916 MDEyOklzc3VlQ29tbWVudDgxMjc0MjQ2Mg== jungle-boogie 1111743 2021-04-02T22:37:27Z 2021-04-02T22:37:27Z NONE

Yes, this would be nice!

I'm using Datasette v0.56 and don't see a previous page button.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Support reverse pagination (previous page, has-previous-items) 672421411  
812711365 https://github.com/simonw/datasette/issues/1245#issuecomment-812711365 https://api.github.com/repos/simonw/datasette/issues/1245 MDEyOklzc3VlQ29tbWVudDgxMjcxMTM2NQ== jungle-boogie 1111743 2021-04-02T20:53:35Z 2021-04-02T20:53:35Z NONE

Yes, I agree.

Alternatively, maybe the header could be at the top and bottom, above the next page button.

Maybe even repeat the header every 50 records?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Sticky table column headers would be useful, especially on the query page 817544251  
812710120 https://github.com/simonw/datasette/issues/1255#issuecomment-812710120 https://api.github.com/repos/simonw/datasette/issues/1255 MDEyOklzc3VlQ29tbWVudDgxMjcxMDEyMA== jungle-boogie 1111743 2021-04-02T20:50:08Z 2021-04-02T20:50:08Z NONE

Hello again,

I was able to get my facets running with this settings.json, which was lifted from one of Simon's Datasettes and slightly modified.

{
    "default_page_size": 100,
    "max_returned_rows": 1000,
    "num_sql_threads": 3,
    "sql_time_limit_ms": 9000,
    "default_facet_size": 10,
    "facet_time_limit_ms": 9000,
    "facet_suggest_time_limit_ms": 500,
    "hash_urls": false,
    "allow_facet": true,
    "suggest_facets": false,
    "default_cache_ttl": 5,
    "default_cache_ttl_hashed": 31536000,
    "cache_size_kb": 0,
    "allow_csv_stream": true,
    "max_csv_mb": 100,
    "truncate_cells_html": 2048,
    "template_debug": false,
    "base_url": "/"
}
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Facets timing out but work when filtering 826700095  
812680519 https://github.com/simonw/datasette/issues/1255#issuecomment-812680519 https://api.github.com/repos/simonw/datasette/issues/1255 MDEyOklzc3VlQ29tbWVudDgxMjY4MDUxOQ== jungle-boogie 1111743 2021-04-02T19:37:57Z 2021-04-02T19:37:57Z NONE

Hello,

I'm also experiencing a timeout in my environment. I don't know if it's because I need more indexes or a more powerful system.

My data has 1,271,111 rows, and when I try to create a facet, there's a timeout. I've tried this on two different columns that should significantly filter down the data: CITY and PARTY_REG.

Simon's johns_hopkins_csse_daily_reports has more rows and is set up with two facets on load. He does have four indexes created, though. Do I need more indexes?

I have one simple one so far:

CREATE INDEX [idx_party_reg]
    ON [county_active] ([PARTY_REG]);
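One way to check whether a facet-style query can actually use such an index is EXPLAIN QUERY PLAN against a throwaway copy of the schema (the table and column names below mirror this comment; the data is empty):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE county_active (CITY TEXT, PARTY_REG TEXT)")
conn.execute("CREATE INDEX idx_party_reg ON county_active (PARTY_REG)")

# Faceting boils down to a GROUP BY count; with the index in place,
# SQLite can answer it with a covering-index scan instead of a full
# table scan, which is what makes faceting large tables feasible.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT PARTY_REG, count(*) FROM county_active GROUP BY PARTY_REG"
).fetchall()
print(plan)  # the plan should mention idx_party_reg
```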

I'm running Datasette 0.56 installed via pip with Python 3.7.3.

4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux

$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Facets timing out but work when filtering 826700095  
811362316 https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-811362316 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31 MDEyOklzc3VlQ29tbWVudDgxMTM2MjMxNg== PabloLerma 871250 2021-03-31T19:14:39Z 2021-03-31T19:14:39Z NONE

👋 Could I help somehow to get this merged? As Big Sur is going to be used more as time goes on, I think it would be nice to merge this and publish a new version. Nice work!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update for Big Sur 771511344  
811209922 https://github.com/simonw/datasette/issues/1276#issuecomment-811209922 https://api.github.com/repos/simonw/datasette/issues/1276 MDEyOklzc3VlQ29tbWVudDgxMTIwOTkyMg== justinallen 1314318 2021-03-31T16:27:26Z 2021-03-31T16:27:26Z NONE

Fantastic. Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Invalid SQL: "no such table: pragma_database_list" on database page 841456306  
810943882 https://github.com/simonw/datasette/issues/526#issuecomment-810943882 https://api.github.com/repos/simonw/datasette/issues/526 MDEyOklzc3VlQ29tbWVudDgxMDk0Mzg4Mg== jokull 701 2021-03-31T10:03:55Z 2021-03-31T10:03:55Z NONE

+1 on using nested queries to achieve this! Would be great as streaming CSV is an amazing feature.

Some UX/DX details:

I was expecting it to work to simply add &_stream=on to custom SQL queries because the docs say

Any Datasette table, view or custom SQL query can be exported as CSV.

After a bit of testing back and forth I realized streaming only works for full tables.

Would love this feature because I'm using pandas.read_csv to paint graphs from custom queries and the graphs are cut off because of the 1000 row limit.
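Until streaming works for custom queries, one workaround is to page through the results with Datasette's _size and _next query-string parameters (those parameters are real; the helper below is a hypothetical sketch):

```python
from typing import Optional
from urllib.parse import urlencode

def csv_page_url(base_url: str, next_token: Optional[str] = None,
                 size: int = 1000) -> str:
    """Build the URL for one page of CSV results.

    Hypothetical helper: _size and _next are actual Datasette parameters,
    but this function name and base_url are illustrative only.
    """
    params = {"_size": size}
    if next_token:
        params["_next"] = next_token
    return f"{base_url}.csv?{urlencode(params)}"

# Each page's JSON response exposes a "next" token, which supplies
# next_token for the following page; pandas.read_csv could then be
# called once per page and the frames concatenated.
print(csv_page_url("https://example.com/db/table", "0.5"))
```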

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
CSV streaming for canned queries 459882902  
809667320 https://github.com/simonw/datasette/pull/1282#issuecomment-809667320 https://api.github.com/repos/simonw/datasette/issues/1282 MDEyOklzc3VlQ29tbWVudDgwOTY2NzMyMA== codecov[bot] 22429695 2021-03-29T19:52:35Z 2021-03-29T19:52:35Z NONE

Codecov Report

Merging #1282 (08f7427) into main (0486303) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1282   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0486303...08f7427. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fix little typo 843739658  
780830464 https://github.com/simonw/datasette/pull/1229#issuecomment-780830464 https://api.github.com/repos/simonw/datasette/issues/1229 MDEyOklzc3VlQ29tbWVudDc4MDgzMDQ2NA== codecov[bot] 22429695 2021-02-17T20:24:30Z 2021-03-29T00:17:21Z NONE

Codecov Report

Merging #1229 (a095248) into main (8e18c79) will not change coverage.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1229   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           
<table> <thead> <tr> <th>Impacted Files</th> <th>Coverage Δ</th> <th></th> </tr> </thead> <tbody> <tr> <td>datasette/app.py</td> <td>95.85% <100.00%> (ø)</td> <td></td> </tr> </tbody> </table>

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8e18c79...a095248. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
ensure immutable databases when starting in configuration directory mode with 810507413  
808762613 https://github.com/simonw/datasette/pull/1279#issuecomment-808762613 https://api.github.com/repos/simonw/datasette/issues/1279 MDEyOklzc3VlQ29tbWVudDgwODc2MjYxMw== codecov[bot] 22429695 2021-03-27T17:03:37Z 2021-03-27T17:03:37Z NONE

Codecov Report

Merging #1279 (14d8977) into main (3fcfc85) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1279   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3fcfc85...14d8977. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Minor Docs Update. Added `--app` to fly install command. 842556944  
807459633 https://github.com/simonw/datasette/issues/1258#issuecomment-807459633 https://api.github.com/repos/simonw/datasette/issues/1258 MDEyOklzc3VlQ29tbWVudDgwNzQ1OTYzMw== wdccdw 1385831 2021-03-25T20:48:33Z 2021-03-25T20:49:34Z NONE

What about allowing default parameters when defining the query in metadata.yml? Something like:

databases:
  fec:
    queries:
      search_by_name:
        params:
            - q
        default-param-values:
             q: "text to search"
        sql: |-
          SELECT...

For now, I'm using a custom database-<file>.html file that hardcodes a default param in the link, but I'd rather not customize the template just for that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow canned query params to specify default values 828858421  
806010960 https://github.com/simonw/datasette/issues/741#issuecomment-806010960 https://api.github.com/repos/simonw/datasette/issues/741 MDEyOklzc3VlQ29tbWVudDgwNjAxMDk2MA== zaneselvans 596279 2021-03-24T17:19:42Z 2021-03-24T17:19:42Z NONE

Ah, okay so --extra-options applies to both datasette publish and datasette package? There weren't any examples of it being used with publish in the docs, so this tripped me up for a bit.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Replace "datasette publish --extra-options" with "--setting" 607223136  
804698315 https://github.com/simonw/datasette/pull/1159#issuecomment-804698315 https://api.github.com/repos/simonw/datasette/issues/1159 MDEyOklzc3VlQ29tbWVudDgwNDY5ODMxNQ== lovasoa 552629 2021-03-23T07:58:28Z 2021-03-23T07:58:38Z NONE

@mroswell Did you try it with more columns ? The display is flexible and columns get closer as new ones are added.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Improve the display of facets information 774332247  
804261103 https://github.com/simonw/datasette/pull/1271#issuecomment-804261103 https://api.github.com/repos/simonw/datasette/issues/1271 MDEyOklzc3VlQ29tbWVudDgwNDI2MTEwMw== codecov[bot] 22429695 2021-03-22T17:39:57Z 2021-03-22T17:39:57Z NONE

Codecov Report

Merging #1271 (fb2ad7a) into main (c4f1ec7) will decrease coverage by 0.28%.
The diff coverage is 94.28%.

@@            Coverage Diff             @@
##             main    #1271      +/-   ##
==========================================
- Coverage   91.51%   91.22%   -0.29%     
==========================================
  Files          34       34              
  Lines        4255     4263       +8     
==========================================
- Hits         3894     3889       -5     
- Misses        361      374      +13     
<table> <thead> <tr> <th>Impacted Files</th> <th>Coverage Δ</th> <th></th> </tr> </thead> <tbody> <tr> <td>datasette/database.py</td> <td>92.41% <94.28%> (-0.52%)</td> <td>:arrow_down:</td> </tr> <tr> <td>datasette/utils/__init__.py</td> <td>92.24% <0.00%> (-1.90%)</td> <td>:arrow_down:</td> </tr> </tbody> </table>

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c4f1ec7...fb2ad7a. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use SQLite conn.interrupt() instead of sqlite_timelimit() 837956424  
803502424 https://github.com/simonw/sqlite-utils/issues/249#issuecomment-803502424 https://api.github.com/repos/simonw/sqlite-utils/issues/249 MDEyOklzc3VlQ29tbWVudDgwMzUwMjQyNA== prabhur 36287 2021-03-21T02:43:32Z 2021-03-21T02:43:32Z NONE

Did you run enable-fts before you inserted the data?

If so you'll need to run populate-fts after the insert to populate the FTS index.

A better solution may be to add --create-triggers to the enable-fts command to add triggers that will automatically keep the index updated as you insert new records.

Wow. Wasn't expecting a response this quick, especially during a weekend. :-) Sincerely appreciate it.
I tried the populate-fts and that did the trick. My bad for not consulting the docs again. I think I forgot to add that step when I automated the workflow.
Thanks for the suggestion. I'll close this issue. Have a great weekend and many many thanks for creating this suite of tools around sqlite.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Full text search possibly broken? 836963850  
803499509 https://github.com/simonw/datasette/issues/1261#issuecomment-803499509 https://api.github.com/repos/simonw/datasette/issues/1261 MDEyOklzc3VlQ29tbWVudDgwMzQ5OTUwOQ== brimstone 812795 2021-03-21T02:06:43Z 2021-03-21T02:06:43Z NONE

I can confirm 0.9.2 fixes the problem. Thanks for the fast response!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Some links aren't properly URL encoded. 832092321  
803160804 https://github.com/simonw/datasette/issues/1265#issuecomment-803160804 https://api.github.com/repos/simonw/datasette/issues/1265 MDEyOklzc3VlQ29tbWVudDgwMzE2MDgwNA== yunzheng 468612 2021-03-19T22:05:12Z 2021-03-19T22:05:12Z NONE

Wow that was fast! Thanks for this very cool project and quick update! 👍

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Support for HTTP Basic Authentication 836123030  
802164134 https://github.com/simonw/datasette/issues/1262#issuecomment-802164134 https://api.github.com/repos/simonw/datasette/issues/1262 MDEyOklzc3VlQ29tbWVudDgwMjE2NDEzNA== henry501 19328961 2021-03-18T17:55:00Z 2021-03-18T17:55:00Z NONE

Thanks for the comments. I'll take a look at the documentation to familiarize myself, as I haven't tried to write any plugins yet. With some luck I might be ready to write it when the hook is implemented.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Plugin hook that could support 'order by random()' for table view 834602299  
802032152 https://github.com/simonw/sqlite-utils/issues/159#issuecomment-802032152 https://api.github.com/repos/simonw/sqlite-utils/issues/159 MDEyOklzc3VlQ29tbWVudDgwMjAzMjE1Mg== limar 1025224 2021-03-18T15:42:52Z 2021-03-18T15:42:52Z NONE

I confirm the bug. Happens for me in version 3.6. I use the call to delete all the records:
table.delete_where()
This does not delete anything.

I see that delete() method DOES use context manager with self.db.conn: which should help. You may want to align the code of both methods.
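The underlying hazard can be shown with the stdlib sqlite3 module alone (a sketch, not sqlite-utils internals): a DELETE that is never committed is rolled back when the connection closes. The practical workaround is to call db.conn.commit() yourself after .delete_where().

```python
import os
import sqlite3
import tempfile

# A throwaway database file to demonstrate with
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

conn.execute("DELETE FROM t")  # opens an implicit transaction...
conn.close()                   # ...which is rolled back on close without commit()

conn = sqlite3.connect(path)
count = conn.execute("SELECT count(*) FROM t").fetchone()[0]  # the row survived
conn.close()
```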

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
.delete_where() does not auto-commit (unlike .insert() or .upsert()) 702386948  
801816980 https://github.com/simonw/sqlite-utils/issues/246#issuecomment-801816980 https://api.github.com/repos/simonw/sqlite-utils/issues/246 MDEyOklzc3VlQ29tbWVudDgwMTgxNjk4MA== polyrand 37962604 2021-03-18T10:40:32Z 2021-03-18T10:43:04Z NONE

I have found a similar problem, but I only when using that type of query (with * for doing a prefix search). I'm also building something on top of FTS5/sqlite-utils, and the way I decided to handle it was creating a specific function for prefixes. According to the docs, the query can be done in this 2 ways:

... MATCH '"one two thr" * '
... MATCH 'one + two + thr*'

I thought I could build a query like the first one using this function:

def prefix(query: str):
    return f'"{query}" *'

And then I use the output of that function as the query parameter for the standard .search() method in sqlite-utils.

However, my use case is different because I'm the one "deciding" when to use a prefix search, not the end user. I also haven't done many tests, but maybe you found that useful. One thing I could think of is checking if the query has an * at the end, remove it and build the prefix query using the function above.

This is just for prefix queries, I think having the escaping function is still useful for other use cases.
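Putting the two pieces together might look like this (a sketch assuming FTS5 quoting rules, where embedded double quotes are escaped by doubling; `escape_fts` and `prepare_query` are illustrative names, not sqlite-utils APIs):

```python
def escape_fts(query: str) -> str:
    # Quote every token so FTS5 operators (AND, OR, NOT, *) are treated literally
    return " ".join('"{}"'.format(token.replace('"', '""')) for token in query.split())

def prepare_query(query: str) -> str:
    # A trailing * signals a prefix search: build the '"one two thr" *' form
    if query.endswith("*"):
        return '"{}" *'.format(query[:-1].strip().replace('"', '""'))
    return escape_fts(query)
```

The output of prepare_query can then be passed as the query argument to the standard .search() method.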

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Escaping FTS search strings 831751367  
798913090 https://github.com/simonw/datasette/pull/1260#issuecomment-798913090 https://api.github.com/repos/simonw/datasette/issues/1260 MDEyOklzc3VlQ29tbWVudDc5ODkxMzA5MA== codecov[bot] 22429695 2021-03-14T14:01:30Z 2021-03-14T14:01:30Z NONE

Codecov Report

Merging #1260 (90f5fb6) into main (8e18c79) will not change coverage.
The diff coverage is 83.33%.

@@           Coverage Diff           @@
##             main    #1260   +/-   ##
=======================================
  Coverage   91.51%   91.51%           
=======================================
  Files          34       34           
  Lines        4255     4255           
=======================================
  Hits         3894     3894           
  Misses        361      361           
<table> <thead> <tr> <th>Impacted Files</th> <th>Coverage Δ</th> <th></th> </tr> </thead> <tbody> <tr> <td>datasette/inspect.py</td> <td>36.11% <0.00%> (ø)</td> <td></td> </tr> <tr> <td>datasette/default_magic_parameters.py</td> <td>91.17% <50.00%> (ø)</td> <td></td> </tr> <tr> <td>datasette/app.py</td> <td>95.85% <100.00%> (ø)</td> <td></td> </tr> <tr> <td>datasette/views/base.py</td> <td>95.01% <100.00%> (ø)</td> <td></td> </tr> <tr> <td>datasette/views/table.py</td> <td>95.88% <100.00%> (ø)</td> <td></td> </tr> </tbody> </table>

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8e18c79...90f5fb6. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fix: code quality issues 831163537  
798468572 https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-798468572 https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14 MDEyOklzc3VlQ29tbWVudDc5ODQ2ODU3Mg== n8henrie 1234956 2021-03-13T14:47:31Z 2021-03-13T14:47:31Z NONE

Ok, new PR works. I'm not git enough so I just force-pushed over the old one.

I still end up with a lot of activities that are missing an id and therefore skipped (since this is used as the primary key). For example:

{'workoutActivityType': 'HKWorkoutActivityTypeRunning', 'duration': '35.31666666666667', 'durationUnit': 'min', 'totalDistance': '4.010870267636999', 'totalDistanceUnit': 'mi', 'totalEnergyBurned': '660.3516235351562', 'totalEnergyBurnedUnit': 'Cal', 'sourceName': 'Strava', 'sourceVersion': '22810', 'creationDate': '2020-07-16 13:38:26 -0700', 'startDate': '2020-07-16 06:38:26 -0700', 'endDate': '2020-07-16 07:13:45 -0700'}

I also end up with some unhappy characters (in the skipped events), such as: 'sourceName': 'Nathan’s Apple\xa0Watch',.

But it's successfully making it through the file, and the resulting db opens in datasette, so I'd call that progress.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
UNIQUE constraint failed: workouts.id 771608692  
798436026 https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-798436026 https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14 MDEyOklzc3VlQ29tbWVudDc5ODQzNjAyNg== n8henrie 1234956 2021-03-13T14:23:16Z 2021-03-13T14:23:16Z NONE

This PR allows my import to succeed.

It looks like some events don't have an id, but do have HKExternalUUID (which gets turned into metadata_HKExternalUUID), so I use this as a fallback.

If a record has neither of these, I changed it to just print the record (for debugging) and return.

For some odd reason this ran fine at first, and now (after removing the generated db and trying again) I'm getting a different error (duplicate column name).

Looks like it may have run when I had two successive runs without remembering to delete the db in between. Will try to refactor.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
UNIQUE constraint failed: workouts.id 771608692  
795950636 https://github.com/simonw/datasette/issues/838#issuecomment-795950636 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDc5NTk1MDYzNg== tsibley 79913 2021-03-10T19:24:13Z 2021-03-10T19:24:13Z NONE

I think this could be solved by one of:

  1. Stop generating absolute URLs, e.g. ones that include an origin. Relative URLs with absolute paths are fine, as long as they take base_url into account (as they do now, yay!).
  2. Extend base_url to include the expected frontend origin, and then use that information when generating absolute URLs.
  3. Document which HTTP headers the reverse proxy should set (e.g. the X-Forwarded-* family of conventional headers) to pass the frontend origin information to Datasette, and then use that information when generating absolute URLs.

Option 1 seems like the easiest to me, if you can get away with never having to generate an absolute URL.
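A rough sketch of what option 3 implies in Python (illustrative names only, not Datasette's actual implementation): prefer the conventional X-Forwarded-* headers when the reverse proxy sets them, and fall back to the backend request origin otherwise.

```python
def absolute_url(headers: dict, path: str, fallback_origin: str) -> str:
    """Build an absolute URL from forwarded-origin headers set by the reverse proxy."""
    host = headers.get("x-forwarded-host")
    proto = headers.get("x-forwarded-proto", "https")
    if host:
        return f"{proto}://{host}{path}"
    # No proxy headers: fall back to the origin of the backend request
    return f"{fallback_origin}{path}"
```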

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097  
795939998 https://github.com/simonw/datasette/issues/838#issuecomment-795939998 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDc5NTkzOTk5OA== tsibley 79913 2021-03-10T19:16:55Z 2021-03-10T19:16:55Z NONE

Nod. The problem with the tests is that they're ignoring the origin (hostname, port) of links. In a reverse proxy situation, the frontend request origin is different than the backend request origin. The problem is Datasette generates links with the backend request origin.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097  
795893813 https://github.com/simonw/datasette/issues/838#issuecomment-795893813 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDc5NTg5MzgxMw== tsibley 79913 2021-03-10T18:43:39Z 2021-03-10T18:43:39Z NONE

@simonw Unfortunately this issue as I reported it is not actually solved in version 0.55.

Every link which is returned by the Datasette.absolute_url method is still wrong, because it uses the request URL as the base. This still includes the suggested facet links and pagination links.

What I wrote originally still stands:

Although many of the URLs in the pages are correct (presumably because they either use absolute paths which include base_url or relative paths), the faceting and pagination links still use fully-qualified URLs pointing at http://localhost:8001.

I looked into this a little in the source code, and it seems to be an issue anywhere request.url or request.path is used, as these contain the values for the request between the frontend (Apache) and backend (Datasette) server. Those properties are primarily used via the path_with_… family of utility functions and the Datasette.absolute_url method.

Would you prefer to re-open this issue or have me create a new one?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097  
795085921 https://github.com/simonw/datasette/pull/1256#issuecomment-795085921 https://api.github.com/repos/simonw/datasette/issues/1256 MDEyOklzc3VlQ29tbWVudDc5NTA4NTkyMQ== codecov[bot] 22429695 2021-03-10T08:35:17Z 2021-03-10T08:35:17Z NONE

Codecov Report

Merging #1256 (4eef524) into main (d0fd833) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1256   +/-   ##
=======================================
  Coverage   91.56%   91.56%           
=======================================
  Files          34       34           
  Lines        4244     4244           
=======================================
  Hits         3886     3886           
  Misses        358      358           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d0fd833...4eef524. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Minor typo in IP address 827341657  
794518438 https://github.com/simonw/datasette/pull/1254#issuecomment-794518438 https://api.github.com/repos/simonw/datasette/issues/1254 MDEyOklzc3VlQ29tbWVudDc5NDUxODQzOA== durkie 3200608 2021-03-09T22:04:23Z 2021-03-09T22:04:23Z NONE

Dang, you're absolutely right. Spatialite 5.0 had been working fine for a plugin I was developing, but it apparently is broken in several other ways.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update Docker Spatialite version to 5.0.1 + add support for Spatialite topology functions 826613352  
794441034 https://github.com/simonw/datasette/pull/1254#issuecomment-794441034 https://api.github.com/repos/simonw/datasette/issues/1254 MDEyOklzc3VlQ29tbWVudDc5NDQ0MTAzNA== codecov[bot] 22429695 2021-03-09T20:54:18Z 2021-03-09T21:12:15Z NONE

Codecov Report

Merging #1254 (b103204) into main (d0fd833) will decrease coverage by 0.04%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##             main    #1254      +/-   ##
==========================================
- Coverage   91.56%   91.51%   -0.05%     
==========================================
  Files          34       34              
  Lines        4244     4244              
==========================================
- Hits         3886     3884       -2     
- Misses        358      360       +2     
<table> <thead> <tr> <th>Impacted Files</th> <th>Coverage Δ</th> <th></th> </tr> </thead> <tbody> <tr> <td>datasette/database.py</td> <td>92.93% <0.00%> (-0.75%)</td> <td>:arrow_down:</td> </tr> </tbody> </table>

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d0fd833...b103204. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update Docker Spatialite version to 5.0.1 + add support for Spatialite topology functions 826613352  
794443710 https://github.com/simonw/datasette/pull/1254#issuecomment-794443710 https://api.github.com/repos/simonw/datasette/issues/1254 MDEyOklzc3VlQ29tbWVudDc5NDQ0MzcxMA== durkie 3200608 2021-03-09T20:56:45Z 2021-03-09T20:56:45Z NONE

Oh wow I didn't even see that you had opened an issue about this so recently. I'll check on /dbname and report back.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update Docker Spatialite version to 5.0.1 + add support for Spatialite topology functions 826613352  
793308483 https://github.com/simonw/datasette/pull/1252#issuecomment-793308483 https://api.github.com/repos/simonw/datasette/issues/1252 MDEyOklzc3VlQ29tbWVudDc5MzMwODQ4Mw== codecov[bot] 22429695 2021-03-09T03:06:10Z 2021-03-09T03:06:10Z NONE

Codecov Report

Merging #1252 (d22aa32) into main (d0fd833) will decrease coverage by 0.04%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##             main    #1252      +/-   ##
==========================================
- Coverage   91.56%   91.51%   -0.05%     
==========================================
  Files          34       34              
  Lines        4244     4244              
==========================================
- Hits         3886     3884       -2     
- Misses        358      360       +2     
<table> <thead> <tr> <th>Impacted Files</th> <th>Coverage Δ</th> <th></th> </tr> </thead> <tbody> <tr> <td>datasette/database.py</td> <td>92.93% <0.00%> (-0.75%)</td> <td>:arrow_down:</td> </tr> </tbody> </table>

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d0fd833...d22aa32. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add back styling to lists within table cells (fixes #1141) 825217564  
792308036 https://github.com/simonw/datasette/issues/858#issuecomment-792308036 https://api.github.com/repos/simonw/datasette/issues/858 MDEyOklzc3VlQ29tbWVudDc5MjMwODAzNg== robroc 1219001 2021-03-07T16:41:54Z 2021-03-07T16:41:54Z NONE

Apologies if I sound dense but I don't see where you would pass
'shell=True'. I'm using the CLI installed via pip.

On Sun., Mar. 7, 2021, 2:15 a.m. David Smith, notifications@github.com
wrote:

To get it to work I had to:

- add shell=True to the various commands in datasette
- use the name argument of the publish command (https://docs.datasette.io/en/stable/publish.html)



{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
publish heroku does not work on Windows 10 642388564  
792230560 https://github.com/simonw/datasette/issues/858#issuecomment-792230560 https://api.github.com/repos/simonw/datasette/issues/858 MDEyOklzc3VlQ29tbWVudDc5MjIzMDU2MA== smithdc1 39445562 2021-03-07T07:14:58Z 2021-03-07T07:14:58Z NONE

To get it to work I had to:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
publish heroku does not work on Windows 10 642388564  
792129022 https://github.com/simonw/datasette/issues/858#issuecomment-792129022 https://api.github.com/repos/simonw/datasette/issues/858 MDEyOklzc3VlQ29tbWVudDc5MjEyOTAyMg== robroc 1219001 2021-03-07T00:23:34Z 2021-03-07T00:23:34Z NONE

@smithdc1 Can you tell us what you did to get it to publish in Windows? What commands did you pass?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
publish heroku does not work on Windows 10 642388564  
791530093 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791530093 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc5MTUzMDA5Mw== UtahDave 306240 2021-03-05T16:28:07Z 2021-03-05T16:28:07Z NONE

I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout.

Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once?

@maxhawkins a limitation of the python mbox module is it loads the entire mbox into memory. I did find another approach to this problem that didn't use the builtin python mbox module and created a generator so that it didn't have to load the whole mbox into memory. I was hoping to use standard library modules, but this might be a good reason to investigate that approach a bit more. My worry is making sure a custom processor handles all the ins and outs of the mbox format correctly.

Hm. As I'm writing this, I thought of something. I think I can parse each message one at a time, and then use an mbox function to load each message using the python mbox module. That way the mbox module can still deal with the specifics of the mbox format, but I can use a generator.

I'll give that a try. Thanks for the feedback @maxhawkins and @simonw. I'll give that a try.

@simonw can we hold off on merging this until I can test this new approach?
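That parse-one-message-at-a-time idea could be sketched like this (an illustration, not the final patch; it relies on the mbox convention that literal "From " lines inside bodies are escaped as ">From "), with the email module still doing the header parsing for each chunk:

```python
import email

def iter_mbox_messages(path):
    """Yield parsed messages one at a time instead of loading the whole mbox."""
    buf = []
    with open(path, "rb") as f:
        for line in f:
            # Each mbox message begins with a "From " separator line
            if line.startswith(b"From "):
                if buf:
                    yield email.message_from_bytes(b"".join(buf))
                buf = []
                continue  # drop the separator line itself
            buf.append(line)
    if buf:
        yield email.message_from_bytes(b"".join(buf))
```

Memory use stays bounded by the size of a single message rather than the whole 12GB file.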

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
791089881 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791089881 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc5MTA4OTg4MQ== maxhawkins 28565 2021-03-05T02:03:19Z 2021-03-05T02:03:19Z NONE

I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout.

Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
791053721 https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-791053721 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32 MDEyOklzc3VlQ29tbWVudDc5MTA1MzcyMQ== dsisnero 6213 2021-03-05T00:31:27Z 2021-03-05T00:31:27Z NONE

I am getting the same thing for US West (N. California) us-west-1

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
KeyError: 'Contents' on running upload 803333769  
790934616 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-790934616 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc5MDkzNDYxNg== Btibert3 203343 2021-03-04T20:54:44Z 2021-03-04T20:54:44Z NONE

Sorry for the delay, I got sidetracked after class last night. I am getting the following error:

/content# google-takeout-to-sqlite mbox takeout.db Takeout/Mail/gmail.mbox 
Usage: google-takeout-to-sqlite [OPTIONS] COMMAND [ARGS]...
Try 'google-takeout-to-sqlite --help' for help.

Error: No such command 'mbox'.

On the box, I installed with pip after cloning: https://github.com/UtahDave/google-takeout-to-sqlite.git

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
790857004 https://github.com/simonw/datasette/issues/1238#issuecomment-790857004 https://api.github.com/repos/simonw/datasette/issues/1238 MDEyOklzc3VlQ29tbWVudDc5MDg1NzAwNA== tsibley 79913 2021-03-04T19:06:55Z 2021-03-04T19:06:55Z NONE

@rgieseke Ah, that's super helpful. Thank you for the workaround for now!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Custom pages don't work with base_url setting 813899472  
790391711 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-790391711 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc5MDM5MTcxMQ== UtahDave 306240 2021-03-04T07:36:24Z 2021-03-04T07:36:24Z NONE

Looks like you're doing this:

    elif message.get_content_type() == "text/plain":
        body = message.get_payload(decode=True)

So presumably that decodes to a unicode string?

I imagine the reason the column is a BLOB for me is that sqlite-utils determines the column type based on the first batch of items - https://github.com/simonw/sqlite-utils/blob/09c3386f55f766b135b6a1c00295646c4ae29bec/sqlite_utils/db.py#L1927-L1928 - and I got unlucky and had something in my first batch that wasn't a unicode string.

Ah, that's good to know. I think explicitly creating the tables will be a great improvement. I'll add that.

Also, I noticed after I opened this PR that the message.get_payload() is being deprecated in favor of message.get_content() or something like that. I'll see if that handles the decoding better, too.

Thanks for the feedback. I should have time tomorrow to put together some improvements.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
790389335 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-790389335 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc5MDM4OTMzNQ== UtahDave 306240 2021-03-04T07:32:04Z 2021-03-04T07:32:04Z NONE

The command takes quite a while to start running, presumably because this line causes it to have to scan the WHOLE file in order to generate a count:

https://github.com/dogsheep/google-takeout-to-sqlite/blob/a3de045eba0fae4b309da21aa3119102b0efc576/google_takeout_to_sqlite/utils.py#L66-L67

I'm fine with waiting though. It's not like this is a command people run every day - and without that count we can't show a progress bar, which seems pretty important for a process that takes this long.

The wait is from python loading the mbox file. This happens regardless of whether you're getting the length of the mbox. The mbox module is on the slow side. It is possible to do one's own parsing of the mbox, but I kind of wanted to avoid doing that.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
790257263 https://github.com/simonw/datasette/issues/268#issuecomment-790257263 https://api.github.com/repos/simonw/datasette/issues/268 MDEyOklzc3VlQ29tbWVudDc5MDI1NzI2Mw== mhalle 649467 2021-03-04T03:20:23Z 2021-03-04T03:20:23Z NONE

It's kind of an ugly hack, but you can try out what using the fts5 table as an actual datasette-accessible table looks like without changing any datasette code by creating yet another view on top of the fts5 table:

create view proxyview as select *, rank, table_fts as fts from table_fts;

That's now visible from datasette, just like any other view, but you can use fts match escape_fts(search_string) order by rank.

This is only good as a proof of concept because you're inefficiently going from view -> fts5 external content table -> view -> data table. However, it does show it works.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Mechanism for ranking results from SQLite full-text search 323718842  
790198930 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-790198930 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc5MDE5ODkzMA== Btibert3 203343 2021-03-04T00:58:40Z 2021-03-04T00:58:40Z NONE

I am just seeing this sorry, yes! I will kick the tires later on tonight. My apologies for the delay.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
789680230 https://github.com/simonw/datasette/issues/283#issuecomment-789680230 https://api.github.com/repos/simonw/datasette/issues/283 MDEyOklzc3VlQ29tbWVudDc4OTY4MDIzMA== justinpinkney 605492 2021-03-03T12:28:42Z 2021-03-03T12:28:42Z NONE

One note on using this pragma: I got an error on starting datasette, no such table: pragma_database_list.

I diagnosed this to an older version of sqlite3 (3.14.2) and upgrading to a newer version (3.34.2) fixed the issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Support cross-database joins 325958506  
789409126 https://github.com/simonw/datasette/issues/268#issuecomment-789409126 https://api.github.com/repos/simonw/datasette/issues/268 MDEyOklzc3VlQ29tbWVudDc4OTQwOTEyNg== mhalle 649467 2021-03-03T03:57:15Z 2021-03-03T03:58:40Z NONE

In FTS5, I think doing an FTS search is actually much easier than doing a join against the main table like datasette does now. In fact, FTS5 external content tables provide a transparent interface back to the original table or view.

Here's what I'm currently doing: * build a view that joins whatever tables I want and rename the columns to non-joiny names (e.g, chapter.name AS chapter_name in the view where needed) * Create an FTS5 table with content="viewname" * As described in the "external content tables" section (https://www.sqlite.org/fts5.html#external_content_tables), sql queries can be made directly to the FTS table, which behind the covers makes select calls to the content table when the content of the original columns are needed. * In addition, you get "rank" and "bm25()" available to you when you select on the _fts table.

Unfortunately, datasette doesn't currently seem happy being coerced into doing a real query on an fts5 table. This works:
select col1, col2, col3 from table_fts where col1="value" and table_fts match escape_fts("search term") order by rank

But this doesn't work in the datasette SQL query interface:
select col1, col2, col3 from table_fts where col1="value" and table_fts match escape_fts(:search) order by rank (the "search" input text field doesn't show up)

For what datasette is doing right now, I think you could just use contentless fts5 tables (content=""), since all you care about is the rowid since all you're doing a subselect to get the rowid anyway. In fts5, that's just a contentless table.

I guess if you want to follow this suggestion, you'd need a somewhat different code path for fts5.
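The external-content pattern described above can be sketched with Python's built-in sqlite3 module (assuming an FTS5-enabled SQLite build; the `docs` table and its contents are made up for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
INSERT INTO docs (title, body) VALUES
    ('SQLite intro', 'a fast embedded database'),
    ('Postgres intro', 'a client-server database');
-- external content table: MATCH runs against the index, while column
-- values are fetched back transparently from the docs table
CREATE VIRTUAL TABLE docs_fts USING fts5(
    title, body, content='docs', content_rowid='id'
);
INSERT INTO docs_fts (rowid, title, body) SELECT id, title, body FROM docs;
""")
rows = db.execute(
    "SELECT title, bm25(docs_fts) FROM docs_fts "
    "WHERE docs_fts MATCH 'embedded' ORDER BY rank"
).fetchall()
print(rows)
```

Note that `rank` and `bm25()` are only available in queries that include a MATCH term, which is why the subselect-free form works here.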

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Mechanism for ranking results from SQLite full-text search 323718842  
787150276 https://github.com/simonw/sqlite-utils/issues/242#issuecomment-787150276 https://api.github.com/repos/simonw/sqlite-utils/issues/242 MDEyOklzc3VlQ29tbWVudDc4NzE1MDI3Ng== polyrand 37962604 2021-02-27T21:27:26Z 2021-02-27T21:27:26Z NONE

I had this resource by Seth Michael Larson saved https://github.com/sethmlarson/pycon-async-sync-poster I haven't had a look at it, but it may contain useful info.

On twitter, I mentioned passing an aiosqlite connection during the Database creation. I'm not 100% familiar with the sqlite-utils codebase, so I may be wrong here, but maybe decorating internal functions could be an option? Then they are awaited or not inside the decorator depending on how they are called.
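One possible shape for such a decorator, sketched with stdlib asyncio (the `sync_or_async` name and `fetch_rows` function are hypothetical, not part of sqlite-utils):

```python
import asyncio
from functools import wraps

def sync_or_async(fn):
    """Run fn directly for sync callers; when an event loop is already
    running, return an awaitable that runs fn in a thread instead."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            loop = asyncio.get_running_loop()
        except RuntimeError:
            return fn(*args, **kwargs)  # no loop: plain sync call
        # loop running: offload the blocking call to the default executor
        return loop.run_in_executor(None, lambda: fn(*args, **kwargs))
    return wrapper

@sync_or_async
def fetch_rows():
    return ["row1", "row2"]
```

A sync caller gets the list back immediately, while `await fetch_rows()` works inside a coroutine.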

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Async support 817989436  
787121933 https://github.com/simonw/sqlite-utils/issues/242#issuecomment-787121933 https://api.github.com/repos/simonw/sqlite-utils/issues/242 MDEyOklzc3VlQ29tbWVudDc4NzEyMTkzMw== eyeseast 25778 2021-02-27T19:18:57Z 2021-02-27T19:18:57Z NONE

I think HTTPX gets it exactly right, with a clear separation between sync and async clients, each with a basically identical API. (I'm about to switch feed-to-sqlite over to it, from Requests, to eventually make way for async support.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Async support 817989436  
785485597 https://github.com/simonw/datasette/pull/1243#issuecomment-785485597 https://api.github.com/repos/simonw/datasette/issues/1243 MDEyOklzc3VlQ29tbWVudDc4NTQ4NTU5Nw== codecov[bot] 22429695 2021-02-25T00:28:30Z 2021-02-25T00:28:30Z NONE

Codecov Report

Merging #1243 (887bfd2) into main (726f781) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1243   +/-   ##
=======================================
  Coverage   91.56%   91.56%           
=======================================
  Files          34       34           
  Lines        4242     4242           
=======================================
  Hits         3884     3884           
  Misses        358      358           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 726f781...32652d9. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
fix small typo 815955014  
784638394 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-784638394 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc4NDYzODM5NA== UtahDave 306240 2021-02-24T00:36:18Z 2021-02-24T00:36:18Z NONE

I noticed that @simonw is using black for formatting. I ran black on my additions in this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
784347646 https://github.com/simonw/datasette/issues/1241#issuecomment-784347646 https://api.github.com/repos/simonw/datasette/issues/1241 MDEyOklzc3VlQ29tbWVudDc4NDM0NzY0Ng== Kabouik 7107523 2021-02-23T16:55:26Z 2021-02-23T16:57:39Z NONE

I think it's possible that many users these days no longer assume they can paste a URL from the browser address bar (if they ever understood that at all) because too many apps are SPAs with broken URLs.

Absolutely, that's why I thought my corner case with an iframe preventing access to the datasette URL could actually be relevant in more general situations.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[Feature request] Button to copy URL 814595021  
784312460 https://github.com/simonw/datasette/issues/1240#issuecomment-784312460 https://api.github.com/repos/simonw/datasette/issues/1240 MDEyOklzc3VlQ29tbWVudDc4NDMxMjQ2MA== Kabouik 7107523 2021-02-23T16:07:10Z 2021-02-23T16:08:28Z NONE

Likewise, while answering to another issue regarding the Vega plugin, I realized that there is no such way of linking rows after a custom query, I only get this "Link" column with individual URLs for the default SQL view:

Or is it there and I am just missing the option in my custom queries?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow facetting on custom queries 814591962  
784157345 https://github.com/simonw/datasette/issues/1218#issuecomment-784157345 https://api.github.com/repos/simonw/datasette/issues/1218 MDEyOklzc3VlQ29tbWVudDc4NDE1NzM0NQ== soobrosa 1244799 2021-02-23T12:12:17Z 2021-02-23T12:12:17Z NONE

Topline this fixed the same problem for me.

brew install python@3.7
ln -s /usr/local/opt/python@3.7/bin/python3.7 /usr/local/opt/python/bin/python3.7
pip3 uninstall -y numpy
pip3 uninstall -y setuptools
pip3 install setuptools
pip3 install numpy
pip3 install datasette-publish-fly
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
/usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory 803356942  
783794520 https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-783794520 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 MDEyOklzc3VlQ29tbWVudDc4Mzc5NDUyMA== UtahDave 306240 2021-02-23T01:13:54Z 2021-02-23T01:13:54Z NONE

Also, @simonw I created a test based off the existing tests. I think it's working correctly

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
WIP: Add Gmail takeout mbox import 813880401  
783688547 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-783688547 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc4MzY4ODU0Nw== UtahDave 306240 2021-02-22T21:31:28Z 2021-02-22T21:31:28Z NONE

@Btibert3 I've opened a PR with my initial attempt at this. Would you be willing to give this a try?

https://github.com/dogsheep/google-takeout-to-sqlite/pull/5

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
783662968 https://github.com/simonw/sqlite-utils/issues/220#issuecomment-783662968 https://api.github.com/repos/simonw/sqlite-utils/issues/220 MDEyOklzc3VlQ29tbWVudDc4MzY2Mjk2OA== mhalle 649467 2021-02-22T20:44:51Z 2021-02-22T20:44:51Z NONE

Actually, coming back to this, I have a clearer use case for enabling fts generation for views: making it easier to bring in text from lookup tables and other joins.

The datasette documentation describes populating an fts table like so:

INSERT INTO "items_fts" (rowid, name, description, category_name)
    SELECT items.rowid,
    items.name,
    items.description,
    categories.name
    FROM items JOIN categories ON items.category_id=categories.id;

Alternatively, if you have fts support in sqlite_utils for views (which sqlite and fts5 support), you can do the same thing just by creating a view that captures the above joins as columns, then creating an fts table from that view. Such an fts table can be created using sqlite_utils, whereas one created with your method can't.

The resulting fts table can then be used by a whole family of related tables and views in the manner you described earlier in this issue.
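A minimal sketch of that view-backed approach, using stdlib sqlite3 (the `items`/`categories` schema follows the datasette docs example quoted above; FTS5 accepts a view as the `content=` source, though the data is illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT,
                    description TEXT, category_id INTEGER);
INSERT INTO categories VALUES (1, 'gadgets');
INSERT INTO items VALUES (1, 'widget', 'a small tool', 1);
-- a view that captures the join as plain, non-joiny columns
CREATE VIEW items_view AS
    SELECT items.id AS id, items.name AS name,
           items.description AS description,
           categories.name AS category_name
    FROM items JOIN categories ON items.category_id = categories.id;
-- fts5 external content table pointed at the view
CREATE VIRTUAL TABLE items_fts USING fts5(
    name, description, category_name,
    content='items_view', content_rowid='id'
);
INSERT INTO items_fts (rowid, name, description, category_name)
    SELECT id, name, description, category_name FROM items_view;
""")
hits = db.execute(
    "SELECT name, category_name FROM items_fts WHERE items_fts MATCH 'gadgets'"
).fetchall()
print(hits)
```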

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Better error message for *_fts methods against views 783778672  
783560017 https://github.com/simonw/datasette/issues/1166#issuecomment-783560017 https://api.github.com/repos/simonw/datasette/issues/1166 MDEyOklzc3VlQ29tbWVudDc4MzU2MDAxNw== thorn0 94334 2021-02-22T18:00:57Z 2021-02-22T18:13:11Z NONE

Hi! I don't think Prettier supports this syntax for globs: datasette/static/*[!.min].js Are you sure that works?
Prettier uses https://github.com/mrmlnc/fast-glob, which in turn uses https://github.com/micromatch/micromatch, and the docs for these packages don't mention this syntax. As per the docs, square brackets should work as in regexes (foo-[1-5].js).

Tested it. Apparently, it works as a negated character class in regexes (like [^.min]). I wonder where this syntax comes from. Micromatch doesn't support that:

micromatch(['static/table.js', 'static/n.js'], ['static/*[!.min].js']);
// result: ["static/n.js"] -- brackets are treated like [!.min] in regexes, without negation
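For comparison, Python's stdlib fnmatch (like POSIX shells) does treat `[!seq]` as a negated character class, which appears to be the behaviour Prettier exhibits here; this sketch just demonstrates that semantics on the same two filenames:

```python
import fnmatch

pattern = "static/*[!.min].js"
files = ["static/table.js", "static/n.js"]
# [!.min] matches one character that is NOT '.', 'm', 'i' or 'n',
# so table.js ('e' before '.js') matches while n.js does not
matches = [f for f in files if fnmatch.fnmatch(f, pattern)]
print(matches)
```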
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Adopt Prettier for JavaScript code formatting 777140799  
783265830 https://github.com/simonw/datasette/issues/782#issuecomment-783265830 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4MzI2NTgzMA== frankieroberto 30665 2021-02-22T10:21:14Z 2021-02-22T10:21:14Z NONE

@simonw:

The problem there is that ?_size=x isn't actually doing the same thing as the SQL limit keyword.

Interesting! Although I don't think it matters too much what the underlying implementation is - I more meant that limit is familiar to developers conceptually as "up to and including this number, if they exist", whereas "size" is potentially more ambiguous. However, it's probably no big deal either way.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
782756398 https://github.com/simonw/datasette/issues/782#issuecomment-782756398 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4Mjc1NjM5OA== simonrjones 601316 2021-02-20T22:05:48Z 2021-02-20T22:05:48Z NONE

I think it’s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default.

I agree it is more predictable if the top level item is an object with a rows or data object that contains an array of data, which then allows for other top-level meta data.

I can see the argument for removing this and just using an array for convenience - but I think that's OK as an option (as you have now).

Rather than have lots of top-level keys you could have a "meta" object to contain non-data stuff. You could use something like "links" for API endpoint URLs (or use a standard like HAL). Which would then leave the top level a bit cleaner - if that's what you what.

Have you had much feedback from users who use the Datasette API a lot?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
782746755 https://github.com/simonw/datasette/issues/782#issuecomment-782746755 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4Mjc0Njc1NQ== frankieroberto 30665 2021-02-20T20:44:05Z 2021-02-20T20:44:05Z NONE

Minor suggestion: rename size query param to limit, to better reflect that it’s a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword?

I like the idea of specifying a limit of 0 if you don’t want any rows data - and returning an empty array under the rows key seems fine.

Have you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don’t parse the JSON. Could be default (can’t be much bigger with gzip?) or opt-in.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
782745199 https://github.com/simonw/datasette/issues/782#issuecomment-782745199 https://api.github.com/repos/simonw/datasette/issues/782 MDEyOklzc3VlQ29tbWVudDc4Mjc0NTE5OQ== frankieroberto 30665 2021-02-20T20:32:03Z 2021-02-20T20:32:03Z NONE

I think it’s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default. Mainly because it allows you to add extra keys in a backwards-compatible way. Also just seems more expected somehow.

The API design guidance for the UK government also recommends this: https://www.gov.uk/guidance/gds-api-technical-and-data-standards#use-json

I also strongly dislike having versioned APIs (eg with a /v1/ path prefix), as it invariably means that old versions stop working at some point, even though the bit of the API you’re using might not have changed at all.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Redesign default JSON format in preparation for Datasette 1.0 627794879  
781599929 https://github.com/simonw/datasette/pull/1232#issuecomment-781599929 https://api.github.com/repos/simonw/datasette/issues/1232 MDEyOklzc3VlQ29tbWVudDc4MTU5OTkyOQ== codecov[bot] 22429695 2021-02-18T19:59:54Z 2021-02-18T22:06:42Z NONE

Codecov Report

Merging #1232 (8876499) into main (4df548e) will increase coverage by 0.03%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main    #1232      +/-   ##
==========================================
+ Coverage   91.42%   91.46%   +0.03%     
==========================================
  Files          32       32              
  Lines        3955     3970      +15     
==========================================
+ Hits         3616     3631      +15     
  Misses        339      339              
Impacted Files                 Coverage Δ
datasette/app.py               95.68% <100.00%> (+0.06%) :arrow_up:
datasette/cli.py               76.62% <100.00%> (+0.36%) :arrow_up:
datasette/views/database.py    97.19% <100.00%> (+0.01%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4df548e...8876499. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
--crossdb option for joining across databases 811407131  
781451701 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-781451701 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc4MTQ1MTcwMQ== Btibert3 203343 2021-02-18T16:06:21Z 2021-02-18T16:06:21Z NONE

Awesome!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
781330466 https://github.com/simonw/datasette/issues/1230#issuecomment-781330466 https://api.github.com/repos/simonw/datasette/issues/1230 MDEyOklzc3VlQ29tbWVudDc4MTMzMDQ2Ng== Kabouik 7107523 2021-02-18T13:06:22Z 2021-02-18T15:22:15Z NONE

[Edit] Oh, I just saw the "Load all" button under the cluster map as well as the setting to alter the max number or results. So I guess this issue only is about the Vega charts.

Note that datasette-cluster-map also seems to be limited to 998 displayed points: ![ss-2021-02-18_140548](https://user-images.githubusercontent.com/7107523/108361225-15fb2a80-71ea-11eb-9a19-d885e8513f55.png)
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages 811054000  
780991910 https://github.com/simonw/datasette/issues/283#issuecomment-780991910 https://api.github.com/repos/simonw/datasette/issues/283 MDEyOklzc3VlQ29tbWVudDc4MDk5MTkxMA== rayvoelker 9308268 2021-02-18T02:13:56Z 2021-02-18T02:13:56Z NONE

I was going to ask you about this issue when we talk during your office-hours schedule this Friday, but was there any support ever added for doing this cross-database joining?

I have a use-case where it could be pretty neat to do analysis using this tool on time-specific databases from snapshots

https://ilsweb.cincinnatilibrary.org/collection-analysis/

and thanks again for such an amazing tool!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Support cross-database joins 325958506  
780817596 https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-780817596 https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4 MDEyOklzc3VlQ29tbWVudDc4MDgxNzU5Ng== UtahDave 306240 2021-02-17T20:01:35Z 2021-02-17T20:01:35Z NONE

I've got this almost working. Just needs some polish

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature Request: Gmail 778380836  
779785638 https://github.com/simonw/sqlite-utils/issues/227#issuecomment-779785638 https://api.github.com/repos/simonw/sqlite-utils/issues/227 MDEyOklzc3VlQ29tbWVudDc3OTc4NTYzOA== camallen 295329 2021-02-16T11:48:03Z 2021-02-16T11:48:03Z NONE

Thank you @simonw

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Error reading csv files with large column data 807174161  
778467759 https://github.com/simonw/datasette/issues/1220#issuecomment-778467759 https://api.github.com/repos/simonw/datasette/issues/1220 MDEyOklzc3VlQ29tbWVudDc3ODQ2Nzc1OQ== aborruso 30607 2021-02-12T21:35:17Z 2021-02-12T21:35:17Z NONE

Thank you

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Installing datasette via docker: Path 'fixtures.db' does not exist 806743116  
770069864 https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60 MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA== daniel-butler 22578954 2021-01-29T21:52:05Z 2021-02-12T18:29:43Z NONE

For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called gh-organization. The GitHub owner id of gh-organization is 123456789.

github-to-sqlite  repos github.db gh-organization

I'm on a Windows computer running git bash to be able to use the | operator. This works for me:

sqlite3 github.db "SELECT full_name FROM repos WHERE owner = '123456789';" | tr '\n\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }

On a pure linux system I think this would work because the new line character is normally \n

sqlite3 github.db "SELECT full_name FROM repos WHERE owner = '123456789';" | tr '\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }

As expected I ran into rate limit issues #51
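A cross-platform alternative to the shell pipeline above is to read the repo names with Python's sqlite3 module and build the argument list directly, which sidesteps the `\n` vs `\r\n` handling entirely (the in-memory database and repo names here stand in for the real github.db; the final command list would be handed to subprocess.run):

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for github.db
db.executescript("""
CREATE TABLE repos (full_name TEXT, owner INTEGER);
INSERT INTO repos VALUES
    ('gh-organization/repo-a', 123456789),
    ('gh-organization/repo-b', 123456789),
    ('someone-else/repo-c', 42);
""")
repos = [row[0] for row in db.execute(
    "SELECT full_name FROM repos WHERE owner = 123456789")]
# equivalent of the shell pipeline: one argument per repo,
# no newline or xargs quoting to worry about
cmd = ["github-to-sqlite", "commits", "github.db", *repos]
print(cmd)
```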

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Use Data from SQLite in other commands 797097140  
778014990 https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778014990 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33 MDEyOklzc3VlQ29tbWVudDc3ODAxNDk5MA== leafgarland 675335 2021-02-12T06:54:14Z 2021-02-12T06:54:14Z NONE

Ahh, that might be because macOS Big Sur has changed the structure of the photos db. Might need to wait for a later release, there is a PR which adds support for Big Sur.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
photo-to-sqlite: command not found 803338729  
778008752 https://github.com/simonw/datasette/issues/1220#issuecomment-778008752 https://api.github.com/repos/simonw/datasette/issues/1220 MDEyOklzc3VlQ29tbWVudDc3ODAwODc1Mg== aborruso 30607 2021-02-12T06:37:34Z 2021-02-12T06:37:34Z NONE

I have used my path; I'm running it from the folder in which I have the db.

Do I need to use an absolute path?

Do I need to create exactly that folder?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Installing datasette via docker: Path 'fixtures.db' does not exist 806743116  
778002092 https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778002092 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33 MDEyOklzc3VlQ29tbWVudDc3ODAwMjA5Mg== robmarkcole 11855322 2021-02-12T06:19:32Z 2021-02-12T06:19:32Z NONE

hi @leafgarland that results in a new error:

(venv) (base) Robins-MacBook:datasette robin$ dogsheep-photos apple-photos photos.db
Traceback (most recent call last):
  File "/Users/robin/datasette/venv/bin/dogsheep-photos", line 8, in <module>
    sys.exit(cli())
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/dogsheep_photos/cli.py", line 206, in apple_photos
    db.conn.execute(
sqlite3.OperationalError: no such table: attached.ZGENERICASSET
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
photo-to-sqlite: command not found 803338729  
777951854 https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-777951854 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33 MDEyOklzc3VlQ29tbWVudDc3Nzk1MTg1NA== leafgarland 675335 2021-02-12T03:54:39Z 2021-02-12T03:54:39Z NONE

I think that is a typo in the docs. You can use:

> dogsheep-photos apple-photos photos.db
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
photo-to-sqlite: command not found 803338729  
777949755 https://github.com/simonw/datasette/pull/1223#issuecomment-777949755 https://api.github.com/repos/simonw/datasette/issues/1223 MDEyOklzc3VlQ29tbWVudDc3Nzk0OTc1NQ== codecov[bot] 22429695 2021-02-12T03:45:31Z 2021-02-12T03:45:31Z NONE

Codecov Report

Merging #1223 (d1cd1f2) into main (9603d89) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main    #1223   +/-   ##
=======================================
  Coverage   91.42%   91.42%           
=======================================
  Files          32       32           
  Lines        3955     3955           
=======================================
  Hits         3616     3616           
  Misses        339      339           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9603d89...d1cd1f2. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add compile option to Dockerfile to fix failing test (fixes #696) 806918878  
777690332 https://github.com/dogsheep/evernote-to-sqlite/issues/11#issuecomment-777690332 https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/11 MDEyOklzc3VlQ29tbWVudDc3NzY5MDMzMg== dskrad 3613583 2021-02-11T18:16:01Z 2021-02-11T18:16:01Z NONE

I solved this issue by modifying line 31 of utils.py

content = ET.tostring(ET.fromstring(content_xml.strip())).decode("utf-8")
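Why the fix works can be seen in a small standalone sketch (the ENML-ish sample content is made up): ElementTree refuses input with whitespace before the XML declaration, and `.strip()` removes it before parsing.

```python
import xml.etree.ElementTree as ET

# leading whitespace before <?xml ...?> makes ElementTree raise
# "XML or text declaration not at start of entity" -- .strip() avoids it
content_xml = """
<?xml version="1.0" encoding="UTF-8"?>
<en-note><div>Hello <b>world</b></div></en-note>
"""
content = ET.tostring(ET.fromstring(content_xml.strip())).decode("utf-8")
print(content)
```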
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
XML parse error 792851444  
774730656 https://github.com/dogsheep/pocket-to-sqlite/issues/9#issuecomment-774730656 https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9 MDEyOklzc3VlQ29tbWVudDc3NDczMDY1Ng== merwok 635179 2021-02-07T18:45:04Z 2021-02-07T18:45:04Z NONE

That URL uses TLS 1.3, but maybe only if the client supports it.
It could be your Python version or your SSL library that’s not recent enough.
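A quick way to check what the local client can actually negotiate, using the stdlib ssl module (available on Python 3.7+):

```python
import ssl

# which OpenSSL this Python links against, and whether it can speak TLS 1.3
print(ssl.OPENSSL_VERSION)
print(ssl.HAS_TLSv1_3)
```

If `HAS_TLSv1_3` prints False, upgrading Python or the underlying OpenSSL would be the next step.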

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
SSL Error 801780625  
774726123 https://github.com/dogsheep/pocket-to-sqlite/issues/9#issuecomment-774726123 https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9 MDEyOklzc3VlQ29tbWVudDc3NDcyNjEyMw== jfeiwell 12669260 2021-02-07T18:21:08Z 2021-02-07T18:21:08Z NONE

@simonw any ideas here?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
SSL Error 801780625  
774528913 https://github.com/simonw/datasette/issues/1217#issuecomment-774528913 https://api.github.com/repos/simonw/datasette/issues/1217 MDEyOklzc3VlQ29tbWVudDc3NDUyODkxMw== virtadpt 639730 2021-02-06T19:23:41Z 2021-02-06T19:23:41Z NONE

I've had a lot of success running it as an OpenFaaS lambda.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Possible to deploy as a python app (for Rstudio connect server)? 802513359  
774385092 https://github.com/simonw/datasette/issues/1217#issuecomment-774385092 https://api.github.com/repos/simonw/datasette/issues/1217 MDEyOklzc3VlQ29tbWVudDc3NDM4NTA5Mg== pavopax 6165713 2021-02-06T02:49:11Z 2021-02-06T02:49:11Z NONE

A good reference seems to be the note to run datasette as a module in https://github.com/simonw/datasette/pull/556

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Possible to deploy as a python app (for Rstudio connect server)? 802513359  

Next page

Advanced export

JSON shape: default, array, newline-delimited, object

CSV options:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);