issue_comments

246 rows where author_association = "NONE" sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue
650340914 https://github.com/simonw/datasette/pull/868#issuecomment-650340914 https://api.github.com/repos/simonw/datasette/issues/868 MDEyOklzc3VlQ29tbWVudDY1MDM0MDkxNA== codecov[bot] 22429695 2020-06-26T18:53:02Z 2020-06-30T03:51:22Z NONE

Codecov Report

Merging #868 into master will increase coverage by 0.11%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master     #868      +/-   ##
==========================================
+ Coverage   83.31%   83.42%   +0.11%     
==========================================
  Files          27       27              
  Lines        3595     3614      +19     
==========================================
+ Hits         2995     3015      +20     
+ Misses        600      599       -1     
Impacted Files                 Coverage Δ
datasette/utils/__init__.py    93.93% <0.00%> (+0.05%) :arrow_up:
datasette/app.py               96.47% <0.00%> (+0.19%) :arrow_up:
datasette/views/special.py     81.17% <0.00%> (+3.39%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a8a5f81...ef837b3. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
initial windows ci setup 646448486
650600176 https://github.com/simonw/datasette/pull/869#issuecomment-650600176 https://api.github.com/repos/simonw/datasette/issues/869 MDEyOklzc3VlQ29tbWVudDY1MDYwMDE3Ng== codecov[bot] 22429695 2020-06-27T18:41:31Z 2020-06-28T02:54:21Z NONE

Codecov Report

Merging #869 into master will increase coverage by 0.23%.
The diff coverage is 90.62%.

@@            Coverage Diff             @@
##           master     #869      +/-   ##
==========================================
+ Coverage   82.99%   83.23%   +0.23%     
==========================================
  Files          26       27       +1     
  Lines        3547     3609      +62     
==========================================
+ Hits         2944     3004      +60     
- Misses        603      605       +2     
Impacted Files                          Coverage Δ
datasette/plugins.py                    82.35% <ø> (ø)
datasette/views/database.py             96.45% <86.36%> (-1.88%) :arrow_down:
datasette/default_magic_parameters.py   91.17% <91.17%> (ø)
datasette/app.py                        96.07% <100.00%> (+0.81%) :arrow_up:
datasette/hookspecs.py                  100.00% <100.00%> (ø)
datasette/utils/__init__.py             93.87% <100.00%> (+0.02%) :arrow_up:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1bb33da...9e693a7. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Magic parameters for canned queries 646734280
648818707 https://github.com/simonw/datasette/pull/866#issuecomment-648818707 https://api.github.com/repos/simonw/datasette/issues/866 MDEyOklzc3VlQ29tbWVudDY0ODgxODcwNw== codecov[bot] 22429695 2020-06-24T13:26:14Z 2020-06-24T13:26:14Z NONE

Codecov Report

Merging #866 into master will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##           master     #866   +/-   ##
=======================================
  Coverage   82.99%   82.99%           
=======================================
  Files          26       26           
  Lines        3547     3547           
=======================================
  Hits         2944     2944           
  Misses        603      603           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1a5b7d3...fb64dda. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update pytest-asyncio requirement from <0.13,>=0.10 to >=0.10,<0.15 644610729
648800356 https://github.com/simonw/datasette/issues/838#issuecomment-648800356 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDY0ODgwMDM1Ng== tballison 6739646 2020-06-24T12:51:48Z 2020-06-24T12:51:48Z NONE

But also want to say thanks for a great tool

+1!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097
648799963 https://github.com/simonw/datasette/issues/865#issuecomment-648799963 https://api.github.com/repos/simonw/datasette/issues/865 MDEyOklzc3VlQ29tbWVudDY0ODc5OTk2Mw== tballison 6739646 2020-06-24T12:51:01Z 2020-06-24T12:51:01Z NONE

This seems to be a duplicate of: https://github.com/simonw/datasette/issues/838

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url doesn't seem to work when adding criteria and clicking "apply" 644582921
648296323 https://github.com/simonw/datasette/issues/694#issuecomment-648296323 https://api.github.com/repos/simonw/datasette/issues/694 MDEyOklzc3VlQ29tbWVudDY0ODI5NjMyMw== kwladyka 3903726 2020-06-23T17:10:51Z 2020-06-23T17:10:51Z NONE

@simonw

Did you find the reason? I had a similar situation and checked this in every way I could think of. I am sure the app doesn't consume that much memory.

I was trying the app with:
docker run --rm -it -p 80:80 -m 128M foo

I watched the app with docker stats, and even limited its memory with CMD ["java", "-Xms60M", "-Xmx60M", "-jar", "api.jar"].
I also checked the memory usage from inside the app and with bash commands. The app definitely doesn't use this much memory, and it doesn't write files either.

The only solution is to raise the memory limit to 512M.

Something is definitely wrong with Cloud Run.

I even built a dedicated app to test this. It looks like once the code / memory / app size crosses some small threshold, seemingly at random, the memory requirement grows by hundreds of MB. Nothing makes sense here, especially since it works everywhere except Cloud Run.

Please let me know if you discover anything more.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
datasette publish cloudrun --memory option 576582604
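
The memory-limit experiment described above can be sanity-checked from inside the container by comparing the process's own peak resident set size against the limit. A minimal sketch (`peak_rss_mb` is a hypothetical helper, not part of Datasette, and assumes a Unix platform):

```python
import resource
import sys


def peak_rss_mb():
    """Return this process's peak resident set size in MiB (Linux/macOS only)."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS
    divisor = 1024 * 1024 if sys.platform == "darwin" else 1024
    return ru.ru_maxrss / divisor
```

Printing this alongside `docker stats` would show whether the discrepancy lies in the app or in how Cloud Run accounts memory.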
647803394 https://github.com/simonw/datasette/issues/838#issuecomment-647803394 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDY0NzgwMzM5NA== ChristopherWilks 6289012 2020-06-22T22:36:34Z 2020-06-22T22:36:34Z NONE

I am also seeing the same issue with an Apache setup (the same even without ProxyPassReverse, though I typically use it as @tsibley stated).

But I also want to say thanks for a great tool (this issue notwithstanding)!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097
645515103 https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645515103 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47 MDEyOklzc3VlQ29tbWVudDY0NTUxNTEwMw== hpk42 73579 2020-06-17T17:30:01Z 2020-06-17T17:30:01Z NONE

It's the one with Python 3.7:

>>> sqlite3.sqlite_version
'3.11.0'

On Wed, Jun 17, 2020 at 10:24 -0700, Simon Willison wrote:

That means your version of SQLite is old enough that it doesn't support the FTS5 extension.

Could you share what operating system you're running, and what the output is that you get from running this?

python -c 'import sqlite3; print(sqlite3.connect(":memory:").execute("select sqlite_version()").fetchone()[0])'

I can teach this tool to fall back on FTS4 if FTS5 isn't available.

--
You are receiving this because you authored the thread.
Reply to this email directly or view it on GitHub:
https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645512127

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fall back to FTS4 if FTS5 is not available 639542974
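
The fallback discussed in this thread hinges on detecting whether the linked SQLite supports FTS5. One way such a probe could look (a sketch, with `best_fts_version` a hypothetical helper name, not the tool's actual implementation):

```python
import sqlite3


def best_fts_version():
    """Return 'FTS5' if available, else 'FTS4', else None."""
    conn = sqlite3.connect(":memory:")
    for fts in ("FTS5", "FTS4"):
        try:
            # Creating a throwaway virtual table fails if the module is missing
            conn.execute(f"CREATE VIRTUAL TABLE fts_probe USING {fts}(content)")
            conn.execute("DROP TABLE fts_probe")
            return fts
        except sqlite3.OperationalError:
            continue
    return None
```

On the commenter's SQLite 3.11.0 this would return "FTS4", since FTS5 only became part of default builds in later releases.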
643711117 https://github.com/simonw/datasette/pull/848#issuecomment-643711117 https://api.github.com/repos/simonw/datasette/issues/848 MDEyOklzc3VlQ29tbWVudDY0MzcxMTExNw== codecov[bot] 22429695 2020-06-14T03:05:55Z 2020-06-14T03:05:55Z NONE

Codecov Report

Merging #848 into master will decrease coverage by 0.04%.
The diff coverage is 0.00%.

@@            Coverage Diff             @@
##           master     #848      +/-   ##
==========================================
- Coverage   82.87%   82.82%   -0.05%     
==========================================
  Files          26       26              
  Lines        3538     3540       +2     
==========================================
  Hits         2932     2932              
- Misses        606      608       +2     
Impacted Files     Coverage Δ
datasette/cli.py   71.34% <0.00%> (-0.89%) :arrow_down:

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a4ad5a5...4a222a1. Read the comment docs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Reload support for config_dir mode. 638270441
643709037 https://github.com/simonw/datasette/issues/691#issuecomment-643709037 https://api.github.com/repos/simonw/datasette/issues/691 MDEyOklzc3VlQ29tbWVudDY0MzcwOTAzNw== amjith 49260 2020-06-14T02:35:16Z 2020-06-14T02:35:16Z NONE

The server should reload in the config_dir mode.

Ref: #848

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
--reload sould reload server if code in --plugins-dir changes 574021194
643083451 https://github.com/simonw/datasette/issues/838#issuecomment-643083451 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDY0MzA4MzQ1MQ== tsibley 79913 2020-06-12T06:04:14Z 2020-06-12T06:04:14Z NONE

Hmm, I haven't tried removing ProxyPassReverse, but it doesn't touch the HTML, which is the issue I'm seeing. You can read the documentation here. ProxyPassReverse is a standard directive when proxying with Apache. I've used it dozens of times with other applications.

Looking a little more at the code, I think the issue here is that the behaviour of base_url makes sense when Datasette is mounted at a path within a larger application, but not when HTTP requests are being proxied to it.

In a mount situation, it is perfectly fine to construct URLs reusing the domain and path from the request. In a proxy situation, it never is, as the domain and path in the request are not the domain and path that the non-proxy client actually needs to use. That is, links which include the Apache → Datasette request origin, localhost:8001, instead of the browser → Apache request origin, example.com, will be broken.

The tests you pointed to also reflect this in two ways:

  1. They strip a leading http://localhost, allowing such URLs in the facet links to pass, but inclusion of that in a proxy situation would mean the URL is broken.

  2. The test client emits direct ASGI events instead of actual proxied HTTP requests. The headers of these ASGI events don't reflect the way an HTTP proxy works; instead they pass through the original request path which contains base_url. This works because Datasette responds to requests equivalently at either /… or /{base_url}/…, which makes some sense in a mount situation but is unconventional (albeit workable) for a proxied app.

Apps that support being proxied automatically support being mounted, but apps that only support being mounted don't automatically support being proxied.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097
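
For context, the Apache setup being debated typically looks something like the fragment below. Paths and the port are illustrative only; the key detail is that because Datasette answers at /{base_url}/… as well as /…, proxying the path through unchanged lines up with the behavior the tests exercise:

```apache
# Hypothetical reverse-proxy config; adjust path and port to your deployment.
ProxyPass        /datasette/ http://localhost:8001/datasette/
ProxyPassReverse /datasette/ http://localhost:8001/datasette/
```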
642522285 https://github.com/simonw/datasette/issues/394#issuecomment-642522285 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDY0MjUyMjI4NQ== LucasVerneyDGE 58298410 2020-06-11T09:15:19Z 2020-06-11T09:15:19Z NONE

Hi @wragge,

This looks great, thanks for the share! I refactored it into a self-contained function, binding on a random available TCP port (multi-user context). I am using subprocess API directly since the %run magic was leaving defunct process behind :/

import socket

from signal import SIGINT
from subprocess import Popen, PIPE

from IPython.display import display, HTML
from notebook.notebookapp import list_running_servers


def get_free_tcp_port():
    """
    Get a free TCP port.
    """
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.bind(('', 0))
    _, port = tcp.getsockname()
    tcp.close()
    return port


def datasette(database):
    """
    Run datasette on an SQLite database.
    """
    # Get current running servers
    servers = list_running_servers()

    # Get the current base url
    base_url = next(servers)['base_url']

    # Get a free port
    port = get_free_tcp_port()

    # Create a base url for Datasette using the proxy path
    proxy_url = f'{base_url}proxy/absolute/{port}/'

    # Display a link to Datasette
    display(HTML(f'<p><a href="{proxy_url}">View Datasette</a> (Click on the stop button to close the Datasette server)</p>'))

    # Launch Datasette
    with Popen(
        [
            'python', '-m', 'datasette', '--',
            database,
            '--port', str(port),
            '--config', f'base_url:{proxy_url}'
        ],
        stdout=PIPE,
        stderr=PIPE,
        bufsize=1,
        universal_newlines=True
    ) as p:
        print(p.stdout.readline(), end='')
        while True:
            try:
                line = p.stderr.readline()
                if not line:
                    break
                print(line, end='')
                exit_code = p.poll()
            except KeyboardInterrupt:
                p.send_signal(SIGINT)

Ideally, I'd like some extra magic to notify users when they close the notebook tab and prompt them to terminate the running Datasette processes. I'll keep looking for it.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
641908346 https://github.com/simonw/datasette/issues/394#issuecomment-641908346 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng== wragge 127565 2020-06-10T10:22:54Z 2020-06-10T10:22:54Z NONE

There's a working demo here: https://github.com/wragge/datasette-test

And if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
641889565 https://github.com/simonw/datasette/issues/394#issuecomment-641889565 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDY0MTg4OTU2NQ== LucasVerneyDGE 58298410 2020-06-10T09:49:34Z 2020-06-10T09:49:34Z NONE

Hi,

I came across this issue while looking for a way to spawn Datasette as an SQLite file viewer in JupyterLab. I found https://github.com/simonw/jupyterserverproxy-datasette-demo which seems to be the most up-to-date proof of concept, but it seems to fail to list the available databases (at least in the Binder demo, https://hub.gke.mybinder.org/user/simonw-jupyters--datasette-demo-uw4dmlnn/datasette/, I only have :memory).

Has anyone tried to improve on this proof of concept to build a Datasette-based viewer for SQLite files?

Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
635513983 https://github.com/simonw/datasette/issues/777#issuecomment-635513983 https://api.github.com/repos/simonw/datasette/issues/777 MDEyOklzc3VlQ29tbWVudDYzNTUxMzk4Mw== thisismyfuckingusername 63653929 2020-05-28T18:16:49Z 2020-05-28T18:16:49Z NONE

I think it's because the given URL of the CSS file doesn't have a complete query parameter.
Try completing the parameter:
<link rel="stylesheet" href="-/static/app.css?(whatever you want that shows a css file)">

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Error pages not correctly loading CSS 626171242
635386935 https://github.com/simonw/datasette/issues/744#issuecomment-635386935 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYzNTM4NjkzNQ== aborruso 30607 2020-05-28T14:32:53Z 2020-05-28T14:32:53Z NONE

Wow, I'm in some way very proud!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
635195322 https://github.com/simonw/datasette/issues/758#issuecomment-635195322 https://api.github.com/repos/simonw/datasette/issues/758 MDEyOklzc3VlQ29tbWVudDYzNTE5NTMyMg== clausjuhl 2181410 2020-05-28T08:23:27Z 2020-05-28T08:23:27Z NONE

@simonw I would prefer just the 7 character hash. No need to make the urls any longer than they need to be :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Question: Access to immutable database-path 612382643
634446887 https://github.com/simonw/datasette/issues/744#issuecomment-634446887 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYzNDQ0Njg4Nw== aborruso 30607 2020-05-27T06:01:28Z 2020-05-27T06:01:28Z NONE

Dear @simonw, thank you for your time: now IT WORKS!!!

I hope this edit to the Datasette code isn't just for an exceptional case (my PC configuration) and that it will be useful to other users.

Thank you again!!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
634283355 https://github.com/simonw/datasette/issues/744#issuecomment-634283355 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYzNDI4MzM1NQ== aborruso 30607 2020-05-26T21:15:34Z 2020-05-26T21:15:34Z NONE

Oh no! It looks like dirs_exist_ok is Python 3.8 only. This is a bad fix, it needs to work on older Python's too. Re-opening.

Thank you very much

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
633234781 https://github.com/dogsheep/dogsheep-photos/issues/20#issuecomment-633234781 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/20 MDEyOklzc3VlQ29tbWVudDYzMzIzNDc4MQ== dmd 41439 2020-05-24T13:56:13Z 2020-05-24T13:56:13Z NONE

As that seems to be closed, can you give a hint on how to make this work?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Ability to serve thumbnailed Apple Photo from its place on disk 613006393
632305868 https://github.com/simonw/datasette/issues/744#issuecomment-632305868 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYzMjMwNTg2OA== aborruso 30607 2020-05-21T19:43:23Z 2020-05-21T19:43:23Z NONE

@simonw now I have

Traceback (most recent call last):
  File "/home/aborruso/.local/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py", line 103, in heroku
    extra_metadata,
  File "/usr/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py", line 191, in temporary_heroku_directory
    os.path.join(tmp.name, "templates"),
  File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py", line 605, in link_or_copy_directory
    shutil.copytree(src, dst, copy_function=os.link, dirs_exist_ok=True)
TypeError: copytree() got an unexpected keyword argument 'dirs_exist_ok'

Should I open a new issue?

Thank you

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
632255088 https://github.com/simonw/datasette/issues/744#issuecomment-632255088 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYzMjI1NTA4OA== aborruso 30607 2020-05-21T17:58:51Z 2020-05-21T17:58:51Z NONE

Thank you very much!!

I will try it and write back here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
632249565 https://github.com/simonw/datasette/issues/744#issuecomment-632249565 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYzMjI0OTU2NQ== aborruso 30607 2020-05-21T17:47:40Z 2020-05-21T17:47:40Z NONE

@simonw can I test it now? What must I do to update it?

Thank you

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
628405453 https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-628405453 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22 MDEyOklzc3VlQ29tbWVudDYyODQwNTQ1Mw== RhetTbull 41546558 2020-05-14T05:59:53Z 2020-05-14T05:59:53Z NONE

I've added support for the above exif data to v0.28.17 of osxphotos. PhotoInfo.exif_info will return an ExifInfo dataclass object with the following properties:

    flash_fired: bool
    iso: int
    metering_mode: int
    sample_rate: int
    track_format: int
    white_balance: int
    aperture: float
    bit_rate: float
    duration: float
    exposure_bias: float
    focal_length: float
    fps: float
    latitude: float
    longitude: float
    shutter_speed: float
    camera_make: str
    camera_model: str
    codec: str
    lens_model: str

It's not all the EXIF data available in most files, but it is the data Photos deems important to save. Of course, you can get all the exif_data

Note: this only works in Photos 5. As best as I can tell, EXIF data is not stored in the database for earlier versions.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Try out ExifReader 615626118
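
The property list above maps naturally onto a Python dataclass. A rough illustrative stand-in (`ExifInfoSketch` is a made-up name covering a subset of the listed fields; it is not osxphotos' real `ExifInfo` class, whose defaults are unknown here):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExifInfoSketch:
    """Illustrative subset of the EXIF properties listed in the comment."""
    flash_fired: Optional[bool] = None
    iso: Optional[int] = None
    metering_mode: Optional[int] = None
    aperture: Optional[float] = None
    focal_length: Optional[float] = None
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    camera_make: Optional[str] = None
    camera_model: Optional[str] = None
    lens_model: Optional[str] = None
```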
627007458 https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-627007458 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22 MDEyOklzc3VlQ29tbWVudDYyNzAwNzQ1OA== RhetTbull 41546558 2020-05-11T22:51:52Z 2020-05-11T22:52:26Z NONE

I'm not familiar with ExifReader. I wrote my own wrapper around exiftool because I wanted a simple way to write EXIF data when exporting photos (e.g. writing out to PersonInImage and keywords to IPTC:Keywords) and the existing python packages like pyexiftool didn't do quite what I wanted. If all you're after is the camera and shot info, that's available in ZEXTENDEDATTRIBUTES table. I've got an open issue #11 to add this to osxphotos but it hasn't bubbled to the top of my backlog yet.

osxphotos will give you the location info: PhotoInfo.location returns a tuple of (lat, lon) though this info is in ZEXTENDEDATTRIBUTES too (though it might not be correct as I believe Photos creates this table at import and the user might have changed the location of a photo, e.g. if camera didn't have GPS).

CREATE TABLE ZEXTENDEDATTRIBUTES (
  Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, 
  Z_OPT INTEGER, ZFLASHFIRED INTEGER, 
  ZISO INTEGER, ZMETERINGMODE INTEGER, 
  ZSAMPLERATE INTEGER, ZTRACKFORMAT INTEGER, 
  ZWHITEBALANCE INTEGER, ZASSET INTEGER, 
  ZAPERTURE FLOAT, ZBITRATE FLOAT, ZDURATION FLOAT, 
  ZEXPOSUREBIAS FLOAT, ZFOCALLENGTH FLOAT, 
  ZFPS FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT, 
  ZSHUTTERSPEED FLOAT, ZCAMERAMAKE VARCHAR, 
  ZCAMERAMODEL VARCHAR, ZCODEC VARCHAR, 
  ZLENSMODEL VARCHAR
);
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Try out ExifReader 615626118
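
The ZEXTENDEDATTRIBUTES schema above can be exercised directly with Python's sqlite3 module. A sketch that builds an abbreviated copy of the table in memory and pulls camera info the way the comment suggests (the inserted values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abbreviated version of the ZEXTENDEDATTRIBUTES schema from the comment
conn.execute(
    """CREATE TABLE ZEXTENDEDATTRIBUTES (
         Z_PK INTEGER PRIMARY KEY, ZISO INTEGER,
         ZAPERTURE FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT,
         ZCAMERAMAKE VARCHAR, ZCAMERAMODEL VARCHAR
       )"""
)
conn.execute(
    "INSERT INTO ZEXTENDEDATTRIBUTES "
    "(ZISO, ZAPERTURE, ZLATITUDE, ZLONGITUDE, ZCAMERAMAKE, ZCAMERAMODEL) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    (100, 1.8, 37.77, -122.42, "Apple", "iPhone 11"),
)
make, model = conn.execute(
    "SELECT ZCAMERAMAKE, ZCAMERAMODEL FROM ZEXTENDEDATTRIBUTES"
).fetchone()
```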
626991001 https://github.com/simonw/datasette/issues/699#issuecomment-626991001 https://api.github.com/repos/simonw/datasette/issues/699 MDEyOklzc3VlQ29tbWVudDYyNjk5MTAwMQ== zeluspudding 8431341 2020-05-11T22:06:34Z 2020-05-11T22:06:34Z NONE

Very nice! Thank you for sharing that :+1: :) Will try it out!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Authentication (and permissions) as a core concept 582526961
626807487 https://github.com/simonw/datasette/issues/699#issuecomment-626807487 https://api.github.com/repos/simonw/datasette/issues/699 MDEyOklzc3VlQ29tbWVudDYyNjgwNzQ4Nw== zeluspudding 8431341 2020-05-11T16:23:57Z 2020-05-11T16:24:59Z NONE

Authorization: Bearer xxx auth for API keys would be a big plus for me. I looked into adding this to your Flask logic, but learned this project doesn't use Flask. Interesting 🤔

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Authentication (and permissions) as a core concept 582526961
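
The bearer-token scheme the commenter asks for is simple to parse on the server side. A minimal sketch (`parse_bearer_token` is a hypothetical helper, not part of Datasette; it assumes lower-cased header names as ASGI delivers them):

```python
def parse_bearer_token(headers):
    """Extract the token from an 'Authorization: Bearer <token>' header, if any."""
    auth = headers.get("authorization", "")
    if auth.lower().startswith("bearer "):
        return auth[7:].strip()
    return None
```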
626667235 https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22 MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ== RhetTbull 41546558 2020-05-11T12:20:34Z 2020-05-11T12:20:34Z NONE

@simonw FYI, osxphotos includes a built in ExifTool class that uses exiftool to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osphotos command line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example:

import osxphotos
photos = osxphotos.PhotosDB().photos()
exiftool = osxphotos.exiftool.ExifTool(photos[0].path)
exifdata = exiftool.as_dict()
tags = exifdata["IPTC:Keywords"]

Not as elegant perhaps as a python only implementation because ExifTool has to make subprocess calls to an external tool but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC.

As for implementation, ExifTool uses a singleton pattern so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Try out ExifReader 615626118
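
The singleton pattern described above (spawn the exiftool IPC once, reuse the same process for every subsequent call) can be sketched like this; `ExifToolProcess` and its `_spawn` hook are hypothetical names, not osxphotos' actual class:

```python
class ExifToolProcess:
    """First construction 'spawns' the shared handle; later ones reuse it."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._spawn()
        return cls._instance

    def _spawn(self):
        # In the real tool this would start the long-lived exiftool subprocess.
        self.started = True
```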
626396379 https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21 MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ== RhetTbull 41546558 2020-05-10T22:01:48Z 2020-05-10T22:01:48Z NONE

It frustrates me when package authors create a "drop-in" replacement with the same import name... this kind of thing has bitten me more than once! It would've been nicer, I think, for bpylist2 to require "import bpylist2 as bpylist"

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
bpylist.archiver.CircularReference: archive has a cycle with uid(13) 615474990
626395641 https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21 MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ== RhetTbull 41546558 2020-05-10T21:55:54Z 2020-05-10T21:55:54Z NONE

Did removing old bpylist solve the original problem or do you still have a photo that throws circular reference?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
bpylist.archiver.CircularReference: archive has a cycle with uid(13) 615474990
626395507 https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395507 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21 MDEyOklzc3VlQ29tbWVudDYyNjM5NTUwNw== RhetTbull 41546558 2020-05-10T21:54:45Z 2020-05-10T21:54:45Z NONE

@simonw does Photos show valid reverse geolocation info? Are you sure you're using bpylist2 and not bpylist? They're both unfortunately imported as "bpylist" so if you somehow got the wrong (original bpylist) version installed, it could be the issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
bpylist.archiver.CircularReference: archive has a cycle with uid(13) 615474990
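
Since both packages install the same bpylist import name, checking which distribution is actually present can resolve the ambiguity @RhetTbull describes. A sketch using importlib.metadata (Python 3.8+; `installed_bpylist_dists` is a hypothetical helper):

```python
from importlib import metadata


def installed_bpylist_dists():
    """Return (name, version) pairs for whichever of bpylist/bpylist2 is installed."""
    found = []
    for dist_name in ("bpylist", "bpylist2"):
        try:
            found.append((dist_name, metadata.version(dist_name)))
        except metadata.PackageNotFoundError:
            pass
    return found
```

If both show up, the original bpylist is likely shadowing bpylist2 and should be uninstalled.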
626390317 https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626390317 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21 MDEyOklzc3VlQ29tbWVudDYyNjM5MDMxNw== RhetTbull 41546558 2020-05-10T21:11:24Z 2020-05-10T21:50:58Z NONE

Ugh....Yeah, I think easiest is to catch the exception and return no place as you suggest. This particular bit of code involves un-archiving a serialized NSKeyedArchiver which uses an object table and it is certainly possible to create a circular reference that way. Because this is happening in the decode, the circular reference must be in the original data. Does Photos show valid reverse geolocation info for the photo in question? If so, Photos may be doing something beyond a simple decode of the binary plist. For now, I'll push a patch to catch the exception.
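The fix described above ("catch the exception and return no place") can be sketched as a try/except around the decode. To keep the sketch self-contained, `CircularReference` and `unarchive_place` below are stand-ins for `bpylist.archiver.CircularReference` and the real unarchive call, not the actual library API:

```python
# Minimal sketch, assuming a bpylist2-style decoder that raises on cycles.
class CircularReference(Exception):
    """Stand-in for bpylist.archiver.CircularReference."""

def unarchive_place(_plist_bytes):
    # Simulates decoding a serialized NSKeyedArchiver whose object
    # table contains a cycle.
    raise CircularReference("archive has a cycle with uid(13)")

def decode_place(plist_bytes):
    """Return the decoded place, or None when the archive is cyclic."""
    try:
        return unarchive_place(plist_bytes)
    except CircularReference:
        return None  # treat a cyclic archive as "no reverse geolocation"

print(decode_place(b"..."))  # None
```

One bad photo then yields `None` instead of aborting the whole export.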

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
bpylist.archiver.CircularReference: archive has a cycle with uid(13) 615474990
626340387 https://github.com/simonw/datasette/issues/254#issuecomment-626340387 https://api.github.com/repos/simonw/datasette/issues/254 MDEyOklzc3VlQ29tbWVudDYyNjM0MDM4Nw== philroche 247131 2020-05-10T14:54:13Z 2020-05-10T14:54:13Z NONE

This has now been resolved and is not present in the current version of datasette.

The sample query @simonw mentioned now returns as expected.

https://aggreg8streams.tinyviking.ie/simplestreams?sql=select+*+from+cloudimage+where+%22content_id%22+%3D+%22com.ubuntu.cloud%3Areleased%3Adownload%22+order+by+id+limit+10

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Escaping named parameters in canned queries 322283067
626006493 https://github.com/simonw/datasette/issues/619#issuecomment-626006493 https://api.github.com/repos/simonw/datasette/issues/619 MDEyOklzc3VlQ29tbWVudDYyNjAwNjQ5Mw== davidszotten 412005 2020-05-08T20:29:12Z 2020-05-08T20:29:12Z NONE

Just trying out datasette and quite liking it, thanks! I found this issue annoying enough to have a go at a fix. Have you any thoughts on a good approach? (I'm happy to dig in myself if you haven't thought about it yet, but wanted to check whether you had an idea for how to fix it when you raised the issue.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
"Invalid SQL" page should let you edit the SQL 520655983
625321121 https://github.com/simonw/datasette/issues/648#issuecomment-625321121 https://api.github.com/repos/simonw/datasette/issues/648 MDEyOklzc3VlQ29tbWVudDYyNTMyMTEyMQ== chekos 28694175 2020-05-07T15:21:19Z 2020-05-07T15:21:19Z NONE

It seems that Heroku wasn't updating to 0.41 on deployment.

Adding --branch 0.41 solved it! Heroku caches dependencies, and (I think) because the requirements.txt doesn't specify the datasette version, it didn't update from 0.40 to 0.41 on Heroku even though my local requirements file specified datasette >= 0.41.

These are the lines that gave me an idea on how to solve it:
https://github.com/simonw/datasette/blob/182e5c8745c94576718315f7596ccc81e5e2417b/datasette/publish/heroku.py#L164-L186

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Mechanism for adding arbitrary pages like /about 534492501
625286519 https://github.com/simonw/datasette/issues/648#issuecomment-625286519 https://api.github.com/repos/simonw/datasette/issues/648 MDEyOklzc3VlQ29tbWVudDYyNTI4NjUxOQ== chekos 28694175 2020-05-07T14:23:22Z 2020-05-07T14:28:33Z NONE

Hi! I'm using datasette on this repository: https://github.com/chekos/RIPA-2018-datasette

and on my local machine I can see an /about page I created, but when I deploy to heroku I get a 404 (http://ripa-2018-db.herokuapp.com)

I bumped datasette in my requirements file to 0.41, so I'm not 100% sure what the issue is 🤔

Do you have any idea what could be the problem? 👀

EDIT: for context, I have a templates directory with a pages/about.html file in https://github.com/chekos/RIPA-2018-datasette/tree/master/datasette/templates

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Mechanism for adding arbitrary pages like /about 534492501
625091976 https://github.com/simonw/datasette/issues/744#issuecomment-625091976 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYyNTA5MTk3Ng== aborruso 30607 2020-05-07T07:51:25Z 2020-05-07T07:51:25Z NONE

I have installed the heroku-builds plugin (heroku plugins:install heroku-builds), but I get the same error.

Then I removed this from datasette\publish\heroku.py:

        # Check for heroku-builds plugin
        plugins = [
            line.split()[0] for line in check_output(["heroku", "plugins"]).splitlines()
        ]
        if b"heroku-builds" not in plugins:
            click.echo(
                "Publishing to Heroku requires the heroku-builds plugin to be installed."
            )
            click.confirm(
                "Install it? (this will run `heroku plugins:install heroku-builds`)",
                abort=True,
            )
            call(["heroku", "plugins:install", "heroku-builds"])

And now I get:

Traceback (most recent call last):
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 210, in temporary_heroku_directory
    yield
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 96, in heroku
    list_output = check_output(["heroku", "apps:list", "--json"]).decode(
  File "c:\python37\lib\subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "c:\python37\lib\subprocess.py", line 472, in run
    with Popen(*popenargs, **kwargs) as process:
  File "c:\python37\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "c:\python37\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The specified file could not be found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\Scripts\datasette.exe\__main__.py", line 9, in <module>
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 120, in heroku
    call(["heroku", "builds:create", "-a", app_name, "--include-vcs-ignore"])
  File "c:\python37\lib\contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 213, in temporary_heroku_directory
    tmp.cleanup()
  File "c:\python37\lib\tempfile.py", line 809, in cleanup
    _shutil.rmtree(self.name)
  File "c:\python37\lib\shutil.py", line 513, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\python37\lib\shutil.py", line 401, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\python37\lib\shutil.py", line 399, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] Unable to access file. The file is being used by another process: 'C:\\Users\\aborr\\AppData\\Local\\Temp\\tmpkcxy8i_q'
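The [WinError 2] FileNotFoundError earlier in the traceback is raised before heroku even runs: subprocess cannot locate a "heroku" executable on PATH. A small pre-flight sketch (the "heroku" lookup is illustrative; any command name works the same way):

```python
# shutil.which performs the same executable lookup subprocess does, so it
# shows in advance whether check_output(["heroku", ...]) can succeed.
import shutil
import subprocess

def find_cli(name):
    """Return the full path subprocess would resolve for `name`, or None."""
    return shutil.which(name)

heroku = find_cli("heroku")
if heroku is None:
    # This is exactly the case that raises FileNotFoundError: [WinError 2].
    print("heroku CLI not found on PATH")
else:
    print(subprocess.check_output([heroku, "--version"]).decode())
```

On Windows, `shutil.which` also resolves `.exe`/`.cmd` variants via PATHEXT, so it matches what the child-process spawn would find.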
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
625083715 https://github.com/simonw/datasette/issues/744#issuecomment-625083715 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYyNTA4MzcxNQ== aborruso 30607 2020-05-07T07:34:18Z 2020-05-07T07:34:18Z NONE

I'm not very strong on Windows; I use Debian (inside WSL).

However, these are the steps I took:

  • I installed Python 3 for Windows (I have 3.7.3);
  • I installed the Heroku CLI for win64 and logged in;
  • I installed datasette by running python -m pip install --upgrade --user datasette.

It's a very basic Python env that I don't normally use; this time it's only to reach my goal: trying to publish using a custom template

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
625066073 https://github.com/simonw/datasette/issues/744#issuecomment-625066073 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYyNTA2NjA3Mw== aborruso 30607 2020-05-07T06:53:09Z 2020-05-07T06:53:09Z NONE

@simonw here's another error, this time starting from Windows.

I run

datasette  publish heroku -n comunepa --template-dir template commissioniComunePalermo.db

And I get:

Traceback (most recent call last):
  File "c:\python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\Scripts\datasette.exe\__main__.py", line 9, in <module>
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 53, in heroku
    line.split()[0] for line in check_output(["heroku", "plugins"]).splitlines()
  File "c:\python37\lib\subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "c:\python37\lib\subprocess.py", line 472, in run
    with Popen(*popenargs, **kwargs) as process:
  File "c:\python37\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "c:\python37\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The specified file could not be found

files.zip

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
625060561 https://github.com/simonw/datasette/issues/744#issuecomment-625060561 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYyNTA2MDU2MQ== aborruso 30607 2020-05-07T06:38:24Z 2020-05-07T06:38:24Z NONE

Hi @simonw, I could probably try to do it with Python on Windows. I do not like doing these things in a Windows environment, but the WSL Linux env (in which I do a lot of great things) is probably not an environment that will be tested for datasette.

On plain Windows I shouldn't have any problems. Am I right?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
624860451 https://github.com/simonw/datasette/issues/759#issuecomment-624860451 https://api.github.com/repos/simonw/datasette/issues/759 MDEyOklzc3VlQ29tbWVudDYyNDg2MDQ1MQ== Krazybug 133845 2020-05-06T20:03:01Z 2020-05-06T20:04:42Z NONE

Thank you. It now works with the url

http://localhost:8001/index/summary?_search=language%3Aeng&_sort=title&_searchmode=raw

But I'm not able to make it work from the metadata file. Here is mine (note that the sort column is taken into account):

{
    "databases": {
      "index": {
        "tables": {
            "summary": {
                "sort": "title",
                "searchmode": "raw"
            }
        }
      }
    }
  }

Any idea ?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
fts search on a column doesn't work anymore due to escape_fts 612673948
624284539 https://github.com/dogsheep/dogsheep-photos/issues/17#issuecomment-624284539 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/17 MDEyOklzc3VlQ29tbWVudDYyNDI4NDUzOQ== RhetTbull 41546558 2020-05-05T20:20:05Z 2020-05-05T20:20:05Z NONE

FYI, I've got an issue to make osxphotos cross-platform but it's low on my priority list. About 90% of the functionality could be done cross-platform but right now the MacOS specific stuff is embedded throughout and would take some work. Though I try to minimize it, there are sprinklings of ObjC & AppleScript throughout osxphotos.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Only install osxphotos if running on macOS 612860531
623845014 https://github.com/dogsheep/dogsheep-photos/issues/16#issuecomment-623845014 https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16 MDEyOklzc3VlQ29tbWVudDYyMzg0NTAxNA== RhetTbull 41546558 2020-05-05T03:55:14Z 2020-05-05T03:56:24Z NONE

I'm traveling w/o access to my Mac so can't help with any code right now. I suspected ZSCENEIDENTIFIER was a foreign key into one of these psi.sqlite tables. But it looks like you're on to something connecting groups to assets. As for the UUID, I think there are two ints because each is 64 bits but UUIDs are 128 bits, so the two need to be combined to get the 128-bit UUID. You might be able to use Apple's NSUUID, for example, by wrapping it with PyObjC. Here's one example of using this in PyObjC's test suite. Interesting that it's stored this way instead of as a UUIDString as in Photos.sqlite. Perhaps it's for faster indexing.
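The two-ints-to-one-UUID idea above can also be sketched with the stdlib instead of NSUUID. The big-endian byte order and the high/low ordering here are assumptions that should be verified against a known UUIDString from Photos.sqlite:

```python
# Combine two signed 64-bit integers (as SQLite stores them) into one
# 128-bit UUID. ">qq" packs both big-endian; "q" (signed) matches SQLite,
# which can store negative values for the raw halves.
import struct
import uuid

def uuid_from_two_int64(hi, lo):
    return uuid.UUID(bytes=struct.pack(">qq", hi, lo))

print(uuid_from_two_int64(0x0123456789ABCDEF, 0x0FEDCBA987654321))
# 01234567-89ab-cdef-0fed-cba987654321
```

If the reconstructed UUID doesn't match a known one, try swapping the two ints or using little-endian (`"<qq"`).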

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Import machine-learning detected labels (dog, llama etc) from Apple Photos 612287234
623623696 https://github.com/simonw/datasette/pull/725#issuecomment-623623696 https://api.github.com/repos/simonw/datasette/issues/725 MDEyOklzc3VlQ29tbWVudDYyMzYyMzY5Ng== stonebig 4312421 2020-05-04T18:16:54Z 2020-05-04T18:16:54Z NONE

thanks a lot, Simon

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update aiofiles requirement from ~=0.4.0 to >=0.4,<0.6 598891570
623044643 https://github.com/dogsheep/github-to-sqlite/issues/38#issuecomment-623044643 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38 MDEyOklzc3VlQ29tbWVudDYyMzA0NDY0Mw== zzeleznick 5779832 2020-05-03T02:34:32Z 2020-05-03T02:34:32Z NONE
  1. More than glad to share feedback from the sidelines as a starrer.

-- Motivation:
--  Datasette is a data hammer and I'm looking for nails
--  e.g. Find which repos a user has starred => trigger a TBD downstream action
select
  starred_at,
  starred_by,
  full_name as repo_name
from
  repos_starred
where
  starred_by = "zzeleznick"
order by
  starred_at desc

starred_at            starred_by   repo_name
--------------------  -----------  --------------------------
2020-02-11T01:08:59Z  zzeleznick   dogsheep/twitter-to-sqlite
2020-01-11T21:57:34Z  zzeleznick   simonw/datasette

  2. In my day job, I use Airflow, and that's the mental model I'm bringing to datasette.

  3. I see your projects, like twitter-to-sqlite, as akin to Operators in the Airflow world.
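The query above can be tried without a real github-to-sqlite export by loading a couple of rows into an in-memory stand-in table (the table and column names come from the comment; the rows are the two starred repos it shows):

```python
# Run the repos_starred query against an in-memory SQLite database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "create table repos_starred (starred_at text, starred_by text, full_name text)"
)
db.executemany(
    "insert into repos_starred values (?, ?, ?)",
    [
        ("2020-02-11T01:08:59Z", "zzeleznick", "dogsheep/twitter-to-sqlite"),
        ("2020-01-11T21:57:34Z", "zzeleznick", "simonw/datasette"),
    ],
)
rows = db.execute(
    """
    select starred_at, starred_by, full_name as repo_name
    from repos_starred
    where starred_by = 'zzeleznick'
    order by starred_at desc
    """
).fetchall()
for row in rows:
    print(row)
```

ISO 8601 timestamps sort correctly as plain strings, so `order by starred_at desc` returns the most recent star first.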

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[Feature Request] Support Repo Name in Search 🥺 611284481
623038148 https://github.com/dogsheep/github-to-sqlite/issues/38#issuecomment-623038148 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38 MDEyOklzc3VlQ29tbWVudDYyMzAzODE0OA== zzeleznick 5779832 2020-05-03T01:18:57Z 2020-05-03T01:18:57Z NONE

Thanks, @simonw!

I feel a little foolish in hindsight, but I'm on the same page now and am glad to have discovered first-hand a motivation for this repos_starred use case.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[Feature Request] Support Repo Name in Search 🥺 611284481
622279374 https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622279374 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33 MDEyOklzc3VlQ29tbWVudDYyMjI3OTM3NA== garethr 2029 2020-05-01T07:12:47Z 2020-05-01T07:12:47Z NONE

I also got it working with:

run: echo ${{ secrets.github_token }} | github-to-sqlite auth
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Fall back to authentication via ENV 609950090
621030783 https://github.com/simonw/datasette/issues/744#issuecomment-621030783 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYyMTAzMDc4Mw== aborruso 30607 2020-04-29T07:16:27Z 2020-04-29T07:16:27Z NONE

Hi @simonw, it's Debian as Windows Subsystem for Linux:

PRETTY_NAME="Pengwin"
NAME="Pengwin"
VERSION_ID="10"
VERSION="10 (buster)"
ID=debian
ID_LIKE=debian
HOME_URL="https://github.com/whitewaterfoundry/Pengwin"
SUPPORT_URL="https://github.com/whitewaterfoundry/Pengwin"
BUG_REPORT_URL="https://github.com/whitewaterfoundry/Pengwin"
VERSION_CODENAME=buster
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
621011554 https://github.com/simonw/datasette/issues/744#issuecomment-621011554 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYyMTAxMTU1NA== aborruso 30607 2020-04-29T06:17:26Z 2020-04-29T06:17:26Z NONE

A stupid note: I have no tmpcqv_1i5d folder in /tmp.

It seems to me that it does not create a /tmp/tmpcqv_1i5d/templates folder (or any folder with another name) inside /tmp.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
621008152 https://github.com/simonw/datasette/issues/744#issuecomment-621008152 https://api.github.com/repos/simonw/datasette/issues/744 MDEyOklzc3VlQ29tbWVudDYyMTAwODE1Mg== aborruso 30607 2020-04-29T06:05:02Z 2020-04-29T06:05:02Z NONE

Hi @simonw, I have installed it and I get the errors below.

Is it possible that your /tmp directory is on a different volume from the template folder? That could cause a problem with the symlinks.

No, the /tmp folder is on the same volume.

Thank you

Traceback (most recent call last):
  File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py", line 607, in link_or_copy_directory
    shutil.copytree(src, dst, copy_function=os.link)
  File "/usr/lib/python3.7/shutil.py", line 365, in copytree
    raise Error(errors)
shutil.Error: [('/var/youtubeComunePalermo/processing/./template/base.html', '/tmp/tmpcqv_1i5d/templates/base.html', "[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/base.html' -> '/tmp/tmpcqv_1i5d/templates/base.html'"), ('/var/youtubeComunePalermo/processing/./template/index.html', '/tmp/tmpcqv_1i5d/templates/index.html', "[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/index.html' -> '/tmp/tmpcqv_1i5d/templates/index.html'")]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/aborruso/.local/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py", line 103, in heroku
    extra_metadata,
  File "/usr/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py", line 191, in temporary_heroku_directory
    os.path.join(tmp.name, "templates"),
  File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py", line 609, in link_or_copy_directory
    shutil.copytree(src, dst)
  File "/usr/lib/python3.7/shutil.py", line 321, in copytree
    os.makedirs(dst)
  File "/usr/lib/python3.7/os.py", line 221, in makedirs
    mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/tmp/tmpcqv_1i5d/templates'
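The "[Errno 18] Invalid cross-device link" in the first traceback is EXDEV: `os.link` cannot hard-link across filesystems, which is easy to hit under WSL when /tmp and the source directory live on different mounts. A minimal sketch of the link-then-copy fallback pattern (not datasette's actual `link_or_copy_directory` implementation):

```python
# Try a hard link first (cheap, same-filesystem only); on EXDEV fall
# back to a real copy instead of letting the error propagate.
import errno
import os
import shutil
import tempfile

def link_or_copy(src, dst):
    """Hard-link src to dst; copy instead when they span filesystems."""
    try:
        os.link(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dst)

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "a.txt")
    with open(src, "w") as f:
        f.write("hello")
    link_or_copy(src, os.path.join(tmp, "b.txt"))
    print(sorted(os.listdir(tmp)))  # ['a.txt', 'b.txt']
```

The second traceback (FileExistsError) then comes from retrying `shutil.copytree` into a destination the failed first attempt had already partially created.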
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
link_or_copy_directory() error - Invalid cross-device link 608058890
620841496 https://github.com/simonw/datasette/issues/633#issuecomment-620841496 https://api.github.com/repos/simonw/datasette/issues/633 MDEyOklzc3VlQ29tbWVudDYyMDg0MTQ5Ng== nryberg 46165 2020-04-28T20:37:50Z 2020-04-28T20:37:50Z NONE

Using the Heroku web interface, you can set the WEB_CONCURRENCY = 1

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Publish to Heroku is broken: "WARNING: You must pass the application as an import string to enable 'reload' or 'workers" 522334771
620401443 https://github.com/simonw/datasette/issues/735#issuecomment-620401443 https://api.github.com/repos/simonw/datasette/issues/735 MDEyOklzc3VlQ29tbWVudDYyMDQwMTQ0Mw== aborruso 30607 2020-04-28T06:10:20Z 2020-04-28T06:10:20Z NONE

It works on Heroku, so it might be a bug with datasette-publish-now.

Thank you

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Error when I click on "View and edit SQL" 605806386
620401172 https://github.com/simonw/datasette/issues/736#issuecomment-620401172 https://api.github.com/repos/simonw/datasette/issues/736 MDEyOklzc3VlQ29tbWVudDYyMDQwMTE3Mg== aborruso 30607 2020-04-28T06:09:28Z 2020-04-28T06:09:28Z NONE

Would you mind trying publishing your database using one of the other options - Heroku, Cloud Run or https://fly.io/ - and see if you have the same bug there?

It works on Heroku, so it might be a bug with datasette-publish-now.

Thank you

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
strange behavior using accented characters 606720674
619489720 https://github.com/simonw/datasette/pull/725#issuecomment-619489720 https://api.github.com/repos/simonw/datasette/issues/725 MDEyOklzc3VlQ29tbWVudDYxOTQ4OTcyMA== stonebig 4312421 2020-04-26T06:09:59Z 2020-04-26T06:10:13Z NONE

As a complementary remark: the versioning of datasette dependencies will become a problem when the new pip "dependency resolver" is activated. For now it's just warnings via pip checks; later it will be a hard "no":

datasette 0.40 has requirement aiofiles~=0.4.0, but you have aiofiles 0.5.0.
datasette 0.40 has requirement Jinja2~=2.10.3, but you have jinja2 2.11.2.
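For reference, `~=0.4.0` is PEP 440's "compatible release" operator, equivalent to `>=0.4.0,<0.5.0`, which is why aiofiles 0.5.0 trips the check above. A stdlib-only sketch of that rule (real resolution should use pip or the `packaging` library):

```python
# Tiny illustration of the compatible-release ("~=") rule: the final
# version segment may grow, but the leading segments must stay fixed.
def satisfies_compatible(version, pin):
    v = tuple(int(part) for part in version.split("."))
    p = tuple(int(part) for part in pin.split("."))
    return v >= p and v[: len(p) - 1] == p[: len(p) - 1]

print(satisfies_compatible("0.4.2", "0.4.0"))  # True
print(satisfies_compatible("0.5.0", "0.4.0"))  # False
```

This handles only plain dotted releases, not pre-releases or local version segments.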
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Update aiofiles requirement from ~=0.4.0 to >=0.4,<0.6 598891570
618758326 https://github.com/simonw/datasette/issues/731#issuecomment-618758326 https://api.github.com/repos/simonw/datasette/issues/731 MDEyOklzc3VlQ29tbWVudDYxODc1ODMyNg== eyeseast 25778 2020-04-24T01:55:00Z 2020-04-24T01:55:00Z NONE

Mounting ./static at /static seems the simplest way. Saves you the trouble of deciding what else (img for example) gets special treatment.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Option to automatically configure based on directory layout 605110015
618126449 https://github.com/simonw/datasette/issues/731#issuecomment-618126449 https://api.github.com/repos/simonw/datasette/issues/731 MDEyOklzc3VlQ29tbWVudDYxODEyNjQ0OQ== eyeseast 25778 2020-04-23T01:38:55Z 2020-04-23T01:38:55Z NONE

I've almost suggested this same thing a couple times. I tend to have a Makefile (because I'm doing other make stuff anyway to get data prepped), and I end up putting all those CLI options in something like make run. But it would be way easier to just have all those typical options -- plugins, templates, metadata -- be defaults.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Option to automatically configure based on directory layout 605110015
617208503 https://github.com/simonw/datasette/issues/176#issuecomment-617208503 https://api.github.com/repos/simonw/datasette/issues/176 MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw== nkirsch 12976 2020-04-21T14:16:24Z 2020-04-21T14:16:24Z NONE

@eads I'm interested in helping, if there's still a need...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add GraphQL endpoint 285168503
614440032 https://github.com/simonw/sqlite-utils/issues/76#issuecomment-614440032 https://api.github.com/repos/simonw/sqlite-utils/issues/76 MDEyOklzc3VlQ29tbWVudDYxNDQ0MDAzMg== metab0t 10501166 2020-04-16T06:23:29Z 2020-04-16T06:23:29Z NONE

Thanks for your hard work!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
order_by mechanism 549287310
614073859 https://github.com/simonw/sqlite-utils/issues/97#issuecomment-614073859 https://api.github.com/repos/simonw/sqlite-utils/issues/97 MDEyOklzc3VlQ29tbWVudDYxNDA3Mzg1OQ== betatim 1448859 2020-04-15T14:29:30Z 2020-04-15T14:29:30Z NONE

Woah! Thanks a lot. Next time I will add a more obvious/explicit "if you like this idea let me know I'd love to work on it to get my feet wet here" :D

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Adding a "recreate" flag to the `Database` constructor 593751293
609393513 https://github.com/simonw/datasette/pull/627#issuecomment-609393513 https://api.github.com/repos/simonw/datasette/issues/627 MDEyOklzc3VlQ29tbWVudDYwOTM5MzUxMw== stonebig 4312421 2020-04-05T10:23:57Z 2020-04-05T10:23:57Z NONE

Is there any specific reason to stick to Jinja2~=2.10.3 when there is Jinja2 2.11.1?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Support Python 3.8, stop supporting Python 3.5 521323012
605439685 https://github.com/dogsheep/github-to-sqlite/issues/15#issuecomment-605439685 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15 MDEyOklzc3VlQ29tbWVudDYwNTQzOTY4NQ== garethr 2029 2020-03-28T12:17:01Z 2020-03-28T12:17:01Z NONE

That looks great, thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Assets table with downloads 544571092
604249402 https://github.com/simonw/datasette/issues/712#issuecomment-604249402 https://api.github.com/repos/simonw/datasette/issues/712 MDEyOklzc3VlQ29tbWVudDYwNDI0OTQwMg== wragge 127565 2020-03-26T06:11:44Z 2020-03-26T06:11:44Z NONE

Following on from @betatim's suggestion on Twitter, I've changed the proxy url to include 'absolute'.

proxy_url = f'{base_url}proxy/absolute/8001/'

This works both on Binder and locally, without using the path_from_header option. I've updated the demo repository. Sorry @simonw if I've led you down the wrong path!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url doesn't entirely work for running Datasette inside Binder 588108428
604225034 https://github.com/simonw/datasette/issues/712#issuecomment-604225034 https://api.github.com/repos/simonw/datasette/issues/712 MDEyOklzc3VlQ29tbWVudDYwNDIyNTAzNA== wragge 127565 2020-03-26T04:40:08Z 2020-03-26T04:40:08Z NONE

Great! Yes, can confirm that this works on Binder. However, when I try to run the same code locally, I get an Internal Server Error when I try to access Datasette.

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette_debug_asgi.py", line 24, in wrapped_app
    await app(scope, recieve, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/utils/asgi.py", line 174, in __call__
    await self.app(scope, receive, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/tracer.py", line 75, in __call__
    await self.app(scope, receive, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/app.py", line 746, in __call__
    raw_path = dict(scope["headers"])[path_from_header.encode("utf8")].split(b"?")[0]
KeyError: b'x-original-uri'
INFO:     127.0.0.1:49320 - "GET / HTTP/1.1" 500 Internal Server Error
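The KeyError comes from an unconditional dict lookup of the x-original-uri header, which Binder's proxy sets but a plain local run does not. A defensive sketch (my variant, not Datasette's actual fix) would fall back to the request path when the header is absent:

```python
# ASGI scope without the x-original-uri header, as in the local run above.
scope = {"path": "/", "headers": [(b"host", b"localhost:8001")]}

# The traceback's lookup, dict(scope["headers"])[b"x-original-uri"],
# raises KeyError here. Using .get() lets us fall back gracefully.
header_value = dict(scope["headers"]).get(b"x-original-uri")
if header_value is not None:
    raw_path = header_value.split(b"?")[0]
else:
    raw_path = scope["path"].encode("utf8")
```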
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url doesn't entirely work for running Datasette inside Binder 588108428
604166918 https://github.com/simonw/datasette/issues/394#issuecomment-604166918 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwNDE2NjkxOA== wragge 127565 2020-03-26T00:56:30Z 2020-03-26T00:56:30Z NONE

Thanks! I'm trying to launch Datasette from within a notebook using the jupyter-server-proxy and the new base_url parameter. While the assets load ok, and the breadcrumb navigation works, the facet links don't seem to use the base_url. Or have I missed something?

My test repository is here: https://github.com/wragge/datasette-test

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
603849245 https://github.com/simonw/datasette/issues/394#issuecomment-603849245 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwMzg0OTI0NQ== terrycojones 132978 2020-03-25T13:48:13Z 2020-03-25T13:48:13Z NONE

Great - thanks again.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
603539349 https://github.com/simonw/datasette/issues/394#issuecomment-603539349 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwMzUzOTM0OQ== terrycojones 132978 2020-03-24T22:33:23Z 2020-03-24T22:33:23Z NONE

Hi Simon - I'm just (trying, at least) to follow along in the above. I can't try it out now, but I will if no one else gets to it. Sorry I didn't write any tests in the original bit of code I pushed - I was just trying to see if it could work & whether you'd want to maybe head in that direction. Anyway, thank you, I will certainly use this. Comment back here if no one tried it out & I'll make time.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
602916580 https://github.com/simonw/datasette/issues/394#issuecomment-602916580 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwMjkxNjU4MA== terrycojones 132978 2020-03-23T23:37:06Z 2020-03-23T23:37:06Z NONE

@simonw You're welcome - I was just trying it out back in December as I thought it should work. Now there's a pandemic to work on though.... so no time at all for more at the moment. BTW, I have datasette running on several protein and full (virus) genome databases I build, and it's great - thank you! Hi and best regards to you & Nat :-)

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 1,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
602911133 https://github.com/simonw/datasette/issues/394#issuecomment-602911133 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwMjkxMTEzMw== terrycojones 132978 2020-03-23T23:22:10Z 2020-03-23T23:22:10Z NONE

I just updated #652 to remove a merge conflict. I think it's an easy way to add this functionality. I don't have time to do more though, sorry!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
602907207 https://github.com/simonw/datasette/issues/394#issuecomment-602907207 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw== wragge 127565 2020-03-23T23:12:18Z 2020-03-23T23:12:18Z NONE

This would also be useful for running Datasette in Jupyter notebooks on Binder. While you can use Jupyter-server-proxy to access Datasette on Binder, the links are broken.

Why run Datasette on Binder? I'm developing a range of Jupyter notebooks that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic.

For example, there are a series of notebooks that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data.

But it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
602904184 https://github.com/simonw/datasette/issues/394#issuecomment-602904184 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwMjkwNDE4NA== betatim 1448859 2020-03-23T23:03:42Z 2020-03-23T23:03:42Z NONE

On mybinder.org we allow access to arbitrary processes listening on a port inside the container via a reverse proxy.

This means we need support for a proxy prefix as the proxy ends up running at a URL like /something/random/proxy/datasette/...

An example that shows the problem is https://github.com/psychemedia/jupyterserverproxy-datasette-demo. Launch directly into a datasette instance on mybinder.org with https://mybinder.org/v2/gh/psychemedia/jupyterserverproxy-datasette-demo/master?urlpath=datasette then try to follow links inside the UI.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021
602136481 https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-602136481 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16 MDEyOklzc3VlQ29tbWVudDYwMjEzNjQ4MQ== jayvdb 15092 2020-03-22T02:08:57Z 2020-03-22T02:08:57Z NONE

I'd love to be using your library as a better cached gh layer for a new library I have built, replacing large parts of the very ugly https://github.com/jayvdb/pypidb/blob/master/pypidb/_github.py , and then probably being able to rebuild the setuppy chunk as a feature here at a later stage.

I would also need tokenless and netrc support, but I would be happy to add those bits.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exception running first command: IndexError: list index out of range 546051181
593122605 https://github.com/simonw/sqlite-utils/issues/89#issuecomment-593122605 https://api.github.com/repos/simonw/sqlite-utils/issues/89 MDEyOklzc3VlQ29tbWVudDU5MzEyMjYwNQ== chrishas35 35075 2020-03-01T17:33:11Z 2020-03-01T17:33:11Z NONE

If you're happy with the proposed implementation, I have code & tests written that I'll get ready for a PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Ability to customize columns used by extracts= feature 573578548
593026413 https://github.com/simonw/datasette/issues/573#issuecomment-593026413 https://api.github.com/repos/simonw/datasette/issues/573 MDEyOklzc3VlQ29tbWVudDU5MzAyNjQxMw== wragge 127565 2020-03-01T01:24:45Z 2020-03-01T01:24:45Z NONE

Did you manage to find an answer to this? I've got a notebook that helps people generate datasets on the fly from an API, so it would be cool if they could flick it over to Datasette for initial exploration.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exposing Datasette via Jupyter-server-proxy 492153532
592999503 https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503 https://api.github.com/repos/simonw/sqlite-utils/issues/46 MDEyOklzc3VlQ29tbWVudDU5Mjk5OTUwMw== chrishas35 35075 2020-02-29T22:08:20Z 2020-02-29T22:08:20Z NONE

@simonw any thoughts on allow extracts to specify the lookup column name? If I'm understanding the documentation right, .lookup() allows you to define the "value" column (the documentation uses name), but when you use extracts keyword as part of .insert(), .upsert() etc. the lookup must be done against a column named "value". I have an existing lookup table that I've populated with columns "id" and "name" as opposed to "id" and "value", and seems I can't use extracts=, unless I'm missing something...

Initial thought on how to do this would be to allow the dictionary value to be a (table name, column name) tuple... so:

table = db.table("trees", extracts={"species_id": ("Species", "name")})

I haven't dug too much into the existing code yet, but does this make sense? Worth doing?
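A sketch of how the proposed syntax might be normalized internally (the function and its behaviour are hypothetical, not existing sqlite-utils API):

```python
# Hypothetical: accept either "TableName" (lookup column defaults to
# "value", as extracts= works today) or a ("TableName", "column") tuple
# naming the lookup column explicitly, as proposed above.
def resolve_extract(spec):
    if isinstance(spec, tuple):
        table, column = spec
    else:
        table, column = spec, "value"
    return table, column

resolve_extract("Species")            # ("Species", "value")
resolve_extract(("Species", "name"))  # ("Species", "name")
```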

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
extracts= option for insert/update/etc 471780443
590593247 https://github.com/simonw/datasette/issues/675#issuecomment-590593247 https://api.github.com/repos/simonw/datasette/issues/675 MDEyOklzc3VlQ29tbWVudDU5MDU5MzI0Nw== aviflax 141844 2020-02-24T23:02:52Z 2020-02-24T23:02:52Z NONE

Design looks great to me.

Excellent, thanks!

I'm not keen on two letter short versions (-cp) - I'd rather either have a single character or no short form at all.

Hmm, well, anyone running datasette package is probably at least somewhat familiar with UNIX CLIs… so how about --cp as a middle ground?

$ datasette package --cp /the/source/path /the/target/path data.db

I think I like it. Easy to remember!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature request: new option for `package` to specify additional arbitrary files to be copied into the built image 567902704
590543398 https://github.com/simonw/datasette/issues/681#issuecomment-590543398 https://api.github.com/repos/simonw/datasette/issues/681 MDEyOklzc3VlQ29tbWVudDU5MDU0MzM5OA== clausjuhl 2181410 2020-02-24T20:53:56Z 2020-02-24T20:53:56Z NONE

Excellent. I'll implement the simple plugin-solution now. And will have a go at a more mature plugin later. Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Cashe-header missing in http-response 569317377
590405736 https://github.com/simonw/datasette/issues/675#issuecomment-590405736 https://api.github.com/repos/simonw/datasette/issues/675 MDEyOklzc3VlQ29tbWVudDU5MDQwNTczNg== aviflax 141844 2020-02-24T16:06:27Z 2020-02-24T16:06:27Z NONE

So yeah - if you're happy to design this I think it would be worth us adding.

Great! I’ll give it a go.

Small design suggestion: allow --copy to be applied multiple times…

Makes a ton of sense, will do.

Also since Click arguments can take multiple options I don't think you need to have the : in there - although if it better matches Docker's own UI it might be more consistent to have it.

Great point. I double checked the docs for docker cp and in that context the colon is used to delimit a container and a path, while spaces are used to separate the source and target.

The usage string is:

docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH

so in fact it’ll be more consistent to use a space to delimit the source and destination paths, like so:

$ datasette package --copy /the/source/path /the/target/path data.db

and I suppose the short-form version of the option should be cp like so:

$ datasette package -cp /the/source/path /the/target/path data.db
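Datasette's CLI is actually built on Click, but the shape of the proposed option (two values per occurrence, repeatable) can be sketched with stdlib argparse; the program and option names here are illustrative:

```python
import argparse

# --copy SRC DEST, repeatable: nargs=2 consumes two values per occurrence,
# and action="append" collects each (source, dest) pair into a list.
parser = argparse.ArgumentParser(prog="datasette-package-sketch")
parser.add_argument("--copy", nargs=2, action="append",
                    metavar=("SRC", "DEST"), default=[])
args = parser.parse_args(
    ["--copy", "/the/source/path", "/the/target/path",
     "--copy", "/etc/extra.conf", "/app/extra.conf"]
)
# args.copy == [["/the/source/path", "/the/target/path"],
#               ["/etc/extra.conf", "/app/extra.conf"]]
```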
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Feature request: new option for `package` to specify additional arbitrary files to be copied into the built image 567902704
590209074 https://github.com/simonw/datasette/issues/676#issuecomment-590209074 https://api.github.com/repos/simonw/datasette/issues/676 MDEyOklzc3VlQ29tbWVudDU5MDIwOTA3NA== tunguyenatwork 58088336 2020-02-24T08:20:15Z 2020-02-24T08:20:15Z NONE

Awesome, thank you so much. I’ll try it out and let you know.

On Sun, Feb 23, 2020 at 1:44 PM Simon Willison notifications@github.com
wrote:

You can try this right now like so:

pip install https://github.com/simonw/datasette/archive/search-raw.zip

Then use the following:

?_search=foo*&_searchmode=raw



{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
?_searchmode=raw option for running FTS searches without escaping characters 568091133
589922016 https://github.com/simonw/datasette/issues/676#issuecomment-589922016 https://api.github.com/repos/simonw/datasette/issues/676 MDEyOklzc3VlQ29tbWVudDU4OTkyMjAxNg== tunguyenatwork 58088336 2020-02-22T05:50:10Z 2020-02-22T05:50:10Z NONE

Thanks Simon,
My use case is using Datasette for full text search type ahead. That was working pretty well. The _search_wildcard= option will be awesome. Thanks

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
?_searchmode=raw option for running FTS searches without escaping characters 568091133
586683572 https://github.com/simonw/sqlite-utils/issues/86#issuecomment-586683572 https://api.github.com/repos/simonw/sqlite-utils/issues/86 MDEyOklzc3VlQ29tbWVudDU4NjY4MzU3Mg== foscoj 8149512 2020-02-16T09:03:54Z 2020-02-16T09:03:54Z NONE

Probably the best option is to just throw the error.
Is there an active dev channel where we could report the issue to the Python sqlite3 developers?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Problem with square bracket in CSV column name 564579430
585285753 https://github.com/simonw/datasette/issues/667#issuecomment-585285753 https://api.github.com/repos/simonw/datasette/issues/667 MDEyOklzc3VlQ29tbWVudDU4NTI4NTc1Mw== xrotwang 870184 2020-02-12T16:18:22Z 2020-02-12T16:18:22Z NONE

@simonw fwiw, here's the plugin I implemented to support CLDF datasets: https://github.com/cldf/datasette-cldf/blob/master/README.md
It's a bit of a hybrid in that it does both, building the SQLite database and extending datasette by exploiting what we know about the data format - so it may not be worth listing it with the other plugins.

Having tools like datasette available definitely helps selling people on package formats like CLDF (or CSVW), many thanks for this!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow injecting configuration data from plugins 562787785
585109972 https://github.com/simonw/datasette/issues/667#issuecomment-585109972 https://api.github.com/repos/simonw/datasette/issues/667 MDEyOklzc3VlQ29tbWVudDU4NTEwOTk3Mg== xrotwang 870184 2020-02-12T09:21:22Z 2020-02-12T09:21:22Z NONE

I think I found a better way to implement my use case: I wrap the datasette serve call into my own cli, which
- creates the SQLite from CSV data
- writes metadata.json for datasette
- determines suitable config like max_page_size
- then calls datasette serve.
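The wrapper described above might look roughly like this; the file names, metadata, and config values are illustrative stand-ins, not taken from the actual project:

```python
import json

# Sketch of steps 2-4 of the wrapper: produce the metadata.json contents,
# pick a config value, and build the `datasette serve` invocation to hand
# off to (step 1, building the SQLite file from CSV, is assumed done).
def build_serve_command(db_path, metadata, max_page_size=1000):
    metadata_json = json.dumps(metadata)  # to be written to metadata.json
    command = [
        "datasette", "serve", db_path,
        "--metadata", "metadata.json",
        "--config", f"max_page_size:{max_page_size}",
    ]
    return metadata_json, command

meta, cmd = build_serve_command("data.db", {"title": "My CSV data"})
```

In the real wrapper the command list would be passed to subprocess.run() after writing the metadata file.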

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow injecting configuration data from plugins 562787785
584657949 https://github.com/simonw/datasette/issues/327#issuecomment-584657949 https://api.github.com/repos/simonw/datasette/issues/327 MDEyOklzc3VlQ29tbWVudDU4NDY1Nzk0OQ== dazzag24 1055831 2020-02-11T14:21:15Z 2020-02-11T14:21:15Z NONE

See https://github.com/simonw/datasette/issues/657 and my changes that allow datasette to load parquet files

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Explore if SquashFS can be used to shrink size of packaged Docker containers 335200136
584203999 https://github.com/simonw/datasette/issues/352#issuecomment-584203999 https://api.github.com/repos/simonw/datasette/issues/352 MDEyOklzc3VlQ29tbWVudDU4NDIwMzk5OQ== xrotwang 870184 2020-02-10T16:18:58Z 2020-02-10T16:18:58Z NONE

I don't want to re-open this issue, but I'm wondering whether it would be possible to include the full row for which a specific cell is to be rendered in the hook signature. My use case are rows where custom rendering would need access to multiple values (specifically, rows containing the constituents of interlinear glossed text (IGT) in separate columns, see https://github.com/cldf/cldf/tree/master/components/examples).

I could probably cobble this together with custom SQL and the sql-to-html plugin. But having a full row within a render_cell implementation seems a lot simpler.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
render_cell(value) plugin hook 345821500
583177728 https://github.com/simonw/datasette/issues/658#issuecomment-583177728 https://api.github.com/repos/simonw/datasette/issues/658 MDEyOklzc3VlQ29tbWVudDU4MzE3NzcyOA== null92 49656826 2020-02-07T00:28:55Z 2020-02-07T00:29:50Z NONE

Simon,

Yes, there is an "app.css" in the static folder; however, any modification I make to this .css doesn't apply to the Datasette instance.

I'm using this command: datasette publish heroku "databases folder" -n "herokuapp name" --extra-options="--config sql_time_limit_ms:60000 --config max_returned_rows:10000 --config force_https_urls:1" --template-dir "templates folder" -m "metadata.json folder"

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
How do I use the app.css as style sheet? 550293770
580745213 https://github.com/simonw/sqlite-utils/issues/73#issuecomment-580745213 https://api.github.com/repos/simonw/sqlite-utils/issues/73 MDEyOklzc3VlQ29tbWVudDU4MDc0NTIxMw== psychemedia 82988 2020-01-31T14:02:38Z 2020-01-31T14:21:09Z NONE

So the conundrum continues... The simple test case above now runs, but if I upsert a large number of new records (successfully) and then try to upsert a smaller number of new records to a different table, I get the same error.

If I run the same upserts again (which in the first case means there are no new records to add, because they were already added), the second upsert works correctly.

It feels as if I get the error when the number of items added via an upsert is much greater (>>) than the number of items I try to add in the upsert immediately after.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
upsert_all() throws issue when upserting to empty table 545407916
580075725 https://github.com/simonw/datasette/issues/661#issuecomment-580075725 https://api.github.com/repos/simonw/datasette/issues/661 MDEyOklzc3VlQ29tbWVudDU4MDA3NTcyNQ== dvhthomas 134771 2020-01-30T04:17:51Z 2020-01-30T04:17:51Z NONE

Thanks for the elegant solution to the problem as stated. I'm packaging right now :-)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
--port option to expose a port other than 8001 in "datasette package" 555832585
579864036 https://github.com/simonw/datasette/issues/662#issuecomment-579864036 https://api.github.com/repos/simonw/datasette/issues/662 MDEyOklzc3VlQ29tbWVudDU3OTg2NDAzNg== clausjuhl 2181410 2020-01-29T17:17:01Z 2020-01-29T17:17:01Z NONE

This is excellent news. I'll wait until version 0.34. It would be tiresome to rewrite all standard-queries into custom queries. Thank you!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Escape_fts5_query-hookimplementation does not work with queries to standard tables 556814876
579798917 https://github.com/simonw/datasette/issues/662#issuecomment-579798917 https://api.github.com/repos/simonw/datasette/issues/662 MDEyOklzc3VlQ29tbWVudDU3OTc5ODkxNw== clausjuhl 2181410 2020-01-29T15:08:57Z 2020-01-29T15:08:57Z NONE

Hi Simon

Thank you for the quick reply. Here are a few examples of URLs where I search the 'cases_fts' virtual table for tokens in the title column. It returns the same results whether the other query params are present or not.

Searching for sky
http://localhost:8001/db-7596a4e/cases?_search_title=sky&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date
Returns search results

Searching for sky
http://localhost:8001/db-7596a4e/cases?_search_title=sky&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date
Returns search results

Searching for sky-tog
http://localhost:8001/db-7596a4e/cases?_search_title=sky-tog&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date
Throws: No such column: tog

Searching for sky+
http://localhost:8001/db-7596a4e/cases?_search_title=sky%2B&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date
Throws: Invalid SQL: fts5: syntax error near ""

Searching for "madpakke" (including double quotes)
http://localhost:8001/db-7596a4e/cases?_search_title=%22madpakke%22&year__gte=1997&year__lte=2017&_sort_desc=last_deliberation_date
Returns search results even though 'madpakke' only appears in the full-text index without quotes

As I said, my other plugins work just fine, and I just copied your sql_functions.py from the datasette-repo.

Thanks!
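For reference, the errors listed above can be reproduced directly with the stdlib sqlite3 module, assuming FTS5 is compiled in (it is in most modern CPython builds); the table and data here are simplified stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE cases_fts USING fts5(title)")
conn.execute("INSERT INTO cases_fts (title) VALUES ('sky madpakke')")

def search(query):
    # Return matching rows, or the FTS5 error message as a string.
    try:
        return conn.execute(
            "SELECT title FROM cases_fts WHERE cases_fts MATCH ?", [query]
        ).fetchall()
    except sqlite3.OperationalError as exc:
        return str(exc)

search("sky")        # matches the row
search("sky-tog")    # error string: "-" starts FTS5 column-filter syntax
search('"sky-tog"')  # double-quoted phrase: parses fine, simply no match
```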

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Escape_fts5_query-hookimplementation does not work with queries to standard tables 556814876
579675357 https://github.com/simonw/datasette/issues/651#issuecomment-579675357 https://api.github.com/repos/simonw/datasette/issues/651 MDEyOklzc3VlQ29tbWVudDU3OTY3NTM1Nw== clausjuhl 2181410 2020-01-29T09:45:00Z 2020-01-29T09:45:00Z NONE

Hi Simon

Thank you for adding the escape_function, but it does not work on my datasette-installation (0.33). I've added the following file to my datasette-dir: /plugins/sql_functions.py:

from datasette import hookimpl

def escape_fts_query(query):
    bits = query.split()
    return ' '.join('"{}"'.format(bit.replace('"', '')) for bit in bits)

@hookimpl
def prepare_connection(conn):
    conn.create_function("escape_fts_query", 1, escape_fts_query)

It has no effect on the standard queries to the tables though, as they still produce errors when including any characters like '-', '/', '+' or '?'

Does the function only work when using custom queries, where I can include the escape_fts function explicitly in the SQL query?

PS. I'm calling datasette with --plugins=plugins, and my other plugins work just fine.
PPS. The fts5 virtual table is created with 'sqlite3' like so:

CREATE VIRTUAL TABLE "cases_fts" USING FTS5( title, subtitle, resume, suggestion, presentation, detail = full, content_rowid = 'id', content = 'cases', tokenize='unicode61', 'remove_diacritics 2', 'tokenchars "-_"' );

Thanks!
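For reference, the quoting behaviour of the escape_fts_query plugin function quoted above can be checked on its own, without Datasette:

```python
def escape_fts_query(query):
    # Same logic as the plugin: strip embedded double quotes, then wrap
    # every whitespace-separated token in double quotes so FTS5 treats
    # punctuation like "-" as literal text rather than query syntax.
    bits = query.split()
    return ' '.join('"{}"'.format(bit.replace('"', '')) for bit in bits)

escape_fts_query("sky-tog")         # '"sky-tog"'
escape_fts_query('sky "madpakke"')  # '"sky" "madpakke"'
```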

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
fts5 syntax error when using punctuation 539590148
576759416 https://github.com/simonw/datasette/issues/657#issuecomment-576759416 https://api.github.com/repos/simonw/datasette/issues/657 MDEyOklzc3VlQ29tbWVudDU3Njc1OTQxNg== dazzag24 1055831 2020-01-21T16:20:19Z 2020-01-21T16:20:19Z NONE

Hi,

I've completed some changes to my fork of datasette that allows it to automatically create the parquet virtual table when you supply it with a filename that has the ".parquet" extension.

I had to figure out how to make the "CREATE VIRTUAL TABLE" statement only be applied to the fake in-memory parquet database and not to any others that were also being loaded. Thus it supports mixed-mode databases, e.g.

datasette my_test.parquet normal_sqlite_file.db --load-extension=libparquet.so --load-extension=mod_spatialite.so

Please see my changes here:
https://github.com/dazzag24/datasette/commit/8e18394353114f17291fd1857073b1e0485a1faf

Thanks

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow creation of virtual tables at startup 548591089
576293773 https://github.com/simonw/datasette/issues/656#issuecomment-576293773 https://api.github.com/repos/simonw/datasette/issues/656 MDEyOklzc3VlQ29tbWVudDU3NjI5Mzc3Mw== JBPressac 6371750 2020-01-20T14:17:11Z 2020-01-20T14:17:11Z NONE

It seems that headers and definitions simply have to be entered as an HTML table in the description field of metadata.json.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Display of the column definitions 546961357
575799104 https://github.com/simonw/sqlite-utils/issues/70#issuecomment-575799104 https://api.github.com/repos/simonw/sqlite-utils/issues/70 MDEyOklzc3VlQ29tbWVudDU3NTc5OTEwNA== LucasElArruda 26292069 2020-01-17T21:20:17Z 2020-01-17T21:20:17Z NONE

Omg sorry I took so long to reply!

In SQL we can specify how a foreign key behaves when the referenced row is deleted or updated in the parent table (see https://www.sqlitetutorial.net/sqlite-foreign-key/ for more details).

I could not see clearly how to create tables with this feature using the sqlite-utils library.
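For context, this is the SQLite feature being requested, shown with the stdlib sqlite3 module rather than sqlite-utils:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is off by default
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (id, parent_id) VALUES (10, 1)")
conn.execute("DELETE FROM parent WHERE id = 1")

# The child row is deleted automatically by the CASCADE action.
remaining = conn.execute("SELECT count(*) FROM child").fetchone()[0]
```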

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Implement ON DELETE and ON UPDATE actions for foreign keys 539204432
575321322 https://github.com/simonw/datasette/issues/657#issuecomment-575321322 https://api.github.com/repos/simonw/datasette/issues/657 MDEyOklzc3VlQ29tbWVudDU3NTMyMTMyMg== dazzag24 1055831 2020-01-16T20:01:43Z 2020-01-16T20:01:43Z NONE

I have successfully tested datasette using a parquet VIRTUAL TABLE. In the first terminal:

datasette airports.db --load-extension=libparquet

In another terminal I load the same sqlite db file using the sqlite3 cli client.

$ sqlite3 airports.db

and then load the parquet extension and create the virtual table.

sqlite> .load /home/darreng/metars/libparquet
sqlite> CREATE VIRTUAL TABLE mytable USING parquet('/home/xx/data.parquet');

Now the parquet virtual table is usable by the datasette web UI.

It's not an ideal solution, but it is proof that datasette works with the parquet extension.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Allow creation of virtual tables at startup 548591089
573047321 https://github.com/simonw/sqlite-utils/issues/73#issuecomment-573047321 https://api.github.com/repos/simonw/sqlite-utils/issues/73 MDEyOklzc3VlQ29tbWVudDU3MzA0NzMyMQ== psychemedia 82988 2020-01-10T14:02:56Z 2020-01-10T14:09:23Z NONE

Hmmm... just tried with installs from pip and the repo (v2.0.0 and v2.0.1) and I get the error each time (start of second run through the second loop).

Could it be sqlite3? I'm on 3.30.1.

UPDATE: just tried it on jupyter.org/try and I get the error there, too.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
upsert_all() throws issue when upserting to empty table 545407916
571412923 https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-571412923 https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16 MDEyOklzc3VlQ29tbWVudDU3MTQxMjkyMw== jayvdb 15092 2020-01-07T03:06:46Z 2020-01-07T03:06:46Z NONE

I re-tried after doing auth, and I get the same result.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exception running first command: IndexError: list index out of range 546051181
571138093 https://github.com/simonw/sqlite-utils/issues/73#issuecomment-571138093 https://api.github.com/repos/simonw/sqlite-utils/issues/73 MDEyOklzc3VlQ29tbWVudDU3MTEzODA5Mw== psychemedia 82988 2020-01-06T13:28:31Z 2020-01-06T13:28:31Z NONE

I think I actually had several issues in play...

The missing key was one, but I think there is also an issue as per below.

For example, in the following:

import os
import sqlite3
from sqlite_utils import Database

def init_testdb(dbname='test.db'):

    if os.path.exists(dbname):
        os.remove(dbname)

    conn = sqlite3.connect(dbname)
    db = Database(conn)

    return conn, db

conn, db = init_testdb()

c = conn.cursor()
c.executescript('CREATE TABLE "test1" ("Col1" TEXT, "Col2" TEXT, PRIMARY KEY ("Col1"));')
c.executescript('CREATE TABLE "test2" ("Col1" TEXT, "Col2" TEXT, PRIMARY KEY ("Col1"));')

print('Test 1...')
for i in range(3):
    db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))
    db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))

print('Test 2...')
for i in range(3):
    db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))
    db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},
                            {'Col1':'c','Col2':'x'}], pk=('Col1'))
print('Done...')

---------------------------------------------------------------------------
Test 1...
Test 2...
 IndexError: list index out of range 
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-763-444132ca189f> in <module>
     22 print('Test 2...')
     23 for i in range(3):
---> 24     db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))
     25     db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},
     26                             {'Col1':'c','Col2':'x'}], pk=('Col1'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)
   1157             alter=alter,
   1158             extracts=extracts,
-> 1159             upsert=True,
   1160         )
   1161 

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)
   1097                 # self.last_rowid will be 0 if a "INSERT OR IGNORE" happened
   1098                 if (hash_id or pk) and self.last_rowid:
-> 1099                     row = list(self.rows_where("rowid = ?", [self.last_rowid]))[0]
   1100                     if hash_id:
   1101                         self.last_pk = row[hash_id]

IndexError: list index out of range

The first test works but the second fails. Is the length of the list of items being upserted leaking somewhere?
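The failing line in the traceback indexes `[0]` into the result of `rows_where("rowid = ?", [self.last_rowid])`, so the crash means that lookup returned no rows: `last_rowid` pointed at a rowid that does not exist in the table being upserted. A minimal sketch of a defensive lookup (the `row_for_rowid` helper is hypothetical, not part of sqlite-utils):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "test1" ("Col1" TEXT PRIMARY KEY, "Col2" TEXT)')

def row_for_rowid(conn, table, rowid):
    # Same lookup the traceback performs, but returning None for a
    # stale rowid instead of raising IndexError on rows[0]
    rows = conn.execute(
        'SELECT "Col1", "Col2" FROM "{}" WHERE rowid = ?'.format(table),
        (rowid,),
    ).fetchall()
    return rows[0] if rows else None

# A rowid left over from a larger batch on another table finds nothing here
assert row_for_rowid(conn, "test1", 3) is None

conn.execute('INSERT INTO "test1" VALUES (?, ?)', ("a", "x"))
assert row_for_rowid(conn, "test1", 1) == ("a", "x")
```

This only avoids the crash; whether `last_rowid` should have been reset between the two `upsert_all()` calls is the actual question the report raises.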

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
upsert_all() throws issue when upserting to empty table 545407916
567226048 https://github.com/simonw/datasette/issues/596#issuecomment-567226048 https://api.github.com/repos/simonw/datasette/issues/596 MDEyOklzc3VlQ29tbWVudDU2NzIyNjA0OA== terrycojones 132978 2019-12-18T21:43:13Z 2019-12-18T21:43:13Z NONE

Meant to add that of course it would be better not to reinvent CSS (one time was already enough). But one option would be to provide a mechanism to specify a CSS class for a column (a cell, a row...) and let the user give a URL path to a CSS file on the command line.
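For the "CSS file on the command line" half of this, Datasette's metadata file already accepts an `extra_css_urls` list that injects stylesheets into every page; the URL below is a placeholder for wherever the user hosts the file:

```json
{
    "extra_css_urls": [
        "https://example.com/custom-columns.css"
    ]
}
```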

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Handle really wide tables better 507454958
567225156 https://github.com/simonw/datasette/issues/596#issuecomment-567225156 https://api.github.com/repos/simonw/datasette/issues/596 MDEyOklzc3VlQ29tbWVudDU2NzIyNTE1Ng== terrycojones 132978 2019-12-18T21:40:35Z 2019-12-18T21:40:35Z NONE

I initially went looking for a way to hide a column completely. Today I found the setting to truncate cells, but it applies to all cells. In my case I have text columns that can have many thousands of characters. I was wondering whether the metadata JSON would be an appropriate place to indicate how columns are displayed (on a col-by-col basis). E.g., I'd like to be able to specify that only 20 chars of a given column be shown, and the font be monospace. But maybe I can do that in some other way - I barely know anything about datasette yet, sorry!
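As a sketch of how this could work today: Datasette's table template attaches a `col-<name>` class to each cell, so per-column rules like "20 characters, monospace" can be written in plain CSS. Here `body` stands in for whichever column is too wide (an assumption for illustration):

```css
/* assumes Datasette's td.col-<column-name> cell classes */
td.col-body {
    font-family: monospace;
    max-width: 20ch;          /* roughly "only 20 chars shown" */
    overflow: hidden;
    text-overflow: ellipsis;
    white-space: nowrap;
}
```

The global `truncate_cells_html` setting mentioned above is the other lever, but it applies to every column at once, which is exactly the limitation being described.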

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Handle really wide tables better 507454958
567219479 https://github.com/simonw/datasette/issues/394#issuecomment-567219479 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDU2NzIxOTQ3OQ== terrycojones 132978 2019-12-18T21:24:23Z 2019-12-18T21:24:23Z NONE

@simonw What about allowing a base URL? The `<base>` tag has been around forever. Then just use all relative URLs, which I guess is likely what you already do. See https://www.w3schools.com/TAGs/tag_base.asp
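Whether done with a `<base>` element or a server-side setting, the mechanics are ordinary relative-URL resolution, which the standard library can illustrate (the prefix here is made up):

```python
from urllib.parse import urljoin

# With a trailing-slash base, every relative link lands under the prefix,
# which is what serving Datasette behind a proxy path needs
base = "https://example.com/datasette-prefix/"
assert urljoin(base, "db/table") == "https://example.com/datasette-prefix/db/table"
assert urljoin(base, "-/metadata") == "https://example.com/datasette-prefix/-/metadata"
```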

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
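The schema above can be exercised directly with the stdlib `sqlite3` module; the referenced `[users]` and `[issues]` tables are stubbed here since their full definitions are not shown on this page:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# The indexes on [issue] and [user] are what keep the filtered,
# sorted-by-updated_at queries behind this page cheap
indexes = {row[1] for row in conn.execute("PRAGMA index_list(issue_comments)")}
assert "idx_issue_comments_issue" in indexes
assert "idx_issue_comments_user" in indexes
```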
Powered by Datasette · Query took 194.621ms · About: github-to-sqlite