

issue_comments


2 rows where issue = 396212021 and user = 58298410 sorted by updated_at descending


This data as json, CSV (advanced)

Suggested facets: created_at (date), updated_at (date)

user 1

  • LVerneyPEReN · 2

issue 1

  • base_url configuration setting · 2

author_association 1

  • NONE 2
id: 642522285
html_url: https://github.com/simonw/datasette/issues/394#issuecomment-642522285
issue_url: https://api.github.com/repos/simonw/datasette/issues/394
node_id: MDEyOklzc3VlQ29tbWVudDY0MjUyMjI4NQ==
user: LVerneyPEReN (58298410)
created_at: 2020-06-11T09:15:19Z
updated_at: 2020-06-11T09:15:19Z
author_association: NONE
body:

Hi @wragge,

This looks great, thanks for sharing! I refactored it into a self-contained function that binds to a random available TCP port (multi-user context). I am using the subprocess API directly since the %run magic was leaving defunct processes behind :/

```python
import socket

from signal import SIGINT
from subprocess import Popen, PIPE

from IPython.display import display, HTML
from notebook.notebookapp import list_running_servers


def get_free_tcp_port():
    """
    Get a free TCP port.
    """
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.bind(('', 0))
    _, port = tcp.getsockname()
    tcp.close()
    return port


def datasette(database):
    """
    Run datasette on an SQLite database.
    """
    # Get current running servers
    servers = list_running_servers()

    # Get the current base url
    base_url = next(servers)['base_url']

    # Get a free port
    port = get_free_tcp_port()

    # Create a base url for Datasette using the proxy path
    proxy_url = f'{base_url}proxy/absolute/{port}/'

    # Display a link to Datasette
    display(HTML(
        f'<p><a href="{proxy_url}">View Datasette</a> '
        '(Click on the stop button to close the Datasette server)</p>'
    ))

    # Launch Datasette
    with Popen(
        [
            'python', '-m', 'datasette', '--',
            database,
            '--port', str(port),
            '--config', f'base_url:{proxy_url}'
        ],
        stdout=PIPE,
        stderr=PIPE,
        bufsize=1,
        universal_newlines=True
    ) as p:
        print(p.stdout.readline(), end='')
        while True:
            try:
                line = p.stderr.readline()
                if not line:
                    break
                print(line, end='')
                exit_code = p.poll()
            except KeyboardInterrupt:
                p.send_signal(SIGINT)
```

Ideally, I'd like some extra magic to notify users when they are closing the notebook tab and make them terminate the running Datasette processes. I'll keep looking into it.
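
A partial workaround for that cleanup problem, as a minimal sketch (the `register_datasette` helper and the module-level registry are assumptions, not part of the snippet above): keep the launched `Popen` handles around and terminate any survivors from an `atexit` hook when the kernel shuts down. This does not detect the browser tab closing, but it avoids leaving orphaned Datasette servers behind.

```python
import atexit
from subprocess import Popen, TimeoutExpired

# Hypothetical registry of launched Datasette processes (not part of the
# original snippet): remember each Popen handle so it can be cleaned up later.
_running_datasettes = []


def register_datasette(process: Popen) -> None:
    """Remember a launched Datasette process for later cleanup."""
    _running_datasettes.append(process)


def _terminate_datasettes() -> None:
    """Terminate any registered Datasette processes that are still running."""
    for process in _running_datasettes:
        if process.poll() is None:  # None means the process is still running
            process.terminate()
            try:
                process.wait(timeout=5)
            except TimeoutExpired:
                process.kill()


# atexit fires when the Python/IPython kernel shuts down, not when the browser
# tab is closed, so this only covers part of the use case described above.
atexit.register(_terminate_datasettes)
```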

reactions:
{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
issue: base_url configuration setting (396212021)
id: 641889565
html_url: https://github.com/simonw/datasette/issues/394#issuecomment-641889565
issue_url: https://api.github.com/repos/simonw/datasette/issues/394
node_id: MDEyOklzc3VlQ29tbWVudDY0MTg4OTU2NQ==
user: LVerneyPEReN (58298410)
created_at: 2020-06-10T09:49:34Z
updated_at: 2020-06-10T09:49:34Z
author_association: NONE
body:

Hi,

I came across this issue while looking for a way to spawn Datasette as a SQLite files viewer in JupyterLab. I found https://github.com/simonw/jupyterserverproxy-datasette-demo, which seems to be the most up-to-date proof of concept, but it seems to fail to list the available databases (at least in the Binder demo, https://hub.gke.mybinder.org/user/simonw-jupyters--datasette-demo-uw4dmlnn/datasette/, I only have :memory:).

Has anyone tried to improve on this proof of concept to have a Datasette visualization for SQLite files?

Thanks!

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: base_url configuration setting (396212021)

Advanced export

JSON shape: default, array, newline-delimited, object


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
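
For reference, the same slice of this table can be pulled straight from the underlying SQLite file with Python's sqlite3 module; a minimal sketch, assuming the database is a `github.db` file produced by github-to-sqlite:

```python
import sqlite3

# Assumed filename; point this at whatever database github-to-sqlite produced.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Equivalent of the view on this page: rows where issue = 396212021
# and user = 58298410, sorted by updated_at descending.
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (396212021, 58298410),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:60])

conn.close()
```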
Powered by Datasette · Queries took 22.527ms · About: github-to-sqlite