issue_comments


12 rows where user = 127565 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions issue performed_via_github_app
1111752676 https://github.com/simonw/datasette/issues/1728#issuecomment-1111752676 https://api.github.com/repos/simonw/datasette/issues/1728 IC_kwDOBm6k_c5CQ__k wragge 127565 2022-04-28T05:11:54Z 2022-04-28T05:11:54Z CONTRIBUTOR

And in terms of the bug, yep I agree that option 2 would be the most useful and least frustrating.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Writable canned queries fail with useless non-error against immutable databases 1218133366  
1111751734 https://github.com/simonw/datasette/issues/1728#issuecomment-1111751734 https://api.github.com/repos/simonw/datasette/issues/1728 IC_kwDOBm6k_c5CQ_w2 wragge 127565 2022-04-28T05:09:59Z 2022-04-28T05:09:59Z CONTRIBUTOR

Thanks, I'll give it a try!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Writable canned queries fail with useless non-error against immutable databases 1218133366  
1111712953 https://github.com/simonw/datasette/issues/1728#issuecomment-1111712953 https://api.github.com/repos/simonw/datasette/issues/1728 IC_kwDOBm6k_c5CQ2S5 wragge 127565 2022-04-28T03:48:36Z 2022-04-28T03:48:36Z CONTRIBUTOR

I don't think that'd work for this project. The db is very big, and my aim was to have an environment where researchers could be making use of the data, but be easily able to add corrections to the HTR/OCR extracted data when they came across problems. It's in its immutable (!) form here: https://sydney-stock-exchange-xqtkxtd5za-ts.a.run.app/stock_exchange/stocks

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Writable canned queries fail with useless non-error against immutable databases 1218133366  
1111705323 https://github.com/simonw/datasette/issues/1728#issuecomment-1111705323 https://api.github.com/repos/simonw/datasette/issues/1728 IC_kwDOBm6k_c5CQ0br wragge 127565 2022-04-28T03:32:06Z 2022-04-28T03:32:06Z CONTRIBUTOR

Ah, that would be it! I have a core set of data which doesn't change to which I want authorised users to be able to submit corrections. I was going to deal with the persistence issue by just grabbing the user corrections at regular intervals and saving to GitHub. I might need to rethink. Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Writable canned queries fail with useless non-error against immutable databases 1218133366  
997519202 https://github.com/simonw/datasette/issues/1547#issuecomment-997519202 https://api.github.com/repos/simonw/datasette/issues/1547 IC_kwDOBm6k_c47dO9i wragge 127565 2021-12-20T01:36:58Z 2021-12-20T01:36:58Z CONTRIBUTOR

Yep, that works -- thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Writable canned queries fail to load custom templates 1076388044  
997511968 https://github.com/simonw/datasette/issues/1547#issuecomment-997511968 https://api.github.com/repos/simonw/datasette/issues/1547 IC_kwDOBm6k_c47dNMg wragge 127565 2021-12-20T01:21:59Z 2021-12-20T01:21:59Z CONTRIBUTOR

I've installed the alpha version but get an error when starting up Datasette:

Traceback (most recent call last):
  File "/Users/tim/.pyenv/versions/stock-exchange/bin/datasette", line 5, in <module>
    from datasette.cli import cli
  File "/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/cli.py", line 15, in <module>
    from .app import Datasette, DEFAULT_SETTINGS, SETTINGS, SQLITE_LIMIT_ATTACHED, pm
  File "/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/app.py", line 31, in <module>
    from .views.database import DatabaseDownload, DatabaseView
  File "/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/views/database.py", line 25, in <module>
    from datasette.plugins import pm
  File "/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/plugins.py", line 29, in <module>
    mod = importlib.import_module(plugin)
  File "/Users/tim/.pyenv/versions/3.8.5/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/Users/tim/.pyenv/versions/3.8.5/envs/stock-exchange/lib/python3.8/site-packages/datasette/filters.py", line 9, in <module>
    @hookimpl(specname="filters_from_request")
TypeError: __call__() got an unexpected keyword argument 'specname'

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Writable canned queries fail to load custom templates 1076388044  
641908346 https://github.com/simonw/datasette/issues/394#issuecomment-641908346 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng== wragge 127565 2020-06-10T10:22:54Z 2020-06-10T10:22:54Z CONTRIBUTOR

There's a working demo here: https://github.com/wragge/datasette-test

And if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021  
604249402 https://github.com/simonw/datasette/issues/712#issuecomment-604249402 https://api.github.com/repos/simonw/datasette/issues/712 MDEyOklzc3VlQ29tbWVudDYwNDI0OTQwMg== wragge 127565 2020-03-26T06:11:44Z 2020-03-26T06:11:44Z CONTRIBUTOR

Following on from @betatim's suggestion on Twitter, I've changed the proxy url to include 'absolute':

proxy_url = f'{base_url}proxy/absolute/8001/'

This works both on Binder and locally, without using the path_from_header option. I've updated the demo repository. Sorry @simonw if I've led you down the wrong path!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url doesn't entirely work for running Datasette inside Binder 588108428  
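As a sketch of the pattern described in the comment above: jupyter-server-proxy's plain `/proxy/<port>/` prefix rewrites request paths, while the `/proxy/absolute/<port>/` form passes the original path through to the proxied app. The `base_url` value below is a hypothetical example for illustration, not taken from the actual deployment:

```python
# Sketch of the proxy URL construction from the comment above.
# base_url here is a hypothetical Jupyter base URL for illustration.
base_url = "/user/example/"

# Using the /proxy/absolute/ prefix so jupyter-server-proxy passes the
# full request path through to Datasette unchanged.
proxy_url = f"{base_url}proxy/absolute/8001/"

print(proxy_url)  # /user/example/proxy/absolute/8001/
```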
604225034 https://github.com/simonw/datasette/issues/712#issuecomment-604225034 https://api.github.com/repos/simonw/datasette/issues/712 MDEyOklzc3VlQ29tbWVudDYwNDIyNTAzNA== wragge 127565 2020-03-26T04:40:08Z 2020-03-26T04:40:08Z CONTRIBUTOR

Great! Yes, can confirm that this works on Binder. However, when I try to run the same code locally, I get an Internal Server Error when I try to access Datasette.

ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette_debug_asgi.py", line 24, in wrapped_app
    await app(scope, recieve, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/utils/asgi.py", line 174, in __call__
    await self.app(scope, receive, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/tracer.py", line 75, in __call__
    await self.app(scope, receive, send)
  File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/app.py", line 746, in __call__
    raw_path = dict(scope["headers"])[path_from_header.encode("utf8")].split(b"?")[0]
KeyError: b'x-original-uri'
INFO: 127.0.0.1:49320 - "GET / HTTP/1.1" 500 Internal Server Error

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url doesn't entirely work for running Datasette inside Binder 588108428  
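The KeyError in the traceback above can be reproduced in isolation: the quoted app.py line looks the configured header up directly in the ASGI scope's headers, and when running locally no proxy injects x-original-uri. A minimal sketch (the scope dict is illustrative, not Datasette's actual request handling):

```python
# Reproduce the failing lookup from the traceback above.
path_from_header = "x-original-uri"
# A local request has no x-original-uri header, since no proxy added one.
scope = {"headers": [(b"host", b"127.0.0.1:8001")]}

headers = dict(scope["headers"])
try:
    # This is the pattern the quoted app.py line uses: a direct key lookup.
    raw_path = headers[path_from_header.encode("utf8")].split(b"?")[0]
except KeyError as e:
    print("KeyError:", e)  # KeyError: b'x-original-uri'

# A defensive lookup with a fallback path avoids the 500:
raw_path = headers.get(path_from_header.encode("utf8"), b"/").split(b"?")[0]
print(raw_path)  # b'/'
```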
604166918 https://github.com/simonw/datasette/issues/394#issuecomment-604166918 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwNDE2NjkxOA== wragge 127565 2020-03-26T00:56:30Z 2020-03-26T00:56:30Z CONTRIBUTOR

Thanks! I'm trying to launch Datasette from within a notebook using the jupyter-server-proxy and the new base_url parameter. While the assets load ok, and the breadcrumb navigation works, the facet links don't seem to use the base_url. Or have I missed something?

My test repository is here: https://github.com/wragge/datasette-test

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021  
602907207 https://github.com/simonw/datasette/issues/394#issuecomment-602907207 https://api.github.com/repos/simonw/datasette/issues/394 MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw== wragge 127565 2020-03-23T23:12:18Z 2020-03-23T23:12:18Z CONTRIBUTOR

This would also be useful for running Datasette in Jupyter notebooks on Binder. While you can use Jupyter-server-proxy to access Datasette on Binder, the links are broken.

Why run Datasette on Binder? I'm developing a range of Jupyter notebooks that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic.

For example, there are a series of notebooks that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data.

But it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
base_url configuration setting 396212021  
593026413 https://github.com/simonw/datasette/issues/573#issuecomment-593026413 https://api.github.com/repos/simonw/datasette/issues/573 MDEyOklzc3VlQ29tbWVudDU5MzAyNjQxMw== wragge 127565 2020-03-01T01:24:45Z 2020-03-01T01:24:45Z CONTRIBUTOR

Did you manage to find an answer to this? I've got a notebook to help people generate datasets on the fly from an API, so it would be cool if they could flick them over to Datasette for initial exploration.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exposing Datasette via Jupyter-server-proxy 492153532  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
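The filter shown at the top of this page ("12 rows where user = 127565 sorted by updated_at descending") maps to a straightforward query against the schema above. A minimal sketch using an in-memory SQLite database seeded with two of the rows from this table (bodies abbreviated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abbreviated version of the issue_comments schema shown above.
conn.execute(
    """CREATE TABLE issue_comments (
        id INTEGER PRIMARY KEY,
        user INTEGER,
        updated_at TEXT,
        body TEXT
    )"""
)
conn.execute("CREATE INDEX idx_issue_comments_user ON issue_comments (user)")
# Two sample rows; the ids and timestamps come from the table above.
conn.executemany(
    "INSERT INTO issue_comments VALUES (?, ?, ?, ?)",
    [
        (1111752676, 127565, "2022-04-28T05:11:54Z", "..."),
        (593026413, 127565, "2020-03-01T01:24:45Z", "..."),
    ],
)

# The filter and sort this page applies; ISO 8601 timestamps sort
# correctly as plain strings.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE user = ? ORDER BY updated_at DESC",
    (127565,),
).fetchall()
print(rows)  # newest comment first
```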
Powered by Datasette · Queries took 21.97ms · About: github-to-sqlite