issue_comments

9 rows where "created_at" is on date 2018-11-05 sorted by updated_at descending

issue (5 values)

  • Integration with JupyterLab (3)
  • Interface should show same JSON shape options for custom SQL queries (2)
  • datasette publish digitalocean plugin (2)
  • Travis should push tagged images to Docker Hub for each release (1)
  • Get Datasette working with Zeit Now v2's 100MB image size limit (1)

user (3 values)

  • simonw (5)
  • psychemedia (3)
  • gfrmin (1)

author_association (2 values)

  • OWNER (5)
  • CONTRIBUTOR (4)

Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, issue, performed_via_github_app
436042445 https://github.com/simonw/datasette/issues/370#issuecomment-436042445 https://api.github.com/repos/simonw/datasette/issues/370 MDEyOklzc3VlQ29tbWVudDQzNjA0MjQ0NQ== psychemedia 82988 2018-11-05T21:30:42Z 2018-11-05T21:31:48Z CONTRIBUTOR

Another route would be something like creating a datasette IPython magic for notebooks to take a dataframe and easily render it as a datasette. You'd need to run the app in the background rather than block execution in the notebook. Related to that, or to publishing a dataframe in notebook cell for use in other cells in a non-blocking way, there may be cribs in something like https://github.com/micahscopes/nbmultitask .

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Integration with JupyterLab 377155320  
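
As a hedged aside on the comment above, here is a minimal sketch of what a notebook-side "datasette magic" could look like: write the DataFrame to a temporary SQLite file and launch Datasette as a background process so the kernel is not blocked. The magic name, port, and helper are hypothetical, not an existing Datasette or IPython API.

# Hypothetical %datasette line magic: dump a DataFrame to SQLite and serve it
# with Datasette in a background process (non-blocking for the notebook).
import sqlite3
import subprocess
import tempfile

from IPython import get_ipython
from IPython.core.magic import register_line_magic


@register_line_magic
def datasette(line):
    """Usage: %datasette df  -- serve the DataFrame named `df` locally."""
    df = get_ipython().user_ns[line.strip()]          # look the DataFrame up by name
    db_path = tempfile.NamedTemporaryFile(suffix=".db", delete=False).name
    with sqlite3.connect(db_path) as conn:
        df.to_sql("df", conn, index=False)            # write the DataFrame to SQLite
    # Run the real `datasette` CLI in the background instead of blocking the cell
    proc = subprocess.Popen(["datasette", db_path, "--port", "8001"])
    print("Serving %s at http://127.0.0.1:8001/ (pid %d)" % (line.strip(), proc.pid))
    return proc
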
436037692 https://github.com/simonw/datasette/issues/370#issuecomment-436037692 https://api.github.com/repos/simonw/datasette/issues/370 MDEyOklzc3VlQ29tbWVudDQzNjAzNzY5Mg== psychemedia 82988 2018-11-05T21:15:47Z 2018-11-05T21:18:37Z CONTRIBUTOR

In terms of integration with pandas, I was pondering two different ways datasette/csvs_to_sqlite integration may work:

  • like pandasql, to provide a SQL query layer either by a direct connection to the sqlite db or via datasette API;
  • as an improvement of pandas.to_sql(), which is a bit ropey (e.g. pandas.to_sql_from_csvs(), routing the dataframe to sqlite via csvs_to_sqlite rather than the dodgy mapping that pandas supports).

The pandas.publish_* idea could be quite interesting though... Would it be useful/fruitful to think about publish_ as a complement to pandas.to_?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Integration with JupyterLab 377155320  
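
As an illustration of the first bullet in the comment above (a pandasql-style SQL layer over DataFrames via a direct SQLite connection), here is a minimal sketch; the query_frames() helper is hypothetical, not a pandas or Datasette API.

# Sketch: expose DataFrames as tables in an in-memory SQLite database and
# query them with SQL, pandasql-style.
import sqlite3

import pandas as pd


def query_frames(sql, **frames):
    """Run a SQL query against the supplied DataFrames, each registered as a table."""
    conn = sqlite3.connect(":memory:")
    try:
        for name, df in frames.items():
            df.to_sql(name, conn, index=False)        # one table per keyword argument
        return pd.read_sql_query(sql, conn)
    finally:
        conn.close()


if __name__ == "__main__":
    sales = pd.DataFrame({"region": ["north", "south", "north"], "amount": [10, 20, 30]})
    print(query_frames(
        "select region, sum(amount) as total from sales group by region",
        sales=sales,
    ))
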
435976262 https://github.com/simonw/datasette/issues/374#issuecomment-435976262 https://api.github.com/repos/simonw/datasette/issues/374 MDEyOklzc3VlQ29tbWVudDQzNTk3NjI2Mg== simonw 9599 2018-11-05T18:11:10Z 2018-11-05T18:11:10Z OWNER

I think there is a useful way forward here though: the image size may be limited to 100MB, but once the instance launches it gets access to a filesystem with a lot more space than that (possibly as much as 15GB given my initial poking around).

So... one potential solution here is to teach Datasette to launch from a smaller image and then download a larger SQLite file from a known URL as part of its initial startup.

Combined with the ability to get Now to always run at least one copy of an instance, this could allow Datasette to host much larger SQLite databases while still playing nicely with the Zeit Now v2 platform.

See also https://github.com/zeit/now-cli/issues/1523

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Get Datasette working with Zeit Now v2's 100MB image size limit 377518499  
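
A minimal sketch of the startup approach described in the comment above: ship a small image, then fetch the much larger SQLite file from a known URL into the instance's filesystem before Datasette starts. The URL and paths below are placeholders.

# Sketch of a container entrypoint: download the database, then exec Datasette.
import os
import urllib.request

DB_URL = "https://example.com/large-database.db"     # placeholder: the known URL
DB_PATH = "/tmp/large-database.db"                    # instance filesystem, not the image

if not os.path.exists(DB_PATH):
    print("Downloading %s ..." % DB_URL)
    urllib.request.urlretrieve(DB_URL, DB_PATH)

# Replace this process with Datasette serving the downloaded file
os.execvp("datasette", ["datasette", DB_PATH, "--host", "0.0.0.0", "--port", "8001"])
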
435974786 https://github.com/simonw/datasette/issues/370#issuecomment-435974786 https://api.github.com/repos/simonw/datasette/issues/370 MDEyOklzc3VlQ29tbWVudDQzNTk3NDc4Ng== simonw 9599 2018-11-05T18:06:56Z 2018-11-05T18:06:56Z OWNER

I've been thinking a bit about ways of using Jupyter Notebook more effectively with Datasette (things like a publish_dataframes(df1, df2, df3) function which publishes some Pandas dataframes and returns you a URL to a new hosted Datasette instance) but you're right, Jupyter Lab is potentially a much more interesting fit.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Integration with JupyterLab 377155320  
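
A rough sketch of the publish_dataframes() idea floated in the comment above: write each DataFrame into one SQLite file, then shell out to the existing `datasette publish` command. The function name and the Heroku target are illustrative, and deploying requires a configured publishing target; the hosted URL is reported by the publish command itself.

# Hypothetical publish_dataframes() helper built on top of `datasette publish`;
# the wrapper function is not a real Datasette or pandas API.
import sqlite3
import subprocess
import tempfile


def publish_dataframes(*dfs, target="heroku", app_name="published-dataframes"):
    db_path = tempfile.NamedTemporaryFile(suffix=".db", delete=False).name
    with sqlite3.connect(db_path) as conn:
        for i, df in enumerate(dfs, start=1):
            df.to_sql("df_%d" % i, conn, index=False)   # tables df_1, df_2, ...
    # Deploy the database; the hosted URL appears in the command's output
    subprocess.run(["datasette", "publish", target, db_path, "-n", app_name], check=True)
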
435862009 https://github.com/simonw/datasette/issues/371#issuecomment-435862009 https://api.github.com/repos/simonw/datasette/issues/371 MDEyOklzc3VlQ29tbWVudDQzNTg2MjAwOQ== psychemedia 82988 2018-11-05T12:48:35Z 2018-11-05T12:48:35Z CONTRIBUTOR

I think you need to separately register a domain name you own in order to get an address that isn't a raw IP address? https://www.digitalocean.com/docs/networking/dns/

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
datasette publish digitalocean plugin 377156339  
435772031 https://github.com/simonw/datasette/issues/329#issuecomment-435772031 https://api.github.com/repos/simonw/datasette/issues/329 MDEyOklzc3VlQ29tbWVudDQzNTc3MjAzMQ== simonw 9599 2018-11-05T06:53:28Z 2018-11-05T06:54:10Z OWNER

This works now! The 0.25.1 release was the first release which successfully pushed to Docker Hub: https://hub.docker.com/r/datasetteproject/datasette/tags/

Here's the log from the successful Travis release job: https://travis-ci.org/simonw/datasette/jobs/450714602

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Travis should push tagged images to Docker Hub for each release 336465018  
435768450 https://github.com/simonw/datasette/issues/369#issuecomment-435768450 https://api.github.com/repos/simonw/datasette/issues/369 MDEyOklzc3VlQ29tbWVudDQzNTc2ODQ1MA== gfrmin 416374 2018-11-05T06:31:59Z 2018-11-05T06:31:59Z CONTRIBUTOR

That would be ideal, but you know better than me whether the CSV streaming trick works for custom SQL queries.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Interface should show same JSON shape options for custom SQL queries 374953006  
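
For context on the "CSV streaming trick" mentioned above, here is a generic sketch of streaming an arbitrary SQL query as CSV chunks straight from a cursor, rather than materialising the whole result set; this is only an illustration, not Datasette's actual implementation.

# Stream CSV output for an arbitrary read-only SQL query, one chunk per row.
import csv
import io
import sqlite3


def stream_csv(db_path, sql, params=()):
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(sql, params)
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        writer.writerow([col[0] for col in cursor.description])   # header row
        yield buffer.getvalue()
        for row in cursor:
            buffer.seek(0)
            buffer.truncate(0)
            writer.writerow(row)
            yield buffer.getvalue()                                # hand each row to the caller
    finally:
        conn.close()
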
435767827 https://github.com/simonw/datasette/issues/369#issuecomment-435767827 https://api.github.com/repos/simonw/datasette/issues/369 MDEyOklzc3VlQ29tbWVudDQzNTc2NzgyNw== simonw 9599 2018-11-05T06:27:55Z 2018-11-05T06:28:48Z OWNER

This is a good idea. Basically a version of this bug but on the custom SQL query page:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Interface should show same JSON shape options for custom SQL queries 374953006  
435767775 https://github.com/simonw/datasette/issues/371#issuecomment-435767775 https://api.github.com/repos/simonw/datasette/issues/371 MDEyOklzc3VlQ29tbWVudDQzNTc2Nzc3NQ== simonw 9599 2018-11-05T06:27:33Z 2018-11-05T06:27:33Z OWNER

This would be fantastic - that tutorial looks like it covers many of the details needed for this.

Do you know if Digital Ocean have the ability to provision URLs for a droplet without you needing to buy your own domain name? Heroku have https://example.herokuapp.com/ and Zeit have https://blah.now.sh/ - does Digital Ocean have an equivalent?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
datasette publish digitalocean plugin 377156339  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
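
Given the schema above, a sketch of the query behind this page ("created_at" on date 2018-11-05, sorted by updated_at descending), run directly against the SQLite file produced by github-to-sqlite; the "github.db" path is a placeholder.

# Reproduce this page's filter with plain sqlite3.
import sqlite3

conn = sqlite3.connect("github.db")                   # placeholder path
rows = conn.execute(
    """
    select id, user, author_association, body
    from issue_comments
    where date(created_at) = '2018-11-05'
    order by updated_at desc
    """
).fetchall()
print(len(rows), "comments")
conn.close()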