issue_comments: 345503897
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/simonw/datasette/issues/105#issuecomment-345503897 | https://api.github.com/repos/simonw/datasette/issues/105 | 345503897 | MDEyOklzc3VlQ29tbWVudDM0NTUwMzg5Nw== | 198537 | 2017-11-19T09:38:08Z | 2017-11-19T09:38:08Z | CONTRIBUTOR | Thanks, I wrote this very simple reader because the default approach as described on the Datahub pages seemed too complicated. I had metadata from the This could also be useful for getting from a Data Package to a SQL db: https://github.com/frictionlessdata/tableschema-sql-py I maintain a few climate science related datasets at https://github.com/openclimatedata/ The Data Retriever (mainly ecological data) by @ethanwhite et al. is also using the Data Package format for metadata and has some tooling for different DBs: https://frictionlessdata.io/articles/the-data-retriever/ https://github.com/weecology/retriever The Open Power System Data project also has a couple of datasets that show nicely how CSV is great for assembling data, and they already make SQLite files available. It's one of the first datasets I tried with Datasette, perfect for the use case of getting an API for putting power stations on a map ... | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 274314940 |  |
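The comment points to tableschema-sql-py as a route from a Data Package to a SQL database. As a rough sketch (assuming the `datapackage` Python library with its SQL storage plugin and SQLAlchemy installed; the descriptor path and database name are placeholders), loading a package into SQLite might look like this:

```python
# Sketch: load a Frictionless Data Package into a SQLite database,
# assuming `datapackage` + `tableschema-sql` + SQLAlchemy are installed.
# 'datapackage.json' and 'power-stations.db' are placeholder names.
from datapackage import Package
from sqlalchemy import create_engine

# SQLAlchemy engine pointing at the target SQLite file
engine = create_engine("sqlite:///power-stations.db")

# Read the Data Package descriptor (resources plus their table schemas)
package = Package("datapackage.json")

# Write every tabular resource into the database via the SQL storage plugin
package.save(storage="sql", engine=engine)
```

The resulting SQLite file could then be served with `datasette power-stations.db`, which matches the power-stations-on-a-map use case mentioned in the comment.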