issue_comments
14 rows where "created_at" is on date 2019-03-15, reactions = {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} and user = 9599, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app
---|---|---|---|---|---|---|---|---|---|---|---
473323329 | https://github.com/simonw/datasette/issues/123#issuecomment-473323329 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDQ3MzMyMzMyOQ== | simonw 9599 | 2019-03-15T15:09:15Z | 2019-05-14T15:53:05Z | OWNER | How would Datasette accepting URLs work? I want to support not just SQLite files and CSVs but other extensible formats (geojson, Atom, shapefiles etc) as well. So if it's a URL, we can use the first 200 downloaded bytes to decide which type of file it is. This is likely more reliable than hoping the web server provided the correct content-type. Also: let's have a threshold for downloading to disk. We will start downloading to a temp file (location controlled by an environment variable) if either the content length header is above that threshold OR we hit that much data cached in memory already and don't know how much more is still to come. There needs to be a command line option for saying "grab from this URL but force treat it as CSV" - same thing for files on disk.
If you provide less Auto detection could be tricky. Probably do this with a plugin hook. https://github.com/h2non/filetype.py is interesting but deals with images video etc so not right for this purpose. I think we need our own simple content sniffing code via a plugin hook. What if two plugin type hooks can both potentially handle a sniffed file? The CLI can quit and return an error saying content is ambiguous and you need to specify a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
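The "first 200 downloaded bytes" idea in the comment above can be sketched in a few lines. This is a hypothetical illustration, not Datasette's actual plugin hook; the function name `sniff_format` and the format labels are made up for the example:

```python
# Minimal sketch of content sniffing on the first ~200 bytes of a download.
# A real plugin-hook implementation would look much deeper before deciding.
def sniff_format(first_bytes: bytes) -> str:
    # SQLite database files start with a fixed 16-byte magic header
    if first_bytes.startswith(b"SQLite format 3\x00"):
        return "sqlite"
    stripped = first_bytes.lstrip()
    # GeoJSON (and JSON generally) opens with an object or array
    if stripped.startswith((b"{", b"[")):
        return "json"
    # Atom feeds are XML documents
    if stripped.startswith(b"<?xml") or stripped.startswith(b"<feed"):
        return "atom"
    # Fall back to CSV if the sample is at least valid text
    try:
        first_bytes.decode("utf-8")
        return "csv"
    except UnicodeDecodeError:
        return "unknown"
```

The "force treat it as CSV" command-line option the comment mentions would simply bypass this sniffing step entirely.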
473313975 | https://github.com/simonw/datasette/issues/123#issuecomment-473313975 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDQ3MzMxMzk3NQ== | simonw 9599 | 2019-03-15T14:45:46Z | 2019-03-15T14:45:46Z | OWNER | I'm reopening this one as part of #417. Further experience with Python's CSV standard library module has convinced me that pandas is not a required dependency for this. My sqlite-utils package can do most of the work here with very few dependencies. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
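The comment above argues that Python's csv standard library module (with sqlite-utils on top) covers CSV ingestion without pandas. A stdlib-only sketch of that idea — the `load_csv` helper and table name are illustrative, not from either library:

```python
import csv
import io
import sqlite3

def load_csv(db: sqlite3.Connection, table: str, text: str) -> int:
    """Create a table from a CSV string and insert its rows; returns row count."""
    reader = csv.reader(io.StringIO(text))
    headers = next(reader)
    cols = ", ".join(f"[{h}]" for h in headers)
    placeholders = ", ".join("?" for _ in headers)
    db.execute(f"CREATE TABLE [{table}] ({cols})")
    rows = list(reader)
    db.executemany(f"INSERT INTO [{table}] VALUES ({placeholders})", rows)
    return len(rows)

db = sqlite3.connect(":memory:")
count = load_csv(db, "data", "id,name\n1,foo\n2,bar\n")
```

Note that with this naive approach every column is TEXT; type detection is one of the things sqlite-utils adds on top.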
473310026 | https://github.com/simonw/datasette/pull/416#issuecomment-473310026 | https://api.github.com/repos/simonw/datasette/issues/416 | MDEyOklzc3VlQ29tbWVudDQ3MzMxMDAyNg== | simonw 9599 | 2019-03-15T14:35:53Z | 2019-03-15T14:35:53Z | OWNER | See #418 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
URL hashing now optional: turn on with --config hash_urls:1 (#418) 421348146 | |
473308631 | https://github.com/simonw/datasette/issues/417#issuecomment-473308631 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDQ3MzMwODYzMQ== | simonw 9599 | 2019-03-15T14:32:13Z | 2019-03-15T14:32:13Z | OWNER | This would allow Datasette to be easily used as a "data library" (like a data warehouse but less expectation of big data querying technology such as Presto). One of the things I learned at the NICAR CAR 2019 conference in Newport Beach is that there is a very real need for some kind of easily accessible data library at most newsrooms. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Datasette Library 421546944 | |
473164038 | https://github.com/simonw/datasette/issues/415#issuecomment-473164038 | https://api.github.com/repos/simonw/datasette/issues/415 | MDEyOklzc3VlQ29tbWVudDQ3MzE2NDAzOA== | simonw 9599 | 2019-03-15T05:31:21Z | 2019-03-15T05:31:21Z | OWNER | |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add query parameter to hide SQL textarea 418329842 | |
473160702 | https://github.com/simonw/datasette/pull/416#issuecomment-473160702 | https://api.github.com/repos/simonw/datasette/issues/416 | MDEyOklzc3VlQ29tbWVudDQ3MzE2MDcwMg== | simonw 9599 | 2019-03-15T05:08:13Z | 2019-03-15T05:08:13Z | OWNER | This also needs extensive tests to ensure that with the option turned on all of the redirects behave as they should. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
URL hashing now optional: turn on with --config hash_urls:1 (#418) 421348146 | |
473160476 | https://github.com/simonw/datasette/pull/413#issuecomment-473160476 | https://api.github.com/repos/simonw/datasette/issues/413 | MDEyOklzc3VlQ29tbWVudDQ3MzE2MDQ3Ng== | simonw 9599 | 2019-03-15T05:06:37Z | 2019-03-15T05:06:37Z | OWNER | Thanks! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Update spatialite.rst 413887019 | |
473159679 | https://github.com/simonw/datasette/pull/416#issuecomment-473159679 | https://api.github.com/repos/simonw/datasette/issues/416 | MDEyOklzc3VlQ29tbWVudDQ3MzE1OTY3OQ== | simonw 9599 | 2019-03-15T05:01:27Z | 2019-03-15T05:01:27Z | OWNER | Also: if the option is False and the user visits a URL with a hash in it, should we redirect them? I'm inclined to say no: furthermore, I'd be OK continuing to serve a far-future cache header for that case. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
URL hashing now optional: turn on with --config hash_urls:1 (#418) 421348146 | |
473158506 | https://github.com/simonw/datasette/issues/412#issuecomment-473158506 | https://api.github.com/repos/simonw/datasette/issues/412 | MDEyOklzc3VlQ29tbWVudDQ3MzE1ODUwNg== | simonw 9599 | 2019-03-15T04:53:53Z | 2019-03-15T04:53:53Z | OWNER | I've been thinking about how Datasette instances could query each other for a while - it's a really interesting direction. There are some tricky problems to solve to get this to work. There's a SQLite mechanism called "virtual table functions" which can implement things like this, but it's not supported by Python's standard library sqlite3 module. https://github.com/coleifer/sqlite-vtfunc is a library that enables this feature. I experimented with using that to implement a function that scrapes HTML content (with an eye to accessing data from other APIs and Datasette instances) a while ago: https://github.com/coleifer/sqlite-vtfunc/issues/6 The bigger challenge is how to get this kind of thing to behave well within a Python 3 async environment. I have some ideas here but they're going to require some very crafty engineering. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Linked Data(sette) 411257981 | |
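For context on the comment above: Python's standard sqlite3 module cannot register the table-valued "virtual table functions" being discussed (that is what sqlite-vtfunc adds), but it can register scalar functions, which shows the narrower extension hook that ships in the box. The function name `reverse_text` is made up for this sketch:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# A scalar UDF: one value in, one value out. Unlike a table-valued
# function, it cannot return multiple rows per call - which is why
# the comment reaches for sqlite-vtfunc instead.
db.create_function("reverse_text", 1, lambda s: s[::-1])
result = db.execute("select reverse_text('datasette')").fetchone()[0]
```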
473157770 | https://github.com/simonw/datasette/issues/415#issuecomment-473157770 | https://api.github.com/repos/simonw/datasette/issues/415 | MDEyOklzc3VlQ29tbWVudDQ3MzE1Nzc3MA== | simonw 9599 | 2019-03-15T04:49:03Z | 2019-03-15T04:49:03Z | OWNER | Interesting idea. I can see how this would make sense if you are dealing with really long SQL queries. My own example of a long query that might benefit from this: https://russian-ads-demo.herokuapp.com/russian-ads-a42c4e8?sql=select%0D%0A++++target_id%2C%0D%0A++++targets.name%2C%0D%0A++++count()+as+n%2C%0D%0A++++json_object(%0D%0A++++++++%22href%22%2C+%22%2Frussian-ads%2Ffaceted-targets%3Ftargets%3D%22+||+%0D%0A++++++++++++json_insert(%3Atargets%2C+%27%24[%27+||+json_array_length(%3Atargets)+||+%27]%27%2C+target_id)%0D%0A++++++++%2C%0D%0A++++++++%22label%22%2C+json_insert(%3Atargets%2C+%27%24[%27+||+json_array_length(%3Atargets)+||+%27]%27%2C+target_id)%0D%0A++++)+as+apply_this_facet%2C%0D%0A++++json_object(%0D%0A++++++++%22href%22%2C+%22%2Frussian-ads%2Fdisplay_ads%3F_targets_json%3D%22+||+%0D%0A++++++++++++json_insert(%3Atargets%2C+%27%24[%27+||+json_array_length(%3Atargets)+||+%27]%27%2C+target_id)%0D%0A++++++++%2C%0D%0A++++++++%22label%22%2C+%22See+%22+||+count()+||+%22+ads+matching+%22+||+json_insert(%3Atargets%2C+%27%24[%27+||+json_array_length(%3Atargets)+||+%27]%27%2C+target_id)%0D%0A++++)+as+browse_these_ads%0D%0Afrom+ad_targets%0D%0Ajoin+targets+on+ad_targets.target_id+%3D+targets.id%0D%0Awhere%0D%0A++++json_array_length(%3Atargets)+%3D%3D+0+or%0D%0A++++ad_id+in+(%0D%0A++++++++select+ad_id%0D%0A++++++++from+%22ad_targets%22%0D%0A++++++++where+%22ad_targets%22.target_id+in+(select+value+from+json_each(%3Atargets))%0D%0A++++++++group+by+%22ad_targets%22.ad_id%0D%0A++++++++having+count(distinct+%22ad_targets%22.target_id)+%3D+json_array_length(%3Atargets)%0D%0A++++)%0D%0A++++and+target_id+not+in+(select+value+from+json_each(%3Atargets))%0D%0Agroup+by%0D%0A++++target_id+order+by+n+desc%0D%0A&targets=[%22e6200%22] 
Having a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add query parameter to hide SQL textarea 418329842 | |
473156905 | https://github.com/simonw/datasette/issues/411#issuecomment-473156905 | https://api.github.com/repos/simonw/datasette/issues/411 | MDEyOklzc3VlQ29tbWVudDQ3MzE1NjkwNQ== | simonw 9599 | 2019-03-15T04:42:58Z | 2019-03-15T04:42:58Z | OWNER | Have you tried this?
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
How to pass named parameter into spatialite MakePoint() function 410384988 | |
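The code sample in the "Have you tried this?" comment above did not survive extraction. As background, SQLite named parameters (the `:name` syntax Datasette exposes as form fields) pass through to any SQL function call, SpatiaLite's MakePoint() included. A stdlib demonstration with a stand-in function, since SpatiaLite itself is not loaded here — the stand-in `MakePoint` is hypothetical:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Stand-in for SpatiaLite's MakePoint(x, y), registered only so the
# query below can run without the SpatiaLite extension.
db.create_function("MakePoint", 2, lambda x, y: f"POINT({x} {y})")
point = db.execute(
    "select MakePoint(:longitude, :latitude)",
    {"longitude": -122.4, "latitude": 37.8},
).fetchone()[0]
```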
473156774 | https://github.com/simonw/datasette/issues/414#issuecomment-473156774 | https://api.github.com/repos/simonw/datasette/issues/414 | MDEyOklzc3VlQ29tbWVudDQ3MzE1Njc3NA== | simonw 9599 | 2019-03-15T04:42:06Z | 2019-03-15T04:42:06Z | OWNER | This has been bothering me as well, especially when I try to install |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
datasette requires specific version of Click 415575624 | |
473156513 | https://github.com/simonw/datasette/pull/416#issuecomment-473156513 | https://api.github.com/repos/simonw/datasette/issues/416 | MDEyOklzc3VlQ29tbWVudDQ3MzE1NjUxMw== | simonw 9599 | 2019-03-15T04:40:29Z | 2019-03-15T04:40:29Z | OWNER | Still TODO: need to figure out what to do about cache TTL. Defaulting to 365 days no longer makes sense without the hash_urls setting. Maybe drop that setting default to 0? Here's the setting: And here's where it takes effect: |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
URL hashing now optional: turn on with --config hash_urls:1 (#418) 421348146 | |
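Taken together, the two settings discussed in these comments would combine roughly as follows under Datasette's `--config` syntax of the era. A sketch only — the 365-day TTL value mirrors the default mentioned above, and pairing the two flags this way is an assumption, not a documented recipe:

```shell
# Opt back in to hashed URLs, and keep the far-future cache TTL
# (365 days = 31536000 seconds) that only makes sense alongside them.
datasette serve data.db --config hash_urls:1 --config default_cache_ttl:31536000
```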
473154643 | https://github.com/simonw/datasette/pull/416#issuecomment-473154643 | https://api.github.com/repos/simonw/datasette/issues/416 | MDEyOklzc3VlQ29tbWVudDQ3MzE1NDY0Mw== | simonw 9599 | 2019-03-15T04:27:47Z | 2019-03-15T04:28:00Z | OWNER | Deployed a demo: https://datasette-optional-hash-demo.now.sh/
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
URL hashing now optional: turn on with --config hash_urls:1 (#418) 421348146 |
CREATE TABLE [issue_comments] ( [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY, [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]), [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT, [body] TEXT, [reactions] TEXT, [issue] INTEGER REFERENCES [issues]([id]) , [performed_via_github_app] TEXT); CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]); CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
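The schema above can be exercised directly with the stdlib. This reconstructs a simplified version of the query behind this page (comments created on 2019-03-15 by user 9599, newest update first), using one sample row from the table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Abridged version of the CREATE TABLE statement above (foreign keys
# and indexes omitted for brevity).
db.execute("""
    CREATE TABLE [issue_comments] (
        [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
        [node_id] TEXT, [user] INTEGER, [created_at] TEXT,
        [updated_at] TEXT, [author_association] TEXT, [body] TEXT,
        [reactions] TEXT, [issue] INTEGER, [performed_via_github_app] TEXT
    )
""")
db.execute(
    "INSERT INTO issue_comments (id, [user], created_at, updated_at)"
    " VALUES (?, ?, ?, ?)",
    (473323329, 9599, "2019-03-15T15:09:15Z", "2019-05-14T15:53:05Z"),
)
# ISO 8601 timestamps sort and prefix-match as plain text
rows = db.execute("""
    select id from issue_comments
    where created_at like '2019-03-15%' and [user] = 9599
    order by updated_at desc
""").fetchall()
```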