issue_comments
3 rows where author_association = "NONE" and user = 172847 sorted by updated_at descending

id: 559916057
html_url: https://github.com/simonw/datasette/issues/639#issuecomment-559916057
issue_url: https://api.github.com/repos/simonw/datasette/issues/639
node_id: MDEyOklzc3VlQ29tbWVudDU1OTkxNjA1Nw==
user: pkoppstein 172847
created_at: 2019-11-30T06:08:50Z
updated_at: 2019-11-30T06:08:50Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: updating metadata.json without recreating the app 527670799
performed_via_github_app:
body:
@simonw, @jacobian - I was able to resolve the metadata.json issue by adding

I also had to set the environment variable WEB_CONCURRENCY -- I used WEB_CONCURRENCY=1.

I am still anxious to know whether it's possible for Datasette on Heroku to access the SQLite file at another location. Cloudcube seems the most promising, and I'm hoping it can be done by tweaking the Procfile suitably, but maybe that's too optimistic?
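
For reference, an environment variable such as WEB_CONCURRENCY can be set on an existing Heroku app from the CLI; a minimal sketch, assuming the app is named MYAPP (a placeholder):

    # MYAPP is a hypothetical app name; substitute the real one.
    heroku config:set WEB_CONCURRENCY=1 -a MYAPP

    # Verify the value the dynos will see.
    heroku config:get WEB_CONCURRENCY -a MYAPP

Changing a config var this way restarts the dynos, so the new value takes effect without rebuilding the slug.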

id: 558852316
html_url: https://github.com/simonw/datasette/issues/639#issuecomment-558852316
issue_url: https://api.github.com/repos/simonw/datasette/issues/639
node_id: MDEyOklzc3VlQ29tbWVudDU1ODg1MjMxNg==
user: pkoppstein 172847
created_at: 2019-11-26T22:54:23Z
updated_at: 2019-11-26T22:54:23Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: updating metadata.json without recreating the app 527670799
performed_via_github_app:
body:
@jacobian - Thanks for your help. Having to upload an entire slug each time a small change is needed in

id: 558437707
html_url: https://github.com/simonw/datasette/issues/639#issuecomment-558437707
issue_url: https://api.github.com/repos/simonw/datasette/issues/639
node_id: MDEyOklzc3VlQ29tbWVudDU1ODQzNzcwNw==
user: pkoppstein 172847
created_at: 2019-11-26T03:02:53Z
updated_at: 2019-11-26T03:03:29Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: updating metadata.json without recreating the app 527670799
performed_via_github_app:
body:
@simonw - Thanks for the reply!

My reading of the Heroku documentation is that if one sets things up using git, then one can use "git push" (from a {local, GitHub, GitLab} git repository to Heroku) to "update" a Heroku deployment, but I'm not sure exactly how this works. However, assuming there is some way to use "git push" to update the Heroku deployment, the question becomes how one can do this in conjunction with Datasette.

Again, based on my reading of the Heroku documentation, it would seem that the following should work (but it doesn't quite):

1) Use datasette to create a deployment (named MYAPP)
2) Put it in maintenance mode
3) heroku git:clone -a MYAPP -- this results in an empty repository (as expected)
4) In another directory, heroku slugs:download -a MYAPP
5) Copy the downloaded slug into the repository
6) Make some change to metadata.json
7) Commit and push it back
8) Take the deployment out of maintenance mode
9) Refresh the deployment

Using the Heroku console, I've verified that the edits appear on Heroku, but somehow they are not reflected in the running app. I'm hopeful that with some small tweak or perhaps the addition of a bit of voodoo, this strategy will work.

I think it will be important to get this working for another reason: getting Heroku, Cloudcube, and Datasette to work together to overcome the slug size limitation, so that large SQLite databases can be deployed to Heroku using Datasette.
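
A rough shell sketch of the workflow described in the comment above, assuming the deployment was created with datasette publish heroku; the app name MYAPP, the database file my.db, and the commit message are placeholders, and heroku slugs:download needs a CLI plugin that provides it:

    # 1) Create the deployment with Datasette (MYAPP and my.db are placeholders)
    datasette publish heroku my.db -m metadata.json -n MYAPP

    # 2) Put the app into maintenance mode
    heroku maintenance:on -a MYAPP

    # 3) Clone the app's git repository (empty for a datasette-published app)
    heroku git:clone -a MYAPP

    # 4) Download the current slug (requires a CLI plugin that adds slugs:download)
    heroku slugs:download -a MYAPP

    # 5)-7) Copy the slug contents into the clone, edit metadata.json, then commit and push
    cd MYAPP
    git add . && git commit -m "Update metadata.json"
    git push heroku master

    # 8) Take the app out of maintenance mode
    heroku maintenance:off -a MYAPP

A push to the heroku remote normally triggers a new build and release; heroku releases -a MYAPP is one way to check whether a release was actually created after the push.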

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
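
The filtered view summarized at the top of this page corresponds to an ordinary query against this table; a minimal sketch via the sqlite3 command line, where the database filename github.db is an assumption:

    # github.db is an assumed filename; point sqlite3 at your own copy of the database.
    sqlite3 github.db "
      select id, created_at, updated_at, substr(body, 1, 80)
      from issue_comments
      where author_association = 'NONE' and user = 172847
      order by updated_at desc;
    "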