14 rows where user = 21148 sorted by id

id html_url issue_url node_id user created_at updated_at author_association body reactions issue performed_via_github_app
344710204 https://github.com/simonw/datasette/pull/104#issuecomment-344710204 https://api.github.com/repos/simonw/datasette/issues/104 MDEyOklzc3VlQ29tbWVudDM0NDcxMDIwNA== jacobian 21148 2017-11-15T19:57:50Z 2017-11-15T19:57:50Z CONTRIBUTOR

A first basic stab at making this work, just to prove the approach. Right now this requires a Heroku CLI plugin, which seems pretty unreasonable. I think this can be replaced with direct API calls, which could clean up a lot of things. But I wanted to prove it worked first, and it does.
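The direct API calls mentioned here would go through Heroku's Platform API (the Builds endpoint). A minimal sketch of constructing such a request — the helper name and payload shape are assumptions based on Heroku's v3 API, not code from this PR:

```python
import json
import urllib.request

HEROKU_API = "https://api.heroku.com"

def build_request(app_name, source_url, token, version=None):
    """Construct a POST request against the Heroku Builds API.

    A sketch only: Heroku's v3 API expects a source_blob pointing at a
    downloadable tarball of the app source.
    """
    payload = {"source_blob": {"url": source_url, "version": version}}
    return urllib.request.Request(
        f"{HEROKU_API}/apps/{app_name}/builds",
        data=json.dumps(payload).encode(),
        headers={
            "Accept": "application/vnd.heroku+json; version=3",
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` (given a valid API token) would kick off a build without any CLI plugin involved.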

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
[WIP] Add publish to heroku support 274284246  
345452669 https://github.com/simonw/datasette/pull/104#issuecomment-345452669 https://api.github.com/repos/simonw/datasette/issues/104 MDEyOklzc3VlQ29tbWVudDM0NTQ1MjY2OQ== jacobian 21148 2017-11-18T16:18:45Z 2017-11-18T16:18:45Z CONTRIBUTOR

I'd like to do a bit of cleanup, and some error checking in case heroku/heroku-builds isn't installed.
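The error check described could look roughly like this — a hypothetical helper, not code from the PR:

```python
import shutil
import subprocess

def heroku_builds_available():
    """Return True only if the heroku CLI is on PATH and the
    heroku-builds plugin appears in its plugin list (a sketch)."""
    if shutil.which("heroku") is None:
        return False
    try:
        result = subprocess.run(
            ["heroku", "plugins"], capture_output=True, text=True, timeout=30
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return "heroku-builds" in result.stdout
```

The publish command could call this up front and exit with a friendly install hint instead of failing mid-deploy.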

[WIP] Add publish to heroku support 274284246  
346116745 https://github.com/simonw/datasette/pull/104#issuecomment-346116745 https://api.github.com/repos/simonw/datasette/issues/104 MDEyOklzc3VlQ29tbWVudDM0NjExNjc0NQ== jacobian 21148 2017-11-21T18:23:25Z 2017-11-21T18:23:25Z CONTRIBUTOR

@simonw ready for a review and merge if you want.

There's still some nasty duplicated code in cli.py and utils.py, which is just going to get worse if/when we start adding any other deploy targets (and I want to do one for cloud.gov, at least). I think there's an opportunity for some refactoring here. I'm happy to do that now as part of this PR, or if you merge this first I'll do it in a different one.

[WIP] Add publish to heroku support 274284246  
346124073 https://github.com/simonw/datasette/pull/104#issuecomment-346124073 https://api.github.com/repos/simonw/datasette/issues/104 MDEyOklzc3VlQ29tbWVudDM0NjEyNDA3Mw== jacobian 21148 2017-11-21T18:49:55Z 2017-11-21T18:49:55Z CONTRIBUTOR

Actually hang on, don't merge - there are some bugs that #141 masked when I tested this out elsewhere.

[WIP] Add publish to heroku support 274284246  
346124764 https://github.com/simonw/datasette/pull/104#issuecomment-346124764 https://api.github.com/repos/simonw/datasette/issues/104 MDEyOklzc3VlQ29tbWVudDM0NjEyNDc2NA== jacobian 21148 2017-11-21T18:52:14Z 2017-11-21T18:52:14Z CONTRIBUTOR

OK, now this should work.

[WIP] Add publish to heroku support 274284246  
346244871 https://github.com/simonw/datasette/issues/14#issuecomment-346244871 https://api.github.com/repos/simonw/datasette/issues/14 MDEyOklzc3VlQ29tbWVudDM0NjI0NDg3MQ== jacobian 21148 2017-11-22T05:06:30Z 2017-11-22T05:06:30Z CONTRIBUTOR

I'd also suggest taking a look at stevedore, which has a ton of tools for doing plugin stuff. I've had good luck with it in the past.
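stevedore is built on setuptools entry points; the discovery pattern it wraps can be sketched with the standard library alone (the group name below is hypothetical):

```python
from importlib.metadata import entry_points

def discover_plugins(group):
    """Load every object registered under an entry-point group.

    stevedore's ExtensionManager layers error handling, namespaces,
    and map/call helpers on top of this same mechanism.
    """
    eps = entry_points()
    # Python 3.10+ exposes .select(); older versions return a dict.
    if hasattr(eps, "select"):
        selected = eps.select(group=group)
    else:
        selected = eps.get(group, [])
    return {ep.name: ep.load() for ep in selected}
```

Plugins then register themselves in their own setup metadata under the agreed group name, and the host application never has to import them by name.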

Datasette Plugins 267707940  
552134876 https://github.com/dogsheep/twitter-to-sqlite/issues/29#issuecomment-552134876 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/29 MDEyOklzc3VlQ29tbWVudDU1MjEzNDg3Ng== jacobian 21148 2019-11-09T20:33:38Z 2019-11-09T20:33:38Z NONE

❤️ thanks!

`import` command fails on empty files 518725064  
558687342 https://github.com/simonw/datasette/issues/639#issuecomment-558687342 https://api.github.com/repos/simonw/datasette/issues/639 MDEyOklzc3VlQ29tbWVudDU1ODY4NzM0Mg== jacobian 21148 2019-11-26T15:40:00Z 2019-11-26T15:40:00Z CONTRIBUTOR

A bit of background: the reason heroku git:clone brings down an empty directory is because datasette publish heroku uses the builds API, rather than a git push, to release the app. I originally did this because it seemed like a lower bar than having a working git, but the downside is, as you found out, that tweaking the created app is hard.

So there's one option -- change datasette publish heroku to use git push instead of heroku builds:create.

@pkoppstein - what you suggested seems like it ought to work (you don't need maintenance mode, though). I'm not sure why it doesn't.

You could also look into using the slugs API to download the slug, change metadata.json, re-pack and re-upload the slug.

Ultimately, though, I think @simonw's idea of reading metadata.json from an external source might be better (#357). Reading from an alternate URL would be fine, or you could also just stuff the whole metadata.json into a Heroku config var, and write a plugin to read it from there.

Hope this helps a bit!
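The config-var idea could be sketched like this: since Heroku exposes config vars to the dyno as environment variables, a plugin only needs to parse one of them. The variable name here is an assumption:

```python
import json
import os

def metadata_from_config_var(var_name="DATASETTE_METADATA"):
    """Return metadata parsed from an environment variable, or an
    empty dict if the variable is unset.

    A sketch of the plugin idea above; the variable name is hypothetical.
    """
    raw = os.environ.get(var_name)
    return json.loads(raw) if raw else {}
```

Updating metadata then becomes `heroku config:set DATASETTE_METADATA='{"title": ...}'` followed by the automatic dyno restart — no re-deploy of the slug required.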

updating metadata.json without recreating the app 527670799  
754721153 https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54 MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw== jacobian 21148 2021-01-05T15:51:09Z 2021-01-05T15:51:09Z NONE

Correction: the failure is on lists-member.js (I was thrown by the block variable name, but that's just a coincidence)

Archive import appears to be broken on recent exports 779088071  
754728696 https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-754728696 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55 MDEyOklzc3VlQ29tbWVudDc1NDcyODY5Ng== jacobian 21148 2021-01-05T16:02:55Z 2021-01-05T16:02:55Z NONE

This now works for me, though I'm entirely unsure whether it's a just-my-export thing or a wider issue. Also, this doesn't contain any tests. So I'm not sure if there's more work to be done here, or if this is good enough.

Fix archive imports 779211940  
754729035 https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754729035 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54 MDEyOklzc3VlQ29tbWVudDc1NDcyOTAzNQ== jacobian 21148 2021-01-05T16:03:29Z 2021-01-05T16:03:29Z NONE

I was able to fix this, at least enough to get my archive to import. Not sure if there's more work to be done here or not.

Archive import appears to be broken on recent exports 779088071  
760950128 https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-760950128 https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55 MDEyOklzc3VlQ29tbWVudDc2MDk1MDEyOA== jacobian 21148 2021-01-15T13:44:52Z 2021-01-15T13:44:52Z NONE

I found and fixed another bug, this one around importing the tweets table. @simonw let me know if you'd prefer this broken out into multiple PRs, happy to do that if it makes review/merging easier.

Fix archive imports 779211940  
799002993 https://github.com/simonw/datasette/issues/236#issuecomment-799002993 https://api.github.com/repos/simonw/datasette/issues/236 MDEyOklzc3VlQ29tbWVudDc5OTAwMjk5Mw== jacobian 21148 2021-03-14T23:41:51Z 2021-03-14T23:41:51Z CONTRIBUTOR

Now that Lambda supports Docker, this probably is a bit easier and may be able to build on top of the existing package command.

There are weirdnesses in how the command actually gets invoked; the aws-lambda-python image shows a bit of that. So Datasette would probably need some sort of Lambda-specific entry point to make this work.
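A Lambda-specific entry point for a container image might look roughly like this. Everything here is an assumption — Mangum as the ASGI-to-Lambda adapter, the file names, and the paths — not an actual Datasette feature:

```dockerfile
# Hypothetical Lambda container image for Datasette.
FROM public.ecr.aws/lambda/python:3.9
RUN pip install datasette mangum
# handler.py would wrap Datasette's ASGI app, e.g.:
#   from mangum import Mangum
#   from datasette.app import Datasette
#   handler = Mangum(Datasette(["data.db"]).app())
COPY data.db handler.py ${LAMBDA_TASK_ROOT}/
CMD ["handler.handler"]
```

The `CMD ["handler.handler"]` line is the Lambda-specific entry point in question: the AWS base image's runtime interface client invokes that callable for each event, instead of Datasette's usual HTTP server.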

datasette publish lambda plugin 317001500  
799003172 https://github.com/simonw/datasette/issues/236#issuecomment-799003172 https://api.github.com/repos/simonw/datasette/issues/236 MDEyOklzc3VlQ29tbWVudDc5OTAwMzE3Mg== jacobian 21148 2021-03-14T23:42:57Z 2021-03-14T23:42:57Z CONTRIBUTOR

Oh, and the container image can be up to 10GB, so the EFS step might not be needed except for pretty big stuff.

datasette publish lambda plugin 317001500  


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);