issue_comments
9 rows where author_association = "OWNER", created_at is on date 2019-07-19 and user = 9599, sorted by updated_at descending
513262013 · simonw (9599) · OWNER
https://github.com/simonw/sqlite-utils/issues/42#issuecomment-513262013
Issue 470345929: table.extract(...) method and "sqlite-utils extract" command
Created 2019-07-19T14:58:23Z · updated 2020-09-22T18:12:11Z · reactions: none

CLI design idea:

Here we just specify the original table and column - the new extracted table will automatically be called "company_name" and will have "id" and "value" columns by default. To set a custom extract table:

And for extracting multiple columns and renaming them on the created table, maybe something like this:
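The operation sketched in this comment (pull a column's values out into a lookup table with "id" and "value" columns, then replace the originals with foreign keys) can be illustrated with plain sqlite3. Everything here is an illustrative sketch, not the proposed API; the "sales" table and "extract_column" helper are invented for the example:

```python
import sqlite3

def extract_column(db, table, column, new_table):
    """Pull distinct values of `column` into `new_table` and
    replace them in `table` with integer foreign keys."""
    # Lookup table with the default "id" and "value" columns
    # described in the comment.
    db.execute(
        f"CREATE TABLE IF NOT EXISTS [{new_table}] "
        "([id] INTEGER PRIMARY KEY, [value] TEXT UNIQUE)"
    )
    # Get-or-create a lookup row for every distinct value.
    db.execute(
        f"INSERT OR IGNORE INTO [{new_table}] ([value]) "
        f"SELECT DISTINCT [{column}] FROM [{table}]"
    )
    # Swap each text value for its lookup id. A full implementation
    # would also retype the column to INTEGER and add the foreign
    # key constraint; that part is omitted here.
    db.execute(
        f"UPDATE [{table}] SET [{column}] = (SELECT [id] FROM "
        f"[{new_table}] WHERE [value] = [{table}].[{column}])"
    )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE [sales] ([company_name] TEXT, [amount] INTEGER)")
db.executemany(
    "INSERT INTO [sales] VALUES (?, ?)",
    [("Acme", 10), ("Globex", 20), ("Acme", 5)],
)
extract_column(db, "sales", "company_name", "company_name")
print(db.execute("SELECT [id], [value] FROM [company_name] ORDER BY [id]").fetchall())
```

The `INSERT OR IGNORE` against the unique "value" column is what makes the get-or-create step idempotent.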
513373673 · simonw (9599) · OWNER
https://github.com/simonw/datasette/issues/562#issuecomment-513373673
Issue 470542938: Facet by array shouldn't suggest for arrays that are not arrays-of-strings
Created 2019-07-19T20:52:04Z · updated 2019-07-19T20:52:04Z · reactions: none

I'll do this as part of #551
513317952 · simonw (9599) · OWNER
https://github.com/simonw/datasette/issues/537#issuecomment-513317952
Issue 463544206: Populate "endpoint" key in ASGI scope
Created 2019-07-19T17:49:06Z · updated 2019-07-19T17:49:06Z · reactions: none

It strikes me that if scope is indeed meant to stay immutable, the alternative way of solving this would be to add an outbound custom request header with the endpoint -
513307487 · simonw (9599) · OWNER
https://github.com/simonw/datasette/issues/537#issuecomment-513307487
Issue 463544206: Populate "endpoint" key in ASGI scope
Created 2019-07-19T17:17:43Z · updated 2019-07-19T17:17:43Z · reactions: none

Huh, interesting. I'd got it into my head that scope should not be mutated under any circumstances - if that's not true and it's mutable, there's all kinds of useful things we could do with it.
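For context, "mutating" the scope in ASGI middleware is usually done by copying it and passing an augmented dict down to the wrapped app, which sidesteps the immutability question. A minimal sketch; the "endpoint" value is a made-up placeholder, not what Datasette populates:

```python
import asyncio

class EndpointMiddleware:
    """Wraps an ASGI app and adds a custom key to the scope."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        # Copy rather than edit in place, treating the incoming
        # scope as immutable.
        scope = dict(scope, endpoint="placeholder.view_name")
        await self.app(scope, receive, send)

seen = {}

async def inner_app(scope, receive, send):
    # Record what the wrapped application actually receives.
    seen.update(scope)

app = EndpointMiddleware(inner_app)
asyncio.run(app({"type": "http", "path": "/"}, None, None))
print(seen["endpoint"])
```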
513273003 · simonw (9599) · OWNER
https://github.com/simonw/datasette/issues/537#issuecomment-513273003
Issue 463544206: Populate "endpoint" key in ASGI scope
Created 2019-07-19T15:28:42Z · updated 2019-07-19T15:28:42Z · reactions: 1 (laugh)

Asked about this on Twitter: https://twitter.com/simonw/status/1152238730259791877
513272392 · simonw (9599) · OWNER
https://github.com/simonw/datasette/issues/537#issuecomment-513272392
Issue 463544206: Populate "endpoint" key in ASGI scope
Created 2019-07-19T15:27:03Z · updated 2019-07-19T15:27:03Z · reactions: none

Yeah, that's a good call: the Datasette plugin mechanism, where middleware is wrapped around the outside, doesn't appear to be compatible with the Sentry mechanism of expecting that

@tomchristie is this something you've thought about?
513246831 · simonw (9599) · OWNER
https://github.com/simonw/sqlite-utils/issues/42#issuecomment-513246831
Issue 470345929: table.extract(...) method and "sqlite-utils extract" command
Created 2019-07-19T14:20:15Z · updated 2019-07-19T14:20:49Z · reactions: none

Since these operations could take a long time against large tables, it would be neat if there was a progress bar option for the CLI command. The operations are full table scans so calculating progress shouldn't be too difficult.
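Because a full table scan has a known row count up front, progress reporting is cheap: one `count(*)` gives the denominator. A stdlib-only sketch of the idea (not the eventual CLI option):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE [big] ([value] TEXT)")
db.executemany("INSERT INTO [big] VALUES (?)", [(str(i),) for i in range(1000)])

# count(*) up front supplies the progress bar's denominator.
total = db.execute("SELECT count(*) FROM [big]").fetchone()[0]
done = 0
for (value,) in db.execute("SELECT [value] FROM [big]"):
    done += 1  # a real CLI would advance a progress bar here
    if done % 250 == 0:
        print(f"{done}/{total} ({done * 100 // total}%)")
```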
513246124 · simonw (9599) · OWNER
https://github.com/simonw/sqlite-utils/issues/42#issuecomment-513246124
Issue 470345929: table.extract(...) method and "sqlite-utils extract" command
Created 2019-07-19T14:18:35Z · updated 2019-07-19T14:19:40Z · reactions: none

How about the Python version? That should be easier to design.

Would also be nice if there was a syntax for saying "... and use the value from this column as the primary key column in the newly created table".
513244121 · simonw (9599) · OWNER
https://github.com/simonw/sqlite-utils/issues/42#issuecomment-513244121
Issue 470345929: table.extract(...) method and "sqlite-utils extract" command
Created 2019-07-19T14:13:33Z · updated 2019-07-19T14:13:33Z · reactions: none

So what could the interface to this look like? Especially for the CLI? One option:

Tricky thing here is that it's quite a large number of positional arguments:

It would be great if this could support multiple columns - for when a spreadsheet has e.g. a "Company Name", "Company Address" pair of fields that always match each other and are duplicated many times. This could be handled by creating the new table with two columns that are indexed as a unique compound key. Then you can easily get-or-create on the pairs (or triples or whatever) from the original table. Challenge here is what the CLI syntax should look like. Something like this?

Perhaps the columns in the new table are FORCED to be the same as the old ones, hence avoiding some options? Bit restrictive… maybe they default to the same but you can customize?
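The multi-column case described above (a "Company Name" / "Company Address" pair deduplicated via a compound unique key, then get-or-create on the pairs) can be sketched the same way. Table and column names are invented for the illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE [rows] ([company_name] TEXT, "
    "[company_address] TEXT, [amount] INTEGER)"
)
db.executemany("INSERT INTO [rows] VALUES (?, ?, ?)", [
    ("Acme", "1 Main St", 10),
    ("Acme", "1 Main St", 20),
    ("Globex", "5 High St", 30),
])

# New table with the pair indexed as a unique compound key,
# which is what makes get-or-create on (name, address) pairs work.
db.execute(
    "CREATE TABLE [companies] ([id] INTEGER PRIMARY KEY, "
    "[company_name] TEXT, [company_address] TEXT, "
    "UNIQUE ([company_name], [company_address]))"
)
db.execute(
    "INSERT OR IGNORE INTO [companies] ([company_name], [company_address]) "
    "SELECT DISTINCT [company_name], [company_address] FROM [rows]"
)
print(db.execute("SELECT * FROM [companies] ORDER BY [id]").fetchall())
```

Triples and wider tuples work identically; the UNIQUE constraint just grows more columns.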
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
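The filter described at the top of the page (author_association = "OWNER", created_at on 2019-07-19, user = 9599, sorted by updated_at descending) maps onto a query against this schema roughly as follows. This is a reconstruction for illustration, not the SQL the page actually ran, and foreign key clauses are dropped so the snippet is self-contained:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE [issue_comments] (
  [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
  [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
  [author_association] TEXT, [body] TEXT, [reactions] TEXT,
  [issue] INTEGER, [performed_via_github_app] TEXT
);
""")
db.execute(
    "INSERT INTO [issue_comments] ([id], [user], [author_association], "
    "[created_at], [updated_at], [body]) VALUES "
    "(513262013, 9599, 'OWNER', '2019-07-19T14:58:23Z', "
    "'2020-09-22T18:12:11Z', 'CLI design idea')"
)
# date() truncates the ISO timestamp to its date part, which is how
# the "created_at (date)" style of filter can be expressed in SQL.
rows = db.execute("""
    SELECT [id], [created_at] FROM [issue_comments]
    WHERE [author_association] = 'OWNER'
      AND [user] = 9599
      AND date([created_at]) = '2019-07-19'
    ORDER BY [updated_at] DESC
""").fetchall()
print(rows)
```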