Comments on simonw/sqlite-utils issue #297

**simonw** (OWNER), 2021-07-18 (edited 2022-06-21) (https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052693):

Another implementation option would be to use the CSV virtual table mechanism. This could avoid shelling out to the `sqlite3` binary, but it requires solving the harder problem of compiling and distributing a loadable SQLite extension: https://www.sqlite.org/csv.html

(It would be neat to produce a Python wheel of this; see https://simonwillison.net/2022/May/23/bundling-binary-tools-in-python-wheels/)

This would also help solve the challenge of making this optimization available to the `sqlite-utils memory` command. That command operates against an in-memory database, so it's not obvious how it could shell out to a binary.

**simonw** (OWNER), 2021-07-18 (https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052852):

I'm not too worried about `sqlite-utils memory`: if your data is large enough to benefit from this optimization, you should probably be analyzing it in a real file rather than a disposable in-memory database.
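For context, here is a minimal sketch of what the extension-based approach could look like from Python, assuming `csv.c` from https://www.sqlite.org/csv.html has already been compiled to a local shared library. The `./csv.so` path and the `data.csv` filename are illustrative assumptions, and `enable_load_extension()` is only available when Python's `sqlite3` module was built with extension loading enabled:

```python
import sqlite3

# Assumption: csv.c from https://www.sqlite.org/csv.html compiled locally first,
# e.g. gcc -fPIC -shared csv.c -o csv.so (use .dylib on macOS, .dll on Windows)
CSV_EXTENSION = "./csv.so"

db = sqlite3.connect(":memory:")  # the same approach works for an in-memory database
db.enable_load_extension(True)    # requires a Python build with extension loading enabled
db.load_extension(CSV_EXTENSION)
db.enable_load_extension(False)   # re-disable loading once the extension is in

# Expose the CSV file directly as a virtual table: no INSERTs, no shelling out
db.execute(
    "CREATE VIRTUAL TABLE temp.raw USING csv(filename='data.csv', header=true)"
)

# Optionally materialize it into a regular table so it can be indexed
db.execute("CREATE TABLE imported AS SELECT * FROM temp.raw")
print(db.execute("SELECT count(*) FROM imported").fetchone()[0])
```

The virtual table reads from the file at query time rather than copying rows up front, so the materializing `CREATE TABLE ... AS SELECT` step is where the import cost is actually paid.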