issues: 751195017
field | value
---|---
id | 751195017
node_id | MDU6SXNzdWU3NTExOTUwMTc=
number | 1111
title | Accessing a database's `.json` is slow for very large SQLite files
user | 15178711
state | open
locked | 0
assignee |
milestone |
comments | 3
created_at | 2020-11-26T00:27:27Z
updated_at | 2021-01-04T19:57:53Z
closed_at |
author_association | CONTRIBUTOR
pull_request |
repo | 107914493
type | issue
active_lock_reason |
performed_via_github_app |
reactions | { "url": "https://api.github.com/repos/simonw/datasette/issues/1111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
draft |
state_reason |

body:

I have a SQLite DB that's pretty large, 23GB and something like 300 million rows. I expect that most queries I run on it will be slow, which is fine, but there are some things that Datasette does that make working with the DB very slow. Specifically, when I access the database's `.json` endpoint, the response takes a very long time.
I suspect this is because a `select count(*)` query is being run against the table, which is slow on its own:

```bash
$ time sqlite3 out.db < <(echo "select count(*) from PageviewsHour;")
362794272

real    0m44.523s
user    0m2.497s
sys     0m6.703s
```

I'm using the
More than happy to debug further, or send a PR if you like one of the proposals above!
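For illustration, here is a minimal Python sketch of one possible mitigation for the slow count: attempt the exact `count(*)` but abandon it after a short deadline and fall back to a cheap `max(rowid)` estimate. This is not Datasette's actual implementation; the `count_with_time_limit` helper is hypothetical, and only `out.db` and `PageviewsHour` are taken from the timing above.

```python
import sqlite3
import threading

# Hypothetical helper, not part of Datasette: run an exact count(*), but give
# up after `limit_seconds` and fall back to a cheap estimate instead.
def count_with_time_limit(db_path, table, limit_seconds=1.0):
    conn = sqlite3.connect(db_path)
    # Connection.interrupt() may be called from another thread to abort a
    # running query; a Timer gives us a simple deadline.
    timer = threading.Timer(limit_seconds, conn.interrupt)
    timer.start()
    try:
        # Exact count: a full scan, hence the ~44 s timing above on 300M rows.
        return conn.execute(f"select count(*) from [{table}]").fetchone()[0]
    except sqlite3.OperationalError:
        # The interrupted query raises OperationalError; estimate instead.
        # max(rowid) is near-instant but over-counts if rows were ever deleted,
        # and assumes an ordinary rowid table.
        return conn.execute(f"select max(rowid) from [{table}]").fetchone()[0]
    finally:
        timer.cancel()
        conn.close()

print(count_with_time_limit("out.db", "PageviewsHour"))
```

The trade-off is accuracy for latency: the estimate avoids the full scan that makes the `.json` endpoint hang on a 23GB database, at the cost of over-counting if rows have been deleted.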