issues: 403625674
field | value |
---|---|
id | 403625674 |
node_id | MDU6SXNzdWU0MDM2MjU2NzQ= |
number | 7 |
title | .insert_all() should accept a generator and process it efficiently |
user | 9599 |
state | closed |
locked | 0 |
assignee | |
milestone | |
comments | 3 |
created_at | 2019-01-28T02:11:58Z |
updated_at | 2019-01-28T06:26:53Z |
closed_at | 2019-01-28T06:26:53Z |
author_association | OWNER |
pull_request | |
body | Right now you have to load every record into memory before passing the list to `.insert_all()`. If you want to process millions of rows, this is inefficient. Python has generators - we should use them! The only catch here is that part of the magic of `sqlite-utils` is that it detects the column types for you by inspecting the records, so with a generator it can only peek at the first 1,000 records before creating the table. If a record outside of those first 1,000 has a rogue column, we can crash with an error. This will free us up to make the […] |
repo | 140912432 |
type | issue |
active_lock_reason | |
performed_via_github_app | |
reactions | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
draft | |
state_reason | completed |
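The trade-off the body describes (peek at the first 1,000 records of the generator to detect column types, then accept a crash if a later record has a rogue column) can be illustrated with the standard library alone. The following is a minimal sketch of that idea, not sqlite-utils' actual implementation; the function name `insert_all_from_generator`, its `detect_limit` and `batch_size` parameters, and the type-detection rules are invented for illustration.

```python
import itertools
import sqlite3


def insert_all_from_generator(conn, table, records, detect_limit=1000, batch_size=100):
    """Stream records from a generator into a new SQLite table.

    Hypothetical sketch of the approach in the issue: peek at the first
    `detect_limit` records to guess column types, then chain them back
    onto the stream so nothing is lost.
    """
    records = iter(records)
    head = list(itertools.islice(records, detect_limit))
    if not head:
        return
    # Guess each column's type from the first non-None value seen for it
    columns = {}
    for record in head:
        for key, value in record.items():
            if key not in columns and value is not None:
                columns[key] = {int: "INTEGER", float: "REAL"}.get(type(value), "TEXT")
    defs = ", ".join("[{}] {}".format(name, dtype) for name, dtype in columns.items())
    conn.execute("CREATE TABLE IF NOT EXISTS [{}] ({})".format(table, defs))
    names = ", ".join("[{}]".format(c) for c in columns)
    placeholders = ", ".join("?" for _ in columns)
    sql = "INSERT INTO [{}] ({}) VALUES ({})".format(table, names, placeholders)
    # Re-attach the peeked records, then insert in fixed-size batches so the
    # full generator is never held in memory at once
    stream = itertools.chain(head, records)
    while True:
        batch = list(itertools.islice(stream, batch_size))
        if not batch:
            break
        for record in batch:
            # A column not seen in the peeked prefix is the "rogue column"
            # case the issue accepts as a crash
            rogue = set(record) - set(columns)
            if rogue:
                raise ValueError("Rogue columns after first {} records: {}".format(detect_limit, rogue))
        conn.executemany(sql, [[r.get(c) for c in columns] for r in batch])
    conn.commit()


# Hypothetical usage: a million rows streamed without building a list in memory
conn = sqlite3.connect("demo.db")
insert_all_from_generator(conn, "squares", ({"n": n, "n_squared": n * n} for n in range(1_000_000)))
```

Since the issue was closed as completed, current releases of sqlite-utils do accept a generator in `.insert_all()`, with a `batch_size` argument controlling how many records are inserted per batch.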