{"id": 807174161, "node_id": "MDU6SXNzdWU4MDcxNzQxNjE=", "number": 227, "title": "Error reading csv files with large column data", "user": {"value": 295329, "label": "camallen"}, "state": "closed", "locked": 0, "assignee": null, "milestone": null, "comments": 4, "created_at": "2021-02-12T11:51:47Z", "updated_at": "2021-02-16T11:48:03Z", "closed_at": "2021-02-14T21:17:19Z", "author_association": "NONE", "pull_request": null, "body": "*Feel free to close this issue - I mostly added it for reference for future folks that run into this :)*\r\n\r\nI have a CSV file with one column that has very long strings. When i try to import this file via the `insert` command I get the following error: \r\n```\r\nsqlite-utils insert database.db table_name file_with_large_column.csv\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/sqlite-utils\", line 10, in \r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py\", line 774, in insert\r\n default=default,\r\n File \"/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py\", line 705, in insert_upsert_implementation\r\n docs, pk=pk, batch_size=batch_size, alter=alter, **extra_kwargs\r\n File \"/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py\", line 1852, in insert_all\r\n first_record = next(records)\r\n File \"/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py\", line 703, in \r\n docs = (decode_base64_values(doc) for doc in docs)\r\n File \"/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py\", line 681, in \r\n docs = (dict(zip(headers, row)) for row in reader)\r\n_csv.Error: field larger than field limit (131072)\r\n```\r\nBuilt with the docker image `datasetteproject/datasette:0.54` with the following versions:\r\n```\r\n# sqlite-utils --version\r\nsqlite-utils, version 3.4.1\r\n\r\n# datasette --version\r\ndatasette, version 0.54\r\n```\r\nIt appears this is a [known issue](https://stackoverflow.com/a/54517228/2761423) reading in csv files in python and [doesn't look to be modifiable](https://github.com/python/cpython/blob/ea46579067fd2d4e164d6605719ffec690c4d621/Modules/_csv.c#L1685) through system / env vars (i may be very wrong on this).\r\n\r\nNoting that using sqlite3 `import` command work without error (not using the python csv reader)\r\n```\r\nsqlite3 database.db\r\nsqlite> .mode csv\r\nsqlite> .import file_with_large_column.csv table_name\r\n```\r\nSadly I couldn't see an easy way around this while using the cli as it appears this value needs to be changed in python code. FWIW I've switched to using https://datasette.io/tools/csvs-to-sqlite for importing csv data and it's working well. 
\r\n\r\nFinally, I'm loving https://datasette.io/. Thank you very much for an amazing tool and data ecosystem \ud83d\ude47\u200d\u2640\ufe0f ", "repo": {"value": 140912432, "label": "sqlite-utils"}, "type": "issue", "active_lock_reason": null, "performed_via_github_app": null, "reactions": "{\"url\": \"https://api.github.com/repos/simonw/sqlite-utils/issues/227/reactions\", \"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "draft": null, "state_reason": "completed"}