id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason
1174717287,I_kwDOBm6k_c5GBMNn,1674,Tweak design of /.json,9599,open,0,,3268330,1,2022-03-20T22:58:01Z,2022-03-20T22:58:40Z,,OWNER,,"https://latest.datasette.io/.json

Currently:
```json
{
  ""_memory"": {
    ""name"": ""_memory"",
    ""hash"": null,
    ""color"": ""a6c7b9"",
    ""path"": ""/_memory"",
    ""tables_and_views_truncated"": [],
    ""tables_and_views_more"": false,
    ""tables_count"": 0,
    ""table_rows_sum"": 0,
    ""show_table_row_counts"": false,
    ""hidden_table_rows_sum"": 0,
    ""hidden_tables_count"": 0,
    ""views_count"": 0,
    ""private"": false
  },
  ""fixtures"": {
    ""name"": ""fixtures"",
    ""hash"": ""645005884646eb941c89997fbd1c0dd6be517cb1b493df9816ae497c0c5afbaa"",
    ""color"": ""645005"",
    ""path"": ""/fixtures"",
    ""tables_and_views_truncated"": [
      {
        ""name"": ""compound_three_primary_keys"",
        ""columns"": [
          ""pk1"",
          ""pk2"",
          ""pk3"",
          ""content""
        ],
        ""primary_keys"": [
          ""pk1"",
          ""pk2"",
          ""pk3""
        ],
        ""count"": 1001,
        ""hidden"": false,
        ""fts_table"": null,
        ""num_relationships_for_sorting"": 0,
        ""private"": false
      },
```
As of this issue the `""path""` key is confusing: it doesn't match what https://latest.datasette.io/-/databases returns:

- #1668",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1674/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
910088936,MDU6SXNzdWU5MTAwODg5MzY=,1355,datasette --get should efficiently handle streaming CSV,9599,open,0,,,2,2021-06-03T04:40:40Z,2022-03-20T22:38:53Z,,OWNER,,"It would be great if you could use `datasette --get` to run queries that return streaming CSV data without running out of RAM.
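
A chunked generator along these lines could keep memory flat while `--get` streams CSV (a hypothetical sketch, not Datasette's actual implementation):

```python
import csv
import io
import sqlite3

def stream_csv(conn, sql, chunk_size=100):
    # Yield CSV text incrementally instead of building the whole
    # document in memory: header first, then fixed-size row chunks.
    cursor = conn.execute(sql)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([col[0] for col in cursor.description])
    yield buf.getvalue()
    buf.seek(0)
    buf.truncate(0)
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        writer.writerows(rows)
        yield buf.getvalue()
        buf.seek(0)
        buf.truncate(0)
```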

Current implementation looks like it loads the entire result into memory first: https://github.com/simonw/datasette/blob/f78ebdc04537a6102316d6dbbf6c887565806078/datasette/cli.py#L546-L552",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1355/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1174708375,I_kwDOBm6k_c5GBKCX,1673,Streaming CSV spends a lot of time in `table_column_details`,9599,open,0,,,1,2022-03-20T22:25:28Z,2022-03-20T22:34:06Z,,OWNER,,"At least I think it does. I tried running `py-spy top -p $PID` against a Datasette process that was trying to do:

    datasette covid.db --get '/covid/ny_times_us_counties.csv?_size=10&_stream=on'

While investigating:
- #1355

And spotted this:
```
datasette covid.db --get '/covid/ny_times_us_counties.csv?_size=10&_stream=on' (python v3.10.2)
Total Samples 5800
GIL: 71.00%, Active: 98.00%, Threads: 4

  %Own   %Total  OwnTime  TotalTime  Function (filename:line)                                                                                                                                            
  8.00%   8.00%    4.32s     4.38s   sql_operation_in_thread (datasette/database.py:212)
  5.00%   5.00%    3.77s     3.93s   table_column_details (datasette/utils/__init__.py:614)
  6.00%   6.00%    3.72s     3.72s   _worker (concurrent/futures/thread.py:81)
  7.00%   7.00%    2.98s     2.98s   _read_from_self (asyncio/selector_events.py:120)
  5.00%   6.00%    2.35s     2.49s   detect_fts (datasette/utils/__init__.py:571)
  4.00%   4.00%    1.34s     1.34s   _write_to_self (asyncio/selector_events.py:140)
```
Relevant code: https://github.com/simonw/datasette/blob/798f075ef9b98819fdb564f9f79c78975a0f71e8/datasette/utils/__init__.py#L609-L625
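
One possible mitigation would be to memoize the per-table details for the lifetime of a connection, so a streaming request pays the PRAGMA cost once rather than once per chunk. A hypothetical sketch (the helper here mirrors, but is not, Datasette's internal function):

```python
import sqlite3

_details_cache = {}

def table_column_details(conn, table):
    # One row per column from PRAGMA table_xinfo: (cid, name, type, ...)
    return [row[1] for row in conn.execute('PRAGMA table_xinfo(%s)' % table)]

def cached_column_details(conn, table):
    # Memoize per (connection, table) so repeated lookups during a
    # single streaming response hit the cache instead of SQLite.
    key = (id(conn), table)
    if key not in _details_cache:
        _details_cache[key] = table_column_details(conn, table)
    return _details_cache[key]
```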
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1673/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1174697144,I_kwDOBm6k_c5GBHS4,1672,Refactor CSV handling code out of DataView,9599,open,0,,3268330,1,2022-03-20T21:47:00Z,2022-03-20T21:52:39Z,,OWNER,,"> I think the way to get rid of most of the remaining complexity in `DataView` is to refactor how CSV stuff works - pulling it in line with other export formats and extracting the streaming mechanism. Opening a fresh issue for that.

_Originally posted by @simonw in https://github.com/simonw/datasette/issues/1660#issuecomment-1073355032_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1672/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
688351054,MDU6SXNzdWU2ODgzNTEwNTQ=,140,Idea: insert-files mechanism for adding extra columns with fixed values,9599,open,0,,,1,2020-08-28T20:57:36Z,2022-03-20T19:45:45Z,,OWNER,,"Say, for example, you want to populate a `file_type` column with the value `gif`. That could work like this:

```
sqlite-utils insert-files gifs.db images *.gif \
    -c path -c md5 -c last_modified:mtime \
    -c file_type:text:gif --pk=path
```
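The colon-separated spec could be parsed along these lines (hypothetical sketch, not sqlite-utils' actual code):

```python
def parse_column_spec(spec):
    # name[:type[:fixed value]] - split at most twice so the
    # fixed value itself may contain colons.
    parts = spec.split(':', 2)
    name = parts[0]
    coltype = parts[1] if len(parts) > 1 else None
    fixed_value = parts[2] if len(parts) > 2 else None
    return name, coltype, fixed_value
```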
So the `file_type` column is defined as a `text` column whose fixed value follows a second colon.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/140/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,