html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app
https://github.com/simonw/datasette/issues/1836#issuecomment-1270992795,https://api.github.com/repos/simonw/datasette/issues/1836,1270992795,IC_kwDOBm6k_c5Lwc-b,536941,2022-10-07T01:29:15Z,2022-10-07T01:50:14Z,CONTRIBUTOR,"fascinatingly, telling python to open sqlite in read only mode makes this layer have a size of 0

```python
# test_sql_ro.py
import sqlite3
import sys

db_name = sys.argv[1]

conn = sqlite3.connect(f'file:/app/{db_name}?mode=ro', uri=True)
cur = conn.cursor()
cur.execute('select count(*) from filing')
print(cur.fetchone())
```

that's quite weird because setting the file permissions to read only didn't do anything. (on reflection, that chmod isn't doing anything because the dockerfile commands are run as root)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,
https://github.com/simonw/datasette/issues/1836#issuecomment-1270988081,https://api.github.com/repos/simonw/datasette/issues/1836,1270988081,IC_kwDOBm6k_c5Lwb0x,536941,2022-10-07T01:19:01Z,2022-10-07T01:27:35Z,CONTRIBUTOR,"okay, some progress!! running some sql against a database file causes that file to get duplicated even if it doesn't apparently change the file.

make a little test script like this:

```python
# test_sql.py
import sqlite3
import sys

db_name = sys.argv[1]

conn = sqlite3.connect(f'file:/app/{db_name}', uri=True)
cur = conn.cursor()
cur.execute('select count(*) from filing')
print(cur.fetchone())
```

then

```docker
RUN python test_sql.py nlrb.db
```

produced a layer that's the same size as `nlrb.db`!!
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,
https://github.com/simonw/datasette/issues/1836#issuecomment-1270936982,https://api.github.com/repos/simonw/datasette/issues/1836,1270936982,IC_kwDOBm6k_c5LwPWW,536941,2022-10-07T00:52:41Z,2022-10-07T00:52:41Z,CONTRIBUTOR,"it's not that the inspect command is somehow changing the db files.
if i set them to read-only, the ""inspect"" layer still has the same very large size.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,
https://github.com/simonw/datasette/issues/1836#issuecomment-1270923537,https://api.github.com/repos/simonw/datasette/issues/1836,1270923537,IC_kwDOBm6k_c5LwMER,536941,2022-10-07T00:46:08Z,2022-10-07T00:46:08Z,CONTRIBUTOR,"i thought it was maybe to do with reading through all the files, but that does not seem to be the case. if i make a little test file like:

```python
# test_read.py
import hashlib
import sys
import pathlib

HASH_BLOCK_SIZE = 1024 * 1024


def inspect_hash(path):
    """"""Calculate the hash of a database, efficiently.""""""
    m = hashlib.sha256()
    with path.open(""rb"") as fp:
        while True:
            data = fp.read(HASH_BLOCK_SIZE)
            if not data:
                break
            m.update(data)

    return m.hexdigest()


inspect_hash(pathlib.Path(sys.argv[1]))
```

then a line in the Dockerfile like

```docker
RUN python test_read.py nlrb.db && echo ""[]"" > /etc/inspect.json
```

just produces a layer of `3B`
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,
https://github.com/simonw/datasette/issues/1480#issuecomment-1269847461,https://api.github.com/repos/simonw/datasette/issues/1480,1269847461,IC_kwDOBm6k_c5LsFWl,536941,2022-10-06T11:21:49Z,2022-10-06T11:21:49Z,CONTRIBUTOR,"thanks @simonw, i'll spend a little more time trying to figure out why this isn't working on cloudrun, and then will flip over to fly if i can't.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1015646369,
https://github.com/simonw/datasette/issues/1480#issuecomment-1268629159,https://api.github.com/repos/simonw/datasette/issues/1480,1268629159,IC_kwDOBm6k_c5Lnb6n,536941,2022-10-05T16:00:55Z,2022-10-05T16:00:55Z,CONTRIBUTOR,"as a next step, i'll fetch the docker image from the google registry, and see what memory and disk usage looks like when i run it locally.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1015646369,
https://github.com/simonw/datasette/issues/1480#issuecomment-1268613335,https://api.github.com/repos/simonw/datasette/issues/1480,1268613335,IC_kwDOBm6k_c5LnYDX,536941,2022-10-05T15:45:49Z,2022-10-05T15:45:49Z,CONTRIBUTOR,"running into this as i continue to grow my labor data warehouse. Here, a Cloud Run PM says the container size should **not** count against memory: https://stackoverflow.com/a/56570717","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1015646369,
https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223554,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223554,IC_kwDOCGYnMM5LWoVC,7908073,2022-10-01T03:42:50Z,2022-10-01T03:42:50Z,CONTRIBUTOR,oh weird.
it inserts into db2,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489, https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223363,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223363,IC_kwDOCGYnMM5LWoSD,7908073,2022-10-01T03:41:45Z,2022-10-01T03:41:45Z,CONTRIBUTOR,"``` pytest xklb/check.py --pdb xklb/check.py:11: in test_transaction assert list(db2[""t""].rows) == [] E AssertionError: assert [{'foo': 1}] == [] E + where [{'foo': 1}] = list() E + where = .rows >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > /home/xk/github/xk/lb/xklb/check.py(11)test_transaction() 9 with db1.conn: 10 db1[""t""].insert({""foo"": 1}) ---> 11 assert list(db2[""t""].rows) == [] 12 assert list(db2[""t""].rows) == [{""foo"": 1}] ``` It fails because it is already inserted. btw if you put these two lines in you pyproject.toml you can get `ipdb` in pytest ``` [tool.pytest.ini_options] addopts = ""--pdbcls=IPython.terminal.debugger:TerminalPdb --ignore=tests/data --capture=tee-sys --log-cli-level=ERROR"" ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489, https://github.com/simonw/sqlite-utils/issues/493#issuecomment-1264219650,https://api.github.com/repos/simonw/sqlite-utils/issues/493,1264219650,IC_kwDOCGYnMM5LWnYC,7908073,2022-10-01T03:22:50Z,2022-10-01T03:23:58Z,CONTRIBUTOR,"this is likely what you are looking for: https://stackoverflow.com/a/51076749/697964 but yeah I would say just disable smart quotes","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1386562662, https://github.com/simonw/datasette/issues/370#issuecomment-1261930179,https://api.github.com/repos/simonw/datasette/issues/370,1261930179,IC_kwDOBm6k_c5LN4bD,72577720,2022-09-29T08:17:46Z,2022-09-29T08:17:46Z,CONTRIBUTOR,"Just watched this video which demonstrates the integration of *any* webapp into JupyterLab: https://youtu.be/FH1dKKmvFtc Maybe this is the answer?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",377155320, https://github.com/simonw/datasette/issues/1062#issuecomment-1260909128,https://api.github.com/repos/simonw/datasette/issues/1062,1260909128,IC_kwDOBm6k_c5LJ_JI,536941,2022-09-28T13:22:53Z,2022-09-28T14:09:54Z,CONTRIBUTOR,"if you went this route: ```python with sqlite_timelimit(conn, time_limit_ms): c.execute(query) for chunk in c.fetchmany(chunk_size): yield from chunk ``` then `time_limit_ms` would probably have to be greatly extended, because the time spent in the loop will depend on the downstream processing. i wonder if this was why you were thinking this feature would need a dedicated connection? --- reading more, there's no real limit i can find on the number of active cursors (or more precisely active prepared statements objects, because sqlite doesn't really have cursors). maybe something like this would be okay? 
```python
with sqlite_timelimit(conn, time_limit_ms):
    c.execute(query)
    # step through at least one row to evaluate the statement,
    # not sure if this is necessary
    yield c.fetchone()
    while True:
        chunk = c.fetchmany(chunk_size)
        if not chunk:
            break
        yield from chunk
```

it seems quite weird that there isn't more of a limit on the number of active prepared statements, but i haven't been able to find one.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",732674148,
https://github.com/simonw/datasette/issues/1062#issuecomment-1260829829,https://api.github.com/repos/simonw/datasette/issues/1062,1260829829,IC_kwDOBm6k_c5LJryF,536941,2022-09-28T12:27:19Z,2022-09-28T12:27:19Z,CONTRIBUTOR,"for teaching `register_output_renderer` to stream it seems like the two options are

1. a [nested query technique](https://github.com/simonw/datasette/issues/526#issuecomment-505162238) to paginate through
2. a fetching model that looks something like:

```python
with sqlite_timelimit(conn, time_limit_ms):
    c.execute(query)
    while True:
        chunk = c.fetchmany(chunk_size)
        if not chunk:
            break
        yield from chunk
```

currently `db.execute` is not a generator, so this would probably need a new method?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",732674148,
https://github.com/simonw/datasette/issues/526#issuecomment-1259718517,https://api.github.com/repos/simonw/datasette/issues/526,1259718517,IC_kwDOBm6k_c5LFcd1,536941,2022-09-27T16:02:51Z,2022-09-27T16:04:46Z,CONTRIBUTOR,"i think that `max_returned_rows` **is** a defense mechanism, just not for connection exhaustion. `max_returned_rows` is a defense mechanism against **memory bombs**.

if you are potentially yielding out hundreds of thousands or even millions of rows, you need to be quite careful about data flow to not run out of memory on the server, or on the client. you have a lot of places in your code that are protective of that right now, but `max_returned_rows` acts as the final backstop.

so, given that, it makes sense to treat removing `max_returned_rows` altogether as a non-goal, and instead allow specific codepaths (like streaming csv's) to bypass it. that could dramatically lower the surface area for a memory-bomb attack.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,
https://github.com/simonw/datasette/issues/526#issuecomment-1258910228,https://api.github.com/repos/simonw/datasette/issues/526,1258910228,IC_kwDOBm6k_c5LCXIU,536941,2022-09-27T03:11:07Z,2022-09-27T03:11:07Z,CONTRIBUTOR,"i think this feature would be safe, as it's really only the time limit that can, and imo should, protect against long running queries, since it is pretty easy to make very expensive queries that don't return many rows.

moving away from `max_returned_rows` will require some thinking about:

1. memory usage and data flows to handle potentially very large result sets
2. 
how to avoid rendering tens or hundreds of thousands of [html rows](#1655).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,
https://github.com/simonw/datasette/issues/526#issuecomment-1258878311,https://api.github.com/repos/simonw/datasette/issues/526,1258878311,IC_kwDOBm6k_c5LCPVn,536941,2022-09-27T02:19:48Z,2022-09-27T02:19:48Z,CONTRIBUTOR,"this sql query doesn't trip up `maximum_returned_rows` but does timeout

```sql
with recursive counter(x) as (
  select 0
  union
  select x + 1 from counter
)
select * from counter LIMIT 10 OFFSET 100000000
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,
https://github.com/simonw/datasette/issues/526#issuecomment-1258871525,https://api.github.com/repos/simonw/datasette/issues/526,1258871525,IC_kwDOBm6k_c5LCNrl,536941,2022-09-27T02:09:32Z,2022-09-27T02:14:53Z,CONTRIBUTOR,"thanks @simonw, i learned something i didn't know about sqlite's execution model!

> Imagine if Datasette CSVs did allow unlimited retrievals. Someone could hit the CSV endpoint for that recursive query and tie up Datasette's SQL connection effectively forever.

why wouldn't the `sqlite_timelimit` guard prevent that?

---

on my local version which has the code to [turn off truncations for query csv](#1820), `sqlite_timelimit` does protect me.

![Screenshot 2022-09-26 at 22-14-31 Error 500](https://user-images.githubusercontent.com/536941/192415680-94b32b7f-868f-4b89-8194-5752d45f6009.png)
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,
https://github.com/simonw/datasette/issues/526#issuecomment-1258849766,https://api.github.com/repos/simonw/datasette/issues/526,1258849766,IC_kwDOBm6k_c5LCIXm,536941,2022-09-27T01:27:03Z,2022-09-27T01:27:03Z,CONTRIBUTOR,"i agree with that concern! but if i'm understanding the code correctly, `maximum_returned_rows` does not protect against long-running queries in any way.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,
https://github.com/simonw/datasette/pull/1820#issuecomment-1258803261,https://api.github.com/repos/simonw/datasette/issues/1820,1258803261,IC_kwDOBm6k_c5LB9A9,536941,2022-09-27T00:03:09Z,2022-09-27T00:03:09Z,CONTRIBUTOR,"in the pattern in this PR, `max_returned_rows` controls the maximum rows rendered through html and json, and the csv renderer bypasses that.

i think it would be better to have each of these different query renderers take more direct control over how many rows to fetch, instead of relying on the internals of the `execute` method.

generally, users will not want to paginate through tens of thousands of results, but often will want to download a full query as json or as csv.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1386456717,
https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1258712931,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1258712931,IC_kwDOCGYnMM5LBm9j,25778,2022-09-26T22:31:58Z,2022-09-26T22:31:58Z,CONTRIBUTOR,"Right. The backup command will copy tables completely, but in the case of conflicting table names, the destination gets overwritten silently. That might not be what you want here.
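For reference, a minimal sketch of that whole-database behaviour with the stdlib `sqlite3.Connection.backup` API (the file names here are assumptions):

```python
import sqlite3

src = sqlite3.connect('dogs.db')
dest = sqlite3.connect('animals.db')
with dest:
    # dest becomes an exact copy of src: anything already in
    # animals.db is silently replaced, not merged
    src.backup(dest)
src.close()
dest.close()
```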
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615, https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1258508215,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1258508215,IC_kwDOCGYnMM5LA0-3,25778,2022-09-26T19:22:14Z,2022-09-26T19:22:14Z,CONTRIBUTOR,"This might be fairly straightforward using SQLite's backup utility: https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.backup ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615, https://github.com/simonw/datasette/issues/526#issuecomment-1258337011,https://api.github.com/repos/simonw/datasette/issues/526,1258337011,IC_kwDOBm6k_c5LALLz,536941,2022-09-26T16:49:48Z,2022-09-26T16:49:48Z,CONTRIBUTOR,"i think the smallest change that gets close to what i want is to change the behavior so that `max_returned_rows` is not applied in the `execute` method when we are are asking for a csv of query. there are some infelicities for that approach, but i'll make a PR to make it easier to discuss.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902, https://github.com/simonw/datasette/issues/526#issuecomment-1258167564,https://api.github.com/repos/simonw/datasette/issues/526,1258167564,IC_kwDOBm6k_c5K_h0M,536941,2022-09-26T14:57:44Z,2022-09-26T15:08:36Z,CONTRIBUTOR,"reading the database execute method i have a few questions. https://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/database.py#L229-L242 --- unless i'm missing something (which is very likely!!), the `max_returned_rows` argument doesn't actually offer any protections against running very expensive queries. It's not like adding a `LIMIT max_rows` argument. it make sense that it isn't because, the query could already have an `LIMIT` argument. Doing something like `select * from (query) limit {max_returned_rows}` **might** be protective but wouldn't always. Instead the code executes the full original query, and if still has time it fetches out the first `max_rows + 1` rows. this *does* offer some protection of memory exhaustion, as you won't hydrate a huge result set into python (however, there are [data flow patterns](https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113) that could avoid that too) given the current architecture, i don't see how creating a new connection would be use? --- If we just removed the `max_return_rows` limitation, then i think most things would be fine **except** for the QueryViews. Right now rendering, just [5000 rows takes a lot of client-side memory](https://github.com/simonw/datasette/issues/1655) so some form of pagination would be required. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902, https://github.com/simonw/datasette/issues/1655#issuecomment-1258166572,https://api.github.com/repos/simonw/datasette/issues/1655,1258166572,IC_kwDOBm6k_c5K_hks,536941,2022-09-26T14:57:04Z,2022-09-26T14:57:04Z,CONTRIBUTOR,"I think that paginating, even in javascript, could be very helpful. 
Maybe render json or csv into the page and let javascript load that into the dom?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1163369515,
https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113,https://api.github.com/repos/simonw/datasette/issues/1727,1258129113,IC_kwDOBm6k_c5K_YbZ,536941,2022-09-26T14:30:11Z,2022-09-26T14:48:31Z,CONTRIBUTOR,"from your analysis, it seems like the GIL is blocking on loading of the data from sqlite to python (particularly in the `fetchmany` call).

this is probably a simplistic idea, but what if you had the python code in the `execute` method iterate over the cursor and yield out rows or small chunks of rows. something like:

```python
with sqlite_timelimit(conn, time_limit_ms):
    try:
        cursor = conn.cursor()
        cursor.execute(sql, params if params is not None else {})
    except:
        ...
    max_returned_rows = self.ds.max_returned_rows
    if max_returned_rows == page_size:
        max_returned_rows += 1
    if max_returned_rows and truncate:
        for i, row in enumerate(cursor):
            yield row
            if i == max_returned_rows - 1:
                break
    else:
        for row in cursor:
            yield row
        truncated = False
```

this kind of thing works well with a postgres server side cursor, but i'm not sure if it will hold for sqlite. you would still spend about the same amount of time in python and would be contending for the gil, but it could be non-blocking.

depending on the data flow, this could also have some benefit for memory. (data stays in more compact sqlite-land until you need it)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1217759117,
https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1256858763,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1256858763,IC_kwDOCGYnMM5K6iSL,7908073,2022-09-24T04:50:59Z,2022-09-24T04:52:08Z,CONTRIBUTOR,"Instead of outputting binary data to stdout the interface might be better like this

```
sqlite-utils merge animals.db cats.db dogs.db
```

similar to `zip`, `ogr2ogr`, etc.

Actually, I think this might already be possible with `ogr2ogr`. I don't believe spatial data is a requirement, though it might add an `ogc_id` column or something

```
cp cats.db animals.db
ogr2ogr -append animals.db dogs.db
ogr2ogr -append animals.db another.db
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,
https://github.com/simonw/datasette/issues/1817#issuecomment-1256781274,https://api.github.com/repos/simonw/datasette/issues/1817,1256781274,IC_kwDOBm6k_c5K6PXa,50527,2022-09-23T22:59:46Z,2022-09-23T22:59:46Z,CONTRIBUTOR,"While you are adding features, would you be future-proofing your APIs if you switched some arguments over to keyword-only arguments, or would that be too disruptive?

Thinking out loud:

```
async def render_template(
    self, templates, *, context=None, plugin_context=None, request=None, view_name=None
):
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1384273985,
https://github.com/simonw/datasette/issues/526#issuecomment-1254064260,https://api.github.com/repos/simonw/datasette/issues/526,1254064260,IC_kwDOBm6k_c5Kv4CE,536941,2022-09-21T18:17:04Z,2022-09-21T18:18:01Z,CONTRIBUTOR,"hi @simonw, this is becoming more of a bother for my [labor data warehouse](https://labordata.bunkum.us/).
Is there any research or a spike i could do that would help you investigate this issue?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902, https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1252898131,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1252898131,IC_kwDOCGYnMM5KrbVT,7908073,2022-09-20T20:51:21Z,2022-09-20T20:56:07Z,CONTRIBUTOR,"When I run `reset` it fixes my terminal. I suspect it is related to the progress bar https://linux.die.net/man/1/reset ``` 950 1s /m/d/03_Downloads 🐑 echo $TERM xterm-kitty ▓░▒░ /m/d/03_Downloads 🌏 kitty -v kitty 0.26.2 created by Kovid Goyal $ sqlite-utils insert test.db facility facility-boundary-us-all.csv --csv blah blah blah (no offense) $ $ reset $ ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903, https://github.com/simonw/datasette/issues/1813#issuecomment-1250901367,https://api.github.com/repos/simonw/datasette/issues/1813,1250901367,IC_kwDOBm6k_c5Kjz13,883348,2022-09-19T11:34:45Z,2022-09-19T11:34:45Z,CONTRIBUTOR,"oh and by writing this I just realized the difference: the URL on fly.io is with a custom SQL command whereas the local one is without. It seems that there is no pagination when using custom SQL commands which makes sense Sorry for this useless issue, maybe this can be useful for someone else / me in the future. Thanks again for this wonderful project !","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1377811868, https://github.com/simonw/datasette/issues/1810#issuecomment-1248204219,https://api.github.com/repos/simonw/datasette/issues/1810,1248204219,IC_kwDOBm6k_c5KZhW7,82988,2022-09-15T14:44:47Z,2022-09-15T14:46:26Z,CONTRIBUTOR,"A couple+ of possible use case examples: - someone has a collection of articles indexed with FTS; they want to publish a simple search tool over the results; - someone has an image collection and they want to be able to search over description text to return images; - someone has a set of locations with descriptions, and wants to run a query over places and descriptions and get results as a listing or on a map; - someone has a set of audio or video files with titles, descriptions and/or transcripts, and wants to be able to search over them and return playable versions of returned items. In many cases, I suspect the raw content will be in one table, but the search table will be a second (eg FTS) table. 
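For example, the query behind the first use case might be a join along these lines (a sketch; the table and column names are assumptions, and it assumes an FTS5 index):

```python
import sqlite3

conn = sqlite3.connect('articles.db')
rows = conn.execute(
    '''
    select articles.id, articles.title
    from articles_fts
    join articles on articles.rowid = articles_fts.rowid
    where articles_fts match ?
    order by rank
    ''',
    ['search term'],
).fetchall()
```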
Generally, the search may be over one or more joined tables, and the results constructed from one or more tables (which may or may not be distinct from the search tables).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1374626873, https://github.com/simonw/datasette/pull/1685#issuecomment-1237381620,https://api.github.com/repos/simonw/datasette/issues/1685,1237381620,IC_kwDOBm6k_c5JwPH0,49699333,2022-09-05T18:36:47Z,2022-09-05T18:36:47Z,CONTRIBUTOR,"Looks like jinja2 is no longer updatable, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1180778860, https://github.com/simonw/datasette/pull/1799#issuecomment-1237381569,https://api.github.com/repos/simonw/datasette/issues/1799,1237381569,IC_kwDOBm6k_c5JwPHB,49699333,2022-09-05T18:36:42Z,2022-09-05T18:36:42Z,CONTRIBUTOR,"Looks like aiofiles is no longer updatable, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1362242558, https://github.com/simonw/sqlite-utils/pull/480#issuecomment-1232356302,https://api.github.com/repos/simonw/sqlite-utils/issues/480,1232356302,IC_kwDOCGYnMM5JdEPO,7908073,2022-08-31T01:51:49Z,2022-08-31T01:51:49Z,CONTRIBUTOR,Thanks for pointing me to the right place,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1355433619, https://github.com/simonw/sqlite-utils/issues/467#issuecomment-1224382336,https://api.github.com/repos/simonw/sqlite-utils/issues/467,1224382336,IC_kwDOCGYnMM5I-peA,50527,2022-08-23T17:16:13Z,2022-08-23T17:16:13Z,CONTRIBUTOR,"> Should passing `alter=True` also drop any columns that aren't included in the new table structure? > > It could even spot column types that aren't correct and fix those. > > Is that consistent with the expectations set by how `alter=True` works elsewhere? I would lean towards not dropping them (or making a `drop=True` or `drop_columns=True`or `drop_missing_columns=True`) to work with existing tables easier. I do like that sqlite-utils mostly just works with existing tables but it's also nice to add to existing fields in a few cases. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1348169997, https://github.com/simonw/datasette/pull/1789#issuecomment-1223347322,https://api.github.com/repos/simonw/datasette/issues/1789,1223347322,IC_kwDOBm6k_c5I6sx6,15178711,2022-08-23T00:03:20Z,2022-08-23T00:03:20Z,CONTRIBUTOR,"@simonw to build the extension on ubuntu, you can run: ``` apt-get update && apt-get install libsqlite3-dev gcc gcc ext.c -fPIC -shared -o ext.so ``` I'm not the best with Actions, but if you set the cache key to `ext.c`, run those two commands to download dependencies + compile to `ext.so`, then the unit test should pick it up and run it correctly. 
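(For a quick local sanity check that the compiled `ext.so` actually loads, something like this works -- a sketch, assuming the extension defines the default entry point:)

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.enable_load_extension(True)
conn.load_extension('./ext.so')  # raises OperationalError if the build is broken
```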
Let me know if you want me to update the PR with that added","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1344823170, https://github.com/simonw/datasette/pull/1789#issuecomment-1221576460,https://api.github.com/repos/simonw/datasette/issues/1789,1221576460,IC_kwDOBm6k_c5Iz8cM,15178711,2022-08-21T16:16:42Z,2022-08-21T16:16:42Z,CONTRIBUTOR,"Rebased, Read the docs failure should now now fixed Re docs - ya that's a pretty ambitious page, I'm still not 100% sure what the best practices are/should be... Would be happy to make that page in a future PR","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1344823170, https://github.com/simonw/datasette/issues/1779#issuecomment-1214437408,https://api.github.com/repos/simonw/datasette/issues/1779,1214437408,IC_kwDOBm6k_c5IYtgg,536941,2022-08-14T19:42:58Z,2022-08-14T19:42:58Z,CONTRIBUTOR,thanks @simonw!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1334628400, https://github.com/simonw/datasette/issues/1779#issuecomment-1210675046,https://api.github.com/repos/simonw/datasette/issues/1779,1210675046,IC_kwDOBm6k_c5IKW9m,536941,2022-08-10T13:28:37Z,2022-08-10T13:28:37Z,CONTRIBUTOR,maybe a simpler solution is to set the maxscale to like 2? since datasette is not set up to make use of container scaling anyway?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1334628400, https://github.com/simonw/datasette/issues/1191#issuecomment-1200732975,https://api.github.com/repos/simonw/datasette/issues/1191,1200732975,IC_kwDOBm6k_c5Hkbsv,2670795,2022-08-01T05:39:27Z,2022-08-01T05:39:27Z,CONTRIBUTOR,I've got a URL shortening plugin that I would like to embed on the query page but I'd like avoid capturing the entire `query.html` template. A feature like this would solve it. Where's this at and how can I help?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",787098345, https://github.com/simonw/sqlite-utils/issues/456#issuecomment-1190277829,https://api.github.com/repos/simonw/sqlite-utils/issues/456,1190277829,IC_kwDOCGYnMM5G8jLF,536941,2022-07-20T13:19:15Z,2022-07-20T13:19:15Z,CONTRIBUTOR,hadley wickham's melt and reshape could be good inspo: http://had.co.nz/reshape/introduction.pdf,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1310243385, https://github.com/simonw/sqlite-utils/issues/456#issuecomment-1190272780,https://api.github.com/repos/simonw/sqlite-utils/issues/456,1190272780,IC_kwDOCGYnMM5G8h8M,536941,2022-07-20T13:14:54Z,2022-07-20T13:14:54Z,CONTRIBUTOR,"for example, i have data on votes that look like this: | ballot_id | option_id | choice | |-|-|-| | 1 | 1 | 0 | | 1 | 2 | 1 | | 1 | 3 | 0 | | 1 | 4 | 1 | | 2 | 1 | 1 | | 2 | 2 | 0 | | 2 | 3 | 1 | | 2 | 4 | 0 | and i want to reshape from this long form to this wide form: | ballot_id | option_id_1 | option_id_2 | option_id_3 | option_id_ 4| |-|-|-|-| -| | 1 | 0 | 1 | 0 | 1 | | 2 | 1 | 0 | 1| 0 | i could do such a think like this. 
```sql select ballot_id, sum(choice) filter (where option_id = 1) as option_id_1, sum(choice) filter (where option_id = 2) as option_id_2, sum(choice) filter (where option_id = 3) as option_id_3, sum(choice) filter (where option_id = 4) as option_id_4 from vote group by ballot_id ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1310243385, https://github.com/simonw/sqlite-utils/issues/423#issuecomment-1189010812,https://api.github.com/repos/simonw/sqlite-utils/issues/423,1189010812,IC_kwDOCGYnMM5G3t18,536941,2022-07-19T12:47:39Z,2022-07-19T12:47:39Z,CONTRIBUTOR,just ran into this!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1199158210, https://github.com/simonw/sqlite-utils/issues/449#issuecomment-1179579878,https://api.github.com/repos/simonw/sqlite-utils/issues/449,1179579878,IC_kwDOCGYnMM5GTvXm,1690072,2022-07-09T17:41:32Z,2022-07-09T17:41:50Z,CONTRIBUTOR,Learnt that the types in Sqlite-utils differ somewhat from those in Sqlite. I've changed my test to account for this difference and the test has passed successfully. I will submit a PR.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279863844, https://github.com/simonw/sqlite-utils/issues/449#issuecomment-1174027079,https://api.github.com/repos/simonw/sqlite-utils/issues/449,1174027079,IC_kwDOCGYnMM5F-jtH,1690072,2022-07-04T17:33:04Z,2022-07-04T17:48:43Z,CONTRIBUTOR,"I've written the code and test. Would you be able to advise how to compare table columns in a pytest function properly? Experiencing a challenge when comparing columns. Test: ```python def test_duplicate(fresh_db): table = fresh_db.create_table( ""table1"", { ""text_col"": str, ""float_col"": float, ""int_col"": int, ""bool_col"": bool, ""bytes_col"": bytes, ""datetime_col"": datetime.datetime, }, ) dt = datetime.datetime.now() b = bytes('hello world', 'utf-8') data = {""text_col"": ""Cleo"", ""float_col"": 3.14, ""int_col"": -2, ""bool_col"": True, ""bytes_col"": b, ""datetime_col"": str(dt)} table1 = fresh_db[""table1""] row_id = table1.insert(data).last_rowid table1.duplicate('table2') table2 = fresh_db[""table2""] assert data == table2.get(row_id) assert table1.columns == table2.columns # FAILS HERE ``` Result: ![Screenshot 2022-07-05 at 1 31 55 AM](https://user-images.githubusercontent.com/1690072/177198814-daac48c9-5746-49d0-a14a-14fe181c5a2f.png) Failure is due to column types being named differently -- e.g. 'FLOAT' vs 'REAL', 'INTEGER' vs 'INT'. How should I go about comparing columns while accounting for equivalent types? Or did I miss out something in my duplication code correctly? Here's how I did it: in `db.py`, I've added the following code: ```python class Table(Queryable): [...] def duplicate( self, name_new: str ) -> ""Table"": """""" Duplicate this table in this database. :param name_new: Name of new table. 
"""""" assert self.exists() with self.db.conn: sql = ""CREATE TABLE [{new_table}] AS SELECT * FROM [{table}];"".format( new_table = name_new, table = self.name, ) self.db.execute(sql) return self.db[name_new] ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279863844, https://github.com/simonw/datasette/issues/1713#issuecomment-1173358747,https://api.github.com/repos/simonw/datasette/issues/1713,1173358747,IC_kwDOBm6k_c5F8Aib,2670795,2022-07-04T05:16:35Z,2022-07-04T05:16:35Z,CONTRIBUTOR,"This feature is pretty important and would be nice if it would be all within Datasette (no separate CLI/deploy required). My workflow now is to basically just copy the result and paste into a Google Sheet, which works, but then it's not discoverable to other journalists browsing the Datasette instance. I started building a plugin similar to [datasette-saved-queries](https://datasette.io/plugins/datasette-saved-queries) but one that maintains its own DB (required if you're working with all immutable DBs), but got bogged down in details.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1203943272, https://github.com/simonw/datasette/pull/1693#issuecomment-1168704157,https://api.github.com/repos/simonw/datasette/issues/1693,1168704157,IC_kwDOBm6k_c5FqQKd,49699333,2022-06-28T13:11:36Z,2022-06-28T13:11:36Z,CONTRIBUTOR,Superseded by #1763.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1184850337, https://github.com/simonw/datasette/pull/1753#issuecomment-1163091750,https://api.github.com/repos/simonw/datasette/issues/1753,1163091750,IC_kwDOBm6k_c5FU18m,49699333,2022-06-22T13:22:34Z,2022-06-22T13:22:34Z,CONTRIBUTOR,Superseded by #1760.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1261826957, https://github.com/simonw/datasette/issues/1528#issuecomment-1151887842,https://api.github.com/repos/simonw/datasette/issues/1528,1151887842,IC_kwDOBm6k_c5EqGni,25778,2022-06-10T03:23:08Z,2022-06-10T03:23:08Z,CONTRIBUTOR,I just put together a version of this in a plugin: https://github.com/eyeseast/datasette-query-files. Happy to have any feedback.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1060631257, https://github.com/simonw/datasette/issues/1742#issuecomment-1128064864,https://api.github.com/repos/simonw/datasette/issues/1742,1128064864,IC_kwDOBm6k_c5DPOdg,25778,2022-05-16T19:42:13Z,2022-05-16T19:42:13Z,CONTRIBUTOR,"Just to add a wrinkle here, this loads fine: https://alltheplaces-datasette.fly.dev/alltheplaces/places.geojson?_trace=1 But also, this doesn't add any trace data: https://alltheplaces-datasette.fly.dev/alltheplaces/places.json?_trace=1 What am I missing?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1237586379, https://github.com/simonw/datasette/issues/1742#issuecomment-1128049716,https://api.github.com/repos/simonw/datasette/issues/1742,1128049716,IC_kwDOBm6k_c5DPKw0,25778,2022-05-16T19:24:44Z,2022-05-16T19:24:44Z,CONTRIBUTOR,"Where is `_trace` getting injected? And is it something a plugin should be able to handle? 
(If it is, I guess I should handle it in this case.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1237586379, https://github.com/simonw/datasette/issues/741#issuecomment-1125342229,https://api.github.com/repos/simonw/datasette/issues/741,1125342229,IC_kwDOBm6k_c5DE1wV,25778,2022-05-12T19:21:16Z,2022-05-12T19:21:16Z,CONTRIBUTOR,"Came here to check if this had been flagged already. Was helping a colleague get something on Cloud Run and had to dig to find `--extra-options=""--setting sql_time_limit_ms 2500""`. If I get some time next week, maybe I'll try to tackle it. Would definitely make things easier to be able to do something like this: ```sh datasette publish cloudrun something.db --setting sql_time_limit_ms 2500 ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",607223136, https://github.com/simonw/datasette/issues/1728#issuecomment-1111752676,https://api.github.com/repos/simonw/datasette/issues/1728,1111752676,IC_kwDOBm6k_c5CQ__k,127565,2022-04-28T05:11:54Z,2022-04-28T05:11:54Z,CONTRIBUTOR,"And in terms of the bug, yep I agree that option 2 would be the most useful and least frustrating.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1218133366, https://github.com/simonw/datasette/issues/1728#issuecomment-1111751734,https://api.github.com/repos/simonw/datasette/issues/1728,1111751734,IC_kwDOBm6k_c5CQ_w2,127565,2022-04-28T05:09:59Z,2022-04-28T05:09:59Z,CONTRIBUTOR,"Thanks, I'll give it a try!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1218133366, https://github.com/simonw/datasette/issues/1728#issuecomment-1111712953,https://api.github.com/repos/simonw/datasette/issues/1728,1111712953,IC_kwDOBm6k_c5CQ2S5,127565,2022-04-28T03:48:36Z,2022-04-28T03:48:36Z,CONTRIBUTOR,"I don't think that'd work for this project. The db is very big, and my aim was to have an environment where researchers could be making use of the data, but be easily able to add corrections to the HTR/OCR extracted data when they came across problems. It's in its immutable (!) form here: https://sydney-stock-exchange-xqtkxtd5za-ts.a.run.app/stock_exchange/stocks","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1218133366, https://github.com/simonw/datasette/issues/1728#issuecomment-1111705323,https://api.github.com/repos/simonw/datasette/issues/1728,1111705323,IC_kwDOBm6k_c5CQ0br,127565,2022-04-28T03:32:06Z,2022-04-28T03:32:06Z,CONTRIBUTOR,"Ah, that would be it! I have a core set of data which doesn't change to which I want authorised users to be able to submit corrections. I was going to deal with the persistence issue by just grabbing the user corrections at regular intervals and saving to GitHub. I might need to rethink. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1218133366, https://github.com/simonw/datasette/issues/1101#issuecomment-1105642187,https://api.github.com/repos/simonw/datasette/issues/1101,1105642187,IC_kwDOBm6k_c5B5sLL,25778,2022-04-21T18:59:08Z,2022-04-21T18:59:08Z,CONTRIBUTOR,"Ha! That was your idea (and a good one). 
But it's probably worth measuring to see what overhead it adds. It did require both passing in the database and making the whole thing `async`. Just timing the queries themselves: 1. [Using `AsGeoJSON(geometry) as geometry`](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++AsGeoJSON%28geometry%29+as+geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 10.235 ms 2. [Leaving as binary](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 8.63 ms Looking at the network panel: 1. Takes about 200 ms for the `fetch` request 2. Takes about 300 ms I'm not sure how best to time the GeoJSON generation, but it would be interesting to check. Maybe I'll write a plugin to add query times to response headers. The other thing to consider with async streaming is that it might be well-suited for a slower response. When I have to get the whole result and send a response in a fixed amount of time, I need the most efficient query possible. If I can hang onto a connection and get things one chunk at a time, maybe it's ok if there's some overhead. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",749283032, https://github.com/simonw/datasette/issues/1101#issuecomment-1105588651,https://api.github.com/repos/simonw/datasette/issues/1101,1105588651,IC_kwDOBm6k_c5B5fGr,25778,2022-04-21T18:15:39Z,2022-04-21T18:15:39Z,CONTRIBUTOR,"What if you split rendering and streaming into two things: - `render` is a function that returns a response - `stream` is a function that sends chunks, or yields chunks passed to an ASGI `send` callback That way current plugins still work, and streaming is purely additive. A `stream` function could get a cursor or iterator of rows, instead of a list, so it could more efficiently handle large queries. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",749283032, https://github.com/simonw/datasette/issues/1713#issuecomment-1103312860,https://api.github.com/repos/simonw/datasette/issues/1713,1103312860,IC_kwDOBm6k_c5Bwzfc,536941,2022-04-20T00:52:19Z,2022-04-20T00:52:19Z,CONTRIBUTOR,feels related to #1402 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1203943272, https://github.com/simonw/datasette/issues/1713#issuecomment-1099540225,https://api.github.com/repos/simonw/datasette/issues/1713,1099540225,IC_kwDOBm6k_c5BiacB,25778,2022-04-14T19:09:57Z,2022-04-14T19:09:57Z,CONTRIBUTOR,"I wonder if this overlaps with what I outlined in #1605. You could run something like this: ```sh datasette freeze -d exports/ aws s3 cp exports/ s3://my-export-bucket/$(date) ``` And maybe that does what you need. Of course, that plugin isn't built yet. 
But that's the idea.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1203943272, https://github.com/simonw/datasette/issues/1699#issuecomment-1094453751,https://api.github.com/repos/simonw/datasette/issues/1699,1094453751,IC_kwDOBm6k_c5BPAn3,25778,2022-04-11T01:32:12Z,2022-04-11T01:32:12Z,CONTRIBUTOR,"Was looking through old issues and realized a bunch of this got discussed in #1101 (including by me!), so sorry to rehash all this. Happy to help with whatever piece of it I can. Would be very excited to be able to use format plugins with exports.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1193090967, https://github.com/simonw/datasette/issues/1699#issuecomment-1092386254,https://api.github.com/repos/simonw/datasette/issues/1699,1092386254,IC_kwDOBm6k_c5BHH3O,25778,2022-04-08T02:39:25Z,2022-04-08T02:39:25Z,CONTRIBUTOR,"And just to think this through a little more, here's what `stream_geojson` might look like: ```python async def stream_geojson(datasette, columns, rows, database, stream): db = datasette.get_database(database) for row in rows: feature = await row_to_geojson(row, db) stream.write(feature + ""\n"") # just assuming newline mode for now ``` Alternately, that could be an async generator, like this: ```python async def stream_geojson(datasette, columns, rows, database): db = datasette.get_database(database) for row in rows: feature = await row_to_geojson(row, db) yield feature ``` Not sure which makes more sense, but I think this pattern would open up a lot of possibility. If you had your [stream_indented_json](https://til.simonwillison.net/python/output-json-array-streaming) function, you could do `yield from stream_indented_json(rows, 2)` and be one your way.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1193090967, https://github.com/simonw/datasette/issues/1699#issuecomment-1092370880,https://api.github.com/repos/simonw/datasette/issues/1699,1092370880,IC_kwDOBm6k_c5BHEHA,25778,2022-04-08T02:07:40Z,2022-04-08T02:07:40Z,CONTRIBUTOR,"So maybe `render_output_render` returns something like this: ```python @hookimpl def register_output_renderer(datasette): return { ""extension"": ""geojson"", ""render"": render_geojson, ""stream"": stream_geojson, ""can_render"": can_render_geojson, } ``` And stream gets an iterator, instead of a list of rows, so it can efficiently handle large queries. Maybe it also gets passed a destination stream, or it returns an iterator. I'm not sure what makes more sense. Either way, that might cover both CLI exports and streaming responses.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1193090967, https://github.com/simonw/datasette/issues/1699#issuecomment-1092357672,https://api.github.com/repos/simonw/datasette/issues/1699,1092357672,IC_kwDOBm6k_c5BHA4o,25778,2022-04-08T01:39:40Z,2022-04-08T01:39:40Z,CONTRIBUTOR,"> My best thought on how to differentiate them so far is plugins: if Datasette plugins that provide alternative outputs - like .geojson and .yml and suchlike - also work for the datasette query command that would make a lot of sense to me. That's my thinking, too. It's really the thing I've been wanting since writing `datasette-geojson`, since I'm always exporting with `datasette --get`. 
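(Today that means shelling out and redirecting the output, roughly like this -- a sketch, with the database directory and query path as assumptions:)

```python
import subprocess

# hit the canned query's geojson endpoint without running a server
geojson = subprocess.run(
    ['datasette', '.', '--get', '/alltheplaces/dunkin_in_suffolk.geojson'],
    capture_output=True, text=True, check=True,
).stdout
with open('dunkin_in_suffolk.geojson', 'w') as f:
    f.write(geojson)
```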
The workflow I'm always looking for is something like this:

```sh
cd alltheplaces-datasette
datasette query dunkin_in_suffolk -f geojson -o dunkin_in_suffolk.geojson
```

I think this probably needs either a new plugin hook separate from `register_output_renderer` or a way to use that without going through the HTTP stack. Or maybe a render mode that writes to a stream instead of a response. Maybe there's a new key in the dictionary that `register_output_renderer` returns that handles CLI exports.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1193090967,
https://github.com/simonw/datasette/issues/1549#issuecomment-1087428593,https://api.github.com/repos/simonw/datasette/issues/1549,1087428593,IC_kwDOBm6k_c5A0Nfx,536941,2022-04-04T11:17:13Z,2022-04-04T11:17:13Z,CONTRIBUTOR,"another way to get the behavior of downloading the file is to use the download attribute of the anchor tag

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#attr-download","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1077620955,
https://github.com/simonw/datasette/issues/1688#issuecomment-1079806857,https://api.github.com/repos/simonw/datasette/issues/1688,1079806857,IC_kwDOBm6k_c5AXIuJ,9020979,2022-03-27T01:01:14Z,2022-03-27T01:01:14Z,CONTRIBUTOR,"Thank you! I went through the cookiecutter template, and published my first package here: https://github.com/hydrosquall/datasette-nteract-data-explorer","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1181432624,
https://github.com/simonw/datasette/issues/1688#issuecomment-1079550754,https://api.github.com/repos/simonw/datasette/issues/1688,1079550754,IC_kwDOBm6k_c5AWKMi,9020979,2022-03-26T01:27:27Z,2022-03-26T03:16:29Z,CONTRIBUTOR,"> Is there a way to serve a static assets when using the plugins/ directory method instead of installing plugins as a new python package?

As a workaround, I found I can serve my statics from a non-plugin specific folder using the [--static](https://docs.datasette.io/en/stable/custom_templates.html#serving-static-files) CLI flag.

```bash
datasette ~/Library/Safari/History.db \
    --plugins-dir=plugins/ \
    --static assets:dist/
```

It's not ideal because it means I'll change the cache pattern path depending on how the plugin is running (via pip install or as a one off script), but it's usable as a workaround.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1181432624,
https://github.com/simonw/datasette/issues/1684#issuecomment-1078126065,https://api.github.com/repos/simonw/datasette/issues/1684,1078126065,IC_kwDOBm6k_c5AQuXx,536941,2022-03-24T20:08:56Z,2022-03-24T20:13:19Z,CONTRIBUTOR,"would be nice if the behavior was

1. try to facet all the columns
2. for bigger tables try to facet the indexed columns
3. for the biggest tables, turn off autofaceting completely

This is based on my assumption that what determines autofaceting is the rarity of unique values.
Which may not be true!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1179998071, https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1077671779,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1077671779,IC_kwDOCGYnMM5AO_dj,25778,2022-03-24T14:11:33Z,2022-03-24T14:11:43Z,CONTRIBUTOR,"Coming back to this. I was about to add a utility function to [datasette-geojson]() to convert lat/lng columns to geometries. Thankfully I googled first. There's a SpatiaLite function for this: [MakePoint](https://www.gaia-gis.it/gaia-sins/spatialite-sql-latest.html#p0). ```sql select MakePoint(longitude, latitude) as geometry from places; ``` I'm not sure if that would work with `conversions`, since it needs two columns, but it's an option for tables that already have latitude, longitude columns.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464, https://github.com/simonw/datasette/issues/1581#issuecomment-1077047295,https://api.github.com/repos/simonw/datasette/issues/1581,1077047295,IC_kwDOBm6k_c5AMm__,536941,2022-03-24T04:08:18Z,2022-03-24T04:08:18Z,CONTRIBUTOR,this has been addressed by the datasette-hashed-urls plugin,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1089529555, https://github.com/simonw/datasette/pull/1582#issuecomment-1077047152,https://api.github.com/repos/simonw/datasette/issues/1582,1077047152,IC_kwDOBm6k_c5AMm9w,536941,2022-03-24T04:07:58Z,2022-03-24T04:07:58Z,CONTRIBUTOR,this has been obviated by the datasette-hashed-urls plugin,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1090055810, https://github.com/simonw/sqlite-utils/issues/131#issuecomment-1067981656,https://api.github.com/repos/simonw/sqlite-utils/issues/131,1067981656,IC_kwDOCGYnMM4_qBtY,25778,2022-03-15T13:21:42Z,2022-03-15T13:21:42Z,CONTRIBUTOR,"Just ran into this issue last night. I have a big table that's _mostly_ numbers, but also a zip code column in a state where ZIP codes start with 0. Would be great to run something like this: ```sh sqlite-utils insert data.db places file.csv --csv --detect-types --type zipcode text ``` Maybe I'll take a crack at this one.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",675753042, https://github.com/simonw/datasette/issues/1384#issuecomment-1066222323,https://api.github.com/repos/simonw/datasette/issues/1384,1066222323,IC_kwDOBm6k_c4_jULz,2670795,2022-03-14T00:36:42Z,2022-03-14T00:36:42Z,CONTRIBUTOR,"> Ah, sorry, I didn't get what you were saying you the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :) All good. Report back any issues you find with this stuff. Metadata/dynamic config hasn't been tested widely outside of what I've done AFAIK. 
If you find a strong use case for async meta, it's going to be better to know sooner rather than later!","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135, https://github.com/simonw/datasette/issues/1384#issuecomment-1066169718,https://api.github.com/repos/simonw/datasette/issues/1384,1066169718,IC_kwDOBm6k_c4_jHV2,2670795,2022-03-13T19:48:49Z,2022-03-13T19:48:49Z,CONTRIBUTOR,"> For my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests? You shouldn't need to do this, as I mentioned previously. The code inside `render_cell` hook already has access to the most recently sync'd metadata via `datasette._metadata_local`. Refreshing the metadata for every cell seems ... excessive.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135, https://github.com/simonw/datasette/issues/1384#issuecomment-1066006292,https://api.github.com/repos/simonw/datasette/issues/1384,1066006292,IC_kwDOBm6k_c4_ifcU,2670795,2022-03-13T02:09:44Z,2022-03-13T02:09:44Z,CONTRIBUTOR,"> If I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metdata` is called, right? Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config? Reading from sqlite DBs is pretty quick and I didn't notice significant performance issues when I was benchmarking. I tested on very large Datasette deployments (hundreds of DBs, millions of rows). See [""Many small queries are efficient in sqlite""](https://sqlite.org/np1queryprob.html) for more information on the rationale here. Also note that in the [datasette-live-config](https://github.com/next-LI/datasette-live-config) reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had. If you need to ensure fresh metadata is being read inside of a `render_cell` hook specifically, you don't need to do anything further! `get_metadata` gets called before `render_cell` every request, so it already has access to the synced meta. There shouldn't be a need to call `get_metadata(...)` or `metadata(...)` inside `render_cell`, you can just use `datasette._metadata_local` if you're really worried about performance. > The plugin is close, but looks like it only grabs remote metadata, is that right? Instead what I'm wanting is to grab metadata embedded in the attached databases. Yes correct, the datadette-remote-metadata plugin doesn't do that. But the datasette-live-config plugin does. [It supports a `__metadata` table](https://github.com/next-LI/datasette-live-config/blob/main/datasette_live_config/__init__.py#L107-L138) that, when it exists on an attached DB, gets pulled into the Datasette internal `_metadata` and is also accessible via `get_metadata`. Updating is instantaneous so there's no gotchas for users or security issues for users relying on the metadata-based permissions. Simon talked about eventually making something like this a standard feature of Datasette, but I'm not sure what the status is on that! 
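To make the `render_cell` point above concrete, here's a minimal sketch of reading the synced metadata without any extra queries (hook arguments are trimmed to the ones used; the nested layout mirrors metadata.json, and the `units` check is just an illustration):

```python
from datasette import hookimpl


@hookimpl
def render_cell(value, column, table, database, datasette):
    # read the most recently synced metadata dict directly
    meta = getattr(datasette, '_metadata_local', None) or {}
    table_meta = (
        meta.get('databases', {})
        .get(database or '', {})
        .get('tables', {})
        .get(table or '', {})
    )
    if table_meta.get('units', {}).get(column) == 'ms':
        return f'{value} ms'
    return None  # fall back to default rendering
```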
Good luck!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,
https://github.com/simonw/datasette/issues/1384#issuecomment-1065940779,https://api.github.com/repos/simonw/datasette/issues/1384,1065940779,IC_kwDOBm6k_c4_iPcr,2670795,2022-03-12T18:49:29Z,2022-03-12T18:50:07Z,CONTRIBUTOR,"Hello! Just wanted to chime in and note that there's a plugin to have Datasette [watch for updates to an external metadata.yaml/json and update the internal settings accordingly](https://datasette.io/plugins/datasette-remote-metadata), so I think the cache/poll use case is already covered.

@khusmann If you don't need truly dynamic metadata then what you've come up with or the plugin ought to work fine. Making `get_metadata` async won't improve the situation by itself, as only some of the code paths accessing metadata use that hook; the other paths use the internal metadata dict. Trying to force all paths through an async hook would have performance ramifications, and making everything use the internal meta will cause problems for users who need changes to take effect immediately. This is why I came to the non-async solution: it was the path of least change within Datasette. As always, open to new ideas, etc!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,
https://github.com/simonw/sqlite-utils/issues/411#issuecomment-1065477258,https://api.github.com/repos/simonw/sqlite-utils/issues/411,1065477258,IC_kwDOCGYnMM4_geSK,25778,2022-03-11T20:14:59Z,2022-03-11T20:14:59Z,CONTRIBUTOR,"Good call on adding this to `create-table`, especially for stored columns. Having the stored/virtual split might make this tricky to implement, but I haven't gone any farther than thinking about what the CLI looks like. I'm going to try making the SQL side work first and figure that'll tell me more about what it needs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1160034488,
https://github.com/simonw/datasette/issues/1655#issuecomment-1062450649,https://api.github.com/repos/simonw/datasette/issues/1655,1062450649,IC_kwDOBm6k_c4_U7XZ,536941,2022-03-09T01:10:46Z,2022-03-09T01:10:46Z,CONTRIBUTOR,"I increased `max_returned_rows` because I have some scripts that get CSVs from this site, and this makes doing pagination of CSVs less annoying for many cases. I know that streaming CSVs is something you are hoping to address in 1.0; let me know if there's anything I can do to help with that.
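Here's a rough sketch of the kind of pagination script I mean, paging through the JSON API via `next_url` and writing out a CSV (the deployment URL and table name here are made up):

```python
import csv
import requests

# hypothetical deployment and table; _size=max fetches up to max_returned_rows
url = 'https://example.com/data/filing.json?_size=max'

with open('filing.csv', 'w', newline='') as f:
    writer = None
    while url:
        page = requests.get(url).json()
        if writer is None:
            writer = csv.writer(f)
            writer.writerow(page['columns'])
        writer.writerows(page['rows'])
        url = page.get('next_url')  # None on the last page
```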
As for what, if anything, can be done about the size of the DOM: I don't have any ideas right now, but I'll poke around.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1163369515,
https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059647114,https://api.github.com/repos/simonw/sqlite-utils/issues/412,1059647114,IC_kwDOCGYnMM4_KO6K,25778,2022-03-05T01:54:24Z,2022-03-05T01:54:24Z,CONTRIBUTOR,"I haven't tried this, but it looks like Pandas has a method for this: https://pandas.pydata.org/docs/reference/api/pandas.read_sql_query.html","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1160182768,
https://github.com/simonw/datasette/issues/1641#issuecomment-1049879118,https://api.github.com/repos/simonw/datasette/issues/1641,1049879118,IC_kwDOBm6k_c4-k-JO,536941,2022-02-24T13:49:26Z,2022-02-24T13:49:26Z,CONTRIBUTOR,"maybe worth considering adding buttons for paren, asterisk, etc. under the input text box on mobile?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149310456,
https://github.com/simonw/sqlite-utils/pull/407#issuecomment-1040998433,https://api.github.com/repos/simonw/sqlite-utils/issues/407,1040998433,IC_kwDOCGYnMM4-DGAh,25778,2022-02-16T01:29:39Z,2022-02-16T01:29:39Z,CONTRIBUTOR,Happy to do it and have it in the library. Going to use it a bunch. This whole SpatiaLite toolchain became a huge part of my work in the past year.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1138948786,
https://github.com/simonw/sqlite-utils/pull/407#issuecomment-1040580250,https://api.github.com/repos/simonw/sqlite-utils/issues/407,1040580250,IC_kwDOCGYnMM4-Bf6a,25778,2022-02-15T17:40:00Z,2022-02-15T17:40:00Z,CONTRIBUTOR,@simonw I think this is ready for a look.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1138948786,
https://github.com/simonw/sqlite-utils/issues/398#issuecomment-1038336591,https://api.github.com/repos/simonw/sqlite-utils/issues/398,1038336591,IC_kwDOCGYnMM4948JP,25778,2022-02-13T18:48:21Z,2022-02-13T18:49:49Z,CONTRIBUTOR,"Been chipping away at this between other things and realized `sqlite-utils init-spatialite` is probably unnecessary. Each of the other commands requires running `db.init_spatialite` to have the extension functions available, and that will do everything `init-spatialite` would do.

I think it's probably worth keeping a SpatiaLite flag on `create-database` in case you wanted to create all the spatial metadata up front. Otherwise, it's going to get added the first time you run `add-geometry-column` or `create-spatial-index`, which is probably fine in most cases.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124237013,
https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1035057014,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1035057014,IC_kwDOCGYnMM49sbd2,25778,2022-02-10T15:30:28Z,2022-02-10T15:30:40Z,CONTRIBUTOR,"Yeah, the CLI experience is probably where any kind of multi-column, configured setup is going to fall apart.
Sticking with GIS examples, one way I might think about this is using the [fiona CLI](https://fiona.readthedocs.io/en/latest/cli.html):

```sh
# assuming a database is already created and has SpatiaLite
fio cat boundary.shp | sqlite-utils insert boundaries --conversion geometry GeometryGeoJSON -
```

Anyway, very interested to see where you land here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,
https://github.com/simonw/sqlite-utils/issues/403#issuecomment-1033332570,https://api.github.com/repos/simonw/sqlite-utils/issues/403,1033332570,IC_kwDOCGYnMM49l2da,536941,2022-02-09T04:22:43Z,2022-02-09T04:22:43Z,CONTRIBUTOR,dddoooope,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1126692066,
https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1032732242,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1032732242,IC_kwDOCGYnMM49jj5S,25778,2022-02-08T15:26:59Z,2022-02-08T15:26:59Z,CONTRIBUTOR,"What if you did something like this:

```python
class Conversion:
    def __init__(self, *args, **kwargs):
        ""Put whatever settings you need here""

    def python(self, row, column, value):  # not sure on args here
        ""Python step to transform value""
        return value

    def sql(self, row, column, value):
        ""Return the actual sql that goes in the insert/update step, and maybe params""
        # value is the return of self.python()
        return value, []
```

This way, you're always passing an instance, which has methods that do the conversion. (Or you're passing a SQL string, as you would now.) The `__init__` could take column names, or SRID, or whatever other setup state you need per row, but the row is getting processed with the `python` and `sql` methods (or whatever you want to call them). This is pretty rough, so do what you will with names and args and such.

You'd then use it like this:

```python
# subclass might be unneeded here, if methods are present
class LngLatConversion(Conversion):
    def __init__(self, x=""longitude"", y=""latitude""):
        self.x = x
        self.y = y

    def python(self, row, column, value):
        x = row[self.x]
        y = row[self.y]
        return x, y

    def sql(self, row, column, value):
        # value is now a tuple, returned above
        s = ""MakePoint(?, ?)""
        return s, value


table.insert_all(rows, conversions={""point"": LngLatConversion(""lng"", ""lat"")})
```

I haven't thought through all the implementation details here, and it'll probably break in ways I haven't foreseen, but wanted to get this idea out of my head.
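One more sketch, of how the insert path might consume these objects (hand-wavy, and `apply_conversions` is a made-up helper name):

```python
def apply_conversions(row, conversions):
    # For each configured column, run the Python step, then collect
    # the SQL fragment and parameters the conversion produces.
    fragments = {}
    params = []
    for column, conv in conversions.items():
        value = conv.python(row, column, row.get(column))
        sql, extra = conv.sql(row, column, value)
        fragments[column] = sql  # e.g. ""MakePoint(?, ?)""
        params.extend(extra)     # bound alongside the other row values
    return fragments, params
```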
Hope it helps.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,
https://github.com/simonw/sqlite-utils/issues/403#issuecomment-1032126353,https://api.github.com/repos/simonw/sqlite-utils/issues/403,1032126353,IC_kwDOCGYnMM49hP-R,536941,2022-02-08T01:45:15Z,2022-02-08T01:45:31Z,CONTRIBUTOR,"you can hack something like this to achieve this result: `sqlite-utils convert my_database my_table rowid ""{'id': value}"" --multi`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1126692066,
https://github.com/simonw/sqlite-utils/issues/26#issuecomment-1032120014,https://api.github.com/repos/simonw/sqlite-utils/issues/26,1032120014,IC_kwDOCGYnMM49hObO,536941,2022-02-08T01:32:34Z,2022-02-08T01:32:34Z,CONTRIBUTOR,"if you are curious about prior art, https://github.com/jsnell/json-to-multicsv is really good!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455486286,
https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1031791783,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1031791783,IC_kwDOCGYnMM49f-Sn,25778,2022-02-07T18:37:40Z,2022-02-07T18:37:40Z,CONTRIBUTOR,"I've never used it either, but it's interesting, right? Feel like I should try it for something.

I'm trying to get my head around how this conversions feature might work, because I really like the idea of it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,
https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1031779460,https://api.github.com/repos/simonw/sqlite-utils/issues/402,1031779460,IC_kwDOCGYnMM49f7SE,25778,2022-02-07T18:24:56Z,2022-02-07T18:24:56Z,CONTRIBUTOR,"I wonder if there's any overlap with the goals here and the `sqlite3` module's concept of adapters and converters: https://docs.python.org/3/library/sqlite3.html#sqlite-and-python-types

I'm not sure that's _exactly_ what we're talking about here, but it might be a parallel with some useful ideas to borrow.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1125297737,
https://github.com/simonw/datasette/pull/1593#issuecomment-1031455498,https://api.github.com/repos/simonw/datasette/issues/1593,1031455498,IC_kwDOBm6k_c49esMK,49699333,2022-02-07T13:13:22Z,2022-02-07T13:13:22Z,CONTRIBUTOR,Superseded by #1631.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1101705012,
https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030741289,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1030741289,IC_kwDOCGYnMM49b90p,25778,2022-02-06T03:03:43Z,2022-02-06T03:03:43Z,CONTRIBUTOR,"> I wonder if there are any interesting non-geospatial canned conversions that it would be worth including?

Off the top of my head:

- Un-nesting JSON objects into columns
- Splitting arrays
- Normalizing dates and times
- URL munging with `urlparse`
- Converting strings to numbers

Some of this is easy enough with SQL functions, some is easier in Python.
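To sketch the dates case in the class style from the other thread (illustrative only, nothing like this exists yet):

```python
from datetime import datetime


class ISODate:
    # hypothetical canned conversion: normalize date strings to ISO 8601
    def __init__(self, fmt='%m/%d/%Y'):
        self.fmt = fmt

    def python(self, row, column, value):
        return datetime.strptime(value, self.fmt).date().isoformat()

    def sql(self, row, column, value):
        # plain placeholder; the Python step did all the work
        return '?', [value]
```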
Maybe that's where having pre-built classes gets really handy, because it saves you from thinking about which way it's implemented.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,
https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030740826,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1030740826,IC_kwDOCGYnMM49b9ta,25778,2022-02-06T02:59:10Z,2022-02-06T02:59:10Z,CONTRIBUTOR,"All this said, I don't think it's unreasonable to point people to dedicated tools like `geojson-to-sqlite`. If I'm dealing with a bunch of GeoJSON or Shapefiles, I need something to read those anyway (or I need to figure out virtual tables). But something like this might make it easier to build those libraries, or standardize the underlying parts.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,
https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030740653,https://api.github.com/repos/simonw/sqlite-utils/issues/399,1030740653,IC_kwDOCGYnMM49b9qt,25778,2022-02-06T02:57:17Z,2022-02-06T02:57:17Z,CONTRIBUTOR,"I like the idea of having stock conversions you could import. I'd actually move them to a dedicated module (call it `sqlite_utils.conversions` or something), because it's different from other utilities. Maybe they even take configuration, or they're composable.

```python
from sqlite_utils.conversions import LongitudeLatitude

db[""places""].insert(
    {
        ""name"": ""London"",
        ""lng"": -0.118092,
        ""lat"": 51.509865,
    },
    conversions={""point"": LongitudeLatitude(""lng"", ""lat"")},
)
```

I would definitely use that for every CSV I get with lat/lng columns where I actually need GeoJSON.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124731464,
https://github.com/simonw/sqlite-utils/issues/398#issuecomment-1030629879,https://api.github.com/repos/simonw/sqlite-utils/issues/398,1030629879,IC_kwDOCGYnMM49bin3,25778,2022-02-05T13:57:33Z,2022-02-05T19:49:38Z,CONTRIBUTOR,"I'm mostly using [geojson-to-sqlite](https://github.com/simonw/geojson-to-sqlite) at the moment. Even with shapefiles, I'm usually converting to GeoJSON and projecting to EPSG:4326 (with [ogr2ogr](https://gdal.org/programs/ogr2ogr.html)) first.

I think an open question here is how much you want to leave to external libraries and how much you want here. My thinking has been that adding SpatiaLite helpers here would make external stuff easier, but it would be nice to have some standard way to insert geometries.

I'm in the middle of adding GeoJSON and SpatiaLite support to [geocode-sqlite](https://github.com/eyeseast/geocode-sqlite), and that will probably use WKT. Since that's all points, I think I can just make the string inline. But for polygons, I'd generally use Shapely, which probably isn't a dependency you want to add to sqlite-utils.
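For what it's worth, the Shapely round trip is short (a sketch, assuming `shapely` is installed; the filename and table are made up, and it leans on `db.init_spatialite` from the pending PR):

```python
from shapely.geometry import shape
import sqlite_utils

db = sqlite_utils.Database('spatial.db')  # made-up filename
db.init_spatialite()

feature = {
    'type': 'Feature',
    'properties': {'name': 'example'},
    'geometry': {'type': 'Polygon', 'coordinates': [
        [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
    ]},
}

# shape() turns a GeoJSON geometry dict into a Shapely object and
# .wkt serializes it for SpatiaLite's GeomFromText()
db['boundaries'].insert(
    {
        'name': feature['properties']['name'],
        'geometry': shape(feature['geometry']).wkt,
    },
    conversions={'geometry': 'GeomFromText(?, 4326)'},
)
```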
I've also been trying to get some of the approaches [here](https://www.gaia-gis.it/fossil/libspatialite/wiki?name=Supporting+GeoJSON) to work, but haven't had any success so far.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1124237013,
https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1030002502,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1030002502,IC_kwDOCGYnMM49ZJdG,25778,2022-02-04T13:50:19Z,2022-02-04T13:50:19Z,CONTRIBUTOR,Awesome. Thanks for your help getting it in. Will now look at adding CLI versions of this. It's going to be super helpful on a bunch of my projects.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,
https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029370537,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029370537,IC_kwDOCGYnMM49WvKp,25778,2022-02-03T20:25:58Z,2022-02-03T20:25:58Z,CONTRIBUTOR,"OK, I moved all the GIS helpers into `db.py` as methods on `Database` and `Table`, and I put `find_spatialite` back in `utils.py`. I deleted `gis.py`, since there's nothing left in it. Docs and tests are updated and passing. I think this is better.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,
https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029338360,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029338360,IC_kwDOCGYnMM49WnT4,25778,2022-02-03T19:43:56Z,2022-02-03T19:43:56Z,CONTRIBUTOR,"Works for me. I was just looking at how the FTS extensions work and they're just methods, too. So this can be consistent with that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,
https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029326568,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029326568,IC_kwDOCGYnMM49Wkbo,25778,2022-02-03T19:28:26Z,2022-02-03T19:28:26Z,CONTRIBUTOR,"> `from sqlite_utils.utils import find_spatialite` is part of the documented API already:
>
> https://sqlite-utils.datasette.io/en/3.22.1/python-api.html#finding-spatialite
>
> To avoid needing to bump the major version number to 4 to indicate a backwards incompatible change, we should keep a `from .gis import find_spatialite` line at the top of `utils.py` such that any existing code with that documented import continues to work.

This is fixed now. I had to take out the type annotations for `Database` and `Table` to avoid a circular import, but that's fine and may be moot if these become class methods.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,
https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1029317527,https://api.github.com/repos/simonw/sqlite-utils/issues/79,1029317527,IC_kwDOCGYnMM49WiOX,25778,2022-02-03T19:18:02Z,2022-02-03T19:18:02Z,CONTRIBUTOR,"Taking part of the conversation from #385 here.

> Would `sqlite-utils add-geometry-column ...` be a good CLI enhancement, for example?

Yes. And also `sqlite-utils create-spatial-index` would be great to have.
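For reference, the Python calls those commands would wrap look roughly like this on the current branch (a sketch; method names and arguments may still shift before the API settles):

```python
import sqlite_utils

db = sqlite_utils.Database('gis.db')  # made-up filename
db.init_spatialite()  # load the extension and set up spatial metadata

# the table has to exist before a geometry column can be added
db['locations'].create({'name': str})

# roughly what `sqlite-utils add-geometry-column` would wrap
db['locations'].add_geometry_column('geometry', 'POINT')

# and `sqlite-utils create-spatial-index`
db['locations'].create_spatial_index('geometry')
```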
My plan would be to add those once the Python API is settled.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",557842245,
https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029306428,https://api.github.com/repos/simonw/sqlite-utils/issues/385,1029306428,IC_kwDOCGYnMM49Wfg8,25778,2022-02-03T19:03:43Z,2022-02-03T19:03:43Z,CONTRIBUTOR,"I thought about adding these as methods on `Database` and `Table`, and I'm back and forth on it for the same reasons you are. It's certainly cleaner, and it's clearer what you're operating on. I could go either way.

I do sort of like having all the SpatiaLite stuff in its own module, just because it's built around an extension you might not have or want, but I don't know if that's a good reason to have a different API.

You could have `init_spatialite` add methods to `Database` and `Table`, so they're only there if you have SpatiaLite set up. Is that too clever? It feels too clever.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1102899312,