id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason
2028698018,I_kwDOBm6k_c5463mi,2213,feature request: gzip compression of database downloads,536941,open,0,,,1,2023-12-06T14:35:03Z,2023-12-06T15:05:46Z,,CONTRIBUTOR,,"At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed. this is similar to #1213, but for me, i don't need datasette to compress html and json because my CDN layer does it for me, however, cloudflare at least, will not compress a mimetype of ""application"" (see list of mimetype: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2213/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1994857251,I_kwDOBm6k_c525xsj,2208,No suggested facets when a column named 'value' is included,198537,open,0,,,1,2023-11-15T14:11:17Z,2023-11-15T14:18:59Z,,CONTRIBUTOR,,"When a column named 'value' is included there are no suggested facets is shown as the query uses an alias of 'value'.

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L168-L174

Currently the following is shown (from https://latest.datasette.io/fixtures/facetable)

![image](https://github.com/simonw/datasette/assets/198537/a919509a-ea88-461b-b25b-8b776720c7c5)

When I add a column named 'value' only the JSON facets are processed.

![image](https://github.com/simonw/datasette/assets/198537/092bd0b3-4c20-434e-88f8-47e2b8994a1d)

I think that not using aliases could be a solution (except if someone wants to use a column named `count(*)` though this seems to be unlikely). I'll open a PR with that. There is also a TODO with a similar question in the same file. I have not looked into that yet.

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L512",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2208/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1552368054,I_kwDOBm6k_c5ch0G2,2000,rewrite_sql hook,193185,open,0,,,1,2023-01-23T01:02:52Z,2023-01-23T06:08:01Z,,CONTRIBUTOR,,"I'm not sold that this is a good idea, but thought it'd be worth writing up a ticket. Proposal: add a hook like

```python
def rewrite_sql(datasette, database, request, fn, sql, params)
```

It would be called from Database.execute, Database.execute_write, Database.execute_write_script, Database.execute_write_many before running the user's SQL. `fn` would indicate which method was being used, in case that's relevant for the SQL inspection -- for example `execute` only permits a single statement. The hook could return a SQL statement to be executed instead, or an async function to be awaited on that returned the SQL to be executed.
Plugins that could be written with this hook:

- https://github.com/cldellow/datasette-ersatz-table-valued-functions would use this to avoid monkey-patching
- a plugin to inspect and reject unsafe Spatialite function calls (reported by [Simon in Discord](https://discord.com/channels/823971286308356157/823971286941302908/1066438832293159004))
- a plugin to do more general rewrites of queries to enforce table or row-level security, for example, based on the currently logged in actor's ID
- a plugin to maintain audit tables when users write to a table
- a plugin to cache expensive queries (eg the queries that drive facets) - these could allow stale reads if previously cached, then refresh them in an offline queue

Flaws with this idea: `execute_fn` and `execute_write_fn` would not go through this hook, which limits the guarantees you can make about it for security purposes.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2000/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
860722711,MDU6SXNzdWU4NjA3MjI3MTE=,1301,Publishing to cloudrun with immutable mode?,5413548,open,0,,,1,2021-04-18T17:51:46Z,2022-10-07T02:38:04Z,,CONTRIBUTOR,,"I'm a bit confused about immutable mode and publishing to cloudrun. (I want to publish with immutable mode so that I can support database downloads.) Running `datasette publish cloudrun --extra-options=""-i example.db""` leads to an error:

> Error: Invalid value for '-i' / '--immutable': Path 'example.db' does not exist.

However, running `datasette publish cloudrun example.db` not only works but seems to publish in immutable mode anyway! I'm seeing this both with `/-/databases.json` and the fact that downloads are working. When I just `datasette serve` locally, this succeeds both ways and works as expected.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1301/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1200224939,I_kwDOBm6k_c5Hifqr,1707,[feature] expanded detail page,536941,open,0,,,1,2022-04-11T16:29:17Z,2022-04-11T16:33:00Z,,CONTRIBUTOR,,"Right now, if click on the detail page for a row you get the info for the row and links to related tables:

![Screenshot 2022-04-11 at 12-27-26 lm20 filing](https://user-images.githubusercontent.com/536941/162786802-90ac1a71-4624-47c4-ae55-b783f4f6c92d.png)

It would be very cool if there was an option to expand the rows of the related tables from within this detail view. If you had that then datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidate detail view about what you know about that entity. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1707/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1096536240,I_kwDOBm6k_c5BW9Cw,1586,run analyze on all databases as part of start up or publishing,536941,open,0,,,1,2022-01-07T17:52:34Z,2022-02-02T07:13:37Z,,CONTRIBUTOR,,"Running `analyze;` lets sqlite's query planner make *much* better use of any indices.
It might be nice if the analyze was run as part of the start up of ""serve"" or ""publish"".",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1586/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1090810196,I_kwDOBm6k_c5BBHFU,1583,consider adding deletion step of cloudbuild artifacts to gcloud publish,536941,open,0,,,1,2021-12-30T00:33:23Z,2021-12-30T00:34:16Z,,CONTRIBUTOR,,"right now, as part of the the publish process images and other artifacts are stored to gcloud's cloud storage before being deployed to cloudrun. after successfully deploying, it would be nice if the the script deleted these artifacts. otherwise, if you have regularly scheduled build process, you can end up paying to store lots of out of date artifacts.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1583/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
988556488,MDU6SXNzdWU5ODg1NTY0ODg=,1459,suggestion: allow `datasette --open` to take a relative URL,51016,open,0,,,1,2021-09-05T17:17:07Z,2021-09-05T19:59:15Z,,CONTRIBUTOR,,"(soft suggestion because I'm not sure I'm using datasette right yet) Over at https://github.com/ctb/2021-sourmash-datasette, I'm playing around with datasette, and I'm creating some static pages to send people to the right facets. There may well be better ways of achieving this end goal, and I will find out if so, I'm sure! But regardless I think it might be neat to support an option to allow `-o/--open` to take a relative URL, that then gets appended to the hostname and port. This would let me improve my documentation. I don't see any downsides, either, but 🤷 there may well be some :) Happy to dig in and provide a PR if it's of interest. I'm not sure off the top of my head how to support an optional value to a parameter in argparse - the current `-o` behavior is kinda nice so it'd be suboptimal to require a url for `-o`. Maybe `--open-url=` or something would work? ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1459/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
527710055,MDU6SXNzdWU1Mjc3MTAwNTU=,640,Nicer error message for heroku publish name clash,82988,open,0,,,1,2019-11-24T14:57:07Z,2019-12-06T07:19:34Z,,CONTRIBUTOR,,"If you try to publish to Heroku using no set name (i.e. the default `datasette` name) and a project already exists under that name, you get a meaningful error report on the first line followed by Py error messages that drown it out:

```
Creating datasette... !
▸ Name datasette is already taken
Traceback (most recent call last):
  File ""/usr/local/bin/datasette"", line 10, in
    sys.exit(cli())
  File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 764, in __call__
    return self.main(*args, **kwargs)
  File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 717, in main
    rv = self.invoke(ctx)
  File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 555, in invoke
    return callback(*args, **kwargs)
  File ""/Users/NNNNN/Library/Python/3.7/lib/python/site-packages/datasette/publish/heroku.py"", line 124, in heroku
    create_output = check_output(cmd).decode(""utf8"")
  File ""/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py"", line 411, in check_output
    **kwargs).stdout
  File ""/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py"", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['heroku', 'apps:create', 'datasette', '--json']' returned non-zero exit status 1.
```

It would be neater if:

- the Py error message was caught;
- the report suggested setting a project name using `-n` etc.

It may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/640/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
341228846,MDU6SXNzdWUzNDEyMjg4NDY=,343,Render boolean fields better by default,45057,open,0,,,1,2018-07-14T11:10:29Z,2018-07-14T14:17:14Z,,CONTRIBUTOR,,These show up as 0 or 1 because sqlite. I think Yes/No would be fine in most cases?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/343/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,