id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 273944952,MDU6SXNzdWUyNzM5NDQ5NTI=,93,Package as standalone binary,67420,closed,0,,,18,2017-11-14T21:14:07Z,2021-11-21T07:00:23Z,2021-11-21T07:00:23Z,NONE,,"hint: more than the docker image a standalone and multiplatform binary (containing the app and the database) could be simpler to distribute. i would like to investigate the possibility to package everything with [pyinstaller](http://www.pyinstaller.org/) adding the database as a [data file](https://pythonhosted.org/PyInstaller/spec-files.html#adding-data-files)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/93/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 274160723,MDU6SXNzdWUyNzQxNjA3MjM=,100,TemplateAssertionError: no filter named 'tojson',13304454,closed,0,,,2,2017-11-15T13:43:41Z,2017-11-16T09:25:10Z,2017-11-16T00:14:13Z,NONE,,"A 500 error is raised upon clicking on the name of a table on the homepage, say _http://0.0.0.0:8001/_ to _http://0.0.0.0:8001/test_check-c1f4771/users_ The API part seems to function as intended, though... ``` 2017-11-15 14:33:57 - (sanic)[ERROR]: Traceback (most recent call last): File ""/usr/local/lib/python3.5/dist-packages/sanic/app.py"", line 503, in handle_request response = await response File ""/usr/local/lib/python3.5/dist-packages/datasette/app.py"", line 155, in get return await self.view_get(request, name, hash, **kwargs) File ""/usr/local/lib/python3.5/dist-packages/datasette/app.py"", line 219, in view_get **context, File ""/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py"", line 84, in render return html(self.render_string(template, request, **context)) File ""/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py"", line 81, in render_string return self.env.get_template(template).render(**context) File ""/usr/lib/python3/dist-packages/jinja2/environment.py"", line 812, in get_template return self._load_template(name, self.make_globals(globals)) File ""/usr/lib/python3/dist-packages/jinja2/environment.py"", line 786, in _load_template template = self.loader.load(self, name, globals) File ""/usr/lib/python3/dist-packages/jinja2/loaders.py"", line 125, in load code = environment.compile(source, name, filename) File ""/usr/lib/python3/dist-packages/jinja2/environment.py"", line 565, in compile self.handle_exception(exc_info, source_hint=source_hint) File ""/usr/lib/python3/dist-packages/jinja2/environment.py"", line 754, in handle_exception reraise(exc_type, exc_value, tb) File ""/usr/lib/python3/dist-packages/jinja2/_compat.py"", line 37, in reraise raise value.with_traceback(tb) File ""/usr/local/lib/python3.5/dist-packages/datasette/templates/table.html"", line 29, in template
params = {{ query.params|tojson(4) }}
File ""/usr/lib/python3/dist-packages/jinja2/environment.py"", line 515, in _generate return generate(source, self, name, filename, defer_init=defer_init) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 62, in generate generator.visit(node) File ""/usr/lib/python3/dist-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 849, in visit_Template self.blockvisit(block.body, block_frame) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 492, in blockvisit self.visit(node, frame) File ""/usr/lib/python3/dist-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 1172, in visit_If self.blockvisit(node.body, if_frame) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 492, in blockvisit self.visit(node, frame) File ""/usr/lib/python3/dist-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 1353, in visit_Output self.visit(argument, frame) File ""/usr/lib/python3/dist-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 1565, in visit_Filter self.fail('no filter named %r' % node.name, node.lineno) File ""/usr/lib/python3/dist-packages/jinja2/compiler.py"", line 427, in fail raise TemplateAssertionError(msg, lineno, self.name, self.filename) jinja2.exceptions.TemplateAssertionError: no filter named 'tojson' 2017-11-15 14:33:57 - (network)[INFO][127.0.0.1:41316]: GET http://0.0.0.0:8001/test_check-c1f4771/users 500 144 2017-11-15 14:33:57 - (network)[INFO][127.0.0.1:41316]: GET http://0.0.0.0:8001/favicon.ico 200 0 ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/100/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 274161964,MDU6SXNzdWUyNzQxNjE5NjQ=,101,TemplateAssertionError: no filter named 'tojson',450244,closed,0,,,1,2017-11-15T13:47:32Z,2017-11-15T13:48:55Z,2017-11-15T13:48:55Z,NONE,,"I get an exception clicking on the table link: ``` 2017-11-15 08:40:10 - (sanic)[ERROR]: Traceback (most recent call last): File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic/app.py"", line 503, in handle_request response = await response File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/app.py"", line 155, in get return await self.view_get(request, name, hash, **kwargs) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/app.py"", line 219, in view_get **context, File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic_jinja2/__init__.py"", line 84, in render return html(self.render_string(template, request, **context)) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic_jinja2/__init__.py"", line 81, in render_string return self.env.get_template(template).render(**context) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py"", line 812, in get_template return self._load_template(name, self.make_globals(globals)) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py"", line 786, in _load_template template = self.loader.load(self, name, globals) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/loaders.py"", line 125, 
in load code = environment.compile(source, name, filename) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py"", line 565, in compile self.handle_exception(exc_info, source_hint=source_hint) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py"", line 754, in handle_exception reraise(exc_type, exc_value, tb) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/_compat.py"", line 37, in reraise raise value.with_traceback(tb) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/templates/table.html"", line 29, in template
params = {{ query.params|tojson(4) }}
File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py"", line 515, in _generate return generate(source, self, name, filename, defer_init=defer_init) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 62, in generate generator.visit(node) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 849, in visit_Template self.blockvisit(block.body, block_frame) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 492, in blockvisit self.visit(node, frame) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 1172, in visit_If self.blockvisit(node.body, if_frame) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 492, in blockvisit self.visit(node, frame) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 1353, in visit_Output self.visit(argument, frame) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/visitor.py"", line 38, in visit return f(node, *args, **kwargs) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 1565, in visit_Filter self.fail('no filter named %r' % node.name, node.lineno) File ""/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/compiler.py"", line 427, in fail raise TemplateAssertionError(msg, lineno, self.name, self.filename) jinja2.exceptions.TemplateAssertionError: no filter named 'tojson' ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/101/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 276091279,MDU6SXNzdWUyNzYwOTEyNzk=,144,apsw as alternative sqlite3 binding (for full text search),649467,closed,0,,,3,2017-11-22T14:40:39Z,2018-05-28T21:29:42Z,2018-05-28T21:29:42Z,NONE,,"Hey there, Have you considered providing apsw support as an alternative to stock python sqlite3? I use apsw because it keeps up with sqlite3 and is straightforward to bring in extensions like FTS5. FTS really accelerates the kind of searching often done by web clients. I may be able to help (it shouldn't be much code), but there are a couple of stylistic questions that come up when supporting an optional package. Also, apsw is tricky in that it doesn't have a pypi package (author says limitations in providing options to setup.py). ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/144/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 276842536,MDU6SXNzdWUyNzY4NDI1MzY=,153,Ability to customize presentation of specific columns in HTML view,20264,closed,0,,2949431,14,2017-11-26T17:46:11Z,2017-12-10T02:08:45Z,2017-12-07T06:17:33Z,NONE,,"This ties into https://github.com/simonw/datasette/issues/3 in some ways. It would be great to have some adaptability in the HTML views and to specific some columns as displaying in certain ways. - [x] 1. 
**Auto-parsing URIs into in-browser links.** Why? Lots of public data around cultural commons stuff links to a specific URL. This would be a great utility to turn on at the command line, just parse everything for URLs. Maybe they need to be underlined or represented in a different way than internal URLs. - [x] 2. **Ability to identify a column as plain/preformatted text.** Why? Was trying to import the Enron emails, the body collapses. Hard to read. These fields also tend to screw up the ability to scan a table view. If you knew it was text the system could set an `overflow` property on the relevant CSS, so you could still scan. - [x] 3. **Ability to identify a column as HTML.** Why? I want to spider some stuff and drop sections into SQLite, and just keep them as HTML.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/153/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 277589569,MDU6SXNzdWUyNzc1ODk1Njk=,155,A primary key column that has foreign key restriction associated won't rendering label column,388154,closed,0,,2949431,4,2017-11-29T00:40:02Z,2017-12-07T05:39:53Z,2017-12-07T05:39:53Z,NONE,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/155/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 278814220,MDU6SXNzdWUyNzg4MTQyMjA=,161,Support WITH query ,388154,closed,0,,,4,2017-12-03T20:00:40Z,2017-12-08T06:18:12Z,2017-12-04T04:52:41Z,NONE,,"Currently datasettle failed with error message: Statement must begin with SELECT Example query ```sql WITH RECURSIVE cnt(x) AS ( SELECT 1 UNION ALL SELECT x+1 FROM cnt LIMIT 1000000 ) SELECT x FROM cnt; ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/161/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 281110295,MDU6SXNzdWUyODExMTAyOTU=,173,I18n and L10n support,50138,open,0,,,2,2017-12-11T17:49:58Z,2021-04-26T12:10:01Z,,NONE,,It would be less geeky and more user friendly if the display strings in the filter menu and possibly other parts could be localized.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/173/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 282971961,MDU6SXNzdWUyODI5NzE5NjE=,175,"Add project topic ""automatic-api""",3179832,closed,0,,,1,2017-12-18T18:09:17Z,2017-12-21T18:33:55Z,2017-12-21T18:33:55Z,NONE,,"Hi there! Could you add the ~~tag~~ topic `automatic-api` to your repository? I am [making a list](https://github.com/dbohdan/automatic-api) of all projects that automatically expose APIs to databases. (Your Show HN made me do it. :-) I knew about PostgREST and PostGraphQL, but it took adding Datasette to sell me on the concept.) They will be easier to discover if there is a standard GitHub tag, and `automatic-api` seems as good a candidate as any. 
Two projects [already use it](https://github.com/topics/automatic-api).",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/175/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 285168503,MDU6SXNzdWUyODUxNjg1MDM=,176,Add GraphQL endpoint,173848,open,0,,,8,2017-12-29T23:21:01Z,2020-04-21T14:16:24Z,,NONE,,Would make it much easier to build React & similar frontends. Maybe with https://github.com/graphql-python/sanic-graphql ?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/176/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 289425975,MDExOlB1bGxSZXF1ZXN0MTYzNTYxODMw,181,"add ""format sql"" button to query page, uses sql-formatter",1957344,closed,0,,,7,2018-01-17T21:50:04Z,2019-11-11T03:08:25Z,2019-11-11T03:08:25Z,NONE,simonw/datasette/pulls/181,"Cool project! This fixes #136 using the suggested [sql formatter](https://github.com/zeroturnaround/sql-formatter) library. I included the minified version in the bundle and added the relevant scripts to the codemirror includes instead of adding new files, though I could also add new files. I wanted to keep it all together, since the result of the format needs access to the editor in order to properly update the codemirror instance.",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/181/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 292011379,MDU6SXNzdWUyOTIwMTEzNzk=,184,500 from missing table name,222245,closed,0,,,4,2018-01-26T19:46:45Z,2019-05-21T16:17:29Z,2018-04-13T18:18:59Z,NONE,,"https://github.com/simonw/datasette/blob/56623e48da5412b25fb39cc26b9c743b684dd968/datasette/app.py#L517-L519 throws an error if it gets an empty list back. Simplest solution is to write a helper func that just says ```python result = list(await self.execute(name, sql, params) if result: return result[0][0] ``` and use it anywhere `[0][0]` is now.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/184/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 299760684,MDU6SXNzdWUyOTk3NjA2ODQ=,185,Metadata should be a nested arbitrary KV store,222245,open,0,,,12,2018-02-23T16:02:07Z,2019-05-13T18:33:33Z,,NONE,,"I started using the metadata feature and was surprised to find that values are not inherited from the root object down to specific databases and tables. This makes metadata much less useful and requires a lot of pointless duplication. Ideally, metadata should allow arbitrary key-value pairs, and there should be a way of accessing metadata either in an inherited or non-inherited manner. Something like `metadata.page.key` vs. 
`metadata.this.key` might work as an interface.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/185/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 306811513,MDU6SXNzdWUzMDY4MTE1MTM=,186,proposal new option to disable user agents cache,47107,closed,0,,,3,2018-03-20T10:42:20Z,2018-03-21T09:07:22Z,2018-03-21T01:28:31Z,NONE,,"I think it would be very useful for debugging an option of adding headers to http replies ``` Cache-Control: no-cache ``` especially in the html output",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/186/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 309033998,MDU6SXNzdWUzMDkwMzM5OTg=,187,Windows installation error,11855322,closed,0,,,7,2018-03-27T16:04:37Z,2019-06-15T21:44:23Z,2019-06-15T21:44:23Z,NONE,,"On attempting install on a Win 7 PC with py 3.6.2 (Anaconda dist) I get the error: ``` Collecting uvloop>=0.5.3 (from Sanic==0.7.0->datasette) Downloading uvloop-0.9.1.tar.gz (1.8MB) 100% |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦| 1.8MB 12.8MB/s Complete output from command python setup.py egg_info: Traceback (most recent call last): File """", line 1, in File ""C:\Users\RCole\AppData\Local\Temp\pip-build-juakfqt8\uvloop\setup.py "", line 10, in raise RuntimeError('uvloop does not support Windows at the moment') RuntimeError: uvloop does not support Windows at the moment ``` ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/187/reactions"", ""total_count"": 4, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 314665147,MDU6SXNzdWUzMTQ2NjUxNDc=,216,Bug: Sort by column with NULL in next_page URL,222245,closed,0,,,15,2018-04-16T14:03:18Z,2018-04-17T01:45:24Z,2018-04-17T01:45:24Z,NONE,,"Copy-pasting from https://github.com/simonw/datasette/issues/189#issuecomment-381429213, since that issue is closed: I think I found a bug. I tried to sort by middle initial in my salaries set, and many middle initials are null. The `next_url` gets set by Datasette to: http://localhost:8001/salaries-d3a5631/2017+Maryland+state+salaries?_next=None%2C391&_sort=middle_initial But then None is interpreted literally and it tries to find a name with the middle initial ""None"" and ends up skipping ahead to O on page 2. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/216/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 319449852,MDU6SXNzdWUzMTk0NDk4NTI=,247,SQLite code decoupled from Datasette,11912854,open,0,,,1,2018-05-02T08:03:28Z,2018-05-21T15:29:31Z,,NONE,,"I'm working on the possibility of use Datasette with other file formats that aren't SQLite, like files with [PyTables](https://github.com/PyTables/PyTables) format. In order to accomplish that, I've started [a fork for decoupling the code related with SQLite](https://github.com/jsancho-gpl/datasette/tree/feature/db-type-plugin) and putting it in an external connector to allow future connectors for a lot of file formats. 
It'd be nice if you could look at it and suggest improvements for a possible PR.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/247/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 322283067,MDU6SXNzdWUzMjIyODMwNjc=,254,Escaping named parameters in canned queries,247131,closed,0,,,4,2018-05-11T12:43:30Z,2020-05-10T14:54:14Z,2020-05-10T14:54:13Z,NONE,,"Thank you very much for this project. I have created some canned queries but some of the filters include a colon eg. ""com.ubuntu.cloud:server:18.04:amd64"". When saved these colons are parsed as named parameters. Is there a way to escape colons in a canned query?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/254/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 322741659,MDExOlB1bGxSZXF1ZXN0MTg3NzcwMzQ1,258,Add new metadata key persistent_urls which removes the hash from all database urls,247131,closed,0,,,3,2018-05-14T09:39:18Z,2018-05-21T07:38:15Z,2018-05-21T07:38:15Z,NONE,simonw/datasette/pulls/258,"Add new metadata key ""persistent_urls"" which removes the hash from all database urls when set to ""true"" This PR is just to gauge if this, or something like it, is something you would consider merging? I understand the reason why the substring of the hash is included in the url but there are some use cases where the urls should persist across deployments. For bookmarks for example or for scripts that use the JSON API. This is the initial commit for this feature. Tests and documentation updates to follow.",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/258/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 325553991,MDExOlB1bGxSZXF1ZXN0MTg5ODYwMDUy,281,Reduces image size using Alpine + Multistage (re: #278),487897,closed,0,,,1,2018-05-23T05:27:05Z,2018-05-26T02:10:38Z,2018-05-26T02:10:38Z,NONE,simonw/datasette/pulls/281,"Hey Simon! I got the image size down from 256MB to 110MB. Seems to be working okay, but you might want to test it a bit more. Example output of `docker run --rm -it datasette` ``` Serve! files=() on port 8001 [2018-05-23 05:23:08 +0000] [1] [INFO] Goin' Fast @ http://127.0.0.1:8001 [2018-05-23 05:23:08 +0000] [1] [INFO] Starting worker [1] ``` Related: https://github.com/simonw/datasette/issues/278 ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/281/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 330826972,MDU6SXNzdWUzMzA4MjY5NzI=,308,"Support extra Heroku apps:create options - region, space, team",78156,open,0,,,2,2018-06-08T23:08:33Z,2018-09-21T14:09:28Z,,NONE,,"It would be useful to document how to pass Heroku CLI options on `datasette publish`, e.g. 
`--region eu`.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/308/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 333238932,MDU6SXNzdWUzMzMyMzg5MzI=,316,datasette inspect takes a very long time on large dbs,132230,closed,0,,,5,2018-06-18T11:56:27Z,2019-05-11T18:26:25Z,2019-05-11T18:26:25Z,NONE,,"Hi, I want to expose data in a very large sqlite database (~600Gb) to the web. I have used datasette with success on smaller test databases with the same schema - it works very well (thanks!). However, using the full db, both `datasette inspect` and `datasette serve` seem to hang or pause for a very long time (tens of minutes) on startup. Is this expected behaviour? (I noticed that the output of `datasette inspect` includes row counts for each table. Simply counting the rows in this db will take a long time (tens of millions of rows across each of ~10 tables), so I wondered if this is the source of the problem.) Any help on a workaround would be appreciated. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/316/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 334190959,MDU6SXNzdWUzMzQxOTA5NTk=,321,Wildcard support in query parameters,12617395,closed,0,,3439337,8,2018-06-20T18:03:56Z,2018-06-21T17:00:10Z,2018-06-21T04:55:26Z,NONE,,"I haven't found a way to get the wildcard (%) inserted automatically in to a query parameter. This would be useful for cases the query parameter is followed by a LIKE clause. Wrapping the parameter name using the wildcard character within the metadata file (ie - ...where xyz like %:querystring%) does not seem to work. Can this be made possible? Or if not, can the template be extended to provide a tip to the user that they need to insert the wildcard characters themselves?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/321/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 334592281,MDExOlB1bGxSZXF1ZXN0MTk2NTI2ODYx,322,Feature/in operator,2691848,closed,0,,,0,2018-06-21T17:41:51Z,2018-06-21T17:45:25Z,2018-06-21T17:45:25Z,NONE,simonw/datasette/pulls/322,,107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/322/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 339095976,MDU6SXNzdWUzMzkwOTU5NzY=,334,extra_options not passed to heroku publisher,719357,closed,0,,,2,2018-07-06T23:26:12Z,2018-07-24T04:53:21Z,2018-07-10T01:46:04Z,NONE,,"I might be wrong but I was not able to publish to `heroku` with `--extra-options`, I think `extra_options` is not being used in this function [here](https://github.com/simonw/datasette/blob/master/datasette/utils.py#L369). Any help appreciated! 
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/334/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 340396247,MDU6SXNzdWUzNDAzOTYyNDc=,339,Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way,12617395,closed,0,,,4,2018-07-11T20:38:06Z,2022-03-21T22:22:40Z,2022-03-21T22:22:34Z,NONE,,"Is it possible to configure the sql_time_limit_ms beyond 60 seconds? It seems queries are still timing out at 60 seconds when sql_time_limit_ms is set to 180000. We have a very large data set and often encounter timeouts when testing new queries from the datasette UI. We are optimizing our database as much as we can, but still may require more than 60 seconds for complex queries.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/339/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 341123355,MDU6SXNzdWUzNDExMjMzNTU=,342,Requesting support for query description,12617395,closed,0,,,4,2018-07-13T18:50:16Z,2018-07-24T04:53:21Z,2018-07-16T02:33:54Z,NONE,,"It would be great if the metadata file allowed you to enter a description for the query. We have a lot of pre-defined queries that can only be so descriptive by their name. It would be nice if an optional description could be included underneath the name within the UI, or on hover where it currently shows the SQL.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/342/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 343728754,MDU6SXNzdWUzNDM3Mjg3NTQ=,346,Logo design for DATASETTE,35750428,closed,0,,,0,2018-07-23T17:40:17Z,2018-08-02T02:31:59Z,2018-08-02T02:31:59Z,NONE,,"Hello :) , I'm a graphic designer, I'm interested in collaborating with open source projects, besides this helps me expand my portfolio. I would like to design a logo for your project. I will be happy to collaborate with you :). ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/346/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 352768017,MDU6SXNzdWUzNTI3NjgwMTc=,362,Add option to include/exclude columns in search filters,78156,open,0,,,1,2018-08-22T01:32:08Z,2020-11-03T19:01:59Z,,NONE,,"I have a dataset with many columns, of which only some are likely to be of interest for searching. It would be great for usability if the search filters in the UI could be configured to include/exclude columns. See also: https://github.com/simonw/datasette/issues/292",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/362/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 392610803,MDU6SXNzdWUzOTI2MTA4MDM=,391,Google Trends example doesn’t work,229881,closed,0,,,1,2018-12-19T13:51:38Z,2019-01-02T19:45:13Z,2019-01-02T19:45:12Z,NONE,,"https://google-trends.datasettes.com/ I see a cloud flare error. 
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/391/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 395236066,MDU6SXNzdWUzOTUyMzYwNjY=,393,"CSV export in ""Advanced export"" pane doesn't respect query",1727065,closed,0,,,6,2019-01-02T12:39:41Z,2021-06-17T18:14:24Z,2019-01-03T02:44:10Z,NONE,,"It looks like there's an inconsistency when exporting to CSV via the the web interface. Say I'm looking at [songs released in 1989](https://fivethirtyeight.datasettes.com/fivethirtyeight-c300360/classic-rock%2Fclassic-rock-song-list?Release+Year__exact=1989) in the `classic-rock/classic-rock-song-list` table from the Five Thirty Eight data. The JSON and CSV export links at the top of the page both give me filtered data using `Release+Year__exact=1989` in the URL. In the `Advanced export` tab, though, the CSV option gives me the whole data set, while the JSON options preserve the query. It may be that this is intended behaviour related to the streaming CSV stuff [discussed here](https://github.com/simonw/datasette/issues/266), but if that's the case then I think it should be a little clearer.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/393/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 397129564,MDU6SXNzdWUzOTcxMjk1NjQ=,397,Update official datasetteproject/datasette Docker container to SQLite 3.26.0,43564,closed,0,,,3,2019-01-08T22:51:50Z,2019-01-11T01:25:33Z,2019-01-11T00:56:18Z,NONE,,"I try to start datasette on a database that contains the below view It fails in a way that makes me think it does not support the window functions SQL syntax. ``` create view general_ledger as select transactions.account_number, strftime(""%Y-%m-%d"", verifications.verification_date) as verification_date, verifications.verification_number, verifications.verification_text, case when transactions.centi_amount >= 0 and verifications.verification_number > 0 then printf(""%.2f"", (transactions.centi_amount/100.0)) end as debit, case when transactions.centi_amount <= 0 and verifications.verification_number > 0 then printf(""%.2f"", (transactions.centi_amount/100.0)) end as credit, printf(""%.2f"", sum(transactions.centi_amount) over (partition by transactions.account_number order by verifications.verification_number range between unbounded preceding and current row)/100.0) from verifications inner join transactions on transactions.verification_id = verifications.id order by transactions.account_number, verifications.verification_number; ``` ``` docker run -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/ledger.db Serve! 
files=('/mnt/ledger.db',) on port 8001 Traceback (most recent call last): File ""/usr/local/bin/datasette"", line 11, in sys.exit(cli()) File ""/usr/local/lib/python3.6/site-packages/click/core.py"", line 722, in __call__ return self.main(*args, **kwargs) File ""/usr/local/lib/python3.6/site-packages/click/core.py"", line 697, in main rv = self.invoke(ctx) File ""/usr/local/lib/python3.6/site-packages/click/core.py"", line 1066, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/usr/local/lib/python3.6/site-packages/click/core.py"", line 895, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/usr/local/lib/python3.6/site-packages/click/core.py"", line 535, in invoke return callback(*args, **kwargs) File ""/usr/local/lib/python3.6/site-packages/datasette/cli.py"", line 375, in serve ds.inspect() File ""/usr/local/lib/python3.6/site-packages/datasette/app.py"", line 308, in inspect ""views"": inspect_views(conn), File ""/usr/local/lib/python3.6/site-packages/datasette/inspect.py"", line 30, in inspect_views return [v[0] for v in conn.execute('select name from sqlite_master where type = ""view""')] sqlite3.DatabaseError: malformed database schema (general_ledger) - near ""over"": syntax error ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/397/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 400229984,MDU6SXNzdWU0MDAyMjk5ODQ=,401,How to pass configuration to plugins?,1055831,closed,0,,,3,2019-01-17T11:20:41Z,2019-01-18T11:48:13Z,2019-01-18T06:49:07Z,NONE,,"Hi, Firstly, thanks for your work on datasette, it is a hugely useful tool! I've been working on a fork [https://github.com/dazzag24/datasette-cluster-map] of datasette-cluster-map to allow the tileserver to be easily switched. Primarily because the tiles being served in the current version use localised text for labels and I'd like to have English used for these names instead. It uses http://leaflet-extras.github.io/leaflet-providers/preview/ to allow you to simply set the tile provider using a call like so: ``` let tiles = L.tileLayer.provider('Esri.WorldTopoMap'); ``` instead of the current: ``` let tiles = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { maxZoom: 19, detectRetina: true, attribution: '© OpenStreetMap contributors' }), ``` However I've got stuck in trying to work out how to pass the provider string to the plugin. In the documentation: https://datasette.readthedocs.io/en/stable/plugins.html you discuss configuration of plugins and use an example of passing in which latitude and longitude columns should be used. However I cannot seem to see anywhere in the current datasette-cluster-map code where these config params are passed in or used. Can you please point me to an example or how to pass configuration from the metadata.json down into a plugin. Once I've over come this issue I was wondering if you would be interested in taking this change into your version? 
Many thanks Darren",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/401/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 400511206,MDU6SXNzdWU0MDA1MTEyMDY=,403,How does persistence work?,1794527,closed,0,,,2,2019-01-17T23:41:57Z,2019-01-19T05:47:55Z,2019-01-18T06:51:14Z,NONE,,I was under the impression that now.sh is for stateless microservices. So where are these SQLite databases stored and when do they get created and destroyed?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/403/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 407174173,MDU6SXNzdWU0MDcxNzQxNzM=,408,"Show metadata info (e.g. license, source) on custom SQL query pages",78356,closed,0,,,0,2019-02-06T10:43:34Z,2019-10-14T03:53:22Z,2019-10-14T03:53:22Z,NONE,,"Currently metadata info is not displayed on custom SQL pages. E.g. compare the footer of [this normal table page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/categories) with the footer [this custom SQL page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7?sql=select+*+from+categories). This is important in order to adhere to attribution license requirements.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/408/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 408376825,MDU6SXNzdWU0MDgzNzY4MjU=,409,Zeit API v1 does not work for new users - need to migrate to v2,209967,closed,0,,,3,2019-02-09T00:50:33Z,2020-04-06T15:44:46Z,2020-04-06T15:44:46Z,NONE,,"Hello there, This looks like a great tool. Thanks. Unfortunately, I hit the following error: ``` michael@hazel ~/src/cc-datasette/data/out datasette publish now cc-datasette.db > WARN! You are using an old version of the Now Platform. More: https://zeit.co/docs/v1-upgrade > Deploying /tmp/tmpjtrxwsyf/datasette under michaelmcandrew > Using project datasette > Error! You tried to create a Now 1.0 deployment. Please use Now 2.0 instead: https://zeit.co/upgrade ``` I'm guessing you might not hit this because you are not a 'new user' of Zeit (https://github.com/zeit/now-cli/issues/1805#issuecomment-452470953). Would it be a lot of work to upgrade to the new Zeit API, do you think?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/409/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 408518024,MDU6SXNzdWU0MDg1MTgwMjQ=,410,How to setup a multi database environment?,30607,closed,0,,,1,2019-02-10T09:39:24Z,2019-04-12T04:42:28Z,2019-04-12T04:42:27Z,NONE,,"Hi, first of all I need to write that Simon Willison and datasette are really great. I have probably a stupid question, but it seems to me that I do not have the reply in the documentation. I have installed datasette and run it with `datasette mydb.db`, and I can reach it on `http://127.0.0.1:8001`. But how to work with more than one db? Imagine I have ten sqlite databases, and that I need to explore/query these via datasette, how to run datasette? Is it possibile to create a sort of db index and than run `datasette serve myindex`? 
Thank you",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/410/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 410384988,MDU6SXNzdWU0MTAzODQ5ODg=,411,How to pass named parameter into spatialite MakePoint() function,1055831,closed,0,,,3,2019-02-14T16:30:22Z,2023-10-25T13:23:04Z,2019-05-05T12:25:04Z,NONE,,"Hi, datasette version: ""0.26.2"" extensions: spatialite: ""4.4.0-RC0"" sqlite version: ""3.22.0"" I have a table of airports with latitude and longitude columns. I've added spatialite (with KNN support). After creating the db using csvs-to-sqlit, I run these commands to setup the spatialite tables: ``` conn.execute('SELECT InitSpatialMetadata(1)') conn.execute(""SELECT AddGeometryColumn('airports', 'point_geom', 4326, 'POINT', 2);"") conn.execute('''UPDATE airports SET point_geom = GeomFromText('POINT('||""longitude""||' '||""latitude""||')',4326);''') conn.execute(""SELECT CreateSpatialIndex('airports', 'point_geom');"") ``` I'm attempting to create a canned query and have this in my metadata.json file: ``` ""find_airports_nearest_to_point"":{ ""sql"":""SELECT a.pos AS rank, b.id, b.name, b.country, b.latitude AS latitude, b.longitude AS longitude, a.distance / 1000.0 AS dist_km FROM KNN AS a JOIN airports AS b ON (b.rowid = a.fid) WHERE f_table_name = \""airports\"" AND ref_geometry = MakePoint( :Long , :Lat ) AND max_items = 10;""} ``` which doesn't seem to perform the templating of the name parameters correctly and I get no results. Have also tired: ``` MakePoint( || :Long || , || :Lat || ) ``` which returns this error: ``` near ""||"": syntax error ``` However I cannot seem to find the correct combination of named parameter syntax (:Lat) or sqlite concatenation operator to make it work. Any ideas if using named parameters inside functions is supported? Thanks Darren",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/411/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 411257981,MDU6SXNzdWU0MTEyNTc5ODE=,412,Linked Data(sette),43340,open,0,,,2,2019-02-18T00:38:14Z,2019-03-19T10:09:46Z,,NONE,,"I've a radical feature idea (possible first as an extension in order to experiment?): I'd like to link to a remote table from a remote database, e.g. with a function ""linked_datasette()"". So one could do following query: ``` SELECT foo.id, foo.a, remote_party.b FROM foo JOIN linked_datasette(""https://parlgov.datasettes.com/parlgov-b42a2f2"") AS remote_party ON foo.id=remote_party.id ``` This is inspired by SPARQL's SERVICE keyword for remote RDF ""endpoints"". There's a foundation in the SQL Standard called SQL/MED (https://rhaas.blogspot.com/2011/01/why-sqlmed-is-cool.html ). And here's an implementation from me in Postgres FDW to connect another Postgres ""endpoint"": https://pastebin.com/Fz2v64Cz .",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/412/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 418329842,MDU6SXNzdWU0MTgzMjk4NDI=,415,Add query parameter to hide SQL textarea,36796532,closed,0,,,3,2019-03-07T14:11:30Z,2019-03-15T09:30:57Z,2019-03-15T05:22:43Z,NONE,,It would be cool if there was a query parameter to hide / remove the SQL textarea. 
Then I could simply save a bookmark for a certain query and open it to see the data without having to scroll below the (long) SQL query first.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/415/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 435819321,MDU6SXNzdWU0MzU4MTkzMjE=,436,400 Error when trying to register new user via https://publish.datasettes.com/,317694,closed,0,,,1,2019-04-22T17:55:00Z,2021-01-04T20:15:42Z,2021-01-04T20:15:41Z,NONE,,"Behavior: When registering a new user via Zeit - confirmation is sent and screen acknowledges registered user... When clicking grant access the next screen is a white 400 error message. Replicated: Chrome and Firefox; 2 different email accounts",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/436/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 448189298,MDU6SXNzdWU0NDgxODkyOTg=,486,Ability to add extra routes and related templates,2181410,closed,0,,,2,2019-05-24T14:04:25Z,2019-05-24T14:43:28Z,2019-05-24T14:43:09Z,NONE,,"Hi Simon Thank for an excellent job! Datasette is such an obviously good idea (once you have that idea!) and so well done. The only thing that I miss, is the ability to add extras routes (with associated jinja2-templates). For most of the datasets, that I would like to publish, I would also like at least a page, that describes the data (semantics, provenance, biases...) and a page explaining our cookie- and privacy-policies (which would allows us to use something like Goggle Analytics). ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/486/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 450862577,MDU6SXNzdWU0NTA4NjI1Nzc=,496,Additional options to gcloud build command in cloudrun - timeout,1740337,closed,0,,,1,2019-05-31T15:43:55Z,2019-05-31T23:05:05Z,2019-05-31T23:05:05Z,NONE,,"I am trying to deploy a 3.1 GB dataset to cloudrun with datasette. Currrently the docker build times out. Would be nice to have a timeout flag or additional gcloud commands that could be specified. Here is the line https://github.com/simonw/datasette/blob/f825e2012109247fa246e2b938f8174069e574f1/datasette/publish/cloudrun.py#L78 I would be happy to submit a PR to allow for a timeout option. What are your ideas of allowing the user additional build publishing flag options?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/496/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 451513541,MDU6SXNzdWU0NTE1MTM1NDE=,498,Full text search of all tables at once?,7936571,closed,0,,,12,2019-06-03T14:24:43Z,2020-05-30T17:26:02Z,2020-05-30T17:26:02Z,NONE,,"Does datasette have a built-in way, in a browser, to do a full-text search of all columns, in all databases and tables, that have full-text search enabled? 
Is there a plugin that does this?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/498/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 451585764,MDU6SXNzdWU0NTE1ODU3NjQ=,499,Accessibility for non-techie newsies? ,7936571,open,0,,,3,2019-06-03T16:49:37Z,2019-06-05T21:22:55Z,,NONE,,"Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datsette provide this out-of-the-box, or as a plugin? ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/499/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 453131917,MDU6SXNzdWU0NTMxMzE5MTc=,502,Exporting sqlite database(s)?,7936571,closed,0,,,3,2019-06-06T16:39:53Z,2021-04-03T05:16:54Z,2019-06-11T18:50:42Z,NONE,,"I'm working on datasette from one computer. But if I want to work on it from another computer and want to copy the SQLite database(s) already on the Heroku datasette instance, how to I copy the database(s) to the second computer so that I can then update it and push to online via datasette's command line code that pushes code to Heroku?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/502/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 453243459,MDU6SXNzdWU0NTMyNDM0NTk=,503,Handle SQLite databases with spaces in their names?,7936571,closed,0,9599,,1,2019-06-06T21:20:59Z,2019-11-04T23:16:30Z,2019-11-04T23:16:30Z,NONE,,"I named my SQLite database ""Government workers"" and published it to Heroku. When I clicked the ""Government workers"" database online it lead to a 404 page: `Database not found: Government%20workers`. I believe this is because the database name has a space.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/503/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 457147936,MDU6SXNzdWU0NTcxNDc5MzY=,512,"""about"" parameter in metadata does not appear when alone",7936571,open,0,,,3,2019-06-17T21:04:20Z,2019-10-11T15:49:13Z,,NONE,,"Here's an example of metadata I have for one database on datasette. ``` ""Records-requests"": { ""tables"": { ""Some table"": { ""about"": ""This table has data."" } } } ``` The text in `about` does not show up when I publish the data. But it shows up after I add a `""source""` parameter in the metadata. Is this intended?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/512/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 457201907,MDU6SXNzdWU0NTcyMDE5MDc=,513,Is it possible to publish to Heroku despite slug size being too large?,7936571,closed,0,,,2,2019-06-18T00:12:02Z,2019-06-21T22:35:54Z,2019-06-21T22:35:54Z,NONE,,"I'm trying to push more than 1.5GB worth of SQLite databases -- 535MB compressed -- to Heroku but I get this error when I run the `datasette publish heroku` command. Compiled slug size: 535.5M is too large (max is 500M). 
Can I publish the databases and make datasette work on Heroku despite the large slug size?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/513/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 459397625,MDU6SXNzdWU0NTkzOTc2MjU=,514,Documentation with recommendations on running Datasette in production without using Docker,7936571,closed,0,,5971510,27,2019-06-21T22:48:12Z,2020-10-08T23:55:53Z,2020-10-08T23:33:05Z,NONE,,"I've got some SQLite databases too big to push to Heroku or the other services with built-in support in datasette. So instead I moved my datasette code and databases to a remote server on Kimsufi. In the folder containing the SQLite databases I run the following code. `nohup datasette serve -h 0.0.0.0 *.db --cors --port 8000 --metadata metadata.json > output.log 2>&1 &`. When I go to `http://my-remote-server.com:8000`, the site loads. But I know this is not a good long-term solution to running datasette on this server. What is the ""correct"" way to have this site run, preferably on server port 80?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/514/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 459882902,MDU6SXNzdWU0NTk4ODI5MDI=,526,Stream all results for arbitrary SQL and canned queries,50578294,open,0,,,23,2019-06-24T13:09:45Z,2022-09-28T04:01:25Z,,NONE,,"I think that there is a difficulty with canned queries. When I want to stream all results of a canned query TwoDays I get only first 1.000 records. Example: `http://myserver/history_sample/two_days.csv?_stream=on` returns only first 1.000 records. If I do the same with the whole database i.e. `http://myserver/history_sample/database.csv?_stream=on` I get correctly all records. Any ideas?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/526/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 459936585,MDU6SXNzdWU0NTk5MzY1ODU=,527,Unable to use rank when fts-table generated with csvs-to-sqlite,2181410,closed,0,,,3,2019-06-24T14:49:48Z,2019-06-24T15:21:18Z,2019-06-24T15:09:10Z,NONE,,"Hi Simon. If i generate a fts-table with the csvs-to-sqlite f-option, I'm unable to use (in datasette's GUI) the internal ranking of the table for sorting or viewing, but if I generate the fts-table with the enable-fts argument from sqlite-utils, everyrthing works ok. Eg.: datasette, version 0.28 sqlite-utils, version 1.2.1 csvs-to-sqlite, version 0.9 No column named rank with these commands: $ csvs-to-sqlite minutes.csv minutes.db -f text_data $ datasette -i minutes.db select rank, * from minutes_fts where minutes_fts match 'dog' Everything ok with these commands: $ csvs-to-sqlite minutes.csv minutes.db $ sqlite-utils enable-fts minutes.db text_data $ datasette -i minutes.db select rank, * from minutes_fts where minutes_fts match 'dog' Am I doing something wrong? 
Thank you for a great application!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/527/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 460396952,MDExOlB1bGxSZXF1ZXN0MjkxNTM0NTk2,529,Use keyed rows - fixes #521,1383872,closed,0,,,1,2019-06-25T12:33:48Z,2019-06-25T12:35:07Z,2019-06-25T12:35:07Z,NONE,simonw/datasette/pulls/529,"Supports template syntax like this: ``` {% for row in display_rows %}

{{ row[""First_Name""] }} {{ row[""Last_Name""] }}

... ```",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/529/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 467218270,MDU6SXNzdWU0NjcyMTgyNzA=,558,Support unicode in url,380586,closed,0,,,4,2019-07-12T04:43:24Z,2019-07-15T01:29:30Z,2019-07-14T02:49:33Z,NONE,,"Hi, I defined some custom queries in my `metadata.json`. There are Chinese characters in the names of the queries. So the urls are like `http://127.0.0.1:8001/mydb/测试查询`. When opening such urls, datasette will throw an exception. ``` Traceback (most recent call last): File ""/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/asgi.py"", line 100, in __call__ return await view(new_scope, receive, send) File ""/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/asgi.py"", line 172, in view request, **scope[""url_route""][""kwargs""] File ""/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/views/base.py"", line 267, in get request, database, hash, correct_hash_provided, **kwargs File ""/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/views/base.py"", line 471, in view_get for key in self.ds.renderers.keys() File ""/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/views/base.py"", line 471, in for key in self.ds.renderers.keys() File ""/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/__init__.py"", line 655, in path_with_format path = request.path File ""/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/asgi.py"", line 49, in path self.scope.get(""raw_path"", self.scope[""path""].encode(""latin-1"")) UnicodeEncodeError: 'latin-1' codec can't encode characters in position 9-11: ordinal not in range(256) ``` This used to work when datasette was based on sanic. Btw, thanks for the great work!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/558/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 473307794,MDU6SXNzdWU0NzMzMDc3OTQ=,565,Conflict between datasette and uvicorn click versions,440503,closed,0,,,1,2019-07-26T11:13:40Z,2020-10-02T00:09:55Z,2020-10-02T00:09:55Z,NONE,,"Hello Datasette is awesome thanks so much! I not very familiar with Python but I think there is a problem with datasette docker builds I keep getting this error ``` ERROR: uvicorn 0.8.4 has requirement click==7.*, but you'll have click 6.0 which is incompatible. ERROR: datasette 0.29.2 has requirement click~=7.0, but you'll have click 6.0 which is incompatible. ``` The full log from the docker build is here - https://gist.github.com/jonheslop/e01cd322e761cfaf34f0cb83f86411b0 Just in case it’s helpful this is my setup - https://github.com/dotwatcher/dotwatcher-data",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/565/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 476437213,MDU6SXNzdWU0NzY0MzcyMTM=,566,Unexpected keyword argument 'hidden',8330931,closed,0,,,1,2019-08-03T10:07:57Z,2019-08-03T16:13:36Z,2019-08-03T16:13:36Z,NONE,,"I couldn't get a test example running. I am running python 3.6.8 and tried both windows and windows subsystem for linux, getting the same error. My test.db was created by converting a five line csv file with csvs-to-sqlite. 
The csv file is:
```
col1, col2, col3
1,2,3
4,5,6
7,8,9
10,11,12
```
Here is the error message:
```
(myvenv) davido@DESKTOP-L29G79U:~/dot/datasette-eg$ datasette test.db
Traceback (most recent call last):
  File ""/home/davido/dot/datasette-eg/myvenv/bin/datasette"", line 7, in <module>
    from datasette.cli import cli
  File ""/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/datasette/cli.py"", line 2, in <module>
    import uvicorn
  File ""/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/uvicorn/__init__.py"", line 2, in <module>
    from uvicorn.main import Server, main, run
  File ""/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/uvicorn/main.py"", line 224, in <module>
    headers: typing.List[str],
  File ""/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/click/decorators.py"", line 170, in decorator
    _param_memo(f, OptionClass(param_decls, **attrs))
  File ""/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/click/core.py"", line 1430, in __init__
    Parameter.__init__(self, param_decls, type=type, **attrs)
TypeError: __init__() got an unexpected keyword argument 'hidden'
```
Thanks.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/566/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 476852861,MDU6SXNzdWU0NzY4NTI4NjE=,568,Add database_color as a configurable option,50906992,open,0,,,1,2019-08-05T13:14:45Z,2023-08-11T05:19:42Z,,NONE,,This would be really useful as it would allow us to tie in with colour schemes.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/568/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 494685791,MDU6SXNzdWU0OTQ2ODU3OTE=,574,Improve usage description of --host option,132978,closed,0,,,2,2019-09-17T15:12:12Z,2019-11-01T21:58:17Z,2019-11-01T21:57:54Z,NONE,,"It would be nice if the `--host` option had a clearer description. I tried to get datasette running on an AWS instance and it took a while to realize it was only listening on localhost. So I wanted to make it listen on a non-localhost interface and tried giving a couple of values to `--host` (a host name, then an interface name), but none of them worked. In the end I read the source to see that the option is passed to `uvicorn` and looked at the uvicorn docs, which also didn't help. Then I searched the web for ""example running datasette on a host"" which led me to https://github.com/simonw/datasette/issues/514 where I saw someone using `-h 0.0.0.0`. I tried that and it works. That usage could be mentioned somewhere, and might save someone else some time.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/574/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 499954048,MDExOlB1bGxSZXF1ZXN0MzIyNTI5Mzgx,578,Added support for multi arch builds,887095,closed,0,,,3,2019-09-29T18:43:03Z,2019-11-13T19:13:15Z,2019-11-13T19:13:15Z,NONE,simonw/datasette/pulls/578,"Minor changes in Dockerfile and a new Makefile to support Docker multi-architecture builds. `make` will build one image per architecture and push them as one Docker manifest to Docker Hub. 
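For comparison, a minimal sketch of the same idea using Docker's buildx (image name and platform list assumed):
```
# Build for several architectures and push a single multi-arch manifest
docker buildx build --platform linux/amd64,linux/arm64 -t IMAGE_NAME:latest --push .
```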
Feel free to change `IMAGE_NAME` to `datasetteproject/datasette` to update your official Docker Hub image(s).",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/578/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 505512251,MDU6SXNzdWU1MDU1MTIyNTE=,588,Queries per DB table in metadata.json,12617395,closed,0,,,3,2019-10-10T21:08:19Z,2019-10-21T12:58:22Z,2019-10-21T01:48:42Z,NONE,,"It doesn't appear possible to have separate queries defined per database table. When I do something like below, my table descriptions show up but not the queries:
```json
""databases"": {
  ""MYDB"": {
    ""tables"": {
      ""MYFIRSTTABLE"": {
        ""source"": ""Test"",
        ""source_url"": ""https://www.google.com"",
        ""queries"": {
          ""Query 1"": {
            ""sql"": ""select * from MYFIRSTTABLE"",
            ""title"": ""Query 1"",
            ""description"": ""This is the first query""
          }
        }
      },
      ""MYSECONDTABLE"": {
        ""source"": ""Test2"",
        ""source_url"": ""https://www.google.com"",
        ""queries"": {
          ""Query 2"": {
            ""sql"": ""select * from MYSECONDTABLE;"",
            ""title"": ""Query 2"",
            ""description"": ""This is the second query""
          }
        }
      }
    }
  }
}
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/588/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 506183241,MDU6SXNzdWU1MDYxODMyNDE=,593,make uvicorn an optional dependency (because not ok on windows python yet),4312421,closed,0,,,3,2019-10-12T12:51:07Z,2019-10-13T06:22:08Z,2019-10-13T06:22:07Z,NONE,,"would it be possible to:
- remove the mandatory uvicorn dependency?
- possibly fall back to hypercorn?

reason:
- uvloop is not yet supported on Windows/Python-3.8 and below; support may arrive with Python-3.9 only. 
- it seems like a six-line effort (but I'm no expert)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/593/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 506297048,MDU6SXNzdWU1MDYyOTcwNDg=,594,upgrade to uvicorn-0.9 to be Python-3.8 friendly,4312421,closed,0,,,3,2019-10-13T09:23:43Z,2019-11-12T04:47:04Z,2019-11-12T04:47:04Z,NONE,,uvicorn-0.8 relies on websockets-0.7 which lacks python-3.8 compatibility,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/594/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 506300941,MDExOlB1bGxSZXF1ZXN0MzI3NTQxMDQ2,595,bump uvicorn to 0.9.0 to be Python-3.8 friendly,4312421,closed,0,,,9,2019-10-13T10:00:04Z,2019-11-12T04:46:48Z,2019-11-12T04:46:48Z,NONE,simonw/datasette/pulls/595,"as uvicorn-0.9 is needed to get websockets-8.0.2, which is needed to have Python-3.8 compatibility",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/595/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 508100844,MDU6SXNzdWU1MDgxMDA4NDQ=,598,Character encoding bug with CSV export,46313,closed,0,,,1,2019-10-16T21:09:30Z,2021-06-17T18:13:20Z,2019-10-18T22:52:21Z,NONE,,"I was just poking around, and at [this URL](https://sql-murder-mystery.datasette.io/sql-murder-mystery/crime_scene_report.csv?_stream=on&type=arson&_size=max), I encountered this error:
```
'latin-1' codec can't encode character '\u2019' in position 27: ordinal not in range(256)
```
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/598/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 510076368,MDU6SXNzdWU1MTAwNzYzNjg=,605,Support queries at the table level,12617395,open,0,,,2,2019-10-21T15:58:30Z,2019-10-30T18:55:37Z,,NONE,,"Per the issue described in [issue #588](https://github.com/simonw/datasette/issues/588), it was determined queries are not supported at the table level. Per my last comment in the issue, I'd like to request support for this as it would help eliminate errors in the event certain tables are not present in the database.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/605/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 512996469,MDU6SXNzdWU1MTI5OTY0Njk=,607,Ways to improve fuzzy search speed on larger data sets?,8431341,closed,0,,,6,2019-10-27T17:31:37Z,2019-11-07T03:38:10Z,2019-11-07T03:38:10Z,NONE,,"I have a SQLite table with 16 million rows in it. Having read @simonw's article ""[Fast Autocomplete Search for Your Website](https://24ways.org/2018/fast-autocomplete-search-for-your-website/)"" I was curious to try datasette to see what kind of query performance I could get out of it. In truth I don't need to do full-text search, since all I would like to do is give my users a way to search for the names of investors such as ""Warren Buffett"" or ""Tim Cook"" (whose names are in a single column). 
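One way to support that kind of name lookup without full-text search is a lean table of just the distinct names, sketched below with assumed table and column names:
```sql
-- Sketch only: a lookup table of distinct investor names
CREATE TABLE investor_names AS
  SELECT DISTINCT investor_name FROM investments;
-- An index keeps prefix lookups (LIKE 'war%') fast
CREATE INDEX idx_investor_names ON investor_names(investor_name);
```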
On the first search, Datasette takes over 20 seconds to return all records associated with `elon musk`:
> ![image](https://user-images.githubusercontent.com/8431341/67638889-a86e1100-f8b7-11e9-9f7e-a9d13a42e988.png)
> ![image](https://user-images.githubusercontent.com/8431341/67638825-ed457800-f8b6-11e9-94d1-b44f1a40ee8c.png)

If I rerun the same search, it then takes almost 9 seconds:
> ![image](https://user-images.githubusercontent.com/8431341/67638908-e4a17180-f8b7-11e9-9d00-748c80ef1f21.png)

That's far too slow to implement an autocomplete feature. I could reduce the latency by making a special table of only unique investor names, thereby reducing the search space to less than a million rows (then I'd need to implement a way to add only new investor names to the table as I receive new data, about 4,000 rows a day). If I did that, I'm still concerned the new table wouldn't be lean enough to look up investor names quickly. Plus, even if I can implement the autocomplete feature, I would still finally have to look up records for that investor, which would take between 8 and 20 seconds. Are there any tricks for speeding this up? Here's my hardware:
> ![image](https://user-images.githubusercontent.com/8431341/67638861-55945980-f8b7-11e9-96a8-ca76c7c68c5d.png)
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/607/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 518506242,MDU6SXNzdWU1MTg1MDYyNDI=,616,Datasette FTS detection bug,49656826,closed,0,,,2,2019-11-06T14:25:47Z,2019-11-08T15:31:33Z,2019-11-08T02:06:56Z,NONE,,"I'm having trouble with datasette. I deployed EXACTLY the same project on two different apps on Heroku. Both apps have databases (though not all of them) with FTS activated, but only one app detects FTS and works fine. You can take a look here:
With search: http://teste-templates.herokuapp.com/amazonia_protege/car
Without search: http://bases.vortex.media/amazonia_protege/car
![teste](https://user-images.githubusercontent.com/49656826/68306310-11a80e00-0088-11ea-8d1c-db3bd3375518.jpg) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/616/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 527670799,MDU6SXNzdWU1Mjc2NzA3OTk=,639,updating metadata.json without recreating the app,172847,open,0,,,6,2019-11-24T09:19:53Z,2019-11-30T06:08:50Z,,NONE,,"I've successfully ""uploaded"" an SQLite database (with a metadata.json file) to heroku using:
```
$ datasette publish heroku so-sales.db -m metadata.json -n so-sales
```
The question is: how can I modify the (small) metadata.json file without having to upload the (large) SQLite database? The directions on heroku indicate I should run:
```
heroku git:clone -a so-sales
```
But this just results in an empty directory with a warning:
```
warning: You appear to have cloned an empty repository.
```
I've been able to ""clone"" the heroku ""app"" using the command:
```
$ heroku slugs:download -a so-sales
```
but this is not a git repository.... Ideally, it seems to me, there'd be an option of the `datasette` CLI to allow a file to be updated, or there'd be some way to create a local git ""clone"" of the app so that the heroku instructions for ""Deploying with git"" would apply. (p.s. I ran `datasette publish heroku -m metadata.json -n so-sales` in the hope that that would not cause the .db file to be wiped, but of course it was.) 
(p.p.s. Thanks for Datasette!)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/639/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 531502365,MDU6SXNzdWU1MzE1MDIzNjU=,646,Make database level information from metadata.json available in the index.html template,18017473,open,0,,3268330,3,2019-12-02T19:55:10Z,2022-03-15T20:50:34Z,,NONE,,"I searched the issues here and didn't find anything related to what I want. I want the database-level information from the JSON (like title, source and source_url) to be available on the index page. I tried some small tweaks on the Python and HTML files, but failed to get that result. Is there a way? Thanks!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/646/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 539590148,MDU6SXNzdWU1Mzk1OTAxNDg=,651,fts5 syntax error when using punctuation,2181410,closed,0,,,3,2019-12-18T10:25:35Z,2021-07-14T19:26:06Z,2019-12-30T06:42:55Z,NONE,,"Hi Simon, I get a syntax error when using punctuation or special characters in a full-text search (using fts5). I created the virtual table using sqlite-utils' ""enable-fts""-command. The same error appears on Niche Museums [https://www.niche-museums.com/browse/search?q=park.](https://www.niche-museums.com/browse/search?q=park.), but the same kind of query works fine in most of your other datasette-examples, e.g. register-of-members-interests [https://register-of-members-interests.datasettes.com/regmem-98dc8b7/items?_search=mins.](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/items?_search=mins.) What am I doing wrong? Many thanks! ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/651/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 539985017,MDExOlB1bGxSZXF1ZXN0MzU0ODY5Mzkx,652,Quick (and uninformed and perhaps misguided) attempt to add a url for hosting datasette at a particular host/URI,132978,closed,0,,,1,2019-12-18T23:37:16Z,2020-03-24T22:14:50Z,2020-03-24T22:14:50Z,NONE,simonw/datasette/pulls/652,"As usual, I don't really know what I'm doing... so this is just a suggested approach. I've not written tests, I've not run the tests, and I don't know if I've missed some absolute URLs that would need to have the leading slash dropped. BUT, I tested it with `--config base_url:http://127.0.0.1:8001/` on the command line, and from what little I know about datasette it's at least working in some obvious cases. My changes are based on what I saw in https://github.com/simonw/datasette/commit/8da2db4b71096b19e7a9ef1929369b8483d448bf (thanks!) I'm happy to be more thorough on this if you think it's worth pursuing. Fixes #394 (he said, optimistically).",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/652/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 548591089,MDU6SXNzdWU1NDg1OTEwODk=,657,Allow creation of virtual tables at startup,1055831,open,0,,,4,2020-01-12T16:10:55Z,2021-01-15T20:24:35Z,,NONE,,"Hi, I've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow. 
https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html https://github.com/cldellow/sqlite-parquet-vtable This works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the spatialite extension, I knew that datasette supports loading extensions in the underlying sqlite instance. However, I hit a blocker: the current design only allows SELECT statements to be executed, so I am unable to execute the crucial CREATE VIRTUAL TABLE ... command that is required to load the data from the parquet file into the table. It seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself. Could this be done as a datasette plugin, or would this require more fundamental changes at initialisation time? My thoughts are that something at init time could detect that the user was loading a *.parquet file and then switch to a mode where it loads that via the ""CREATE VIRTUAL TABLE..."" rather than loading the *.db file in the default case? I'm happy to contribute code and testing, I just need some pointers on the best approach. Thanks Darren",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/657/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 550293770,MDU6SXNzdWU1NTAyOTM3NzA=,658,How do I use the app.css as style sheet?,49656826,open,0,,,2,2020-01-15T16:27:57Z,2020-02-07T00:29:50Z,,NONE,,"Simon, I'm trying to use the app.css (in the static folder) as a style sheet, but datasette on Heroku simply ignores it! I read everything about customization here and on readthedocs but still can't get it to work. Is this possible? Thanks!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/658/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 551834842,MDU6SXNzdWU1NTE4MzQ4NDI=,659,README information is obscured by feature history,55480210,closed,0,,,1,2020-01-18T22:34:51Z,2020-12-10T23:28:51Z,2020-12-10T23:28:51Z,NONE,,"While it's sometimes valuable to know how a project has developed, there is usually little justification for including this information in the README, and certainly not immediately after other key information such as ""what does this package do, and who might want to use it?"" Might I recommend that the feature history is migrated to an Appendix in the documentation?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/659/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 555832585,MDU6SXNzdWU1NTU4MzI1ODU=,661,"--port option to expose a port other than 8001 in ""datasette package""",134771,closed,0,,,3,2020-01-27T21:05:56Z,2020-01-30T04:17:52Z,2020-01-29T22:46:45Z,NONE,,"I see how to alter the port using `datasette serve -p XXX` per the docs. However, I'm packaging it up to serve the container on AppEngine flexible, which [requires](https://cloud.google.com/appengine/docs/flexible/custom-runtimes/build#listening_to_port_8080) that the container is serving traffic on port 8080. 
https://github.com/simonw/datasette/blob/7950105c278b140e6cb665c68b59df219870f9bc/Dockerfile#L41 Is there a way to inject a non-default port into the Dockerfile, or should I just do something like `sed` to replace 8001 with 8080 after `datasette package` has done its thing? Thanks for the advice.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/661/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 556814876,MDU6SXNzdWU1NTY4MTQ4NzY=,662,Escape_fts5_query-hookimplementation does not work with queries to standard tables,2181410,closed,0,,,5,2020-01-29T11:56:03Z,2020-01-30T00:30:20Z,2020-01-30T00:30:19Z,NONE,,"Hi Simon, Thank you for adding the escape_function, but it does not work on my datasette installation (0.33). I've added the following file to my datasette dir as /plugins/sql_functions.py:
```python
from datasette import hookimpl

def escape_fts_query(query):
    bits = query.split()
    return ' '.join('""{}""'.format(bit.replace('""', '')) for bit in bits)

@hookimpl
def prepare_connection(conn):
    conn.create_function(""escape_fts_query"", 1, escape_fts_query)
```
It has no effect on the standard queries to the tables though, as they still produce errors when including any characters like '-', '/', '+' or '?'. Does the function only work when using custom queries, where I can include the escape_fts function explicitly in the sql query?

PS. I'm calling datasette with --plugins=plugins, and my other plugins work just fine.
PPS. The fts5 virtual table is created with 'sqlite3' like so:
```sql
CREATE VIRTUAL TABLE ""cases_fts"" USING FTS5(
  title, subtitle, resume, suggestion, presentation,
  detail = full,
  content_rowid = 'id',
  content = 'cases',
  tokenize='unicode61', 'remove_diacritics 2', 'tokenchars ""-_""'
);
```
Thanks! _Originally posted by @clausjuhl in https://github.com/simonw/datasette/issues/651#issuecomment-579675357_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/662/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 562787785,MDU6SXNzdWU1NjI3ODc3ODU=,667,Allow injecting configuration data from plugins,870184,closed,0,,,2,2020-02-10T19:50:15Z,2020-02-12T16:18:22Z,2020-02-12T09:21:22Z,NONE,,"I'm trying to customize datasette as an explorer for [CLDF](https://cldf.clld.org) datasets. Such datasets can be converted automatically to SQLite, which then can be fed to datasette (e.g. https://github.com/cldf/cookbook/blob/master/recipes/datasette/README.md). Part of this customization would be support for the ""special"" data types described in the [CLDF ontology](https://cldf.clld.org/v1.0/terms.rdf). But while rendering of the values can be customized via the `render_cell` hook in a plugin, other things, e.g. custom labels for foreign keys, must be specified through the config file. 
It would be nice to be able to programmatically inject config data from plugins as well.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/667/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 567902704,MDU6SXNzdWU1Njc5MDI3MDQ=,675,--cp option for datasette publish and datasette package for shipping additional files and directories,141844,open,0,,,12,2020-02-19T22:55:56Z,2020-12-28T18:49:21Z,,NONE,,"I’m working on integrating Datasette into a documentation-oriented publishing workflow internally in my company, and in order to deploy the Docker image created by `datasette package` I need to add an additional file to the image — in my case, it’s a sort of a deployment directive. I’ve worked out a way to do this after the image has been created, but it’s convoluted and brittle. So it’d be excellent if there was an additional option for this command, something like `--copy`. I’d envision it looking something like: ```shell $ datasette package --copy /the/source/path:/the/target/path data.db ``` I’d be happy to help design, specify, implement, and test this feature, if you’d be interested. Thanks for the fantastic tools!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/675/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 568091133,MDU6SXNzdWU1NjgwOTExMzM=,676,?_searchmode=raw option for running FTS searches without escaping characters,58088336,closed,0,,,9,2020-02-20T06:56:57Z,2020-02-25T05:57:24Z,2020-02-25T05:56:04Z,NONE,,"After version 0.34, I am not able to use wildcards in the `_search` option (or the full-text search). It will not return any result unless I specify the whole word for the text search. If I use 'match :search || ""*""' in the sql statement then it works as expected.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/676/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 569317377,MDU6SXNzdWU1NjkzMTczNzc=,681,Cache-header missing in http-response,2181410,closed,0,,,4,2020-02-22T10:50:45Z,2020-02-24T20:53:57Z,2020-02-24T20:53:56Z,NONE,,"Hi Simon. I need some help with both understanding and adding http-headers. If I call datasette on localhost with --config default_cache_ttl:120 and --cors, I only get the following response-headers:
```
access-control-allow-origin: *
content-type: text/html; charset=utf-8
date: Sat, 22 Feb 2020 10:32:15 GMT
referrer-policy: no-referrer
server: uvicorn
transfer-encoding: chunked
```
CORS works, but no caching header is set. The same thing happens if I use the command in a Dockerfile and run datasette with docker. Second, how can one add headers to uvicorn? I've tried to add uvicorn commands to the Dockerfile, before the final datasette command, but it doesn't work. Is there any way to add headers to the uvicorn.run() command in datasette? 
In particular, I would like to add some of the missing security headers. Thank you for a great product!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/681/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 573583971,MDU6SXNzdWU1NzM1ODM5NzE=,689,"""Templates considered"" comment broken in >=0.35",35075,closed,0,,,6,2020-03-01T17:31:21Z,2020-04-05T19:39:44Z,2020-04-05T19:39:44Z,NONE,,"Noticed that the ""Templates Considered"" comment is missing in 0.37. I believe I traced it back to #664, as you can see it in https://v0-34.datasette.io/ but not https://v0-35.datasette.io/. Looking at the template context debug between the two, you can see what is missing from 0.35 vs. 0.34:
```diff
< ""datasette_version"": ""0.34"",
< ""app_css_hash"": ""ffa51a"",
< ""select_templates"": [
<     ""*index.html""
< ],
< ""zip"": """",
< ""body_scripts"": [],
< ""extra_css_urls"": """",
< ""extra_js_urls"": """",
< ""format_bytes"": """",
< ""database_url"": "">"",
< ""database_color"": "">""
---
> ""datasette_version"": ""0.35"",
> ""database_url"": "">"",
> ""database_color"": "">""
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/689/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 605806386,MDU6SXNzdWU2MDU4MDYzODY=,735,"Error when I click on ""View and edit SQL""",30607,closed,0,,,2,2020-04-23T19:31:32Z,2020-04-28T06:10:20Z,2020-04-27T19:00:30Z,NONE,,"Hi, when I do it [here](https://my-database.now.sh/commissioniComunePalermo/youtube), I get an ""unrecognized token: ""["""" error. Is this normal? Thank you",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/735/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 606720674,MDU6SXNzdWU2MDY3MjA2NzQ=,736,strange behavior using accented characters,30607,closed,0,,,3,2020-04-25T08:34:51Z,2020-04-28T06:09:28Z,2020-04-27T18:59:16Z,NONE,,"Hi, when I search `incompatibilità` [here](https://my-database.now.sh/commissioniComunePalermo/youtube), using full-text search, it becomes `incompatibilità` and I have no result. If I encode the `à` char in the URL (`incompatibilit%C3%A0`) I have the right result. 
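For reference, that encoded form can be produced with Python's standard library:
```python
from urllib.parse import quote

print(quote('incompatibilità'))  # prints incompatibilit%C3%A0
```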
![image](https://user-images.githubusercontent.com/30607/80275201-00a79380-86e0-11ea-865e-f7e1474e8098.png) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/736/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 608058890,MDU6SXNzdWU2MDgwNTg4OTA=,744,link_or_copy_directory() error - Invalid cross-device link,30607,closed,0,,,28,2020-04-28T06:26:45Z,2020-05-28T14:32:53Z,2020-05-27T06:01:28Z,NONE,,"Hi, when I run
```
datasette publish heroku -n myapp --template-dir ./template mydb.db
```
I get this error:
```
Traceback (most recent call last):
  File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py"", line 607, in link_or_copy_directory
    shutil.copytree(src, dst, copy_function=os.link)
  File ""/usr/lib/python3.7/shutil.py"", line 365, in copytree
    raise Error(errors)
shutil.Error: [('/myfolder/youtubeComunePalermo/processing/./template/base.html', '/tmp/tmps9_4mzc4/templates/base.html', ""[Errno 18] Invalid cross-device link: '/myfolder/youtubeComunePalermo/processing/./template/base.html' -> '/tmp/tmps9_4mzc4/templates/base.html'""), ('/myfolder/youtubeComunePalermo/processing/./template/index.html', '/tmp/tmps9_4mzc4/templates/index.html', ""[Errno 18] Invalid cross-device link: '/myfolder/youtubeComunePalermo/processing/./template/index.html' -> '/tmp/tmps9_4mzc4/templates/index.html'"")]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ""/home/aborruso/.local/bin/datasette"", line 8, in <module>
    sys.exit(cli())
  File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 829, in __call__
    return self.main(*args, **kwargs)
  File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 782, in main
    rv = self.invoke(ctx)
  File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File ""/home/aborruso/.local/lib/python3.7/site-packages/click/core.py"", line 610, in invoke
    return callback(*args, **kwargs)
  File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py"", line 103, in heroku
    extra_metadata,
  File ""/usr/lib/python3.7/contextlib.py"", line 112, in __enter__
    return next(self.gen)
  File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py"", line 191, in temporary_heroku_directory
    os.path.join(tmp.name, ""templates""),
  File ""/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py"", line 609, in link_or_copy_directory
    shutil.copytree(src, dst)
  File ""/usr/lib/python3.7/shutil.py"", line 321, in copytree
    os.makedirs(dst)
  File ""/usr/lib/python3.7/os.py"", line 221, in makedirs
    mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/tmp/tmps9_4mzc4/templates'
```
I'm attaching my very basic template folder. 
Thank you [template.zip](https://github.com/simonw/datasette/files/4543751/template.zip) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/744/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 611835285,MDU6SXNzdWU2MTE4MzUyODU=,752,Non-utf8 encoding in exceptionhandlers and custom-pages,2181410,closed,0,,,1,2020-05-04T12:24:42Z,2020-05-04T17:42:20Z,2020-05-04T17:42:20Z,NONE,,"Hi Simon. Whenever a response is not piped through a router-view, the template is encoded in latin-1 (I think). This is especially a problem (for me) with the new custom_pages-functionality, but also problematic with the 404- and 500-handlers. Thanks!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/752/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 612378203,MDU6SXNzdWU2MTIzNzgyMDM=,757,Question: Any fixed date for the release with the utf8-encoding fix?,2181410,closed,0,,,3,2020-05-05T06:51:20Z,2020-05-06T18:41:29Z,2020-05-06T18:41:29Z,NONE,,Just a little impatient :),107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/757/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 612382643,MDU6SXNzdWU2MTIzODI2NDM=,758,Question: Access to immutable database-path,2181410,open,0,,,6,2020-05-05T07:01:18Z,2020-05-28T08:23:27Z,,NONE,,"Hi Simon, Is there anywhere in the app-context where one can access the hashed urlpath of the database? Currently it's included in the template-context (`databases[0][""path""]`) when rendering urls of the database (eg. `/db-44b06v9/cases`...), but where can I find the hashed url when rendering the index-page? I'm trying to avoid redirects. Thanks!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/758/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 612673948,MDU6SXNzdWU2MTI2NzM5NDg=,759,fts search on a column doesn't work anymore due to escape_fts,133845,closed,0,,,3,2020-05-05T15:03:44Z,2021-07-16T02:11:54Z,2020-05-06T17:50:57Z,NONE,,"Hi, and first, thank you for the awesome work you've done on this project. On a db indexed for full-text search, I can't query on an indexed column anymore. The request ""cauvin language:ita"" runs smoothly on an old version of datasette but not on the current version. Compare the current version's query:
```sql
select uuid, title, authors, year, series, language, formats, publisher, tags, identifiers
from summary
where rowid in (select rowid from summary_fts where summary_fts match escape_fts(:search))
order by uuid limit 101
```
to an older version's:
```sql
select title, authors, series, uuid, language, identifiers, tags, publisher, formats, year, links
from summary
where rowid in (select rowid from summary_fts where summary_fts match :search)
order by uuid limit 101
```
_language_ is a searchable column, but now the whole search string ""cauvin language:ita"" is treated literally as a single search term; columns are not parsed. 
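A possible workaround is the raw search mode (see #676), which passes the query through unescaped, along these lines:
```
/summary?_search=cauvin+language%3Aita&_searchmode=raw
```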
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/759/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 617323873,MDU6SXNzdWU2MTczMjM4NzM=,766,Enable wildcard-searches by default,2181410,open,0,,,2,2020-05-13T10:14:48Z,2021-03-05T16:35:21Z,,NONE,,"Hi Simon. It seems that datasette currently has wildcard-searches disabled by default (along with the boolean search-options, NEAR-queries and more, and despite the docs). If I try out the search-url provided in the [docs](https://datasette.readthedocs.io/en/stable/full_text_search.html#the-table-page-and-table-view-api) (https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort), it does not handle wildcard-searches, and I'm unable to make it work on my datasette-instance. I would argue that wildcard-searches is such a standard query, that it should be enabled by default. Requiring ""_searchmode=raw"" when using prefix-searches seems unnecessary. Plus: What happens to non-ascii searches when using ""_searchmode=raw""? Is the ""escape_fts""-function from datasette.utils ignored? Thanks! /Claus",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/766/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 637395097,MDU6SXNzdWU2MzczOTUwOTc=,838,Incorrect URLs when served behind a proxy with base_url set,79913,closed,0,,6026070,14,2020-06-11T23:58:55Z,2021-11-20T19:35:48Z,2021-11-20T19:35:48Z,NONE,,"I'm running `datasette serve --config base_url:/foo/ …`, proxying to it with this Apache config: ProxyPass /foo/ http://localhost:8001/ ProxyPassReverse /foo/ http://localhost:8001/ and then accessing it via `https://example.com/foo/`. Although many of the URLs in the pages are correct (presumably because they either use absolute paths which include `base_url` or relative paths), the faceting and pagination links still use fully-qualified URLs pointing at `http://localhost:8001`. I looked into this a little in the source code, and it seems to be an issue anywhere `request.url` or `request.path` is used, as these contain the values for the request between the frontend (Apache) and backend (Datasette) server. 
Those properties are primarily used via the `path_with_…` family of utility functions and the `Datasette.absolute_url` method.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/838/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 642388564,MDU6SXNzdWU2NDIzODg1NjQ=,858,publish heroku does not work on Windows 10,870912,open,0,,,7,2020-06-20T14:40:28Z,2021-06-10T17:44:09Z,,NONE,,"When executing ""datasette publish heroku schools.db"" on Windows 10, I get the following error
```shell
  File ""c:\users\dell\.virtualenvs\sec-schools-jn-cwk8z\lib\site-packages\datasette\publish\heroku.py"", line 54, in heroku
    line.split()[0] for line in check_output([""heroku"", ""plugins""]).splitlines()
  File ""c:\python38\lib\subprocess.py"", line 411, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File ""c:\python38\lib\subprocess.py"", line 489, in run
    with Popen(*popenargs, **kwargs) as process:
  File ""c:\python38\lib\subprocess.py"", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File ""c:\python38\lib\subprocess.py"", line 1307, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
Changing https://github.com/simonw/datasette/blob/55a6ffb93c57680e71a070416baae1129a0243b8/datasette/publish/heroku.py#L54 to
```python
line.split()[0] for line in check_output([""heroku"", ""plugins""], shell=True).splitlines()
```
as well as the other `check_output()` and `call()` within the same file leads me to another recursive error about temp files",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/858/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 644582921,MDU6SXNzdWU2NDQ1ODI5MjE=,865,"base_url doesn't seem to work when adding criteria and clicking ""apply""",6739646,closed,0,,6026070,11,2020-06-24T12:39:57Z,2020-11-12T23:49:24Z,2020-10-20T05:22:59Z,NONE,,"Over on Apache Tika, we're using datasette to allow users to make sense of the metadata for our file regression testing corpus. This could be user error in how I've set up the reverse proxy! 
I started datasette like so:
```
docker run -d -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette \
  datasette -p 8001 -h 0.0.0.0 /mnt/corpora-metadata.db \
  --config sql_time_limit_ms:60000 --config base_url:/datasette/
```
I then reverse proxied like so:
```
ProxyPreserveHost On
ProxyPass /datasette http://x.y.z.q:xxxx
ProxyPassReverse /datasette http://x.y.z.q:xxx
```
Regular SQL works perfectly: https://corpora.tika.apache.org/datasette/corpora-metadata?sql=select+mime_string%2C+count%281%29+as+cnt%0D%0Afrom+profiles+p%0D%0Ajoin+mimes+m+on+p.mime_id%3Dm.mime_id%0D%0Agroup+by+mime_string%0D%0Aorder+by+cnt+desc However, adding criteria and clicking 'Apply' https://corpora.tika.apache.org/datasette/corpora-metadata/tika_1_24_1_mimes?_sort=file&mime__exact=text%2Fplain bounces back to: https://corpora.tika.apache.org/corpora-metadata/tika_1_24_1_mimes?_sort=file&file__contains=bug&mime__exact=text%2Fplain",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/865/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 656959584,MDU6SXNzdWU2NTY5NTk1ODQ=,893,pip3 install datasette not serving static on linuxbrew.,44167,closed,0,,,1,2020-07-14T23:33:38Z,2021-06-02T04:29:56Z,2021-06-02T04:29:56Z,NONE,,"*This error wasn't thrown*
```
Traceback (most recent call last):
  File ""/home/linuxbrew/.linuxbrew/opt/python@3.8/lib/python3.8/site-packages/datasette/utils/asgi.py"", line 289, in inner_static
    full_path.relative_to(root_path)
  File ""/home/linuxbrew/.linuxbrew/opt/python@3.8/lib/python3.8/pathlib.py"", line 904, in relative_to
    raise ValueError(""{!r} does not start with {!r}""
ValueError: '/home/linuxbrew/.linuxbrew/lib/python3.8/site-packages/datasette/static/app.css' does not start with '/home/linuxbrew/.linuxbrew/opt/python@3.8/lib/python3.8/site-packages/datasette/static'
```
Linuxbrew installs python@3.8 with symbolic links; when you call `full_path.relative_to(root_path)` it throws the ValueError above. This happens when you install from pip3; when you install with `python3 setup.py develop`, it works fine. The net effect is that static files aren't served. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/893/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 660827546,MDU6SXNzdWU2NjA4Mjc1NDY=,899,How to setup a request limit per user,133845,closed,0,,,1,2020-07-19T13:08:25Z,2020-07-31T23:54:42Z,2020-07-31T23:54:42Z,NONE,,"Hello, Until now I've been using datasette without any authentication system, but I would like to set up a configuration for limiting the number of requests per user (possibly by IP or with a cookie mechanism), possibly allowing me to ban specific users/IPs. Is there a plugin available for this use case? If not, what are your insights regarding this use case? Should I write a plugin? Should I deploy datasette behind a reverse proxy to manage this? ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/899/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 661605489,MDU6SXNzdWU2NjE2MDU0ODk=,900,Some links don't honor base_url,50220,closed,0,,6026070,3,2020-07-20T09:40:50Z,2020-10-23T19:44:04Z,2020-10-15T22:57:55Z,NONE,,"Hi, I've been playing with Datasette behind Nginx (awesome tool, thanks!). 
It seems some URLs are OK but some aren't. For instance, in https://github.com/simonw/datasette/blob/master/datasette/templates/query.html#L61 it seems that `url_csv` includes a `/` prefix, resulting in the `base_url` not being honored. Actually here, it seems that dropping the prefix `/` to make the link relative is enough (so it may not be strictly related to `base_url`). Additional information:
```
datasette, version 0.45+0.gf1f581b.dirty
```
Relevant Nginx configuration (note that all the trailing slashes have some effect):
```
location /datasette/ {
    proxy_pass http://127.0.0.1:9001/;
    proxy_set_header Host $host;
}
```
Relevant Datasette configuration (slashes matter too):
```
--config base_url:/datasette/
```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/900/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 699947574,MDU6SXNzdWU2OTk5NDc1NzQ=,963,Currently selected array facets are not correctly persisted through hidden form fields,649467,closed,0,,5818042,1,2020-09-12T01:49:17Z,2020-09-12T21:54:29Z,2020-09-12T21:54:09Z,NONE,,"Faceted search uses JSON array elements as facets rather than the arrays. However, if a search is ""Apply""ed (using the Apply button), the array itself rather than its elements is used. To reproduce: https://latest.datasette.io/fixtures/facetable?_sort=pk&_facet=created&_facet=tags&_facet_array=tags Press ""Apply"", which might be done when removing a filter. Notice that the ""tags"" facet values are now arrays, not array elements. It appears the ""&_facet_array=tags"" element of the query string is dropped.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/963/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 705057955,MDU6SXNzdWU3MDUwNTc5NTU=,969,"Add --tar option to ""datasette publish heroku""",1448859,closed,0,,5971510,3,2020-09-20T06:54:53Z,2020-10-08T23:55:59Z,2020-10-08T23:30:59Z,NONE,,"This issue is about how best to pass additional options to tools used for publishing datasettes. A concrete example is wanting to pass the `--tar` flag to the heroku CLI tool. I think there are at least two options for doing this: documentation for each publishing tool to explain how to set flags via env variables (if possible), or building a mechanism that lets users pass additional flags through datasette. When using `datasette publish heroku binder-launches.db --extra-options=""--config facet_time_limit_ms:35000 --config sql_time_limit_ms:35000"" --name=binderlytics --install=datasette-vega` to publish https://binderlytics.herokuapp.com/ the following error happens:
```
 › Warning: heroku update available from 7.42.1 to 7.43.0.
 › Warning: heroku update available from 7.42.1 to 7.43.0.
 › Warning: heroku update available from 7.42.1 to 7.43.0.
Setting WEB_CONCURRENCY and restarting ⬢ binderlytics... done, v13
WEB_CONCURRENCY: 1
 › Warning: heroku update available from 7.42.1 to 7.43.0.
 ▸ Couldn't detect GNU tar.
 ▸ Builds could fail due to decompression errors
 ▸ See https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive
 ▸ Please install it, or specify the '--tar' option
 ▸ Falling back to node's built-in compressor
buffer.js:358
throw new ERR_INVALID_OPT_VALUE.RangeError('size', size);
^
RangeError [ERR_INVALID_OPT_VALUE]: The value ""3303763968"" is invalid for option ""size""
    at Function.alloc (buffer.js:367:3)
    at new Buffer (buffer.js:281:19)
    at Readable. (/Users/thead/.local/share/heroku/node_modules/archiver-utils/index.js:39:15)
    at Readable.emit (events.js:322:22)
    at endReadableNT (/Users/thead/.local/share/heroku/node_modules/readable-stream/lib/_stream_readable.js:1010:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21) {
  code: 'ERR_INVALID_OPT_VALUE'
}
```
After installing GNU tar with `brew install gnu-tar` and modifying `datasette/publish/heroku.py` to include `--tar=/path/to/gnu-tar`, publishing works. I think the problem occurs once your heroku slug reaches a certain size. At least when I add only a few hundred entries to the datasette, the error does not occur. datasette version 0.49.1, OSX 10.14.6 (18G103)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/969/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 705108492,MDU6SXNzdWU3MDUxMDg0OTI=,970,"request an ""-o"" option on ""datasette server"" to open the default browser at the running url",2861690,closed,0,,5971510,4,2020-09-20T13:16:34Z,2020-10-08T23:54:27Z,2020-09-22T14:27:04Z,NONE,,"This is a request for a ""convenience"" feature, and only a nice-to-have. It's based on seeing this feature in several little command line hypertext server apps. If you run, for example:
```
datasette.exe serve --open ""mydb.s3db""
```
I would like the default browser to be launched at the URL that is being served. The angular cli does this, for example:
```
ng serve --open  # see https://angular.io/cli/serve
```
...as does my usual mini web server of choice when inspecting local static files:
```
npx http-server -o  # see https://www.npmjs.com/package/http-server
```
Just a tiny thing. Love your work!",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/970/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 707849175,MDU6SXNzdWU3MDc4NDkxNzU=,974,static assets and favicon aren't cached by the browser,45416,open,0,,,1,2020-09-24T04:44:55Z,2022-01-13T22:21:28Z,,NONE,,"Using datasette to solve some frustrating problems with our fulfillment provider today, I was surprised to see repeated requests for assets under /-/static and the favicon. While it won't likely be a huge performance bottleneck, I bet datasette would feel a bit zippier if you had Uvicorn serving up some caching-related headers telling the browser it was safe to cache static assets.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/974/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 712889459,MDExOlB1bGxSZXF1ZXN0NDk2Mjk4MTgw,986,"Allow facet by primary keys, fixes #985",39452697,closed,0,,,2,2020-10-01T14:18:55Z,2020-10-01T16:51:45Z,2020-10-01T16:51:45Z,NONE,simonw/datasette/pulls/986,"Hello! This PR makes it possible to facet by primary keys. 
Did I get it right that just removing the condition on the UI side is enough? From testing, it works fine with primary keys, just as with normal keys. If so, should I also remove the unused `data-is-pk`?",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/986/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 718238967,MDU6SXNzdWU3MTgyMzg5Njc=,1003,from_json jinja2 filter,649467,open,0,,,4,2020-10-09T15:30:58Z,2020-10-09T17:17:07Z,,NONE,,"When JSON fields are rendered in a jinja2 template, it is handy to be able to manipulate them as data (e.g., iterate over an array of values). Ansible has a ""from_json"" function, which just calls json.loads. It's trivial as a datasette plugin, but it seems generally useful. Does it make sense to add it directly into the app?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1003/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
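A minimal sketch of such a plugin, assuming datasette's prepare_jinja2_environment plugin hook:
```python
import json

from datasette import hookimpl


@hookimpl
def prepare_jinja2_environment(env):
    # Register a from_json filter so templates can do {{ value|from_json }}
    env.filters['from_json'] = json.loads
```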