id,node_id,number,title,user,user_label,state,locked,assignee,assignee_label,milestone,milestone_label,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,repo_label,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 836273891,MDU6SXNzdWU4MzYyNzM4OTE=,1266,Documentation for Response.asgi_send(send) method,9599,simonw,closed,0,,,,,1,2021-03-19T18:52:49Z,2021-03-20T21:35:00Z,2021-03-20T21:32:28Z,OWNER,,"I found myself wanting to use this method for https://github.com/simonw/datasette-auth-passwords/issues/15 - but it's not documented. It should be documented. https://github.com/simonw/datasette/blob/8e18c7943181f228ce5ebcea48deb59ce50bee1f/datasette/utils/asgi.py#L320-L340",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1266/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 836123030,MDU6SXNzdWU4MzYxMjMwMzA=,1265,Support for HTTP Basic Authentication,468612,yunzheng,closed,0,,,,,3,2021-03-19T15:31:09Z,2021-03-19T22:05:12Z,2021-03-19T21:03:09Z,NONE,,"It would be nice if datasette could support [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication). For now I could ofcourse leverage Nginx for basic authentication, but it would be nice to have support for this in datasette by default or via a plugin like datasette-auth-github. My main usecase is to put the whole datasette instance behind a username/password prompt via Basic Auth and not specific urls.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1265/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 836064851,MDExOlB1bGxSZXF1ZXN0NTk2NjI3Nzgw,18,Add datetime parsing,1234956,n8henrie,open,0,,,,,0,2021-03-19T14:34:22Z,2021-03-19T14:34:22Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/healthkit-to-sqlite/pulls/18,"Parses the datetime columns so they are subsequently properly recognized as datetime. Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/17 ",197882382,healthkit-to-sqlite,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/18/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 836063389,MDU6SXNzdWU4MzYwNjMzODk=,17,Datetime columns are not properly formatted to be recognizes as datetime,1234956,n8henrie,open,0,,,,,0,2021-03-19T14:33:04Z,2021-03-19T14:33:04Z,,NONE,," Currently, the datetimes are formatted in a way that is not recognized by datasette-vega for plotting with a `Date/time` type for the axis. For example, if you have datasette running locally with `datasette-vega` installed and have a database that includes resting heart rate: ``` http://localhost:8001/healthkit/rRestingHeartRate#g.mark=line&g.x_column=startDate&g.x_type=temporal&g.y_column=value&g.y_type=quantitative ``` The plot is blank unless you choose `Label` as the type for the date data. The `startDate` (and `creationDate` and `endDate`) columns appear like: `2019-11-14 18:22:18 -0700` If instead the format for this column is changed slightly: `2019-11-14T18:22:18-07:00` they are recognized as proper dates and the charting works as expected. 
I have a PR that addresses this issue, will submit shortly.",197882382,healthkit-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/17/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 834602299,MDU6SXNzdWU4MzQ2MDIyOTk=,1262,Plugin hook that could support 'order by random()' for table view,19328961,henry501,open,0,,,,,3,2021-03-18T10:02:01Z,2021-03-18T17:55:01Z,,NONE,,"I am frequently using Datasette to quickly get a visual impression for a table without reviewing it in its entirety. Because I have some groups of similar records, the default sorting options mean that each page is very similar and not representative of the full dataset. The current interface allows sorting by columns, but random sorting is only available via custom SQL. Maybe this could be a button or link.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1262/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 830901133,MDExOlB1bGxSZXF1ZXN0NTkyMzY0MjU1,16,"Add a fallback ID, print if no ID found",1234956,n8henrie,open,0,,,,,0,2021-03-13T13:38:29Z,2021-03-13T14:44:04Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/healthkit-to-sqlite/pulls/16,"Fixes https://github.com/dogsheep/healthkit-to-sqlite/issues/14 ",197882382,healthkit-to-sqlite,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/16/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 830283447,MDU6SXNzdWU4MzAyODM0NDc=,34,bucket name,6213,dsisnero,open,0,,,,,0,2021-03-12T16:40:57Z,2021-03-12T16:40:57Z,,NONE,,I followed the instructions to setup credentials but I am getting a invalid bucket name. Can you put a sample auth.json file in the base that shows the correct format for this? Thanks,256834907,dogsheep-photos,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/34/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 787173276,MDU6SXNzdWU3ODcxNzMyNzY=,1193,Research plugin hook for alternative database backends,9599,simonw,open,0,,,,,1,2021-01-15T20:27:50Z,2021-03-12T01:01:54Z,,OWNER,,"I started exploring what Datasette would like running against PostgreSQL in #670 and @dazzag24 did some work on Parquet described in #657. I had initially thought this was WAY too much additional complexity, but I'm beginning to think that the `Database` class may be small enough that having it abstract away the details of running queries against alternative database backends could be feasible. A bigger issue is SQL generation, but I realized that most of Datasette's SQL generation code exists just in the `TableView` class that runs the table page. If this was abstracted into some kind of SQL builder that could be then customized per-database it might be reasonable to get it working. 
Very unlikely for this to make it into Datasette 1.0, but maybe this would be the defining feature of Datasette 2.0?",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1193/reactions"", ""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 824067604,MDU6SXNzdWU4MjQwNjc2MDQ=,1250,Research: Plugin hook for alternative database connections,9599,simonw,closed,0,,,,,2,2021-03-08T00:28:15Z,2021-03-12T01:01:25Z,2021-03-12T01:01:17Z,OWNER,,"The `Database` class is a natural looking fit for a plugin hook to load custom database connections... potentially even databases other than SQLite. DuckDB (refs #968) could make for a great starting point, since it looks very compatible with the existing SQLite code. The real win would be if this could lead to running Datasette against PostgreSQL. I made some initial explorations in that direction a while ago in #670.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1250/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 797649915,MDExOlB1bGxSZXF1ZXN0NTY0NjA4MjY0,1211,Use context manager instead of plain open,4488943,kbaikov,closed,0,,,,,3,2021-01-31T07:58:10Z,2021-03-11T16:15:50Z,2021-03-11T16:15:50Z,CONTRIBUTOR,simonw/datasette/pulls/1211,"Context manager with open closes the files after usage. Fixes: https://github.com/simonw/datasette/issues/1208 When the object is already a pathlib.Path i used read_text write_text functions In some cases pathlib.Path.open were used in context manager, it is basically the same as builtin open. Tests are passing: 850 passed, 5 xfailed, 10 xpassed",107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1211/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 794554881,MDU6SXNzdWU3OTQ1NTQ4ODE=,1208,A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper,4488943,kbaikov,closed,0,,,,,2,2021-01-26T20:56:28Z,2021-03-11T16:15:49Z,2021-03-11T16:15:49Z,CONTRIBUTOR,,"Your code is full of open files that are never closed, especially when you deal with reading/writing json/yaml files. If you run python with warnings enabled this problem becomes evident. This probably contributes to some memory leaks in long running datasettes if the GC will not 'collect' those resources properly. This is easily fixed by using a context manager instead of just using open: ```python with open('some_file', 'w') as opened_file: opened_file.write('string') ``` In some newer parts of the code you use Path objects 'read_text' and 'write_text' functions which close the file properly and are prefered in some cases. If you want I can create a PR for all places i found this pattern in. 
Bellow is a fraction of places where i found a ResourceWarning: ```python update-docs-help.py: 20 actual = actual.replace(""Usage: cli "", ""Usage: datasette "") 21: open(docs_path / filename, ""w"").write(actual) 22 datasette\app.py: 210 ): 211: inspect_data = json.load((config_dir / ""inspect-data.json"").open()) 212 if immutables is None: 266 if config_dir and (config_dir / ""settings.json"").exists() and not config: 267: config = json.load((config_dir / ""settings.json"").open()) 268 self._settings = dict(DEFAULT_SETTINGS, **(config or {})) 445 self._app_css_hash = hashlib.sha1( 446: open(os.path.join(str(app_root), ""datasette/static/app.css"")) 447 .read() datasette\cli.py: 130 else: 131: out = open(inspect_file, ""w"") 132 loop = asyncio.get_event_loop() 459 if inspect_file: 460: inspect_data = json.load(open(inspect_file)) 461 ``` ",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1208/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 826613352,MDExOlB1bGxSZXF1ZXN0NTg4NjAxNjI3,1254,Update Docker Spatialite version to 5.0.1 + add support for Spatialite topology functions,3200608,durkie,closed,0,,,,,6,2021-03-09T20:49:08Z,2021-03-10T18:27:45Z,2021-03-09T22:04:23Z,NONE,simonw/datasette/pulls/1254,"This requires adding the RT Topology library (Spatialite changed to RT Topology from LWGEOM between 4.4 and 5.0), as well as upgrading the GEOS version (which is the reason for switching to `python:3.7.10-slim-buster` as the base image.) `autoconf` and `libtool` are added to build RT Topology, and Spatialite is now built with `--disable-minizip` (minizip wasn't an option in 4.4 and I didn't want to add another dependency) and `--disable-dependency-tracking` which, according to Spatialite, ""speeds up one-time builds""",107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1254/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 827341657,MDExOlB1bGxSZXF1ZXN0NTg5MjYzMjk3,1256,Minor type in IP adress,6371750,JBPressac,closed,0,,,,,3,2021-03-10T08:28:22Z,2021-03-10T18:26:46Z,2021-03-10T18:26:40Z,CONTRIBUTOR,simonw/datasette/pulls/1256,127.0.01 replaced by 127.0.0.1,107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1256/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 824750134,MDU6SXNzdWU4MjQ3NTAxMzQ=,1251,facet option not appearing when table is big,15836677,verajosemanuel,open,0,,,,,0,2021-03-08T16:54:04Z,2021-03-08T16:54:16Z,,NONE,,"I have a big table with more than 500.000 rows. Trying to facet by one of my columns, the options are not available as for the other smaller tables. I have tried to set it in URL as: `&_facet=city_id` to no avail. is there any limit? how can I force the option ""facet"" to appear for big tables? 
",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1251/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 823035080,MDU6SXNzdWU4MjMwMzUwODA=,1248,duckdb database (very low performance in SQLite),15836677,verajosemanuel,closed,0,,,,,1,2021-03-05T12:20:29Z,2021-03-08T00:25:27Z,2021-03-08T00:25:27Z,NONE,,"My sqlite is getting too big to be processed by datasette (more than 10 minutes waiting to load) so I am working with duckdb and is waaaaay faster. I think the fastest embeddable database actually. https://duckdb.org/ Taking into account DuckDb is SQLite based it would be GREAT to use it with datasette. is that possible? Regards and thanks for a superb job",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1248/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 806918878,MDExOlB1bGxSZXF1ZXN0NTcyMjU0MTAz,1223,Add compile option to Dockerfile to fix failing test (fixes #696),7476523,bobwhitelock,closed,0,,,,,2,2021-02-12T03:38:05Z,2021-03-07T12:01:12Z,2021-03-07T07:41:17Z,CONTRIBUTOR,simonw/datasette/pulls/1223,"This test was failing when run inside the Docker container: `test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3]`, with this error: ``` def test_searchable(app_client, path, expected_rows): response = app_client.get(path) > assert expected_rows == response.json[""rows""] E AssertionError: assert [[1, 'barry c...sel', 'puma']] == [] E Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther'] E Full diff: E + [] E - [[1, 'barry cat', 'terry dog', 'panther'], E - [2, 'terry dog', 'sara weasel', 'puma']] ``` The issue was that the version of sqlite3 built inside the Docker container was built with FTS3 and FTS4 enabled, but without the `SQLITE_ENABLE_FTS3_PARENTHESIS` compile option passed, which adds support for using `AND` and `NOT` within `match` expressions (see https://sqlite.org/fts3.html#compiling_and_enabling_fts3_and_fts4 and https://www.sqlite.org/compile.html). Without this, the `AND` used in the search in this test was being interpreted as a literal string, and so no matches were found. Adding this compile option fixes this. --- I actually ran into this issue because the same test was failing when I ran the test suite on my own machine, outside of Docker, and so I eventually tracked this down to my system sqlite3 also being compiled without this option. I wonder if this is a sign of a slightly deeper issue, that Datasette can silently behave differently based on the version and compilation of sqlite3 it is being used with. On my own system I fixed the test suite by running `pip install pysqlite3-binary`, so that this would be picked up instead of the `sqlite` package, as this seems to be compiled using this option, . Maybe using `pysqlite3-binary` could be installed/recommended by default so a more deterministic version of sqlite is used? 
Or there could be some feature detection done on the available sqlite version, to know what features are available and can be used/tested?",107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1223/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 617323873,MDU6SXNzdWU2MTczMjM4NzM=,766,Enable wildcard-searches by default,2181410,clausjuhl,open,0,,,,,2,2020-05-13T10:14:48Z,2021-03-05T16:35:21Z,,NONE,,"Hi Simon. It seems that datasette currently has wildcard-searches disabled by default (along with the boolean search-options, NEAR-queries and more, and despite the docs). If I try out the search-url provided in the [docs](https://datasette.readthedocs.io/en/stable/full_text_search.html#the-table-page-and-table-view-api) (https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort), it does not handle wildcard-searches, and I'm unable to make it work on my datasette-instance. I would argue that wildcard-searches is such a standard query, that it should be enabled by default. Requiring ""_searchmode=raw"" when using prefix-searches seems unnecessary. Plus: What happens to non-ascii searches when using ""_searchmode=raw""? Is the ""escape_fts""-function from datasette.utils ignored? Thanks! /Claus",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/766/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 778380836,MDU6SXNzdWU3NzgzODA4MzY=,4,Feature Request: Gmail,203343,Btibert3,open,0,,,,,5,2021-01-04T21:31:09Z,2021-03-04T20:54:44Z,,NONE,,"From takeout, I only exported my Gmail account. Ideally I could parse this into sqlite via this tool.",206649770,google-takeout-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 821841046,MDU6SXNzdWU4MjE4NDEwNDY=,6,Upgrade to latest sqlite-utils,9599,simonw,open,0,,,,,1,2021-03-04T07:21:54Z,2021-03-04T07:22:51Z,,MEMBER,,This is pinned to v1 at the moment.,206649770,google-takeout-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/6/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 815955014,MDExOlB1bGxSZXF1ZXN0NTc5Njk3ODMz,1243,fix small typo,306240,UtahDave,closed,0,,,,,2,2021-02-25T00:22:34Z,2021-03-04T05:46:10Z,2021-03-04T05:46:10Z,CONTRIBUTOR,simonw/datasette/pulls/1243,,107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1243/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 818684978,MDU6SXNzdWU4MTg2ODQ5Nzg=,243,How can i use this utils to deal with fts on column meta of tables ?,27874014,svjack,open,0,,,,,0,2021-03-01T09:45:05Z,2021-03-01T09:45:05Z,,NONE,,"Thank you to release this bravo project. When i use this project on multi table db, I want to implement convenient search on column name from different tables. 
I want to develop a meta table to save the meta data of different columns of different tables and search on this meta table to get rows from the data table (which the meta table describes) does this project provide some simple function on it ? You can think a have a knowledge graph about the table in the db, and i save this knowledge graph into the db with fts enabled.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/243/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 818430405,MDU6SXNzdWU4MTg0MzA0MDU=,1247,datasette.add_memory_database() method,9599,simonw,closed,0,,,,,2,2021-03-01T03:48:38Z,2021-03-01T04:02:26Z,2021-03-01T04:02:26Z,OWNER,,"I just wrote this code: https://github.com/simonw/datasette/blob/47eb885cc2c3aafa03645c330c6f597bee9b3b25/tests/test_facets.py#L334-L335 It would be nice if you didn't have to separately instantiate a database object here.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1247/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 817597268,MDU6SXNzdWU4MTc1OTcyNjg=,1246,Suggest for ArrayFacet possibly confused by blank values,9599,simonw,closed,0,,,,,3,2021-02-26T19:11:52Z,2021-03-01T03:46:11Z,2021-03-01T03:46:11Z,OWNER,,I sometimes don't get the suggestion for facet-by-array for columns that contain arrays. I think it may be because they have empty spaces in them - or perhaps it's because the null detection doesn't actually work.,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1246/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 718259202,MDU6SXNzdWU3MTgyNTkyMDI=,1005,Remove xfail tests when new httpx is released,9599,simonw,closed,0,,,3268330,Datasette 1.0,3,2020-10-09T16:00:19Z,2021-02-28T22:41:08Z,2021-02-28T22:41:08Z,OWNER,,"> My `httpx` pull request adding `raw_path` support was just merged: https://github.com/encode/httpx/pull/1357 - but it's not in a release yet. > > I'm going to mark these tests as `xfail` so I can land this change - I'll remove that once an `httpx` release comes out that I can use to get the tests passing. > _Originally posted by @simonw in https://github.com/simonw/datasette/pull/1000#issuecomment-706263157_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1005/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 814591962,MDU6SXNzdWU4MTQ1OTE5NjI=,1240,Allow facetting on custom queries,7107523,Kabouik,closed,0,,,,,3,2021-02-23T15:52:19Z,2021-02-26T18:19:46Z,2021-02-26T18:18:18Z,NONE,,"Facets are a tremendously useful feature, especially for people peeking at the database for the first time and still having little knowledge about the details of the data. It is of great assistance to discover interesting features to explore futher in advanced queries. Yet, it seems it's impossible to use facets when running a custom SQL query, be it from the little gear icons in column names, the facet suggestions at the top (hidden when performing a custom query), or by appending a facet code to the URL. 
Is there a technical limitation, or is this something that could be unlocked easily?",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1240/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 817528452,MDU6SXNzdWU4MTc1Mjg0NTI=,1244,Plugin tip: look at the examples linked from the hooks page,9599,simonw,closed,0,,,,,1,2021-02-26T17:18:27Z,2021-02-26T17:30:38Z,2021-02-26T17:27:15Z,OWNER,,"Someone asked ""what are good example plugins I can look at?"" and I realized that the answer is to look through the example links on https://docs.datasette.io/en/stable/plugin_hooks.html - but that tip should be written down somewhere on the https://docs.datasette.io/en/stable/writing_plugins.html page.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1244/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 815554385,MDU6SXNzdWU4MTU1NTQzODU=,237,"db[""my_table""].drop(ignore=True) parameter, plus sqlite-utils drop-table --ignore and drop-view --ignore",649467,mhalle,closed,0,,,,,3,2021-02-24T14:55:06Z,2021-02-25T17:11:41Z,2021-02-25T17:11:41Z,NONE,,"When I'm generating a derived table in python, I often drop the table and create it from scratch. However, the first time I generate the table, it doesn't exist, so the drop raises an exception. That means more boilerplate. I was going to submit a pull request that adds an ""if_exists"" option to the `drop` method of tables and views. However, for a utility like sqlite_utils, perhaps the ""IF EXISTS"" SQL semantics is what you want most of the time, and thus should be the default. What do you think?",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/237/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 816523763,MDU6SXNzdWU4MTY1MjM3NjM=,238,.add_foreign_key() corrupts database if column contains a space,9599,simonw,closed,0,,,,,1,2021-02-25T15:07:20Z,2021-02-25T16:54:02Z,2021-02-25T16:54:02Z,OWNER,,"I ran this: db[""Reports""].add_foreign_key(""Reported by ID"", ""Reporters"", ""id"") And got this: ``` ~/jupyter-venv/lib/python3.9/site-packages/sqlite_utils/db.py in add_foreign_keys(self, foreign_keys) 616 # Have to VACUUM outside the transaction to ensure .foreign_keys property 617 # can see the newly created foreign key. 
--> 618 self.vacuum() 619 620 def index_foreign_keys(self): ~/jupyter-venv/lib/python3.9/site-packages/sqlite_utils/db.py in vacuum(self) 629 630 def vacuum(self): --> 631 self.execute(""VACUUM;"") 632 633 ~/jupyter-venv/lib/python3.9/site-packages/sqlite_utils/db.py in execute(self, sql, parameters) 234 return self.conn.execute(sql, parameters) 235 else: --> 236 return self.conn.execute(sql) 237 238 def executescript(self, sql): DatabaseError: database disk image is malformed ```",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/238/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 816560819,MDU6SXNzdWU4MTY1NjA4MTk=,240,table.pks_and_rows_where() method returning primary keys along with the rows,9599,simonw,closed,0,,,,,7,2021-02-25T15:49:28Z,2021-02-25T16:39:23Z,2021-02-25T16:28:23Z,OWNER,,"*Original title: Easier way to update a row returned from .rows* Here's a surprisingly hard problem I ran into while trying to implement #239 - given a row returned by `db[table].rows` how can you update that row? The problem is that the `db[table].update(...)` method requires a primary key. But if you have a row from the `db[table].rows` iterator it might not even contain the primary key - provided the table is a `rowid` table. Instead, currently, you need to introspect the table and, if `rowid` is a primary key, explicitly include that in the `select=` argument to `table.rows_where(...)` - otherwise it will not be returned. A utility mechanism to make this easier would be very welcome.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/240/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 816601354,MDExOlB1bGxSZXF1ZXN0NTgwMjM1NDI3,241,Extract expand - work in progress,9599,simonw,open,0,,,,,0,2021-02-25T16:36:38Z,2021-02-25T16:36:38Z,,OWNER,simonw/sqlite-utils/pulls/241,Refs #239. Still needs documentation and CLI implementation.,140912432,sqlite-utils,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/241/reactions"", ""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1, 803356942,MDU6SXNzdWU4MDMzNTY5NDI=,1218, /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory,11855322,robmarkcole,open,0,,,,,1,2021-02-08T09:07:00Z,2021-02-23T12:12:17Z,,NONE,,"Error as above, however I do have python3.8 and the readme indicates this is supported. ``` (venv) (base) Robins-MacBook:datasette robin$ ls /usr/local/opt/python3/bin/ .. 
pip3 python3 python3.8 ```",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1218/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 813978858,MDU6SXNzdWU4MTM5Nzg4NTg=,1239,JSON filter fails if column contains spaces,9599,simonw,closed,0,,,,,1,2021-02-23T00:18:07Z,2021-02-23T00:22:53Z,2021-02-23T00:22:53Z,OWNER,,"Got this exception: `ERROR: conn=, sql = 'select Address, Affiliation, County, [Has Report], [Latest report notes], [Latest report yes], Latitude, [Location Type], Longitude, Name, id, [Appointment scheduling instructions], [Availability Info], [Latest report] from locations where rowid in (\n select locations.rowid from locations, json_each(locations.Availability Info) j\n where j.value = :p0\n ) and ""Latest report yes"" = :p1 order by id limit 101', params = {'p0': 'Yes: appointment required', 'p1': '1'}: near ""Info"": syntax error`",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1239/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 811505638,MDU6SXNzdWU4MTE1MDU2Mzg=,1234,Runtime support for ATTACHing multiple databases,9599,simonw,open,0,,,,,1,2021-02-18T22:06:47Z,2021-02-22T21:06:28Z,,OWNER,,"> The implementation in #1232 is ready to land. It's the simplest-thing-that-could-possibly-work: you can run `datasette one.db two.db three.db --crossdb` and then use the `/_memory` page to run joins across tables from multiple databases. > > It only works on the first 10 databases that were passed to the command-line. This means that if you have a Datasette instance with hundreds of attached databases (see [Datasette Library](https://github.com/simonw/datasette/issues/417)) this won't be particularly useful for you. > > So... a better, future version of this feature would be one that lets you join across databases on command - maybe by hitting `/_memory?attach=db1&attach=db2` to get a special connection. _Originally posted by @simonw in https://github.com/simonw/datasette/issues/283#issuecomment-781665560_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1234/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 783778672,MDU6SXNzdWU3ODM3Nzg2NzI=,220,Better error message for *_fts methods against views,649467,mhalle,closed,0,,,,,3,2021-01-11T23:24:00Z,2021-02-22T20:44:51Z,2021-02-14T22:34:26Z,NONE,,"enable_fts and its related methods only work on tables, not views. Could those methods and possibly others move up to the Queryable superclass? ",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/220/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 797651831,MDU6SXNzdWU3OTc2NTE4MzE=,1212,Tests are very slow. ,4488943,kbaikov,closed,0,,,,,4,2021-01-31T08:06:16Z,2021-02-19T22:54:13Z,2021-02-19T22:54:13Z,CONTRIBUTOR,,"Working on my PR i noticed that tests are very slow. The plain pytest run took about 37 minutes for me. However i could shave of about 10 minutes from that if i used pytest-xdist to parallelize execution. `pytest -n 8` is run only in 28 minutes on my machine. 
I can create a PR to mention that in your documentation. This will be a simple change to add pytest-xdist to requirements and change a command to run pytest in documentation. Does that make sense to you? After a bit more investigation it looks like python-xdist is not an answer. It creates a race condition for tests that try to clead temp dir before run. Profiling shows that most time is spent on conn.executescript(TABLES) in make_app_client function. Which makes sense. Perhaps the better approach would be look at the app_client fixture which is already session scoped, but not used by all test cases. And/or use conn = sqlite3.connect("":memory:"") which is much faster. And/or truncate tables after each TC instead of deleting the file and re-creating them. I can take a look which is the best approach if you give the go-ahead. ",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1212/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 811680502,MDU6SXNzdWU4MTE2ODA1MDI=,236,--attach command line option for attaching extra databases,9599,simonw,closed,0,,,,,1,2021-02-19T04:38:30Z,2021-02-19T05:10:41Z,2021-02-19T05:08:43Z,OWNER,,"This will enable cross-database joins, as seen in https://github.com/simonw/datasette/issues/283 Also refs #113",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/236/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 621286870,MDU6SXNzdWU2MjEyODY4NzA=,113,Syntactic sugar for ATTACH DATABASE,9599,simonw,closed,0,,,,,2,2020-05-19T21:10:00Z,2021-02-19T05:09:12Z,2021-02-19T04:56:36Z,OWNER,,"https://www.sqlite.org/lang_attach.html Maybe something like this: ```python db.attach(""other_db"", ""other_db.db"") ```",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/113/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 811589344,MDU6SXNzdWU4MTE1ODkzNDQ=,1235,Upgrade Python version used by official Datasette Docker image,9599,simonw,closed,0,,,,,2,2021-02-19T00:47:40Z,2021-02-19T01:48:31Z,2021-02-19T01:48:30Z,OWNER,,"Currently uses 3.7.2: https://github.com/simonw/datasette/blob/73bed175631a79e13a521eee82f8451dd0477eb3/Dockerfile#L1 There's a security fix for Python which it would be good to ship in this image (even though I'm reasonably confident it doesn't affect Datasette): https://bugs.python.org/issue42938",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1235/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 811407131,MDExOlB1bGxSZXF1ZXN0NTc1OTQwMTkz,1232,--crossdb option for joining across databases,9599,simonw,closed,0,,,,,8,2021-02-18T19:48:50Z,2021-02-18T22:09:13Z,2021-02-18T22:09:12Z,OWNER,simonw/datasette/pulls/1232,"Refs #283. 
Still needs: - [x] Unit test for --crossdb queries - [x] Show warning on console if it truncates at ten databases (or on web interface) - [x] Show connected databases on the `/_memory` database page - [x] Documentation - [x] https://latest.datasette.io/ demo should demonstrate this feature",107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1232/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 811458446,MDU6SXNzdWU4MTE0NTg0NDY=,1233,"""datasette publish cloudrun"" cannot publish files with spaces in their name",9599,simonw,open,0,,,,,1,2021-02-18T21:08:31Z,2021-02-18T21:10:08Z,,OWNER,,"Got this error: ``` Step 6/9 : RUN datasette inspect fixtures.db extra database.db --inspect-file inspect-data.json ---> Running in db9da0068592 Usage: datasette inspect [OPTIONS] [FILES]... Try 'datasette inspect --help' for help. Error: Invalid value for '[FILES]...': Path 'extra' does not exist. The command '/bin/sh -c datasette inspect fixtures.db extra database.db --inspect-file inspect-data.json' returned a non-zero code: 2 ERROR ERROR: build step 0 ""gcr.io/cloud-builders/docker"" failed: step exited with non-zero status: 2 ``` While working on the demo for #1232, using this deploy command: ``` GITHUB_SHA=crossdb datasette publish cloudrun fixtures.db 'extra database.db' \ -m fixtures.json \ --plugins-dir=plugins \ --branch=$GITHUB_SHA \ --version-note=$GITHUB_SHA \ --extra-options=""--setting template_debug 1 --crossdb"" \ --install=pysqlite3-binary \ --service=datasette-latest-crossdb ```",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1233/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 808843401,MDU6SXNzdWU4MDg4NDM0MDE=,1226,--port option should validate port is between 0 and 65535,9599,simonw,closed,0,,,,,4,2021-02-15T22:01:33Z,2021-02-18T18:41:27Z,2021-02-18T18:41:27Z,OWNER,,"Currently throws an ugly error message: ``` (datasette-graphql) datasette-graphql % datasette fivethirtyeight.db -p 80094 INFO: Started server process [45497] INFO: Waiting for application startup. INFO: Application startup complete. Traceback (most recent call last): File ""/Users/simon/.local/share/virtualenvs/datasette-graphql-n1OSJCS8/bin/datasette"", line 8, in sys.exit(cli()) ... server = await loop.create_server( File ""/Users/simon/.pyenv/versions/3.8.2/lib/python3.8/asyncio/base_events.py"", line 1461, in create_server sock.bind(sa) OverflowError: bind(): port must be 0-65535. ```",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1226/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 811054000,MDU6SXNzdWU4MTEwNTQwMDA=,1230,"Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages",7107523,Kabouik,open,0,,,,,1,2021-02-18T12:27:02Z,2021-02-18T15:22:15Z,,NONE,,"I filtered a data set on some criteria and obtain 265 results, split over three pages (100, 100, 65), and reazlized that Vega plots are only applied to the results displayed on the current page, instead of the whole filtered data, _e.g._, 100 on page 1, 100 on page 2, 65 on page 3. 
Is there a way to force the graphs to consider all results instead of just the page, considering that pages rarely represent sensible information? Likewise, while the cluster map does show all results on the first page, if you go to next pages, it will show all remaining results except the previous page(s), _e.g._, 265 on page 1, 165 on page 2, 65 on page 3. In both cases, I don't see many situations where one would like to represent the data this way, and it might even lead to interpretation errors when viewing the data. Am I missing some cases where this would be best? Perhaps a clickable option to subset visual representations according visible pages _vs._ display all search results would do? [Edit] Oh, I just saw the ""Load all"" button under the cluster map as well as the [setting to alter the max number or results](https://docs.datasette.io/en/stable/settings.html#max-returned-rows). So I guess this issue only is about the Vega charts.",107914493,datasette,issue,,,,, 807174161,MDU6SXNzdWU4MDcxNzQxNjE=,227,Error reading csv files with large column data,295329,camallen,closed,0,,,,,4,2021-02-12T11:51:47Z,2021-02-16T11:48:03Z,2021-02-14T21:17:19Z,NONE,,"*Feel free to close this issue - I mostly added it for reference for future folks that run into this :)* I have a CSV file with one column that has very long strings. When i try to import this file via the `insert` command I get the following error: ``` sqlite-utils insert database.db table_name file_with_large_column.csv Traceback (most recent call last): File ""/usr/local/bin/sqlite-utils"", line 10, in sys.exit(cli()) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 782, in main rv = self.invoke(ctx) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/usr/local/lib/python3.7/site-packages/click/core.py"", line 610, in invoke return callback(*args, **kwargs) File ""/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py"", line 774, in insert default=default, File ""/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py"", line 705, in insert_upsert_implementation docs, pk=pk, batch_size=batch_size, alter=alter, **extra_kwargs File ""/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py"", line 1852, in insert_all first_record = next(records) File ""/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py"", line 703, in docs = (decode_base64_values(doc) for doc in docs) File ""/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py"", line 681, in docs = (dict(zip(headers, row)) for row in reader) _csv.Error: field larger than field limit (131072) ``` Built with the docker image `datasetteproject/datasette:0.54` with the following versions: ``` # sqlite-utils --version sqlite-utils, version 3.4.1 # datasette --version datasette, version 0.54 ``` It appears this is a [known issue](https://stackoverflow.com/a/54517228/2761423) reading in csv files in python and [doesn't look to be modifiable](https://github.com/python/cpython/blob/ea46579067fd2d4e164d6605719ffec690c4d621/Modules/_csv.c#L1685) through system / env vars (i may be very wrong on this). 
Noting that using sqlite3 `import` command work without error (not using the python csv reader) ``` sqlite3 database.db sqlite> .mode csv sqlite> .import file_with_large_column.csv table_name ``` Sadly I couldn't see an easy way around this while using the cli as it appears this value needs to be changed in python code. FWIW I've switched to using https://datasette.io/tools/csvs-to-sqlite for importing csv data and it's working well. Finally, I'm loving https://datasette.io/ thank you very much for an amazing tool and data ecosytem 🙇‍♀️ ",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/227/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 688670158,MDU6SXNzdWU2ODg2NzAxNTg=,147,SQLITE_MAX_VARS maybe hard-coded too low,96218,simonwiles,open,0,,,,,7,2020-08-30T07:26:45Z,2021-02-15T21:27:55Z,,CONTRIBUTOR,,"I came across this while about to open an issue and PR against the documentation for `batch_size`, which is a bit incomplete. As mentioned in #145, while: > [`SQLITE_MAX_VARIABLE_NUMBER`](https://www.sqlite.org/limits.html#max_variable_number) ... defaults to 999 for SQLite versions prior to 3.32.0 (2020-05-22) or 32766 for SQLite versions after 3.32.0. it is common that it is increased at compile time. Debian-based systems, for example, seem to ship with a version of sqlite compiled with SQLITE_MAX_VARIABLE_NUMBER set to 250,000, and I believe this is the case for homebrew installations too. In working to understand what `batch_size` was actually doing and why, I realized that by setting `SQLITE_MAX_VARS` in `db.py` to match the value my sqlite was compiled with (I'm on Debian), I was able to decrease the time to `insert_all()` my test data set (~128k records across 7 tables) from ~26.5s to ~3.5s. Given that this about .05% of my total dataset, this is time I am keen to save... Unfortunately, it seems that `sqlite3` in the python standard library doesn't expose the `get_limit()` C API (even though `pysqlite` used to), so it's hard to know what value sqlite has been compiled with (note that this could mean, I suppose, that it's less than 999, and even hardcoding `SQLITE_MAX_VARS` to the conservative default might not be adequate. It can also be lowered -- but not raised -- at runtime). The best I could come up with is `echo """" | sqlite3 -cmd "".limits variable_number""` (only available in `sqlite >= 2015-05-07 (3.8.10)`). Obviously this couldn't be relied upon in `sqlite_utils`, but I wonder what your opinion would be about exposing `SQLITE_MAX_VARS` as a user-configurable parameter (with suitable ""here be dragons"" warnings)? I'm going to go ahead and monkey-patch it for my purposes in any event, but it seems like it might be worth considering.",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/147/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 808771690,MDU6SXNzdWU4MDg3NzE2OTA=,1225,More flexible formatting of records with CSS grid,649467,mhalle,open,0,,,,,0,2021-02-15T19:28:17Z,2021-02-15T19:28:35Z,,NONE,,"In several applications I've been experimenting with alternate formatting of datasette query results. Lately I've found that CSS grids work very well and seem quite general for formatting rows. 
In CSS I use grid templates to define the layout of each record and the regions for each field, hiding the fields I don't want. It's pretty flexible and looks good. It's also a great basis for highly responsive layout. I initially thought I'd only use this feature for record detail views, but now I use it for index views as well. However, there are some limitations: * With the existing table templates, it seems that you can change the `display` property on the enclosing `table`, `tbody`, and `tr` to make them be grid-like, but that seems hacky (convert `table` and `tbody` to be `display: block` and `tr` to be `display: grid`). * More significantly, it's very nice to have the column name available when rendering each record to display headers/field labels. The existing templates don't do that, so a custom `_table` template is necessary. * I don't know if any plugins are sensitive to whether data is rendered as a table or not since I'm not completely clear how plugins get their data. * Regardless, you need custom CSS to take full advantage of grids. I don't have a proposal on how to integrate them more deeply. It would be helpful to at least have an official example or test that used a grid layout for records to make sure nothing in datasette breaks with it. ",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1225/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 807817197,MDU6SXNzdWU4MDc4MTcxOTc=,229,Hitting `_csv.Error: field larger than field limit (131072)`,631242,frosencrantz,closed,0,,,,,3,2021-02-13T19:52:44Z,2021-02-14T21:33:33Z,2021-02-14T21:33:33Z,NONE,,"I have a csv file where one of the fields is so large it is throwing an exception with this error and stops loading: ``` _csv.Error: field larger than field limit (131072) ``` The stack trace occurs here: https://github.com/simonw/sqlite-utils/blob/3.1/sqlite_utils/cli.py#L633 There is a way to handle this that helps: https://stackoverflow.com/questions/15063936/csv-error-field-larger-than-field-limit-131072 One issue I had with this problem was sqlite-utils only provides limited context as to where the problem line is. There is the progress bar, but that is by percent rather than by line number. It would have been helpful if it could have provided a line number. Also, it would have been useful if it had allowed the loading to continue with later lines. ",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/229/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 808008305,MDU6SXNzdWU4MDgwMDgzMDU=,230,--sniff option for sniffing delimiters,9599,simonw,closed,0,,,,,8,2021-02-14T17:43:54Z,2021-02-14T21:15:33Z,2021-02-14T19:24:32Z,OWNER,,"> I just spotted that `csv.Sniffer` in the Python standard library has a `.has_header(sample)` method which detects if the first row appears to be a header or not, which is interesting. 
https://docs.python.org/3/library/csv.html#csv.Sniffer _Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778812050_",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/230/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 797159961,MDExOlB1bGxSZXF1ZXN0NTY0MjE1MDEx,225,fix for problem in Table.insert_all on search for columns per chunk of rows,261237,nieuwenhoven,closed,0,,,,,2,2021-01-29T20:16:07Z,2021-02-14T21:04:13Z,2021-02-14T21:04:13Z,NONE,simonw/sqlite-utils/pulls/225,"Hi, I ran into a problem when trying to create a database from my Apple Healthkit data using [healthkit-to-sqlite](https://github.com/dogsheep/healthkit-to-sqlite). The program crashed because of an invalid insert statement that was generated for table `rDistanceCycling`. The actual problem turned out to be in [sqlite-utils](https://github.com/simonw/sqlite-utils). `Table.insert_all` processes the data to be inserted in chunks of rows and checks for every chunk which columns are used, and it will collect all column names in the variable `all_columns`. The collection of columns is done using a nested list comprehension that is not completely correct. I'm using a Windows machine and had to make a few adjustments to the tests in order to be able to run them because they had a posix dependency. Thanks, kind regards, Frans ``` # this is a (condensed) chunk of data from my Apple healthkit export that caused the problem. # the 3 last items in the chunk have additional keys: metadata_HKMetadataKeySyncVersion and metadata_HKMetadataKeySyncIdentifier chunk = [{'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '7.0.1', 'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:7.0.1>', 'unit': 'km', 'creationDate': '2020-10-10 12:29:09 +0100', 'startDate': '2020-10-10 12:29:06 +0100', 'endDate': '2020-10-10 12:29:07 +0100', 'value': '0.00518016'}, {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '7.0.1', 'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:7.0.1>', 'unit': 'km', 'creationDate': '2020-10-10 12:29:10 +0100', 'startDate': '2020-10-10 12:29:07 +0100', 'endDate': '2020-10-10 12:29:08 +0100', 'value': '0.00544049'}, {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '6.2.6', 'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>', 'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:40:50 +0100', 'endDate': '2020-07-15 16:42:49 +0100', 'value': '0.952092', 'metadata_HKMetadataKeySyncVersion': '1', 'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520450.99823:616520569.99360:119'}, {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '6.2.6', 'device': '<, name:Apple Watch, manufacturer:Apple Inc., model:Watch, hardware:Watch3,4, software:6.2.6>', 'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:42:49 +0100', 'endDate': '2020-07-15 16:44:51 +0100', 'value': '0.848983', 'metadata_HKMetadataKeySyncVersion': '1', 'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520569.99360:616520691.98826:119'}, {'sourceName': 'AppleÂ\xa0Watch van Frans', 'sourceVersion': '6.2.6', 'device': '<, name:Apple Watch, manufacturer:Apple 
Inc., model:Watch, hardware:Watch3,4, software:6.2.6>', 'unit': 'km', 'creationDate': '2020-10-14 05:54:12 +0100', 'startDate': '2020-07-15 16:44:51 +0100', 'endDate': '2020-07-15 16:46:50 +0100', 'value': '0.834403', 'metadata_HKMetadataKeySyncVersion': '1', 'metadata_HKMetadataKeySyncIdentifier': '3:674DBCDB-3FE8-40D1-9FC1-E54A2B413805:616520691.98826:616520810.98305:119'}] def all_columns_old(): all_columns = [col for col in chunk[0]] all_columns += [column for record in chunk for column in record if column not in all_columns] return all_columns def all_columns_new(): all_columns = [col for col in chunk[0]] for record in chunk: all_columns += [column for column in record if column not in all_columns] return all_columns if __name__ == '__main__': from pprint import pprint print('problem: ') pprint(all_columns_old()) print('\nfix: ') pprint(all_columns_new()) ``` ",140912432,sqlite-utils,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/225/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 808046597,MDU6SXNzdWU4MDgwNDY1OTc=,234,.insert_all() fails if subsequent chunks contain additional columns,9599,simonw,closed,0,,,,,1,2021-02-14T21:01:51Z,2021-02-14T21:03:40Z,2021-02-14T21:03:40Z,OWNER,,Reported by @nieuwenhoven in #225 along with a proposed fix.,140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/234/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 808036774,MDU6SXNzdWU4MDgwMzY3NzQ=,232,Run tests against Windows in GitHub Actions,9599,simonw,closed,0,,,,,0,2021-02-14T20:09:45Z,2021-02-14T20:39:55Z,2021-02-14T20:39:55Z,OWNER,,"> I'm going to try and get the test suite to run in Windows on GitHub Actions. _Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/225#issuecomment-778834504_",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/232/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 808037010,MDExOlB1bGxSZXF1ZXN0NTczMTQ3MTY4,233,"Run tests against Ubuntu, macOS and Windows",9599,simonw,closed,0,,,,,0,2021-02-14T20:11:02Z,2021-02-14T20:39:54Z,2021-02-14T20:39:54Z,OWNER,simonw/sqlite-utils/pulls/233,Refs #232,140912432,sqlite-utils,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/233/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 808028757,MDU6SXNzdWU4MDgwMjg3NTc=,231,"limit=X, offset=Y parameters for more Python methods",9599,simonw,closed,0,,,,,2,2021-02-14T19:31:23Z,2021-02-14T20:03:08Z,2021-02-14T20:03:08Z,OWNER,,"> I'm going to add a `offset=` parameter to support this case. Thanks for the suggestion! 
_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/224#issuecomment-778828495_",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/231/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 792297010,MDExOlB1bGxSZXF1ZXN0NTYwMjA0MzA2,224,Add fts offset docs.,37962604,polyrand,closed,0,,,,,2,2021-01-22T20:50:58Z,2021-02-14T19:31:06Z,2021-02-14T19:31:06Z,NONE,simonw/sqlite-utils/pulls/224,"The limit can be passed as a string to the query builder to have an offset. I have tested it using the shorthand `limit=f""15, 30""`, the standard syntax should work too.",140912432,sqlite-utils,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/224/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 806743116,MDU6SXNzdWU4MDY3NDMxMTY=,1220,Installing datasette via docker: Path 'fixtures.db' does not exist,30607,aborruso,closed,0,,,,,4,2021-02-11T21:09:14Z,2021-02-12T21:35:17Z,2021-02-12T21:35:17Z,NONE,,"Hi, If I run ``` docker run -p 8001:8001 -v `pwd`:/mnt \ 1 ↵ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 fixtures.db ``` I have ``` Error: Invalid value for '[FILES]...': Path 'fixtures.db' does not exist. ``` If I run `test -f fixtures.db && echo ""it exists.""` I have `it exists.`. What's my error? Thank you",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1220/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 797097140,MDU6SXNzdWU3OTcwOTcxNDA=,60,Use Data from SQLite in other commands,22578954,daniel-butler,open,0,,,,,3,2021-01-29T18:35:52Z,2021-02-12T18:29:43Z,,CONTRIBUTOR,,"As a total beginner here how could you access data from the sqlite table to run other commands. What I am thinking is I want to get all the repos in an organization then using the repo list pull all the commit messages for each repo. 
I love this project by the way!",207052882,github-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 803338729,MDU6SXNzdWU4MDMzMzg3Mjk=,33,photo-to-sqlite: command not found,11855322,robmarkcole,open,0,,,,,4,2021-02-08T08:42:57Z,2021-02-12T15:00:44Z,,NONE,,"Having installed in a venv I get: ``` (venv) (base) Robins-MacBook:datasette robin$ photo-to-sqlite apple-photos photos.db -bash: photo-to-sqlite: command not found ```",256834907,dogsheep-photos,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 806861312,MDExOlB1bGxSZXF1ZXN0NTcyMjA5MjQz,1222,"--ssl-keyfile and --ssl-certfile, refs #1221",9599,simonw,closed,0,,,,,0,2021-02-12T00:45:58Z,2021-02-12T00:52:18Z,2021-02-12T00:52:17Z,OWNER,simonw/datasette/pulls/1222,,107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1222/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 770712149,MDExOlB1bGxSZXF1ZXN0NTQyNDA2OTEw,10,BugFix for encoding and not update info.,1277270,riverzhou,closed,0,,,,,1,2020-12-18T08:58:54Z,2021-02-11T22:37:56Z,2021-02-11T22:37:56Z,NONE,dogsheep/evernote-to-sqlite/pulls/10,"Bugfix 1: Traceback (most recent call last): File ""d:\anaconda3\lib\runpy.py"", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File ""d:\anaconda3\lib\runpy.py"", line 87, in _run_code exec(code, run_globals) File ""D:\Anaconda3\Scripts\evernote-to-sqlite.exe\__main__.py"", line 7, in File ""d:\anaconda3\lib\site-packages\click\core.py"", line 829, in __call__ File ""d:\anaconda3\lib\site-packages\click\core.py"", line 782, in main rv = self.invoke(ctx) File ""d:\anaconda3\lib\site-packages\click\core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) return ctx.invoke(self.callback, **ctx.params) File ""d:\anaconda3\lib\site-packages\click\core.py"", line 610, in invoke return callback(*args, **kwargs) File ""d:\anaconda3\lib\site-packages\evernote_to_sqlite\cli.py"", line 30, in enex for tag, note in find_all_tags(fp, [""note""], progress_callback=bar.update): File ""d:\anaconda3\lib\site-packages\evernote_to_sqlite\utils.py"", line 11, in find_all_tags chunk = fp.read(1024 * 1024) UnicodeDecodeError: 'gbk' codec can't decode byte 0xa4 in position 383: illegal multibyte sequence Bugfix 2: Traceback (most recent call last): File ""D:\Anaconda3\Scripts\evernote-to-sqlite-script.py"", line 33, in sys.exit(load_entry_point('evernote-to-sqlite==0.3', 'console_scripts', 'evernote-to-sqlite')()) File ""D:\Anaconda3\lib\site-packages\click\core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""D:\Anaconda3\lib\site-packages\click\core.py"", line 782, in main rv = self.invoke(ctx) File ""D:\Anaconda3\lib\site-packages\click\core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""D:\Anaconda3\lib\site-packages\click\core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""D:\Anaconda3\lib\site-packages\click\core.py"", line 610, in invoke return callback(*args, **kwargs) File 
""D:\Anaconda3\lib\site-packages\evernote_to_sqlite-0.3-py3.8.egg\evernote_to_sqlite\cli.py"", line 31, in enex File ""D:\Anaconda3\lib\site-packages\evernote_to_sqlite-0.3-py3.8.egg\evernote_to_sqlite\utils.py"", line 28, in save_note AttributeError: 'NoneType' object has no attribute 'text'",303218369,evernote-to-sqlite,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/10/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 748370021,MDExOlB1bGxSZXF1ZXN0NTI1MzcxMDI5,8,"fix import error if note has no ""updated"" element",4028322,mkorosec,closed,0,,,,,0,2020-11-22T22:51:05Z,2021-02-11T22:34:06Z,2021-02-11T22:34:06Z,CONTRIBUTOR,dogsheep/evernote-to-sqlite/pulls/8,"I got the following error when executing evernote-to-sqlite enex evernote.db evernote.enex ``` ... File ""evernote_to_sqlite/cli.py"", line 31, in enex save_note(db, note) File ""evernote_to_sqlite/utils.py"", line 28, in save_note updated = note.find(""updated"").text AttributeError: 'NoneType' object has no attribute 'text' ``` Seems that in some cases the updated element is not added to the note, this is a part of the problematic note: ``` 20201019T074518Z web.clip7 webclipper.evernote ```",303218369,evernote-to-sqlite,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/8/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 743297582,MDU6SXNzdWU3NDMyOTc1ODI=,7,evernote-to-sqlite on windows 10 give this error: TypeError: insert() got an unexpected keyword argument 'replace',42387931,martinvanwieringen,closed,0,,,,,1,2020-11-15T16:57:28Z,2021-02-11T22:13:17Z,2021-02-11T22:13:17Z,NONE,,"running evernote-to-sqlite 0.2 on windows 10. Command: evernote-to-sqlite enex evernote.db MyNotes.enex I get the followinng error: File ""C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\evernote_to_sqlite\utils.py"", line 46, in save_note note_id = db[""notes""].insert(row, hash_id=""id"", replace=True, alter=True).last_pk TypeError: insert() got an unexpected keyword argument 'replace' Removing replace=True, Leads to below error: note_id = db[""notes""].insert(row, hash_id=""id"", alter=True).last_pk File ""C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py"", line 924, in insert return self.insert_all( File ""C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py"", line 1046, in insert_all result = self.db.conn.execute(sql, values) sqlite3.IntegrityError: UNIQUE constraint failed: notes.id",303218369,evernote-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/7/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 748372469,MDU6SXNzdWU3NDgzNzI0Njk=,9,ParseError: undefined entity š,4028322,mkorosec,closed,0,,,,,1,2020-11-22T23:04:35Z,2021-02-11T22:10:55Z,2021-02-11T22:10:55Z,CONTRIBUTOR,,"I encountered a parse error if the enex file contained š or   Run command: evernote-to-sqlite enex evernote.db evernote.enex ``` Traceback (most recent call last): ... 
File ""evernote_to_sqlite/cli.py"", line 31, in enex save_note(db, note) File ""evernote_to_sqlite/utils.py"", line 35, in save_note content = ET.tostring(ET.fromstring(content_xml)).decode(""utf-8"") File ""/usr/lib/python3.8/xml/etree/ElementTree.py"", line 1320, in XML parser.feed(text) xml.etree.ElementTree.ParseError: undefined entity š: line 3, column 35 ``` Workaround: ``` sed -i 's/š//g' evernote.enex sed -i 's/ //g' evernote.enex ```",303218369,evernote-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/9/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 792851444,MDU6SXNzdWU3OTI4NTE0NDQ=,11,XML parse error,3613583,dskrad,closed,0,,,,,2,2021-01-24T17:38:54Z,2021-02-11T21:18:58Z,2021-02-11T21:18:48Z,NONE,,"I am on Windows 10 using Windows Subsystem for Linux, Python 3.8. I installed evernote-to-sqlite via pipx (in a venv). I tried using enex files from the latest version of Evernote for Windows (10.6.9 which only lets you export 50 notes at a time) and from Legacy Evernote (6.25.2.9198 which lets you export all your notes at once). The enex file from latest evernote gives this error: File ""/usr/lib/python3.8/xml/etree/ElementTree.py"", line 1320, in XML parser.feed(text) xml.etree.ElementTree.ParseError: XML or text declaration not at start of entity: line 2, column 6 The enex file from Legacy Evernote gives this error: File ""/home/david/.local/pipx/venvs/evernote-to-sqlite/lib/python3.8/site-packages/evernote_to_sqlite/utils.py"", line 28, in save_note updated = note.find(""updated"").text AttributeError: 'NoneType' object has no attribute 'text'",303218369,evernote-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/11/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 792890765,MDU6SXNzdWU3OTI4OTA3NjU=,1200,?_size=10 option for the arbitrary query page would be useful,9599,simonw,open,0,,,,,2,2021-01-24T20:55:35Z,2021-02-11T03:13:59Z,,OWNER,,"https://latest.datasette.io/fixtures?sql=select+*+from+compound_three_primary_keys&_size=10 - `_size=10` does not do anything at the moment. It would be useful if it did. 
Would also be good if it persisted in a hidden form field.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1200/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 803929694,MDU6SXNzdWU4MDM5Mjk2OTQ=,1219,Try profiling Datasette using scalene,9599,simonw,open,0,,,,,2,2021-02-08T20:37:06Z,2021-02-08T22:13:00Z,,OWNER,,https://github.com/emeryberger/scalene looks like an interesting profiling tool.,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1219/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 801780625,MDU6SXNzdWU4MDE3ODA2MjU=,9,SSL Error,12669260,jfeiwell,open,0,,,,,2,2021-02-05T02:12:56Z,2021-02-07T18:45:04Z,,NONE,,"Here's the error I get when running `pip install pocket-to-sqlite`: ``` Could not fetch URL https://pypi.python.org/simple/pocket-to-sqlite/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:661) - skipping Could not find a version that satisfies the requirement pocket-to-sqlite (from versions: ) No matching distribution found for pocket-to-sqlite ``` Does this require python 3? ",213286752,pocket-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 802583450,MDU6SXNzdWU4MDI1ODM0NTA=,226,3.4 release is broken - includes a rogue line,9599,simonw,closed,0,,,,,0,2021-02-06T02:08:01Z,2021-02-06T02:10:26Z,2021-02-06T02:10:26Z,OWNER,,"I started seeing weird errors, caused by this line: https://github.com/simonw/sqlite-utils/blob/f8010ca78fed8c5fca6cde19658ec09fdd468420/sqlite_utils/cli.py#L1-L3 That was added by accident in 1b666f9315d4ea6bb332b2e75e48480c26100199 I'm surprised the tests didn't catch this!",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/226/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 788527932,MDU6SXNzdWU3ODg1Mjc5MzI=,223,--delimiter option for CSV import,9599,simonw,closed,0,,,,,2,2021-01-18T20:25:03Z,2021-02-06T01:39:47Z,2021-02-06T01:34:54Z,OWNER,,"https://bruxellesdata.opendatasoft.com/explore/dataset/dog-toilets/export/?location=12,50.85802,4.38054 says: > CSV uses semicolon (;) as a separator. Would be useful to be able to do this: sqlite-utils insert places.db places places.csv --delimiter ';' `--delimiter` could imply `--csv`",140912432,sqlite-utils,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/223/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 796234313,MDU6SXNzdWU3OTYyMzQzMTM=,1210,Immutable Database w/ Canned Queries,525780,heyarne,closed,0,,,,,2,2021-01-28T18:08:29Z,2021-02-05T11:30:34Z,2021-02-05T11:30:34Z,NONE,,"I have a database that I only want to read from; when instructing datasette to treat the database as immutable my defined canned queries disappear. Are these two features incompatible or have I hit an unintended bug? 
Thanks for datasette in any case, it's a joy to use!",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1210/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 796736607,MDU6SXNzdWU3OTY3MzY2MDc=,56,Not all quoted statuses get fetched?,42315895,gsajko,closed,0,,,,,3,2021-01-29T09:48:44Z,2021-02-03T10:36:36Z,2021-02-03T10:36:36Z,NONE,," ![image](https://user-images.githubusercontent.com/42315895/106259325-5f75dc80-621f-11eb-8311-db8f2fe2a257.png) In my database I have 13300 quote tweets, but around 3600 have `quoted_status` empty. I fetched some of them using `https://api.twitter.com/1.1/statuses/show.json?id=xx` and they did have ids of quoted tweets.",206156866,twitter-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/56/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 799693777,MDU6SXNzdWU3OTk2OTM3Nzc=,1214,Re-submitting filter form duplicates _x querystring arguments,9599,simonw,closed,0,,,,,3,2021-02-02T21:13:35Z,2021-02-02T21:28:53Z,2021-02-02T21:21:13Z,OWNER,,"Really nasty bug, caused by #1194 fix in 07e163561592c743e4117f72102fcd350a600909 Navigate to this page: https://github-to-sqlite.dogsheep.net/github/labels?_search=help&_sort=id Click ""Apply"" to submit the form and the resulting URL is https://github-to-sqlite.dogsheep.net/github/labels?_search=help&_sort=id&_search=help&_sort=id That's because the (truncated) HTML for the form looks like this: ```html ... ...
... ```",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1214/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 799663959,MDU6SXNzdWU3OTk2NjM5NTk=,1213,gzip support for HTML (and JSON) responses,9599,simonw,open,0,,,,,3,2021-02-02T20:36:28Z,2021-02-02T20:41:55Z,,OWNER,,"This page https://datasette-tiles-demo.datasette.io/San_Francisco/tiles is 2MB because of all of the base64 images. Gzipped it's 1.5MB. Since Datasette is usually deployed without a frontend gzipping proxy, Datasette itself needs to solve for this. Gzipping everything won't work because some endpoints - the all-rows CSV endpoint and the download-database endpoint - are streaming and hence can't be buffered-and-gzipped.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1213/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 797784080,MDU6SXNzdWU3OTc3ODQwODA=,62,Stargazers and workflows commands always require an auth file when using GITHUB_TOKEN ,631242,frosencrantz,open,0,,,,,0,2021-01-31T18:56:05Z,2021-01-31T18:56:05Z,,CONTRIBUTOR,,"Requested fix in https://github.com/dogsheep/github-to-sqlite/pull/59 The stargazers and workflows commands always require an auth file, even when using a `GITHUB_TOKEN`. Other commands don't require the auth file. ",207052882,github-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/62/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 797728929,MDU6SXNzdWU3OTc3Mjg5Mjk=,8,QUESTION: extract full text,417363,darribas,open,0,,,,,0,2021-01-31T14:50:10Z,2021-01-31T14:50:10Z,,NONE,,"This may be solved or a feature already, but I couldn't figure it out, is it possible to extract and store also full text from the saved pages? The same way that Pocket parses the text, it'd be amazing to be able to store (and thus make searchable later) the text. Thank you very much for the project, it's such an amazing idea! 
",213286752,pocket-to-sqlite,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/8/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 793881756,MDU6SXNzdWU3OTM4ODE3NTY=,1207,"Document the Datasette(..., pdb=True) testing pattern",9599,simonw,closed,0,,,,,1,2021-01-26T02:48:10Z,2021-01-29T02:37:19Z,2021-01-29T02:12:34Z,OWNER,,"If you're writing tests for a Datasette plugin and you get a 500 error from inside Datasette, you can cause Datasette to open a PDB session within the application server code by doing this: ```python ds = Datasette([db_path], pdb=True) response = await ds.client.get(""/"") ``` You'll need to run `pytest -s` to interact with the debugger, otherwise you'll get an error.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1207/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 795367402,MDU6SXNzdWU3OTUzNjc0MDI=,1209,v0.54 500 error from sql query in custom template; code worked in v0.53; found a workaround,11788561,jrdmb,open,0,,,,,1,2021-01-27T19:08:13Z,2021-01-28T23:00:27Z,,NONE,,"v0.54 500 error in sql query template; code worked in v0.53; found a workaround **schema:** CREATE TABLE ""talks"" (""talk"" TEXT,""series"" INTEGER, ""talkdate"" TEXT) CREATE TABLE ""series"" (""id"" INTEGER PRIMARY KEY, ""series"" TEXT, talks_list TEXT default '', website TEXT default ''); **Live example of correctly rendered template in v.053:** https://cosmotalks-cy6xkkbezq-uw.a.run.app/cosmotalks/talks/1 **Description of problem:** I needed 'sql select' code in a custom row-mydatabase-mytable.html template to lookup the series name for a foreign key integer value in the talks table. So `metadata.json` specifies the `datasette-template-sql` plugin. The code below worked perfectly in v0.53 (just the relevant sql statement part is shown; full code is [here](https://github.com/jrdmb/cosmotalks-datasette/blob/main/templates/row-cosmotalks-talks.html)): ``` {# custom addition #} {% for row in display_rows %} ... {% set sname = sql(""select series from series where id = ?"", [row.series]) %} Series name: {{ sname[0].series }} ... {% endfor %} {# End of custom addition #} ``` **In v0.54, that code resulted in a 500 error with a 'no such table series' message.** A second query in that template also did not work but the above is fully illustrative of the problem. All templates were up-to-date along with datasette v0.54. **Workaround:** After fiddling around with trying different things, what worked was the syntax from [Querying a different database from the datasette-template-sql github repo](https://github.com/simonw/datasette-template-sql#querying-a-different-database) to add the database name to the sql statement: `{% set sname = sql(""select series from series where id = ?"", [row.series], database=""cosmotalks"") %}` Though this was found to work, it should not be necessary to add `database=""cosmotalks""` since per the `datasette-template-sql` README, it's only needed when querying a different database, but here it's a table within the same database. 
",107914493,datasette,issue,,,,, 793027837,MDU6SXNzdWU3OTMwMjc4Mzc=,1205,Rename /:memory: to /_memory,9599,simonw,closed,0,,,3268330,Datasette 1.0,3,2021-01-25T05:04:56Z,2021-01-28T22:55:02Z,2021-01-28T22:51:42Z,OWNER,,"For consistency with `/_internal` - and because then we don't need to escape the `:` characters. This change would need to be in before Datasette 1.0. I could land it earlier and set up redirects from the old URLs though.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1205/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 770448622,MDU6SXNzdWU3NzA0NDg2MjI=,1151,Database class mechanism for cross-connection in-memory databases,9599,simonw,closed,0,,,6346396,Datasette 0.54,11,2020-12-17T23:25:43Z,2021-01-26T19:07:44Z,2020-12-18T01:01:26Z,OWNER,,"> Next challenge: figure out how to use the `Database` class from https://github.com/simonw/datasette/blob/0.53/datasette/database.py for an in-memory database which persists data for the duration of the lifetime of the server, and allows access to that in-memory database from multiple threads in a way that lets them see each other's changes. _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1150#issuecomment-747768112_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1151/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 714377268,MDU6SXNzdWU3MTQzNzcyNjg=,991,Redesign application homepage,9599,simonw,open,0,,,,,7,2020-10-04T18:48:45Z,2021-01-26T19:06:36Z,,OWNER,,"Most Datasette instances only host a single database, but the current homepage design assumes that it should leave plenty of space for multiple databases: Reconsider this design - should the default show more information? The Covid-19 Datasette homepage looks particularly sparse I think: https://covid-19.datasettes.com/ ",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/991/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 793907673,MDExOlB1bGxSZXF1ZXN0NTYxNTEyNTAz,15,added try / except to write_records ,9857779,ryancheley,open,0,,,,,0,2021-01-26T03:56:21Z,2021-01-26T03:56:21Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/healthkit-to-sqlite/pulls/15,"to keep the data write from failing if it came across an error during processing. 
In particular when trying to convert my HealthKit zip file (and that of my wife's) it would consistently error out with the following: ``` db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: too many SQL variables --------------------------------------------------------------------------------------------------------------------------------------------------------------------- db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: too many SQL variables --------------------------------------------------------------------------------------------------------------------------------------------------------------------- db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered --------------------------------------------------------------------------------------------------------------------------------------------------------------------- healthkit-to-sqlite 8 sys.exit(cli()) core.py 829 __call__ return self.main(*args, **kwargs) core.py 782 main rv = self.invoke(ctx) core.py 1066 invoke return ctx.invoke(self.callback, **ctx.params) core.py 610 invoke return callback(*args, **kwargs) cli.py 57 cli convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf) utils.py 42 convert_xml_to_sqlite write_records(records, db) utils.py 143 write_records db[table].insert_all( db.py 1899 insert_all self.insert_chunk( db.py 1720 insert_chunk self.insert_chunk( db.py 1720 insert_chunk self.insert_chunk( db.py 1714 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered ``` Adding the try / except in the `write_records` seems to fix that issue. 
",197882382,healthkit-to-sqlite,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/15/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 792904595,MDU6SXNzdWU3OTI5MDQ1OTU=,1201,Release notes for Datasette 0.54,9599,simonw,closed,0,,,6346396,Datasette 0.54,5,2021-01-24T21:22:28Z,2021-01-25T17:42:21Z,2021-01-25T17:42:21Z,OWNER,,"These will incorporate the release notes from the alpha, much expanded: https://github.com/simonw/datasette/releases/tag/0.54a0",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1201/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 793086333,MDExOlB1bGxSZXF1ZXN0NTYwODMxNjM4,1206,Release 0.54,9599,simonw,closed,0,,,,,3,2021-01-25T06:45:47Z,2021-01-25T17:33:30Z,2021-01-25T17:33:29Z,OWNER,simonw/datasette/pulls/1206,Refs #1201,107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1206/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 780278550,MDU6SXNzdWU3ODAyNzg1NTA=,1179,Make original path available to render hooks,9599,simonw,open,0,,,,,8,2021-01-06T08:31:45Z,2021-01-25T04:44:33Z,,OWNER,,"https://github.com/simonw/datasette-export-notebook/blob/0.1/datasette_export_notebook/__init__.py ```python async def render_notebook(datasette, request): return Response.html( await datasette.render_template( ""export_notebook.html"", { ""csv_stream_url"": datasette.absolute_url( request, path_with_format( request=request, format=""csv"", extra_qs={""_stream"": ""on""} ), ), ""json_url"": datasette.absolute_url( request, path_with_format( request=request, format=""json"", extra_qs={""_shape"": ""array""} ), ), ""json"": json, }, ) ) ``` This results in https://latest-with-plugins.datasette.io/github/issue_comments.Notebook showing `http://latest-with-plugins.datasette.io/github/issue_comments.Notebook?_format=json&_shape=array`",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1179/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 712260429,MDU6SXNzdWU3MTIyNjA0Mjk=,983,JavaScript plugin hooks mechanism similar to pluggy,9599,simonw,open,0,,,,,47,2020-09-30T20:32:43Z,2021-01-25T04:43:58Z,,OWNER,,"> It would be neat to provide a JavaScript plugin hook that plugins can use to add their own options to this menu. No idea what that would look like though. 
_Originally posted by @simonw in https://github.com/simonw/datasette/issues/981#issuecomment-701616922_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/983/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 789336592,MDU6SXNzdWU3ODkzMzY1OTI=,1195,"view_name = ""query"" for the query page",9599,simonw,open,0,,,,,4,2021-01-19T20:21:36Z,2021-01-25T04:40:08Z,,OWNER,,It uses `view_name` of `database` at the moment which isn't as useful.,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1195/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 712984738,MDU6SXNzdWU3MTI5ODQ3Mzg=,987,Documented HTML hooks for JavaScript plugin authors,9599,simonw,open,0,,,,,7,2020-10-01T16:10:14Z,2021-01-25T04:00:03Z,,OWNER,,In #981 I added `data-column=` attributes to the `` on the table page. These should become part of Datasette's documented API so JavaScript plugin authors can use them to derive things about the tables shown on a page (`datasette-cluster-map uses them as-of https://github.com/simonw/datasette-cluster-map/issues/18).,107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/987/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 788447787,MDU6SXNzdWU3ODg0NDc3ODc=,1194,?_size= argument is not persisted by hidden form fields in the table filters,9599,simonw,closed,0,,,6346396,Datasette 0.54,3,2021-01-18T17:41:52Z,2021-01-25T03:10:23Z,2021-01-25T03:10:23Z,OWNER,,"Click ""Apply"" on https://covid-19.datasettes.com/covid/ny_times_us_counties?_size=1000&county__exact=San+Francisco&state__exact=California&_sort_desc=date#g.mark=line&g.x_column=date&g.x_type=temporal&g.y_column=cases&g.y_type=quantitative and the `?_size=1000` parameter from the URL will no longer apply on the reloaded page.",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1194/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 777145954,MDU6SXNzdWU3NzcxNDU5NTQ=,1167,Add Prettier to contributing documentation,9599,simonw,closed,0,,,6346396,Datasette 0.54,3,2020-12-31T22:00:55Z,2021-01-25T02:01:19Z,2021-01-25T01:58:28Z,OWNER,,"Following #1166 - the docs at https://docs.datasette.io/en/stable/contributing.html should include a section about JavaScript, and it should document how to run Prettier. 
I run it in VS Code but it can be run on the command-line too: npx prettier 'datasette/static/*[!.min].js' --write ",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1167/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 792958773,MDExOlB1bGxSZXF1ZXN0NTYwNzI1NzE0,1203,Easier way to run Prettier locally,9599,simonw,closed,0,,,,,0,2021-01-25T01:39:06Z,2021-01-25T01:41:46Z,2021-01-25T01:41:46Z,OWNER,simonw/datasette/pulls/1203,Refs #1167,107914493,datasette,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1203/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 771208009,MDU6SXNzdWU3NzEyMDgwMDk=,1154,Documentation for new _internal database and tables,9599,simonw,closed,0,,,6346396,Datasette 0.54,2,2020-12-18T22:34:52Z,2021-01-25T00:09:22Z,2021-01-25T00:08:41Z,OWNER,,"> Needs documentation, but I can wait to write that until I've tested out the feature a bit more. _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1150#issuecomment-748352106_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1154/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 792931244,MDU6SXNzdWU3OTI5MzEyNDQ=,1202,Documentation convention for marking unstable APIs.,9599,simonw,closed,0,,,6346396,Datasette 0.54,2,2021-01-24T23:47:18Z,2021-01-25T00:01:02Z,2021-01-25T00:01:02Z,OWNER,,"> I'm going to document this but mark it as unstable, using a new documentation convention for marking unstable APIs. _Originally posted by @simonw in https://github.com/simonw/datasette/issues/1154#issuecomment-766462197_",107914493,datasette,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1202/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed 785588942,MDU6SXNzdWU3ODU1ODg5NDI=,1187,"extra_body_script() support for script type=""module""",9599,simonw,closed,0,,,6346396,Datasette 0.54,1,2021-01-14T02:01:47Z,2021-01-24T21:21:44Z,2021-01-14T02:14:39Z,OWNER,,"Follows #1186. The `extra_body_script()` plugin hook should provide a mechanism for specifying that the script should use `