id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 1880968405,PR_kwDOJHON9s5ZhYny,14,fix: fix the problem of Chinese character garbling,2698003,open,0,,,0,2023-09-04T23:48:28Z,2023-09-04T23:48:28Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/apple-notes-to-sqlite/pulls/14,"1. The code uses two different ways of writing encoding formats, `mac_roman` and `macroman`. It is uncertain whether there are any typo errors. 2. When there are Chinese characters in the content, exporting it results in garbled code. Changing it to `utf8` can fix the issue.",611552758,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/14/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1650984552,PR_kwDOJHON9s5NbyYN,13,use universal command,14314871,open,0,,,0,2023-04-02T15:10:54Z,2023-04-02T15:37:34Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/apple-notes-to-sqlite/pulls/13,,611552758,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/13/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1042759769,PR_kwDOEhK-wc4uAJb9,15,include note tags in the export,436138,open,0,,,0,2021-11-02T20:04:31Z,2021-11-02T20:04:31Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/evernote-to-sqlite/pulls/15,"When parsing the Evernote `` elements, the script will now also parse any nested `` elements, writing them out into a separate sqlite table. Here is an example of how to query the data after the script has run: ``` select notes.*, (select group_concat(tag) from notes_tags where notes_tags.note_id=notes.id) as tags from notes; ``` My .enex source file is 3+ years old so I am assuming the structure hasn't changed. Interestingly, my _notebook names_ show up in the _tags_ list where the tag name is prefixed with `notebook_`, so this could maybe help work around the first limitation mentioned in the [evernote-to-sqlite blog post](https://simonwillison.net/2020/Oct/16/building-evernote-sqlite-exporter/). ",303218369,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/15/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1641117021,PR_kwDODtX3eM5M66op,6,Add permalink virtual field to items table,1231935,open,0,,,1,2023-03-26T22:22:38Z,2023-03-29T18:38:52Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/hacker-news-to-sqlite/pulls/6,"I added a virtual column (no storage overhead) to the output that easily links back to the source. It works nicely out of the box with datasette: ![](https://cdn.zappy.app/faf43661d539ee0fee02c0421de22d65.png) I got bit a bit by https://github.com/simonw/sqlite-utils/issues/411, so I went with a manual `table_xinfo` and creating the table via execute. Happy to adjust if that issue moves, but this seems like it works. I also added my best-guess instructions for local development on this package. 
I'm shooting in the dark, so feel free to replace with how you work on it locally.",248903544,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/6/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1353418822,PR_kwDODtX3eM497MOV,5,The program fails when the user has no submissions,2467,open,0,,,0,2022-08-28T17:25:45Z,2022-08-28T17:25:45Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/hacker-news-to-sqlite/pulls/5,"Tested with: hacker-news-to-sqlite user hacker-news.db fernand0 Result: ` Traceback (most recent call last): File ""/home/ftricas/.pyenv/versions/3.10.6/bin/hacker-news-to-sqlite"", line 8, in sys.exit(cli()) File ""/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py"", line 1130, in __call__ return self.main(*args, **kwargs) File ""/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py"", line 1055, in main rv = self.invoke(ctx) File ""/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py"", line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py"", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py"", line 760, in invoke return __callback(*args, **kwargs) File ""/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/hacker_news_to_sqlite/cli.py"", line 27, in user submitted = user.pop(""submitted"", None) or [] AttributeError: 'NoneType' object has no attribute 'pop' ` There is a problem of style with the patch (but not sure what to do) because with the new inicialization ( submitted = []) the part or [] is not needed. Maybe there is a more adequate way of doing this.",248903544,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/5/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1149402080,PR_kwDODFdgUs4zaUta,70,scrape-dependents: enable paging through package menu option if present,36061055,open,0,,,0,2022-02-24T15:07:25Z,2022-02-24T15:07:25Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/github-to-sqlite/pulls/70,Some repos organize network dependents by a Package toggle. This PR adds the ability to page through those options and scrape underlying dependents.,207052882,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/70/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1013506559,PR_kwDODFdgUs4skaNS,68,Add support for retrieving teams / members,68329,open,0,,,0,2021-10-01T15:55:02Z,2021-10-01T15:59:53Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/github-to-sqlite/pulls/68,Adds a method for retrieving all the teams within an organisation and all the members in those teams. 
The latter is stored as a join table `team_members` between `teams` and `users`.,207052882,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/68/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1363280254,PR_kwDODFdgUs4-cIa_,76,Add organization support to repos command,2757699,open,0,,,1,2022-09-06T13:21:42Z,2022-09-06T13:59:08Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/github-to-sqlite/pulls/76,"New --organization flag to signify all given ""usernames"" are private orgs. Adapts API URL to the organization path instead. Not the best implementation, but a first draft to talk around Fixes #75 (badly, no tests, overly vague, untested)",207052882,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/76/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1884499674,PR_kwDODFE5qs5ZtYMc,13,"use poetry for packages, asdf for versioning, and gh actions for ci",150855,open,0,,,0,2023-09-06T17:59:16Z,2023-09-06T17:59:16Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/google-takeout-to-sqlite/pulls/13,"- build: use poetry for package management, asdf for python version - build: cleanup poetry config, add keywords, ignore dist - ci: migrate circleci to gh actions - fix: dup method definition ",206649770,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/13/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1046887492,PR_kwDODFE5qs4uMsMJ,9,Removed space from filename My Activity.json,91880982,open,0,,,0,2021-11-08T00:04:31Z,2021-11-08T00:04:31Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/google-takeout-to-sqlite/pulls/9,"File name from google takeout has no space. The code only runs without error if filename is ""MyActivity.json"" and not ""My Activity.json"". 
Is it a new change by Google?",206649770,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/9/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1250287607,PR_kwDODFE5qs44jvRV,11,Update README.md,11887,open,0,,,0,2022-05-27T03:13:59Z,2022-05-27T03:13:59Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/google-takeout-to-sqlite/pulls/11,Fix typo,206649770,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/11/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1513238455,PR_kwDODEm0Qs5GUoPm,71,"Archive: Fix ""ni devices"" typo in importer",26161409,open,0,,,0,2022-12-28T23:33:31Z,2022-12-28T23:33:31Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/twitter-to-sqlite/pulls/71,,206156866,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/71/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1513238314,PR_kwDODEm0Qs5GUoN6,70,Archive: Import Twitter Circle data,26161409,open,0,,,0,2022-12-28T23:33:09Z,2022-12-28T23:33:09Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/twitter-to-sqlite/pulls/70,,206156866,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/70/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1513238152,PR_kwDODEm0Qs5GUoMM,69,Archive: Import new tweets table name,26161409,open,0,,,0,2022-12-28T23:32:44Z,2022-12-28T23:32:44Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/twitter-to-sqlite/pulls/69,"Given the code here, it seems like in the past this file was named ""tweet.js"". In recent exports, it's named ""tweets.js"". The archive importer needs to be modified to take this into account. Existing logic is reused for importing this table. (However, the resulting table name will be different, matching the different file name -- archive_tweets, rather than archive_tweet).",206156866,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/69/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1513237982,PR_kwDODEm0Qs5GUoKL,68,Archive: Import mute table,26161409,open,0,,,0,2022-12-28T23:32:06Z,2022-12-28T23:32:06Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/twitter-to-sqlite/pulls/68,,206156866,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/68/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1513237712,PR_kwDODEm0Qs5GUoG_,67,Add support for app-only bearer tokens,26161409,open,0,,,0,2022-12-28T23:31:20Z,2022-12-28T23:31:20Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/twitter-to-sqlite/pulls/67,"Previously, twitter-to-sqlite only supported OAuth1 authentication, and the token must be on behalf of a user. However, Twitter also supports application-only bearer tokens, documented here: https://developer.twitter.com/en/docs/authentication/oauth-2-0/bearer-tokens This PR adds support to twitter-to-sqlite for using application-only bearer tokens. 
To use, the auth.json file just needs to contain a ""bearer_token"" key instead of ""api_key"", ""api_secret_key"", etc.",206156866,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/67/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1160327106,PR_kwDODEm0Qs4z_V3w,65,"Update Twitter dev link, clarify apps vs projects",2657547,open,0,,,0,2022-03-05T11:56:08Z,2022-03-05T11:56:08Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/twitter-to-sqlite/pulls/65,"Twitter pushes you heavily towards v2 projects instead of v1 apps – I know the README mentions v1 API compatibility at the top, but I still nearly got turned around here.",206156866,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/65/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1244082183,PR_kwDODEm0Qs44PPLy,66,Ageinfo workaround,11887,open,0,,,0,2022-05-21T21:08:29Z,2022-05-21T21:09:16Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/twitter-to-sqlite/pulls/66,"I'm not sure if this is due to a new format or just because my ageinfo file is blank, but trying to import an archive would crash when it got to that file. This PR adds a guard clause in the `ageinfo` transformer and sets a default value that doesn't throw an exception. Seems likely to be the same issue mentioned by danp in https://github.com/dogsheep/twitter-to-sqlite/issues/54, my ageinfo file looks the same. Added that same ageinfo file to the test archive as well to help confirm my workaround didn't break anything. Let me know if you want any changes!",206156866,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/66/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1393330070,PR_kwDODD6af84__DNJ,14,Photo links,6782721,open,0,,,0,2022-10-01T09:44:15Z,2022-11-18T17:10:49Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/swarm-to-sqlite/pulls/14,"* add to `checkin_details` view new column for a calculated photo links * supported multiple links split by newline * create `events` table if there's no events in the history to avoid SQL errors Fixes #9.",205429375,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/14/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1827436260,PR_kwDOD079W85WtVyk,39,Missing option in datasette instructions,319473,open,0,,,0,2023-07-29T10:34:48Z,2023-07-29T10:34:48Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/dogsheep-photos/pulls/39,Gotta tell it where to look,256834907,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/39/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1293698966,PR_kwDOD079W84600uh,37,Fix former command name in readme,578773,open,0,,,0,2022-07-05T02:09:13Z,2022-07-05T02:09:13Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/dogsheep-photos/pulls/37,Looks like a previous commit missed a `photo-to-sqlite`→ `dogsheep-photos` replacement.,256834907,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/37/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 
0}",0, 1405196044,PR_kwDOCGYnMM5AmYzy,499,feat: recreate fts triggers after table transform,7908073,open,0,,,2,2022-10-11T20:35:39Z,2022-10-26T17:54:51Z,,CONTRIBUTOR,simonw/sqlite-utils/pulls/499,"https://github.com/simonw/sqlite-utils/pull/498 ---- :books: Documentation preview :books:: https://sqlite-utils--499.org.readthedocs.build/en/499/ alternatively, `self.disable_fts()`",140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/499/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1066603133,PR_kwDOCGYnMM4vKAzW,347,Test against pysqlite3 running SQLite 3.37,9599,open,0,,,9,2021-11-29T23:17:57Z,2021-12-11T01:02:19Z,,OWNER,simonw/sqlite-utils/pulls/347,Refs #346 and #344.,140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/347/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1515717718,PR_kwDOC8tyDs5Gc-VH,23,Include workout statistics,2129,open,0,,,0,2023-01-01T17:29:57Z,2023-01-01T17:29:57Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/healthkit-to-sqlite/pulls/23,"Not sure when this changed (iOS 16 maybe?), but the `WorkoutStatistics` now has a whole bunch of information about workouts, e.g. for runs it contains the distance (as a `` element). Adding it as another column at leat allows me to pull these out (using SQLite's JSON support). I'm running with this patch on my own data now.",197882382,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/23/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1994861266,PR_kwDOBm6k_c5fhgOS,2209,Fix query for suggested facets with column named value,198537,open,0,,,3,2023-11-15T14:13:30Z,2023-11-15T15:31:12Z,,CONTRIBUTOR,simonw/datasette/pulls/2209,"See discussion in https://github.com/simonw/datasette/issues/2208 ---- :books: Documentation preview :books:: https://datasette--2209.org.readthedocs.build/en/2209/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2209/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1983600865,PR_kwDOBm6k_c5e7WH7,2206,Bump the python-packages group with 1 update,49699333,open,0,,,1,2023-11-08T13:18:56Z,2023-12-08T13:46:24Z,,CONTRIBUTOR,simonw/datasette/pulls/2206,"Bumps the python-packages group with 1 update: [black](https://github.com/psf/black).
Release notes

Sourced from black's releases.

23.11.0

Highlights

  • Support formatting ranges of lines with the new --line-ranges command-line option (#4020)

Stable style

  • Fix crash on formatting bytes strings that look like docstrings (#4003)
  • Fix crash when whitespace followed a backslash before newline in a docstring (#4008)
  • Fix standalone comments inside complex blocks crashing Black (#4016)
  • Fix crash on formatting code like await (a ** b) (#3994)
  • No longer treat leading f-strings as docstrings. This matches Python's behaviour and fixes a crash (#4019)

Preview style

  • Multiline dicts and lists that are the sole argument to a function are now indented less (#3964)
  • Multiline unpacked dicts and lists as the sole argument to a function are now also indented less (#3992)
  • In f-string debug expressions, quote types that are visible in the final string are now preserved (#4005)
  • Fix a bug where long case blocks were not split into multiple lines. Also enable general trailing comma rules on case blocks (#4024)
  • Keep requiring two empty lines between module-level docstring and first function or class definition (#4028)
  • Add support for single-line format skip with other comments on the same line (#3959)

Configuration

  • Consistently apply force exclusion logic before resolving symlinks (#4015)
  • Fix a bug in the matching of absolute path names in --include (#3976)

Performance

  • Fix mypyc builds on arm64 on macOS (#4017)

Integrations

  • Black's pre-commit integration will now run only on git hooks appropriate for a code formatter (#3940)

23.10.1

Highlights

  • Maintenance release to get a fix out for GitHub Action edge case (#3957)

Preview style

... (truncated)

Changelog

Sourced from black's changelog.

23.11.0

Highlights

  • Support formatting ranges of lines with the new --line-ranges command-line option (#4020)

Stable style

  • Fix crash on formatting bytes strings that look like docstrings (#4003)
  • Fix crash when whitespace followed a backslash before newline in a docstring (#4008)
  • Fix standalone comments inside complex blocks crashing Black (#4016)
  • Fix crash on formatting code like await (a ** b) (#3994)
  • No longer treat leading f-strings as docstrings. This matches Python's behaviour and fixes a crash (#4019)

Preview style

  • Multiline dicts and lists that are the sole argument to a function are now indented less (#3964)
  • Multiline unpacked dicts and lists as the sole argument to a function are now also indented less (#3992)
  • In f-string debug expressions, quote types that are visible in the final string are now preserved (#4005)
  • Fix a bug where long case blocks were not split into multiple lines. Also enable general trailing comma rules on case blocks (#4024)
  • Keep requiring two empty lines between module-level docstring and first function or class definition (#4028)
  • Add support for single-line format skip with other comments on the same line (#3959)

Configuration

  • Consistently apply force exclusion logic before resolving symlinks (#4015)
  • Fix a bug in the matching of absolute path names in --include (#3976)

Performance

  • Fix mypyc builds on arm64 on macOS (#4017)

Integrations

  • Black's pre-commit integration will now run only on git hooks appropriate for a code formatter (#3940)

23.10.1

Highlights

  • Maintenance release to get a fix out for GitHub Action edge case (#3957)

... (truncated)

Commits
  • 2a1c67e Prepare release 23.11.0 (#4032)
  • 72e7a2e Remove redundant condition from has_magic_trailing_comma (#4023)
  • 1a7d9c2 Preserve visible quote types for f-string debug expressions (#4005)
  • f4c7be5 docs: fix minor typo (#4030)
  • 2e4fac9 Apply force exclude logic before symlink resolution (#4015)
  • 66008fd [563] Fix standalone comments inside complex blocks crashing Black (#4016)
  • 50ed622 Fix long case blocks not split into multiple lines (#4024)
  • 46be1f8 Support formatting specified lines (#4020)
  • ecbd9e8 Fix crash with f-string docstrings (#4019)
  • e808e61 Preview: Keep requiring two empty lines between module-level docstring and fi...
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=black&package-manager=pip&previous-version=23.9.1&new-version=23.11.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore ` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore ` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore ` will remove the ignore condition of the specified dependency and ignore conditions
---- :books: Documentation preview :books:: https://datasette--2206.org.readthedocs.build/en/2206/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2206/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1884330740,PR_kwDOBm6k_c5ZszDF,2174,Use $DATASETTE_INTERNAL in absence of --internal,15178711,open,0,,,3,2023-09-06T16:07:15Z,2023-09-08T00:46:13Z,,CONTRIBUTOR,simonw/datasette/pulls/2174,"#refs 2157, specifically [this comment](https://github.com/simonw/datasette/issues/2157#issuecomment-1700291967) Passing in `--internal my_internal.db` over and over again can get repetitive. This PR adds a new configurable env variable `DATASETTE_INTERNAL_DB_PATH`. If it's defined, then it takes place as the path to the internal database. Users can still overwrite this behavior by passing in their own `--internal internal.db` flag. In draft mode for now, needs tests and documentation. Side note: Maybe we can have a sections in the docs that lists all the ""configuration environment variables"" that Datasette respects? I did a quick grep and found: - `DATASETTE_LOAD_PLUGINS` - `DATASETTE_SECRETS` ---- :books: Documentation preview :books:: https://datasette--2174.org.readthedocs.build/en/2174/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2174/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1866815458,PR_kwDOBm6k_c5YyF-C,2159,Implement Dark Mode colour scheme,3315059,open,0,,,0,2023-08-25T10:46:23Z,2023-08-25T10:46:35Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2159,"Closes #2095. ---- :books: Documentation preview :books:: https://datasette--2159.org.readthedocs.build/en/2159/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2159/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1, 1865983069,PR_kwDOBm6k_c5YvQSi,2158,add brand option to metadata.json.,52261150,open,0,,,0,2023-08-24T22:37:41Z,2023-08-24T22:37:57Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2158,"This adds a brand link to the top navbar if 'brand' key is populated in metadata.json. The link will be either '#' or use the contents of 'brand_url' in metadata.json for href. I was able to get this done on my own site by replacing `templates/_crumbs.html` with a custom version, but I thought it would be nice to incorporate this in the tool directly. ![image](https://github.com/simonw/datasette/assets/52261150/fdfe9bb5-fee4-466c-8074-6132071d94e6) ---- :books: Documentation preview :books:: https://datasette--2158.org.readthedocs.build/en/2158/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2158/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1865572575,PR_kwDOBm6k_c5Yt2eO,2155,Fix hupper.start_reloader entry point,79087,open,0,,,2,2023-08-24T17:14:08Z,2023-09-27T18:44:02Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2155,"Update hupper's entry point so that click commands are processed properly. 
Fixes #2123 ---- :books: Documentation preview :books:: https://datasette--2155.org.readthedocs.build/en/2155/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2155/reactions"", ""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 2, ""eyes"": 0}",0, 1864112887,PR_kwDOBm6k_c5Yo7bk,2151,Test Datasette on multiple SQLite versions,15178711,open,0,,,1,2023-08-23T22:42:51Z,2023-08-23T22:58:13Z,,CONTRIBUTOR,simonw/datasette/pulls/2151,"still testing, hope it works! ---- :books: Documentation preview :books:: https://datasette--2151.org.readthedocs.build/en/2151/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2151/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1, 1802613340,PR_kwDOBm6k_c5VZhfw,2100,Make primary key view accessible to render_cell hook,1563881,open,0,,,0,2023-07-13T09:30:36Z,2023-08-10T13:15:41Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2100," ---- :books: Documentation preview :books:: https://datasette--2100.org.readthedocs.build/en/2100/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2100/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1794604602,PR_kwDOBm6k_c5U-akg,2096,Clarify docs for descriptions in metadata,15906,open,0,,,0,2023-07-08T01:57:58Z,2023-07-08T01:58:13Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2096,"G'day! I got confused while debugging, earlier today. That's on me, but it does strike me a little repetition in the metadata documentation might help those flicking around it rather than reading it from top to bottom. No worries if you think otherwise. ---- :books: Documentation preview :books:: https://datasette--2096.org.readthedocs.build/en/2096/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2096/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1734786661,PR_kwDOBm6k_c5R0fcK,2082,Catch query interrupted on facet suggest row count,10843208,open,0,,,0,2023-05-31T18:42:46Z,2023-05-31T18:45:26Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2082,"Just like facet's `suggest()` is trapping `QueryInterrupted` for facet columns, we also need to trap `get_row_count()`, which can reach timeout if database tables are big enough. I've included `get_columns()` inside the block as that's just another query, despite it's a really cheap one and might never raise the exception. 
---- :books: Documentation preview :books:: https://datasette--2082.org.readthedocs.build/en/2082/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2082/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1715468032,PR_kwDOBm6k_c5QzEAM,2076,Datsette gpt plugin,130708713,open,0,,,0,2023-05-18T11:22:30Z,2023-05-18T11:22:45Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2076," ---- :books: Documentation preview :books:: https://datasette--2076.org.readthedocs.build/en/2076/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2076/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1708981860,PR_kwDOBm6k_c5QdMea,2074,sort files by mtime,3919561,open,0,,,0,2023-05-14T15:25:15Z,2023-05-14T15:25:29Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2074,"serving multiple database files and getting tired by the default sort, changes so the sort order puts the latest changed databases to be on top of the list so don't have to scroll down, lazy as i am ;) ---- :books: Documentation preview :books:: https://datasette--2074.org.readthedocs.build/en/2074/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2074/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1674322631,PR_kwDOBm6k_c5OpEz_,2061,"Add ""Packaging a plugin using Poetry"" section in docs",1238873,open,0,,,0,2023-04-19T07:23:28Z,2023-04-19T07:27:18Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2061,"This PR adds a new section about packaging a plugin using `poetry` within the ""Writing plugins"" page of the documentation. ---- :books: Documentation preview :books:: https://datasette--2061.org.readthedocs.build/en/2061/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2061/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1661860507,PR_kwDOBm6k_c5N_bMw,2056,GitHub Action to lint Python code with ruff,3709715,open,0,,,6,2023-04-11T06:41:27Z,2023-04-15T14:24:46Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2056,"[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) and can be used to replace [Flake8](https://pypi.org/project/flake8/) (plus dozens of plugins), [isort](https://pypi.org/project/isort/), [pydocstyle](https://pypi.org/project/pydocstyle/), [yesqa](https://github.com/asottile/yesqa), [eradicate](https://pypi.org/project/eradicate/), [pyupgrade](https://pypi.org/project/pyupgrade/), and [autoflake](https://pypi.org/project/autoflake/), all while executing (in Rust) tens or hundreds of times faster than any individual tool. The ruff Action uses minimal steps to run in ~5 seconds, rapidly providing intuitive GitHub Annotations to contributors. 
![image](https://user-images.githubusercontent.com/3709715/223758136-afc386d2-70aa-4eff-953a-2c2d82ceea23.png) ---- :books: Documentation preview :books:: https://datasette--2056.org.readthedocs.build/en/2056/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2056/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1639873822,PR_kwDOBm6k_c5M29tt,2044,Expand labels in row view as well (patch for 0.64.x branch),82332573,open,0,,,0,2023-03-24T18:44:44Z,2023-03-24T18:44:57Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2044,"This is a version of #2031 for the 0.64.x branch. ---- :books: Documentation preview :books:: https://datasette--2044.org.readthedocs.build/en/2044/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2044/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1613974869,PR_kwDOBm6k_c5LgPS-,2034,remove an unused `app` var in cli.py,4370201,open,0,,,2,2023-03-07T18:19:05Z,2023-03-29T20:56:20Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2034,"this var `app` isn't actually used? unless init it does some side-effect outside of the event loop, idon't think it's necessary. Feel free to ignore this PR if the deleted line actually does something. ---- :books: Documentation preview :books:: https://datasette--2034.org.readthedocs.build/en/2034/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2034/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1605481359,PR_kwDOBm6k_c5LDwrF,2031,Expand foreign key references in row view as well,82332573,open,0,,,5,2023-03-01T18:43:09Z,2023-03-24T18:35:25Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2031,"Unlike the table view, the single row view does not resolve foreign key references into labels. This patch extracts the foreign key reference expansion code from TableView.data() into a standalone function that is then called by both TableView.data() and RowView.data(). ---- :books: Documentation preview :books:: https://datasette--2031.org.readthedocs.build/en/2031/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2031/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1586980089,PR_kwDOBm6k_c5KF-by,2026,Avoid repeating primary key columns if included in _col args,8513,open,0,,,0,2023-02-16T04:16:25Z,2023-02-16T04:16:41Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2026,"...while maintaining given order. Fixes #1975 (if I'm understanding correctly). 
---- :books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2026/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1581218043,PR_kwDOBm6k_c5JyqPy,2025,Add database metadata to index.html template context,9993,open,0,,,0,2023-02-12T11:16:58Z,2023-02-12T11:17:14Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2025,"Fixes #2016 ---- :books: Documentation preview :books:: https://datasette--2025.org.readthedocs.build/en/2025/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2025/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1560982210,PR_kwDOBm6k_c5IvYKw,2008,array facet: don't materialize unnecessary columns,193185,open,0,,,8,2023-01-28T19:33:40Z,2023-01-29T18:17:40Z,,CONTRIBUTOR,simonw/datasette/pulls/2008,"The presence of `inner.*` causes SQLite to materialize a row with all the columns. Those columns will be discarded later. Instead, we can select only the column we'll use. This lets SQLite's optimizer realize that the other columns in the CTE definition aren't needed. On a test table with 278K rows, 98K of which had an array, this speeds up the facet calculation from 4 sec to 1 sec. ---- :books: Documentation preview :books:: https://datasette--2008.org.readthedocs.build/en/2008/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2008/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1556065335,PR_kwDOBm6k_c5Ie5nA,2004,"use single quotes for string literals, fixes #2001",193185,open,0,,,1,2023-01-25T05:08:46Z,2023-02-01T06:37:18Z,,CONTRIBUTOR,simonw/datasette/pulls/2004,"This modernizes some uses of double quotes for string literals to use only single quotes, fixes simonw/datasette#2001 While developing it, I manually enabled the stricter mode by using the code snippet at https://gist.github.com/cldellow/85bba507c314b127f85563869cd94820 I think that code snippet isn't generally safe/portable, so I haven't tried to automate it in the tests. ---- :books: Documentation preview :books:: https://datasette--2004.org.readthedocs.build/en/2004/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2004/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1555701851,PR_kwDOBm6k_c5IdsD7,2003,Show referring tables and rows when the referring foreign key is compound,536941,open,0,,,3,2023-01-24T21:31:31Z,2023-01-25T18:44:42Z,,CONTRIBUTOR,simonw/datasette/pulls/2003,"sqlite foreign keys can be compound, but that is not as well supported by datasette as single column foreign keys. in particular, 1. in a table view, there is not a link from the row to the referenced row if the foreign key is compound 2. in a row view, there is no listing of tables and rows that refer to the focal row if those referencing foreign keys are compound. Both of these issues are discussed in #1099. This PR only fixes the second one, because it's not clear what the right UX is for the first issue. 
![Screenshot 2023-01-24 at 19-47-40 nlrb bargaining_unit](https://user-images.githubusercontent.com/536941/214454749-d53deead-4151-4329-a5d4-8a7a454de7d3.png) Some things that might not be desirable about this approach. 1. it changes the external API, by changing `column` => `columns` and `other_column` => `other_columns` (see inline comment for more discussion. 2. There are various places where the plural foreign keys have to be checked for length and discarded or transformed to singular. ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2003/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1538342965,PR_kwDOBm6k_c5HpNYo,1996,Document custom json encoder,25778,open,0,,,1,2023-01-18T16:54:14Z,2023-01-19T12:55:57Z,,CONTRIBUTOR,simonw/datasette/pulls/1996,"Closes #1983 All documentation here. Edits welcome. ---- :books: Documentation preview :books:: https://datasette--1996.org.readthedocs.build/en/1996/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1996/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1426379903,PR_kwDOBm6k_c5BtJNn,1870,"don't use immutable=1, only mode=ro",536941,open,0,,,7,2022-10-27T23:33:04Z,2023-10-03T19:12:37Z,,CONTRIBUTOR,simonw/datasette/pulls/1870,"Opening db files in immutable mode sometimes leads to the file being mutated, which causes duplication in the docker image layers: see #1836, #1480 That this happens in ""immutable"" mode is surprising, because the sqlite docs say that setting this should open the database as read only. https://www.sqlite.org/c3ref/open.html > immutable: The immutable parameter is a boolean query parameter that indicates that the database file is stored on read-only media. When immutable is set, SQLite assumes that the database file cannot be changed, even by a process with higher privilege, and so the database is opened read-only and all locking and change detection is disabled. Caution: Setting the immutable property on a database file that does in fact change can result in incorrect query results and/or [SQLITE_CORRUPT](https://www.sqlite.org/rescode.html#corrupt) errors. See also: [SQLITE_IOCAP_IMMUTABLE](https://www.sqlite.org/c3ref/c_iocap_atomic.html). Perhaps this is a bug in sqlite? ---- :books: Documentation preview :books:: https://datasette--1870.org.readthedocs.build/en/1870/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1870/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1122451096,PR_kwDOBm6k_c4x_mXy,1626,Try test suite against macOS and Windows,9599,open,0,,,3,2022-02-02T22:26:51Z,2022-02-03T01:22:44Z,,OWNER,simonw/datasette/pulls/1626,Refs #1625,107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1626/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1033678984,PR_kwDOBm6k_c4tjgJ8,1495,Allow routes to have extra options,536941,open,0,,,5,2021-10-22T15:00:45Z,2021-11-19T15:36:27Z,,CONTRIBUTOR,simonw/datasette/pulls/1495,"Right now, datasette routes can only be a 2-tuple of `(regex, view_fn)`. 
If it was possible for datasette to handle extra options, like [standard Django does](https://docs.djangoproject.com/en/3.2/topics/http/urls/#passing-extra-options-to-view-functions), it would add flexibility for plugin authors. For example, if extra options were enabled, then it would be easy to make a single table the home page (#1284). This plugin would accomplish it. ```python from datasette import hookimpl from datasette.views.table import TableView @hookimpl def register_routes(datasette): return [ (r""^/$"", TableView.as_view(datasette), {'db_name': 'DB_NAME', 'table': 'TABLE_NAME'}) ] ``` ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1495/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1001104942,PR_kwDOBm6k_c4r-EVH,1475,feat: allow joins using _through in both directions,5268174,open,0,,,0,2021-09-20T15:28:20Z,2021-09-20T15:28:20Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/1475,"Currently the `_through` clause can only work if the FK relationship is defined in a specific direction. I don't think there is any reason for this limitation, as an FK allows joining in both directions. This is an admittedly hacky change to implement bidirectional joins using `_through`. It does work for our use-case, but I don't know if there are other implications that I haven't thought of. Also if this change is desirable we probably want to make the code a little nicer.",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1475/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1386917344,PR_kwDOBm6k_c4_prjN,1823,Keyword-only arguments for a bunch of internal methods,9599,open,0,,,3,2022-09-27T00:44:59Z,2022-10-05T04:37:54Z,,OWNER,simonw/datasette/pulls/1823,"Refs #1822 ---- :books: Documentation preview :books:: https://datasette--1823.org.readthedocs.build/en/1823/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1823/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1307359454,PR_kwDOBm6k_c47iWbd,1772,Convert to setup.cfg,89725,open,0,,,0,2022-07-18T03:39:53Z,2022-07-18T03:39:53Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/1772,"Recent versions of setuptools can run most things from setup.cfg so one can have a simpler version that does not require executing code on install. The bulk of the changes were automated by running https://pypi.org/project/setup-py-upgrade/ with a few minor edits for the bits that it can not auto convert (the initial `get_long_description()` and `get_version()` can not be automatically converted)",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1772/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1268121674,PR_kwDOBm6k_c45fz-O,1757,feat: add a wildcard for _json columns,163156,open,0,,,1,2022-06-11T01:01:17Z,2022-09-06T00:51:21Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/1757,"This allows _json to accept a wildcard for when there are many JSON columns that the user wants to convert. I hope this is useful. I've tested it on our datasette and haven't ran into any issues. 
I imagine on a large set of results, there could be some performance issues, but it will probably be negligible for most use cases. On a side note, I ran into an issue where I had to upgrade black on my system beyond the pinned version in setup.py. Here is the upstream issue < . I didn't include this in the PR yet since I didn't look into the issue too far, but I can if you would like.",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1757/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 377166793,MDU6SXNzdWUzNzcxNjY3OTM=,372,Docker build tools,82988,open,0,,,0,2018-11-04T16:02:35Z,2018-11-04T16:02:35Z,,CONTRIBUTOR,,"In terms of small pieces lightly joined, I note that there are several tools starting to appear for building generating Dockerfiles and building Docker containers from simpler components such as `requirements.txt` files. If plugin/extensions builders want to include additional packages, then things like incremental builds of composable builds that add additional items into a base `datasette` container may be required. Examples of Dockerfile generators / container builders: - [openshift/source-to-image (s2i)](https://github.com/openshift/source-to-image) - [jupyter/repo2docker](https://github.com/jupyter/repo2docker) - [stencila/dockter](https://github.com/stencila/dockter) Discussions / threads (via Binderhub gitter) on: - [why `repo2docker` not `s2i`](http://words.yuvi.in/post/why-not-s2i/) - [why `dockter` not `repo2docker`](https://twitter.com/choldgraf/status/1058499607309647872) - [composability in `s2i`](https://trello.com/c/AexIVZNf/1008-8-composable-builds-builds-evg) Relates to things like: - https://github.com/simonw/datasette/pull/280",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/372/reactions"", ""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 2, ""rocket"": 0, ""eyes"": 0}",, 377155320,MDU6SXNzdWUzNzcxNTUzMjA=,370,Integration with JupyterLab,82988,open,0,,,4,2018-11-04T13:57:13Z,2022-09-29T08:17:47Z,,CONTRIBUTOR,,"I just watched a demo video for the [JupyterLab Chart Editor](https://www.crowdcast.io/e/introducing-JupyterLab-Chart-Editor/) which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab, and I think registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue). JupyterLab extensions can also open files from a dialogue as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe. This made me wonder about what `datasette` integration with JupyterLab might do. For example, by right-clicking on a CSV file (for which there is already a CSV table view) in the file browser, offer a *View / Run as datasette* file viewer option that will: - run the CSV file through `csvs-to-sqlite`; - launch the `datasette` server and display the `datasette` view in a JupyterLab panel. (? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all `datasette` viewed CSVs and runs on a single port?) 
As a freebie, the `datasette` API would allow you to run efficient SQL queries against the file eg using using `pandas.read_sql()` queries in a notebook in the same space. Related: - [JupyterLab extensions docs](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html) - a [cookiecutter for wrting JupyterLab extensions using Javascript](https://github.com/jupyterlab/extension-cookiecutter-js) - a [cookiecutter for writing JupyterLab extensions using Typescript](https://github.com/jupyterlab/extension-cookiecutter-ts) - tutorial: [Let’s Make an xkcd JupyterLab Extension](https://jupyterlab.readthedocs.io/en/stable/developer/xkcd_extension_tutorial.html)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/370/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 374953006,MDU6SXNzdWUzNzQ5NTMwMDY=,369,Interface should show same JSON shape options for custom SQL queries,416374,open,0,,3268330,2,2018-10-29T10:39:15Z,2020-05-30T17:24:06Z,,CONTRIBUTOR,,"At the moment the page returning a custom SQL query shows the JSON and CSV APIs, but not the multiple JSON shapes. However, adding the `_shape` parameter to the JSON API URL manually still works, so perhaps there should be consistency in the interface by having the same ""Advanced Export"" box for custom SQL queries.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/369/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 352768017,MDU6SXNzdWUzNTI3NjgwMTc=,362,Add option to include/exclude columns in search filters,78156,open,0,,,1,2018-08-22T01:32:08Z,2020-11-03T19:01:59Z,,NONE,,"I have a dataset with many columns, of which only some are likely to be of interest for searching. It would be great for usability if the search filters in the UI could be configured to include/exclude columns. See also: https://github.com/simonw/datasette/issues/292",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/362/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 348043884,MDU6SXNzdWUzNDgwNDM4ODQ=,357,Plugin hook for loading metadata.json,9599,open,0,,,6,2018-08-06T19:00:01Z,2020-06-21T22:19:58Z,,OWNER,,"For https://github.com/simonw/russian-ira-facebook-ads-datasette/tree/af6d956995e14afd585c35a6a06bb01da32043ba I wrote a script to convert YAML to JSON because YAML is a better format for embedding multi-line HTML descriptions and canned SQL statements. Example yaml metadata file: https://github.com/simonw/russian-ira-facebook-ads-datasette/blob/af6d956995e14afd585c35a6a06bb01da32043ba/russian-ads-metadata.yaml It would be useful if Datasette could be fed a YAML file directly: datasette -m metadata.yaml Question is... should this be a native feature (hence adding a YAML dependency) or should it be handled by a `datasette-metadata-yaml` plugin, using a new plugin hook for loading metadata? 
If so, what would other use-cases for that plugin hook be?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/357/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 346027040,MDU6SXNzdWUzNDYwMjcwNDA=,355,Table view should support filtering via many-to-many relationships,9599,open,0,,,10,2018-07-31T04:04:16Z,2019-05-23T06:04:03Z,,OWNER,,Parent: #354 ,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/355/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 346026869,MDU6SXNzdWUzNDYwMjY4Njk=,354,Handle many-to-many relationships,9599,open,0,,,0,2018-07-31T04:03:13Z,2020-11-24T19:51:18Z,,OWNER,,This is a master tracking ticket for various many-2-many features.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/354/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 344654623,MDU6SXNzdWUzNDQ2NTQ2MjM=,347,"Rename ""datasette package"" to ""datasette publish docker""",9599,open,0,,,0,2018-07-26T00:42:46Z,2018-07-26T00:42:46Z,,OWNER,,,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/347/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 341228846,MDU6SXNzdWUzNDEyMjg4NDY=,343,Render boolean fields better by default,45057,open,0,,,1,2018-07-14T11:10:29Z,2018-07-14T14:17:14Z,,CONTRIBUTOR,,These show up as 0 or 1 because sqlite. I think Yes/No would be fine in most cases?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/343/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 335200136,MDU6SXNzdWUzMzUyMDAxMzY=,327,Explore if SquashFS can be used to shrink size of packaged Docker containers,9599,open,0,,,4,2018-06-24T18:15:16Z,2022-02-17T23:37:24Z,,OWNER,,"Inspired by this article: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html#sqlite-database-indexed--squashed https://en.wikipedia.org/wiki/SquashFS is ""a compressed read-only file system for Linux"" - which means it could be a really nice fit for Datasette and its read-only SQLite databases. It would be interesting to explore a Dockerfile recipe that used SquashFS to compress the SQLite database file that was bundled up by `datasette package` and friends.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/327/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 330826972,MDU6SXNzdWUzMzA4MjY5NzI=,308,"Support extra Heroku apps:create options - region, space, team",78156,open,0,,,2,2018-06-08T23:08:33Z,2018-09-21T14:09:28Z,,NONE,,"It would be useful to document how to pass Heroku CLI options on `datasette publish`, e.g. 
`--region eu`.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/308/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 328155946,MDU6SXNzdWUzMjgxNTU5NDY=,301,"--spatialite option for ""datasette publish heroku""",9599,open,0,,,1,2018-05-31T14:13:09Z,2022-01-20T21:28:50Z,,OWNER,,Split off from #243. Need to figure out how to install and configure SpatiaLite on Heroku.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/301/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 327395270,MDU6SXNzdWUzMjczOTUyNzA=,296,Per-database and per-table /-/ URL namespace,9599,open,0,,,3,2018-05-29T16:23:13Z,2019-06-28T16:46:34Z,,OWNER,,"Initially this will be for subsets of `/-/inspect` and `/-/metadata` but it will also give us a URL namespace for future features like `/-/facet` (expanded list of a specific facet, linked to from `...`) and `/-/graph` To start: * `/dbname/-/inspect` * `/dbname/-/metadata` * `/dbname/tablename/-/inspect` * `/dbname/tablename/-/metadata` This means we will no longer allow databases or tables to have the name `""-""` - I think that's OK We will continue to support rows with a primary key of `""-""` at the following URL: * `/dbname/tablename/-`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/296/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 327365110,MDU6SXNzdWUzMjczNjUxMTA=,294,inspect should record column types,9599,open,0,,,7,2018-05-29T15:10:41Z,2019-06-28T16:45:28Z,,OWNER,,"For each table we want to know the columns, their order and what type they are. I'm going to break with SQLite defaults a little on this one and allow datasette to define additional types - to start with just a `geometry` type for columns that are detected as SpatiaLite geometries. Possible JSON design: ""columns"": [{ ""name"": ""title"", ""type"": ""text"" }, ...] Refs #276",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/294/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 326778161,MDU6SXNzdWUzMjY3NzgxNjE=,290,Consider increasing the default for num_sql_threads (currently 3),9599,open,0,,,0,2018-05-27T00:52:41Z,2018-05-27T00:52:41Z,,OWNER,,"I ran a very rough micro-benchmark on the new `num_sql_threads` config option (added in #285) datasette --config num_sql_threads:1 fivethirtyeight.db Then ab -n 100 -c 10 'http://127.0.0.1:8011/fivethirtyeight-2628db9/twitter-ratio%2Fsenators' | Number of threads | Requests/second | |---|---| | 1 | 4.57 | | 3 | 9.77 | | 10 | 13.53 | | 20 | 15.24 | 50 | 8.21 | This was on my early 2018 OS X laptop. Need to benchmark in other common environments before making a decision on changing the default. 
That said, the default of 3 was a number I plucked out of thin air.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/290/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 326599525,MDU6SXNzdWUzMjY1OTk1MjU=,286,Database hash should include current datasette version,9599,open,0,,,2,2018-05-25T17:03:42Z,2018-05-25T17:07:36Z,,OWNER,,"Right now deploying a new version of datasette doesn't invalidate existing URLs, so users may still see a cached copy of the old templates. We can fix this by including the current datasette version in the input to the hash function (which currently just the database file contents).",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/286/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323223872,MDU6SXNzdWUzMjMyMjM4NzI=,260,Validate metadata.json on startup,9599,open,0,,,7,2018-05-15T13:42:56Z,2023-06-21T12:51:22Z,,OWNER,,"It's easy to misspell the name of a database or table and then be puzzled when the metadata settings silently fail. To avoid this, let's sanity check the provided metadata.json on startup and quit with a useful error message if we find any obvious mistakes.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/260/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323718842,MDU6SXNzdWUzMjM3MTg4NDI=,268,Mechanism for ranking results from SQLite full-text search,9599,open,0,,,12,2018-05-16T17:36:40Z,2022-01-13T22:21:28Z,,OWNER,,This isn't particularly straight-forward - all the more reason for Datasette to implement it for you. This article is helpful: http://charlesleifer.com/blog/using-sqlite-full-text-search-with-python/,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/268/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 323658641,MDU6SXNzdWUzMjM2NTg2NDE=,262,Add ?_extra= mechanism for requesting extra properties in JSON,9599,open,0,,3268330,27,2018-05-16T14:55:42Z,2023-03-29T06:22:22Z,,OWNER,,"Datasette views currently work by creating a set of data that should be returned as JSON, then defining an additional, optional `template_data()` function which is called if the view is being rendered as HTML. This `template_data()` function calculates extra template context variables which are necessary for the HTML view but should not be included in the JSON. Example of how that is used today: https://github.com/simonw/datasette/blob/2b79f2bdeb1efa86e0756e741292d625f91cb93d/datasette/views/table.py#L672-L704 With features like Facets in #255 I'm beginning to want to move more items into the `template_data()` - in the case of facets it's the `suggested_facets` array. This saves that feature from being calculated (involving several SQL queries) for the JSON case where it is unlikely to be used. But... as an API user, I want to still optionally be able to access that information. Solution: Add a `?_extra=suggested_facets&_extra=table_metadata` argument which can be used to optionally request additional blocks to be added to the JSON API. 
Then redefine as many of the current `template_data()` features as extra arguments instead, and teach Datasette to return certain extras by default when rendering templates. This could allow the JSON representation to be slimmed down further (removing e.g. the `table_definition` and `view_definition` keys) while still making that information available to API users who need it.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/262/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 320132682,MDU6SXNzdWUzMjAxMzI2ODI=,250,Setup some issue templates,9599,open,0,,,0,2018-05-04T01:49:07Z,2018-05-04T01:49:07Z,,OWNER,,"https://twitter.com/left_pad/status/99216385740464537 I like the idea of using these to help people understand some of the ways I want to use issues.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/250/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 319449852,MDU6SXNzdWUzMTk0NDk4NTI=,247,SQLite code decoupled from Datasette,11912854,open,0,,,1,2018-05-02T08:03:28Z,2018-05-21T15:29:31Z,,NONE,,"I'm working on the possibility of use Datasette with other file formats that aren't SQLite, like files with [PyTables](https://github.com/PyTables/PyTables) format. In order to accomplish that, I've started [a fork for decoupling the code related with SQLite](https://github.com/jsancho-gpl/datasette/tree/feature/db-type-plugin) and putting it in an external connector to allow future connectors for a lot of file formats. It'd be nice if you could look at it and suggest improvements for a possible PR.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/247/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 318490133,MDU6SXNzdWUzMTg0OTAxMzM=,241,Default datasette logging format should be JSON,9599,open,0,,,0,2018-04-27T17:32:48Z,2018-07-10T17:45:40Z,,OWNER,,"Structured logs are better. Datasette should default to outputting it's HTTP access log lines as newline delimited JSON instead of the Sanic default format it uses at the moment. For improved greppability these logs should have keys ordered in a consistent way. Python's JSON module can do this with ordered dictionaries.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/241/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 317001500,MDU6SXNzdWUzMTcwMDE1MDA=,236,datasette publish lambda plugin,9599,open,0,,,11,2018-04-23T22:10:30Z,2023-03-12T14:04:15Z,,OWNER,,"Refs #217 - create a publish plugin that can deploy to AWS Lambda. https://docs.aws.amazon.com/lambda/latest/dg/limits.html says lambda packages can be up to 50 MB, so this would only work with smaller databases (the command can check the filesize before attempting to package and deploy it). 
Lambdas do get a 512 MB `/tmp` directory too, so for larger databases the function could start and then download up to 512MB from an S3 bucket - so the plugin could take an optional S3 bucket to write to and know how to upload the `.db` file there and then have the lambda download it on startup.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/236/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 316621102,MDU6SXNzdWUzMTY2MjExMDI=,235,Add limit on the size in KB of data returned from a single query,9599,open,0,,,2,2018-04-22T23:01:15Z,2018-04-24T00:30:02Z,,OWNER,,"Datasette limits the number of rows returned to 1,000 and limits the time spent executing a SQL query to 1000ms - and both of these limits can be customized. It does not have a limit on the size of the response returned. It's possible to compose maliciously large SQL responses in a small number of rows using mechanisms like the `group_concat()` aggregate function. It would be good to avoid malicious SQL creating 100MB+ responses and potentially crashing the server. I think the easiest place to implement that is here: https://github.com/simonw/datasette/blob/f3f42957128c1e7ece584d45d9167f2ac003a3b8/datasette/app.py#L175-L190 Currently we use `cursor.fetchmany()` to fetch up to 1,001 rows at once. Instead, we could switch to iterating through `cursor.fetchone()` (or just using `for row in cursor`) and keeping a running tally of the size of the response as we go - maybe just using `rough_response_size += len(str(row))`. If that goes above a certain threshold we can terminate the response with an error, like we do with timelimits. The bigger challenge here is understanding how well this approach works and what impact it will have on overall Datasette performance. I think I need #33 for this.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/235/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 314834783,MDU6SXNzdWUzMTQ4MzQ3ODM=,219,Expose units in the JSON API?,45057,open,0,,,0,2018-04-16T22:04:25Z,2018-04-16T22:04:25Z,,CONTRIBUTOR,,"From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map. I'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). 
Perhaps the SQL function is a better approach in general.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/219/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 314771615,MDU6SXNzdWUzMTQ3NzE2MTU=,218,"Support custom unit display in order to handle ""$10,000""",9599,open,0,,,0,2018-04-16T18:39:31Z,2018-07-10T17:45:38Z,,OWNER,,"I tried to get Datasette to display `$10,000` using the new units support but we currently only display units as a suffix: https://github.com/simonw/datasette/blob/10a34f995c70daa37a8a2aa02c3135a4b023a24c/datasette/app.py#L563-L572 It would be neat if there was a mechanism for specifying a custom unit display - maybe something like this: ``` { ""custom_units"": { ""us_dollar"": { ""unit"": ""us_dollar = [] = $"", ""format"": ""${:,}"" } } } ```",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/218/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 312396095,MDU6SXNzdWUzMTIzOTYwOTU=,198,Ability to sort with nulls last,9599,open,0,,,0,2018-04-09T05:15:40Z,2018-07-10T17:45:37Z,,OWNER,,"Split off from #189 Here's how to do that in SQL: https://fivethirtyeight.datasettes.com/fivethirtyeight-2628db9?sql=select+rowid%2C+*+from+%5Bnfl-wide-receivers%2Fadvanced-historical%5D%0D%0Aorder+by+case+when+career_ranypa+is+null+then+1+else+0+end%2C+career_ranypa%2C+rowid order by case when career_ranypa is null then 1 else 0 end, career_ranypa",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/198/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 312395790,MDU6SXNzdWUzMTIzOTU3OTA=,197,Ability to sort by more than one column,9599,open,0,,,0,2018-04-09T05:13:30Z,2018-07-10T17:45:37Z,,OWNER,,"Split off from #189. I'd like to support ""sort by X descending, then by Y ascending if there are dupes for X"" as well. Suggested syntax for that: ?_sort_desc=X&_sort=Y we currently only allow one argument to be sent. We should allow as many arguments as there are columns, for example: ?_sort=department&_sort_desc=precinct&_sort=age&_sort_desc=size",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/197/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 309047460,MDU6SXNzdWUzMDkwNDc0NjA=,188,Ability to bundle metadata and templates inside the SQLite file,9599,open,0,,,4,2018-03-27T16:42:07Z,2020-12-04T17:18:34Z,,OWNER,,"One of the nicest qualities of SQLite as a data format is that you get a single file which you can then backup or share with other people. Datasette breaks this a little once you start including custom metadata.json or template files and CSS. It would be cool if there was an optional mechanism for baking that extra configuration into the SQLite file itself. That way entire datasette mini-applications (including canned queries and custom HTML and CSS) could be constructed as single .db files. Since datasette configuration is all file-based, one way to achieve that would be to support a ""datasette_files"" table which, if present is used to search for file contents by path. 
This is inline with the philosophy described by https://www.sqlite.org/appfileformat.html ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/188/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 299760684,MDU6SXNzdWUyOTk3NjA2ODQ=,185,Metadata should be a nested arbitrary KV store,222245,open,0,,,12,2018-02-23T16:02:07Z,2019-05-13T18:33:33Z,,NONE,,"I started using the metadata feature and was surprised to find that values are not inherited from the root object down to specific databases and tables. This makes metadata much less useful and requires a lot of pointless duplication. Ideally, metadata should allow arbitrary key-value pairs, and there should be a way of accessing metadata either in an inherited or non-inherited manner. Something like `metadata.page.key` vs. `metadata.this.key` might work as an interface.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/185/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 288438570,MDU6SXNzdWUyODg0Mzg1NzA=,179,More metadata options for template authors ,9599,open,0,,,2,2018-01-14T20:51:04Z,2019-05-13T18:33:33Z,,OWNER,,See this thread on Twitter: https://twitter.com/simonw/status/952637152797458432,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/179/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 285168503,MDU6SXNzdWUyODUxNjg1MDM=,176,Add GraphQL endpoint,173848,open,0,,,8,2017-12-29T23:21:01Z,2020-04-21T14:16:24Z,,NONE,,Would make it much easier to build React & similar frontends. 
Maybe with https://github.com/graphql-python/sanic-graphql ?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/176/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 281110295,MDU6SXNzdWUyODExMTAyOTU=,173,I18n and L10n support,50138,open,0,,,2,2017-12-11T17:49:58Z,2021-04-26T12:10:01Z,,NONE,,It would be less geeky and more user friendly if the display strings in the filter menu and possibly other parts could be localized.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/173/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275159710,MDU6SXNzdWUyNzUxNTk3MTA=,128,"Every visualization should have an ""embed"" button",9599,open,0,,,0,2017-11-19T13:38:13Z,2019-05-13T18:33:51Z,,OWNER,,"At least for the first round of visualizations, any time you construct one using the UI the result should include an ""embed this"" button that returns source code to copy and paste These examples should use unpkg.com (or similarl) urls with SRI hashes, eg https://www.srihash.org - and should load data from the datasette JSON API.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/128/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275125561,MDU6SXNzdWUyNzUxMjU1NjE=,123,Datasette serve should accept paths/URLs to CSVs and other file formats,9599,open,0,,,9,2017-11-19T02:05:48Z,2021-07-19T00:04:32Z,,OWNER,,"This would remove the csvs-to-sqlite step which I end up using for almost everything. I'm hesitant to introduce pandas as a required dependency though since it require compiling numpy. Could build it so this option is only available if you have pandas installed.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/123/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",, 275755475,MDU6SXNzdWUyNzU3NTU0NzU=,140,Heatmap visualization plugin,9599,open,0,,,2,2017-11-21T15:34:23Z,2019-05-13T18:33:51Z,,OWNER,,Could use https://github.com/scottbedard/svelte-heatmap,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/140/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 275415799,MDU6SXNzdWUyNzU0MTU3OTk=,137,Ability to combine multiple SQL queries on a single graph,9599,open,0,,,1,2017-11-20T16:26:57Z,2019-05-13T18:33:51Z,,OWNER,,This would make visualizations significantly more powerful. The interesting challenge will be around the URL design. It would be useful to be able to combine either multiple explicit SQL queries or multiple queries based on the filter string parameters passed to one or more table views.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/137/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 274615452,MDU6SXNzdWUyNzQ2MTU0NTI=,111,Add “updated” to metadata,9599,open,0,,,12,2017-11-16T18:22:20Z,2021-09-21T22:48:27Z,,OWNER,,"To give an indication as to when the data was last updated. 
This should be a field in the metadata that is then shown on the index page and in the footer, if it is set. Also support setting it using an option to “datasette publish” and “datasette package” - which can either be a string or can be the magic string “today” to set it to today’s date: datasette publish file.db --updated=today",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/111/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 268110769,MDU6SXNzdWUyNjgxMTA3Njk=,33,Use locust for benchmarking and load tests,9599,open,0,,,0,2017-10-24T17:00:09Z,2017-12-10T03:12:16Z,,OWNER,,"https://github.com/locustio/locust Needed for #32 ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/33/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 267515678,MDU6SXNzdWUyNjc1MTU2Nzg=,3,"Make individual column valuables addressable, with smart content types",9599,open,0,,,1,2017-10-23T01:11:32Z,2017-12-10T03:11:58Z,,OWNER,,"Some SQLite databases embed images in columns. It would be cool if these had URLs. /database-name-7sha256/table-name/compound-pk/column /database-name-7sha256/table-name/compound-pk/column.json /database-name-7sha256/table-name/compound-pk/column.png /database-name-7sha256/table-name/compound-pk/column.gif /database-name-7sha256/table-name/compound-pk/column.txt The one without an explicit file extension auto-detects the correct extension.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/3/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 991191951,MDU6SXNzdWU5OTExOTE5NTE=,1464,clean checkout & clean environment has test failures,51016,open,0,,,6,2021-09-08T14:16:23Z,2021-09-13T22:17:17Z,,CONTRIBUTOR,,"I followed the instructions [here](https://docs.datasette.io/en/stable/contributing.html#setting-up-a-development-environment), and even after running `python update-docs-help.py` I get the following failed tests -- any thoughts? ``` FAILED tests/test_api.py::test_searchable[/fixtures/searchable.json?_search=te*+AND+do*&_searchmode=raw-expected_rows3] FAILED tests/test_api.py::test_searchmode[table_metadata1-_search=te*+AND+do*-expected_rows1] FAILED tests/test_api.py::test_searchmode[table_metadata2-_search=te*+AND+do*&_searchmode=raw-expected_rows2] ``` This is with python 3.9.7 and lots of other packages, as in attached environment listing from `conda list`. [conda-installed.txt](https://github.com/simonw/datasette/files/7129487/conda-installed.txt) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1464/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 990367646,MDU6SXNzdWU5OTAzNjc2NDY=,1462,"Separate out ""debug"" options from ""root"" options",9599,open,0,,,1,2021-09-07T21:27:34Z,2021-09-07T21:34:33Z,,OWNER,,"> I ditched ""root"" for ""admin"" because root by default gives you a whole bunch of stuff which I think could be confusing: > > > > Maybe the real problem here is that I'm conflating ""root"" permissions with ""debug"" options. Perhaps there should be an extra Datasette mode that unlocks debug tools for the root user? 
_Originally posted by @simonw in https://github.com/simonw/datasette-app-support/issues/8#issuecomment-914638998_",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1462/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 989109888,MDU6SXNzdWU5ODkxMDk4ODg=,1460,Override column metadata with metadata from another column,72577720,open,0,,,0,2021-09-06T12:13:33Z,2021-09-06T12:13:33Z,,CONTRIBUTOR,,"I have a table from the PUDL project (https://github.com/catalyst-cooperative/pudl) that looks like this: ``` CREATE TABLE fuel_ferc1 ( id INTEGER NOT NULL, record_id TEXT, utility_id_ferc1 INTEGER, report_year INTEGER, plant_name_ferc1 TEXT, fuel_type_code_pudl VARCHAR(7), fuel_unit VARCHAR(7), fuel_qty_burned FLOAT, fuel_mmbtu_per_unit FLOAT, fuel_cost_per_unit_burned FLOAT, fuel_cost_per_unit_delivered FLOAT, fuel_cost_per_mmbtu FLOAT, PRIMARY KEY (id), FOREIGN KEY(plant_name_ferc1, utility_id_ferc1) REFERENCES plants_ferc1 (plant_name_ferc1, utility_id_ferc1), CONSTRAINT fuel_ferc1_fuel_type_code_pudl_enum CHECK (fuel_type_code_pudl IN ('coal', 'oil', 'gas', 'solar', 'wind', 'hydro', 'nuclear', 'waste', 'unknown')), CONSTRAINT fuel_ferc1_fuel_unit_enum CHECK (fuel_unit IN ('ton', 'mcf', 'bbl', 'gal', 'kgal', 'gramsU', 'kgU', 'klbs', 'btu', 'mmbtu', 'mwdth', 'mwhth', 'unknown')) ); ``` Note that `fuel_unit` is a unit that **pint** can understand, and that `fuel_qty_burned` is a column of data that could be expressed in terms of actual units, not merely as a dimensionless number. Ditto the `fuel_cost_per_unit_...` columns. Is there a way to give a column a default metadata unit (such as *tons* or *USD/ton*) and then let that be overridden when the metadata in another column says *barrels* or *USD/gramsU*? @catalyst-cooperative",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1460/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 988556488,MDU6SXNzdWU5ODg1NTY0ODg=,1459,suggestion: allow `datasette --open` to take a relative URL,51016,open,0,,,1,2021-09-05T17:17:07Z,2021-09-05T19:59:15Z,,CONTRIBUTOR,,"(soft suggestion because I'm not sure I'm using datasette right yet) Over at https://github.com/ctb/2021-sourmash-datasette, I'm playing around with datasette, and I'm creating some static pages to send people to the right facets. There may well be better ways of achieving this end goal, and I will find out if so, I'm sure! But regardless I think it might be neat to support an option to allow `-o/--open` to take a relative URL, that then gets appended to the hostname and port. This would let me improve my documentation. I don't see any downsides, either, but 🤷 there may well be some :) Happy to dig in and provide a PR if it's of interest. I'm not sure off the top of my head how to support an optional value to a parameter in argparse - the current `-o` behavior is kinda nice so it'd be suboptimal to require a url for `-o`. Maybe `--open-url=` or something would work? 
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1459/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 988552851,MDU6SXNzdWU5ODg1NTI4NTE=,1456,conda install results in non-functioning `datasette serve` due to out-of-date asgiref,51016,open,0,,,0,2021-09-05T16:59:55Z,2021-09-05T16:59:55Z,,CONTRIBUTOR,,"Over in https://github.com/ctb/2021-sourmash-datasette, I discovered that the following commands fail: ``` conda create -n datasette4 -y datasette=0.58.1 conda activate datasette4 datasette gathertax.db ``` with `ImportError: cannot import name 'WebSocketScope' from 'asgiref.typing'`. This appears to be because asgiref 3.3.4 doesn't have WebSocketScope, but later versions do - a simple ``` pip install asgiref==3.4.1 ``` fixes the problem for me, at least to the point where I can run datasette and poke around as usual. I note that over in the conda-forge recipe, https://github.com/conda-forge/datasette-feedstock/blob/master/recipe/meta.yaml pins asgiref to < 3.4.0, but I'm not sure why - so I'm not sure how to best resolve this issue :).",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1456/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 986829194,MDU6SXNzdWU5ODY4MjkxOTQ=,14,xml.etree.ElementTree.Parse Error - mismatched tag,46968,open,0,,,1,2021-09-02T14:46:36Z,2021-09-02T14:53:11Z,,NONE,,"This is an error message I get upon parsing the enex file of my Inbox. Please find the full error message below. Any hints welcome. ``` Importing from ENEX [##################------------------] 50% 00:00:50 Traceback (most recent call last): File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/bin/evernote-to-sqlite"", line 8, in sys.exit(cli()) File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/lib/python3.9/site-packages/click/core.py"", line 1137, in __call__ return self.main(*args, **kwargs) File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/lib/python3.9/site-packages/click/core.py"", line 1062, in main rv = self.invoke(ctx) File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/lib/python3.9/site-packages/click/core.py"", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/lib/python3.9/site-packages/click/core.py"", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/lib/python3.9/site-packages/click/core.py"", line 763, in invoke return __callback(*args, **kwargs) File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/lib/python3.9/site-packages/evernote_to_sqlite/cli.py"", line 30, in enex for tag, note in find_all_tags(fp, [""note""], progress_callback=bar.update): File ""/Users/utopist/.virtualenvs/evernote-to-sqlite-Og2PIW3Y/lib/python3.9/site-packages/evernote_to_sqlite/utils.py"", line 17, in find_all_tags for event, el in parser.read_events(): File ""/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/xml/etree/ElementTree.py"", line 1329, in read_events raise event File ""/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/xml/etree/ElementTree.py"", line 1301, in feed self._parser.feed(data) xml.etree.ElementTree.ParseError: mismatched 
tag: line 6837961, column 2 ``` ",303218369,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/14/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 983221851,MDU6SXNzdWU5ODMyMjE4NTE=,34,Data folder as index command parameter,1223625,open,0,,,0,2021-08-30T21:29:33Z,2021-08-30T21:29:33Z,,NONE,,"Hi, First of all, thank you for this wonderful project :smile: I started to use dogsheep to make my personal data searchable, and by using the project I noticed an issue with the index command. It always expects you are running it from the root folder from where the data is located, so I got some errors while trying to make it work on my setup. I separate all databases inside a `data` folder (I published my setup to be easier to follow: https://github.com/humrochagf/my-dogsheep) Before, I configured `dogsheep.yml` to add the data folder to its path like this: ```yml data/twitter.db: tweets: sql: |- ... ``` And running the index command like this: ``` dogsheep-beta index data/dogsheep.db dogsheep.yml ``` It worked to the normal search feature with no problem this way, but when I started adding `display_sql` rules the app started to crash, because at datasette `get_database` it was looking for `data/twitter` and it only had a db called `twitter` there. So my workaround to that was to cd into the data folder and run the indexer. You can check the way I'm doing it at this line of the makefile: https://github.com/humrochagf/my-dogsheep/blob/main/makefile#L3 It works but it would be nice to have an option to pass the path where the data is located to the index function.",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/34/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,