issues
555 rows where repo = 107914493 and state = "open" sorted by user descending
id | node_id | number | title | user ▲ | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1715468032 | PR_kwDOBm6k_c5QzEAM | 2076 | Datsette gpt plugin | StudioCordillera 130708713 | open | 0 | 0 | 2023-05-18T11:22:30Z | 2023-05-18T11:22:45Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2076 | :books: Documentation preview :books:: https://datasette--2076.org.readthedocs.build/en/2076/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
947596222 | MDExOlB1bGxSZXF1ZXN0NjkyNTU3Mzgx | 1399 | Multiple sort | jgryko5 87192257 | open | 0 | 0 | 2021-07-19T12:20:14Z | 2021-07-19T12:20:14Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1399 | Closes #197. I have added support for sorting by multiple parameters as mentioned in the issue above, and together with that, a suggestion on how to implement such sorting in the user interface. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
855446829 | MDExOlB1bGxSZXF1ZXN0NjEzMTc4OTY4 | 1296 | Dockerfile: use Ubuntu 20.10 as base | tmcl-it 82332573 | open | 0 | 4 | 2021-04-12T00:23:32Z | 2021-07-20T08:52:13Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1296 | This PR changes the main Dockerfile to use ubuntu:20.10 as base image instead of python:3.9.2-slim-buster (itself based on debian:buster-slim). The Dockerfile is essentially the one from https://github.com/simonw/datasette/issues/1249#issuecomment-803698983 with some additional cleanups to slim it down. This fixes a couple of issues: 1. The SQLite version in Debian Buster (2.6.0) doesn't support generated columns 2. Installing SpatiaLite from the Debian sid repositories has the side effect of also installing updates to libc and libstdc++ from sid. As a bonus, the Docker image becomes smaller:
Reproduction of the first issue:
```
$ curl -O https://latest.datasette.io/fixtures.db
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  260k    0  260k    0     0   489k      0 --:--:-- --:--:-- --:--:--  489k
$ docker run -v
```
Here is the SQLite version:
Reproduction of the second issue
Both libc and libstdc++ are backwards compatible, so the image still works, but it will result in a combination of libraries and Python versions that exists only in the Datasette image, so it's likely untested. In addition, since Debian sid is an always-changing rolling-release, the versions of libc, libstdc++, Spatialite, and their dependencies change frequently, so the library versions in the Datasette image will depend on the day when it was built. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1296/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1605481359 | PR_kwDOBm6k_c5LDwrF | 2031 | Expand foreign key references in row view as well | tmcl-it 82332573 | open | 0 | 5 | 2023-03-01T18:43:09Z | 2023-03-24T18:35:25Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2031 | Unlike the table view, the single row view does not resolve foreign key references into labels. This patch extracts the foreign key reference expansion code from TableView.data() into a standalone function that is then called by both TableView.data() and RowView.data(). :books: Documentation preview :books:: https://datasette--2031.org.readthedocs.build/en/2031/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1639873822 | PR_kwDOBm6k_c5M29tt | 2044 | Expand labels in row view as well (patch for 0.64.x branch) | tmcl-it 82332573 | open | 0 | 0 | 2023-03-24T18:44:44Z | 2023-03-24T18:44:57Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2044 | This is a version of #2031 for the 0.64.x branch. :books: Documentation preview :books:: https://datasette--2044.org.readthedocs.build/en/2044/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1010112818 | I_kwDOBm6k_c48NRky | 1479 | Win32 "used by another process" error with datasette publish | kirajano 76450761 | open | 0 | 7 | 2021-09-28T19:12:00Z | 2023-09-07T02:14:16Z | NONE | Unfortunately I was not able to deploy to fly.io. Please see the details of the three scenarios that I tried. I am also new to Datasette. The deploy failed; logs attached:
1. Tried with an app created via Error error connecting to docker: An unknown error occured. Traceback (most recent call last): File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\grott\Anaconda3\Scripts\datasette.exe__main__.py", line 7, in <module> File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in call return self.main(args, kwargs) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main rv = self.invoke(ctx) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke return ctx.invoke(self.callback, ctx.params) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke return callback(args, **kwargs) File "c:\users\grott\anaconda3\lib\site-packages\datasette_publish_fly__init__.py", line 156, in fly "--remote-only", File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in exit next(self.gen) File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils__init__.py", line 451, in temporary_docker_directory tmp.cleanup() File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup _shutil.rmtree(self.name) File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree return _rmtree_unsafe(path, onerror) File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe os.rmdir(path) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpgcm8cz66\frosty-fog-8565' ```
Error not possible to validate configuration: server returned Post "https://api.fly.io/graphql": unexpected EOF Traceback (most recent call last):
File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main These are also the contents of the generated .toml file in 2 scenario: ``` fly.toml file generated for dark-feather-168 on 2021-09-28T20:35:44+02:00app = "dark-feather-168" kill_signal = "SIGINT" kill_timeout = 5 processes = [] [env] [experimental] allowed_public_ports = [] auto_rollback = true [[services]] http_checks = [] internal_port = 8080 processes = ["app"] protocol = "tcp" script_checks = [] [services.concurrency] hard_limit = 25 soft_limit = 20 type = "connections" [[services.ports]] handlers = ["http"] port = 80 [[services.ports]] handlers = ["tls", "http"] port = 443 [[services.tcp_checks]] grace_period = "1s" interval = "15s" restart_limit = 6 timeout = "2s" ```
```[+] Building 147.3s (11/11) FINISHED => [internal] load build definition from Dockerfile 0.2s => => transferring dockerfile: 396B 0.0s => [internal] load .dockerignore 0.1s => => transferring context: 2B 0.0s => [internal] load metadata for docker.io/library/python:3.8 4.7s => [auth] library/python:pull token for registry-1.docker.io 0.0s => [internal] load build context 0.1s => => transferring context: 82.37kB 0.0s => [1/5] FROM docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3 108.3s => => resolve docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5 0.0s => => sha256:56182bcdf4d4283aa1f46944b4ef7ac881e28b4d5526720a4e9ba03a4730846a 2.22kB / 2.22kB 0.0s => => sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 54.93MB / 54.93MB 21.6s => => sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 10.87MB / 10.87MB 15.5s => => sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5c5b6 1.86kB / 1.86kB 0.0s => => sha256:ff08f08727e50193dcf499afc30594c47e70cc96f6fcfd1a01240524624264d0 8.65kB / 8.65kB 0.0s => => sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 5.15MB / 5.15MB 6.4s => => sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 54.57MB / 54.57MB 37.7s => => sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 196.45MB / 196.45MB 82.3s => => sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 6.29MB / 6.29MB 26.6s => => extracting sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 5.4s => => sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 16.81MB / 16.81MB 40.5s => => extracting sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 0.6s => => extracting sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 0.6s => => sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 232B / 232B 40.1s => => extracting sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 5.8s => => sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 2.35MB / 2.35MB 41.8s => => extracting sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 18.8s => => extracting sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 1.2s => => extracting sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 2.9s => => extracting sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 0.0s => => extracting sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 0.8s => [2/5] COPY . 
/app 2.3s => [3/5] WORKDIR /app 0.2s => [4/5] RUN pip install -U datasette 26.9s => [5/5] RUN datasette inspect covid.db --inspect-file inspect-data.json 3.1s => exporting to image 1.2s => => exporting layers 1.2s => => writing image sha256:b5db0c205cd3454c21fbb00ecf6043f261540bcf91c2dfc36d418f1a23a75d7a 0.0s Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them Traceback (most recent call last): "main", mod_spec) File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\grott\Anaconda3\Scripts\datasette.exe__main__.py", line 7, in <module> File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in call return self.main(args, kwargs) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main rv = self.invoke(ctx) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke return ctx.invoke(self.callback, ctx.params) File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke return callback(args, **kwargs) File "c:\users\grott\anaconda3\lib\site-packages\datasette\cli.py", line 283, in package call(args) File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in exit next(self.gen) File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils__init__.py", line 451, in temporary_docker_directory tmp.cleanup() File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup _shutil.rmtree(self.name) File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree return _rmtree_unsafe(path, onerror) File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe os.rmdir(path) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpkb27qid3\datasette'``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1479/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
989109888 | MDU6SXNzdWU5ODkxMDk4ODg= | 1460 | Override column metadata with metadata from another column | MichaelTiemannOSC 72577720 | open | 0 | 0 | 2021-09-06T12:13:33Z | 2021-09-06T12:13:33Z | CONTRIBUTOR | I have a table from the PUDL project (https://github.com/catalyst-cooperative/pudl) that looks like this:
Note that @catalyst-cooperative |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1091257796 | I_kwDOBm6k_c5BC0XE | 1584 | give error with recursive sql | tunguyenatwork 58088336 | open | 0 | 0 | 2021-12-30T18:53:16Z | 2021-12-30T18:53:16Z | NONE | I got an error "near "WITH": syntax error" after I upgraded to version 0.59 from 0.52.4. The error is related to recursive SQL. It worked fine on the previous version but fails after the upgrade. Below is an example of the SQL: WITH RECURSIVE manager_of(position, super_position) AS (SELECT position, case ifnull(INDIRECT_SUPER_POSITION,'') when '' then super_position else INDIRECT_SUPER_POSITION end as SUPER_POSITION FROM position where super_position<>'SGV000000001' and super_position!='' and position <> super_position),chain_manager_of_position(position, level) AS (SELECT super_position, 1 as level FROM manager_of WHERE super_position!='' and (position=:pos or position in (Select position from employee where employee=:ein)) UNION ALL SELECT super_position, level+1 as level FROM manager_of JOIN chain_manager_of_position USING(position)) SELECT * FROM chain_manager_of_position left join employee using(position) where employee is not NULL order by level limit 1 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1148725876 | I_kwDOBm6k_c5EeCp0 | 1640 | Support static assets where file length may change, e.g. logs | broccolihighkicks 57859326 | open | 0 | 2 | 2022-02-24T00:34:42Z | 2022-03-05T01:19:25Z | NONE | This is a bit of an oxymoron. I am serving a log.txt file for a background process using the Datasette --static CLI. This is useful as I can observe a background process from the web UI to see any errors that occur (instead of spelunking the logs via docker exec/ssh etc). I get this error, which I think is because Datasette assumes that the size of the content does not change (but appending new log lines means the content length changes).
Thanks, I am finding Datasette very useful. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1865983069 | PR_kwDOBm6k_c5YvQSi | 2158 | add brand option to metadata.json. | publicmatt 52261150 | open | 0 | 0 | 2023-08-24T22:37:41Z | 2023-08-24T22:37:57Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2158 | This adds a brand link to the top navbar if the 'brand' key is populated in metadata.json (see the sketch after this record). The link's href will be either '#' or the contents of 'brand_url' in metadata.json. I was able to get this done on my own site by replacing :books: Documentation preview :books:: https://datasette--2158.org.readthedocs.build/en/2158/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
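For the PR above (#2158): the `brand` and `brand_url` keys are what the PR proposes, not existing Datasette metadata keys. A minimal sketch of how the proposed configuration might be supplied, with placeholder file and brand names:

```bash
# Hypothetical sketch based on the PR description: "brand" and "brand_url" are the keys
# proposed by #2158, not part of released Datasette metadata. File names are placeholders.
cat > metadata.json <<'EOF'
{
  "brand": "My Data Portal",
  "brand_url": "https://example.com/"
}
EOF
datasette mydb.db --metadata metadata.json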
476852861 | MDU6SXNzdWU0NzY4NTI4NjE= | 568 | Add database_color as a configurable option | LBHELewis 50906992 | open | 0 | 1 | 2019-08-05T13:14:45Z | 2023-08-11T05:19:42Z | NONE | This would be really useful as it would allow us to tie in with colour schemes. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
459882902 | MDU6SXNzdWU0NTk4ODI5MDI= | 526 | Stream all results for arbitrary SQL and canned queries | matej-fr 50578294 | open | 0 | 23 | 2019-06-24T13:09:45Z | 2022-09-28T04:01:25Z | NONE | I think that there is a difficulty with canned queries. When I want to stream all results of a canned query TwoDays I get only the first 1,000 records. Example:
returns only the first 1,000 records. If I do the same with the whole database, i.e.
I get all records correctly. Any ideas? (An illustrative example follows this record.) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
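To illustrate the contrast described in #526 above: CSV export of a full table can stream every row, while a canned query export is capped at the first 1,000 rows (Datasette's default `max_returned_rows`). A hedged sketch with hypothetical host, database, table and query names:

```bash
# Hypothetical names: mydb, events, TwoDays. The table export streams all rows;
# the canned-query export returns at most max_returned_rows (1,000 by default).
curl -s 'https://example.com/mydb/events.csv?_stream=on&_size=max' > all_rows.csv
curl -s 'https://example.com/mydb/TwoDays.csv' > first_1000_rows_only.csv
wc -l all_rows.csv first_1000_rows_only.csv
```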
1983600865 | PR_kwDOBm6k_c5e7WH7 | 2206 | Bump the python-packages group with 1 update | dependabot[bot] 49699333 | open | 0 | 1 | 2023-11-08T13:18:56Z | 2023-12-08T13:46:24Z | CONTRIBUTOR | simonw/datasette/pulls/2206 | Bumps the python-packages group with 1 update: black. Release notes: sourced from black's releases.
... (truncated) Changelog: sourced from black's changelog.
... (truncated) Commits:
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting Dependabot commands and optionsYou can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions :books: Documentation preview :books:: https://datasette--2206.org.readthedocs.build/en/2206/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
550293770 | MDU6SXNzdWU1NTAyOTM3NzA= | 658 | How do I use the app.css as style sheet? | null92 49656826 | open | 0 | 2 | 2020-01-15T16:27:57Z | 2020-02-07T00:29:50Z | NONE | Simon, I'm trying to use the app.css (in the static folder) as a style sheet but the Datasette on Heroku simply ignores it! I read everything about customization here and on readthedocs but still can't get it to work. Is this possible? Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/658/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
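One approach that works in current Datasette for the situation in #658 above: serve the stylesheet with `--static` and reference it through `extra_css_urls` in metadata.json (both are documented Datasette features). The mount point, directory and database names here are illustrative:

```bash
# Illustrative names: "assets" mount point, static/ directory and mydb.db are placeholders.
mkdir -p static && cp app.css static/
cat > metadata.json <<'EOF'
{ "extra_css_urls": ["/assets/app.css"] }
EOF
datasette mydb.db --metadata metadata.json --static assets:static/
```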
735852274 | MDU6SXNzdWU3MzU4NTIyNzQ= | 1082 | DigitalOcean buildpack memory errors for large sqlite db? | justmars 39538958 | open | 0 | 3 | 2020-11-04T06:35:32Z | 2020-11-04T19:35:44Z | NONE |
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1690765434 | I_kwDOBm6k_c5kxwh6 | 2067 | Litestream-restored db: errors on 3.11 and 3.10.8; but works on py3.10.7 and 3.10.6 | justmars 39538958 | open | 0 | 1 | 2023-05-01T12:42:28Z | 2023-05-03T00:16:03Z | NONE | Hi! Wondering if this issue is limited to my local system or if it affects others as well. It seems like 3.11 errors out on a "litestream-restored" database. On further investigation, it also appears to conk out on 3.10.8 but works on 3.10.7 and 3.10.6. To demo issue I created a test database, replicated it to an aws s3 bucket, then restored the same under various .pyenv-versioned shells where I test whether I can read the database via the sqlite3 cli. ```sh create new shell with 3.11.3litestream restore -o data/db.sqlite s3://mytestbucketxx/db sqlite3 data/db.sqlite SQLite version 3.41.2 2023-03-22 11:56:21Enter ".help" for usage hints.sqlite> .tables_litestream_lock _litestream_seq moviesqlite>``` However this get me an Error on 3.11.3 and 3.10.8```sh datasette data/db.sqlite ``` ```console /tester/.venv/lib/python3.11/site-packages/pkg_resources/__init__.py:121: DeprecationWarning: pkg_resources is deprecated as an API warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning) Traceback (most recent call last): File "/tester/.venv/bin/datasette", line 8, in <module> sys.exit(cli()) ^^^^^ File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) ^^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/datasette/cli.py", line 143, in wrapped return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/datasette/cli.py", line 615, in serve asyncio.get_event_loop().run_until_complete(check_databases(ds)) File "/Users/mv/.pyenv/versions/3.11.3/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/datasette/cli.py", line 660, in check_databases await database.execute_fn(check_connection) File "/tester/.venv/lib/python3.11/site-packages/datasette/database.py", line 213, in execute_fn return await asyncio.get_event_loop().run_in_executor( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mv/.pyenv/versions/3.11.3/lib/python3.11/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/datasette/database.py", line 211, in in_thread return fn(conn) ^^^^^^^^ File "/tester/.venv/lib/python3.11/site-packages/datasette/utils/__init__.py", line 951, in check_connection for r in conn.execute( ^^^^^^^^^^^^^ sqlite3.OperationalError: unable to open database file ```Works on 3.10.7, 3.10.6```sh # create new shell with 3.10.7 / 3.10.6 litestream restore -o data/db.sqlite s3://mytestbucketxx/db datasette data/db.sqlite # ... 
# INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) ```In both scenarios, the only dependencies were the pinned python version and the latest Datasette version 0.64. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1698865182 | I_kwDOBm6k_c5lQqAe | 2069 | [BUG] Cannot insert new data to deployed instance | yqlbu 31861128 | open | 0 | 1 | 2023-05-07T02:59:42Z | 2023-05-07T03:17:35Z | NONE | Summary: Recently, I deployed an instance of Datasette to Vercel with the following plugins:
With the above plugins, I was able to insert new data into the local SQLite db. However, when it comes to the deployment on Vercel, things behave differently. I observed some errors in the logs console on Vercel:
I think it is a potential bug. Reproducemetadata.json```json { "plugins": { "datasette-insert": { "allow": { "id": "*" } }, "datasette-auth-tokens": { "tokens": [ { "token": { "$env": "INSERT_TOKEN" }, "actor": { "id": "repeater" } } ], "param": "_auth_token" } } } ``` commands```bash # deploy datasette publish vercel remote.db \ --project=repeater-bot-sqlite \ --metadata metadata.json \ --install datasette-auth-tokens \ --install datasette-insert \ --vercel-json=vercel.json # test insert cat fixtures/dogs.json | curl --request POST -d @- -H "Authorization: Bearer <token>" \ 'https://repeater-bot-sqlite.vercel.app/-/insert/remote/dogs?pk=id' ``` logs```console Traceback (most recent call last): File "/var/task/datasette/app.py", line 1354, in route_path response = await view(request, send) File "/var/task/datasette/app.py", line 1500, in async_view_fn response = await async_call_with_supported_arguments( File "/var/task/datasette/utils/__init__.py", line 1005, in async_call_with_supported_arguments return await fn(*call_with) File "/var/task/datasette_insert/__init__.py", line 14, in insert_or_upsert response = await insert_or_upsert_implementation(request, datasette) File "/var/task/datasette_insert/__init__.py", line 91, in insert_or_upsert_implementation table_count = await db.execute_write_fn(write_in_thread, block=True) File "/var/task/datasette/database.py", line 167, in execute_write_fn raise result File "/var/task/datasette/database.py", line 179, in _execute_writes conn = self.connect(write=True) File "/var/task/datasette/database.py", line 93, in connect assert not (write and not self.is_mutable) AssertionError ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1762180409 | I_kwDOBm6k_c5pCL05 | 2085 | Interactive row selection in Datasette | learning4life 24938923 | open | 0 | 0 | 2023-06-18T08:29:45Z | 2023-06-18T08:31:23Z | NONE | Simon did an excellent prototype of interactive row selection in Datasette. I hope this functionality can be turned into a Datasette plugin. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1899310542 | I_kwDOBm6k_c5xNS3O | 2187 | Datasette for serving JSON only | geofinder 19705106 | open | 0 | 0 | 2023-09-16T05:48:29Z | 2023-09-16T05:48:29Z | NONE | Hi, is there any way to use Datasette for serving JSON only, without displaying the web page? I've tried searching the documentation but didn't find any information. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1761613778 | I_kwDOBm6k_c5pABfS | 2084 | Support facets for columns that contain timestamps | devxpy 19492893 | open | 0 | 0 | 2023-06-17T03:33:54Z | 2023-06-17T03:33:54Z | NONE | Django has a very nice filter for datetime fields. It would be nice to have something similar in Datasette too, to facet by a field that contains a timestamp - which doesn't seem to do anything with timestamps right now... |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
834602299 | MDU6SXNzdWU4MzQ2MDIyOTk= | 1262 | Plugin hook that could support 'order by random()' for table view | henry501 19328961 | open | 0 | 3 | 2021-03-18T10:02:01Z | 2021-03-18T17:55:01Z | NONE | I am frequently using Datasette to quickly get a visual impression for a table without reviewing it in its entirety. Because I have some groups of similar records, the default sorting options mean that each page is very similar and not representative of the full dataset. The current interface allows sorting by columns, but random sorting is only available via custom SQL. Maybe this could be a button or link. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1262/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
531502365 | MDU6SXNzdWU1MzE1MDIzNjU= | 646 | Make database level information from metadata.json available in the index.html template | lagolucas 18017473 | open | 0 | Datasette 1.0 3268330 | 3 | 2019-12-02T19:55:10Z | 2022-03-15T20:50:34Z | NONE | Did a search on the issues here and didn't find anything related to what I want. I want to take database-level information from metadata.json, like title, source and source_url, and use it on the index page. I tried some small tweaks to the Python and HTML files, but failed to get that result. Is there a way? Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
824750134 | MDU6SXNzdWU4MjQ3NTAxMzQ= | 1251 | facet option not appearing when table is big | verajosemanuel 15836677 | open | 0 | 0 | 2021-03-08T16:54:04Z | 2021-03-08T16:54:16Z | NONE | I have a big table with more than 500,000 rows. Trying to facet by one of my columns, the options are not available as they are for the other, smaller tables. I have tried to set it in the URL as:
to no avail. Is there any limit? How can I force the "facet" option to appear for big tables? (See the sketch after this record.) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
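For the faceting question in #1251 above: Datasette stops *suggesting* facets on large tables when the calculation takes too long, but a facet can still be requested explicitly with `?_facet=`, and the relevant time limits are settings. A hedged sketch with placeholder host, database, table and column names:

```bash
# "my_column", "big_table" and "mydb" are placeholders. Facets can be forced via _facet...
curl -s 'https://example.com/mydb/big_table?_facet=my_column'

# ...and the facet calculation/suggestion time limits (in milliseconds) can be raised at startup:
datasette mydb.db \
  --setting facet_time_limit_ms 2000 \
  --setting facet_suggest_time_limit_ms 500
```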
751195017 | MDU6SXNzdWU3NTExOTUwMTc= | 1111 | Accessing a database's `.json` is slow for very large SQLite files | asg017 15178711 | open | 0 | 3 | 2020-11-26T00:27:27Z | 2021-01-04T19:57:53Z | CONTRIBUTOR | I have a SQLite DB that's pretty large, 23GB and something like 300 million rows. I expect that most queries I run on it will be slow, which is fine, but there are some things that Datasette does that makes working with the DB very slow. Specifically, when I access the
I suspect this is because a
```bash
$ time sqlite3 out.db < <(echo "select count(*) from PageviewsHour;")
362794272

real	0m44.523s
user	0m2.497s
sys	0m6.703s
```
I'm using the
More than happy to debug further, or send a PR if you like one of the proposals above! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
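Not necessarily the change #1111 above asks for, but a mitigation that already exists for slow row counts on large files: precompute them with `datasette inspect` and serve the database as immutable so the cached counts are used. The file names below mirror the report above and are placeholders:

```bash
# out.db is the large database from the report; inspect-data.json is a placeholder name.
datasette inspect out.db --inspect-file inspect-data.json
datasette serve -i out.db --inspect-file inspect-data.json
```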
1060631257 | I_kwDOBm6k_c4_N_LZ | 1528 | Add new `"sql_file"` key to Canned Queries in metadata? | asg017 15178711 | open | 0 | 3 | 2021-11-22T21:58:01Z | 2022-06-10T03:23:08Z | CONTRIBUTOR | Currently for canned queries, you have to inline SQL in your
This works fine, but for a few reasons, I usually have my canned queries already written in separate So, I'd like to see a new
Both of these would work in the exact same way, where Datasette would instead open + include A few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml:
Let me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
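A sketch of the contrast #1528 above describes, with hypothetical database, query and file names: the first canned query uses today's inline `sql:` key, the second uses the proposed (not implemented) `sql_file:` key.

```bash
# "sql:" is existing Datasette behaviour; "sql_file:" is only what this issue proposes.
cat > metadata.yaml <<'EOF'
databases:
  fixtures:
    queries:
      list_users:
        sql: select id, name from users order by id
      recent_signups:
        sql_file: queries/recent_signups.sql
EOF
datasette fixtures.db --metadata metadata.yaml
```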
1620515757 | I_kwDOBm6k_c5glxut | 2039 | Subtle bug with `--load-extension` and `--static` flags with absolute Windows paths with`C:\` | asg017 15178711 | open | 0 | 0 | 2023-03-12T21:18:52Z | 2023-03-12T21:18:52Z | CONTRIBUTOR | From the Datasette discord: A user tried running the following command on windows:
This is hard because most absolute windows paths have a colon in them, like The "solution" is to use a relative path instead, but that doesn't feel that great. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
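A hedged illustration of the ambiguity in #2039 above (the exact command from Discord is not quoted in the record): `--static` takes a colon-separated `mount-point:directory` value, and `--load-extension` similarly accepts a colon-separated entrypoint suffix, which collides with the drive-letter colon in an absolute Windows path. Paths and mount names are hypothetical:

```bash
# Hypothetical Windows-style invocation: the value is meant as mount-point:directory,
# but "C:" in the absolute path introduces a second colon for the parser to trip on.
datasette mydata.db --static assets:C:\Users\me\project\assets

# The workaround mentioned in the issue: use a relative path so only one colon remains.
datasette mydata.db --static assets:assets
```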
1781530343 | I_kwDOBm6k_c5qL_7n | 2093 | Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File | asg017 15178711 | open | 0 | 8 | 2023-06-29T21:18:23Z | 2023-09-11T20:19:32Z | CONTRIBUTOR | Very often I get tripped up when trying to configure my Datasette instances. For example: if I want to change the port my app listens to, do I do that with a CLI flag, a Normally I need to look it up in the Datasette docs, and I quickly find my answer, but the number of places where "config" goes is overwhelming.
Typically my Datasette deploys are extremely long shell commands, with multiple Proposal: Consolidate all "config" into
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
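A rough sketch of what the consolidation proposed in #2093 above could look like. This is the issue's idea, not a released format; the section layout, key names and the flag used to load the file are illustrative assumptions only:

```bash
# Illustrative only: a single file combining settings, plugin config and metadata,
# as this issue proposes (the real design was still under discussion at the time).
cat > datasette.yaml <<'EOF'
settings:
  port: 8080
  max_returned_rows: 2000
plugins:
  datasette-cluster-map:
    latitude_column: lat
    longitude_column: lng
metadata:
  title: My combined-config instance
EOF
datasette mydata.db --config datasette.yaml   # hypothetical flag for this sketch
```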
1783304750 | I_kwDOBm6k_c5qSxIu | 2094 | JS Plugin Hooks for the Code Editor | asg017 15178711 | open | 0 | 0 | 2023-07-01T00:51:57Z | 2023-07-01T00:51:57Z | CONTRIBUTOR | When #2052 merges, I'd like to add support for adding extensions/functions to the Datasette code editor. I'd eventually like to build a JS plugin for
I did some hacking to see what this would look like, see here:
There can be a new hook that allows JS plugins to add new "extension" in the CodeMirror editorview here: Will need some more planning. For example, the Codemirror bundle in Datasette has functions that we could re-export for plugins to use (so we don't load 2 version of |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2094/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1855885427 | I_kwDOBm6k_c5unpBz | 2143 | De-tangling Metadata before Datasette 1.0 | asg017 15178711 | open | 0 | 24 | 2023-08-18T00:51:50Z | 2023-08-24T18:28:27Z | CONTRIBUTOR | Metadata in Datasette is a really powerful feature, but is a bit difficult to work with. It was initially a way to add "metadata" about your "data" in Datasette instances, like descriptions for databases/tables/columns, titles, source URLs, licenses, etc. But it later became the go-to spot for other Datasette features that have nothing to do with metadata, like permissions/plugins/canned queries. Specifically, I've found the following problems when working with Datasette metadata:
Possible solutions: Here are a few ideas of Datasette core changes we can make to address these problems. Re-vamp the Datasette Python metadata APIs: The Datasette object has a single The (I'm a bit fuzzy on what to actually do here, but I imagine it'll be very small breaking changes to a few Python methods) Add an optional
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1864112887 | PR_kwDOBm6k_c5Yo7bk | 2151 | Test Datasette on multiple SQLite versions | asg017 15178711 | open | 0 | 1 | 2023-08-23T22:42:51Z | 2023-08-23T22:58:13Z | CONTRIBUTOR | simonw/datasette/pulls/2151 | still testing, hope it works! :books: Documentation preview :books:: https://datasette--2151.org.readthedocs.build/en/2151/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1 | ||||||
1865869205 | I_kwDOBm6k_c5vNueV | 2157 | Proposal: Make the `_internal` database persistent, customizable, and hidden | asg017 15178711 | open | 0 | 3 | 2023-08-24T20:54:29Z | 2023-08-31T02:45:56Z | CONTRIBUTOR | The current The current
Additionally, it would be really nice if plugins could use this
In general, these are specific features that Datasette plugins would have access to if there was a central internal database they could read/write to:
Proposal
New features unlocked with this: These features don't really need a standardized
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1884330740 | PR_kwDOBm6k_c5ZszDF | 2174 | Use $DATASETTE_INTERNAL in absence of --internal | asg017 15178711 | open | 0 | 3 | 2023-09-06T16:07:15Z | 2023-09-08T00:46:13Z | CONTRIBUTOR | simonw/datasette/pulls/2174 | refs #2157, specifically this comment. Passing in This PR adds a new configurable env variable In draft mode for now; needs tests and documentation. Side note: maybe we can have a section in the docs that lists all the "configuration environment variables" that Datasette respects? I did a quick grep and found:
:books: Documentation preview :books:: https://datasette--2174.org.readthedocs.build/en/2174/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
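What the PR above (#2174) proposes, sketched as shell usage: today the internal database location is passed with the `--internal` flag the PR references, and with this change the `DATASETTE_INTERNAL` environment variable would be used when the flag is absent. File paths are placeholders, and the env-var fallback is the PR's proposal rather than released behaviour:

```bash
# Existing flag referenced by the PR (placeholder file names):
datasette data.db --internal internal.db

# Proposed by this PR: fall back to the env variable when --internal is not given.
export DATASETTE_INTERNAL=/path/to/internal.db
datasette data.db
```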
1900026059 | I_kwDOBm6k_c5xQBjL | 2188 | Plugin Hooks for "compile to SQL" languages | asg017 15178711 | open | 0 | 2 | 2023-09-18T01:37:15Z | 2023-09-18T06:58:53Z | CONTRIBUTOR | There's a ton of tools/languages that compile to SQL, which may be nice in Datasette. Some examples:
It would be cool if plugins could extend Datasette to use these languages, in both the code editor and API usage. A few things I'd imagine a
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1727478903 | I_kwDOBm6k_c5m9zx3 | 2081 | Update Endpoints defined in metadata throws 403 Forbidden after a while | cutmasta-kun 15085007 | open | 0 | 0 | 2023-05-26T11:52:30Z | 2023-05-26T11:52:30Z | NONE | Hello. I expose an endpoint to update This works really well! But after a while, the Datasette instance answers with 403 Forbidden. I have to delete the database and recreate it in order for it to work again. Any help here? (´。_。`) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
510076368 | MDU6SXNzdWU1MTAwNzYzNjg= | 605 | Support queries at the table level | bsilverm 12617395 | open | 0 | 2 | 2019-10-21T15:58:30Z | 2019-10-30T18:55:37Z | NONE | Per the issue described in issue #588, it was determined queries are not supported at the table level. Per my last comment in the issue, I'd like to request support for this as it would help eliminate errors in the event certain tables are not present in the database. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
319449852 | MDU6SXNzdWUzMTk0NDk4NTI= | 247 | SQLite code decoupled from Datasette | jsancho-gpl 11912854 | open | 0 | 1 | 2018-05-02T08:03:28Z | 2018-05-21T15:29:31Z | NONE | I'm working on the possibility of using Datasette with other file formats that aren't SQLite, like files in PyTables format. In order to accomplish that, I've started a fork for decoupling the code related to SQLite and putting it in an external connector to allow future connectors for a lot of file formats. It'd be nice if you could look at it and suggest improvements for a possible PR. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
359075028 | MDExOlB1bGxSZXF1ZXN0MjE0NjUzNjQx | 364 | Support for other types of databases using external connectors | jsancho-gpl 11912854 | open | 0 | 0 | 2018-09-11T14:31:47Z | 2018-09-11T14:31:47Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/364 | This PR is related to #293, but now all commits have been merged. The purpose is to support other file formats that aren't SQLite, like files with PyTables format. I've tried to accomplish that using external connectors published with entry points. The modifications in the original datasette code are minimal and many are in a separated file. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
803356942 | MDU6SXNzdWU4MDMzNTY5NDI= | 1218 | /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory | robmarkcole 11855322 | open | 0 | 1 | 2021-02-08T09:07:00Z | 2021-02-23T12:12:17Z | NONE | Error as above, however I do have python3.8 and the readme indicates this is supported. ``` (venv) (base) Robins-MacBook:datasette robin$ ls /usr/local/opt/python3/bin/ .. pip3 python3 python3.8 ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
795367402 | MDU6SXNzdWU3OTUzNjc0MDI= | 1209 | v0.54 500 error from sql query in custom template; code worked in v0.53; found a workaround | jrdmb 11788561 | open | 0 | 1 | 2021-01-27T19:08:13Z | 2021-01-28T23:00:27Z | NONE | v0.54 500 error in sql query template; code worked in v0.53; found a workaround schema: Live example of correctly rendered template in v.053: https://cosmotalks-cy6xkkbezq-uw.a.run.app/cosmotalks/talks/1 Description of problem: I needed 'sql select' code in a custom row-mydatabase-mytable.html template to lookup the series name for a foreign key integer value in the talks table. So The code below worked perfectly in v0.53 (just the relevant sql statement part is shown; full code is here):
In v0.54, that code resulted in a 500 error with a 'no such table series' message. A second query in that template also did not work but the above is fully illustrative of the problem. All templates were up-to-date along with datasette v0.54. Workaround: After fiddling around with trying different things, what worked was the syntax from Querying a different database from the datasette-template-sql github repo to add the database name to the sql statement:
Though this was found to work, it should not be necessary to add |
datasette 107914493 | issue | |||||||||
1955676270 | I_kwDOBm6k_c50kUBu | 2201 | Discord invite link is invalid | andrewsanchez 11708906 | open | 0 | 0 | 2023-10-21T21:50:05Z | 2023-10-21T21:50:05Z | NONE | https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw and returns the following: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1536851861 | I_kwDOBm6k_c5bmn-V | 1994 | Stuck on loading screen | jackhagley 10913053 | open | 0 | 1 | 2023-01-17T18:33:49Z | 2023-01-23T08:21:08Z | NONE | Can’t actually open it! Downloaded today from the releases tab. Running macOS 13.1.
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1994/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1734786661 | PR_kwDOBm6k_c5R0fcK | 2082 | Catch query interrupted on facet suggest row count | redraw 10843208 | open | 0 | 0 | 2023-05-31T18:42:46Z | 2023-05-31T18:45:26Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2082 | Just like facet's I've included :books: Documentation preview :books:: https://datasette--2082.org.readthedocs.build/en/2082/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
904598267 | MDExOlB1bGxSZXF1ZXN0NjU1NzQxNDI4 | 1348 | DRAFT: add test and scan for docker images | blairdrummond 10801138 | open | 0 | 2 | 2021-05-28T03:02:12Z | 2021-05-28T03:06:16Z | CONTRIBUTOR | simonw/datasette/pulls/1348 | NOTE: I don't think this PR is ready, since the arm/v6 and arm/v7 images are failing pytest due to missing dependencies (gcc and friends). But it's pretty close. Closes https://github.com/simonw/datasette/issues/1344 . Using a build-matrix for the platforms and this test, we test all the platforms in parallel. I also threw in container scanning. Switch
|
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
826064552 | MDU6SXNzdWU4MjYwNjQ1NTI= | 1253 | Capture "Ctrl + Enter" or "⌘ + Enter" to send SQL query? | rayvoelker 9308268 | open | 0 | 1 | 2021-03-09T15:00:50Z | 2021-10-30T16:00:42Z | NONE | It appears as though "Shift + Enter" triggers the form submit action to submit SQL, but could that action be bound to the "Ctrl + Enter" or "⌘ + Enter" action? I feel like that pattern already exists in a number of similar tools and could improve usability of the editor. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
863884805 | MDU6SXNzdWU4NjM4ODQ4MDU= | 1304 | Document how to send multiple values for "Named parameters" | rayvoelker 9308268 | open | 0 | 4 | 2021-04-21T13:19:06Z | 2021-12-08T03:23:14Z | NONE | https://docs.datasette.io/en/stable/sql_queries.html#named-parameters I thought that I had seen an example of how to do this (example below), but I can't seem to find it.
Or, maybe this isn't a fully supported feature. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
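The documentation question in #1304 above is about passing several values through one named parameter. Datasette's named parameters each bind a single value, so one commonly suggested workaround (an illustration here, not necessarily the answer the issue settled on) is to pass a JSON array as the parameter and unpack it with SQLite's `json_each()`. Host, database, table and column names are placeholders:

```bash
# Placeholder host/db/table: the :tags parameter carries a JSON array as a single string,
# and json_each() expands it so IN can match multiple values.
curl -sG 'https://example.com/mydb.json' \
  --data-urlencode 'sql=select * from articles where tag in (select value from json_each(:tags))' \
  --data-urlencode 'tags=["python","sqlite"]'
```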
1174655187 | I_kwDOBm6k_c5GA9DT | 1671 | Filters fail to work correctly against calculated numeric columns returned by SQL views because type affinity rules do not apply | rayvoelker 9308268 | open | 0 | 8 | 2022-03-20T19:17:24Z | 2022-03-22T17:43:12Z | NONE | I found a strange behavior, and I'm not sure if it's related to views and boolean values perhaps, or if there's something else weird going on here, but I'll provide an example that may help show what I'm seeing happen.
```bash
#!/bin/bash
echo "\"id\",\"expiration_date\"
0,2018-01-04
1,2019-01-05
2,2020-01-06
3,2021-01-07
4,2022-01-08
5,2023-01-09
6,2024-01-10
7,2025-01-11
8,2026-01-12
9,2027-01-13
" > test.csv
csvs-to-sqlite test.csv test.db
sqlite-utils create-view --replace test.db test_view "select id, expiration_date, case when julianday('NOW') >= julianday(expiration_date) then 1 else 0 end as has_expired FROM test"
```
Thanks again and let me know if you want me to provide anything else! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
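Using the test.db / test_view created by the script in the record above, a small demonstration of the affinity rules the title refers to: the computed has_expired column has no declared type, so a text comparison (which is roughly how a value arrives from a URL filter) does not match the integer the CASE expression returns.

```bash
# Assumes test.db and test_view from the script above.
sqlite3 test.db "select has_expired, typeof(has_expired) from test_view limit 3;"
sqlite3 test.db "select count(*) from test_view where has_expired = 1;"    # matches rows
sqlite3 test.db "select count(*) from test_view where has_expired = '1';"  # matches nothing: no affinity conversion
```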
1182227211 | I_kwDOBm6k_c5Gd1sL | 1692 | [plugins][feature request]: Support additional script tag attributes when loading custom JS | hydrosquall 9020979 | open | 0 | 2 | 2022-03-27T01:16:03Z | 2022-03-30T06:14:51Z | CONTRIBUTOR | Motivation
To be able to support legacy browsers without slowing down users with modern browsers, I would like to be able to set additional HTML attributes on the tag fallback script,
```html
<script type="module" src="/index.my-es-module-bundle.js"></script>
<script src="/index.my-legacy-fallback-bundle.js" nomodule="" defer></script>
```
Proposal: To achieve this, I propose additional optional properties to the API accepted by the Under this API, I'd write something like this to get the above HTML rendered in Datasette.
Resources
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
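A sketch of the proposal in #1692 above. The `"module": true` form is, I believe, existing `extra_js_urls` behaviour; the additional attribute keys (`nomodule`, `defer`) stand in for what the issue asks to support and are not an existing API. File names are taken from the HTML snippet above:

```bash
# "module": true already works in Datasette; "nomodule"/"defer" here only represent the
# extra attributes this issue proposes (not an existing Datasette API).
cat > metadata.json <<'EOF'
{
  "extra_js_urls": [
    {"url": "https://example.com/index.my-es-module-bundle.js", "module": true},
    {"url": "https://example.com/index.my-legacy-fallback-bundle.js", "nomodule": "", "defer": true}
  ]
}
EOF
datasette mydb.db --metadata metadata.json
```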
1198822563 | I_kwDOBm6k_c5HdJSj | 1706 | [feature] immutable mode for a directory, not just individual sqlite file | hydrosquall 9020979 | open | 0 | 4 | 2022-04-10T00:50:57Z | 2022-12-09T19:11:40Z | CONTRIBUTOR | Motivation
Proposal: Immutable flag works for both single files and directories
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1706/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
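The feature request in #1706 above, sketched as CLI usage: the per-file form is how `-i`/`--immutable` works today, while pointing `-i` at a directory is the proposal. Paths are placeholders.

```bash
# Works today: each immutable database file is listed individually.
datasette serve -i data/one.db -i data/two.db -i data/three.db

# Proposed by this issue (not an existing behaviour of the flag): pass a directory once.
datasette serve -i data/
```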
1646068413 | I_kwDOBm6k_c5iHQK9 | 2048 | Test failures encountered while packaging for GNU Guix | Apteryks 8332263 | open | 0 | 0 | 2023-03-29T15:36:54Z | 2023-03-29T15:36:54Z | NONE | Hello, While reviewing a package submitted to Guix to add app_client = <datasette.utils.testing.TestClient object at 0x7fffef099be0>
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:701: AssertionError ----------------------------- Captured stderr call ----------------------------- ERROR: conn=<sqlite3.Connection object at 0x7fffeedfe5d0>, sql = 'select rowid, * from [table%7E2Fwith%7E2Fslashes%7E2Ecsv] where "rowid"=:p0', params = {'p0': '3'}: no such table: table%7E2Fwith%7E2Fslashes%7E2Ecsv __ test_database_page_for_database_with_dot_in_name __ [gw15] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_with_dot = <datasette.utils.testing.TestClient object at 0x7fffef3416a0>
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:633: AssertionError ___ test_tilde_encoded_database_names[fo%o] ______ [gw6] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python db_name = 'fo%o'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:983: AssertionError ___ testtilde_encoded_database_names[f~/c.d] _____ [gw7] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python db_name = 'f~/c.d'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:983: AssertionError __ test_database_with_space_in_name[/searchable.json] __ [gw21] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_two_attached_databases = <datasette.utils.testing.TestClient object at 0x7fffef11d730> path = '/searchable.json'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in call return call_result.result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result return self.__get_result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result raise self._exception /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap result = await self.awaitable(args, *kwargs) /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get return await self._request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request httpx_response = await self.ds.client.request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request return await client.request( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request return await self.send(request, auth=auth, follow_redirects=follow_redirects) /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send response = await self._send_handling_auth( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth response = await self._send_handling_redirects( self = <httpx.AsyncClient object at 0x7fffef11d940> request = <Request('GET', 'http://localhost/extra%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E20database/searchable%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E2Ejson')> follow_redirects = True history = [<Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, ...]
/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects ___ test_database_with_space_in_name[.json] ______ [gw19] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_two_attached_databases = <datasette.utils.testing.TestClient object at 0x7fffef085a90> path = '.json'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in call return call_result.result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result return self.__get_result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result raise self._exception /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap result = await self.awaitable(args, *kwargs) /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get return await self._request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request httpx_response = await self.ds.client.request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request return await client.request( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request return await self.send(request, auth=auth, follow_redirects=follow_redirects) /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send response = await self._send_handling_auth( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth response = await self._send_handling_redirects( self = <httpx.AsyncClient object at 0x7fffecd99ca0> request = <Request('GET', 'http://localhost/extra%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E20database%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E2Ejson')> follow_redirects = True history = [<Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, ...]
/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects __ test_database_with_space_in_name[/searchable_view] __ [gw22] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_two_attached_databases = <datasette.utils.testing.TestClient object at 0x7fffeeab4c70> path = '/searchable_view'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in call return call_result.result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result return self.__get_result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result raise self._exception /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap result = await self.awaitable(args, *kwargs) /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get return await self._request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request httpx_response = await self.ds.client.request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request return await client.request( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request return await self.send(request, auth=auth, follow_redirects=follow_redirects) /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send response = await self._send_handling_auth( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth response = await self._send_handling_redirects( self = <httpx.AsyncClient object at 0x7fffec5b3580> request = <Request('GET', 'http://localhost/extra%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E20database/searchable_view')> follow_redirects = True history = [<Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, ...]
/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects ___ test_database_with_space_in_name[/] ___ [gw18] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_two_attached_databases = <datasette.utils.testing.TestClient object at 0x7fffef085be0> path = '/'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in call return call_result.result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result return self.__get_result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result raise self._exception /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap result = await self.awaitable(args, *kwargs) /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get return await self._request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request httpx_response = await self.ds.client.request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request return await client.request( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request return await self.send(request, auth=auth, follow_redirects=follow_redirects) /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send response = await self._send_handling_auth( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth response = await self._send_handling_redirects( self = <httpx.AsyncClient object at 0x7fffec5f1370> request = <Request('GET', 'http://localhost/extra%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E20database/')> follow_redirects = True history = [<Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, ...]
/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects ____ test_database_with_space_in_name[/searchable] _____ [gw20] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_two_attached_databases = <datasette.utils.testing.TestClient object at 0x7fffef099f10> path = '/searchable'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in call return call_result.result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result return self.__get_result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result raise self._exception /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap result = await self.awaitable(args, *kwargs) /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get return await self._request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request httpx_response = await self.ds.client.request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request return await client.request( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request return await self.send(request, auth=auth, follow_redirects=follow_redirects) /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send response = await self._send_handling_auth( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth response = await self._send_handling_redirects( self = <httpx.AsyncClient object at 0x7fffecd8c790> request = <Request('GET', 'http://localhost/extra%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E20database/searchable')> follow_redirects = True history = [<Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, ...]
/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects __ test_database_with_space_in_name[/searchable_view.json] ____ [gw23] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client_two_attached_databases = <datasette.utils.testing.TestClient object at 0x7fffef341520> path = '/searchable_view.json'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_api.py:920: /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:223: in call return call_result.result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:438: in result return self.__get_result() /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/concurrent/futures/_base.py:390: in __get_result raise self._exception /gnu/store/mcclmphjgbrgpa0v037a4nlq336482g8-python-asgiref-3.4.1/lib/python3.9/site-packages/asgiref/sync.py:292: in main_wrap result = await self.awaitable(args, *kwargs) /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:66: in get return await self._request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:156: in _request httpx_response = await self.ds.client.request( /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/app.py:1602: in request return await client.request( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1527: in request return await self.send(request, auth=auth, follow_redirects=follow_redirects) /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1614: in send response = await self._send_handling_auth( /gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1642: in _send_handling_auth response = await self._send_handling_redirects( self = <httpx.AsyncClient object at 0x7fffef085460> request = <Request('GET', 'http://localhost/extra%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E20database/searchable_view%7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E7E2Ejson')> follow_redirects = True history = [<Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, <Response [302 Found]>, ...]
/gnu/store/bj5lb299rfb4cbbq5kczq9imdk9a7y64-python-httpx-0.23.0/lib/python3.9/site-packages/httpx/_client.py:1672: TooManyRedirects __ test_weird_database_names[database (1).sqlite] __ [gw7] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python tmpdir = local('/tmp/guix-build-datasette-0.64.2.drv-0/pytest-of-nixbld/pytest-0/popen-gw7/test_weird_database_names_data0') filename = 'database (1).sqlite'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_cli.py:321: AssertionError ___ test_weird_database_names[test-database (1).sqlite] ______ [gw6] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python tmpdir = local('/tmp/guix-build-datasette-0.64.2.drv-0/pytest-of-nixbld/pytest-0/popen-gw6/test_weird_database_names_test0') filename = 'test-database (1).sqlite'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_cli.py:321: AssertionError _ test_row_html_compound_primary_key[/fixtures/compound_primary_key/a~2Fb,~2Ec~2Dd-expected1] _ [gw11] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffec2d37f0> path = '/fixtures/compound_primary_key/a~2Fb,~2Ec~2Dd' expected = [['<td class="col-pk1 type-str">a/b</td>', '<td class="col-pk2 type-str">.c-d</td>', '<td class="col-content type-str">c</td>']]
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:370: AssertionError _ test_css_classes_on_body[/fixtures/table~2Fwith~2Fslashes~2Ecsv-expected_classes5] _ [gw3] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffd4743fa0> path = '/fixtures/table~2Fwith~2Fslashes~2Ecsv' expected_classes = ['table', 'db-fixtures', 'table-tablewithslashescsv-fa7563']
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:238: AssertionError _ test_templates_considered[/fixtures/table~2Fwith~2Fslashes~2Ecsv-table-fixtures-tablewithslashescsv-fa7563.html, *table.html] _ [gw3] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffd4743fa0> path = '/fixtures/table~2Fwith~2Fslashes~2Ecsv' expected_considered = 'table-fixtures-tablewithslashescsv-fa7563.html, *table.html'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:264: AssertionError _ test_alternate_url_json[/fixtures/table~2Fwith~2Fslashes~2Ecsv-http://localhost/fixtures/table~2Fwith~2Fslashes~2Ecsv.json] _ [gw21] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffecd9fac0> path = '/fixtures/table~2Fwith~2Fslashes~2Ecsv' expected = 'http://localhost/fixtures/table~2Fwith~2Fslashes~2Ecsv.json'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:948: AssertionError _ test_edit_sql_link_on_canned_queries[/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC-/fixtures?sql=select+id%2C+name+from+facet_cities+order+by+id+limit+1%3B] _ [gw18] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffec5952e0> path = '/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC' expected = '/fixtures?sql=select+id%2C+name+from+facet_cities+order+by+id+limit+1%3B'
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_html.py:841: AssertionError _____ test_table_with_slashes_in_name ______ [gw9] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffec0860a0>
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:141: AssertionError ___ test_custom_query_with_unicode_characters _____ [gw8] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffec1cda90>
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:1042: /tmp/guix-build-datasette-0.64.2.drv-0/source/datasette/utils/testing.py:40: in json return json.loads(self.text) /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/json/__init__.py:346: in loads return _default_decoder.decode(s) /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/json/decoder.py:337: in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) self = <json.decoder.JSONDecoder object at 0x7ffff7479760>, s = '', idx = 0
/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/json/decoder.py:355: JSONDecodeError _ test_searchable[/fixtures/searchable.json?_search=te+AND+do&_searchmode=raw-expected_rows3] _ [gw13] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python app_client = <datasette.utils.testing.TestClient object at 0x7fffd470bf10> path = '/fixtures/searchable.json?_search=te+AND+do&_searchmode=raw' expected_rows = [[1, 'barry cat', 'terry dog', 'panther'], [2, 'terry dog', 'sara weasel', 'puma']]
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:402: AssertionError _ test_searchmode[table_metadata1-_search=te+AND+do-expected_rows1] ____ [gw20] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python table_metadata = {'searchmode': 'raw'}, querystring = '_search=te+AND+do' expected_rows = [[1, 'barry cat', 'terry dog', 'panther'], [2, 'terry dog', 'sara weasel', 'puma']]
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:442: AssertionError _ test_searchmode[table_metadata2-_search=te+AND+do&_searchmode=raw-expected_rows2] _ [gw20] linux -- Python 3.9.9 /gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin/python table_metadata = {}, querystring = '_search=te+AND+do&_searchmode=raw' expected_rows = [[1, 'barry cat', 'terry dog', 'panther'], [2, 'terry dog', 'sara weasel', 'puma']]
/tmp/guix-build-datasette-0.64.2.drv-0/source/tests/test_table_api.py:442: AssertionError
=========================== short test summary info ============================
FAILED tests/test_api.py::test_row_strange_table_name - assert 400 == 200
FAILED tests/test_api.py::test_database_page_for_database_with_dot_in_name - ...
FAILED tests/test_api.py::test_tilde_encoded_database_names[fo%o] - assert 30...
FAILED tests/test_api.py::test_tilde_encoded_database_names[f~/c.d] - assert ...
FAILED tests/test_api.py::test_database_with_space_in_name[/searchable.json]
FAILED tests/test_api.py::test_database_with_space_in_name[.json] - httpx.Too...
FAILED tests/test_api.py::test_database_with_space_in_name[/searchable_view]
FAILED tests/test_api.py::test_database_with_space_in_name[/] - httpx.TooMany...
FAILED tests/test_api.py::test_database_with_space_in_name[/searchable] - htt...
FAILED tests/test_api.py::test_database_with_space_in_name[/searchable_view.json]
FAILED tests/test_cli.py::test_weird_database_names[database (1).sqlite] - As...
FAILED tests/test_cli.py::test_weird_database_names[test-database (1).sqlite]
FAILED tests/test_html.py::test_row_html_compound_primary_key[/fixtures/compound_primary_key/a~2Fb,~2Ec~2Dd-expected1]
FAILED tests/test_html.py::test_css_classes_on_body[/fixtures/table~2Fwith~2Fslashes~2Ecsv-expected_classes5]
FAILED tests/test_html.py::test_templates_considered[/fixtures/table~2Fwith~2Fslashes~2Ecsv-table-fixtures-tablewithslashescsv-fa7563.html, *table.html]
FAILED tests/test_html.py::test_alternate_url_json[/fixtures/table~2Fwith~2Fslashes~2Ecsv-http://localhost/fixtures/table~2Fwith~2Fslashes~2Ecsv.json]
FAILED tests/test_html.py::test_edit_sql_link_on_canned_queries[/fixtures/~F0~9D~90~9C~F0~9D~90~A2~F0~9D~90~AD~F0~9D~90~A2~F0~9D~90~9E~F0~9D~90~AC-/fixtures?sql=select+id%2C+name+from+facet_cities+order+by+id+limit+1%3B]
FAILED tests/test_table_api.py::test_table_with_slashes_in_name - assert 302 ...
FAILED tests/test_table_api.py::test_custom_query_with_unicode_characters - j...
FAILED tests/test_table_api.py::test_searchable[/fixtures/searchable.json?_search=te+AND+do&_searchmode=raw-expected_rows3]
FAILED tests/test_table_api.py::test_searchmode[table_metadata1-_search=te+AND+do-expected_rows1]
FAILED tests/test_table_api.py::test_searchmode[table_metadata2-_search=te+AND+do*&_searchmode=raw-expected_rows2]
=========== 22 failed, 1049 passed, 3 skipped in 1522.28s (0:25:22) ============
error: in phase 'check': uncaught exception:
%exception #<&invoke-error program: "/gnu/store/ziqwkzz6znb5d3c245xn0cq5ra2ly0w3-python-pytest-7.1.3/bin/pytest" arguments: ("-vv" "-n" "24" "-m" "not serial") exit-status: 1 term-signal: #f stop-signal: #f>
phase `check' failed after 1523.3 seconds
Thank you! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
451585764 | MDU6SXNzdWU0NTE1ODU3NjQ= | 499 | Accessibility for non-techie newsies? | chrismp 7936571 | open | 0 | 3 | 2019-06-03T16:49:37Z | 2019-06-05T21:22:55Z | NONE | Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out-of-the-box, or as a plugin? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
457147936 | MDU6SXNzdWU0NTcxNDc5MzY= | 512 | "about" parameter in metadata does not appear when alone | chrismp 7936571 | open | 0 | 3 | 2019-06-17T21:04:20Z | 2019-10-11T15:49:13Z | NONE | Here's an example of metadata I have for one database on datasette.
The text in Is this intended? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1977726056 | I_kwDOBm6k_c514bRo | 2203 | custom plugin not seen as sql function | LyzardKing 7113541 | open | 0 | 0 | 2023-11-05T10:30:19Z | 2023-11-05T10:30:19Z | NONE | Hi, I'm not sure if this is the right repo for this issue. I'm using datasette with the parquet (to read a duckdb), and jellyfish plugins. Both work perfectly. Now I need to create a simple plugin that uses the python rouge package and returns a similarity score (similarly to how the jellyfish plugin works).
If I create a custom plugin, even the example hello_world one, copied directly from the tutorial, I get the following error:
Since the jellyfish plugin doesn't do anything more complex, I'm wondering if there is some other kind of issue with my setup. |
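For reference, the tutorial's hello_world example registers a custom SQL function through the documented `prepare_connection` hook. A minimal sketch (the reporter's actual plugin file isn't shown above, so treat the file name as a placeholder):

```python
# my_plugin.py - place this file in the directory passed to --plugins-dir
from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Registers a zero-argument SQL function on each new SQLite connection
    conn.create_function("hello_world", 0, lambda: "Hello world!")
```

If the plugin is being discovered at all it should appear under `/-/plugins` and `select hello_world()` should work; if the SQL function is still reported as missing, the plugin is probably not being loaded in the first place.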
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
811054000 | MDU6SXNzdWU4MTEwNTQwMDA= | 1230 | Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages | Kabouik 7107523 | open | 0 | 1 | 2021-02-18T12:27:02Z | 2021-02-18T15:22:15Z | NONE | I filtered a data set on some criteria and obtained 265 results, split over three pages (100, 100, 65), and realized that Vega plots are only applied to the results displayed on the current page, instead of the whole filtered data, e.g., 100 on page 1, 100 on page 2, 65 on page 3. Is there a way to force the graphs to consider all results instead of just the page, considering that pages rarely represent sensible information? Likewise, while the cluster map does show all results on the first page, if you go to the next pages, it will show all remaining results except the previous page(s), e.g., 265 on page 1, 165 on page 2, 65 on page 3. In both cases, I don't see many situations where one would like to represent the data this way, and it might even lead to interpretation errors when viewing the data. Am I missing some cases where this would be best? Perhaps a clickable option to subset visual representations according to visible pages vs. display all search results would do? [Edit] Oh, I just saw the "Load all" button under the cluster map as well as the setting to alter the max number of results. So I guess this issue is only about the Vega charts. |
datasette 107914493 | issue | |||||||||
814595021 | MDU6SXNzdWU4MTQ1OTUwMjE= | 1241 | Share button for copying current URL | Kabouik 7107523 | open | 0 | 6 | 2021-02-23T15:55:40Z | 2023-08-24T20:09:52Z | NONE | I use datasette in an This particular use prevents users from accessing the full URLs of their datasette views and queries, which is a shame because the way datasette handles URLs to make every view or query easy to share is awesome. I know how to get the URL from the context menu of my browser, but I don't think many visitors would do it or even notice that datasette uses permalinks for pretty much every action they do. Would it be possible to add a "Share link" button to the interface, either in datasette itself or in a plugin? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1129052172 | I_kwDOBm6k_c5DS_gM | 1633 | base_url or prefix does not work with _exact match | henrikek 6613091 | open | 0 | 2 | 2022-02-09T21:45:07Z | 2022-04-28T09:12:56Z | NONE | When I hit the "Apply" button to search a column with the "_exact" syntax, the URL prefix is removed from the URL. And the result is: If I add the marked row to url_builder.py it seems to work: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
802513359 | MDU6SXNzdWU4MDI1MTMzNTk= | 1217 | Possible to deploy as a python app (for Rstudio connect server)? | plpxsk 6165713 | open | 0 | 4 | 2021-02-05T22:21:24Z | 2022-11-04T11:37:52Z | NONE | Is it possible to deploy a In my enterprise, I have option to deploy python apps via Rstudio Connect, and I would like to publish a I welcome any pointers to converting |
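One possible direction, sketched under the assumption that the hosting platform can serve an ASGI application (the file and database names below are placeholders, not taken from the issue):

```python
# app.py - expose Datasette as an ASGI application object
from datasette.app import Datasette

ds = Datasette(files=["my.db"])
app = ds.app()
```

An ASGI server can then serve it with something like `uvicorn app:app`; whether Rstudio Connect accepts an ASGI app packaged this way is a separate question.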
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1217/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1592327343 | I_kwDOBm6k_c5e6Pyv | 2029 | Sorry Simon, didn't know how else to contact you | llchristopherson 5804626 | open | 0 | 0 | 2023-02-20T19:02:53Z | 2023-02-20T19:02:53Z | NONE | Hi Simon, Would you be willing to chat with me about Datasette? I have some questions. I am working on a project to evaluate data ingestion tools for a research organization and I ran across Datasette. I have looked through a lot of your documentation, but still have some questions, which are very specific. If you would be willing to write me back about this, my email is laura@renci.org. Thanks, Laura |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
860722711 | MDU6SXNzdWU4NjA3MjI3MTE= | 1301 | Publishing to cloudrun with immutable mode? | louispotok 5413548 | open | 0 | 1 | 2021-04-18T17:51:46Z | 2022-10-07T02:38:04Z | CONTRIBUTOR | I'm a bit confused about immutable mode and publishing to cloudrun. (I want to publish with immutable mode so that I can support database downloads.) Running
However, running When I just |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
870946764 | MDU6SXNzdWU4NzA5NDY3NjQ= | 1312 | how to query many-to-many relationship via json API? | bram2000 5268174 | open | 0 | 0 | 2021-04-29T12:09:49Z | 2021-04-29T12:09:49Z | NONE | Hi, Firstly thanks for Datasette, it's great! I'm trying to use the JSON API to query data from a Datasette instance. I have a simple 3 table many-to-many relationship, like so:
the Now I want to return "all documents within category X" but I cannot see a way to do this without executing two queries; the first to lookup the row_id of category X, and the second to join I could easily write this in SQL, but this makes programmatic handling of pagination much more difficult (we'd have to dynamically modify the SQL to select the row_id and include the correct where and limit clauses). Is there a way to achieve this using the JSON API? |
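For readers landing here, the single-query version the reporter alludes to looks roughly like the sketch below; the table and column names are assumptions, since the actual schema is not reproduced above:

```sql
-- Hypothetical schema: documents(id, ...), categories(id, name),
-- document_categories(document_id, category_id)
select documents.*
from documents
join document_categories on document_categories.document_id = documents.id
join categories on categories.id = document_categories.category_id
where categories.name = :category
order by documents.id
limit 101;
```

Run through the SQL endpoint this works, but as the reporter notes, limit/offset pagination then has to be managed by the client rather than by the table view's built-in paging.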
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1001104942 | PR_kwDOBm6k_c4r-EVH | 1475 | feat: allow joins using _through in both directions | bram2000 5268174 | open | 0 | 0 | 2021-09-20T15:28:20Z | 2021-09-20T15:28:20Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1475 | Currently the This is an admittedly hacky change to implement bidirectional joins using |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1446657889 | I_kwDOBm6k_c5WOj9h | 1885 | Integrate inside GUI app (tkinter) | dmalves 5115787 | open | 0 | 0 | 2022-11-13T00:10:43Z | 2022-11-13T00:11:09Z | NONE | Hi, I'd like to integrate datasette inside a tkinter app. The app should be able to start/stop datasette server. How could I integrate datasette inside my app, so it can start and stop datasette server? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1613974869 | PR_kwDOBm6k_c5LgPS- | 2034 | remove an unused `app` var in cli.py | wenhoujx 4370201 | open | 0 | 2 | 2023-03-07T18:19:05Z | 2023-03-29T20:56:20Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2034 | this var Feel free to ignore this PR if the deleted line actually does something. :books: Documentation preview :books:: https://datasette--2034.org.readthedocs.build/en/2034/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1708981860 | PR_kwDOBm6k_c5QdMea | 2074 | sort files by mtime | abbbi 3919561 | open | 0 | 0 | 2023-05-14T15:25:15Z | 2023-05-14T15:25:29Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2074 | Serving multiple database files and getting tired of the default sort, this changes the sort order so the most recently changed databases are at the top of the list and I don't have to scroll down, lazy as I am ;) :books: Documentation preview :books:: https://datasette--2074.org.readthedocs.build/en/2074/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1661860507 | PR_kwDOBm6k_c5N_bMw | 2056 | GitHub Action to lint Python code with ruff | cclauss 3709715 | open | 0 | 6 | 2023-04-11T06:41:27Z | 2023-04-15T14:24:46Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2056 | Ruff supports over 500 lint rules and can be used to replace Flake8 (plus dozens of plugins), isort, pydocstyle, yesqa, eradicate, pyupgrade, and autoflake, all while executing (in Rust) tens or hundreds of times faster than any individual tool. The ruff Action uses minimal steps to run in ~5 seconds, rapidly providing intuitive GitHub Annotations to contributors. :books: Documentation preview :books:: https://datasette--2056.org.readthedocs.build/en/2056/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1794097871 | I_kwDOBm6k_c5q78LP | 2095 | Introduce "dark mode" CSS | jamietanna 3315059 | open | 0 | 0 | 2023-07-07T19:15:58Z | 2023-07-07T19:15:58Z | NONE | Using the CSS media query |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1866815458 | PR_kwDOBm6k_c5YyF-C | 2159 | Implement Dark Mode colour scheme | jamietanna 3315059 | open | 0 | 0 | 2023-08-25T10:46:23Z | 2023-08-25T10:46:35Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2159 | Closes #2095. :books: Documentation preview :books:: https://datasette--2159.org.readthedocs.build/en/2159/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1 | ||||||
642572841 | MDU6SXNzdWU2NDI1NzI4NDE= | 859 | Database page loads too slowly with many large tables (due to table counts) | abdusco 3243482 | open | 0 | 21 | 2020-06-21T14:23:17Z | 2021-08-25T21:59:55Z | CONTRIBUTOR | Hey, I have a database that I save in HTML from couple of web scrapers. There are around 200k+, 50+ rows in a couple of tables, with sqlite file weighing around 600MB. The app runs on a VPS with 2 core CPU, 4GB RAM and refreshing database page regularly takes more than 10 seconds. I was suspecting that counting tables was the culprit, but manually running I've looked at the source code. There's a check for index page for mutable databases larger than 100MB https://github.com/simonw/datasette/blob/799c5d53570d773203527f19530cf772dc2eeb24/datasette/views/index.py#L15 but this check is not performed for database page.
I've manually crippled now the page loads in <100ms. Is it possible to apply size check on database page too? /-/versions output{ "python": { "version": "3.8.0", "full": "3.8.0 (default, Oct 28 2019, 16:14:01) \n[GCC 8.3.0]" }, "datasette": { "version": "0.44" }, "asgi": "3.0", "uvicorn": "0.11.5", "sqlite": { "version": "3.22.0", "fts_versions": [ "FTS5", "FTS4", "FTS3" ], "extensions": { "json1": null }, "compile_options": [ "COMPILER=gcc-7.4.0", "ENABLE_COLUMN_METADATA", "ENABLE_DBSTAT_VTAB", "ENABLE_FTS3", "ENABLE_FTS3_PARENTHESIS", "ENABLE_FTS3_TOKENIZER", "ENABLE_FTS4", "ENABLE_FTS5", "ENABLE_JSON1", "ENABLE_LOAD_EXTENSION", "ENABLE_PREUPDATE_HOOK", "ENABLE_RTREE", "ENABLE_SESSION", "ENABLE_STMTVTAB", "ENABLE_UNLOCK_NOTIFY", "ENABLE_UPDATE_DELETE_LIMIT", "HAVE_ISNAN", "LIKE_DOESNT_MATCH_BLOBS", "MAX_SCHEMA_RETRY=25", "MAX_VARIABLE_NUMBER=250000", "OMIT_LOOKASIDE", "SECURE_DELETE", "SOUNDEX", "TEMP_STORE=1", "THREADSAFE=1" ] } } |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
648749062 | MDExOlB1bGxSZXF1ZXN0NDQyNTA1MDg4 | 883 | Skip counting hidden tables | abdusco 3243482 | open | 0 | 4 | 2020-07-01T07:38:08Z | 2020-07-02T00:25:44Z | CONTRIBUTOR | simonw/datasette/pulls/883 | Potential fix for https://github.com/simonw/datasette/issues/859. Disabling table counts for hidden tables speeds up database page quite a bit. In my setup it reduced load time by 2/3 (~300 -> ~90ms) |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
756875827 | MDU6SXNzdWU3NTY4NzU4Mjc= | 1129 | Fix footer to the bottom of the page | abdusco 3243482 | open | 0 | 0 | 2020-12-04T07:28:07Z | 2020-12-04T16:04:29Z | CONTRIBUTOR | Footer doesn't stick to the bottom if the body content isn't long enough to reach the end of viewport. This can be fixed using flexbox. ```css body { min-height: 100vh; display: flex; flex-direction: column; } .content { flex-grow: 1; } ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
756876238 | MDExOlB1bGxSZXF1ZXN0NTMyMzQ4OTE5 | 1130 | Fix footer not sticking to bottom in short pages | abdusco 3243482 | open | 0 | 4 | 2020-12-04T07:29:01Z | 2021-06-15T13:27:48Z | CONTRIBUTOR | simonw/datasette/pulls/1130 | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | |||||||
791237799 | MDU6SXNzdWU3OTEyMzc3OTk= | 1196 | Access Denied Error in Windows | QAInsights 2826376 | open | 0 | 2 | 2021-01-21T15:40:40Z | 2021-04-14T19:28:38Z | NONE | I am trying to publish a db to vercel. But while issuing the below command throwing I am using PyCharm and Python 3.9. I have reinstalled both and launched PyCharm as Admin in Windows 10. But still the issue persists. Issued command PS: localhost is working fine. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1811824307 | I_kwDOBm6k_c5r_j6z | 2105 | When reverse proxying datasette with nginx a URL element gets erroneously added | aki-k 2235371 | open | 0 | 3 | 2023-07-19T12:16:53Z | 2023-07-21T21:17:09Z | NONE | I use this nginx config: ``` location /datasette-llm { return 302 /datasette-llm/; }
https://192.168.1.3:5432/datasette-llm/datasette-llm/logs.json?sql=select+*+from+_llm_migrations https://192.168.1.3:5432/datasette-llm/datasette-llm/logs.csv?sql=select+*+from+_llm_migrations&_size=max When I remove that extra "datasette-llm" from the URL, those links work too. |
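A doubled path element like this usually comes from a mismatch between the nginx `location`/`proxy_pass` pair and Datasette's `base_url` setting. The reporter's full config is not shown above, so the following is only a sketch of the commonly documented pattern, with a placeholder upstream port:

```nginx
location /datasette-llm/ {
    # base_url should match the public prefix, e.g.
    #   datasette ... --setting base_url /datasette-llm/
    proxy_pass http://127.0.0.1:8001/datasette-llm/;
    proxy_set_header Host $host;
}
```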
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
612382643 | MDU6SXNzdWU2MTIzODI2NDM= | 758 | Question: Access to immutable database-path | clausjuhl 2181410 | open | 0 | 6 | 2020-05-05T07:01:18Z | 2020-05-28T08:23:27Z | NONE | Hi Simon Is there anywhere in the app-context where one can access the hashed urlpath of the database? Currently it's included in the template-context ( |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
617323873 | MDU6SXNzdWU2MTczMjM4NzM= | 766 | Enable wildcard-searches by default | clausjuhl 2181410 | open | 0 | 2 | 2020-05-13T10:14:48Z | 2021-03-05T16:35:21Z | NONE | Hi Simon. It seems that datasette currently has wildcard-searches disabled by default (along with the boolean search-options, NEAR-queries and more, and despite the docs). If I try out the search-url provided in the docs (https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort), it does not handle wildcard-searches, and I'm unable to make it work on my datasette-instance. I would argue that wildcard-searches is such a standard query, that it should be enabled by default. Requiring "_searchmode=raw" when using prefix-searches seems unnecessary. Plus: What happens to non-ascii searches when using "_searchmode=raw"? Is the "escape_fts"-function from datasette.utils ignored? Thanks! /Claus |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1181037277 | I_kwDOBm6k_c5GZTLd | 1686 | heroku bails if app name specified in datasette publish is the same as existing app | tlongers 2115933 | open | 0 | 0 | 2022-03-25T17:10:34Z | 2022-03-25T17:10:34Z | NONE | Seems that
The resulting error has the below traceback:
It's a solid failsafe, but does |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1347717749 | I_kwDOBm6k_c5QVIp1 | 1791 | Updating metadata.json on Datasette for MacOS | ment4list 1780782 | open | 0 | 1 | 2022-08-23T10:41:16Z | 2022-08-23T13:29:51Z | NONE | I've installed Datasette for Mac as per the documentation and it's working great! However, I'm not sure how to go about adding something like "Canned Queries" or utilising other advanced features or settings by manipulating the I can view these files from the Datasette App from the top right "burger" menu but it only shows the contents of the file with no way to edit or change it. Am I missing something? Where can I update the PS: This is a fantastic tool! Thanks so much for all the effort and especially adding a bunch of different ways to get started quickly! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
919822817 | MDU6SXNzdWU5MTk4MjI4MTc= | 1376 | Official Datasette Docker image should use SQLite >= 3.31.0 (for generated columns) | jcgregorio 1726460 | open | 0 | 3 | 2021-06-13T15:25:51Z | 2021-06-13T15:39:37Z | NONE | Trying to run datasette via the Docker container doesn't seem to work:
I have confirmed that the downloaded ``` [skia-public] jcgregorio@jcgregorio840 ~/Downloads $ sqlite3 fixtures.db SQLite version 3.34.1 2021-01-20 14:10:07 Enter ".help" for usage hints. sqlite> pragma integrity_check; ok sqlite> ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1577548579 | I_kwDOBm6k_c5eB3sj | 2021 | Docker images for 1.0 alphas? | meowcat 1563881 | open | 0 | 0 | 2023-02-09T09:35:52Z | 2023-02-09T09:35:52Z | NONE | Hi, would you consider putting 1.0alpha images on Dockerhub? (Also, how usable are the alphas?) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1802613340 | PR_kwDOBm6k_c5VZhfw | 2100 | Make primary key view accessible to render_cell hook | meowcat 1563881 | open | 0 | 0 | 2023-07-13T09:30:36Z | 2023-08-10T13:15:41Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2100 | :books: Documentation preview :books:: https://datasette--2100.org.readthedocs.build/en/2100/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1824457306 | I_kwDOBm6k_c5svwJa | 2122 | Parameters on canned queries: fixed or query-generated list? | meowcat 1563881 | open | 0 | 0 | 2023-07-27T14:07:07Z | 2023-07-27T14:07:07Z | NONE | Hi, currently parameters in canned queries are just text fields. It would be cool to have one of the options below. Would you accept a PR doing something in this direction? (Possibly this could even work as a plugin.)
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
982803408 | MDU6SXNzdWU5ODI4MDM0MDg= | 1454 | Feature Request: Publish to IPFS | blitmap 1560788 | open | 0 | 0 | 2021-08-30T13:36:18Z | 2021-08-30T13:36:18Z | NONE | Hello, I am a huge fan of this being used for exploring data. I think it has a lot of flexibility not found in other tools. I'm not sure if what I'm asking for is possible: Can this be extended to publish to IPFS? IPFS is an attractive hosting option for decentralized journalism. Food for thought ~ |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1280799259 | I_kwDOBm6k_c5MV3Ib | 1761 | ensure_ascii=False | mustafa0x 1473102 | open | 0 | 0 | 2022-06-22T19:58:13Z | 2022-06-22T19:58:30Z | NONE | Hi, thanks for the project! For the JSON output, I would consider defaulting to |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1365741480 | I_kwDOBm6k_c5RZ4-o | 1806 | UX to recover from Error 500: "You can only execute one statement at a time." | jieter 1470389 | open | 0 | 0 | 2022-09-08T08:01:27Z | 2022-09-08T08:01:37Z | NONE | When using the Custom SQL query view, if I accidentally add a semicolon in the middle of my query, datasette errors with:
The error view doesn't contain the query textarea anymore, so it provides no easy way recover from the error. It would be nice if I could change and submit it again. |
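An illustrative query that reproduces the error (made up for this sketch, not taken from the report): the stray semicolon splits the text into two statements.

```sql
-- The semicolon after "mytable" accidentally ends the statement early,
-- so everything after it counts as a second statement.
select * from mytable;
order by id desc limit 10
```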
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
828858421 | MDU6SXNzdWU4Mjg4NTg0MjE= | 1258 | Allow canned query params to specify default values | wdccdw 1385831 | open | 0 | 5 | 2021-03-11T07:19:02Z | 2023-02-20T23:39:58Z | NONE | If I call a canned query that includes named parameters, without passing any parameters, datasette runs the query anyway, resulting in an HTTP status code 400, and a visible error in the browser, with only a link back to home. This means that one of the default links on https://site/database/ will lead to a broken page with no apparent way out. Is there any way to skip performing the query when parameters aren't supplied, but otherwise render the usual canned query page? Alternatively, can I supply default values for my parameters, either when defining my canned queries or when linking to the canned query page from the default database template. |
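For context, a canned query with a named parameter is declared in metadata.json roughly as below (names are made up for this sketch); the request is for a way to attach a default value to `:min_age` so that following the link without parameters does not produce a 400:

```json
{
  "databases": {
    "mydatabase": {
      "queries": {
        "adults": {
          "sql": "select * from people where age >= :min_age"
        }
      }
    }
  }
}
```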
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1258/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1575880841 | I_kwDOBm6k_c5d7giJ | 2020 | Documentation refers to "off" setting; doesn't seem to work, "false" does | dmick 1350673 | open | 0 | 0 | 2023-02-08T10:38:10Z | 2023-02-08T10:38:10Z | NONE | https://docs.datasette.io/en/stable/settings.html#suggest-facets, among others, suggests using "off" to disable the setting; however, this doesn't appear to work in the JSON config files, where it apparently needs to be a "JSON boolean" and have the values "true" or "false". Perhaps the Python code is more flexible?...but either way, the documentation probably should mention it. |
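A sketch of the JSON form the reporter found to work, assuming config-directory mode with a settings.json file (the setting name is just an example):

```json
{
  "suggest_facets": false
}
```

This contrasts with the command-line form, `datasette mydatabase.db --setting suggest_facets off`, where "off"/"on" is accepted.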
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1590183272 | I_kwDOBm6k_c5eyEVo | 2027 | How to redirect from "/" to a specific db/table | dmick 1350673 | open | 0 | 4 | 2023-02-18T03:14:01Z | 2023-03-08T04:42:22Z | NONE | Using nginx to redirect public IP to the local uvicorn server as 'normal'. I can't figure out how to redirect such that '/' results in accessing the one db/table I want to serve; redirecting / to /db/table breaks some of the CSS; fooling with base_url doesn't seem to help. Can someone explain this, if it's possible? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1674322631 | PR_kwDOBm6k_c5OpEz_ | 2061 | Add "Packaging a plugin using Poetry" section in docs | rclement 1238873 | open | 0 | 0 | 2023-04-19T07:23:28Z | 2023-04-19T07:27:18Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2061 | This PR adds a new section about packaging a plugin using :books: Documentation preview :books:: https://datasette--2061.org.readthedocs.build/en/2061/ |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/2061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
826700095 | MDU6SXNzdWU4MjY3MDAwOTU= | 1255 | Facets timing out but work when filtering | robroc 1219001 | open | 0 | 2 | 2021-03-09T22:01:39Z | 2021-04-02T20:50:08Z | NONE | System info: Windows 10 Datasette 0.55 installed via pip Python 3.8.5 in a conda environment I'm getting the message |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1255/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
2019811176 | I_kwDOBm6k_c54Y99o | 2211 | Unreachable exception handlers for `sqlite3.OperationalError` | mattparmett 1214074 | open | 0 | 0 | 2023-12-01T00:50:22Z | 2023-12-01T00:50:22Z | NONE | There are several places where Because the exception will be caught by the first handler, the logic in the second handler is unreachable and will never be executed. If this is intended behavior, the second handler can be removed. If this is not intended, and the second handler should be the one that catches this exception, then This issue was found via a CodeQL query on the repository, and I've listed the occurrences found by the query below. There may be other instances of this issue in the code that were not surfaced by the query. I'd be happy to share the query if others would like to view or run it. One example: Other instances: https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L266-L270 https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L452-L456 |
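Reduced to a generic sketch (not Datasette's actual code): because `sqlite3.OperationalError` is a subclass of `sqlite3.Error`, a handler for the base class listed first always wins.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    conn.execute("select * from no_such_table")  # raises sqlite3.OperationalError
except sqlite3.Error:
    # OperationalError inherits from sqlite3.Error, so it is caught here ...
    print("caught by the generic handler")
except sqlite3.OperationalError:
    # ... which makes this more specific branch unreachable.
    print("never reached")
```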
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1665053646 | I_kwDOBm6k_c5jPrPO | 2059 | "Deceptive site ahead" alert on Heroku deployment | mtdukes 1186275 | open | 0 | 1 | 2023-04-12T18:34:51Z | 2023-04-13T01:13:01Z | NONE | I deployed a fairly basic instance of Datasette ( Is there a way around this? Maybe a way to add ownership verification through Google's search console? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1266207143 | I_kwDOBm6k_c5LeMmn | 1755 | Gunicorn | ar-jan 1176293 | open | 0 | 0 | 2022-06-09T14:18:46Z | 2022-06-09T14:18:46Z | NONE | I've read issue #514, which resulted in running Datasette via systemd as the recommended approach. We've also adopted this (for now), but I notice that Uvicorn says the following:
We usually deploy Python applications via Gunicorn for these process management features (e.g. |
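The specific flags the reporter had in mind are not shown above; purely as a generic illustration for an ASGI application (placeholder module and app names, not a Datasette-specific recipe), the usual Gunicorn-with-Uvicorn-workers invocation looks like:

```bash
gunicorn "myproject.asgi:app" \
  --worker-class uvicorn.workers.UvicornWorker \
  --workers 4 \
  --max-requests 1000
```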
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
849512840 | MDU6SXNzdWU4NDk1MTI4NDA= | 1288 | Facets: show counts for null | jungle-boogie 1111743 | open | 0 | 0 | 2021-04-02T22:33:44Z | 2021-04-02T22:33:44Z | NONE | Hi, Thank you for Datasette and being a fan of SQLite! Not all rows in a record will always contain data. So when using a facet on a column where some records have data and others don't, you don't get an accurate count of the results. Please consider also counting and showing null records with facets. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1288/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
548591089 | MDU6SXNzdWU1NDg1OTEwODk= | 657 | Allow creation of virtual tables at startup | dazzag24 1055831 | open | 0 | 4 | 2020-01-12T16:10:55Z | 2021-01-15T20:24:35Z | NONE | Hi, I've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow. https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html https://github.com/cldellow/sqlite-parquet-vtable This works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the spatialite extension, I knew that datasette supports loading extensions in the underlying sqlite instance. However I hit a blocker as the current design only allows SELECT statements to be executed and so I am unable to execute the crucial CREATE VIRTUAL TABLE ......... command that is required to load the data from the parquet file into the table. It seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself. Could this be done as a datasette plugin, or would this require more fundamental changes at initialisation time? My thoughts are that something at init time could detect that the user was loading a .parquet file and then switch to a mode where it loads that via the "CREATE VIRTUAL TABLE..." rather than loading the .db file in the default case? I'm happy to contribute code and testing, I just need some pointers on the best approach. Thanks Darren |
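For reference, the statement in question for that extension looks roughly like the sketch below; the module name and paths follow the sqlite-parquet-vtable README and are assumptions here, not something Datasette currently runs at startup:

```sql
-- The parquet extension must be loaded into the connection first
-- (e.g. via SQLite's .load, pointing at the built libparquet library).
CREATE VIRTUAL TABLE demo USING parquet('/path/to/data.parquet');
```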
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
787104850 | MDU6SXNzdWU3ODcxMDQ4NTA= | 1192 | Form Plugin for in-depth Datasette Querying | tomershvueli 1024355 | open | 0 | 0 | 2021-01-15T18:24:50Z | 2021-01-15T18:24:50Z | NONE | I envision a sort of easy-to-build form plugin that would be able to map a user's inputs to different fields/columns in a Datasette database. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
642388564 | MDU6SXNzdWU2NDIzODg1NjQ= | 858 | publish heroku does not work on Windows 10 | simonlau 870912 | open | 0 | 7 | 2020-06-20T14:40:28Z | 2021-06-10T17:44:09Z | NONE | When executing "datasette publish heroku schools.db" on Windows 10, I get the following error
to
as well as the other |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1121121305 | I_kwDOBm6k_c5C0vQZ | 1618 | Reconsider policy on blocking queries containing the string "pragma" | strada 770231 | open | 0 | 6 | 2022-02-01T19:39:46Z | 2022-02-02T19:42:03Z | NONE | First of all, thanks for creating this cool project, and also supporting publishing to various hosting services out of the box. While testing out, I noticed legitimate queries such as
Example as seen from a Datasette instance: https://fivethirtyeight.datasettes.com/polls?sql=select+*+from+books+where+title+like+%27Pragmatic%25%27%0D%0A I'd propose a regular expression like
I can create a pull request with this change, unless the maintainers think it would allow unwanted queries to be executed. |
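The reporter's exact regex is not reproduced above; purely as an illustration of the idea (treat "pragma" as disallowed only when it appears as a standalone word, so a value like "Pragmatic" no longer trips the check), a sketch:

```python
import re

# Illustrative only - neither the reporter's proposal nor Datasette's shipped rule.
disallowed_pragma = re.compile(r"\bpragma\b", re.IGNORECASE)

print(bool(disallowed_pragma.search("select * from books where title like 'Pragmatic%'")))  # False
print(bool(disallowed_pragma.search("PRAGMA case_sensitive_like = true")))  # True
```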
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
646448486 | MDExOlB1bGxSZXF1ZXN0NDQwNzM1ODE0 | 868 | initial windows ci setup | joshmgrant 702729 | open | 0 | 3 | 2020-06-26T18:49:13Z | 2021-07-10T23:41:43Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/868 | Picking up the work done on #557 with a new PR. Seeing if I can get this working. |
datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 | ||||||
1154399841 | I_kwDOBm6k_c5Ezr5h | 1645 | Sensible `cache-control` headers for static assets, including those served by plugins | curiousleo 697092 | open | 0 | Datasette 1.0 3268330 | 4 | 2022-02-28T18:12:03Z | 2022-03-08T02:59:29Z | NONE | What I'm seeingWith A table view returns A static asset returns no What I expected to seeI expected the static asset to return a Why this mattersI'm productionising a Datasette deployment right now and was looking into putting it behind a Varnish instance. I was surprised to see requests for static assets being served from Datasette rather than Varnish, this is what led me to look more closely at the response headers. While Datasette serves those static assets pretty quickly, I don't see why Datasette should serve them. By their nature, static assets like images and JS files are very cacheable, so it should be easy to serve them from a cache like Varnish. (Note that Varnish can easily be configured to override this header, enabling caching for static assets. But it would be better if this override was not necessary.) DiscussionIt seems clear to me that serving static assets without a I see two options here: A. Static assets use the same logic as table / SQL views to set the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1645/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
718238967 | MDU6SXNzdWU3MTgyMzg5Njc= | 1003 | from_json jinja2 filter | mhalle 649467 | open | 0 | 4 | 2020-10-09T15:30:58Z | 2020-10-09T17:17:07Z | NONE | When JSON fields are rendered in a jinja2 template, it is handy to be able to manipulate them as data (e.g., iterate over an array of values). Ansible has a "from_json" function, which just calls json.loads. It's trivial as a datasette plugin, but it seems generally useful. Does it make sense to add it directly into the app? |
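A sketch of that plugin using the documented `prepare_jinja2_environment` hook (the file name is arbitrary):

```python
# datasette_from_json.py - adds a from_json filter to Datasette's templates
from datasette import hookimpl
import json


@hookimpl
def prepare_jinja2_environment(env):
    # Usable in templates as {{ some_json_string | from_json }}
    env.filters["from_json"] = json.loads
```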
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
718395987 | MDExOlB1bGxSZXF1ZXN0NTAwNzk4MDkx | 1008 | Add json_loads and json_dumps jinja2 filters | mhalle 649467 | open | 0 | 1 | 2020-10-09T20:11:34Z | 2020-12-15T02:30:28Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/1008 | datasette 107914493 | pull | { "url": "https://api.github.com/repos/simonw/datasette/issues/1008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0 |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);