issues
1,662 rows where repo = 107914493 and type = "issue" sorted by author_association
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association ▼ | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
273775212 | MDU6SXNzdWUyNzM3NzUyMTI= | 88 | Add NHS England Hospitals example to wiki | tomdyson 15543 | closed | 0 | 4 | 2017-11-14T12:29:10Z | 2021-03-22T23:46:36Z | 2017-11-14T22:54:06Z | CONTRIBUTOR | https://nhs-england-hospitals.now.sh and an associated map visualisation: http://run.plnkr.co/preview/cj9zlf1qc0003414y90ajkwpk/ Datasette is wonderful! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/88/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
275814941 | MDU6SXNzdWUyNzU4MTQ5NDE= | 141 | datasette publish can fail if /tmp is on a different device | jacobian 21148 | closed | 0 | Custom templates edition 2949431 | 5 | 2017-11-21T18:28:05Z | 2020-04-29T03:27:54Z | 2017-12-08T16:06:36Z | CONTRIBUTOR |
I'm not sure if it's possible to detect this (can you figure out which device |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
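One likely culprit in the issue above is that `os.rename()` cannot move a file across filesystem boundaries, which is exactly the situation when `/tmp` sits on a different device. A minimal sketch of a workaround, with hypothetical path names rather than Datasette's actual publish code:

```python
import os
import shutil
import tempfile

def move_build_artifact(src_path: str, dest_path: str) -> None:
    """Move src_path to dest_path even when the two live on different devices."""
    try:
        os.rename(src_path, dest_path)  # fast path: same filesystem
    except OSError:
        # os.rename raises EXDEV for cross-device moves;
        # shutil.move falls back to copy-then-delete, which always works.
        shutil.move(src_path, dest_path)

# Alternatively, create the temporary build directory on the same
# filesystem as the destination so the rename never crosses devices:
build_dir = tempfile.mkdtemp(dir=".")
```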
286938589 | MDU6SXNzdWUyODY5Mzg1ODk= | 177 | Publishing to Heroku - metadata file not uploaded? | psychemedia 82988 | closed | 0 | 0 | 2018-01-09T01:04:31Z | 2018-01-25T16:45:32Z | 2018-01-25T16:45:32Z | CONTRIBUTOR | Trying to run datasette (version 0.14) on Heroku with a On a Mac with dodgy
Could that be causing the issue? Also, I'm not seeing custom query links anywhere obvious when I run the metadata file with a local datasette server? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
291639118 | MDU6SXNzdWUyOTE2MzkxMTg= | 183 | Custom Queries - escaping strings | psychemedia 82988 | closed | 0 | 2 | 2018-01-25T16:49:13Z | 2019-06-24T06:45:07Z | 2019-06-24T06:45:07Z | CONTRIBUTOR | If a SQLite table column name contains spaces, they are usually referred to in double quotes:
In the JSON metadata file, this is passed by escaping the double quotes:
When specifying a custom query in
which does not work. Alternatively, a valid custom query can be passed using backticks (`) to quote the column name and single (unescaped) quotes for the matched value:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
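A small standalone sketch of the quoting rules being described (hypothetical table and column names): the three identifier-quoting styles SQLite accepts, and why the double-quote form needs escaping once it lives inside a JSON metadata file.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE places ("Street Name" TEXT)')
conn.execute("INSERT INTO places VALUES (?)", ("High Street",))

# SQLite accepts double quotes, square brackets or backticks around an
# identifier that contains spaces; all three statements are equivalent.
for sql in (
    'SELECT "Street Name" FROM places',
    "SELECT [Street Name] FROM places",
    "SELECT `Street Name` FROM places",
):
    print(conn.execute(sql).fetchall())

# Inside metadata.json the double-quote form has to be escaped, which is
# what json.dumps produces for a hypothetical canned query:
print(json.dumps({"sql": 'SELECT "Street Name" FROM places WHERE "Street Name" = :name'}))
```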
313837303 | MDU6SXNzdWUzMTM4MzczMDM= | 203 | Support for units | russss 45057 | closed | 0 | 10 | 2018-04-12T18:24:28Z | 2018-04-16T21:59:17Z | 2018-04-16T21:59:17Z | CONTRIBUTOR | It would be nice to be able to attach a unit to a column in the metadata, and have it rendered with that unit (and SI prefix) when it's displayed. It would also be nice to support entering the prefixes in variables when querying. With my radio licensing app I've put all frequencies in Hz. It's easy enough to special-case the row rendering to add the SI prefixes, but it's pretty unusable when querying by that field. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/203/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
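As a rough illustration of the rendering being asked for, the `pint` library can attach a unit to a raw value and pick a sensible SI prefix. This is only a sketch of the idea, not how Datasette implemented the feature:

```python
import pint

ureg = pint.UnitRegistry()

def render_frequency(raw_hz: float) -> str:
    """Render a value stored as plain Hz with an appropriate SI prefix."""
    quantity = raw_hz * ureg.hertz
    return f"{quantity.to_compact():~}"  # "~" asks pint for the short unit form

print(render_frequency(145_000_000))  # -> "145.0 MHz"
```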
314834783 | MDU6SXNzdWUzMTQ4MzQ3ODM= | 219 | Expose units in the JSON API? | russss 45057 | open | 0 | 0 | 2018-04-16T22:04:25Z | 2018-04-16T22:04:25Z | CONTRIBUTOR | From #203: it would be nice for the JSON API to (optionally) return columns rendered with units in them - if, for example, you're consuming the JSON to render the rows on a map. I'm not entirely sure how useful this will be though - at the moment my map queries are custom SQL queries (a few have joins in, the rest might be fetching large amounts of data so it makes sense to limit columns fetched). Perhaps the SQL function is a better approach in general. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
324835838 | MDU6SXNzdWUzMjQ4MzU4Mzg= | 276 | Handle spatialite geometry columns better | russss 45057 | closed | 0 | 21 | 2018-05-21T08:46:55Z | 2022-03-21T22:22:20Z | 2022-03-21T22:22:20Z | CONTRIBUTOR | I'd like to see spatialite geometry columns rendered more sensibly - at the moment they come through as well-known-binary unless you use custom SQL, and WKB isn't of much use to anyone on the web. In HTML: they should be shown either as simple lat/long (if it's just a point, for example), or as a sensible placeholder if they're more complex geometries. In JSON: they should be GeoJSON geometries, (which means they can be automatically fed into a leaflet map with no further messing around). In CSV: they should be WKT. I briefly wondered if this should go into a plugin, but I suspect it needs hooking in at a deeper level than the plugin architecture will support any time soon. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/276/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
336924199 | MDU6SXNzdWUzMzY5MjQxOTk= | 330 | Limit text display in cells containing large amounts of text | psychemedia 82988 | closed | 0 | 4 | 2018-06-29T09:15:22Z | 2018-07-24T04:53:20Z | 2018-07-10T16:20:48Z | CONTRIBUTOR | The default preview of a database shows all columns (is the row count limited?) which is fine in many cases but can take a long time to load / offer a large overhead if the table is a SpatiaLite table containing geometry columns that include large shapefiles. Would it make sense to have a setting that can limit the amount of text displayed in any given cell in the table preview, or (less useful?) suppress (with notification) the display of overlong columns unless enabled by the user? An issue then arises if a user does want to see all the text in a cell: 1) for a particular cell; 2) for every cell in the table; 3) for all cells in a particular column or columns (I haven't checked but what if a column contains e.g. raw image data? Does this display as raw data? Or can this be rendered in a context aware way as an image preview? I guess a custom template would be one way to do that?) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
336936010 | MDU6SXNzdWUzMzY5MzYwMTA= | 331 | Datasette throws error when loading spatialite db without extension loaded | psychemedia 82988 | closed | 0 | 2 | 2018-06-29T09:51:14Z | 2022-01-20T21:29:40Z | 2018-07-10T15:13:36Z | CONTRIBUTOR | When starting datasette on a SpatiaLite database without loading the SpatiaLite extension (using eg
It would be nice to trap this and return a message saying something like: ``` It looks like you're trying to load a SpatiaLite database? Make sure you load in the SpatiaLite extension when starting datasette. Read more: https://datasette.readthedocs.io/en/latest/spatialite.html ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
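A sketch of how that trap could look in plain `sqlite3`: touching every table forces SQLite to resolve virtual-table modules, and the resulting "no such module" error is a reasonable signal that the SpatiaLite extension is missing. The function and wording here are illustrative, not Datasette's implementation:

```python
import sqlite3

SPATIALITE_HINT = (
    "It looks like you're trying to load a SpatiaLite database?\n"
    "Make sure you load in the SpatiaLite extension when starting datasette.\n"
    "Read more: https://datasette.readthedocs.io/en/latest/spatialite.html"
)

def check_database(path: str) -> None:
    """Open a database and raise a friendly hint if it needs SpatiaLite."""
    conn = sqlite3.connect(path)
    try:
        # Touching every table forces SQLite to resolve virtual table modules
        # such as VirtualSpatialIndex, which fail without the extension loaded.
        for (name,) in conn.execute(
            "select name from sqlite_master where type = 'table'"
        ):
            conn.execute(f"select * from [{name}] limit 0")
    except sqlite3.OperationalError as ex:
        if "no such module" in str(ex):
            raise SystemExit(SPATIALITE_HINT) from ex
        raise
```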
341228846 | MDU6SXNzdWUzNDEyMjg4NDY= | 343 | Render boolean fields better by default | russss 45057 | open | 0 | 1 | 2018-07-14T11:10:29Z | 2018-07-14T14:17:14Z | CONTRIBUTOR | These show up as 0 or 1 because SQLite stores booleans as integers. I think Yes/No would be fine in most cases? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
341229113 | MDU6SXNzdWUzNDEyMjkxMTM= | 344 | datasette publish heroku fails without name provided | russss 45057 | closed | 0 | 1 | 2018-07-14T11:15:56Z | 2018-07-14T13:00:48Z | 2018-07-14T13:00:48Z | CONTRIBUTOR | It fails with the following JSON traceback if the
```
Traceback (most recent call last):
File "/usr/local/bin/datasette", line 11, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/datasette/cli.py", line 265, in publish
app_name = json.loads(create_output)["name"]
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
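The failing line in the traceback is `json.loads(create_output)["name"]`, which chokes when the `heroku` CLI prints something other than JSON. A hypothetical defensive wrapper (an illustration, not the fix that shipped) would at least surface Heroku's own message:

```python
import json

def parse_heroku_app_name(create_output: str) -> str:
    """Extract the app name from the create command's output, failing with
    a readable message when Heroku printed an error instead of JSON."""
    try:
        return json.loads(create_output)["name"]
    except (json.JSONDecodeError, KeyError):
        raise SystemExit(
            "Could not read an app name from the heroku create output:\n"
            + create_output
        )
```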
369716228 | MDU6SXNzdWUzNjk3MTYyMjg= | 366 | Default built image size over Zeit Now 100MiB limit | gfrmin 416374 | closed | 0 | 2 | 2018-10-12T21:27:17Z | 2018-11-05T06:23:32Z | 2018-11-05T06:23:32Z | CONTRIBUTOR | Using |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
374953006 | MDU6SXNzdWUzNzQ5NTMwMDY= | 369 | Interface should show same JSON shape options for custom SQL queries | gfrmin 416374 | open | 0 | Datasette 1.0 3268330 | 2 | 2018-10-29T10:39:15Z | 2020-05-30T17:24:06Z | CONTRIBUTOR | At the moment the page returning a custom SQL query shows the JSON and CSV APIs, but not the multiple JSON shapes. However, adding the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
377155320 | MDU6SXNzdWUzNzcxNTUzMjA= | 370 | Integration with JupyterLab | psychemedia 82988 | open | 0 | 4 | 2018-11-04T13:57:13Z | 2022-09-29T08:17:47Z | CONTRIBUTOR | I just watched a demo video for the JupyterLab Chart Editor which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab, and I think registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue). JupyterLab extensions can also open files from a dialogue as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe. This made me wonder about what For example, by right-clicking on a CSV file (for which there is already a CSV table view) in the file browser, offer a View / Run as datasette file viewer option that will:
(? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all As a freebie, the Related: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
377156339 | MDU6SXNzdWUzNzcxNTYzMzk= | 371 | datasette publish digitalocean plugin | psychemedia 82988 | closed | 0 | 3 | 2018-11-04T14:07:41Z | 2021-01-04T20:14:28Z | 2021-01-04T20:14:28Z | CONTRIBUTOR | Provide support for launching Example: Deploy Docker containers into Digital Ocean. Digital Ocean also has a preconfigured VM running Docker that can be launched from the command line via the Digital Ocean API: Docker One-Click Application. Related: - Launching containers in Digital Ocean servers running docker: How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04 - How To Use Doctl, the Official DigitalOcean Command-Line Client |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
377166793 | MDU6SXNzdWUzNzcxNjY3OTM= | 372 | Docker build tools | psychemedia 82988 | open | 0 | 0 | 2018-11-04T16:02:35Z | 2018-11-04T16:02:35Z | CONTRIBUTOR | In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components such as If plugin/extensions builders want to include additional packages, then things like incremental builds of composable builds that add additional items into a base Examples of Dockerfile generators / container builders: Discussions / threads (via Binderhub gitter) on:
- why Relates to things like: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/372/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 } |
||||||||
377266351 | MDU6SXNzdWUzNzcyNjYzNTE= | 373 | Views should be shown on root/index page along with tables | gfrmin 416374 | closed | 0 | 0.28 4305096 | 1 | 2018-11-05T06:28:41Z | 2019-05-16T00:29:22Z | 2019-05-16T00:29:22Z | CONTRIBUTOR | At the moment the number of views is given on a datasette "homepage", but not links to any views themselves |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
398559195 | MDU6SXNzdWUzOTg1NTkxOTU= | 400 | datasette publish cloudrun plugin | rprimet 10352819 | closed | 0 | 1 | 2019-01-12T14:35:11Z | 2019-05-03T16:57:35Z | 2019-05-03T16:57:35Z | CONTRIBUTOR | Google announced that they may launch a simple service for running Docker containers (previously serverless containers, now called "cloud run" -- link to alpha here). If/when this happens, it might be a good fit for publishing datasettes? (at least using the current version, manually publishing a datasette seems relatively painless). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
415575624 | MDU6SXNzdWU0MTU1NzU2MjQ= | 414 | datasette requires specific version of Click | psychemedia 82988 | closed | 0 | 1 | 2019-02-28T11:24:59Z | 2019-03-15T04:42:13Z | 2019-03-15T04:42:13Z | CONTRIBUTOR | Is Current release is at 7.0. Can the requirement be liberalised, eg to |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
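A hypothetical `setup.py` fragment showing the kind of liberalised requirement being asked for, assuming the original pin was an exact `click==6.7`:

```python
from setuptools import setup

setup(
    name="datasette",
    install_requires=[
        # a liberalised pin: any Click from 6.7 up to, but not including, 8.0
        "click>=6.7,<8.0",
    ],
)
```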
432870248 | MDU6SXNzdWU0MzI4NzAyNDg= | 431 | Datasette doesn't reload when database file changes | psychemedia 82988 | closed | 0 | 3 | 2019-04-13T16:50:43Z | 2019-05-02T05:13:55Z | 2019-05-02T05:13:54Z | CONTRIBUTOR | My understanding of the I'm running on a Mac and from the I was also expecting to see some sort of log statement in the datasette logging to say that it had detected a file change and restarted, but don't see anything there? Will try to check on an Ubuntu box when I get a chance to see if this is a Mac thing. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
438200529 | MDU6SXNzdWU0MzgyMDA1Mjk= | 438 | Plugins are loaded when running pytest | russss 45057 | closed | 0 | 2 | 2019-04-29T08:25:58Z | 2019-05-02T05:09:18Z | 2019-05-02T05:09:11Z | CONTRIBUTOR | If I have a datasette plugin installed on my system, its hooks are called when running the main datasette tests. This is probably undesirable, especially with the inspect hook in #437, as the plugin may rely on inspected state that the tests don't know about. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
438259941 | MDU6SXNzdWU0MzgyNTk5NDE= | 440 | Plugin hook for additional data export formats | russss 45057 | closed | 0 | 0 | 2019-04-29T11:01:39Z | 2019-05-01T23:01:57Z | 2019-05-01T23:01:57Z | CONTRIBUTOR | It would be nice to have a simple way for plugins to provide additional data export formats. Might require a bit of work on the internals. I can work around this at a lower level with the I guess plugins should be able to register a function which takes a row or list of rows and returns the rendered data. They'll also need to provide a file extension and probably a Content-Type. Datasette could then automatically include this format in the list of export formats on each page. Looks like this is related to #119. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
442327592 | MDU6SXNzdWU0NDIzMjc1OTI= | 456 | Installing installs the tests package | hellerve 7725188 | closed | 0 | 3 | 2019-05-09T16:35:16Z | 2020-07-24T20:39:54Z | 2020-07-24T20:39:54Z | CONTRIBUTOR | Because The offending line is here: https://github.com/simonw/datasette/blob/bfa2ae0d16d39bb82dbe4da4f3fdc3c7f6257418/setup.py#L40 And only
This should be a relatively simple fix, and I could drop a PR if desired! Cheers |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
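The usual remedy for this class of problem is to exclude the test suite in `find_packages()`, so a plain `pip install` no longer drops a top-level `tests` package into site-packages. A minimal sketch, not necessarily the exact patch that landed:

```python
from setuptools import find_packages, setup

setup(
    name="datasette",
    # exclude the test suite so it is not installed as a top-level package
    packages=find_packages(exclude=("tests", "tests.*")),
)
```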
459627549 | MDU6SXNzdWU0NTk2Mjc1NDk= | 523 | Show total/unfiltered row count when filtering | rixx 2657547 | closed | 0 | 2 | 2019-06-23T22:56:48Z | 2019-06-24T01:38:14Z | 2019-06-24T01:38:14Z | CONTRIBUTOR | When I'm seeing a filtered view of a table, I'd like to be able to see something like '2 rows where status != "closed" (of 1000 total)' to have a context for the data I'm seeing – e.g. currently my database is being filled by an importer, so this information would be super helpful. Since this information would be a performance hit, maybe something like '12 rows where status != "closed" (of ??? total)' with lazy-loading on-click(?) could be applied (Or via a "How many total?" tooltip, or …) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
465731062 | MDU6SXNzdWU0NjU3MzEwNjI= | 555 | Static mounts with relative paths not working | abdusco 3243482 | closed | 0 | 0 | 2019-07-09T11:38:35Z | 2019-07-11T16:13:22Z | 2019-07-11T16:13:22Z | CONTRIBUTOR | Datasette fails to serve files from static mounts that are created using relative paths |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
471292050 | MDU6SXNzdWU0NzEyOTIwNTA= | 563 | incorrect json url for row-level data? | rprimet 10352819 | closed | 0 | 0 | 2019-07-22T19:59:38Z | 2019-10-21T02:03:09Z | 2019-10-21T02:03:09Z | CONTRIBUTOR | While visiting this example page (linked from Datasette documentation), manually clicking on the link ("This data as .json") to the json data results in an error 500 The JSON page linked to from the documentation however is correct (the page address ends in This particular datasette demo page is now a few versions behind, but I was able to reproduce the issue using v0.29.2 and a downloaded copy of the demo database (and also with the current HEAD). Here is a stack trace: ``` Traceback (most recent call last): File "/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/utils/asgi.py", line 101, in call return await view(new_scope, receive, send) File "/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/utils/asgi.py", line 173, in view request, scope["url_route"]["kwargs"] File "/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/views/base.py", line 267, in get request, database, hash, correct_hash_provided, kwargs File "/home/romain/miniconda3/envs/dsbug/lib/python3.7/site-packages/datasette/views/base.py", line 399, in view_get request, database, hash, **kwargs TypeError: data() got an unexpected keyword argument 'as_format' ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
492153532 | MDU6SXNzdWU0OTIxNTM1MzI= | 573 | Exposing Datasette via Jupyter-server-proxy | psychemedia 82988 | closed | 0 | 3 | 2019-09-11T10:32:36Z | 2020-03-26T09:41:30Z | 2020-03-26T09:41:30Z | CONTRIBUTOR | It is possible to expose a running For example, using this demo Binder which has the server proxy installed, we can then upload a simple test database from the notebook homepage, from a Jupyter terminal install datasette and set it running against the test db on eg port 8001 and then view it via the path Clicking links results in 404s though because the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
527710055 | MDU6SXNzdWU1Mjc3MTAwNTU= | 640 | Nicer error message for heroku publish name clash | psychemedia 82988 | open | 0 | 1 | 2019-11-24T14:57:07Z | 2019-12-06T07:19:34Z | CONTRIBUTOR | If you try to publish to Heroku using no set name (i.e. the default
It would be neater if:
It may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
546961357 | MDU6SXNzdWU1NDY5NjEzNTc= | 656 | Display of the column definitions | JBPressac 6371750 | closed | 0 | 1 | 2020-01-08T16:16:53Z | 2020-01-20T14:17:11Z | 2020-01-20T14:14:33Z | CONTRIBUTOR | Hello, Is the nice display of headers and definitions at the top of https://fivethirtyeight.datasettes.com/fivethirtyeight-ac35616/antiquities-act%2Factions_under_antiquities_act configured in the metadata.json file? Thank you, |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
600120439 | MDU6SXNzdWU2MDAxMjA0Mzk= | 726 | Foreign key : case of a link to the associated row not displayed | JBPressac 6371750 | closed | 0 | 1 | 2020-04-15T08:31:27Z | 2020-04-27T22:05:47Z | 2020-04-27T22:05:46Z | CONTRIBUTOR | Hello, I use Datasette to publish tsv files linked together by foreign keys declared thanks to sqlite-utils. In one table, prelib_personne, the foreign keys are properly noticed by a link to the associated row (for instance ville_naissance_id is properly linked to prelib_ville). But every link to the foreign key prelib_oeuvre.id fails. For instance, prelib_ecritoeuvre has links to prelib_personne but none to prelib_oeuvre. Despite the schema: CREATE TABLE "prelib_ecritoeuvre" ( "id" INTEGER, "fonction_id" INTEGER, "oeuvre_id" INTEGER, "personne_id" INTEGER ,PRIMARY KEY ([id]), FOREIGN KEY(fonction_id) REFERENCES prelib_fonctionecritoeuvre(id), FOREIGN KEY(personne_id) REFERENCES prelib_personne(id), FOREIGN KEY(oeuvre_id) REFERENCES prelib_oeuvre(id) ); Would you have any clue to investigate the reason of this problem? Thanks, |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
620969465 | MDU6SXNzdWU2MjA5Njk0NjU= | 767 | Allow to specify a URL fragment for canned queries | rixx 2657547 | closed | 0 | Datasette 0.43 5471110 | 2 | 2020-05-19T13:17:42Z | 2020-05-27T21:52:25Z | 2020-05-27T21:52:25Z | CONTRIBUTOR | Canned queries are very useful to direct users to prepared data and views. I like to use them with charts using datasette-vega a lot, because people get a direct impression at first glance. datasette-vega doesn't show up by default though, and users have to click through to it. Also, datasette-vega does not always guess the best way to render columns correctly though, so it would be nice if I could specify a URL fragment in my canned queries to make sure people see what I want them to see. My current workaround is to include a fragement link in |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
640330278 | MDU6SXNzdWU2NDAzMzAyNzg= | 851 | Having trouble getting writable canned queries to work | abdusco 3243482 | closed | 0 | 1 | 2020-06-17T10:30:28Z | 2020-06-17T10:33:25Z | 2020-06-17T10:32:33Z | CONTRIBUTOR | Hey, I'm trying to get canned inserts to work. I have a DB with the following metadata: ```text sqlite> .mode line sqlite> select name, sql from sqlite_master where name like '%search%'; name = search sql = CREATE TABLE "search" ("id" INTEGER NOT NULL PRIMARY KEY, "name" VARCHAR(255) NOT NULL, "url" VARCHAR(255) NOT NULL) ```
```yaml
...
queries:
  add_search:
    sql: insert into search(name, url) VALUES (:name, :url),
    write: true
```
but when I submit the form via POST, I've attached a debugger to see where the error comes from, because Inside
this line raises an exception. That led me to believe I had something wrong with my SQL. But running the command in
So I'm a bit lost here.
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
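The `sql:` line above ends with a stray comma; whether or not that comma is the actual culprit, a quick dry run against the schema catches this kind of slip before the query goes into metadata. A small standalone sketch, unrelated to Datasette's own validation:

```python
import sqlite3

def validate_canned_query(sql: str, params: dict) -> None:
    """Dry-run a writable canned query so SQL syntax errors surface early."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        'CREATE TABLE "search" ("id" INTEGER NOT NULL PRIMARY KEY, '
        '"name" VARCHAR(255) NOT NULL, "url" VARCHAR(255) NOT NULL)'
    )
    conn.execute(sql, params)  # raises sqlite3.OperationalError on bad SQL

# The query from the metadata above, with the trailing comma removed:
validate_canned_query(
    "insert into search(name, url) VALUES (:name, :url)",
    {"name": "example", "url": "https://example.com"},
)
```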
642572841 | MDU6SXNzdWU2NDI1NzI4NDE= | 859 | Database page loads too slowly with many large tables (due to table counts) | abdusco 3243482 | open | 0 | 21 | 2020-06-21T14:23:17Z | 2021-08-25T21:59:55Z | CONTRIBUTOR | Hey, I have a database that I save in HTML from couple of web scrapers. There are around 200k+, 50+ rows in a couple of tables, with sqlite file weighing around 600MB. The app runs on a VPS with 2 core CPU, 4GB RAM and refreshing database page regularly takes more than 10 seconds. I was suspecting that counting tables was the culprit, but manually running I've looked at the source code. There's a check for index page for mutable databases larger than 100MB https://github.com/simonw/datasette/blob/799c5d53570d773203527f19530cf772dc2eeb24/datasette/views/index.py#L15 but this check is not performed for database page.
I've manually crippled now the page loads in <100ms. Is it possible to apply size check on database page too? /-/versions output{ "python": { "version": "3.8.0", "full": "3.8.0 (default, Oct 28 2019, 16:14:01) \n[GCC 8.3.0]" }, "datasette": { "version": "0.44" }, "asgi": "3.0", "uvicorn": "0.11.5", "sqlite": { "version": "3.22.0", "fts_versions": [ "FTS5", "FTS4", "FTS3" ], "extensions": { "json1": null }, "compile_options": [ "COMPILER=gcc-7.4.0", "ENABLE_COLUMN_METADATA", "ENABLE_DBSTAT_VTAB", "ENABLE_FTS3", "ENABLE_FTS3_PARENTHESIS", "ENABLE_FTS3_TOKENIZER", "ENABLE_FTS4", "ENABLE_FTS5", "ENABLE_JSON1", "ENABLE_LOAD_EXTENSION", "ENABLE_PREUPDATE_HOOK", "ENABLE_RTREE", "ENABLE_SESSION", "ENABLE_STMTVTAB", "ENABLE_UNLOCK_NOTIFY", "ENABLE_UPDATE_DELETE_LIMIT", "HAVE_ISNAN", "LIKE_DOESNT_MATCH_BLOBS", "MAX_SCHEMA_RETRY=25", "MAX_VARIABLE_NUMBER=250000", "OMIT_LOOKASIDE", "SECURE_DELETE", "SOUNDEX", "TEMP_STORE=1", "THREADSAFE=1" ] } } |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
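A sketch of one way to cap the cost of those counts: SQLite's progress handler can abort a `count(*)` that runs past a deadline, letting the page fall back to a placeholder for very large tables. This only illustrates the idea and is not Datasette's implementation:

```python
import sqlite3
import time

def count_with_time_limit(conn: sqlite3.Connection, table: str, limit_s: float = 0.05):
    """Return the row count for table, or None if counting exceeds limit_s seconds."""
    deadline = time.monotonic() + limit_s
    # The handler runs every 1000 virtual machine instructions;
    # returning a truthy value aborts the current statement.
    conn.set_progress_handler(lambda: time.monotonic() > deadline, 1000)
    try:
        return conn.execute(f"select count(*) from [{table}]").fetchone()[0]
    except sqlite3.OperationalError:
        return None  # interrupted: too slow to count, show a placeholder instead
    finally:
        conn.set_progress_handler(None, 1000)
```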
649702801 | MDU6SXNzdWU2NDk3MDI4MDE= | 888 | URLs in release notes point to 127.0.0.1 | abdusco 3243482 | closed | 0 | 1 | 2020-07-02T07:28:04Z | 2020-09-15T20:39:50Z | 2020-09-15T20:39:49Z | CONTRIBUTOR | Just a quick heads up: Release notes for 0.45 include urls that point to localhost. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
649907676 | MDU6SXNzdWU2NDk5MDc2NzY= | 889 | asgi_wrapper plugin hook is crashing at startup | amjith 49260 | closed | 0 | 3 | 2020-07-02T12:53:13Z | 2020-09-15T20:40:52Z | 2020-09-15T20:40:52Z | CONTRIBUTOR | Steps to reproduce:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
708185405 | MDU6SXNzdWU3MDgxODU0MDU= | 975 | Dependabot couldn't authenticate with https://pypi.python.org/simple/ | dependabot-preview[bot] 27856297 | closed | 0 | 0 | 2020-09-24T13:44:40Z | 2020-09-25T13:34:34Z | 2020-09-25T13:34:34Z | CONTRIBUTOR | Dependabot couldn't authenticate with https://pypi.python.org/simple/. You can provide authentication details in your Dependabot dashboard by clicking into the account menu (in the top right) and selecting 'Config variables'. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
721050815 | MDU6SXNzdWU3MjEwNTA4MTU= | 1019 | "Edit SQL" button on canned queries | jsfenfen 639012 | closed | 0 | 0.51 6026070 | 7 | 2020-10-14T00:51:39Z | 2020-10-23T19:44:06Z | 2020-10-14T03:44:23Z | CONTRIBUTOR | Feature request: Would it be possible to add an "edit this query" button on canned queries? Clicking it would open the canned query as an editable sql query. I think the intent is to have named parameters to allow this, but sometimes you just gotta rewrite it? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
751195017 | MDU6SXNzdWU3NTExOTUwMTc= | 1111 | Accessing a database's `.json` is slow for very large SQLite files | asg017 15178711 | open | 0 | 3 | 2020-11-26T00:27:27Z | 2021-01-04T19:57:53Z | CONTRIBUTOR | I have a SQLite DB that's pretty large, 23GB and something like 300 million rows. I expect that most queries I run on it will be slow, which is fine, but there are some things that Datasette does that makes working with the DB very slow. Specifically, when I access the
I suspect this is because a ```bash $ time sqlite3 out.db < <(echo "select count(*) from PageviewsHour;") 362794272 real 0m44.523s user 0m2.497s sys 0m6.703s ``` I'm using the
More than happy to debug further, or send a PR if you like one of the proposals above! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
752966476 | MDU6SXNzdWU3NTI5NjY0NzY= | 1114 | --load-extension=spatialite not working with datasetteproject/datasette docker image | danp 2182 | closed | 0 | 4 | 2020-11-29T17:35:20Z | 2022-01-20T21:29:42Z | 2020-11-29T17:37:45Z | CONTRIBUTOR | https://github.com/simonw/datasette/commit/6aa5886379dd9017215904fb28567b80018902f9 added the https://github.com/simonw/datasette/blob/12877d7a48e2aa28bb5e780f929a218f7265d849/datasette/utils/init.py#L56-L60 However, in the datasetteproject/datasette docker image the file is at This results in the example command here failing:
But it does work when given an explicit path:
Perhaps |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
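The suggested fix amounts to adding the Debian multiarch location to the list of paths that are searched. A sketch of that search, with an illustrative path list rather than Datasette's actual `SPATIALITE_PATHS` constant:

```python
from pathlib import Path
from typing import Optional

CANDIDATE_SPATIALITE_PATHS = (
    "/usr/local/lib/mod_spatialite.dylib",
    "/usr/local/lib/mod_spatialite.so",
    # the Debian multiarch path used by the official Docker image
    "/usr/lib/x86_64-linux-gnu/mod_spatialite.so",
)

def find_spatialite() -> Optional[str]:
    """Return the first mod_spatialite shared library that exists on disk."""
    for candidate in CANDIDATE_SPATIALITE_PATHS:
        if Path(candidate).exists():
            return candidate
    return None
```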
754178780 | MDU6SXNzdWU3NTQxNzg3ODA= | 1121 | Table actions cog is misaligned | abdusco 3243482 | closed | 0 | 1 | 2020-12-01T08:41:25Z | 2020-12-03T01:03:19Z | 2020-12-03T00:33:36Z | CONTRIBUTOR | At the moment it looks like this https://datasette-graphql-demo.datasette.io/github/repos Adding a few flex statements fixes the alignment and centers |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
756875827 | MDU6SXNzdWU3NTY4NzU4Mjc= | 1129 | Fix footer to the bottom of the page | abdusco 3243482 | open | 0 | 0 | 2020-12-04T07:28:07Z | 2020-12-04T16:04:29Z | CONTRIBUTOR | Footer doesn't stick to the bottom if the body content isn't long enough to reach the end of viewport. This can be fixed using flexbox. ```css body { min-height: 100vh; display: flex; flex-direction: column; } .content { flex-grow: 1; } ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
777677671 | MDU6SXNzdWU3Nzc2Nzc2NzE= | 1169 | Prettier package not actually being cached | benpickles 3637 | closed | 0 | 4 | 2021-01-03T17:04:41Z | 2021-01-04T19:52:34Z | 2021-01-04T19:52:33Z | CONTRIBUTOR | With the current configuration Prettier seems to be installed on every run - which can be seen from the output:
Prettier isn't explicitly being installed (it's surprising that actually installing the dependencies isn't included in the actions/cache docs) but it turns out that
I think there are a couple of approaches to tackling this, you could manually install/cache Prettier within the action, or add a I've tested the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
794554881 | MDU6SXNzdWU3OTQ1NTQ4ODE= | 1208 | A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper | kbaikov 4488943 | closed | 0 | 2 | 2021-01-26T20:56:28Z | 2021-03-11T16:15:49Z | 2021-03-11T16:15:49Z | CONTRIBUTOR | Your code is full of open files that are never closed, especially when you deal with reading/writing json/yaml files. If you run python with warnings enabled this problem becomes evident. This probably contributes to some memory leaks in long running datasettes if the GC will not 'collect' those resources properly. This is easily fixed by using a context manager instead of just using open:
In some newer parts of the code you use Path objects 'read_text' and 'write_text' functions which close the file properly and are prefered in some cases. If you want I can create a PR for all places i found this pattern in. Bellow is a fraction of places where i found a ResourceWarning: ```python update-docs-help.py: 20 actual = actual.replace("Usage: cli ", "Usage: datasette ") 21: open(docs_path / filename, "w").write(actual) 22 datasette\app.py: 210 ): 211: inspect_data = json.load((config_dir / "inspect-data.json").open()) 212 if immutables is None: 266 if config_dir and (config_dir / "settings.json").exists() and not config: 267: config = json.load((config_dir / "settings.json").open()) 268 self._settings = dict(DEFAULT_SETTINGS, **(config or {})) 445 self._app_css_hash = hashlib.sha1( 446: open(os.path.join(str(app_root), "datasette/static/app.css")) 447 .read() datasette\cli.py: 130 else: 131: out = open(inspect_file, "w") 132 loop = asyncio.get_event_loop() 459 if inspect_file: 460: inspect_data = json.load(open(inspect_file)) 461 ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
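For reference, the two remedies the report suggests look like this in isolation; both close the file deterministically instead of relying on the garbage collector (file names are the ones quoted above):

```python
import json
from pathlib import Path

# A context manager closes the handle as soon as the block exits...
def read_inspect_data(config_dir: Path) -> dict:
    with (config_dir / "inspect-data.json").open() as fp:
        return json.load(fp)

# ...and pathlib's read_text/write_text open and close in a single call.
def write_doc(docs_path: Path, filename: str, actual: str) -> None:
    (docs_path / filename).write_text(actual)
```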
797651831 | MDU6SXNzdWU3OTc2NTE4MzE= | 1212 | Tests are very slow. | kbaikov 4488943 | closed | 0 | 4 | 2021-01-31T08:06:16Z | 2021-02-19T22:54:13Z | 2021-02-19T22:54:13Z | CONTRIBUTOR | Working on my PR I noticed that tests are very slow. The plain pytest run took about 37 minutes for me.
However I could shave off about 10 minutes from that if I used pytest-xdist to parallelize execution.
I can create a PR to mention that in your documentation. This will be a simple change to add pytest-xdist to requirements and change a command to run pytest in the documentation. Does that make sense to you? After a bit more investigation it looks like pytest-xdist is not an answer. It creates a race condition for tests that try to clean the temp dir before a run. Profiling shows that most time is spent on conn.executescript(TABLES) in the make_app_client function. Which makes sense. Perhaps the better approach would be to look at the app_client fixture which is already session scoped, but not used by all test cases. And/or use conn = sqlite3.connect(":memory:") which is much faster. And/or truncate tables after each test case instead of deleting the file and re-creating them. I can take a look at which is the best approach if you give the go-ahead.
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
807433181 | MDU6SXNzdWU4MDc0MzMxODE= | 1224 | can't start immutable databases from configuration dir mode | camallen 295329 | closed | 0 | 0 | 2021-02-12T17:50:13Z | 2021-03-29T00:17:31Z | 2021-03-29T00:17:31Z | CONTRIBUTOR | Say I have a If I start datasette via I don't want to have to list out the databases by name, e.g. According to the docs outlined in https://docs.datasette.io/en/latest/settings.html?highlight=immutable#configuration-directory-mode this should be possible
I believe that if the However it appears the Click Multiple Options will return a tuple via https://github.com/simonw/datasette/blob/9603d893b9b72653895318c9104d754229fdb146/datasette/cli.py#L311-L317 The resulting tuple is passed to the Datasette app via If you think this is a bug and needs fixing, I am willing to make a PR to check for the empty Thoughts? Also - i'm loving Datasette, it truly is a wonderful tool, thank you :) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1224/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
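The proposed check is small: treat Click's empty tuple as "no -i flags given" and fall back to whatever configuration directory mode discovered. A sketch of that merge, with hypothetical database names:

```python
from typing import List, Sequence

def resolve_immutables(
    cli_immutable: Sequence[str], config_dir_immutables: Sequence[str]
) -> List[str]:
    """Click's multiple=True option yields () when -i/--immutable is unused,
    so an empty tuple should not override the config directory's list."""
    return list(cli_immutable) or list(config_dir_immutables)

# No -i flags on the command line, two databases found via inspect-data.json:
print(resolve_immutables((), ["one.db", "two.db"]))  # -> ['one.db', 'two.db']
```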
839008371 | MDU6SXNzdWU4MzkwMDgzNzE= | 1274 | Might there be some way to comment metadata.json? | mroswell 192568 | closed | 0 | 2 | 2021-03-23T18:33:00Z | 2021-03-23T20:14:54Z | 2021-03-23T20:14:54Z | CONTRIBUTOR | I don't know what license to use... Would be nice to be able to add a comment regarding that uncertainty in my metadata.json file. I like laktak's little video comment in favor of Human json (Hjson) https://stackoverflow.com/questions/244777/can-comments-be-used-in-json Hmmm... one of the commenters there said comments are allowed in yaml... so that's a good argument for yaml. Anyhow, just came to mind, and thought I'd mention it here. Looks like https://hjson.github.io/ has the details. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
843884745 | MDU6SXNzdWU4NDM4ODQ3NDU= | 1283 | advanced #export causes unexpected scrolling | mroswell 192568 | open | 0 | 0 | 2021-03-29T22:46:57Z | 2021-03-29T22:46:57Z | CONTRIBUTOR |
The user remedy seems to be to manually remove the "#export" from the URL. This behavior happens in my project, and in: https://covid-19.datasettes.com/covid/economist_excess_deaths (for instance) but not in this table: https://global-power-plants.datasettes.com/global-power-plants/global-power-plants |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
845794436 | MDU6SXNzdWU4NDU3OTQ0MzY= | 1284 | Feature or Documentation Request: Individual table as home page template | mroswell 192568 | open | 0 | 4 | 2021-03-31T03:56:17Z | 2021-11-04T03:15:01Z | CONTRIBUTOR | It would be great to have a sample showing how to move a single database that has a single table, to the index page. I'm trying it now, and find there is a real depth of Datasette and Python understanding that's required to be successful. I've got all the basic jinja concepts down... variables, template control structures, template inheritance, template overrides, css, html, the --template-dir and --static arguments, etc. But copying the table.html file to index.html doesn't work. There are undocumented functions and filters... I can figure some of them out (yay, url_builder.py and utils/init.py!) but it's a slog better handled by a much stronger Python developer. One sample would make a world of difference. The ideal form of this documentation would be a diff between the default table.html and how that would look if essentially moved to index.html. The use case is for everyone who wants to create a public-facing website to explore a single table at the root directory. (Maybe a second bit of documentation for people who have a single database with multiple tables.) (Hmm... might be cool to have a setting for that, where it happens automagically! If only one table, then home page is at the table level. if only one database, then home page is at the database level.... as an option.) I suppose I could ignore this, and somehow do this in the DNS settings once I hook up Vercel to a domain name, maybe.. and remove the breadcrumbs in table.html... but for now, a documentation request in the form of a diff... for viewing a single table (or a single database) at the root. (Actually, there's probably room for a whole expanded section on templates. Noticed some nice table metadata in one of the datasette examples, for instance... Hmm... maybe a whole library of solutions in one place... maybe a documentation hackathon! If that's of interest, of course it's a separate issue. ) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
847700726 | MDU6SXNzdWU4NDc3MDA3MjY= | 1285 | Feature Request or Plugin Request: Numeric Range Facets | mroswell 192568 | open | 0 | 0 | 2021-04-01T01:50:20Z | 2021-04-01T02:28:19Z | CONTRIBUTOR | It would be great to offer facets for numeric data ranges. The ranges could pull from typical GIS methods of creating choropleth maps. https://gisgeography.com/choropleth-maps-data-classification/ Of the following, for mapping, I've always preferred a Jenks Natural Breaks, or a cross between Jenks and Pretty breaks.
Here are some links for Natural Breaks, in case this method is unfamiliar.
Per that last link, there is a Jenks Python module... They also describe it as data-intensive for larger datasets. Maybe this is a good plugin idea. An example of equal Intervals would be 0 – < 10 10 – < 20 20 – < 30 30 – < 40 It's kind of confusing to have that less-than sign in there. it could also be displayed as: 0 – 10 10 – 20 20 – 30 30 – 40 But then it's not completely clear which category 10 is in, for instance. (Best to right-justify.. and use an "en dash" between numbers.) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
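The equal-interval scheme spelled out above is simple enough to sketch directly; Jenks natural breaks would need a dedicated library and is left aside here. A toy classifier producing the right-open "0 – <10" style labels suggested in the issue:

```python
def equal_interval_breaks(values, n_classes=4):
    """Return (lower, upper) bounds for equal-interval classes."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [(lo + i * width, lo + (i + 1) * width) for i in range(n_classes)]

# Labels in the "0 – <10" style (en dash, right-open bins):
for lower, upper in equal_interval_breaks(range(0, 41), n_classes=4):
    print(f"{lower:g} – <{upper:g}")
```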
849220154 | MDU6SXNzdWU4NDkyMjAxNTQ= | 1286 | Better default display of arrays of items | mroswell 192568 | open | 0 | 5 | 2021-04-02T13:31:40Z | 2021-06-12T12:36:15Z | CONTRIBUTOR | Would be great to have template filters that convert array fields to bullets and/or delimited lists upon table display:
Of course, the fields themselves would remain as facetable arrays. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
853672224 | MDU6SXNzdWU4NTM2NzIyMjQ= | 1294 | "You can check out any time you like. But you can never leave!" | mroswell 192568 | open | 0 | 0 | 2021-04-08T17:02:15Z | 2021-04-08T18:35:50Z | CONTRIBUTOR | (Feel free to rename this one.)
UPDATE: - It would be helpful to have a "Previous page" available for all but the first table page. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
855451460 | MDU6SXNzdWU4NTU0NTE0NjA= | 1297 | Documentation: json1, and introspection endpoints | mroswell 192568 | open | 0 | 0 | 2021-04-12T00:38:00Z | 2021-04-12T01:29:33Z | CONTRIBUTOR | https://docs.datasette.io/en/stable/facets.html notes that:
When I check -/versions I see two sections relevant to json1:
The ENABLE_JSON1 makes me think json1 is likely available. But the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
855476501 | MDU6SXNzdWU4NTU0NzY1MDE= | 1298 | improve table horizontal scroll experience | mroswell 192568 | open | 0 | 4 | 2021-04-12T01:55:16Z | 2022-08-30T21:11:49Z | CONTRIBUTOR | Wide tables aren't a huge problem if you know to click and drag right. But it's not at all obvious to do that. (it also tends to blue-select any content as it's dragging.) Depending on column widths, public users might entirely miss all the columns to the right. There is a scrollbar at the bottom of the table, but I'm displaying ALL my records because it's the only way for datasette-vega to make accurate charts. So that bottom scrollbar is likely to be missed. I wonder if some sort of javascript-y mouseover to an arrow might help, similar to those seen in image carousels. Ah: here's a perfect example:
Might be tricky to do that on a table, rather than a one-row carousel, but it's worth experimenting with. Another option is just to put the scrollbars at the top of the table, too. Meantime, I'm trying to build a button like the "View/hide all columns on https://salaries.news.baltimoresun.com/salaries-be494cf/2019+Maryland+state+salaries Might be nice to have that available by default, with settings in the metadata showing which are on by default. (I saw some other closed issues related to horizontal scrolling, and admit I don't entirely understand them. For instance, the animated gif at https://github.com/simonw/datasette/issues/998#issuecomment-714117534 confuses me. ) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1298/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
860625833 | MDU6SXNzdWU4NjA2MjU4MzM= | 1300 | Make row available to `render_cell` plugin hook | abdusco 3243482 | closed | 0 | 5 | 2021-04-18T10:14:37Z | 2022-07-07T16:34:05Z | 2022-07-07T16:31:22Z | CONTRIBUTOR | Original title: Generating URL for a row inside Hey, I am using Datasette to view a database that contains video metadata. It has BLOB columns that contain video thumbnails in JPG format (around 100-500KB per row). I've registered an output formatter that extends ```python from datasette.blob_renderer import render_blob async def render_jpg(datasette, database, rows, columns, request, table, view_name): response = await render_blob(datasette, database, rows, columns, request, table, view_name) response.content_type = "image/jpeg" response.headers["Content-Disposition"] = f'inline; filename="image.jpg"' return response @hookimpl def register_output_renderer(): return { "extension": "jpg", "render": render_jpg, "can_render": lambda: True, } ``` This works well. I can visit I want to display the image directly with an Datasette generates a link with But I have no way of getting the row inside the
Any pointers? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
860722711 | MDU6SXNzdWU4NjA3MjI3MTE= | 1301 | Publishing to cloudrun with immutable mode? | louispotok 5413548 | open | 0 | 1 | 2021-04-18T17:51:46Z | 2022-10-07T02:38:04Z | CONTRIBUTOR | I'm a bit confused about immutable mode and publishing to cloudrun. (I want to publish with immutable mode so that I can support database downloads.) Running
However, running When I just |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
860734722 | MDU6SXNzdWU4NjA3MzQ3MjI= | 1302 | Fix disappearing facets | mroswell 192568 | open | 0 | 0 | 2021-04-18T18:42:33Z | 2021-04-20T07:40:15Z | CONTRIBUTOR |
Since my site is devoted to whether disinfectants are Safer or Toxic, having the suggested facet disappear from the suggested facet list is very confusing* to end-users. This, along with a few other issues, unfortunately proved beyond my own programming ability to address. So I hired a Senior-level developer to address a number of issues, including this disappearing act.
I'm not sure how to do a pull request for this, because the plugin contains other functionality that goes beyond this bug. I wanted the facets sorted in a certain order (both in the suggested facet list, and the detail lists) (... the detail lists were hopping around all over the place before...) I wanted the duplicate facets removed (leaving only the one where you can facet by individual item in an array.) I wanted the arrays to be presented in a prettier fashion (I did that in the template... That could be moved over to the plugin at some point) I'm thinking it'll be very helpful if applicable parts of my project's plugin (sort_suggested_facets_plugin.py) will be able to be incorporated back into datasette, but I leave that to you to consider. (* The disappearing facet bug was especially confusing because I'm removing the filters and sql from the table page, at the request of the organization. The filters and sql detail created a lot of confusion for end users who try to find disinfectants used by Hospitals, for instance, as an '=' won't find them, since they are part of the Use_site array.) My disappearing-facet confusion was documented in my own issue: https://github.com/mroswell/list-N/issues/57 (addressed by the plugin). Other facet-related issues here: https://github.com/mroswell/list-N/issues/54 (addressed by the plugin); https://github.com/mroswell/list-N/issues/15 (addressed by template); https://github.com/mroswell/list-N/issues/53 (not yet addressed). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
864969683 | MDU6SXNzdWU4NjQ5Njk2ODM= | 1305 | Index view crashes when any database table is not accessible to actor | gfrmin 416374 | closed | 0 | 0 | 2021-04-22T13:44:22Z | 2021-06-02T04:26:29Z | 2021-06-02T04:26:29Z | CONTRIBUTOR | Because of https://github.com/simonw/datasette/blob/main/datasette/views/index.py#L63, the This error can be recreated with the fixtures.db if any table is hidden, e.g. by adding something like I'm not sure how to fix this error; perhaps by testing if the table is in the aforementioned |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
884952179 | MDU6SXNzdWU4ODQ5NTIxNzk= | 1320 | Can't use apt-get in Dockerfile when using datasetteproj/datasette as base | brandonrobertz 2670795 | closed | 0 | 4 | 2021-05-10T19:37:27Z | 2021-05-24T18:15:56Z | 2021-05-24T18:07:08Z | CONTRIBUTOR | The datasette base Docker image is super convenient, but there's one problem: if any of the plugins you install require additional system dependencies (e.g., xz, git, curl) then any attempt to use apt in said Dockerfile results in an explosion:
```
$ docker-compose build
Building server
[+] Building 9.9s (7/9)
 => [internal] load build definition from Dockerfile 0.0s
 => => transferring dockerfile: 666B 0.0s
 => [internal] load .dockerignore 0.0s
 => => transferring context: 34B 0.0s
 => [internal] load metadata for docker.io/datasetteproject/datasette:latest 0.6s
 => [base 1/4] FROM docker.io/datasetteproject/datasette@sha256:2250d0fbe57b1d615a8d6df0c9d43deb9533532e00bac68854773d8ff8dcf00a 0.0s
 => [internal] load build context 1.8s
 => => transferring context: 2.44MB 1.8s
 => CACHED [base 2/4] WORKDIR /datasette 0.0s
 => ERROR [base 3/4] RUN apt-get update && apt-get install --no-install-recommends -y git ssh curl xz-utils 9.2s
6 0.446 Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
6 0.449 Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
6 0.459 Get:3 http://httpredir.debian.org/debian sid InRelease [157 kB]
6 0.784 Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
6 0.790 Get:5 http://httpredir.debian.org/debian sid/main amd64 Packages [8626 kB]
6 1.003 Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
6 1.180 Get:7 http://security.debian.org/debian-security buster/updates/main amd64 Packages [286 kB]
6 7.095 Get:8 http://deb.debian.org/debian buster-updates/main amd64 Packages [10.9 kB]
6 8.058 Fetched 17.2 MB in 8s (2243 kB/s)
6 8.058 Reading package lists...
6 9.166 E: flAbsPath on /var/lib/dpkg/status failed - realpath (2: No such file or directory)
6 9.166 E: Could not open file - open (2: No such file or directory)
6 9.166 E: Problem opening
6 9.166 E: The package lists or status file could not be parsed or opened.
```
The problem seems to come from completely wiping out https://github.com/simonw/datasette/blob/1b697539f5b53cec3fe13c0f4ada13ba655c88c7/Dockerfile#L18
I've tested without removing the directory, and apt works as expected. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
893890496 | MDU6SXNzdWU4OTM4OTA0OTY= | 1332 | ?_facet_size=X to increase number of facets results on the page | mroswell 192568 | closed | 0 | 5 | 2021-05-18T02:40:16Z | 2021-05-27T16:13:07Z | 2021-05-23T00:34:37Z | CONTRIBUTOR | Is there a way to add a parameter to the URL to modify default_facet_size? Likewise, a way to produce a link on the three dots to expand to all items (or match previous number of items, or add x more)? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
913017577 | MDU6SXNzdWU5MTMwMTc1Nzc= | 1365 | pathlib.Path breaks internal schema | eyeseast 25778 | closed | 0 | 1 | 2021-06-07T01:40:37Z | 2021-06-21T15:57:39Z | 2021-06-21T15:57:39Z | CONTRIBUTOR | Ran into an issue while trying to build a plugin to render GeoJSON. I'm using pytest's tmp_path fixture. My test looked like this:
```python
@pytest.mark.asyncio
async def test_render_feature_collection(tmp_path):
    database = tmp_path / "test.db"
    datasette = Datasette([database])
```
I only ran into this while running tests, because passing in database paths from the CLI uses strings, but it's a weird error and probably something other people have run into. The fix is easy enough: Convert the path to a string and everything works. So this:
```python
@pytest.mark.asyncio
async def test_render_feature_collection(tmp_path):
    database = tmp_path / "test.db"
    datasette = Datasette([str(database)])
```
This could (probably, haven't tested) be fixed here by calling |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
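The coercion suggested in #1365 above boils down to normalizing whatever is passed in to a plain string before it reaches SQLite. A small illustrative sketch (the helper name is made up; the real fix would live wherever Datasette first stores the database path):

```python
import os
from pathlib import Path

def normalize_db_path(path):
    # os.fspath() returns the string form of anything implementing
    # __fspath__ (including pathlib.Path) and leaves plain strings untouched.
    return os.fspath(path)

assert normalize_db_path("test.db") == "test.db"
assert normalize_db_path(Path("data") / "test.db") == os.path.join("data", "test.db")
```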
939051549 | MDU6SXNzdWU5MzkwNTE1NDk= | 1388 | Serve using UNIX domain socket | aslakr 80737 | closed | 0 | 13 | 2021-07-07T16:13:37Z | 2021-07-11T01:18:38Z | 2021-07-10T23:38:32Z | CONTRIBUTOR | Would it be possible to make datasette serve using a UNIX domain socket, similar to Uvicorn's |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
950664971 | MDU6SXNzdWU5NTA2NjQ5NzE= | 1401 | unordered list is not rendering bullet points in description_html on database page | fgregg 536941 | open | 0 | 2 | 2021-07-22T13:24:18Z | 2021-10-23T13:09:10Z | CONTRIBUTOR | Thanks for this tremendous package, @simonw! In the However, on the database page on the deployed site, it is not rendering this as a bulleted list. Page here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5 The documentation gives an example of using an unordered list in a |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
951185411 | MDU6SXNzdWU5NTExODU0MTE= | 1402 | feature request: social meta tags | fgregg 536941 | open | 0 | 2 | 2021-07-23T01:57:23Z | 2021-07-26T19:31:41Z | CONTRIBUTOR | It would be very nice if Twitter, Slack, and other social media could make rich cards when people post a link to a datasette instance. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
959137143 | MDU6SXNzdWU5NTkxMzcxNDM= | 1415 | feature request: document minimum permissions for service account for cloudrun | fgregg 536941 | open | 0 | 4 | 2021-08-03T13:48:43Z | 2023-11-05T16:46:59Z | CONTRIBUTOR | Thanks again for such a powerful project. For deploying to cloudrun from github actions, I'd like to create a service account with minimal permissions. It would be great to document the minimum permissions that need to be set in IAM. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1415/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
959710008 | MDU6SXNzdWU5NTk3MTAwMDg= | 1419 | `publish cloudrun` should deploy a more recent SQLite version | fgregg 536941 | open | 0 | 3 | 2021-08-04T00:45:55Z | 2021-08-05T03:23:24Z | CONTRIBUTOR | I recently changed from deploying a datasette using I suspect this is because they are running different versions of sqlite3.
If so, it would be great to
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
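A quick way to compare the SQLite versions in play for #1419 above (general-purpose Python, not something from the issue itself); a running Datasette instance also reports its SQLite version at the /-/versions endpoint:

```python
import sqlite3

# Version of the SQLite library the local Python (and hence a locally
# installed Datasette) is linked against.
print(sqlite3.sqlite_version)
```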
988552851 | MDU6SXNzdWU5ODg1NTI4NTE= | 1456 | conda install results in non-functioning `datasette serve` due to out-of-date asgiref | ctb 51016 | open | 0 | 0 | 2021-09-05T16:59:55Z | 2021-09-05T16:59:55Z | CONTRIBUTOR | Over in https://github.com/ctb/2021-sourmash-datasette, I discovered that the following commands fail:
This appears to be because asgiref 3.3.4 doesn't have WebSocketScope, but later versions do - a simple
fixes the problem for me, at least to the point where I can run datasette and poke around as usual. I note that over in the conda-forge recipe, https://github.com/conda-forge/datasette-feedstock/blob/master/recipe/meta.yaml pins asgiref to < 3.4.0, but I'm not sure why - so I'm not sure how to best resolve this issue :). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
988553806 | MDU6SXNzdWU5ODg1NTM4MDY= | 1457 | suggestion: distinguish names in `--static` documentation | ctb 51016 | closed | 0 | 0 | 2021-09-05T17:04:27Z | 2021-10-14T18:39:55Z | 2021-10-14T18:39:55Z | CONTRIBUTOR | Over in https://docs.datasette.io/en/stable/custom_templates.html#serving-static-files, there is the slightly comical example command -
(now, with MORE STATIC!) It took me a while to sort out all the URLs and paths involved because I wasn't being very clever. But in the interests of simplification and distinction, I might suggest something like
I will submit a PR for your consideration. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
988556488 | MDU6SXNzdWU5ODg1NTY0ODg= | 1459 | suggestion: allow `datasette --open` to take a relative URL | ctb 51016 | open | 0 | 1 | 2021-09-05T17:17:07Z | 2021-09-05T19:59:15Z | CONTRIBUTOR | (soft suggestion because I'm not sure I'm using datasette right yet) Over at https://github.com/ctb/2021-sourmash-datasette, I'm playing around with datasette, and I'm creating some static pages to send people to the right facets. There may well be better ways of achieving this end goal, and I will find out if so, I'm sure! But regardless I think it might be neat to support an option to allow Happy to dig in and provide a PR if it's of interest. I'm not sure off the top of my head how to support an optional value to a parameter in argparse - the current |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
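For what it's worth, Datasette's CLI is built on Click rather than argparse, and Click does support options whose value is optional. A rough sketch of that pattern (the option name and default value here are illustrative, not Datasette's actual implementation):

```python
import click

@click.command()
# is_flag=False plus flag_value makes the value optional:
#   prog                 -> open_url is None (don't open a browser)
#   prog --open          -> open_url is "/" (the flag_value)
#   prog --open /db/tbl  -> open_url is "/db/tbl"
@click.option("--open", "open_url", is_flag=False, flag_value="/", default=None)
def serve(open_url):
    click.echo(f"open_url={open_url!r}")

if __name__ == "__main__":
    serve()
```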
989109888 | MDU6SXNzdWU5ODkxMDk4ODg= | 1460 | Override column metadata with metadata from another column | MichaelTiemannOSC 72577720 | open | 0 | 0 | 2021-09-06T12:13:33Z | 2021-09-06T12:13:33Z | CONTRIBUTOR | I have a table from the PUDL project (https://github.com/catalyst-cooperative/pudl) that looks like this:
Note that @catalyst-cooperative |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
991191951 | MDU6SXNzdWU5OTExOTE5NTE= | 1464 | clean checkout & clean environment has test failures | ctb 51016 | open | 0 | 6 | 2021-09-08T14:16:23Z | 2021-09-13T22:17:17Z | CONTRIBUTOR | I followed the instructions here, and even after running
This is with python 3.9.7 and lots of other packages, as in attached environment listing from |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
994390593 | MDU6SXNzdWU5OTQzOTA1OTM= | 1468 | Faceting for custom SQL queries | MichaelTiemannOSC 72577720 | closed | 0 | 2 | 2021-09-13T02:52:16Z | 2021-09-13T04:54:22Z | 2021-09-13T04:54:17Z | CONTRIBUTOR | Facets are awesome. But not when I need to join two tidy tables together. Or even when just explicitly running the default SQL query that simply lists all the rows and columns of a table (up to SIZE). That is to say, when I browse a table, I see facets: https://latest.datasette.io/fixtures/compound_three_primary_keys But when I run a custom query, I don't: Is there an idiom to cause custom SQL to come back with facet suggestions? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
999902754 | I_kwDOBm6k_c47mU4i | 1473 | base logo link visits `undefined` rather than href url | mroswell 192568 | open | 0 | 2 | 2021-09-18T04:17:04Z | 2021-09-19T00:45:32Z | CONTRIBUTOR | I have two connected sites: http://www.SaferOrToxic.org (a Hugo website) and: http://disinfectants.SaferOrToxic.org/disinfectants/listN (a datasette table page) The latter is linked as "The List" in the former's menu. (I'd love a prettier URL, but that's what I've got.) On: http://disinfectants.SaferOrToxic.org/disinfectants/listN ... all the other menu links should point back to: https://www.SaferOrToxic.org And they do! But the logo, for some reason--though it has an href pointing to: https://www.SaferOrToxic.org Keeps going to this instead: https://disinfectants.saferortoxic.org/disinfectants/undefined What is causing that? How can I fix it? In #1284 back in March, I was doing battle with the index.html template, in a still unresolved issue. (I wanted only a single table page at the root.) But I thought, well, if I can't resolve that, at least I could just point the main website to the datasette page ("The List,") and then have the List point back to the home website. The menu hrefs to https://www.SaferOrToxic.org work just fine, exactly as they should, from the datasette page. Even the Home link works properly. But the logo link keeps rewriting to: https://disinfectants.saferortoxic.org/disinfectants/undefined This is the HTML:
Is this somehow related to cloudflare? Or something in the datasette code? I'm starting to think it's a cloudflare issue. Can I at least rule out it being a datasette issue? My repository is here: https://github.com/mroswell/list-N (BTW, I couldn't figure out how to reference a local image, either, on the datasette side, which is why I'm using the image from the www home page.) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1006781949 | I_kwDOBm6k_c48AkX9 | 1478 | Documentation Request: Feature alternative ID instead of default ID | mroswell 192568 | open | 0 | 0 | 2021-09-24T19:56:13Z | 2021-09-25T16:18:54Z | CONTRIBUTOR | My data already has an ID that comes from a federal agency. Would love to have documentation on how to modify the template to:
- Remove the generated ID from the table
- Link the federal ID to the detail page
- and to ensure that the JSON file uses that as the ID.

I'd be happy to include the database ID in the export, but not as a key. I don't want to remove the ID from the database, though, because my experience with the federal agency is that data often has anomalies. I don't want all hell to break loose if they end up applying the same ID to multiple rows (which they haven't done yet). I just don't want it to display in the table or the data exports. Perhaps this isn't a template issue, maybe more of a db manipulation... Margie |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1015646369 | I_kwDOBm6k_c48iYih | 1480 | Exceeding Cloud Run memory limits when deploying a 4.8G database | ghing 110420 | open | 0 | 9 | 2021-10-04T21:20:24Z | 2022-10-07T04:39:10Z | CONTRIBUTOR | When I try to deploy a 4.8G SQLite database to Google Cloud Run, I get this error message:
Unfortunately, the maximum amount of memory that can be allocated to an instance is 8192M. Naively profiling the memory usage of running Datasette with this database locally on my MacBook shows the following memory usage (using Activity Monitor) when I just start up Datasette locally:
I'm trying to understand if there's a query or other operation that gets run during container deployment that causes memory use to be so large and if this can be avoided somehow. This is somewhat related to #1082, but on a different platform, so I decided to open a new issue. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1480/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1023243105 | I_kwDOBm6k_c48_XNh | 1486 | pipx installation instructions for plugins don't reference pipx inject | RhetTbull 41546558 | closed | 0 | 0 | 2021-10-12T00:43:42Z | 2021-10-13T21:09:11Z | 2021-10-13T21:09:11Z | CONTRIBUTOR | The datasette installation instructions discuss how to install with pipx, how to upgrade with pipx, and how to upgrade plugins with pipx but do not mention how to install a plugin with pipx. You discussed this on your blog, but it looks like this didn't make it in when you updated the docs for pipx (#756). I'll submit a PR shortly to fix this. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1059549523 | I_kwDOBm6k_c4_J3FT | 1526 | Add to vercel.json, rather than overwriting it. | mroswell 192568 | closed | 0 | 2 | 2021-11-22T00:47:12Z | 2021-11-22T04:49:45Z | 2021-11-22T04:13:47Z | CONTRIBUTOR | I'd like to be able to add to vercel.json. But Datasette overwrites whatever I put in that file. I originally reported this here: https://github.com/simonw/datasette-publish-vercel/issues/51 In that case, I wanted to do a rewrite... and now I need to do 301 redirects (because we had to rename our site). Can this be addressed? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1060631257 | I_kwDOBm6k_c4_N_LZ | 1528 | Add new `"sql_file"` key to Canned Queries in metadata? | asg017 15178711 | open | 0 | 3 | 2021-11-22T21:58:01Z | 2022-06-10T03:23:08Z | CONTRIBUTOR | Currently for canned queries, you have to inline SQL in your
This works fine, but for a few reasons, I usually have my canned queries already written in separate So, I'd like to see a new
Both of these would work in the exact same way, where Datasette would instead open + include A few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml:
Let me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
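Until something like sql_file exists in core, roughly the same effect can be had from a plugin via the documented canned_queries() hook. A sketch under an assumed file layout (one .sql file per query in a queries/<database>/ directory; the later datasette-query-files plugin, referenced further down this page, takes a similar approach):

```python
from pathlib import Path

from datasette import hookimpl

@hookimpl
def canned_queries(datasette, database):
    queries = {}
    # Each queries/<database>/<name>.sql file becomes a canned query <name>.
    for sql_path in Path("queries", database).glob("*.sql"):
        queries[sql_path.stem] = {"sql": sql_path.read_text()}
    return queries
```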
1076388044 | I_kwDOBm6k_c5AKGDM | 1547 | Writable canned queries fail to load custom templates | wragge 127565 | closed | 0 | Datasette 0.60 7571612 | 6 | 2021-12-10T03:31:48Z | 2022-01-13T22:27:59Z | 2021-12-19T21:12:00Z | CONTRIBUTOR | I've created a canned query with
My non-writeable canned queries pick up custom templates as expected, and if I look at their HTML I see the canned query name added to the templates considered (the canned query here is
So it seems like the writeable canned query is behaving differently for some reason. Is it an authentication thing? I'm using the built in Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1077620955 | I_kwDOBm6k_c5AOzDb | 1549 | Redesign CSV export to improve usability | fgregg 536941 | open | 0 | Datasette 1.0 3268330 | 5 | 2021-12-11T19:02:12Z | 2022-04-04T11:17:13Z | CONTRIBUTOR | Original title: Set content type for CSV so that browsers will attempt to download instead opening in the browser Right now, if the user clicks on the CSV related to a <s>table or a</s> query, the response header for the content type is "content-type: text/plain; charset=utf-8" Most browsers will try to open a file with this content-type in the browser. This is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program they next need to save the page through their browser. It would be great if the response header could be something like
which would lead browsers to open a download dialog. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
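In the meantime, a plugin can force the download dialog by adding a Content-Disposition header itself. A sketch using the documented asgi_wrapper() hook (matching on the .csv path suffix is an assumption about how you would want to scope it):

```python
from datasette import hookimpl

@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def csv_as_attachment(scope, receive, send):
            async def wrapped_send(event):
                if (
                    scope["type"] == "http"
                    and scope.get("path", "").endswith(".csv")
                    and event["type"] == "http.response.start"
                ):
                    # Replace any existing content-disposition header.
                    headers = [
                        (k, v)
                        for k, v in event.get("headers", [])
                        if k.lower() != b"content-disposition"
                    ]
                    headers.append((b"content-disposition", b"attachment"))
                    event = {**event, "headers": headers}
                await send(event)

            await app(scope, receive, wrapped_send)

        return csv_as_attachment

    return wrap
```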
1078702875 | I_kwDOBm6k_c5AS7Mb | 1552 | Allow to set `facets_array` in metadata (like current `facets`) | davidbgk 3556 | closed | 0 | Datasette 0.60 7571612 | 9 | 2021-12-13T16:00:44Z | 2022-01-13T22:26:15Z | 2021-12-16T18:47:48Z | CONTRIBUTOR | For now, you can set a I'm new to datasette, and I'm willing to help with a PR if that is not already implemented and I missed it! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
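For reference, later Datasette versions support an array form in facet metadata, which is what this request was after. A sketch of that configuration (key layout from memory, so treat it as an assumption), expressed as the metadata dict you can hand to Datasette's Python API:

```python
from datasette.app import Datasette

metadata = {
    "databases": {
        "mydb": {
            "tables": {
                "mytable": {
                    # a plain column facet plus an array facet on a JSON list column
                    "facets": ["category", {"array": "tags"}]
                }
            }
        }
    }
}

ds = Datasette(["mydb.db"], metadata=metadata)
```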
1079111498 | I_kwDOBm6k_c5AUe9K | 1553 | if csv export is truncated in non streaming mode set informative response header | fgregg 536941 | open | 0 | 3 | 2021-12-13T22:50:44Z | 2021-12-16T19:17:28Z | CONTRIBUTOR | Streaming mode is currently not enabled for custom queries, so the queries will be truncated to the max row limit. It would be great if, when a response is truncated, a header signalling that were set on the response. I need to write some pagination code for getting full results back for a custom query, and it would make the code much better if I could reliably know when there is nothing more to limit/offset. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
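Until such a header exists, the pagination described in #1553 above has to be done client-side with limit/offset. A rough sketch against the JSON API (base URL, table, and page size are placeholders; ?_shape=array is a real Datasette parameter that returns a plain JSON array of rows):

```python
import httpx

BASE = "https://example.com/mydb.json"          # placeholder instance/database
SQL = "select * from mytable order by rowid"    # placeholder query
PAGE = 1000

rows, offset = [], 0
while True:
    params = {"sql": f"{SQL} limit {PAGE} offset {offset}", "_shape": "array"}
    page = httpx.get(BASE, params=params).json()
    rows.extend(page)
    if len(page) < PAGE:
        # A short page means there is nothing more to fetch -- exactly the
        # signal the requested response header would make explicit.
        break
    offset += PAGE

print(len(rows))
```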
1082765654 | I_kwDOBm6k_c5AibFW | 1561 | add hash id to "_memory" url if hashed url mode is turned on and crossdb is also turned on | fgregg 536941 | closed | 0 | 3 | 2021-12-17T00:45:12Z | 2022-03-19T04:45:40Z | 2022-03-19T04:45:40Z | CONTRIBUTOR | If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. One way that it could work is to have the _memory hash be a hash of all the individual databases. Otherwise, crossdb queries can get quite out of date if using aggressive caching. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1089529555 | I_kwDOBm6k_c5A8ObT | 1581 | when hashed urls are turned on, the _memory db has improperly long-lived cache expiry | fgregg 536941 | closed | 0 | 1 | 2021-12-28T00:05:48Z | 2022-03-24T04:08:18Z | 2022-03-24T04:08:18Z | CONTRIBUTOR | if hashed_urls are on, then a -000 suffix is added to the in particular, this header is set:
this is not appropriate because the Either the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1090810196 | I_kwDOBm6k_c5BBHFU | 1583 | consider adding deletion step of cloudbuild artifacts to gcloud publish | fgregg 536941 | open | 0 | 1 | 2021-12-30T00:33:23Z | 2021-12-30T00:34:16Z | CONTRIBUTOR | Right now, as part of the publish process, images and other artifacts are stored to gcloud's cloud storage before being deployed to cloudrun. After successfully deploying, it would be nice if the script deleted these artifacts. Otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1096536240 | I_kwDOBm6k_c5BW9Cw | 1586 | run analyze on all databases as part of start up or publishing | fgregg 536941 | open | 0 | 1 | 2022-01-07T17:52:34Z | 2022-02-02T07:13:37Z | CONTRIBUTOR | Running It might be nice if the analyze was run as part of the start up of "serve" or "publish". |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
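What the requested start-up step in #1586 above would boil down to, in plain SQLite terms (file name is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("mydb.db")
# ANALYZE populates the sqlite_stat1 table, which the query planner uses
# to pick better indexes for subsequent queries.
conn.execute("ANALYZE")
conn.commit()
conn.close()
```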
1105916061 | I_kwDOBm6k_c5B6vCd | 1601 | Add KNN and data_licenses to hidden tables list | eyeseast 25778 | closed | 0 | 5 | 2022-01-17T14:19:57Z | 2022-01-20T21:29:44Z | 2022-01-20T04:38:54Z | CONTRIBUTOR | They're generated by Spatialite and not very interesting in most cases. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1108671952 | I_kwDOBm6k_c5CFP3Q | 1605 | Scripted exports | eyeseast 25778 | open | 0 | 10 | 2022-01-19T23:45:55Z | 2022-11-30T15:06:38Z | CONTRIBUTOR | Posting this while I'm thinking about it: I mentioned at the end of this thread that I'm usually doing I used to use a tool called datafreeze to do scripted exports, but that project looks dead now. The ergonomics of it are pretty nice, though, and the This is related to the idea for |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1114147905 | I_kwDOBm6k_c5CaIxB | 1612 | Move canned queries closer to the SQL input area | jsfenfen 639012 | closed | 0 | Datasette 1.0 3268330 | 5 | 2022-01-25T17:06:39Z | 2022-03-19T04:04:49Z | 2022-01-25T18:34:21Z | CONTRIBUTOR | Original title: Consider placing example queries above the sql input? Hi! Have been enjoying deploying ad hoc datasettes for collaborators to pick over! I keep finding myself manually "fixing" the database.html template so that the "example queries" (canned queries) appear directly over the sql box? So they are sorta more a suggestion for collaborators who aren't inclined to write their own queries? My sense is any time I go to the trouble of writing canned queries my users should see 'em? (( I have also considered a client-side reactive-ish option where selecting a query just places the raw SQL in the box and doesn't execute it, but this seems to end up being an inconvenience, rather than a teaching tool. )) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1163369515 | I_kwDOBm6k_c5FV5wr | 1655 | query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data | fgregg 536941 | open | 0 | 8 | 2022-03-09T00:56:40Z | 2023-10-17T21:53:17Z | CONTRIBUTOR | The query result page is using about 400 MB in Firefox 97 on Mac OS X. If you download the HTML for the page, it's about 11 MB, and if you get the CSV for the data it's about 1 MB. It's using over 1 GB on Chrome 99. I found this because I was trying to figure out why editing the SQL was getting very slow. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1181432624 | I_kwDOBm6k_c5Gazsw | 1688 | [plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins? | hydrosquall 9020979 | closed | 0 | 3 | 2022-03-26T01:17:44Z | 2022-03-27T01:01:14Z | 2022-03-26T21:34:47Z | CONTRIBUTOR | I'm trying to make a small plugin that depends on static assets, by following the guide here. I made a I am trying to follow the example of Unfortunately, datasette doesn't seem to be able to find my assets. Input:
Output: I suspect this issue might go away if I move away from "one-off" plugin mode, but it's been a while since I created a new python package so I'm not sure how much work there is to go between "one off" and "packaged for PyPI". I'd like to try to avoid needing to repackage a new
Thanks for your help! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1182227211 | I_kwDOBm6k_c5Gd1sL | 1692 | [plugins][feature request]: Support additional script tag attributes when loading custom JS | hydrosquall 9020979 | open | 0 | 2 | 2022-03-27T01:16:03Z | 2022-03-30T06:14:51Z | CONTRIBUTOR | Motivation
To be able to support legacy browsers without slowing down users with modern browsers, I would like to be able to set additional HTML attributes on the fallback script tag, e.g.:
```html
<script type="module" src="/index.my-es-module-bundle.js"></script>
<script src="/index.my-legacy-fallback-bundle.js" nomodule="" defer></script>
```

Proposal

To achieve this, I propose additional optional properties to the API accepted by the Under this API, I'd write something like this to get the above HTML rendered in Datasette.
Resources
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
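For context on #1692 above: the existing extra_js_urls() plugin hook already accepts a dict form with a module flag; the extra attributes asked for here (nomodule, defer, and so on) are the new part. A sketch of what the hook supports today:

```python
from datasette import hookimpl

@hookimpl
def extra_js_urls():
    return [
        # "module": True causes Datasette to emit <script type="module" ...>
        {"url": "/index.my-es-module-bundle.js", "module": True},
    ]
```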
1193090967 | I_kwDOBm6k_c5HHR-X | 1699 | Proposal: datasette query | eyeseast 25778 | open | 0 | 6 | 2022-04-05T12:36:43Z | 2022-04-11T01:32:12Z | CONTRIBUTOR | I started sketching out a plugin to add a At its most basic, it will write the results of a query to STDOUT.
This isn't much improvement over using sqlite-utils. To make better use of datasette and its ecosystem, run For example, using the metadata file from alltheplaces-datasette:
That query would be good to get as CSV, and we can auto-discover metadata and databases in the current directory:
In this case, If a query takes parameters, I can pass them in at runtime, using the
I'm very interested in feedback on this, including whether it should be a plugin or in Datasette core. (I don't have a strong opinion about this, but I'm prototyping it as a plugin to start.) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1198822563 | I_kwDOBm6k_c5HdJSj | 1706 | [feature] immutable mode for a directory, not just individual sqlite file | hydrosquall 9020979 | open | 0 | 4 | 2022-04-10T00:50:57Z | 2022-12-09T19:11:40Z | CONTRIBUTOR | Motivation
Proposal

Immutable flag works for both single files and directories
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1706/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1200224939 | I_kwDOBm6k_c5Hifqr | 1707 | [feature] expanded detail page | fgregg 536941 | open | 0 | 1 | 2022-04-11T16:29:17Z | 2022-04-11T16:33:00Z | CONTRIBUTOR | Right now, if you click on the detail page for a row you get the info for the row and links to related tables: It would be very cool if there was an option to expand the rows of the related tables from within this detail view. If you had that, then datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidated detail view of what you know about that entity. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1218133366 | I_kwDOBm6k_c5Imz12 | 1728 | Writable canned queries fail with useless non-error against immutable databases | wragge 127565 | closed | 0 | Datasette 0.62 8303187 | 13 | 2022-04-28T03:10:34Z | 2022-08-14T16:34:40Z | 2022-08-14T16:34:40Z | CONTRIBUTOR | I've been banging my head against a wall for a while and would appreciate any pointers...
Everything seems to be the same, but the canned query works perfectly when run locally, and fails when I try it on Cloudrun. I'm redirected back to the canned query page and the db is not changed. There's nothing in the Cloudstor logs to indicate an error. Any clues as to where I should be looking for the problem? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1292368833 | I_kwDOBm6k_c5NB_vB | 1764 | Keep track of config_dir in directory mode (for plugins) | eyeseast 25778 | closed | 0 | 0 | 2022-07-03T16:57:49Z | 2022-07-18T01:12:45Z | 2022-07-18T01:12:45Z | CONTRIBUTOR | I started working on using Here's the reference issue: https://github.com/eyeseast/datasette-query-files/issues/4 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1334628400 | I_kwDOBm6k_c5PjNAw | 1779 | google cloudrun updated their limits on maxscale based on memory and cpu count | fgregg 536941 | closed | 0 | Datasette 0.62 8303187 | 13 | 2022-08-10T13:27:21Z | 2022-08-14T19:42:59Z | 2022-08-14T17:07:34Z | CONTRIBUTOR | If you don't set an explicit limit on container scaling, then Google defaults to 100. Google recently updated the limits on container scaling, such that if you set up datasette to use more memory or cpu, then you need to set the maxScale argument much smaller than 100. Would be nice if Log of a failing publish run.
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
1339663518 | I_kwDOBm6k_c5P2aSe | 1784 | Include "entrypoint" option on `--load-extension`? | asg017 15178711 | closed | 0 | 2 | 2022-08-16T00:22:57Z | 2022-08-23T18:34:31Z | 2022-08-23T18:34:31Z | CONTRIBUTOR | Problem

SQLite extensions have the option to define multiple "entrypoints" in each loadable extension. For example, the upcoming version of (Similar multiple entrypoints will also be added for sqlite-http). The

Proposal

I want there to be a new command line option of the Then, under the hood, this line of code: Would look something like this:
One potential problem: For backward compatibility, I'm not sure if Click allows cli flags to have variable number of options ("arity"). So I guess it could also use a
Or maybe even a new flag name?
Personally I prefer the
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
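For background on the SQLite side of #1784 above: the load_extension() SQL function already takes an optional second argument naming the entrypoint, which is what the proposed CLI option would expose. A sketch using the stdlib sqlite3 module (the extension path and entrypoint name are placeholders, and it assumes a Python build with extension loading enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
# The second argument to load_extension() selects a specific entrypoint
# exported by the shared library.
conn.execute(
    "SELECT load_extension(?, ?)",
    ["./my_extension.so", "sqlite3_extra_entrypoint_init"],
)
```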
1377811868 | I_kwDOBm6k_c5SH72c | 1813 | missing next and next_url in JSON responses from an instance deployed on Fly | adipasquale 883348 | closed | 0 | 1 | 2022-09-19T11:32:34Z | 2022-09-19T11:34:45Z | 2022-09-19T11:34:45Z | CONTRIBUTOR | 👋 thank you for an incredibly useful project! I have noticed that my deployed instance on Fly does not include the This is publicly accessible here: However, when I run the datasette server locally with the same data I get these next keys for the exact same query: I am wondering if I've missed some config or something specific to deployments on Fly.io? I am running datasette v0.62, without any specific config:
as visible in the Makefile. The very limited codebase is public but the sqlite db is not versioned yet because it is too large. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1385026210 | I_kwDOBm6k_c5SjdKi | 1819 | Preserve query on timeout | danp 2182 | closed | 0 | 3 | 2022-09-25T13:32:31Z | 2022-09-26T23:16:15Z | 2022-09-26T23:06:06Z | CONTRIBUTOR | If a query hits the timeout it shows a message like:
But the query is lost. Hitting the browser back button shows the query before the one that errored. It would be nice if the query that errored was preserved for more tweaking. This would make it similar to how "invalid syntax" works since #1346 / #619. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);