issues
26 rows where type = "issue" and user = 536941 sorted by updated_at descending
Each issue below is shown as a metadata line (id | node_id | number | title | user | state | milestone if any | comments | created_at | updated_at | closed_at if closed | author_association | repo | reactions if any | state_reason if closed) followed by the issue body. Rows are sorted by updated_at descending.
2028698018 | I_kwDOBm6k_c5463mi | #2213 | feature request: gzip compression of database downloads | fgregg 536941 | open | comments: 1 | created_at: 2023-12-06T14:35:03Z | updated_at: 2023-12-06T15:05:46Z | CONTRIBUTOR | datasette 107914493

At the bottom of database pages, Datasette gives users the opportunity to download the underlying SQLite database. It would be great if that could be served gzip-compressed. This is similar to #1213, but in my case I don't need Datasette to compress HTML and JSON because my CDN layer does that for me. Cloudflare, at least, will not compress a MIME type of "application" (see the list of compressed content types: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/).
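A minimal sketch of the idea outside Datasette itself, with a hypothetical file name: pre-compress the database file and serve it with a `Content-Encoding: gzip` header so browsers decompress it transparently.

```bash
# Sketch only: compress the SQLite file once, keeping the original.
gzip --keep --best fixtures.db   # writes fixtures.db.gz alongside fixtures.db

# The server would then need to send headers along these lines for the .gz copy:
#   Content-Type: application/octet-stream
#   Content-Encoding: gzip
```
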
959137143 | MDU6SXNzdWU5NTkxMzcxNDM= | #1415 | feature request: document minimum permissions for service account for cloudrun | fgregg 536941 | open | comments: 4 | created_at: 2021-08-03T13:48:43Z | updated_at: 2023-11-05T16:46:59Z | CONTRIBUTOR | datasette 107914493 | reactions: 1 (+1)

Thanks again for such a powerful project. For deploying to Cloud Run from GitHub Actions, I'd like to create a service account with minimal permissions. It would be great to document the minimum permissions that need to be set in IAM.
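Not the documented minimums the issue asks for, but a plausible starting point, assuming `datasette publish cloudrun` drives Cloud Build, Cloud Run, and Cloud Storage; project and account names are hypothetical, and the roles should be verified against Google's docs.

```bash
# Hypothetical sketch: create a deploy service account and grant roles
# commonly needed to build and deploy to Cloud Run.
gcloud iam service-accounts create datasette-deploy --project my-project
for role in roles/run.admin roles/cloudbuild.builds.editor \
            roles/storage.admin roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:datasette-deploy@my-project.iam.gserviceaccount.com" \
    --role "$role"
done
```
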
1163369515 | I_kwDOBm6k_c5FV5wr | #1655 | query result page is using 400 MB of browser memory, 40x the size of the HTML page and 400x the size of the CSV data | fgregg 536941 | open | comments: 8 | created_at: 2022-03-09T00:56:40Z | updated_at: 2023-10-17T21:53:17Z | CONTRIBUTOR | datasette 107914493

The query result page is using about 400 MB in Firefox 97 on macOS. If you download the HTML for the page, it's about 11 MB, and if you get the CSV for the data it's about 1 MB. It's using over 1 GB on Chrome 99. I found this because I was trying to figure out why editing the SQL was getting very slow.
1822813627 | I_kwDOBm6k_c5spe27 | #2108 | some (many?) SQL syntax errors are not throwing errors with a .csv endpoint | fgregg 536941 | open | comments: 0 | created_at: 2023-07-26T16:57:45Z | updated_at: 2023-07-26T16:58:07Z | CONTRIBUTOR | datasette 107914493

Here's a CTE query that should always fail with a syntax error: `with foo as (nonsense) select * from foo;`

When we make this query against the default endpoint, we do indeed get a 400 status code and the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B

But if we use the CSV endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B

The same happens with other malformed queries. By contrast, Datasette catches this bad SQL (`slect a from foo;`) at both endpoints:
https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B
https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B
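A reproduction sketch using the URLs from the report: compare status codes for the same invalid SQL on the HTML and CSV endpoints (per the report, 400 versus 200).

```bash
BAD_SQL='with+foo+as+%28nonsense%29+select+*+from+foo%3B'
BASE='https://global-power-plants.datasettes.com/global-power-plants'
curl -s -o /dev/null -w '%{http_code}\n' "${BASE}?sql=${BAD_SQL}"      # expected: 400
curl -s -o /dev/null -w '%{http_code}\n' "${BASE}.csv?sql=${BAD_SQL}"  # reported: 200
```
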
1044267332 | I_kwDOCGYnMM4-PkFE | #336 | sqlite-utils transform --column-order mangles columns of type "timestamp" | fgregg 536941 | closed | comments: 1 | created_at: 2021-11-04T01:15:38Z | updated_at: 2023-05-08T21:13:38Z | closed_at: 2023-05-08T21:13:38Z | CONTRIBUTOR | sqlite-utils 140912432 | state_reason: completed

Reproducible code below (a reconstruction is sketched after this record).
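The original snippet is not preserved above; a hedged reconstruction of the kind of reproduction the title describes, with hypothetical table and column names:

```bash
# Create a table whose column is declared with the non-standard "timestamp"
# type, reorder columns with transform, then compare the schemas.
sqlite3 test.db "CREATE TABLE events (id INTEGER PRIMARY KEY, created TIMESTAMP);"
sqlite3 test.db ".schema events"
sqlite-utils transform test.db events -o created -o id
sqlite3 test.db ".schema events"   # per the report, the timestamp type gets mangled
```
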
1595340692 | I_kwDOCGYnMM5fFveU | #530 | add ability to configure "on delete" and "on update" attributes of foreign keys | fgregg 536941 | open | comments: 2 | created_at: 2023-02-22T15:44:14Z | updated_at: 2023-05-08T20:39:01Z | CONTRIBUTOR | sqlite-utils 140912432

SQLite supports these, and it would be quite nice to be able to add them with sqlite-utils.
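For context, the SQL these attributes produce; the sqlite-utils interface for setting them is the open question here, so no flag is shown.

```bash
# The "on delete" / "on update" clauses the issue refers to, in plain SQL:
sqlite3 demo.db "
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child (
  id INTEGER PRIMARY KEY,
  parent_id INTEGER REFERENCES parent(id)
    ON DELETE CASCADE
    ON UPDATE CASCADE
);"
```
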
1560651350 | I_kwDOCGYnMM5dBaZW | #523 | Feature request: trim all leading and trailing white space for all columns for all tables in a database | fgregg 536941 | open | comments: 1 | created_at: 2023-01-28T02:40:10Z | updated_at: 2023-01-28T02:41:14Z | CONTRIBUTOR | sqlite-utils 140912432

It's pretty common that I need to trim leading or trailing white space from lots of columns in a database as part of an initial ETL. I use the following recipe a lot, and it would be great to include this functionality in sqlite-utils (the kind of recipe involved is sketched after this record).
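The recipe itself is not shown above; a hedged equivalent in plain SQL, with hypothetical table and column names:

```bash
# Trim leading and trailing whitespace in place, one column at a time.
sqlite3 data.db "UPDATE mytable SET mycolumn = trim(mycolumn);"

# Generate one UPDATE statement per text column of a table (declared types
# may vary, e.g. VARCHAR, so adjust the WHERE clause to taste):
sqlite3 data.db "SELECT 'UPDATE mytable SET ' || name || ' = trim(' || name || ');'
                 FROM pragma_table_info('mytable') WHERE type = 'TEXT';"
```
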
1509783085 | I_kwDOBm6k_c5Z_XYt | #1969 | sql-formatter javascript is now not working with Cloudflare Rocket Loader | fgregg 536941 | open | comments: 0 | created_at: 2022-12-23T21:14:06Z | updated_at: 2023-01-10T01:56:33Z | CONTRIBUTOR | datasette 107914493

This is probably not a bug with Datasette, but I thought you might want to know, @simonw. I noticed today that my Cloudflare-proxied Datasette instance lost the "Format SQL" option. I'm pretty sure it was there last week. In the Cloudflare settings, if I turn off Rocket Loader, I get the "Format SQL" option back. Rocket Loader works by asynchronously loading the JavaScript, so maybe there was a recent change that doesn't play well with the async loading? I'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89
1448143294 | I_kwDOBm6k_c5WUOm- | #1890 | Autocomplete text entry for filter values that correspond to facets | fgregg 536941 | closed | comments: 16 | created_at: 2022-11-14T14:11:31Z | updated_at: 2022-11-17T00:47:36Z | closed_at: 2022-11-16T03:23:01Z | CONTRIBUTOR | datasette 107914493 | state_reason: completed

Datasette allows users to enter the value for named parameters into a free-text form field. I think it would add a lot of usability if the form field could be a drop-down of options when the query value is already a faceted column.
1400374908 | I_kwDOBm6k_c5TeAZ8 | #1836 | docker image is duplicating db files somehow | fgregg 536941 | open | comments: 13 | created_at: 2022-10-06T22:35:54Z | updated_at: 2022-10-08T16:56:51Z | CONTRIBUTOR | datasette 107914493

If you look into the Docker image created by the publish process, the database files appear to be duplicated; here's the result of the inspect command:
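The inspect output is not reproduced above; a hedged way to look for duplicated database layers yourself, with a hypothetical image name:

```bash
# Compare layer sizes: a database baked into the image twice shows up as two
# similarly sized large layers.
docker pull myorg/my-datasette:latest
docker history --no-trunc myorg/my-datasette:latest
docker inspect --format '{{json .RootFS.Layers}}' myorg/my-datasette:latest
```
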
1400083043 | I_kwDOBm6k_c5Tc5Jj | #1834 | inspect data is not used for caching database hash | fgregg 536941 | closed | comments: 0 | created_at: 2022-10-06T17:52:01Z | updated_at: 2022-10-06T20:06:21Z | closed_at: 2022-10-06T20:06:08Z | CONTRIBUTOR | datasette 107914493 | reactions: 1 (+1) | state_reason: completed

When databases are loaded, there is nothing preventing the rehashing of the database for immutable databases. What I might expect is that the relevant values from the inspect data would be used for the hash instead. With data that is many gigs large, this is a significant start-up time.
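The mechanism the issue refers to, per Datasette's documented inspect workflow:

```bash
# Precompute metadata (including content hashes) once, then reuse it at serve
# time instead of rehashing multi-gigabyte immutable databases on startup.
datasette inspect mydatabase.db > inspect-data.json
datasette serve -i mydatabase.db --inspect-file inspect-data.json
```
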
1334628400 | I_kwDOBm6k_c5PjNAw | #1779 | google cloudrun updated their limits on maxscale based on memory and cpu count | fgregg 536941 | closed | milestone: Datasette 0.62 | comments: 13 | created_at: 2022-08-10T13:27:21Z | updated_at: 2022-08-14T19:42:59Z | closed_at: 2022-08-14T17:07:34Z | CONTRIBUTOR | datasette 107914493 | state_reason: completed

If you don't set an explicit limit on container scaling, then Google defaults to 100. Google recently updated the limits on container scaling, such that if you set up Datasette to use more memory or CPU, then you need to set the maxScale argument to much less than 100. It would be nice if `datasette publish cloudrun` accounted for this. Log of a failing publish run:
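Not the missing log; a hedged sketch of the manual workaround, assuming a hand-run deploy with hypothetical service and project names:

```bash
# Cap Cloud Run's maximum instance count explicitly so higher memory/CPU
# settings stay within Google's scaling limits.
gcloud run deploy my-datasette \
  --image gcr.io/my-project/datasette \
  --memory 2Gi --cpu 2 \
  --max-instances 20
```
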
1310243385 | I_kwDOCGYnMM5OGLo5 | #456 | feature request: pivot command | fgregg 536941 | open | comments: 5 | created_at: 2022-07-20T00:58:08Z | updated_at: 2022-07-20T17:50:50Z | CONTRIBUTOR | sqlite-utils 140912432

Pivoting a long-format table to a wide-format table is pretty common and kind of a pain. Would love to see this feature in sqlite-utils!
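What such a command would do, expressed as the hand-written SQL it could replace; table and column names are hypothetical.

```bash
# Long format: (id, key, value) rows. Wide format: one row per id with one
# column per distinct key.
sqlite3 data.db "
SELECT id,
       MAX(CASE WHEN key = 'height' THEN value END) AS height,
       MAX(CASE WHEN key = 'weight' THEN value END) AS weight
FROM long_table
GROUP BY id;"
```
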
1200224939 | I_kwDOBm6k_c5Hifqr | #1707 | [feature] expanded detail page | fgregg 536941 | open | comments: 1 | created_at: 2022-04-11T16:29:17Z | updated_at: 2022-04-11T16:33:00Z | CONTRIBUTOR | datasette 107914493

Right now, if you click through to the detail page for a row, you get the info for the row and links to related tables. It would be very cool if there were an option to expand the rows of the related tables from within this detail view. If you had that, Datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidated detail view of everything you know about that entity.
1077620955 | I_kwDOBm6k_c5AOzDb | #1549 | Redesign CSV export to improve usability | fgregg 536941 | open | milestone: Datasette 1.0 | comments: 5 | created_at: 2021-12-11T19:02:12Z | updated_at: 2022-04-04T11:17:13Z | CONTRIBUTOR | datasette 107914493

Original title: Set content type for CSV so that browsers will attempt to download instead of opening in the browser.

Right now, if the user clicks on the CSV related to a query, the response header for the content type is `content-type: text/plain; charset=utf-8`. Most browsers will open a file with this content type in the browser rather than downloading it. This is not what most people want, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program, they next need to save the page through their browser. It would be great if the response header could be something like `content-disposition: attachment; filename="results.csv"`, which would lead browsers to open a download dialog.
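A hedged way to see the current headers for yourself; the instance and table here are Datasette's public demo, and any instance URL would do.

```bash
# Inspect the content-type (and absence of content-disposition) on a CSV link.
curl -sI "https://latest.datasette.io/fixtures/facetable.csv" | grep -i '^content-'
```
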
1089529555 | I_kwDOBm6k_c5A8ObT | #1581 | when hashed urls are turned on, the _memory db has improperly long-lived cache expiry | fgregg 536941 | closed | comments: 1 | created_at: 2021-12-28T00:05:48Z | updated_at: 2022-03-24T04:08:18Z | closed_at: 2022-03-24T04:08:18Z | CONTRIBUTOR | datasette 107914493 | state_reason: completed

If hashed URLs are on, then a -000 suffix is added to the `_memory` database's URLs, and in particular a far-future `cache-control` header is set on its responses. This is not appropriate, because the contents of the `_memory` database can change while the -000 suffix stays the same. Either the cache-control header should be changed, or the `_memory` db should have a hash suffix that does depend on the contents of the databases.
1082765654 | I_kwDOBm6k_c5AibFW | #1561 | add hash id to "_memory" url if hashed url mode is turned on and crossdb is also turned on | fgregg 536941 | closed | comments: 3 | created_at: 2021-12-17T00:45:12Z | updated_at: 2022-03-19T04:45:40Z | closed_at: 2022-03-19T04:45:40Z | CONTRIBUTOR | datasette 107914493 | state_reason: completed

If hashed_url mode is turned on and crossdb is also turned on, then queries to `_memory` should have a hash_id. One way it could work is to have the `_memory` hash be a hash of all the individual databases. Otherwise, crossdb queries can get quite out of date when using aggressive caching.
1126692066 | I_kwDOCGYnMM5DJ_Ti | #403 | Document how to add a primary key to a rowid table using `sqlite-utils transform --pk` | fgregg 536941 | closed | comments: 4 | created_at: 2022-02-08T01:39:40Z | updated_at: 2022-02-09T04:22:43Z | closed_at: 2022-02-08T19:33:59Z | CONTRIBUTOR | sqlite-utils 140912432 | state_reason: completed

Original title: Add option for adding a new, serial, primary key.

Sometimes we have tables that don't have primary keys but ought to have them. We can use rowid for that, but it would often be nicer to have an explicit primary key. Using the current value of rowid would be fine.
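Per the final title, the documented resolution uses `sqlite-utils transform --pk`; a hedged sketch with hypothetical database and table names (check the sqlite-utils docs for the exact semantics):

```bash
# Give a rowid table an explicit integer primary key named "id".
sqlite-utils transform data.db mytable --pk id
```
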
1096536240 | I_kwDOBm6k_c5BW9Cw | #1586 | run analyze on all databases as part of start up or publishing | fgregg 536941 | open | comments: 1 | created_at: 2022-01-07T17:52:34Z | updated_at: 2022-02-02T07:13:37Z | CONTRIBUTOR | datasette 107914493

Running ANALYZE gives SQLite's query planner the statistics it needs to make good use of indices. It might be nice if ANALYZE were run as part of the start-up of "serve" or "publish".
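What the requested step amounts to, done by hand with a hypothetical database name:

```bash
# Run ANALYZE before serving so the query planner has fresh statistics.
sqlite3 data.db "ANALYZE;"
datasette serve data.db
```
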
1096558279 | I_kwDOCGYnMM5BXCbH | #365 | create-index should run analyze after creating index | fgregg 536941 | closed | milestone: 3.21 | comments: 16 | created_at: 2022-01-07T18:21:25Z | updated_at: 2022-01-11T02:43:34Z | closed_at: 2022-01-11T01:36:48Z | CONTRIBUTOR | sqlite-utils 140912432 | state_reason: completed

SQLite's query planner depends upon ANALYZE to make good use of indices. It would be nice if ANALYZE were run as part of the create-index command. If data is inserted later, things can get out of date, but it would still probably be a net win.
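The two-step workflow the issue asks to fold into one, with hypothetical names:

```bash
sqlite-utils create-index data.db mytable mycolumn
sqlite3 data.db "ANALYZE;"   # the step the issue asks create-index to run itself
```
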
1090810196 | I_kwDOBm6k_c5BBHFU | #1583 | consider adding deletion step of cloudbuild artifacts to gcloud publish | fgregg 536941 | open | comments: 1 | created_at: 2021-12-30T00:33:23Z | updated_at: 2021-12-30T00:34:16Z | CONTRIBUTOR | datasette 107914493

Right now, as part of the publish process, images and other artifacts are stored in gcloud's Cloud Storage before being deployed to Cloud Run. After successfully deploying, it would be nice if the script deleted these artifacts. Otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts.
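A hedged manual cleanup, assuming Cloud Build's default staging-bucket naming; verify the bucket name and contents for your project before deleting anything.

```bash
# Cloud Build stages sources in a <project-id>_cloudbuild bucket by default.
gsutil ls "gs://my-project_cloudbuild/source/"
gsutil -m rm "gs://my-project_cloudbuild/source/**"   # delete staged archives
```
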
1079111498 | I_kwDOBm6k_c5AUe9K | #1553 | if csv export is truncated in non streaming mode set informative response header | fgregg 536941 | open | comments: 3 | created_at: 2021-12-13T22:50:44Z | updated_at: 2021-12-16T19:17:28Z | CONTRIBUTOR | datasette 107914493

Streaming mode is currently not enabled for custom queries, so those queries will be truncated to the max row limit. It would be great if, when a response is truncated, a header signalling that were set on the response. I need to write some pagination code for getting full results back for a custom query, and it would make that code much better if I could reliably know when there is nothing more to limit/offset.
1077102934 | I_kwDOCGYnMM5AM0lW | #353 | Allow passing a file of code to "sqlite-utils convert" | fgregg 536941 | closed | comments: 8 | created_at: 2021-12-10T18:06:14Z | updated_at: 2021-12-11T01:38:29Z | closed_at: 2021-12-11T01:09:39Z | CONTRIBUTOR | sqlite-utils 140912432 | state_reason: completed

sqlite-utils is so nice, but the ergonomics of the multiline code argument are kind of tough. It's really hard (maybe impossible) to make the newlines play well with Makefiles. It would be great to write your code fragment in a separate file and direct it into sqlite-utils (two possible invocations are sketched after this record). Thanks, as ever, for these great tools!
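The exact invocations proposed in the issue are not shown above; a workaround that works with today's CLI, plus a hypothetical flag for flavor:

```bash
# Works today: substitute the file's contents into the existing code argument.
sqlite-utils convert data.db mytable mycolumn "$(cat convert.py)"

# Hypothetical flavor of what a dedicated option might look like (not a real
# flag unless the sqlite-utils docs say so):
#   sqlite-utils convert data.db mytable mycolumn --code-file convert.py
```
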
950664971 | MDU6SXNzdWU5NTA2NjQ5NzE= | #1401 | unordered list is not rendering bullet points in description_html on database page | fgregg 536941 | open | comments: 2 | created_at: 2021-07-22T13:24:18Z | updated_at: 2021-10-23T13:09:10Z | CONTRIBUTOR | datasette 107914493

Thanks for this tremendous package, @simonw! In the `description_html` for one of my databases I have an unordered list. However, on the database page on the deployed site, it is not rendering this as a bulleted list. Page here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5. The documentation gives an example of using an unordered list in a `description_html`.
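A minimal metadata.json of the shape described, with a hypothetical database name and list content:

```bash
cat > metadata.json <<'EOF'
{
  "databases": {
    "nlrb": {
      "description_html": "<ul><li>first item</li><li>second item</li></ul>"
    }
  }
}
EOF
datasette serve nlrb.db --metadata metadata.json
```
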
959710008 | MDU6SXNzdWU5NTk3MTAwMDg= | #1419 | `publish cloudrun` should deploy a more recent SQLite version | fgregg 536941 | open | comments: 3 | created_at: 2021-08-04T00:45:55Z | updated_at: 2021-08-05T03:23:24Z | CONTRIBUTOR | datasette 107914493

I recently changed how I deploy a Datasette instance, and some queries stopped working. I suspect this is because the two deployments are running different versions of sqlite3. If so, it would be great if `publish cloudrun` deployed a more recent SQLite version.
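One way to confirm the suspicion, using Datasette's built-in versions endpoint; the instance URL is hypothetical.

```bash
# Check which SQLite version a deployed instance is actually running.
curl -s "https://my-datasette.example.com/-/versions.json" | python3 -m json.tool
```
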
951185411 | MDU6SXNzdWU5NTExODU0MTE= | #1402 | feature request: social meta tags | fgregg 536941 | open | comments: 2 | created_at: 2021-07-23T01:57:23Z | updated_at: 2021-07-26T19:31:41Z | CONTRIBUTOR | datasette 107914493

It would be very nice if Twitter, Slack, and other social media could make rich cards when people post a link to a Datasette instance.
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);