issues
14 rows where state = "closed" and user = 536941 sorted by updated_at descending
#336 | issue | sqlite-utils 140912432
title: sqlite-utils transform --column-order mangles columns of type "timestamp"
id: 1044267332 | node_id: I_kwDOCGYnMM4-PkFE | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 1 | reactions: 0
created_at: 2021-11-04T01:15:38Z | updated_at: 2023-05-08T21:13:38Z | closed_at: 2023-05-08T21:13:38Z
body: Reproducible code below: …
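The reproduction in #336's body is truncated in this export. A minimal sketch of its likely shape, using the sqlite-utils Python API; the table and column names here are invented:

```python
# Hypothetical reproduction sketch for #336: run transform(column_order=...)
# on a table that declares a column of type TIMESTAMP and inspect the schema.
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, name TEXT, happened_at TIMESTAMP)")
print(db["events"].schema)  # happened_at is declared TIMESTAMP

# transform() rebuilds the table behind the scenes; the issue reports that
# the declared "timestamp" type did not survive this rebuild.
db["events"].transform(column_order=["happened_at", "name", "id"])
print(db["events"].schema)  # check whether TIMESTAMP was preserved
```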
#1890 | issue | datasette 107914493
title: Autocomplete text entry for filter values that correspond to facets
id: 1448143294 | node_id: I_kwDOBm6k_c5WUOm- | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 16 | reactions: 0
created_at: 2022-11-14T14:11:31Z | updated_at: 2022-11-17T00:47:36Z | closed_at: 2022-11-16T03:23:01Z
body: Datasette allows users to enter the value for named parameters into a free-text form field. It would add a lot of usability if the form field could be a drop-down of options when the query value is a faceted column.
#1835 | pull | datasette 107914493
title: use inspect data for hash and file size
id: 1400121355 | node_id: PR_kwDOBm6k_c5AVujU | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed | locked: 0 | comments: 3 | reactions: 0 | draft: 0 | pull_request: simonw/datasette/pulls/1835
created_at: 2022-10-06T18:25:24Z | updated_at: 2022-10-27T20:51:30Z | closed_at: 2022-10-06T20:06:07Z
body: Closes #1834.
#1837 | pull | datasette 107914493
title: Make hash and size a lazy property
id: 1400431789 | node_id: PR_kwDOBm6k_c5AWyQK | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed | locked: 0 | comments: 2 | reactions: 0 | draft: 0 | pull_request: simonw/datasette/pulls/1837
created_at: 2022-10-06T23:51:22Z | updated_at: 2022-10-27T20:51:21Z | closed_at: 2022-10-27T20:51:20Z
body: Many apologies, @simonw. My previous PR #1835 did not really solve the problem, because the name of the database is often not known to the database object in its init method. I took a cue from how you dealt with this issue and made hash a lazy property, and did something similar with size. :books: Documentation preview :books:: https://datasette--1837.org.readthedocs.build/en/1837/
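The pattern named in #1837's title, as a minimal standalone sketch (not Datasette's actual code): compute the hash and size on first access instead of in `__init__`, where the file path may not yet be known.

```python
# Sketch: hash and size as lazy properties, computed and cached on first use.
import hashlib
import os
from functools import cached_property

class Database:
    def __init__(self, path):
        self.path = path  # no stat() or hashing here

    @cached_property
    def hash(self):
        # Runs (and caches) only when .hash is first accessed.
        h = hashlib.sha256()
        with open(self.path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    @cached_property
    def size(self):
        return os.path.getsize(self.path)
```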
#1820 | pull | datasette 107914493
title: [SPIKE] Don't truncate query CSVs
id: 1386456717 | node_id: PR_kwDOBm6k_c4_oHI4 | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed | locked: 0 | comments: 2 | reactions: 0 | draft: 1 | pull_request: simonw/datasette/pulls/1820
created_at: 2022-09-26T17:27:01Z | updated_at: 2022-10-07T16:12:17Z | closed_at: 2022-10-07T16:12:17Z
body: Relates to #526. This is a minimal set of changes needed for having query CSVs attempt to download all the rows. What's good about it is the minimalism. What's bad about it: … :books: Documentation preview :books:: https://datasette--1820.org.readthedocs.build/en/1820/
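A sketch of the idea behind #1820, not the PR's actual code: stream every row of a query out as CSV in batches instead of stopping at a fixed row limit. A web layer would send each yielded chunk as part of a streaming response, so the full result set is never buffered.

```python
# Stream all rows of a query as CSV chunks, with bounded memory use.
import csv
import io
import sqlite3

def stream_query_csv(db_path, sql, batch_size=1000):
    conn = sqlite3.connect(db_path)
    cursor = conn.execute(sql)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([col[0] for col in cursor.description])  # header row
    while True:
        rows = cursor.fetchmany(batch_size)
        writer.writerows(rows)
        yield buf.getvalue()  # hand the accumulated chunk downstream
        buf.seek(0)
        buf.truncate(0)
        if not rows:
            break
    conn.close()
```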
#1834 | issue | datasette 107914493
title: inspect data is not used for caching database hash
id: 1400083043 | node_id: I_kwDOBm6k_c5Tc5Jj | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 0 | reactions: 1 (+1: 1)
created_at: 2022-10-06T17:52:01Z | updated_at: 2022-10-06T20:06:21Z | closed_at: 2022-10-06T20:06:08Z
body: When databases are loaded, there is nothing preventing the rehashing of the database for immutable databases. What I might expect is that the relevant values of … With data that is many gigs large, this is a significant start-up time.
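`datasette inspect` precomputes metadata about each database to a JSON file. A sketch of what #1834 asks for, consulting that file before rehashing a multi-gigabyte file on startup; the exact key layout here is an assumption:

```python
# Prefer the hash and size recorded by `datasette inspect` over rehashing.
import hashlib
import json
import os

def hash_and_size(db_name, db_path, inspect_path="inspect-data.json"):
    # Assumed layout: {"<db name>": {"hash": "...", "size": 123, ...}, ...}
    if os.path.exists(inspect_path):
        with open(inspect_path) as f:
            entry = json.load(f).get(db_name)
        if entry:
            return entry["hash"], entry["size"]
    # Slow path: stat and rehash the whole file.
    h = hashlib.sha256()
    with open(db_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest(), os.path.getsize(db_path)
```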
#455 | pull | sqlite-utils 140912432
title: in extract code, check equality with IS instead of = for nulls
id: 1309542173 | node_id: PR_kwDOCGYnMM47pwAb | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed | locked: 0 | comments: 3 | reactions: 0 | draft: 0 | pull_request: simonw/sqlite-utils/pulls/455
created_at: 2022-07-19T13:40:25Z | updated_at: 2022-08-27T14:45:03Z | closed_at: 2022-08-27T14:45:03Z
body: SQLite's "IS" is equivalent to SQL's "IS NOT DISTINCT FROM". Closes #423.
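The NULL semantics behind #455, demonstrated with Python's sqlite3 module: `=` yields NULL (never true) when either operand is NULL, while `IS` treats two NULLs as equal.

```python
# Why extract's lookup must compare with IS rather than = when NULLs occur.
import sqlite3

conn = sqlite3.connect(":memory:")
print(conn.execute("SELECT NULL = NULL").fetchone()[0])   # None: '=' never matches a NULL
print(conn.execute("SELECT NULL IS NULL").fetchone()[0])  # 1: 'IS' treats NULLs as equal
print(conn.execute("SELECT 1 IS 1").fetchone()[0])        # 1: 'IS' also matches non-NULL values
```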
#1779 | issue | datasette 107914493
title: Google Cloud Run updated their limits on maxScale based on memory and CPU count
id: 1334628400 | node_id: I_kwDOBm6k_c5PjNAw | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 13 | reactions: 0 | milestone: Datasette 0.62 8303187
created_at: 2022-08-10T13:27:21Z | updated_at: 2022-08-14T19:42:59Z | closed_at: 2022-08-14T17:07:34Z
body: If you don't set an explicit limit on container scaling, then Google defaults to 100. Google recently updated the limits on container scaling, such that if you set up Datasette to use more memory or CPU, then you need to set the maxScale argument much smaller than 100. It would be nice if … Log of a failing publish run: …
#1581 | issue | datasette 107914493
title: when hashed urls are turned on, the _memory db has improperly long-lived cache expiry
id: 1089529555 | node_id: I_kwDOBm6k_c5A8ObT | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 1 | reactions: 0
created_at: 2021-12-28T00:05:48Z | updated_at: 2022-03-24T04:08:18Z | closed_at: 2022-03-24T04:08:18Z
body: If hashed_urls are on, then a -000 suffix is added to the … In particular, this header is set: … This is not appropriate, because the … Either the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases.
#1582 | pull | datasette 107914493
title: don't set far expiry if hash is '000'
id: 1090055810 | node_id: PR_kwDOBm6k_c4wWDxH | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed | locked: 0 | comments: 1 | reactions: 0 | draft: 0 | pull_request: simonw/datasette/pulls/1582
created_at: 2021-12-28T18:16:13Z | updated_at: 2022-03-24T04:07:58Z | closed_at: 2022-03-24T04:07:58Z
body: This will close #1581. I couldn't find any unit tests related to testing hashed urls, and I know that you want to break that code out of the core application (#1561), so I'm not quite sure what you would like me to do for testing.
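A sketch of the logic in #1582; the header values and function name are illustrative assumptions, not Datasette's exact code. The '000' hash (used for _memory) does not reflect database contents, so it must not get a far-future expiry.

```python
# Choose a Cache-Control header for a hashed-URL response.
def cache_control_for(db_hash: str) -> str:
    if db_hash == "000":
        # Placeholder hash: contents can change, so use a short TTL.
        return "max-age=5"
    # A real content hash changes whenever the database file changes,
    # so responses at this URL are effectively immutable.
    return "max-age=31536000"
```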
#1561 | issue | datasette 107914493
title: add hash id to "_memory" url if hashed url mode is turned on and crossdb is also turned on
id: 1082765654 | node_id: I_kwDOBm6k_c5AibFW | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 3 | reactions: 0
created_at: 2021-12-17T00:45:12Z | updated_at: 2022-03-19T04:45:40Z | closed_at: 2022-03-19T04:45:40Z
body: If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. One way that it could work is to have the _memory hash be a hash of all the individual databases. Otherwise, crossdb queries can get quite out of date if using aggressive caching.
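A sketch of the approach suggested in #1561: derive the _memory hash from the hashes of all attached databases, so a change to any of them invalidates cached crossdb results. The function name is hypothetical.

```python
# Combine per-database hashes into a single stable hash for /_memory.
import hashlib

def memory_db_hash(database_hashes):
    h = hashlib.sha256()
    for name, db_hash in sorted(database_hashes.items()):  # stable ordering
        h.update(name.encode())
        h.update(db_hash.encode())
    return h.hexdigest()

print(memory_db_hash({"fixtures": "abc123", "content": "def456"}))
```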
#403 | issue | sqlite-utils 140912432
title: Document how to add a primary key to a rowid table using `sqlite-utils transform --pk`
id: 1126692066 | node_id: I_kwDOCGYnMM5DJ_Ti | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 4 | reactions: 0
created_at: 2022-02-08T01:39:40Z | updated_at: 2022-02-09T04:22:43Z | closed_at: 2022-02-08T19:33:59Z
body: Original title: "Add option for adding a new, serial, primary key". Sometimes we have tables that don't have primary keys, but ought to have them. We can use rowid for that, but it would often be nicer to have an explicit primary key. Using the current value of rowid would be fine.
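The mechanics that the `sqlite-utils transform --pk` recipe wraps, sketched by hand with Python's sqlite3 module: build a replacement table with an explicit primary key, copy rows across preserving the current rowid values, and swap the tables. Table and column names are illustrative.

```python
# Promote a rowid table to one with an explicit id primary key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE old (name TEXT)")  # rowid table, no explicit pk
conn.executemany("INSERT INTO old (name) VALUES (?)", [("a",), ("b",)])

conn.execute("CREATE TABLE new (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO new (id, name) SELECT rowid, name FROM old")
conn.execute("DROP TABLE old")
conn.execute("ALTER TABLE new RENAME TO old")

print(conn.execute("SELECT id, name FROM old").fetchall())  # [(1, 'a'), (2, 'b')]
```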
#365 | issue | sqlite-utils 140912432
title: create-index should run analyze after creating index
id: 1096558279 | node_id: I_kwDOCGYnMM5BXCbH | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 16 | reactions: 0 | milestone: 3.21 7558727
created_at: 2022-01-07T18:21:25Z | updated_at: 2022-01-11T02:43:34Z | closed_at: 2022-01-11T01:36:48Z
body: SQLite's query planner depends upon ANALYZE to make good use of indices. It would be nice if ANALYZE was run as part of the create-index command. If data is inserted later, things can get out of date, but it would still probably be a net win.
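What #365 asks create-index to do automatically, shown with Python's sqlite3 module: run ANALYZE after CREATE INDEX so the planner statistics in sqlite_stat1 cover the new index.

```python
# Create an index, then refresh the query planner's statistics.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, str(i)) for i in range(1000)])

conn.execute("CREATE INDEX idx_t_a ON t (a)")
conn.execute("ANALYZE")  # populates sqlite_stat1 for idx_t_a

print(conn.execute("SELECT * FROM sqlite_stat1").fetchall())
```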
#353 | issue | sqlite-utils 140912432
title: Allow passing a file of code to "sqlite-utils convert"
id: 1077102934 | node_id: I_kwDOCGYnMM5AM0lW | user: fgregg 536941 | author_association: CONTRIBUTOR
state: closed (completed) | locked: 0 | comments: 8 | reactions: 0
created_at: 2021-12-10T18:06:14Z | updated_at: 2021-12-11T01:38:29Z | closed_at: 2021-12-11T01:09:39Z
body: sqlite-utils is so nice, but the ergonomics of the multiline code are kind of tough. It's really hard (maybe impossible) to make the newlines play well with Makefiles. It would be great to write your code fragment in a separate file and direct it into sqlite-utils, either like … or … Thanks, as ever, for these great tools!
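One way around the shell-quoting pain described in #353, as a sketch: keep the conversion function in its own Python file and apply it through the sqlite-utils Python API instead of the CLI. The module, function, file, and table names are all hypothetical.

```python
# cleaners.py (hypothetical module):
#     def clean(value):
#         return value.strip().title()

import sqlite_utils
from cleaners import clean  # hypothetical module shown above

db = sqlite_utils.Database("data.db")   # hypothetical database file
db["mytable"].convert("name", clean)    # rewrite the column in place
```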
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [pull_request] TEXT,
    [body] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT,
    [active_lock_reason] TEXT,
    [performed_via_github_app] TEXT,
    [reactions] TEXT,
    [draft] INTEGER,
    [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);