issues
3 rows where state = "closed" and user = 1448859 sorted by updated_at descending
#441 · Combining `rows_where()` and `search()` to limit which rows are searched
id: 1257724585 · node_id: I_kwDOCGYnMM5K91qp · repo: sqlite-utils (140912432) · type: issue
user: betatim (1448859) · state: closed · state_reason: completed · locked: 0 · comments: 4 · author_association: NONE · reactions: 0
created_at: 2022-06-02T06:01:55Z · updated_at: 2022-06-14T21:57:57Z · closed_at: 2022-06-14T21:54:38Z

What is the right way to limit a full text search query to some rows of a table? For example, I have a table whose columns include both searchable text and metadata I would like to filter on, and I want the search to consider only the rows matching a metadata condition. I tried to combine `rows_where()` and `search()`, but the two don't appear to compose.

My two questions: is adding a way to restrict which rows `search()` considers a good idea, and if so, what form should it take? Right now I am thinking I will make my own version of the search query as a workaround. A sketch of the pattern I'm after is shown below.

Bonus question: is this generally useful/something to add to sqlite-utils, or too niche?
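A minimal sketch of that pattern, assuming the `where=`/`where_args=` parameters that `table.search()` accepts in recent sqlite-utils releases (added around the time this issue was closed); the database, table, and column names here are hypothetical:

```python
from sqlite_utils import Database

db = Database("documents.db")  # hypothetical database
docs = db["documents"]         # hypothetical table: title, content, owner

# One-time FTS setup over the searchable columns
docs.enable_fts(["title", "content"])

# Full-text search restricted to rows owned by a single user,
# via the where=/where_args= parameters of table.search()
rows = list(docs.search("needle", where="owner = ?", where_args=["betatim"]))
```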
#969 · Add --tar option to "datasette publish heroku"
id: 705057955 · node_id: MDU6SXNzdWU3MDUwNTc5NTU= · repo: datasette (107914493) · type: issue
user: betatim (1448859) · state: closed · state_reason: completed · milestone: Datasette 0.50 (5971510) · comments: 3 · author_association: NONE · reactions: 0
created_at: 2020-09-20T06:54:53Z · updated_at: 2020-10-08T23:55:59Z · closed_at: 2020-10-08T23:30:59Z

This issue is about how best to pass additional options to the tools used for publishing datasettes. A concrete example is wanting to pass the `--tar` option through to the underlying Heroku CLI. When using `datasette publish heroku` the deploy fails like this:

```
› Warning: heroku update available from 7.42.1 to 7.43.0.
› Warning: heroku update available from 7.42.1 to 7.43.0.
› Warning: heroku update available from 7.42.1 to 7.43.0.
Setting WEB_CONCURRENCY and restarting ⬢ binderlytics... done, v13
WEB_CONCURRENCY: 1
› Warning: heroku update available from 7.42.1 to 7.43.0.
 ▸    Couldn't detect GNU tar. Builds could fail due to decompression errors
 ▸    See https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive
 ▸    Please install it, or specify the '--tar' option
 ▸    Falling back to node's built-in compressor
buffer.js:358
    throw new ERR_INVALID_OPT_VALUE.RangeError('size', size);
    ^

RangeError [ERR_INVALID_OPT_VALUE]: The value "3303763968" is invalid for option "size"
    at Function.alloc (buffer.js:367:3)
    at new Buffer (buffer.js:281:19)
    at Readable.<anonymous> (/Users/thead/.local/share/heroku/node_modules/archiver-utils/index.js:39:15)
    at Readable.emit (events.js:322:22)
    at endReadableNT (/Users/thead/.local/share/heroku/node_modules/readable-stream/lib/_stream_readable.js:1010:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21) {
  code: 'ERR_INVALID_OPT_VALUE'
}
```

After installing GNU tar the deploy succeeds. I think the problem occurs once your Heroku slug reaches a certain size: at least when I add only a few hundred entries to the datasette, the error does not occur.

datasette version 0.49.1, OSX 10.14.6 (18G103)
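The error message above suggests two workarounds: pass `--tar` through to the Heroku CLI, or make GNU tar discoverable on `PATH`. A sketch of the latter for macOS, assuming Homebrew (whose `gnu-tar` formula exposes an unprefixed `tar` under `libexec/gnubin`):

```bash
# Install GNU tar; Homebrew prefixes the binary as "gtar" by default
brew install gnu-tar

# Put the unprefixed GNU binaries first on PATH so the Heroku CLI
# detects GNU tar instead of falling back to Node's built-in compressor
export PATH="$(brew --prefix gnu-tar)/libexec/gnubin:$PATH"

datasette publish heroku mydatabase.db
```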
#97 · Adding a "recreate" flag to the `Database` constructor
id: 593751293 · node_id: MDU6SXNzdWU1OTM3NTEyOTM= · repo: sqlite-utils (140912432) · type: issue
user: betatim (1448859) · state: closed · state_reason: completed · locked: 0 · comments: 4 · author_association: NONE · reactions: 0
created_at: 2020-04-04T05:41:10Z · updated_at: 2020-04-15T14:29:31Z · closed_at: 2020-04-13T03:52:29Z

I have a script that imports data into a SQLite DB. When I re-run that script I'd like to remove the existing SQLite DB instead of adding to it. The pragmatic answer is to add the check and file deletion to my script. However, I thought it would be easy and useful for others to add a "recreate" flag to the `Database` constructor itself. Does anyone have an idea/suggestion where to start investigating?
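This is essentially what sqlite-utils ended up supporting: the `Database` constructor accepts `recreate=True`, which deletes any existing file at that path before opening. A minimal sketch (file and table names hypothetical):

```python
from sqlite_utils import Database

# recreate=True removes any existing file at this path before opening,
# so re-running the import script always starts from an empty database
db = Database("imported.db", recreate=True)
db["records"].insert_all([{"id": 1, "value": "fresh"}], pk="id")
```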
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
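For reference, the filter described at the top of this page corresponds roughly to the following query against this schema (a sketch; Datasette generates its own SQL from the filter parameters):

```sql
SELECT *
FROM [issues]
WHERE [state] = 'closed'
  AND [user] = 1448859
ORDER BY [updated_at] DESC;
```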