issues
4 rows where author_association = "CONTRIBUTOR", comments = 9 and repo = 107914493 sorted by updated_at descending
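Against the issues schema shown at the bottom of the page, this filtered view corresponds to SQL along the following lines (a sketch only; Datasette generates the actual query itself from the URL parameters):

```sql
-- Reproduce this filtered, sorted view of the issues table.
select *
from issues
where author_association = 'CONTRIBUTOR'
  and comments = 9
  and repo = 107914493
order by updated_at desc;
```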
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1859415334 | PR_kwDOBm6k_c5YY5Ea | 2148 | Bump sphinx, furo, blacken-docs dependencies | dependabot[bot] 49699333 | closed | 0 | | | 9 | 2023-08-21T13:48:11Z | 2023-08-29T00:15:31Z | 2023-08-29T00:15:27Z | CONTRIBUTOR | simonw/datasette/pulls/2148 | Bumps the python-packages group with 3 updates: sphinx, furo and blacken-docs. Updates sphinx: Release notes (sourced from sphinx's releases), Changelog (sourced from sphinx's changelog), ... (truncated), Commits. Updates furo: Changelog (sourced from furo's changelog), ... (truncated), Commits. Updates blacken-docs: Changelog (sourced from blacken-docs's changelog), Commits. Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting. Dependabot commands and options: you can trigger Dependabot actions by commenting on this PR: `@dependabot rebase` will rebase this PR; `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it; `@dependabot merge` will merge this PR after your CI passes on it; `@dependabot squash and merge` will squash and merge this PR after your CI passes on it; `@dependabot cancel merge` will cancel a previously requested merge and block automerging; `@dependabot reopen` will reopen this PR if it is closed; `@dependabot close` will close this PR and stop Dependabot recreating it (you can achieve the same result by closing it manually); `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency; `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself); `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself); `@dependabot ignore <dependency name> dependency` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself); `@dependabot unignore <dependency name> dependency` will remove all of the ignore conditions of the specified dependency; `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions. :books: Documentation preview :books:: https://datasette--2148.org.readthedocs.build/en/2148/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1015646369 | I_kwDOBm6k_c48iYih | 1480 | Exceeding Cloud Run memory limits when deploying a 4.8G database | ghing 110420 | open | 0 | | | 9 | 2021-10-04T21:20:24Z | 2022-10-07T04:39:10Z | | CONTRIBUTOR | | When I try to deploy a 4.8G SQLite database to Google Cloud Run, I get this error message: … Unfortunately, the maximum amount of memory that can be allocated to an instance is 8192M. Naively profiling the memory usage of running Datasette with this database locally on my MacBook shows the following memory usage (using Activity Monitor) when I just start up Datasette locally: … I'm trying to understand if there's a query or other operation that gets run during container deployment that causes memory use to be so large, and if this can be avoided somehow. This is somewhat related to #1082, but on a different platform, so I decided to open a new issue. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1480/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1344823170 | PR_kwDOBm6k_c49e3_k | 1789 | Add new entrypoint option to `--load-extension` | asg017 15178711 | closed | 0 | | | 9 | 2022-08-19T19:27:47Z | 2022-08-23T18:42:52Z | 2022-08-23T18:34:30Z | CONTRIBUTOR | simonw/datasette/pulls/1789 | Closes #1784. The new `--load-extension` syntax: `datasette data.db --load-extension ext` loads the default entrypoint like before; `datasette data.db --load-extension ext:sqlite3_foo_init` loads the extension with the "sqlite3_foo_init" entrypoint; `datasette data.db --load-extension ext:sqlite3_bar_init` loads the extension with the "sqlite3_bar_init" entrypoint. For testing, I added a small SQLite extension in C at … Where … Re documentation: I added a bit to the help text for … | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1078702875 | I_kwDOBm6k_c5AS7Mb | 1552 | Allow to set `facets_array` in metadata (like current `facets`) | davidbgk 3556 | closed | 0 | | Datasette 0.60 7571612 | 9 | 2021-12-13T16:00:44Z | 2022-01-13T22:26:15Z | 2021-12-16T18:47:48Z | CONTRIBUTOR | | For now, you can set a … I'm new to datasette, and I'm willing to help with a PR if that is not already implemented and I missed it! | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed |
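PR 1789 in the table above adds an `ext:entrypoint` form to `--load-extension`. In SQLite itself the entry point is the optional second argument to the built-in `load_extension()` SQL function, so the new option corresponds roughly to the following sketch (the extension path and entrypoint names are illustrative only, and the SQL function is unavailable unless extension loading has been enabled on the connection):

```sql
-- Illustrative extension path and entrypoint names; requires extension
-- loading to be enabled on the connection first.
select load_extension('./ext');                      -- default entry point
select load_extension('./ext', 'sqlite3_foo_init');  -- explicit entry point, like ext:sqlite3_foo_init
```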
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
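Issue 1552 above asks for array faceting to be configurable in metadata. Datasette's array facet is essentially a `json_each()` aggregation over a column that stores JSON arrays; a rough sketch against a hypothetical `tags` column (this issues table has no such column) would look like:

```sql
-- Hypothetical: assumes a TEXT column [tags] holding JSON arrays such as '["bug", "docs"]'.
select j.value as tag, count(*) as n
from issues, json_each(issues.tags) as j
group by j.value
order by n desc;
```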