issues
7 rows where type = "issue" and user = 3243482 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 860625833 | MDU6SXNzdWU4NjA2MjU4MzM= | 1300 | Make row available to `render_cell` plugin hook | abdusco 3243482 | closed | 0 | 5 | 2021-04-18T10:14:37Z | 2022-07-07T16:34:05Z | 2022-07-07T16:31:22Z | CONTRIBUTOR | Original title: Generating URL for a row inside `render_cell` hook

Hey, I am using Datasette to view a database that contains video metadata. It has BLOB columns that contain video thumbnails in JPG format (around 100-500KB per row). I've registered an output renderer that wraps the built-in `render_blob` renderer and serves the blob as a JPEG:

```python
from datasette import hookimpl
from datasette.blob_renderer import render_blob


async def render_jpg(datasette, database, rows, columns, request, table, view_name):
    response = await render_blob(datasette, database, rows, columns, request, table, view_name)
    response.content_type = "image/jpeg"
    response.headers["Content-Disposition"] = 'inline; filename="image.jpg"'
    return response


@hookimpl
def register_output_renderer():
    return {
        "extension": "jpg",
        "render": render_jpg,
        "can_render": lambda: True,
    }
```

This works well: I can visit the `.jpg` URL for a blob column and see the image. But I want to display the image directly with an `<img>` tag instead of linking to it. Datasette generates a link with a `.blob` extension for BLOB columns, and I have no way of getting at the row inside the `render_cell` hook to build that URL myself.

Any pointers? |
datasette 107914493 | issue | {
"url": "https://api.github.com/repos/simonw/datasette/issues/1300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | ||||||
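This issue was closed after the hook gained access to the row in later Datasette releases. Below is a minimal sketch of the kind of plugin that unblocks, assuming a `render_cell` signature that includes `row`, a hypothetical `videos` table with an `id` primary key and a `thumbnail` BLOB column, and the `.jpg` output renderer from the issue body; the URL format mirroring the built-in `.blob` links is also an assumption.

```python
from datasette import hookimpl
from markupsafe import Markup


@hookimpl
def render_cell(row, value, column, table, database, datasette):
    # Hypothetical: render the "thumbnail" BLOB column of a "videos" table
    # as an inline, lazy-loaded <img> pointing at the custom .jpg renderer.
    if table != "videos" or column != "thumbnail" or row is None or value is None:
        return None  # fall back to Datasette's default rendering
    table_url = datasette.urls.table(database, table)
    # Assumption: single integer primary key named "id", and the .jpg renderer
    # serves the blob at <row-url>.jpg?_blob_column=thumbnail, mirroring the
    # format of Datasette's built-in .blob download links.
    return Markup(
        f'<img loading="lazy" src="{table_url}/{row["id"]}.jpg?_blob_column=thumbnail" alt="">'
    )
```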
| 642572841 | MDU6SXNzdWU2NDI1NzI4NDE= | 859 | Database page loads too slowly with many large tables (due to table counts) | abdusco 3243482 | open | 0 | 21 | 2020-06-21T14:23:17Z | 2021-08-25T21:59:55Z | CONTRIBUTOR | Hey, I have a database into which I save HTML from a couple of web scrapers. There are around 200k+ and 50+ rows in a couple of tables, and the SQLite file weighs around 600MB. The app runs on a VPS with a 2-core CPU and 4GB RAM, and refreshing the database page regularly takes more than 10 seconds. I suspected that counting tables was the culprit, but running the counts manually didn't seem slow enough to explain the delay. I've looked at the source code: there's a check on the index page for mutable databases larger than 100MB (https://github.com/simonw/datasette/blob/799c5d53570d773203527f19530cf772dc2eeb24/datasette/views/index.py#L15), but this check is not performed for the database page.
I've manually crippled the table counting and now the page loads in <100ms. Is it possible to apply the size check on the database page too?

`/-/versions` output:

{
"python": {
"version": "3.8.0",
"full": "3.8.0 (default, Oct 28 2019, 16:14:01) \n[GCC 8.3.0]"
},
"datasette": {
"version": "0.44"
},
"asgi": "3.0",
"uvicorn": "0.11.5",
"sqlite": {
"version": "3.22.0",
"fts_versions": [
"FTS5",
"FTS4",
"FTS3"
],
"extensions": {
"json1": null
},
"compile_options": [
"COMPILER=gcc-7.4.0",
"ENABLE_COLUMN_METADATA",
"ENABLE_DBSTAT_VTAB",
"ENABLE_FTS3",
"ENABLE_FTS3_PARENTHESIS",
"ENABLE_FTS3_TOKENIZER",
"ENABLE_FTS4",
"ENABLE_FTS5",
"ENABLE_JSON1",
"ENABLE_LOAD_EXTENSION",
"ENABLE_PREUPDATE_HOOK",
"ENABLE_RTREE",
"ENABLE_SESSION",
"ENABLE_STMTVTAB",
"ENABLE_UNLOCK_NOTIFY",
"ENABLE_UPDATE_DELETE_LIMIT",
"HAVE_ISNAN",
"LIKE_DOESNT_MATCH_BLOBS",
"MAX_SCHEMA_RETRY=25",
"MAX_VARIABLE_NUMBER=250000",
"OMIT_LOOKASIDE",
"SECURE_DELETE",
"SOUNDEX",
"TEMP_STORE=1",
"THREADSAFE=1"
]
}
}
|
datasette 107914493 | issue | {
"url": "https://api.github.com/repos/simonw/datasette/issues/859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
||||||||
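One way to address the problem described above, in the spirit of the index-page size check the issue links to, is to skip per-table counts when the database file exceeds a threshold. A standalone sketch of that idea using plain sqlite3 follows; the `scraper.db` filename and the 100MB limit are illustrative assumptions, not Datasette's actual implementation.

```python
import os
import sqlite3

SIZE_LIMIT = 100 * 1024 * 1024  # 100 MB, mirroring the index-page check mentioned above


def table_counts(path):
    """Return {table_name: row_count}, skipping the counts when the file is too big."""
    counts = {}
    conn = sqlite3.connect(path)
    tables = [r[0] for r in conn.execute(
        "select name from sqlite_master where type = 'table'"
    )]
    too_big = os.path.getsize(path) > SIZE_LIMIT
    for table in tables:
        if too_big:
            counts[table] = None  # render as "many rows" instead of blocking the page
        else:
            counts[table] = conn.execute(f'select count(*) from "{table}"').fetchone()[0]
    conn.close()
    return counts


print(table_counts("scraper.db"))  # hypothetical database file
```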
| 756875827 | MDU6SXNzdWU3NTY4NzU4Mjc= | 1129 | Fix footer to the bottom of the page | abdusco 3243482 | open | 0 | 0 | 2020-12-04T07:28:07Z | 2020-12-04T16:04:29Z | CONTRIBUTOR | Footer doesn't stick to the bottom if the body content isn't long enough to reach the end of viewport.
This can be fixed using flexbox:

```css
body {
    min-height: 100vh;
    display: flex;
    flex-direction: column;
}

.content {
    flex-grow: 1;
}
``` |
datasette 107914493 | issue | {
"url": "https://api.github.com/repos/simonw/datasette/issues/1129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
||||||||
| 754178780 | MDU6SXNzdWU3NTQxNzg3ODA= | 1121 | Table actions cog is misaligned | abdusco 3243482 | closed | 0 | 1 | 2020-12-01T08:41:25Z | 2020-12-03T01:03:19Z | 2020-12-03T00:33:36Z | CONTRIBUTOR | At the moment it looks like this https://datasette-graphql-demo.datasette.io/github/repos
Adding a few flex statements fixes the alignment and centers the cog icon. |
datasette 107914493 | issue | {
"url": "https://api.github.com/repos/simonw/datasette/issues/1121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | ||||||
| 649702801 | MDU6SXNzdWU2NDk3MDI4MDE= | 888 | URLs in release notes point to 127.0.0.1 | abdusco 3243482 | closed | 0 | 1 | 2020-07-02T07:28:04Z | 2020-09-15T20:39:50Z | 2020-09-15T20:39:49Z | CONTRIBUTOR | Just a quick heads up: Release notes for 0.45 include urls that point to localhost. |
datasette 107914493 | issue | {
"url": "https://api.github.com/repos/simonw/datasette/issues/888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | ||||||
| 640330278 | MDU6SXNzdWU2NDAzMzAyNzg= | 851 | Having trouble getting writable canned queries to work | abdusco 3243482 | closed | 0 | 1 | 2020-06-17T10:30:28Z | 2020-06-17T10:33:25Z | 2020-06-17T10:32:33Z | CONTRIBUTOR | Hey, I'm trying to get canned inserts to work. I have a DB with the following table:

```text
sqlite> .mode line
sqlite> select name, sql from sqlite_master where name like '%search%';
name = search
 sql = CREATE TABLE "search" ("id" INTEGER NOT NULL PRIMARY KEY, "name" VARCHAR(255) NOT NULL, "url" VARCHAR(255) NOT NULL)
```

and the following metadata:

```yaml
...
queries:
  add_search:
    sql: insert into search(name, url) VALUES (:name, :url),
    write: true
```

but when I submit the form, the insert fails. I've attached a debugger to see where the error comes from; inside Datasette, the line that executes the query raises an exception. That led me to believe I had something wrong with my SQL, but running the command outside Datasette works fine. So I'm a bit lost here.
|
datasette 107914493 | issue | {
"url": "https://api.github.com/repos/simonw/datasette/issues/851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | ||||||
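As a side note, the trailing comma at the end of the `sql:` line in the metadata above makes that statement invalid SQL on its own; the insert with named parameters is otherwise fine. A quick standalone way to sanity-check a canned query's SQL outside Datasette is sketched below, using an in-memory copy of the `search` table from the issue.

```python
import sqlite3

# In-memory copy of the "search" table from the issue above
conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE "search" ("id" INTEGER NOT NULL PRIMARY KEY, '
    '"name" VARCHAR(255) NOT NULL, "url" VARCHAR(255) NOT NULL)'
)

# The canned query's SQL without the trailing comma -- with the comma appended,
# sqlite3 rejects the statement with an OperationalError (syntax error).
sql = "insert into search(name, url) VALUES (:name, :url)"
conn.execute(sql, {"name": "datasette", "url": "https://datasette.io"})
conn.commit()

print(conn.execute("select * from search").fetchall())
# [(1, 'datasette', 'https://datasette.io')]
```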
| 465731062 | MDU6SXNzdWU0NjU3MzEwNjI= | 555 | Static mounts with relative paths not working | abdusco 3243482 | closed | 0 | 0 | 2019-07-09T11:38:35Z | 2019-07-11T16:13:22Z | 2019-07-11T16:13:22Z | CONTRIBUTOR | Datasette fails to serve files from static mounts that are created using relative paths |
datasette 107914493 | issue | {
"url": "https://api.github.com/repos/simonw/datasette/issues/555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed |
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[pull_request] TEXT,
[body] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
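The page above corresponds to a straightforward query against this schema. A minimal sketch of running it with Python's sqlite3 module follows; the `github.db` filename is an assumption about a local copy of the database.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of the database
conn.row_factory = sqlite3.Row

# The same filter and ordering the page above uses
rows = conn.execute(
    """
    select id, number, title, state, created_at, updated_at, closed_at
    from issues
    where type = 'issue' and user = :user
    order by updated_at desc
    """,
    {"user": 3243482},
).fetchall()

for row in rows:
    print(row["number"], row["state"], row["title"])
```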