issues


44 rows where comments = 3, repo = 107914493 and state = "open" sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
1994861266 PR_kwDOBm6k_c5fhgOS 2209 Fix query for suggested facets with column named value rgieseke 198537 open 0     3 2023-11-15T14:13:30Z 2023-11-15T15:31:12Z   CONTRIBUTOR simonw/datasette/pulls/2209

See discussion in https://github.com/simonw/datasette/issues/2208


:books: Documentation preview :books:: https://datasette--2209.org.readthedocs.build/en/2209/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2209/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1978023780 I_kwDOBm6k_c515j9k 2205 request.post_vars() method obliterates form keys with multiple values simonw 9599 open 0   Datasette 1.0a-next 8755003 3 2023-11-05T23:25:08Z 2023-11-06T04:10:34Z   OWNER  

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139

In GET requests you can do ?foo=1&foo=2 - you can do the same in POST requests, but the dict() call here eliminates those duplicates.

You can't even work around it by calling post_body() and implementing your own custom parsing, because of:

  • #2204
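A minimal sketch of the mechanism being described, using urllib.parse stand-ins rather than Datasette's actual ASGI code:

```python
from urllib.parse import parse_qs, parse_qsl

body = "foo=1&foo=2&bar=3"

# dict() over (key, value) pairs keeps only the last value per key:
print(dict(parse_qsl(body)))  # {'foo': '2', 'bar': '3'} - the first foo is gone

# parse_qs preserves every value as a list instead:
print(parse_qs(body))  # {'foo': ['1', '2'], 'bar': ['3']}
```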

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2205/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1884330740 PR_kwDOBm6k_c5ZszDF 2174 Use $DATASETTE_INTERNAL in absence of --internal asg017 15178711 open 0     3 2023-09-06T16:07:15Z 2023-09-08T00:46:13Z   CONTRIBUTOR simonw/datasette/pulls/2174

refs 2157, specifically this comment

Passing in --internal my_internal.db over and over again can get repetitive.

This PR adds a new configurable env variable DATASETTE_INTERNAL_DB_PATH. If it's defined, it is used as the path to the internal database. Users can still override this behavior by passing in their own --internal internal.db flag.

In draft mode for now, needs tests and documentation.

Side note: Maybe we can have a section in the docs that lists all the "configuration environment variables" that Datasette respects? I did a quick grep and found:

  • DATASETTE_LOAD_PLUGINS
  • DATASETTE_SECRETS

:books: Documentation preview :books:: https://datasette--2174.org.readthedocs.build/en/2174/
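A sketch of the general pattern the PR describes, using click's built-in envvar support (the real implementation may differ; the flag and variable names are taken from the PR text):

```python
import click

# --internal wins when passed explicitly; otherwise click falls back to
# the DATASETTE_INTERNAL_DB_PATH environment variable, then to None.
@click.command()
@click.option(
    "--internal",
    envvar="DATASETTE_INTERNAL_DB_PATH",
    type=click.Path(),
    default=None,
    help="Path to a persistent internal database",
)
def serve(internal):
    if internal:
        click.echo(f"Using internal database at {internal}")
    else:
        click.echo("Using in-memory internal database")

if __name__ == "__main__":
    serve()
```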

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2174/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1865869205 I_kwDOBm6k_c5vNueV 2157 Proposal: Make the `_internal` database persistent, customizable, and hidden asg017 15178711 open 0     3 2023-08-24T20:54:29Z 2023-08-31T02:45:56Z   CONTRIBUTOR  

The current _internal database is used by Datasette core to cache info about databases/tables/columns/foreign keys of databases in a Datasette instance. It's a temporary database created at startup that can only be seen by the root user. See an example _internal DB here, after logging in as root.

The current _internal database has a few rough edges:

  • It's part of datasette.databases, so many plugins have to specifically exclude _internal from their queries (examples here)
  • It's only used by Datasette core and can't be used by plugins or 3rd parties
  • It's created from scratch at startup and stored in memory. That's fine (the performance is great), but persistent storage would be nice.

Additionally, it would be really nice if plugins could use this _internal database to store their own configuration, secrets, and settings. For example:

  • datasette-auth-tokens creates a _datasette_auth_tokens table to store auth token metadata. This could be moved into the _internal database to avoid writing to the guest database
  • datasette-socrata creates a socrata_imports table, which could also live in _internal
  • datasette-upload-csvs creates a _csv_progress_ table, which can be in _internal
  • datasette-write-ui wants to have the ability for users to toggle whether a table appears editable, which can be either in datasette.yaml or on-the-fly by storing config in _internal

In general, these are specific features that Datasette plugins would have access to if there was a central internal database they could read/write to:

  • Dynamic configuration. Changing the datasette.yaml file works, but restarting the server every time is tedious. Plugins can define their own configuration table in _internal, and could read/write to it to store configuration based on user actions (cell menu click, API access, etc.)
  • Caching. If a plugin or Datasette Core needs to cache some expensive computation, they can store it inside _internal (possibly as a temporary table) instead of managing their own caching solution.
  • Audit logs. If a plugin performs some sensitive operations, they can log usage info to _internal for others to audit later.
  • Long running process status. Many plugins (datasette-upload-csvs, datasette-litestream, datasette-socrata) perform tasks that run for a really long time, and want to give continuous status updates to the user. They can store this info inside _internal
  • Safer authentication. Passwords and authentication plugins usually store credentials/hashed secrets in configuration files or environment variables, which can be difficult to handle. Now, they can store them in _internal

Proposal

  • We remove _internal from datasette.databases property.
  • We add new datasette.get_internal_db() method that returns the _internal database, for plugins to use
  • We add a new --internal internal.db flag. If provided, then the _internal DB will be sourced from that file, and further updates will be persisted to that file (instead of an in-memory database)
  • When creating internal.db, create a new _datasette_internal table to mark it as a "datasette internal database"
  • In datasette serve, we check for the existence of the _datasette_internal table. If it exists, we assume the user provided that file in error and raise an error. This is to limit the chance that someone accidentally publishes their internal database to the internet. We could optionally add a --unsafe-allow-internal flag (or database plugin) that allows someone to do this if they really want to. (A sketch of this startup check follows the list.)
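A sketch of what that startup check could look like (assumed names and behavior, not an actual implementation):

```python
import sqlite3

def check_not_internal(path, unsafe_allow_internal=False):
    # Refuse to serve a database carrying the _datasette_internal
    # marker table unless the override flag is set.
    conn = sqlite3.connect(path)
    try:
        row = conn.execute(
            "select 1 from sqlite_master"
            " where type = 'table' and name = '_datasette_internal'"
        ).fetchone()
    finally:
        conn.close()
    if row and not unsafe_allow_internal:
        raise ValueError(
            f"{path} looks like a Datasette internal database - "
            "pass --unsafe-allow-internal to serve it anyway"
        )
```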

New features unlocked with this

These features don't really need a standardized _internal table per se (plugins could already build their own long-term storage if they really wanted to), but it would make it much simpler to create these kinds of features with a persistent application database.

  • datasette-comments : A plugin for commenting on rows or specific values in a database. Comment contents + threads + email notification info can be stored in _internal
  • Bookmarks: "bookmarked" SQL queries could be stored in _internal, as could the data for a URL link shortener
  • Webhooks: If a plugin wants to either consume a webhook or create a new one, they can store hashed credentials/API endpoints in _internal
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2157/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1838469176 I_kwDOBm6k_c5tlNA4 2127 Context base class to support documenting the context simonw 9599 open 0   Datasette 1.0 3268330 3 2023-08-07T00:01:02Z 2023-08-10T01:30:25Z   OWNER  

This idea first came up here: - https://github.com/simonw/datasette/issues/2112#issuecomment-1652751140

If datasette.render_template(...) takes an optional Context subclass as an alternative to a context dictionary, I could then use dataclasses to define the context made available to specific templates - which then gives me something I can use to help document what they are.

Also refs: - https://github.com/simonw/datasette/issues/1510
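A sketch of the dataclass idea (hypothetical class and field names): each template gets a dataclass that both constrains what the view can pass and doubles as documentation of the template's context.

```python
from dataclasses import asdict, dataclass, field

@dataclass
class Context:
    """Base class for documented template contexts."""

@dataclass
class DatabaseContext(Context):
    database: str
    tables: list = field(default_factory=list)
    hidden_count: int = 0

# render_template(...) could accept either a plain dict or a Context,
# converting the dataclass with asdict() before rendering:
context = DatabaseContext(database="fixtures", tables=["facetable"])
print(asdict(context))
```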

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2127/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1822939274 I_kwDOBm6k_c5sp9iK 2113 Implement and document extras for the new query view page simonw 9599 open 0   Datasette 1.0a-next 8755003 3 2023-07-26T18:24:01Z 2023-08-09T17:35:22Z   OWNER  
  • 2109

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2113/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1811824307 I_kwDOBm6k_c5r_j6z 2105 When reverse proxying datasette with nginx a URL element gets erroneously added aki-k 2235371 open 0     3 2023-07-19T12:16:53Z 2023-07-21T21:17:09Z   NONE  

I use this nginx config:

```
location /datasette-llm { return 302 /datasette-llm/; }

location /datasette-llm/ {
  proxy_set_header Upgrade           $http_upgrade;
  proxy_set_header Connection        "Upgrade";
  proxy_http_version 1.1;
  proxy_set_header X-Real-IP         $remote_addr;
  proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto https;
  proxy_set_header X-Forwarded-Host  $http_host;
  proxy_set_header Host              $host;
  proxy_max_temp_file_size           0;
  proxy_pass                         http://127.0.0.1:8001/datasette-llm/;
  proxy_redirect                     http:// https://;
  proxy_buffering off;
  proxy_request_buffering off;
  proxy_set_header Origin            '';
  client_max_body_size 0;
  auth_basic                         "datasette-llm";
  auth_basic_user_file               /etc/nginx/custom-userdb;
}
```

Then I start datasette with this command:

```
datasette serve --setting base_url /datasette-llm/ $(llm logs path)
```

Everything else works right, except the links in "This data as json, CSV". They get an extra URL element "datasette-llm", like this:

https://192.168.1.3:5432/datasette-llm/datasette-llm/logs.json?sql=select+*+from+_llm_migrations

https://192.168.1.3:5432/datasette-llm/datasette-llm/logs.csv?sql=select+*+from+_llm_migrations&_size=max

When I remove that extra "datasette-llm" from the URL, those links work too.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2105/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1054244712 I_kwDOBm6k_c4-1n9o 1510 Datasette 1.0 documented template context (maybe via API docs) simonw 9599 open 0   Datasette 1.0 3268330 3 2021-11-15T23:23:58Z 2023-06-28T02:05:21Z   OWNER  

Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1510/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1531991339 I_kwDOBm6k_c5bUFUr 1989 Suggestion: Hiding columns pax 116795 open 0     3 2023-01-13T09:33:32Z 2023-03-31T06:18:05Z   NONE  

As there's already the possibility of hiding tables, I've run into the need to hide specific columns: data that's either not relevant for the public or can't be shown due to privacy reasons.
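One workaround available today, sketched here with hypothetical table and column names: expose a SQL view that selects only the public columns, and point people at the view instead of the table.

```python
import sqlite3

conn = sqlite3.connect("data.db")
conn.execute(
    """
    create view if not exists people_public as
    select id, name, city   -- omit email, address, etc.
    from people
    """
)
conn.commit()
```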

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1989/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1555701851 PR_kwDOBm6k_c5IdsD7 2003 Show referring tables and rows when the referring foreign key is compound fgregg 536941 open 0     3 2023-01-24T21:31:31Z 2023-01-25T18:44:42Z   CONTRIBUTOR simonw/datasette/pulls/2003

sqlite foreign keys can be compound, but that is not as well supported by datasette as single column foreign keys.

in particular,

  1. in a table view, there is not a link from the row to the referenced row if the foreign key is compound
  2. in a row view, there is no listing of tables and rows that refer to the focal row if those referencing foreign keys are compound.

Both of these issues are discussed in #1099.

This PR only fixes the second one, because it's not clear what the right UX is for the first issue.

Some things that might not be desirable about this approach.

  1. it changes the external API, by changing column => columns and other_column => other_columns (see inline comment for more discussion).
  2. There are various places where the plural foreign keys have to be checked for length and discarded or transformed to singular.
datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2003/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1525815985 I_kwDOBm6k_c5a8hqx 1983 Make CustomJSONEncoder a documented public API simonw 9599 open 0     3 2023-01-09T15:27:05Z 2023-01-09T15:35:58Z   OWNER  

It's used by datasette-geojson here: https://github.com/eyeseast/datasette-geojson/commit/902bf135a5a33a0dc8264673d00a59a67cb05152
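The pattern itself is the standard json.JSONEncoder subclass; a generic sketch follows (not necessarily what Datasette's CustomJSONEncoder does internally):

```python
import json
from datetime import date, datetime

class CustomJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        # Handle types the stock encoder rejects:
        if isinstance(obj, (date, datetime)):
            return obj.isoformat()
        if isinstance(obj, bytes):
            return obj.decode("utf-8", errors="replace")
        return super().default(obj)

print(json.dumps({"when": datetime(2023, 1, 9)}, cls=CustomJSONEncoder))
```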

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1983/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1082584499 I_kwDOBm6k_c5Ahu2z 1558 Redesign `facet_results` JSON structure prior to Datasette 1.0 simonw 9599 open 0   Datasette 1.0 3268330 3 2021-12-16T19:45:10Z 2023-01-09T15:31:17Z   OWNER  

Decision: as an initial fix I'm going to de-duplicate those keys by using tags__array etc - with a _2 on the end if that key is already used.

I'll open a separate issue to redesign this better for Datasette 1.0.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/625#issuecomment-996130862
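A sketch of the de-duplication rule as described (details assumed): derive a key like tags__array and append _2, _3, ... if it is already taken.

```python
def unique_key(column, suffix, used):
    key = candidate = f"{column}__{suffix}"
    n = 2
    while candidate in used:
        candidate = f"{key}_{n}"
        n += 1
    used.add(candidate)
    return candidate

used = set()
print(unique_key("tags", "array", used))  # tags__array
print(unique_key("tags", "array", used))  # tags__array_2
```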

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1558/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1175690070 I_kwDOBm6k_c5GE5tW 1676 Reconsider ensure_permissions() logic, can it be less confusing? simonw 9599 open 0   Datasette 1.0 3268330 3 2022-03-21T17:14:57Z 2022-12-02T01:23:40Z   OWNER  

Updated documentation: https://github.com/simonw/datasette/blob/e627510b760198ccedba9e5af47a771e847785c9/docs/internals.rst#await-ensure_permissionsactor-permissions

This method allows multiple permissions to be checked at once. It raises a datasette.Forbidden exception if any of the checks are denied before one of them is explicitly granted.

This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if either one of the following checks returns True or not a single one of them returns False:

That's pretty hard to understand! I'm going to open a separate issue to reconsider if this is a useful enough abstraction given how confusing it is.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1675#issuecomment-1074177827
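A sketch of the documented semantics (helper names assumed): walk the checks in order, where an explicit deny raises immediately and an explicit grant short-circuits.

```python
class Forbidden(Exception):
    pass

def ensure_permissions(check_results):
    for result in check_results:  # each check yields True, False or None
        if result is False:
            raise Forbidden("permission denied")
        if result is True:
            return  # explicitly granted
    raise Forbidden("no check granted permission")  # default deny (assumed)
```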

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1676/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1410305897 I_kwDOBm6k_c5UD49p 1845 Reconsider the Datasette first-run experience simonw 9599 open 0     3 2022-10-15T22:21:31Z 2022-10-16T08:54:53Z   OWNER  

Had a really interesting conversation today about how hard it is to get from "I installed Datasette" to "I've done something useful with it": https://news.ycombinator.com/item?id=33216789#33218590

Spending some time focusing on that first-run experience feels very worthwhile.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1845/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1386917344 PR_kwDOBm6k_c4_prjN 1823 Keyword-only arguments for a bunch of internal methods simonw 9599 open 0     3 2022-09-27T00:44:59Z 2022-10-05T04:37:54Z   OWNER simonw/datasette/pulls/1823

Refs #1822


:books: Documentation preview :books:: https://datasette--1823.org.readthedocs.build/en/1823/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1823/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1386854246 I_kwDOBm6k_c5Sqbdm 1822 Switch to keyword-only arguments for a bunch of internal methods simonw 9599 open 0   Datasette 1.0 3268330 3 2022-09-26T23:20:38Z 2022-09-27T00:44:04Z   OWNER  

This is a good idea, and one that needs to happen before Datasette 1.0:

While you are adding features, would you be future-proofing your APIs if you switched some arguments over to keyword-only arguments, or would that be too disruptive?

Thinking out loud:

```python
async def render_template(
    self, templates, *, context=None, plugin_context=None, request=None, view_name=None
):
```

Originally posted by @jefftriplett in https://github.com/simonw/datasette/issues/1817#issuecomment-1256781274

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1822/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
838245338 MDU6SXNzdWU4MzgyNDUzMzg= 1272 Unit tests for the Dockerfile simonw 9599 open 0     3 2021-03-23T01:36:29Z 2022-07-29T10:22:59Z   OWNER  

Working on the Dockerfile in #1249 made me wish for automated tests - to confirm that it boots up correctly, can run SpatiaLite and doesn't have weird bugs like the /db page hanging.

These could run in CI too, but maybe only if the Dockerfile is updated.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1272/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1060631257 I_kwDOBm6k_c4_N_LZ 1528 Add new `"sql_file"` key to Canned Queries in metadata? asg017 15178711 open 0     3 2021-11-22T21:58:01Z 2022-06-10T03:23:08Z   CONTRIBUTOR  

Currently for canned queries, you have to inline SQL in your metadata.yaml like so:

```yaml
databases:
  fixtures:
    queries:
      neighborhood_search:
        sql: |-
          select neighborhood, facet_cities.name, state
          from facetable
            join facet_cities on facetable.city_id = facet_cities.id
          where neighborhood like '%' || :text || '%'
          order by neighborhood
        title: Search neighborhoods
```

This works fine, but for a few reasons I usually have my canned queries already written in separate .sql files. I'd like to re-use those instead of re-writing them.

So, I'd like to see a new "sql_file" key that works like so:

metadata.yaml:

```yaml
databases:
  fixtures:
    queries:
      neighborhood_search:
        sql_file: neighborhood_search.sql
        title: Search neighborhoods
```

neighborhood_search.sql:

```sql
select neighborhood, facet_cities.name, state
from facetable
  join facet_cities on facetable.city_id = facet_cities.id
where neighborhood like '%' || :text || '%'
order by neighborhood
```

Both of these would work in the exact same way, where Datasette would instead open + include neighborhood_search.sql on startup.
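A sketch of how the proposed key could be resolved at startup (hypothetical helper, assuming paths are relative to the metadata file):

```python
from pathlib import Path

def resolve_canned_query(query, metadata_dir):
    # Read sql_file once at startup and treat it like an inline "sql".
    if "sql_file" in query:
        sql = (Path(metadata_dir) / query["sql_file"]).read_text()
        query = {k: v for k, v in query.items() if k != "sql_file"}
        query["sql"] = sql
    return query
```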

A few reasons why I'd like to keep my canned queries SQL separate from metadata.yaml:

  • Keeping SQL in standalone SQL files means syntax highlighting and other text editor integrations in my code
  • Multiline strings in yaml, while functional, are a tad cumbersome and are hard to edit
  • Works well with other tools (can pipe .sql files into the sqlite3 CLI, or use with other SQLite clients easier)
  • Typically my canned queries are quite long compared to everything else in my metadata.yaml, so I'd love to separate it where possible

Let me know if this is a feature you'd like to see, I can try to send up a PR if this sounds right!

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1528/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
531502365 MDU6SXNzdWU1MzE1MDIzNjU= 646 Make database level information from metadata.json available in the index.html template lagolucas 18017473 open 0   Datasette 1.0 3268330 3 2019-12-02T19:55:10Z 2022-03-15T20:50:34Z   NONE  

Did a search on the issues here and didn't find anything related to what I want.

I want to take the database-level information from the JSON, like title, source and source_url, and use it on the index page.

I tried some small tweaks on the python and html files, but failed to get that result.

Is there a way? Thanks!

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/646/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1054243511 I_kwDOBm6k_c4-1nq3 1509 Datasette 1.0 JSON API (and documentation) simonw 9599 open 0   Datasette 1.0 3268330 3 2021-11-15T23:22:45Z 2022-03-15T20:38:56Z   OWNER  

The new JSON API in a stable, documented form.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1509/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1122451096 PR_kwDOBm6k_c4x_mXy 1626 Try test suite against macOS and Windows simonw 9599 open 0     3 2022-02-02T22:26:51Z 2022-02-03T01:22:44Z   OWNER simonw/datasette/pulls/1626

Refs #1625

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1626/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
534629631 MDU6SXNzdWU1MzQ2Mjk2MzE= 650 Add a glossary to the documentation simonw 9599 open 0     3 2019-12-09T00:23:45Z 2022-01-13T22:04:56Z   OWNER  

Call it glossary.rst - it can use a definition list something like this:

```rst
.. _glossary:

Glossary
========

Term
    A definition of the term.

Another term
    Another definition.
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/650/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
793002853 MDExOlB1bGxSZXF1ZXN0NTYwNzYwMTQ1 1204 WIP: Plugin includes simonw 9599 open 0     3 2021-01-25T03:59:06Z 2021-12-17T07:10:49Z   OWNER simonw/datasette/pulls/1204

Refs #1191

Next steps:

  • [ ] Get comfortable that this pattern is the right way to go
  • [ ] Implement it for all of the other pages, not just the table page
  • [ ] Add a new set of plugin tests that exercise ALL of these new hook locations
  • [ ] Document, then ship
datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1204/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1  
636511683 MDU6SXNzdWU2MzY1MTE2ODM= 830 Redesign register_facet_classes plugin hook simonw 9599 open 0   Datasette 1.0 3268330 3 2020-06-10T20:03:27Z 2021-12-16T19:58:22Z   OWNER  

Nothing uses this plugin hook yet, so the design is not yet proven.

I'm going to build a real plugin against it and use that process to inform any design changes that may need to be made.

I'll add a warning about this to the documentation.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/830/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1079111498 I_kwDOBm6k_c5AUe9K 1553 if csv export is truncated in non streaming mode set informative response header fgregg 536941 open 0     3 2021-12-13T22:50:44Z 2021-12-16T19:17:28Z   CONTRIBUTOR  

Streaming mode is currently not enabled for custom queries, so their results are truncated at the max row limit.

It would be great if, when a response is truncated, a header signalling that were set on the response.

I need to write some pagination code for getting full results back for a custom query, and it would make the code much better if I could reliably know when there is nothing more to limit/offset.
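A sketch of the limit/offset loop being described, using the fetch-one-extra-row trick to detect whether more rows remain (table name and page size are hypothetical):

```python
import sqlite3

PAGE_SIZE = 1000

def fetch_page(conn, offset):
    rows = conn.execute(
        "select * from logs limit ? offset ?",
        (PAGE_SIZE + 1, offset),
    ).fetchall()
    # If we got the extra row, there is at least one more page.
    return rows[:PAGE_SIZE], len(rows) > PAGE_SIZE
```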

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1553/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
959710008 MDU6SXNzdWU5NTk3MTAwMDg= 1419 `publish cloudrun` should deploy a more recent SQLite version fgregg 536941 open 0     3 2021-08-04T00:45:55Z 2021-08-05T03:23:24Z   CONTRIBUTOR  

I recently changed from deploying a datasette using datasette publish heroku to datasette publish cloudrun. A query that ran on the heroku site, now throws a syntax error on the cloudrun site.

I suspect this is because they are running different versions of sqlite3.

  • Heroku: sqlite3 3.31.1 (-/versions)
  • Cloudrun: sqlite3 3.27.2 (-/versions)

If so, it would be great to

  1. harmonize the sqlite3 versions across platforms
  2. update the docker files so as to update the sqlite3 version for cloudrun
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1419/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
646448486 MDExOlB1bGxSZXF1ZXN0NDQwNzM1ODE0 868 initial windows ci setup joshmgrant 702729 open 0     3 2020-06-26T18:49:13Z 2021-07-10T23:41:43Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/868

Picking up the work done on #557 with a new PR. Seeing if I can get this working.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/868/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
919822817 MDU6SXNzdWU5MTk4MjI4MTc= 1376 Official Datasette Docker image should use SQLite >= 3.31.0 (for generated columns) jcgregorio 1726460 open 0     3 2021-06-13T15:25:51Z 2021-06-13T15:39:37Z   NONE  

Trying to run datasette via the Docker container doesn't seem to work:

```
$ docker run -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db
Traceback (most recent call last):
  File "/usr/local/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 544, in serve
    asyncio.get_event_loop().run_until_complete(check_databases(ds))
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.9/site-packages/datasette/cli.py", line 584, in check_databases
    await database.execute_fn(check_connection)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 155, in execute_fn
    return await asyncio.get_event_loop().run_in_executor(
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/site-packages/datasette/database.py", line 153, in in_thread
    return fn(conn)
  File "/usr/local/lib/python3.9/site-packages/datasette/utils/__init__.py", line 892, in check_connection
    for r in conn.execute(
sqlite3.DatabaseError: malformed database schema (generated_columns) - near "AS": syntax error
```

I have confirmed that the downloaded fixtures.db database is fine:

```
[skia-public] jcgregorio@jcgregorio840 ~/Downloads
$ sqlite3 fixtures.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> pragma integrity_check;
ok
sqlite>
```
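A quick way to check which SQLite version a given Python environment (for example, inside the container) is linked against; generated columns need SQLite 3.31.0 or newer:

```python
import sqlite3

print(sqlite3.sqlite_version)                     # linked SQLite library version
print(sqlite3.sqlite_version_info >= (3, 31, 0))  # generated columns supported?
```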

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1376/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
903902495 MDU6SXNzdWU5MDM5MDI0OTU= 1342 Improve `path_with_replaced_args()` and friends and document them simonw 9599 open 0     3 2021-05-27T15:18:28Z 2021-05-27T15:23:02Z   OWNER  

In order to cleanly implement this I need to expose the path_with_replaced_args utility function to Datasette's template engine. This is the first time this will become an exposed (and hence should-be-documented) API and I don't like its shape much.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1337#issuecomment-849721280

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1342/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
642296989 MDU6SXNzdWU2NDIyOTY5ODk= 856 Consider pagination of canned queries simonw 9599 open 0     3 2020-06-20T03:15:59Z 2021-05-21T14:22:41Z   OWNER  

The new canned_queries() plugin hook from #852 combined with plugins like https://github.com/simonw/datasette-saved-queries could mean that some installations end up with hundreds or even thousands of canned queries. I should consider pagination or some other way of ensuring that this doesn't cause performance problems for Datasette.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/856/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
741862364 MDU6SXNzdWU3NDE4NjIzNjQ= 1090 Custom widgets for canned query forms simonw 9599 open 0     3 2020-11-12T19:21:07Z 2021-03-27T16:25:25Z   OWNER  

This is an idea that was cut from the first version of writable canned queries:

I really want the option to use a <textarea> for a specific value.

Idea: metadata syntax like this:

json { "databases": { "my-database": { "queries": { "add_twitter_handle": { "sql": "insert into twitter_handles (username) values (:username)", "write": true, "params": { "username": { "widget": "textarea" } } } } } } }

I can ship with some default widgets and provide a plugin hook for registering extra widgets.

This opens up some really exciting possibilities for things like map widgets that let you draw polygons.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/698#issuecomment-608125928

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1090/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
837350092 MDU6SXNzdWU4MzczNTAwOTI= 1270 Try implementing SQLite timeouts using .interrupt() instead of using .set_progress_handler() simonw 9599 open 0     3 2021-03-22T06:00:17Z 2021-03-23T16:45:39Z   OWNER  

Maybe I could implement SQLite query timeouts using the interrupt() method instead of the progress handler hack I'm currently using?

https://stackoverflow.com/questions/43240496/python-sqlite3-how-to-quickly-and-cleanly-interrupt-long-running-query-with-e has some tips.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/1268#issuecomment-803764919
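A sketch of the interrupt() idea (not Datasette's shipped code): arm a timer that interrupts the connection if the query runs too long, which makes the blocked execute() raise sqlite3.OperationalError.

```python
import sqlite3
import threading

def execute_with_timeout(conn, sql, timeout_s=1.0):
    # interrupt() is documented as safe to call from another thread.
    timer = threading.Timer(timeout_s, conn.interrupt)
    timer.start()
    try:
        return conn.execute(sql).fetchall()
    finally:
        timer.cancel()
```

Note the caveat recorded in #1271 below: the first attempt didn't actually cancel the query due to a thread-local confusion around connections.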

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1270/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
837956424 MDExOlB1bGxSZXF1ZXN0NTk4MjEzNTY1 1271 Use SQLite conn.interrupt() instead of sqlite_timelimit() simonw 9599 open 0     3 2021-03-22T17:34:20Z 2021-03-22T21:49:27Z   OWNER simonw/datasette/pulls/1271

Refs #1270, #1268, #1249

Before merging this I need to do some more testing (to make sure that expensive queries really are properly cancelled). I also need to delete a bunch of code relating to the old mechanism of cancelling queries.

[See comment below: this doesn't actually cancel the query due to a thread-local confusion]

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1271/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1  
769520939 MDU6SXNzdWU3Njk1MjA5Mzk= 1149 Make it easier to theme Datasette with CSS simonw 9599 open 0   Datasette 1.0 3268330 3 2020-12-17T05:01:26Z 2021-03-22T21:43:16Z   OWNER  

I want to theme https://datasette.io/ so that when you visit https://datasette.io/content (the Datasette UI part of it) the navigation from the parent site is used.

I tried dropping in a base.html template like this:

```html
{% extends "page_base.html" %}

{% block base_extra_head %}
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
{% for url in extra_css_urls %}
  <link rel="stylesheet" href="{{ url.url }}"{% if url.sri %} integrity="{{ url.sri }}" crossorigin="anonymous"{% endif %}>
{% endfor %}
{% for url in extra_js_urls %}
  <script src="{{ url.url }}"{% if url.sri %} integrity="{{ url.sri }}" crossorigin="anonymous"{% endif %}></script>
{% endfor %}
{% block extra_head %}{% endblock %}
{% endblock %}

{% block extra_body_end %}
{% include "_close_open_menus.html" %}

{% for body_script in body_scripts %}
  <script>{{ body_script }}</script>
{% endfor %}
{% endblock %}
```

But this resulted in pages looking like this:

Note that the cog menu is broken and the filter UI is unstyled. To get these working correctly I would need to copy over a whole lot of Datasette's default CSS - and that means that when Datasette changes in the future those pages could break in subtle ways.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1149/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
834602299 MDU6SXNzdWU4MzQ2MDIyOTk= 1262 Plugin hook that could support 'order by random()' for table view henry501 19328961 open 0     3 2021-03-18T10:02:01Z 2021-03-18T17:55:01Z   NONE  

I am frequently using Datasette to quickly get a visual impression for a table without reviewing it in its entirety. Because I have some groups of similar records, the default sorting options mean that each page is very similar and not representative of the full dataset. The current interface allows sorting by columns, but random sorting is only available via custom SQL.

Maybe this could be a button or link.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1262/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
799663959 MDU6SXNzdWU3OTk2NjM5NTk= 1213 gzip support for HTML (and JSON) responses simonw 9599 open 0     3 2021-02-02T20:36:28Z 2021-02-02T20:41:55Z   OWNER  

This page https://datasette-tiles-demo.datasette.io/San_Francisco/tiles is 2MB because of all of the base64 images. Gzipped it's 1.5MB.

Since Datasette is usually deployed without a frontend gzipping proxy, Datasette itself needs to solve for this.

Gzipping everything won't work because some endpoints - the all-rows CSV endpoint and the download-database endpoint - are streaming and hence can't be buffered-and-gzipped.
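A sketch of that selective behavior (assumed helper, not Datasette's implementation): compress buffered HTML/JSON bodies when the client allows it, and leave streaming responses untouched.

```python
import gzip

def maybe_gzip(body: bytes, accept_encoding: str, streaming: bool):
    if streaming or "gzip" not in accept_encoding:
        return body, {}
    return gzip.compress(body), {"content-encoding": "gzip"}

body, headers = maybe_gzip(b"<html>...</html>" * 1000, "gzip, br", streaming=False)
print(len(body), headers)  # much smaller, with the gzip header set
```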

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1213/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
751195017 MDU6SXNzdWU3NTExOTUwMTc= 1111 Accessing a database's `.json` is slow for very large SQLite files asg017 15178711 open 0     3 2020-11-26T00:27:27Z 2021-01-04T19:57:53Z   CONTRIBUTOR  

I have a SQLite DB that's pretty large, 23GB and something like 300 million rows. I expect that most queries I run on it will be slow, which is fine, but there are some things that Datasette does that make working with the DB very slow. Specifically, when I access the .json metadata for a table (which I believe comes from datasette/views/database.py), it takes 43 seconds for the request to come in:

```bash
$ time curl localhost:9999/out.json
{"database": "out", "size": 24291454976, "tables": [{"name": "PageviewsHour", "columns": ["file", "code", "page", "pageviews"], "primary_keys": [], "count": null, "hidden": false, "fts_table": null, "foreign_keys": {"incoming": [], "outgoing": [{"other_table": "PageviewsHourFiles", "column": "file", "other_column": "file_id"}]}, "private": false}, {"name": "PageviewsHourFiles", "columns": ["file_id", "filename", "sha256", "size", "day", "hour"], "primary_keys": ["file_id"], "count": null, "hidden": false, "fts_table": null, "foreign_keys": {"incoming": [{"other_table": "PageviewsHour", "column": "file_id", "other_column": "file"}], "outgoing": []}, "private": false}, {"name": "sqlite_sequence", "columns": ["name", "seq"], "primary_keys": [], "count": 1, "hidden": false, "fts_table": null, "foreign_keys": {"incoming": [], "outgoing": []}, "private": false}], "hidden_count": 0, "views": [], "queries": [], "private": false, "allow_execute_sql": true, "query_ms": 43340.23213386536}

real    0m43.417s
user    0m0.006s
sys     0m0.016s
```

I suspect this is because a COUNT(*) is happening under the hood, which, when I run it through sqlite directly, does take around the same time:

```bash
$ time sqlite3 out.db < <(echo "select count(*) from PageviewsHour;")
362794272

real    0m44.523s
user    0m2.497s
sys     0m6.703s
```

I'm using the .json request in the Observable Datasette Client to 1) verify that a link passed in is a reachable Datasette instance, and 2) as a quick way to look at metadata for a db. A few different solutions I can think of:

  1. Have some other endpoint, like /-/datasette.json, that the Observable Datasette client can fetch from to verify that the passed-in URL is a valid Datasette (doesn't solve the slow problem, feel free to split this issue into 2)
  2. Have a way to turn off table counts when accessing a database's .json view, like ?no_count=1 or something
  3. Maybe have a timeout on the table_counts() function if it takes too long. Which is odd, because it seems like it already does that (I think?); I can debug a little more if that's the case

More than happy to debug further, or send a PR if you like one of the proposals above!
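A sketch of one mitigation in the spirit of options 2 and 3 (assumed approach, not the project's chosen fix): cap the count so huge tables return quickly with "at least N" semantics.

```python
def limited_count(conn, table, limit=10_000):
    (count,) = conn.execute(
        f"select count(*) from (select 1 from [{table}] limit ?)",
        (limit,),
    ).fetchone()
    return count if count < limit else None  # None = too many to count quickly
```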

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1111/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
735852274 MDU6SXNzdWU3MzU4NTIyNzQ= 1082 DigitalOcean buildpack memory errors for large sqlite db? justmars 39538958 open 0     3 2020-11-04T06:35:32Z 2020-11-04T19:35:44Z   NONE  
  1. Have a sqlite db stored in Dropbox
  2. Previously tried the Digital Ocean build pack minimal approach (e.g. Procfile, requirements.txt, bin/post_compile)
  3. bin/post_compile with wget from Dropbox
  4. download of large sqlite db is successful
  5. the log reveals that, when building the Docker container, Digital Ocean runs out of memory for a 5GB+ sqlite db but works fine for a 2GB+ sqlite db
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1082/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
613422636 MDU6SXNzdWU2MTM0MjI2MzY= 760 Way of seeing full schema for a database simonw 9599 open 0     3 2020-05-06T15:46:08Z 2020-05-06T23:49:06Z   OWNER  

I find myself wanting to quickly figure out all of the BLOB columns in a database.

A /-/schema page showing the full schema (actually since it's per-database probably /dbname/-/schema or /-/schema/dbname) would be really handy.

It would need to be carefully constructed from various queries against sqlite_master - just doing select * from sqlite_master where type='table' isn't quite enough because I also want to show indexes, triggers etc.
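A sketch of that construction: pull tables, views, indexes and triggers straight out of sqlite_master.

```python
import sqlite3

conn = sqlite3.connect("data.db")
rows = conn.execute(
    """
    select type, name, tbl_name, sql from sqlite_master
    where sql is not null
    order by case type
        when 'table' then 0 when 'view' then 1
        when 'index' then 2 else 3 end, name
    """
)
for type_, name, tbl_name, sql in rows:
    print(f"-- {type_} {name} (on {tbl_name})")
    print(sql + ";")
```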

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/760/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
457147936 MDU6SXNzdWU0NTcxNDc5MzY= 512 "about" parameter in metadata does not appear when alone chrismp 7936571 open 0     3 2019-06-17T21:04:20Z 2019-10-11T15:49:13Z   NONE  

Here's an example of metadata I have for one database on datasette.

"Records-requests": { "tables": { "Some table": { "about": "This table has data." } } }

The text in about does not show up when I publish the data. But it shows up after I add a "source" parameter in the metadata.

Is this intended?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/512/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
465327844 MDU6SXNzdWU0NjUzMjc4NDQ= 553 Potential improvements to facet-by-date simonw 9599 open 0     3 2019-07-08T15:37:53Z 2019-07-08T15:41:55Z   OWNER  

In addition to #483 Tobias had some useful suggestions on Twitter:

https://twitter.com/rixxtr/status/1148253926476701696

I think for date facets, it might be more meaningful to order them by date, rather than by size? Or offer both? I'm definitely often interested in size-over-time, so https://data.rixx.de/django_tickets/tickets?_facet_date=created#facet-created … isn't all that helpful!

Screenshot of that link:

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/553/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
327395270 MDU6SXNzdWUzMjczOTUyNzA= 296 Per-database and per-table /-/ URL namespace simonw 9599 open 0     3 2018-05-29T16:23:13Z 2019-06-28T16:46:34Z   OWNER  

Initially this will be for subsets of /-/inspect and /-/metadata but it will also give us a URL namespace for future features like /-/facet (expanded list of a specific facet, linked to from ...) and /-/graph

To start:

  • /dbname/-/inspect
  • /dbname/-/metadata
  • /dbname/tablename/-/inspect
  • /dbname/tablename/-/metadata

This means we will no longer allow databases or tables to have the name "-" - I think that's OK

We will continue to support rows with a primary key of "-" at the following URL:

  • /dbname/tablename/-
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/296/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
451585764 MDU6SXNzdWU0NTE1ODU3NjQ= 499 Accessibility for non-techie newsies? chrismp 7936571 open 0     3 2019-06-03T16:49:37Z 2019-06-05T21:22:55Z   NONE  

Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out of the box, or as a plugin?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/499/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
400340905 MDU6SXNzdWU0MDAzNDA5MDU= 402 Use SQLITE_DBCONFIG_DEFENSIVE plus other recommendations from SQLite security docs simonw 9599 open 0     3 2019-01-17T15:52:28Z 2019-01-17T16:15:21Z   OWNER  

Was just having a skim through the datasette source. Given that the vuln impacts shadow tables, wasn't sure whether these are also covered by the immutable flag. Latest release introduced a SQLITE_DBCONFIG_DEFENSIVE flag that they recommend setting: https://sqlite.org/security.html

https://twitter.com/ignoredambience/status/1085926961413869568
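For reference, a sketch of setting the flag from Python. The setconfig() API shown here only exists in Python 3.12+; older versions would need a C extension or a library like apsw to set it.

```python
import sqlite3

conn = sqlite3.connect("data.db")
if hasattr(conn, "setconfig"):  # Python 3.12+
    conn.setconfig(sqlite3.SQLITE_DBCONFIG_DEFENSIVE, True)
```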

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/402/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);