issues
691 rows where state = "open" and type = "issue" sorted by updated_at descending
repo 16
- datasette 505
- sqlite-utils 85
- github-to-sqlite 17
- dogsheep-photos 16
- dogsheep-beta 14
- twitter-to-sqlite 13
- google-takeout-to-sqlite 8
- healthkit-to-sqlite 6
- apple-notes-to-sqlite 6
- pocket-to-sqlite 5
- evernote-to-sqlite 4
- swarm-to-sqlite 3
- hacker-news-to-sqlite 3
- inaturalist-to-sqlite 2
- genome-to-sqlite 2
- dogsheep.github.io 2
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1570375808 | I_kwDODFdgUs5dmgiA | 79 | Deploy demo job is failing due to rate limit | simonw 9599 | open | 0 | 2 | 2023-02-03T20:05:01Z | 2023-12-08T14:50:15Z | MEMBER | github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||||
1988525411 | I_kwDOCGYnMM52hn1j | 603 | Python 3.12 Bug report | constantinedev 1324252 | open | 0 | 1 | 2023-11-10T22:57:48Z | 2023-12-08T05:10:31Z | NONE | I started with the new Python version 3.12.0 and hit an error when connecting to a database. For now my workaround is to pin sqlite-utils at v3.32.1 under Python 3.12 [tested], but the sqlite3.Connection problem remains. This doesn't happen on Python versions down to 3.11 [tested]; only on Python 3.12.0. I have tested that this error comes from the sqlite3 connection. Let's fix it together. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
2029908157 | I_kwDOBm6k_c54_fC9 | 2214 | CSV export fails for some `text` foreign key references | precipice 2874 | open | 0 | 1 | 2023-12-07T05:04:34Z | 2023-12-07T07:36:34Z | NONE | I'm starting this issue without a clear reproduction in case someone else has seen this behavior, and to use the issue as a notebook for research. I'm using Datasette with the SWITRS data set, which is a California Highway Patrol collection of traffic incident data from the past decade or so. I receive data from them in CSV and want to work with it in Datasette, then export it to CSV for mapping in Felt.com. Their data makes extensive use of codes for incident column data. If I import the data and set up the integer foreign keys, everything works fine, but if I set up the text foreign keys, CSV export starts to fail. The foreign key configuration is as follows:

```
# Some tables use integer ids, like sensible tables do. Let's import them first
# since we favor them.
for TABLE in DAY_OF_WEEK CHP_SHIFT POPULATION SPECIAL_COND BEAT_TYPE COLLISION_SEVERITY
do
  sqlite-utils create-table records.db $TABLE id integer name text --pk=id
  sqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv
  sqlite-utils add-foreign-key records.db collisions $TABLE $TABLE id
  sqlite-utils create-index records.db collisions $TABLE
done

# Other tables use letter keys, like they were raised by WOLVES. Let's put them
# at the end of the import queue.
for TABLE in WEATHER_1 WEATHER_2 LOCATION_TYPE RAMP_INTERSECTION SIDE_OF_HWY \
  PRIMARY_COLL_FACTOR PCF_CODE_OF_VIOL PCF_VIOL_CATEGORY TYPE_OF_COLLISION MVIW \
  PED_ACTION ROAD_SURFACE ROAD_COND_1 ROAD_COND_2 LIGHTING CONTROL_DEVICE \
  STWD_VEHTYPE_AT_FAULT CHP_VEHTYPE_AT_FAULT PRIMARY_RAMP SECONDARY_RAMP
do
  sqlite-utils create-table records.db $TABLE key text name text --pk=key
  sqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv
  sqlite-utils add-foreign-key records.db collisions $TABLE $TABLE key
  sqlite-utils create-index records.db collisions $TABLE
done
```

You can see the full code and import script here: https://github.com/radical-bike-lobby/switrs-db If I run this code and then hit the CSV export link in the Datasette interface (the simple link or the "advanced" dialog), export fails after a small number of CSV rows are written. I am not seeing any detailed error messages, but this appears in the logging output:

```
INFO: 127.0.0.1:57885 - "GET /records/collisions.csv?_facet=PRIMARY_RD&PRIMARY_RD=ASHBY+AV&_labels=on&_size=max HTTP/1.1" 200 OK
Caught this error:
```

(No other output follows.) I've stared at the rows directly after the error occurs and can't yet see what is causing the problem. I'm going to set up a development environment and see if I get any more detailed error output, and then stare more at some problematic lines to see if I can get a simple reproduction. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
2028698018 | I_kwDOBm6k_c5463mi | 2213 | feature request: gzip compression of database downloads | fgregg 536941 | open | 0 | 1 | 2023-12-06T14:35:03Z | 2023-12-06T15:05:46Z | CONTRIBUTOR | At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed. This is similar to #1213, but in my case I don't need datasette to compress HTML and JSON because my CDN layer does it for me. However, Cloudflare at least will not compress a mimetype of "application" (see the list of compressible mimetypes: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
2023057255 | I_kwDOBm6k_c54lWdn | 2212 | Can't filter with numbers | fzakaria 605070 | open | 0 | 0 | 2023-12-04T05:26:29Z | 2023-12-04T05:26:29Z | NONE | I have a schema that uses numbers for a column (actually it's a boolean 1 or 0, but SQLite doesn't have a boolean type). I can't seem to get the facet to work, or even filter on this column. My guess is that Datasette is "stringifying" the number and it's not matching? Example: https://debian-sqlelf.fly.dev/debian/elf_symbols?_sort_desc=name&_facet=exported&exported=0 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
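For the row above (#2212): a quick way to test the "stringifying" guess is to compare integer and string filters directly in SQLite. A minimal sketch against a throwaway table — the `exported` column name comes from the linked example, everything else is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Declared INTEGER, so the column has integer affinity.
conn.execute("CREATE TABLE elf_symbols (name TEXT, exported INTEGER)")
conn.execute("INSERT INTO elf_symbols VALUES ('strlen', 1), ('_private', 0)")

# Filtering with the integer 0 matches the row...
print(conn.execute("SELECT count(*) FROM elf_symbols WHERE exported = 0").fetchone())    # (1,)

# ...and so does filtering with the string '0', because SQLite applies the
# column's numeric affinity to the text operand before comparing. So if the
# real column is declared INTEGER, stringification alone shouldn't break the
# filter - but a column declared with no type (or values stored as text)
# would fail the integer comparison.
print(conn.execute("SELECT count(*) FROM elf_symbols WHERE exported = '0'").fetchone())  # (1,)
```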
2019811176 | I_kwDOBm6k_c54Y99o | 2211 | Unreachable exception handlers for `sqlite3.OperationalError` | mattparmett 1214074 | open | 0 | 0 | 2023-12-01T00:50:22Z | 2023-12-01T00:50:22Z | NONE | There are several places where Because the exception will be caught by the first handler, the logic in the second handler is unreachable and will never be executed. If this is intended behavior, the second handler can be removed. If this is not intended, and the second handler should be the one that catches this exception, then This issue was found via a CodeQL query on the repository, and I've listed the occurrences found by the query below. There may be other instances of this issue in the code that were not surfaced by the query. I'd be happy to share the query if others would like to view or run it. One example: Other instances: https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L266-L270 https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L452-L456 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
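The pattern #2211 describes, reduced to a self-contained reproduction (the actual datasette code is only linked above; this is an illustrative sketch of the same shape):

```python
import sqlite3

def run(sql):
    try:
        try:
            return sqlite3.connect(":memory:").execute(sql).fetchall()
        except sqlite3.OperationalError:
            # The inner handler catches the exception first...
            return "inner handler"
    except sqlite3.OperationalError:
        # ...so this outer handler for the same exception type can only run
        # if the inner handler re-raises - otherwise it is dead code.
        return "outer handler (unreachable)"

print(run("slect 1"))  # -> inner handler
```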
564833696 | MDU6SXNzdWU1NjQ4MzM2OTY= | 670 | Prototype for Datasette on PostgreSQL | simonw 9599 | open | 0 | 15 | 2020-02-13T17:17:55Z | 2023-11-17T15:32:21Z | OWNER | I thought this would never happen, but now that I'm deep in the weeds of running SQLite in production for Datasette Cloud I'm starting to reconsider my policy of only supporting SQLite. Some of the factors making me think PostgreSQL support could be worth the effort:
- Serverless. I'm getting increasingly excited about writable-database use-cases for Datasette. If it could talk to PostgreSQL then users could easily deploy it on Heroku or other serverless providers that can talk to a managed RDS-style PostgreSQL.
- Existing databases. Plenty of organizations have PostgreSQL databases. They can export to SQLite using db-to-sqlite but that's a pretty big barrier to getting started - being able to run Datasette directly against an existing PostgreSQL database would be a much smoother experience. The above reasons feel strong enough to justify a prototype. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/670/reactions", "total_count": 19, "+1": 14, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 } |
||||||||
1994857251 | I_kwDOBm6k_c525xsj | 2208 | No suggested facets when a column named 'value' is included | rgieseke 198537 | open | 0 | 1 | 2023-11-15T14:11:17Z | 2023-11-15T14:18:59Z | CONTRIBUTOR | When a column named 'value' is included, no suggested facets are shown, because the query uses an alias of 'value'. Currently the following is shown (from https://latest.datasette.io/fixtures/facetable). When I add a column named 'value', only the JSON facets are processed. I think that not using aliases could be a solution (except if someone wants to use a column named like the alias). There is also a TODO with a similar question in the same file. I have not looked into that yet. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1994845152 | I_kwDOBm6k_c525uvg | 2207 | ModuleNotFoundError: No module named 'click_default_group' | honzajavorek 283441 | open | 0 | 0 | 2023-11-15T14:04:32Z | 2023-11-15T14:04:32Z | NONE | No matter what I do, I'm getting this error:
I have datasette in my dependencies like this:
I had the latest regular version (not pre-release) there originally, but the result was the same:
Full pyproject.toml is at https://github.com/honzajavorek/junior.guru/ Previously datasette worked for me, but I guess something got upgraded and now I can't even launch it. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1978603203 | I_kwDOCGYnMM517xbD | 602 | `sqlite-utils transform` removes the `AUTOINCREMENT` keyword | ArsTapatun 4472046 | open | 0 | 0 | 2023-11-06T08:48:43Z | 2023-11-06T08:48:43Z | NONE | Context: we ran into this bug randomly, noticing that IDs of deleted rows were being reused (the exact behavior AUTOINCREMENT exists to prevent). Reproducible example. Original database:

```sql
$ sqlite3 test.db << EOF
CREATE TABLE mytable (
    col1 INTEGER PRIMARY KEY AUTOINCREMENT,
    col2 TEXT NOT NULL
)
EOF

$ sqlite3 test.db ".schema mytable"
CREATE TABLE mytable (
    col1 INTEGER PRIMARY KEY AUTOINCREMENT,
    col2 TEXT NOT NULL
);
```

Modified database after `sqlite-utils transform`:

```sql
$ sqlite-utils transform test.db mytable --rename col2 renamedcol2

$ sqlite3 test.db "SELECT sql FROM sqlite_master WHERE name = 'mytable';"
CREATE TABLE IF NOT EXISTS "mytable" (
    [col1] INTEGER PRIMARY KEY,
    [renamedcol2] TEXT NOT NULL
);
```
 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1978023780 | I_kwDOBm6k_c515j9k | 2205 | request.post_vars() method obliterates form keys with multiple values | simonw 9599 | open | 0 | Datasette 1.0a-next 8755003 | 3 | 2023-11-05T23:25:08Z | 2023-11-06T04:10:34Z | OWNER | In GET requests you can do You can't even try calling |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1978022687 | I_kwDOBm6k_c515jsf | 2204 | request.post_body() can only be called once | simonw 9599 | open | 0 | 0 | 2023-11-05T23:22:03Z | 2023-11-05T23:23:23Z | OWNER | This code here: It consumes the messages, which means if you try to call it a second time you won't be able to get at the body. This is efficient - we don't end up with a copy of the body hanging around in memory when we don't need it. Potential solution: cache the body on the request object the first time it is read, so later calls return the cached copy. Potential optimization: only do this for bodies that are shorter than a certain threshold - maybe 1MB - and raise an exception if you attempt to read a larger body twice. I'm a bit nervous about that option though, since it could result in errors that don't show up in testing but do show up in production. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
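A sketch of the "potential solution" from #2204 - caching the body after the first read - written against a bare ASGI `receive` callable (a hypothetical helper, not Datasette's actual implementation):

```python
async def read_body_once(scope, receive):
    """Read the ASGI request body, caching it on the scope so a second call
    returns the same bytes instead of waiting on a drained channel."""
    if "_cached_body" in scope:
        return scope["_cached_body"]
    body = b""
    more_body = True
    while more_body:
        message = await receive()
        body += message.get("body", b"")
        more_body = message.get("more_body", False)
    # Trades memory for repeatability - exactly the tension the issue notes.
    scope["_cached_body"] = body
    return body
```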
959137143 | MDU6SXNzdWU5NTkxMzcxNDM= | 1415 | feature request: document minimum permissions for service account for cloudrun | fgregg 536941 | open | 0 | 4 | 2021-08-03T13:48:43Z | 2023-11-05T16:46:59Z | CONTRIBUTOR | Thanks again for such a powerful project. For deploying to cloudrun from github actions, I'd like to create a service account with minimal permissions. It would be great to document what those minimum permission that need to be set in the IAM. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1415/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1977726056 | I_kwDOBm6k_c514bRo | 2203 | custom plugin not seen as sql function | LyzardKing 7113541 | open | 0 | 0 | 2023-11-05T10:30:19Z | 2023-11-05T10:30:19Z | NONE | Hi, I'm not sure if this is the right repo for this issue. I'm using datasette with the parquet plugin (to read a DuckDB file) and the jellyfish plugin. Both work perfectly. Now I need to create a simple plugin that uses the python rouge package and returns a similarity score (similarly to how the jellyfish plugin works).
If I create a custom plugin, even the example hello_world one, copied directly from the tutorial, I get the following error:
Since the jellyfish plugin doesn't do anything more complex, I'm wondering if there is some other kind of issue with my setup. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
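For reference on #2203: the documented way for a Datasette plugin to add a custom SQL function is the `prepare_connection` hook. A minimal sketch, with the reporter's rouge-based scoring replaced by a trivial placeholder:

```python
from datasette import hookimpl

def similarity_score(a, b):
    # Placeholder for the real rouge-based comparison.
    return 1.0 if a == b else 0.0

@hookimpl
def prepare_connection(conn):
    # Makes similarity_score(a, b) callable from SQL queries.
    conn.create_function("similarity_score", 2, similarity_score)
```

If even the tutorial's hello_world example fails, the plugin is most likely not being discovered at all (not installed into the same environment, or missing its entry point) rather than failing at registration.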
1977155641 | I_kwDOCGYnMM512QA5 | 601 | Move plugin directory into documentation | simonw 9599 | open | 0 | 0 | 2023-11-04T04:07:52Z | 2023-11-04T04:07:52Z | OWNER | https://github.com/simonw/sqlite-utils-plugins should be in the official documentation. I can use the same pattern as https://llm.datasette.io/en/stable/plugins/directory.html |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1955676270 | I_kwDOBm6k_c50kUBu | 2201 | Discord invite link is invalid | andrewsanchez 11708906 | open | 0 | 0 | 2023-10-21T21:50:05Z | 2023-10-21T21:50:05Z | NONE | https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw and returns the following: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1163369515 | I_kwDOBm6k_c5FV5wr | 1655 | query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data | fgregg 536941 | open | 0 | 8 | 2022-03-09T00:56:40Z | 2023-10-17T21:53:17Z | CONTRIBUTOR | The page is using about 400 MB in Firefox 97 on Mac OS X. If you download the HTML for the page it's about 11 MB, and the CSV for the data is about 1 MB. It's using over 1 GB on Chrome 99. I found this because I was trying to figure out why editing the SQL was getting very slow. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1943259395 | I_kwDOEhK-wc5z08kD | 16 | time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ' | linonetwo 3746270 | open | 0 | 0 | 2023-10-14T13:24:39Z | 2023-10-14T13:24:39Z | NONE |
The .enex file was exported by the Evernote Mac client. |
evernote-to-sqlite 303218369 | issue | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
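The title of #16 shows a timestamp with dashes, colons and milliseconds being parsed by a format string that expects none of them. A sketch of a parser that accepts both variants (illustrative; which Evernote client versions emit which format is an assumption):

```python
from datetime import datetime

def parse_enex_timestamp(value):
    # e.g. "2014-11-21T11:44:12.000Z" (from the error) or "20141121T114412Z"
    # (what the original format string expects).
    for fmt in ("%Y-%m-%dT%H:%M:%S.%fZ", "%Y%m%dT%H%M%SZ"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized timestamp: {value!r}")

print(parse_enex_timestamp("2014-11-21T11:44:12.000Z"))
```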
1940346034 | I_kwDOBm6k_c5zp1Sy | 2199 | Detailed upgrade instructions for metadata.yaml -> datasette.yaml | simonw 9599 | open | 0 | Datasette 1.0 3268330 | 7 | 2023-10-12T16:21:25Z | 2023-10-12T22:08:42Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/2190#issuecomment-1759947021 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1931794126 | I_kwDOBm6k_c5zJNbO | 2198 | --load-extension=spatialite not working with Windows | hcarter333 363004 | open | 0 | 0 | 2023-10-08T12:50:22Z | 2023-10-08T12:50:22Z | NONE | Using each of
and
and
I got the error:

```
File "C:\Users\m3n7es\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\datasette\database.py", line 209, in in_thread
    self.ds._prepare_connection(conn, self.name)
File "C:\Users\m3n7es\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\datasette\app.py", line 596, in _prepare_connection
    conn.execute("SELECT load_extension(?, ?)", [path, entrypoint])
sqlite3.OperationalError: The specified module could not be found.
```

I finally tried modifying the code in app.py to read:

```python
def _prepare_connection(self, conn, database):
    conn.row_factory = sqlite3.Row
    conn.text_factory = lambda x: str(x, "utf-8", "replace")
    if self.sqlite_extensions:
        conn.enable_load_extension(True)
        for extension in self.sqlite_extensions:
            # "extension" is either a string path to the extension
            # or a 2-item tuple that specifies which entrypoint to load.
            # if isinstance(extension, tuple):
            #     path, entrypoint = extension
            #     conn.execute("SELECT load_extension(?, ?)", [path, entrypoint])
            # else:
            conn.execute("SELECT load_extension('C:\Windows\System32\mod_spatialite.dll')")
```

At which point the counties example worked. Is there a correct way to install/use the extension on Windows? My method will cause issues if there's a second extension to be used. On an unrelated note, my next step is to figure out how to write a query across the two loaded databases supplied from the command line:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1920416843 | I_kwDOCGYnMM5ydzxL | 597 | sqlite-utils insert-files should be able to convert fields | grimnight 1737541 | open | 0 | 0 | 2023-09-30T22:20:47Z | 2023-09-30T22:20:47Z | NONE | Currently I'm using both `insert-files` and `convert` to achieve this:

```shell
~ ❯ cat test.py
import os

class Example:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1

~ ❯ sqlite-utils insert-files test.sqlar sqlar test.py -c name:name -c data:content -c mode:mode -c mtime:mtime -c sz:size --pk=name
  [####################################]  100%

~ ❯ sqlite-utils convert test.sqlar sqlar data "zlib.compress(value)" --import=zlib --where "name = 'test.py'"
  [####################################]  100%

# Alternative way
~ ❯ cat test.py | sqlite-utils convert test.sqlar sqlar data "zlib.compress(sys.stdin.buffer.read())" --import=zlib --import=sys --where "name = 'test.py'"
  [####################################]  100%

~ ❯ sqlite3 test.sqlar "SELECT hex(data) FROM sqlar WHERE name = 'test.py';" | python3 -c "import sys, zlib; sys.stdout.buffer.write(zlib.decompress(bytes.fromhex(sys.stdin.read())))"
import os

class Example:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1

~ ❯ rm test.py

~ ❯ sqlar -l test.sqlar
test.py

~ ❯ sqlar -x test.sqlar

~ ❯ cat test.py
import os

class Example:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1
```
 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
777333388 | MDU6SXNzdWU3NzczMzMzODg= | 1168 | Mechanism for storing metadata in _metadata tables | simonw 9599 | open | 0 | 21 | 2021-01-01T18:47:27Z | 2023-09-28T18:29:05Z | OWNER | Original title: Perhaps metadata should all live in a `_metadata` in-memory database. Inspired by #1150 - metadata should be exposed as an API, and for large Datasette instances that API may need to be paginated. So why not expose it through an in-memory database table? One catch to this: plugins. #860 aims to add a plugin hook for metadata. But if the metadata comes from an in-memory table, how do the plugins interact with it? The need to paginate over metadata does make a plugin hook that returns metadata for an individual table seem less wise, since we don't want to have to do 10,000 plugin hook invocations to show a list of all metadata. If those plugins write directly to the in-memory table how can their contributions survive the server restarting? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
944846776 | MDU6SXNzdWU5NDQ4NDY3NzY= | 297 | Option for importing CSV data using the SQLite .import mechanism | simonw 9599 | open | 0 | 23 | 2021-07-14T22:36:41Z | 2023-09-22T20:49:52Z | OWNER | As seen in https://til.simonwillison.net/sqlite/import-csv - the SQLite shell's `.import` mechanism is dramatically faster than inserting rows through Python. An option to use this would be useful - maybe something like this:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
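The proposed flag for #297 is elided above; for reference, the `.import` mechanism from the linked TIL can already be driven from Python via the `sqlite3` command-line shell. A sketch with assumed file and table names:

```python
import subprocess

# "data.db" and "rows.csv" are illustrative names.
# ".import --csv" needs a reasonably recent sqlite3 shell (3.32+).
subprocess.run(
    ["sqlite3", "data.db", ".import --csv rows.csv rows"],
    check=True,
)
```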
1907765514 | I_kwDOBm6k_c5xtjEK | 2195 | `datasette publish` needs support for the new config/metadata split | simonw 9599 | open | 0 | 9 | 2023-09-21T21:08:12Z | 2023-09-21T22:57:48Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/2194#issuecomment-1730259871 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1825007061 | I_kwDOBm6k_c5sx2XV | 2123 | datasette serve when invoked with --reload interprets the serve command as a file | cadeef 79087 | open | 0 | 2 | 2023-07-27T19:07:22Z | 2023-09-18T13:02:46Z | NONE | When running `datasette serve` with `--reload`, the `serve` argument is interpreted as a database file:
If a 'serve' file is created it launches properly (albeit with an empty database called serve):
Version (running from HEAD on main):
This issue appears to have existed for a while, as https://github.com/simonw/datasette/issues/1380#issuecomment-953366110 mentions the error in a different context. I'm happy to debug and land a patch if it's welcome. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2123/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1900026059 | I_kwDOBm6k_c5xQBjL | 2188 | Plugin Hooks for "compile to SQL" languages | asg017 15178711 | open | 0 | 2 | 2023-09-18T01:37:15Z | 2023-09-18T06:58:53Z | CONTRIBUTOR | There are a ton of tools/languages that compile to SQL, which might be nice to support in Datasette. Some examples:
It would be cool if plugins could extend Datasette to use these languages, in both the code editor and API usage. A few things I'd imagine a compile-to-SQL plugin could do: …
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
787098345 | MDU6SXNzdWU3ODcwOTgzNDU= | 1191 | Ability for plugins to collaborate when adding extra HTML to blocks in default templates | simonw 9599 | open | 0 | Datasette 1.0 3268330 | 12 | 2021-01-15T18:18:51Z | 2023-09-18T06:55:52Z | OWNER | Sometimes a plugin may want to add content to an existing default template. Currently plugins can do this by providing a new version of the template itself, but that approach doesn't let two plugins target the same page. It would be better if there were known areas of those templates which plugins could add additional content to, such that multiple plugins can use the same spot. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1191/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1899310542 | I_kwDOBm6k_c5xNS3O | 2187 | Datasette for serving JSON only | geofinder 19705106 | open | 0 | 0 | 2023-09-16T05:48:29Z | 2023-09-16T05:48:29Z | NONE | Hi, is there any way to use datasette for serving JSON only, without displaying the web page? I've tried searching the documentation but didn't find any information about this. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1898927976 | I_kwDOBm6k_c5xL1do | 2186 | Mechanism for register_output_renderer hooks to access full count | simonw 9599 | open | 0 | Datasette 1.0 3268330 | 2 | 2023-09-15T18:57:54Z | 2023-09-15T19:27:59Z | OWNER | The cause of this bug: - https://github.com/simonw/datasette-export-notebook/issues/17 Is that the plugin relies on a total count field that is no longer available by default in the new JSON shape. It would be useful if plugins like this could access the total count on demand, should they need to. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1895266807 | I_kwDOBm6k_c5w93n3 | 2184 | Design decision - should configuration be exposed at /-/config ? | simonw 9599 | open | 0 | 0 | 2023-09-13T21:07:08Z | 2023-09-13T21:07:38Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/pull/2183#discussion_r1325076924 Also refs: - #2093 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1891614971 | I_kwDOCGYnMM5wv8D7 | 594 | Represent compound foreign keys in table.foreign_keys output | simonw 9599 | open | 0 | 2 | 2023-09-12T03:48:24Z | 2023-09-12T03:51:13Z | OWNER | Given this schema:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
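The schema in #594 is elided above; an illustrative compound foreign key, plus the property the issue is about, might look like this (a sketch, not the issue's actual schema):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db.execute("""
    CREATE TABLE songs (
        artist TEXT, album TEXT, title TEXT,
        PRIMARY KEY (artist, album, title)
    )
""")
db.execute("""
    CREATE TABLE tracks (
        artist TEXT, album TEXT, title TEXT, position INTEGER,
        FOREIGN KEY (artist, album, title)
            REFERENCES songs (artist, album, title)
    )
""")
# Each ForeignKey entry describes a single column pair, so the one compound
# relationship above surfaces as three separate single-column entries.
print(db["tracks"].foreign_keys)
```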
1781530343 | I_kwDOBm6k_c5qL_7n | 2093 | Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File | asg017 15178711 | open | 0 | 8 | 2023-06-29T21:18:23Z | 2023-09-11T20:19:32Z | CONTRIBUTOR | Very often I get tripped up when trying to configure my Datasette instances. For example: if I want to change the port my app listens on, do I do that with a CLI flag, a setting, or a config file? Normally I need to look it up in the Datasette docs, and I quickly find my answer, but the number of places where "config" goes is overwhelming.
Typically my Datasette deploys are extremely long shell commands, with multiple flags and config files. Proposal: Consolidate all "config" into a single `datasette.yaml` file.
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1876353656 | I_kwDOBm6k_c5v1uJ4 | 2168 | Consider a request/response wrapping hook slightly higher level than asgi_wrapper() | simonw 9599 | open | 0 | 6 | 2023-08-31T21:42:04Z | 2023-09-10T17:54:08Z | OWNER | There's a long justification for why this might be needed here: - https://github.com/simonw/datasette-auth-tokens/issues/10#issuecomment-1701820001 Short version: it would be neat if it was possible to stash some data on the request object. The `asgi_wrapper()` hook is a bit too low-level for that kind of thing. Since Datasette has well-defined request and response objects, a slightly higher-level wrapping hook could work in terms of those instead. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1888477283 | I_kwDOC8SPRc5wj-Bj | 38 | Run `rebuild_fts` after building the index | simonw 9599 | open | 0 | 0 | 2023-09-08T23:17:45Z | 2023-09-08T23:17:45Z | MEMBER | In: - https://github.com/simonw/datasette.io/issues/152#issuecomment-1712323347 This turned out to be the fix:
|
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/38/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
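The fix in #38 is elided above, but the title names it: sqlite-utils can rebuild an FTS table from its content table. A sketch with assumed names (the real dogsheep-beta index file is not shown here):

```python
import sqlite_utils

# Assumed names - adjust to the actual dogsheep-beta index database.
db = sqlite_utils.Database("dogsheep-index.db")
db["search_index"].rebuild_fts()
# CLI equivalent: sqlite-utils rebuild-fts dogsheep-index.db
```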
1010112818 | I_kwDOBm6k_c48NRky | 1479 | Win32 "used by another process" error with datasette publish | kirajano 76450761 | open | 0 | 7 | 2021-09-28T19:12:00Z | 2023-09-07T02:14:16Z | NONE | I unfortunately was not successful in deploying to fly.io. Please see the details below of the three scenarios that I tried. I am also new to datasette. Failed to deploy. Attaching logs:
1. Tried with an app created via …

```
Error error connecting to docker: An unknown error occured.

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py", line 7, in <module>
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette_publish_fly\__init__.py", line 156, in fly
    "--remote-only",
  File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py", line 451, in temporary_docker_directory
    tmp.cleanup()
  File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup
    _shutil.rmtree(self.name)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpgcm8cz66\frosty-fog-8565'
```
```
Error not possible to validate configuration: server returned Post "https://api.fly.io/graphql": unexpected EOF

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
```

These are also the contents of the generated .toml file in the 2nd scenario:

```toml
# fly.toml file generated for dark-feather-168 on 2021-09-28T20:35:44+02:00

app = "dark-feather-168"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[env]

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 6
    timeout = "2s"
```
```
[+] Building 147.3s (11/11) FINISHED
 => [internal] load build definition from Dockerfile 0.2s
 => => transferring dockerfile: 396B 0.0s
 => [internal] load .dockerignore 0.1s
 => => transferring context: 2B 0.0s
 => [internal] load metadata for docker.io/library/python:3.8 4.7s
 => [auth] library/python:pull token for registry-1.docker.io 0.0s
 => [internal] load build context 0.1s
 => => transferring context: 82.37kB 0.0s
 => [1/5] FROM docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3 108.3s
 => => resolve docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5 0.0s
 => => sha256:56182bcdf4d4283aa1f46944b4ef7ac881e28b4d5526720a4e9ba03a4730846a 2.22kB / 2.22kB 0.0s
 => => sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 54.93MB / 54.93MB 21.6s
 => => sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 10.87MB / 10.87MB 15.5s
 => => sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5c5b6 1.86kB / 1.86kB 0.0s
 => => sha256:ff08f08727e50193dcf499afc30594c47e70cc96f6fcfd1a01240524624264d0 8.65kB / 8.65kB 0.0s
 => => sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 5.15MB / 5.15MB 6.4s
 => => sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 54.57MB / 54.57MB 37.7s
 => => sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 196.45MB / 196.45MB 82.3s
 => => sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 6.29MB / 6.29MB 26.6s
 => => extracting sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 5.4s
 => => sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 16.81MB / 16.81MB 40.5s
 => => extracting sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 0.6s
 => => extracting sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 0.6s
 => => sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 232B / 232B 40.1s
 => => extracting sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 5.8s
 => => sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 2.35MB / 2.35MB 41.8s
 => => extracting sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 18.8s
 => => extracting sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 1.2s
 => => extracting sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 2.9s
 => => extracting sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 0.0s
 => => extracting sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 0.8s
 => [2/5] COPY . /app 2.3s
 => [3/5] WORKDIR /app 0.2s
 => [4/5] RUN pip install -U datasette 26.9s
 => [5/5] RUN datasette inspect covid.db --inspect-file inspect-data.json 3.1s
 => exporting to image 1.2s
 => => exporting layers 1.2s
 => => writing image sha256:b5db0c205cd3454c21fbb00ecf6043f261540bcf91c2dfc36d418f1a23a75d7a 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Traceback (most recent call last):
    "__main__", mod_spec)
  File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py", line 7, in <module>
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\cli.py", line 283, in package
    call(args)
  File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py", line 451, in temporary_docker_directory
    tmp.cleanup()
  File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup
    _shutil.rmtree(self.name)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpkb27qid3\datasette'
```
 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1479/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1884408624 | I_kwDOBm6k_c5wUcsw | 2177 | Move schema tables from _internal to _catalog | simonw 9599 | open | 0 | 1 | 2023-09-06T16:58:33Z | 2023-09-06T17:04:30Z | OWNER | This came up in discussion over: - https://github.com/simonw/datasette/pull/2174 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1879214365 | I_kwDOCGYnMM5wAokd | 590 | Ability to tell if a Database is an in-memory one | simonw 9599 | open | 0 | 1 | 2023-09-03T19:50:15Z | 2023-09-03T19:50:36Z | OWNER | Currently the constructor accepts `memory=True` or `memory_name=...` or a file path, but the resulting object doesn't record which was used. This makes it hard to tell if a database object is an in-memory or a file-based database, which is sometimes useful to know. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1879209560 | I_kwDOCGYnMM5wAnZY | 589 | Mechanism for de-registering registered SQL functions | simonw 9599 | open | 0 | 3 | 2023-09-03T19:32:39Z | 2023-09-03T19:36:34Z | OWNER | I used a custom SQL function in a migration script and then realized that it should be de-registered before the end of the script to avoid leaking into the calling code. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
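A sketch of the leak described in #589: registration is currently one-way in sqlite-utils (the `posts` table and migration function are illustrative):

```python
import sqlite_utils

db = sqlite_utils.Database("data.db")

@db.register_function
def migrate_slug(value):
    return value.lower().replace(" ", "-")

db.execute("UPDATE posts SET slug = migrate_slug(title)")

# There is no public inverse of register_function, so migrate_slug() stays
# registered on this connection and leaks into any later code that reuses
# it - exactly what the issue asks for a mechanism to avoid.
```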
1875739055 | I_kwDOBm6k_c5vzYGv | 2167 | Document return type of await ds.permission_allowed() | simonw 9599 | open | 0 | 0 | 2023-08-31T15:14:23Z | 2023-08-31T15:14:23Z | OWNER | The return type isn't documented here: https://github.com/simonw/datasette/blob/4c3ef033110407f3b3dbce501659d523724985e0/docs/internals.rst#L327-L350 On inspecting the code I'm not 100% sure whether it's possible for this method to return `None`. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1865869205 | I_kwDOBm6k_c5vNueV | 2157 | Proposal: Make the `_internal` database persistent, customizable, and hidden | asg017 15178711 | open | 0 | 3 | 2023-08-24T20:54:29Z | 2023-08-31T02:45:56Z | CONTRIBUTOR | The current `_internal` database is in-memory only, so everything in it is lost on restart, and its location isn't customizable. Additionally, it would be really nice if plugins could use this database to store their own data.
In general, these are specific features that Datasette plugins would have access to if there was a central internal database they could read/write to:
Proposal
New features unlocked with this: these features don't really need a standardized `_internal` database, but would be much easier to build with one.
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
594237015 | MDU6SXNzdWU1OTQyMzcwMTU= | 718 | Plugin idea: datasette-redirects | simonw 9599 | open | 0 | 0 | 2020-04-05T03:41:38Z | 2023-08-30T22:17:31Z | OWNER | I just had to write a one-off custom plugin to redirect niche-musems.com to www.niche-museums.com (https://github.com/simonw/museums/issues/21) - it would be great if this kind of thing could be handled by a configurable plugin. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
reopened | |||||||
1865649347 | I_kwDOBm6k_c5vM4zD | 2156 | datasette -s/--setting option for setting nested configuration options | simonw 9599 | open | 0 | 4 | 2023-08-24T18:09:27Z | 2023-08-28T19:33:05Z | OWNER |
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1868713944 | I_kwDOCGYnMM5vYk_Y | 588 | `table.get(column=value)` option for retrieving things not by their primary key | simonw 9599 | open | 0 | 1 | 2023-08-28T00:41:23Z | 2023-08-28T00:41:54Z | OWNER | This came up working on this feature: - https://github.com/simonw/llm/pull/186 I have a table with this schema:
Problem is, fetching the collection by name is actually pretty inconvenient. Fetch by numeric ID:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
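The schema and examples in #588 are elided above; to illustrate the inconvenience, here is the current workaround next to the proposed call. The `collections` table name is assumed from context, and `get(name=...)` is the *proposal*, not an existing API:

```python
import sqlite_utils

db = sqlite_utils.Database("embeddings.db")

# Today: fetching by a non-primary-key column means rows_where()
row = next(db["collections"].rows_where("name = ?", ["entries"]), None)

# The proposal: let get() accept a column keyword instead of a pk value:
#   row = db["collections"].get(name="entries")
```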
814595021 | MDU6SXNzdWU4MTQ1OTUwMjE= | 1241 | Share button for copying current URL | Kabouik 7107523 | open | 0 | 6 | 2021-02-23T15:55:40Z | 2023-08-24T20:09:52Z | NONE | I use datasette in an `iframe` embedded in another page. This particular use prevents users from accessing the full URLs of their datasette views and queries, which is a shame because the way datasette handles URLs to make every view or query easy to share is awesome. I know how to get the URL from the context menu of my browser, but I don't think many visitors would do it or even notice that datasette uses permalinks for pretty much every action they do. Would it be possible to add a "Share link" button to the interface, either in datasette itself or in a plugin? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1855885427 | I_kwDOBm6k_c5unpBz | 2143 | De-tangling Metadata before Datasette 1.0 | asg017 15178711 | open | 0 | 24 | 2023-08-18T00:51:50Z | 2023-08-24T18:28:27Z | CONTRIBUTOR | Metadata in Datasette is a really powerful feature, but is a bit difficult to work with. It was initially a way to add "metadata" about your "data" in Datasette instances, like descriptions for databases/tables/columns, titles, source URLs, licenses, etc. But it later became the go-to spot for other Datasette features that have nothing to do with metadata, like permissions/plugins/canned queries. Specifically, I've found the following problems when working with Datasette metadata:
Possible solutions: here are a few ideas of Datasette core changes we can make to address these problems. Re-vamp the Datasette Python metadata APIs: the Datasette object has a single metadata method today. (I'm a bit fuzzy on what to actually do here, but I imagine it'll be very small breaking changes to a few Python methods.) Add an optional …
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1858228057 | I_kwDOBm6k_c5uwk9Z | 2147 | Plugin hook for database queries that are run | jackowayed 18899 | open | 0 | 6 | 2023-08-20T18:43:50Z | 2023-08-24T03:54:35Z | NONE | I'm interested in making a plugin that saves every query that gets run to a table in the database. (I know about datasette-query-history but thought it would be good to have a server-side option.) As far as I can tell reading the docs, there isn't really a hook set up to allow this. Maybe I could hack it with some of the hooks that are passed requests, but that doesn't seem good. I'm a little surprised this isn't possible, so I thought I would open an issue and see if that's a deeply considered decision or just "haven't needed it yet." I'm potentially interested in implementing the hook if the latter. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
459509126 | MDU6SXNzdWU0NTk1MDkxMjY= | 516 | Enforce import sort order with isort | simonw 9599 | open | 0 | 8 | 2019-06-22T20:35:50Z | 2023-08-23T02:15:36Z | OWNER | I want to use isort to order imports. A few steps here:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1857234285 | I_kwDOBm6k_c5usyVt | 2145 | If a row has a primary key of `null` various things break | simonw 9599 | open | 0 | 23 | 2023-08-18T20:06:28Z | 2023-08-21T17:30:01Z | OWNER | Stumbled across this while experimenting with
Tracked it down to this code, which assembles the URL for a row page: https://github.com/simonw/datasette/blob/943df09dcca93c3b9861b8c96277a01320db8662/datasette/utils/init.py#L120-L134 That's because |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1856075668 | I_kwDOCGYnMM5uoXeU | 586 | .transform() fails to drop column if table is part of a view | simonw 9599 | open | 0 | 3 | 2023-08-18T05:25:22Z | 2023-08-18T06:13:47Z | OWNER | I got this error trying to drop a column from a table that was part of a SQL view:
Upon further investigation I found that this pattern seemed to fix it:
Originally posted by @simonw in https://github.com/simonw/datasette-edit-schema/issues/35#issuecomment-1683370548 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
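The fix pattern referenced in #586 is elided above; based on the error described, a plausible workaround is to drop the dependent views, run the transform, then recreate them. A sketch under that assumption (table and column names are illustrative):

```python
import sqlite_utils

db = sqlite_utils.Database("data.db")

# Save the SQL for every view, drop them, transform, then recreate.
views = dict(db.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'view'"
).fetchall())
for name in views:
    db.execute(f"DROP VIEW [{name}]")
db["my_table"].transform(drop={"unwanted_column"})
for sql in views.values():
    db.execute(sql)
```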
1754174496 | I_kwDOCGYnMM5ojpQg | 558 | Ability to define unique columns when creating a table | aguinane 1910303 | open | 0 | 0 | 2023-06-13T06:56:19Z | 2023-08-18T01:06:03Z | NONE | When creating a new table, it would be good to have an option to set unique columns similar to how not_null is set.

```python
from sqlite_utils import Database

columns = {"mRID": str, "name": str}
db = Database("example.db")
db["ExampleTable"].create(columns, pk="mRID", not_null=["mRID"], if_not_exists=True)
db["ExampleTable"].create_index(["mRID"], unique=True, if_not_exists=True)
```

So something like this would add the UNIQUE flag to the table definition.
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
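Continuing #558: the elided snippet at the end presumably showed the resulting DDL; for comparison, here is the `create_index()` workaround next to the hypothetical inline `unique=` parameter being requested (the parameter does not exist today):

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["ExampleTable"].create({"mRID": str, "name": str}, pk="mRID")

# Workaround from the issue: uniqueness via a separate index.
db["ExampleTable"].create_index(["mRID"], unique=True)

# The request: an inline constraint instead, hypothetically:
#   db["ExampleTable"].create(columns, pk="mRID", unique=["mRID"])
# which would emit UNIQUE inside the CREATE TABLE statement itself.
print(db["ExampleTable"].schema)
```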
476852861 | MDU6SXNzdWU0NzY4NTI4NjE= | 568 | Add database_color as a configurable option | LBHELewis 50906992 | open | 0 | 1 | 2019-08-05T13:14:45Z | 2023-08-11T05:19:42Z | NONE | This would be really useful as it would allow us to tie in with colour schemes. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1838469176 | I_kwDOBm6k_c5tlNA4 | 2127 | Context base class to support documenting the context | simonw 9599 | open | 0 | Datasette 1.0 3268330 | 3 | 2023-08-07T00:01:02Z | 2023-08-10T01:30:25Z | OWNER | This idea first came up here: - https://github.com/simonw/datasette/issues/2112#issuecomment-1652751140 If the context passed to each template were defined as a class, that class could be used to generate reference documentation for it. Also refs: - https://github.com/simonw/datasette/issues/1510 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1843821954 | I_kwDOBm6k_c5t5n2C | 2137 | Redesign row default JSON | simonw 9599 | open | 0 | Datasette 1.0a-next 8755003 | 1 | 2023-08-09T18:49:11Z | 2023-08-09T19:02:47Z | OWNER | This URL here: https://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables
That … |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1822939274 | I_kwDOBm6k_c5sp9iK | 2113 | Implement and document extras for the new query view page | simonw 9599 | open | 0 | Datasette 1.0a-next 8755003 | 3 | 2023-07-26T18:24:01Z | 2023-08-09T17:35:22Z | OWNER |
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1840417903 | I_kwDOBm6k_c5tsoxv | 2131 | Refactor code that supports templates_considered comment | simonw 9599 | open | 0 | Datasette 1.0 3268330 | 1 | 2023-08-08T01:28:36Z | 2023-08-09T15:27:41Z | OWNER | I ended up duplicating it here: https://github.com/simonw/datasette/blob/7532feb424b1dce614351e21b2265c04f9669fe2/datasette/views/database.py#L164-L167 I think it should move to |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1840324765 | I_kwDOBm6k_c5tsSCd | 2129 | CSV ?sql= should indicate errors | simonw 9599 | open | 0 | Datasette 1.0 3268330 | 1 | 2023-08-07T23:13:04Z | 2023-08-08T02:02:21Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/2118#issuecomment-1668688947 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1818838294 | I_kwDOCGYnMM5saUUW | 578 | Plugin hook for adding new output formats | simonw 9599 | open | 0 | 5 | 2023-07-24T17:29:18Z | 2023-08-07T15:41:49Z | OWNER |
https://discord.com/channels/823971286308356157/997738192360964156/1133076679011602432 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1839344979 | I_kwDOCGYnMM5toi1T | 582 | Handling CSV/file input that contains NUL bytes | betatim 1448859 | open | 0 | 0 | 2023-08-07T12:24:14Z | 2023-08-07T12:24:14Z | NONE | I was using sqlite-utils to create a DB from a CSV and it turns out the CSV contains a NUL byte. When the processing reaches the line that contains the NUL, an exception is raised. I'm wondering if there is something that can be done in `sqlite-utils` itself to handle this more gracefully. Concretely the file is the … This is the command and output:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
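Until sqlite-utils handles #582 itself, a common workaround is to strip NUL bytes from the stream before the CSV reader sees them (file and table names assumed):

```python
import csv
import sqlite_utils

db = sqlite_utils.Database("data.db")

with open("input.csv", newline="", encoding="utf-8") as f:
    # csv raises "line contains NUL" on \x00, so filter each line first.
    reader = csv.DictReader(line.replace("\0", "") for line in f)
    db["rows"].insert_all(reader)
```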
1824457306 | I_kwDOBm6k_c5svwJa | 2122 | Parameters on canned queries: fixed or query-generated list? | meowcat 1563881 | open | 0 | 0 | 2023-07-27T14:07:07Z | 2023-07-27T14:07:07Z | NONE | Hi, currently parameters in canned queries are just text fields. It would be cool to have one of the options below. Would you accept a PR doing something in this direction? (Possibly this could even work as a plugin.)
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1823428714 | I_kwDOBm6k_c5sr1Bq | 2120 | Add __all__ to datasette/__init__.py | simonw 9599 | open | 0 | 0 | 2023-07-27T01:07:10Z | 2023-07-27T01:07:10Z | OWNER | Currently looks like this: https://github.com/simonw/datasette/blob/08181823990a71ffa5a1b57b37259198eaa43e06/datasette/__init__.py#L1-L6 Adding `__all__` would make the intended public interface explicit. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1822918995 | I_kwDOCGYnMM5sp4lT | 580 | Add way to export to a csv file using the Python library | kevinlinxc 44324811 | open | 0 | 0 | 2023-07-26T18:09:26Z | 2023-07-26T18:09:26Z | NONE | According to the documentation, we can make a csv output using the CLI tool, but not the Python library. Could we have the latter? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
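In the meantime, the ask in #580 can be composed from the standard library, since `db.query()` yields dictionaries; a sketch with assumed names:

```python
import csv
import sqlite_utils

db = sqlite_utils.Database("data.db")
rows = db.query("SELECT * FROM my_table")  # generator of dicts

with open("out.csv", "w", newline="") as f:
    first = next(rows, None)
    if first is not None:
        writer = csv.DictWriter(f, fieldnames=list(first))
        writer.writeheader()
        writer.writerow(first)
        writer.writerows(rows)
```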
1822813627 | I_kwDOBm6k_c5spe27 | 2108 | some (many?) SQL syntax errors are not throwing errors with a .csv endpoint | fgregg 536941 | open | 0 | 0 | 2023-07-26T16:57:45Z | 2023-07-26T16:58:07Z | CONTRIBUTOR | here's a CTE query that should always fail with a syntax error:

```sql
with foo as (nonsense) select * from foo;
```

when we make this query against the default endpoint, we do indeed get a 400 status code and the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B but, if we use the csv endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B same with this bad sql … vs … but, datasette catches this bad sql at both endpoints:

```sql
slect
  a
from
  foo;
```

https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1821108702 | I_kwDOCGYnMM5si-ne | 579 | Special handling for SQLite column of type `JSON` | asg017 15178711 | open | 0 | 0 | 2023-07-25T20:37:23Z | 2023-07-25T20:37:23Z | CONTRIBUTOR |
Automatic Nesting: According to "Nested JSON Values", sqlite-utils will only expand JSON if the Instead,
I'm sure there are other ways. |
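One possible shape for the idea, sketched on plain sqlite3 rather than sqlite-utils itself (database and table names are hypothetical):

```python
import json
import sqlite3

conn = sqlite3.connect("example.db")
conn.row_factory = sqlite3.Row

# Find columns whose declared type is JSON, then parse those values on read.
json_cols = {
    info["name"]
    for info in conn.execute("PRAGMA table_info(documents)")
    if (info["type"] or "").upper() == "JSON"
}
for row in conn.execute("select * from documents"):
    record = {
        key: json.loads(row[key]) if key in json_cols and row[key] else row[key]
        for key in row.keys()
    }
    print(record)
```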
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1816830546 | I_kwDODEm0Qs5sSqJS | 73 | Twitter v1 API shutdown | david-perez 6341745 | open | 0 | 0 | 2023-07-22T16:57:41Z | 2023-07-22T16:57:41Z | NONE | I've been using this project reliably over the past two years to periodically download my liked tweets, but unfortunately since 19th July I get:
It appears that Twitter has now shut down their v1 endpoints, which is rather gracious of them, considering they announced they'd be deprecated on 29th April. Unfortunately, retrieving likes using the v2 API is not part of their free plan. In fact, with the free plan one can only post and delete tweets and retrieve information about oneself. So I'm afraid this is the end of this very nice project. It was very useful, thank you! |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/73/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 } |
||||||||
1811824307 | I_kwDOBm6k_c5r_j6z | 2105 | When reverse proxying datasette with nginx a URL element gets erroneously added | aki-k 2235371 | open | 0 | 3 | 2023-07-19T12:16:53Z | 2023-07-21T21:17:09Z | NONE | I use this nginx config: ``` location /datasette-llm { return 302 /datasette-llm/; }
https://192.168.1.3:5432/datasette-llm/datasette-llm/logs.json?sql=select+*+from+_llm_migrations https://192.168.1.3:5432/datasette-llm/datasette-llm/logs.csv?sql=select+*+from+_llm_migrations&_size=max When I remove that extra "datasette-llm" from the URL, those links work too. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1808215339 | I_kwDOBm6k_c5rxy0r | 2104 | Tables starting with an underscore should be treated as hidden | simonw 9599 | open | 0 | 2 | 2023-07-17T17:13:53Z | 2023-07-18T22:41:37Z | OWNER | Plugins can then take advantage of this pattern, for example: - https://github.com/simonw/datasette-auth-tokens/pull/8 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1808116827 | I_kwDOBm6k_c5rxaxb | 2103 | data attribute on Datasette tables exposing the primary key of the row | simonw 9599 | open | 0 | 0 | 2023-07-17T16:18:25Z | 2023-07-17T16:18:25Z | OWNER | Maybe put it on the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1765870617 | I_kwDOBm6k_c5pQQwZ | 2087 | `--settings settings.json` option | simonw 9599 | open | 0 | 2 | 2023-06-20T17:48:45Z | 2023-07-14T17:02:03Z | OWNER | https://discord.com/channels/823971286308356157/823971286941302908/1120705940728066080
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2087/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1803264272 | I_kwDOBm6k_c5re6EQ | 2101 | alter: true support for JSON write API | simonw 9599 | open | 0 | 1 | 2023-07-13T15:24:11Z | 2023-07-13T15:24:18Z | OWNER | Requested here: https://discord.com/channels/823971286308356157/823971286941302908/1129034187073134642
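A sketch of the request shape this implies, reusing the existing `/db/table/-/insert` endpoint; the host, token and the exact behaviour of the `alter` key are assumptions:

```python
import httpx

# "alter": True is the proposed addition, asking the API to add any
# missing columns before inserting the rows.
response = httpx.post(
    "https://example.com/data/docs/-/insert",
    json={
        "rows": [{"id": 1, "title": "One", "brand_new_column": "hello"}],
        "alter": True,
    },
    headers={"Authorization": "Bearer xxx"},
)
print(response.status_code, response.json())
```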
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
771608692 | MDU6SXNzdWU3NzE2MDg2OTI= | 14 | UNIQUE constraint failed: workouts.id | n8henrie 1234956 | open | 0 | 5 | 2020-12-20T15:11:20Z | 2023-07-10T14:46:52Z | NONE | I'm getting an error on my initial attempt to import data:
|
healthkit-to-sqlite 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1795219865 | I_kwDOCGYnMM5rAOGZ | 566 | `--no-headers` doesn't work on most formats | zellyn 33625 | open | 0 | 2 | 2023-07-09T03:43:36Z | 2023-07-09T04:13:35Z | NONE | Version 3.33
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1794097871 | I_kwDOBm6k_c5q78LP | 2095 | Introduce "dark mode" CSS | jamietanna 3315059 | open | 0 | 0 | 2023-07-07T19:15:58Z | 2023-07-07T19:15:58Z | NONE | Using the CSS media query `prefers-color-scheme`, Datasette could ship a dark-mode version of its default CSS. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1784794489 | I_kwDOCGYnMM5qYc15 | 562 | Explore the intersection between sqlite-utils and dataclasses | simonw 9599 | open | 0 | 1 | 2023-07-02T19:23:08Z | 2023-07-02T19:26:39Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/llm/issues/65#issuecomment-1616742529 |
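One direction this could take, sketched under the assumption that the intersection means hydrating query rows into dataclass instances (all names here are illustrative):

```python
from dataclasses import dataclass, fields

from sqlite_utils import Database

@dataclass
class Dog:
    id: int
    name: str

def query_as(db, sql, cls):
    # Hydrate each result row into the given dataclass, ignoring any
    # columns the dataclass does not declare.
    names = {f.name for f in fields(cls)}
    for row in db.query(sql):
        yield cls(**{k: v for k, v in row.items() if k in names})

db = Database(memory=True)
db["dogs"].insert_all([{"id": 1, "name": "Cleo"}, {"id": 2, "name": "Pancakes"}])
for dog in query_as(db, "select id, name from dogs", Dog):
    print(dog)
```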
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1783304750 | I_kwDOBm6k_c5qSxIu | 2094 | JS Plugin Hooks for the Code Editor | asg017 15178711 | open | 0 | 0 | 2023-07-01T00:51:57Z | 2023-07-01T00:51:57Z | CONTRIBUTOR | When #2052 merges, I'd like to add support for adding extensions/functions to the Datasette code editor. I'd eventually like to build a JS plugin for
I did some hacking to see what this would look like, see here:
There could be a new hook that allows JS plugins to add new "extensions" to the CodeMirror EditorView here: This will need some more planning. For example, the CodeMirror bundle in Datasette has functions that we could re-export for plugins to use (so we don't load two versions of
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2094/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1781005740 | I_kwDOBm6k_c5qJ_2s | 2090 | Adopt ruff for linting | simonw 9599 | open | 0 | 2 | 2023-06-29T14:56:43Z | 2023-06-29T15:05:04Z | OWNER | datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||||
1054244712 | I_kwDOBm6k_c4-1n9o | 1510 | Datasette 1.0 documented template context (maybe via API docs) | simonw 9599 | open | 0 | Datasette 1.0 3268330 | 3 | 2021-11-15T23:23:58Z | 2023-06-28T02:05:21Z | OWNER | Documented context plus protective unit tests. Goal is that custom templates built for 1.x will not break without a 2.x release. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
323223872 | MDU6SXNzdWUzMjMyMjM4NzI= | 260 | Validate metadata.json on startup | simonw 9599 | open | 0 | 7 | 2018-05-15T13:42:56Z | 2023-06-21T12:51:22Z | OWNER | It's easy to misspell the name of a database or table and then be puzzled when the metadata settings silently fail. To avoid this, let's sanity check the provided metadata.json on startup and quit with a useful error message if we find any obvious mistakes. |
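A sketch of the simplest version of such a check (file and database names are hypothetical):

```python
import json
import sqlite3
import sys

# Cross-check table names in metadata.json against the actual database
# and quit with a useful message on the first mismatch.
metadata = json.load(open("metadata.json"))
conn = sqlite3.connect("mydb.db")
actual_tables = {
    row[0]
    for row in conn.execute("select name from sqlite_master where type = 'table'")
}
for table in metadata.get("databases", {}).get("mydb", {}).get("tables", {}):
    if table not in actual_tables:
        sys.exit(f"metadata.json references unknown table: {table!r}")
```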
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1764792125 | I_kwDOBm6k_c5pMJc9 | 2086 | Show information on startup in directory configuration mode | simonw 9599 | open | 0 | 0 | 2023-06-20T07:13:33Z | 2023-06-20T07:13:33Z | OWNER | https://discord.com/channels/823971286308356157/823971286941302908/1120516587036889098
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1762180409 | I_kwDOBm6k_c5pCL05 | 2085 | Interactive row selection in Datasette | learning4life 24938923 | open | 0 | 0 | 2023-06-18T08:29:45Z | 2023-06-18T08:31:23Z | NONE | Simon did an excellent prototype of interactive row selection in Datasette. I hope this functionality can be turned into a Datasette plugin. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1761613778 | I_kwDOBm6k_c5pABfS | 2084 | Support facets for columns that contain timestamps | devxpy 19492893 | open | 0 | 0 | 2023-06-17T03:33:54Z | 2023-06-17T03:33:54Z | NONE | Django has this very nice filter for datetime fields. It would be nice to have something similar for faceting by a field that contains a timestamp in datasette too, which doesn't seem to do anything with timestamps right now... |
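Until then, the effect can be approximated in SQL; a sketch with hypothetical table and column names:

```python
import sqlite3

# Facet timestamps by calendar day using SQLite's date() function.
conn = sqlite3.connect("example.db")
for day, count in conn.execute(
    """
    select date(created_at) as day, count(*) as n
    from events
    group by day
    order by n desc
    """
):
    print(day, count)
```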
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1383646615 | I_kwDOCGYnMM5SeMWX | 491 | Ability to merge databases and tables | sgraaf 8904453 | open | 0 | 7 | 2022-09-23T11:10:55Z | 2023-06-14T22:14:24Z | NONE | Hi! Let me firstly say that I am a big fan of your work -- I follow your tweets and blog posts with great interest 😄. Now onto the matter at hand: I think it would be great if sqlite-utils could merge databases and tables. This could look something like this:
I imagine this is rather straightforward if all databases involved in the merge contain differently named tables (i.e. no chance of conflicts), but things get slightly more complicated if two or more of the databases to be merged contain tables with the same name. Not only do you have to "do something" with the primary key(s), but these tables could also simply have different schemas (and therefore be incompatible for concatenation to begin with). Anyhow, I would love your thoughts on this, and, if you are open to it, work together on the design and implementation! |
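In the meantime the simple, conflict-free case can be scripted with today's Python API; a sketch (file names are hypothetical, and note that `create table ... as select` drops indexes and primary keys):

```python
from sqlite_utils import Database

# Copy every table from each source database into merged.db, skipping
# SQLite internals. This is only a starting point, not a full merge.
target = Database("merged.db")
for i, source_path in enumerate(["first.db", "second.db"]):
    alias = f"src{i}"
    target.attach(alias, source_path)
    tables = target.execute(
        f"select name from {alias}.sqlite_master "
        "where type = 'table' and name not like 'sqlite_%'"
    ).fetchall()
    for (name,) in tables:
        target.execute(
            f"create table if not exists [{name}] as select * from {alias}.[{name}]"
        )
target.conn.commit()
```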
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/491/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1733198948 | I_kwDOCGYnMM5nToRk | 555 | Filter table by a large bunch of ids | redraw 10843208 | open | 0 | 1 | 2023-05-31T00:29:51Z | 2023-06-14T22:01:57Z | NONE | Hi! this might be a question related to both SQLite & sqlite-utils, and you might be more experienced with them. I have a large bunch of ids, and I'm wondering which is the best way to query them in terms of performance, and simplicity if possible. The naive approach would be something like Another approach might be creating a temp table, or in-memory db table, insert all ids in that table and then join with the target one. I failed to attach an in-memory db both using sqlite-utils, and plain sql's execute(), so my closest approach is something like,
That kinda worked, but I couldn't find an option in sqlite-utils's |
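A sketch of the temp-table approach for comparison (database and table names are hypothetical):

```python
import sqlite3

ids = [3, 7, 42, 1001]

# Load the ids into a temporary table, then join against it instead of
# building a giant "where id in (...)" clause.
conn = sqlite3.connect("example.db")
conn.execute("create temp table wanted (id integer primary key)")
conn.executemany("insert into wanted (id) values (?)", ((i,) for i in ids))
rows = conn.execute(
    "select t.* from target_table t join wanted w on w.id = t.id"
).fetchall()
```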
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1751214236 | I_kwDOC8SPRc5oYWic | 36 | Getting sqlite_master may not be modified when creating dogsheep index | khushmeeet 8711912 | open | 0 | 0 | 2023-06-11T03:21:53Z | 2023-06-11T03:21:53Z | NONE | When creating a dogsheep index, I get a `table sqlite_master may not be modified` error.
Command I ran to get this error
Dogsheep version
Python version
|
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/36/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1740026046 | I_kwDOCGYnMM5ntrC- | 556 | Support storing incrementally piped values | mcint 601708 | open | 0 | 1 | 2023-06-04T00:45:23Z | 2023-06-04T01:21:15Z | CONTRIBUTOR | I'm trying to use sqlite-utils to store data generated incrementally. There are a few aspects of this that I don't currently know how to handle. I would like an option to apply writes incrementally, line-by-line as they are received. I would like an option to echo incremental progress. And, it would be nice to have In particular, I'm using CoreLocationCLI -w -j to generate newline-delimited JSON. One variant of the command
It looks like I can get what I want with:
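As an illustration of the same idea with the Python library rather than the CLI (table name is hypothetical):

```python
import json
import sys

from sqlite_utils import Database

# Insert each newline-delimited JSON object as soon as it arrives on
# stdin, echoing a dot per row as incremental progress.
db = Database("locations.db")
for line in sys.stdin:
    line = line.strip()
    if line:
        db["locations"].insert(json.loads(line), alter=True)
        print(".", end="", flush=True)
```

Piped as, say, `CoreLocationCLI -w -j | python ingest.py` (script name hypothetical).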
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1727478903 | I_kwDOBm6k_c5m9zx3 | 2081 | Update Endpoints defined in metadata throws 403 Forbidden after a while | cutmasta-kun 15085007 | open | 0 | 0 | 2023-05-26T11:52:30Z | 2023-05-26T11:52:30Z | NONE | Hello. I expose an endpoint to update This works really well! But after a while, the Datasette instance answers with 403 Forbidden. I have to delete the database and recreate it for things to work again. Any help here? (´。_。`) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1720096994 | I_kwDOCGYnMM5mhpji | 554 | `IndexError` when doing `.insert(..., pk='id')` after `insert_all` | xavdid 1231935 | open | 0 | 1 | 2023-05-22T17:13:02Z | 2023-05-22T17:18:33Z | NONE | I believe this is related to https://github.com/simonw/sqlite-utils/issues/98. When ```py from sqlite_utils import Database def test_pk_for_insert(fresh_db): user = {"id": "abc", "name": "david"}
if __name__ == "__main__": db = Database("bug.db") if db["users"].exists(): raise ValueError( "bug only shows on a new database - remove bug.db before running the script" ) test_pk_for_insert(db) ``` The error is:
The issue is in this block: relevant locals are:
What's most interesting is the comment |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1124731464 | I_kwDOCGYnMM5DCgpI | 399 | Make it easier to insert geometries, with documentation and maybe code | simonw 9599 | open | 0 | 25 | 2022-02-05T00:11:26Z | 2023-05-16T03:11:52Z | OWNER | In playing with the new SpatiaLite helpers from #385 I noticed that actually populating geometry columns is still a little bit tricky. Here's what I ended up doing:

```python
import httpx, sqlite_utils

db = sqlite_utils.Database("/tmp/spatial.db")
attractions = httpx.get(
    "https://latest.datasette.io/fixtures/roadside_attractions.json?_shape=array"
).json()
db["attractions"].insert_all(attractions, pk="pk")

# Schema of that table is now:
# CREATE TABLE [attractions] (
#     [pk] INTEGER PRIMARY KEY,
#     [name] TEXT,
#     [address] TEXT,
#     [latitude] FLOAT,
#     [longitude] FLOAT
# )

db.init_spatialite()
db["attractions"].add_geometry_column("point", "POINT")
db.execute(
    """
    update attractions set point = GeomFromText(
        'POINT(' || longitude || ' ' || latitude || ')', 4326
    )
    """
)
```

It would be good to document this in more detail, and ideally also to come up with a more obvious pattern for inserting common types of spatial data. Also related: - #398 - #79 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1708030220 | I_kwDOBm6k_c5lznkM | 2073 | Faceting doesn't work against integer columns in views | simonw 9599 | open | 0 | 2 | 2023-05-12T18:20:10Z | 2023-05-12T18:24:07Z | OWNER | Spotted this issue here: https://til.simonwillison.net/datasette/baseline I had to do this workaround:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1700936245 | I_kwDOCGYnMM5lYjo1 | 542 | Remove `skip_false=True` and `--no-skip-false` in `sqlite-utils` 4.0 | simonw 9599 | open | 0 | 4.0 backwards incomatible changes 9374594 | 1 | 2023-05-08T21:04:28Z | 2023-05-08T21:07:41Z | OWNER | Following: - #527 The only reason I didn't remove this mis-feature entirely is that it represents a backwards incompatible change. I'll make that change in 4.0. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1595340692 | I_kwDOCGYnMM5fFveU | 530 | add ability to configure "on delete" and "on update" attributes of foreign keys: | fgregg 536941 | open | 0 | 2 | 2023-02-22T15:44:14Z | 2023-05-08T20:39:01Z | CONTRIBUTOR | sqlite supports these, and it would be quite nice to be able to add them with sqlite-utils. |
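For reference, the SQL that sqlite-utils would need to emit; a sketch via plain sqlite3 with hypothetical tables:

```python
import sqlite3

# SQLite accepts the clauses directly in the table definition.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    create table authors (id integer primary key, name text);
    create table books (
        id integer primary key,
        title text,
        author_id integer references authors(id)
            on update cascade
            on delete cascade
    );
    """
)
conn.execute("pragma foreign_keys = on")  # enforcement is opt-in per connection
```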
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1700840265 | I_kwDOCGYnMM5lYMNJ | 541 | Get tests to pass with `pytest -Werror` | simonw 9599 | open | 0 | 1 | 2023-05-08T19:57:23Z | 2023-05-08T19:59:35Z | OWNER | Inspired by: - #534 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1698865182 | I_kwDOBm6k_c5lQqAe | 2069 | [BUG] Cannot insert new data to deployed instance | yqlbu 31861128 | open | 0 | 1 | 2023-05-07T02:59:42Z | 2023-05-07T03:17:35Z | NONE | Summary: Recently, I deployed an instance of datasette to Vercel with the following plugins:
With the above plugins, I was able to insert new data into a local SQLite db. However, when it comes to the deployment on Vercel, things behave differently. I observed some errors in the logs console on Vercel:
I think it is a potential bug. Reproduce: metadata.json

```json
{
  "plugins": {
    "datasette-insert": {
      "allow": {
        "id": "*"
      }
    },
    "datasette-auth-tokens": {
      "tokens": [
        {
          "token": {
            "$env": "INSERT_TOKEN"
          },
          "actor": {
            "id": "repeater"
          }
        }
      ],
      "param": "_auth_token"
    }
  }
}
```

commands

```bash
# deploy
datasette publish vercel remote.db \
  --project=repeater-bot-sqlite \
  --metadata metadata.json \
  --install datasette-auth-tokens \
  --install datasette-insert \
  --vercel-json=vercel.json

# test insert
cat fixtures/dogs.json | curl --request POST -d @- -H "Authorization: Bearer <token>" \
  'https://repeater-bot-sqlite.vercel.app/-/insert/remote/dogs?pk=id'
```

logs

```console
Traceback (most recent call last):
  File "/var/task/datasette/app.py", line 1354, in route_path
    response = await view(request, send)
  File "/var/task/datasette/app.py", line 1500, in async_view_fn
    response = await async_call_with_supported_arguments(
  File "/var/task/datasette/utils/__init__.py", line 1005, in async_call_with_supported_arguments
    return await fn(*call_with)
  File "/var/task/datasette_insert/__init__.py", line 14, in insert_or_upsert
    response = await insert_or_upsert_implementation(request, datasette)
  File "/var/task/datasette_insert/__init__.py", line 91, in insert_or_upsert_implementation
    table_count = await db.execute_write_fn(write_in_thread, block=True)
  File "/var/task/datasette/database.py", line 167, in execute_write_fn
    raise result
  File "/var/task/datasette/database.py", line 179, in _execute_writes
    conn = self.connect(write=True)
  File "/var/task/datasette/database.py", line 93, in connect
    assert not (write and not self.is_mutable)
AssertionError
```
 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1393202060 | I_kwDOCGYnMM5TCpOM | 496 | devrel/python api: Pylance type hinting | chapmanjacobd 7908073 | open | 0 | 4 | 2022-10-01T03:03:34Z | 2023-05-03T05:53:27Z | CONTRIBUTOR | Pylance is generally pretty good at figuring out stuff but For example:
I think this is because DEFAULT is an empty class? Maybe a few small changes could be made to make the library more type-friendly. The interim solution is of course to turn off type hints completely for the line.
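As an illustration of that interim workaround (the call shown is hypothetical):

```python
from sqlite_utils import Database

db = Database("example.db")
# One-line opt-out for the type checker; table and columns are hypothetical.
db["media"].insert({"path": "/tmp/photo.jpg"}, pk="id")  # type: ignore
```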
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1690765434 | I_kwDOBm6k_c5kxwh6 | 2067 | Litestream-restored db: errors on 3.11 and 3.10.8; but works on py3.10.7 and 3.10.6 | justmars 39538958 | open | 0 | 1 | 2023-05-01T12:42:28Z | 2023-05-03T00:16:03Z | NONE | Hi! Wondering if this issue is limited to my local system or if it affects others as well. It seems like 3.11 errors out on a "litestream-restored" database. On further investigation, it also appears to conk out on 3.10.8 but works on 3.10.7 and 3.10.6. To demo the issue I created a test database, replicated it to an aws s3 bucket, then restored the same under various .pyenv-versioned shells where I test whether I can read the database via the sqlite3 cli.

```sh
# create new shell with 3.11.3
litestream restore -o data/db.sqlite s3://mytestbucketxx/db
sqlite3 data/db.sqlite
# SQLite version 3.41.2 2023-03-22 11:56:21
# Enter ".help" for usage hints.
# sqlite> .tables
# _litestream_lock _litestream_seq movie
# sqlite>
```

However this gets me an error on 3.11.3 and 3.10.8:

```sh
datasette data/db.sqlite
```

```console
/tester/.venv/lib/python3.11/site-packages/pkg_resources/__init__.py:121: DeprecationWarning: pkg_resources is deprecated as an API
  warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)
Traceback (most recent call last):
  File "/tester/.venv/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/tester/.venv/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/tester/.venv/lib/python3.11/site-packages/datasette/cli.py", line 143, in wrapped
    return fn(*args, **kwargs)
  File "/tester/.venv/lib/python3.11/site-packages/datasette/cli.py", line 615, in serve
    asyncio.get_event_loop().run_until_complete(check_databases(ds))
  File "/Users/mv/.pyenv/versions/3.11.3/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
  File "/tester/.venv/lib/python3.11/site-packages/datasette/cli.py", line 660, in check_databases
    await database.execute_fn(check_connection)
  File "/tester/.venv/lib/python3.11/site-packages/datasette/database.py", line 213, in execute_fn
    return await asyncio.get_event_loop().run_in_executor(
  File "/Users/mv/.pyenv/versions/3.11.3/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/tester/.venv/lib/python3.11/site-packages/datasette/database.py", line 211, in in_thread
    return fn(conn)
  File "/tester/.venv/lib/python3.11/site-packages/datasette/utils/__init__.py", line 951, in check_connection
    for r in conn.execute(
sqlite3.OperationalError: unable to open database file
```

Works on 3.10.7, 3.10.6:

```sh
# create new shell with 3.10.7 / 3.10.6
litestream restore -o data/db.sqlite s3://mytestbucketxx/db
datasette data/db.sqlite
# ...
# INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
```

In both scenarios, the only dependencies were the pinned python version and the latest Datasette version 0.64. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1617602868 | I_kwDOJHON9s5gaqk0 | 6 | Character encoding problem | simonw 9599 | open | 0 | 2 | 2023-03-09T16:44:34Z | 2023-04-14T15:22:09Z | MEMBER | I ran against a recent note with this in it:
And got back:
Pasting that into https://ftfy.vercel.app/?s=Actions+%E2%80%9A%C3%B6%C3%B4%C3%94%E2%88%8F%C3%A8+ gives this:
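The same fix is scriptable with the ftfy Python library; as an illustration, using the garbled string decoded from the URL above:

```python
import ftfy

# "Actions ‚öôÔ∏è" is UTF-8 text that was wrongly decoded as MacRoman;
# ftfy detects and reverses that kind of mojibake.
print(ftfy.fix_text("Actions ‚öôÔ∏è"))  # expected to recover the original emoji
```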
|
apple-notes-to-sqlite 611552758 | issue | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/6/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1663399821 | I_kwDOBm6k_c5jJXeN | 2058 | 500 "attempt to write a readonly database" error caused by "PRAGMA schema_version" | simonw 9599 | open | 0 | 9 | 2023-04-11T23:57:50Z | 2023-04-13T16:35:21Z | OWNER | I've not been able to replicate this myself yet, but I've seen log files from a user affected by it.
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1665510265 | I_kwDOBm6k_c5jRat5 | 2060 | Clean up a bunch of warnings from ruff | simonw 9599 | open | 0 | 0 | 2023-04-13T01:23:02Z | 2023-04-13T01:23:02Z | OWNER | See: - #2056
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1665053646 | I_kwDOBm6k_c5jPrPO | 2059 | "Deceptive site ahead" alert on Heroku deployment | mtdukes 1186275 | open | 0 | 1 | 2023-04-12T18:34:51Z | 2023-04-13T01:13:01Z | NONE | I deployed a fairly basic instance of Datasette ( Is there a way around this? Maybe a way to add ownership verification through Google's Search Console? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1373210675 | I_kwDODD6af85R2Ygz | 13 | fails before generating views. ERR: table sqlite_master may not be modified | pax 116795 | open | 0 | 4 | 2022-09-14T15:41:50Z | 2023-04-11T03:46:17Z | NONE | generates checkins.db but seems to fail before generating views. Note: it worked on Ubuntu WSL but fails on macOS 12.5.1. Later edit: I suspect this is a problem with my local set-up; full error:
|
swarm-to-sqlite 205429375 | issue | { "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1657861026 | I_kwDOBm6k_c5i0POi | 2054 | Make detailed notes on how table, query and row views work right now | simonw 9599 | open | 0 | 13 | 2023-04-06T18:21:09Z | 2023-04-07T20:14:38Z | OWNER | Research to help influence the following: - #2049 - #2053 - #2050 - #262 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);