issues
4 rows where state = "open" and user = 82988 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at ▲ (sort), closed_at, author_association, pull_request, body, repo, type, active_lock_reason, performed_via_github_app, reactions, draft, state_reason
id: 377155320 | node_id: MDU6SXNzdWUzNzcxNTUzMjA= | number: 370 | title: Integration with JupyterLab | user: psychemedia 82988 | state: open | locked: 0 | comments: 4 | created_at: 2018-11-04T13:57:13Z | updated_at: 2022-09-29T08:17:47Z | author_association: CONTRIBUTOR

body:
I just watched a demo video for the JupyterLab Chart Editor, which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab and, I think, registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue.)

JupyterLab extensions can also open files from a dialogue, as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe.

This made me wonder what a similar Datasette integration might look like. For example, right-clicking on a CSV file (for which there is already a CSV table view) in the file browser could offer a View / Run as datasette file viewer option that loads the file into a SQLite database and opens a Datasette view onto it.

(? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all the CSV files opened this way?) As a freebie, the Datasette view would also expose the data via its JSON API.

Related:
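A minimal sketch of what the proposed viewer option might do behind the scenes, assuming `sqlite-utils` and `datasette` are installed; the filename `example.csv` and the port number are placeholders:

```python
# Sketch of the proposed "View / Run as datasette" action, assuming
# sqlite-utils and datasette are installed; example.csv and the port
# are placeholders, not anything the extension actually defines.
import csv
import subprocess

from sqlite_utils import Database

# Create a new SQLite db for this CSV file (one db per file).
db = Database("example.db")
with open("example.csv", newline="") as f:
    db["example"].insert_all(csv.DictReader(f))

# Launch a datasette server on its own port; JupyterLab would then
# open an iframe panel pointing at this URL.
subprocess.Popen(["datasette", "serve", "example.db", "--port", "8001"])
```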
repo: datasette 107914493 | type: issue | reactions: { "url": "https://api.github.com/repos/simonw/datasette/issues/370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
id: 1128466114 | node_id: I_kwDOCGYnMM5DQwbC | number: 406 | title: Creating tables with custom datatypes | user: psychemedia 82988 | state: open | locked: 0 | comments: 5 | created_at: 2022-02-09T12:16:31Z | updated_at: 2022-09-15T18:13:50Z | author_association: NONE

body:
Via https://stackoverflow.com/a/18622264/454773 I note the ability to register custom handlers for novel datatypes that can map into and out of things like SQLite BLOBs. From a quick look and a quick play, I didn't spot a way to do this in sqlite_utils. For example:

```python
# Via https://stackoverflow.com/a/18622264/454773
import sqlite3
import numpy as np
import io

def adapt_array(arr):
    """ http://stackoverflow.com/a/31312102/190597 (SoulNibbler) """
    out = io.BytesIO()
    np.save(out, arr)
    out.seek(0)
    return sqlite3.Binary(out.read())

def convert_array(text):
    out = io.BytesIO(text)
    out.seek(0)
    return np.load(out)

# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)

# Converts TEXT to np.array when selecting
sqlite3.register_converter("array", convert_array)
```

```python
from sqlite_utils import Database

db = Database('test.db')

# Reset the database connection to use the parsed datatype;
# sqlite_utils doesn't seem to support eg:
# Database('test.db', detect_types=sqlite3.PARSE_DECLTYPES)
db.conn = sqlite3.connect('test.db', detect_types=sqlite3.PARSE_DECLTYPES)

# Create a table the old fashioned way,
# but using the new custom data type
vector_table_create = """
CREATE TABLE dummy
    (title TEXT, vector array);
"""

cur = db.conn.cursor()
cur.execute(vector_table_create)

# sqlite_utils doesn't appear to support custom types (yet?!)
# The following errors on the "array" datatype:
"""
db["dummy"].create({
    "title": str,
    "vector": "array",
})
"""
```

We can then add / retrieve records from the database where the `vector` column round-trips as a numpy array:

```python
import numpy as np

db["dummy"].insert({'title': "test1", 'vector': np.array([1, 2, 3])})

for row in db.query("SELECT * FROM dummy"):
    print(row['title'], row['vector'], type(row['vector']))

"""
test1 [1 2 3] <class 'numpy.ndarray'>
"""
```

It would be handy to be able to do this idiomatically in sqlite_utils.
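For reference, the adapter/converter pair works with plain `sqlite3` on its own; a minimal round-trip sketch, assuming the `adapt_array` / `convert_array` registrations above have already been run:

```python
# Minimal round trip using only the sqlite3 module, assuming the
# adapt_array / convert_array registrations above have been run.
import sqlite3
import numpy as np

# detect_types=sqlite3.PARSE_DECLTYPES is what makes columns declared
# with the custom "array" type trigger the registered converter.
conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE dummy (title TEXT, vector array)")
conn.execute("INSERT INTO dummy VALUES (?, ?)", ("test1", np.array([1, 2, 3])))

row = conn.execute("SELECT * FROM dummy").fetchone()
print(row[0], row[1], type(row[1]))  # test1 [1 2 3] <class 'numpy.ndarray'>
```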
repo: sqlite-utils 140912432 | type: issue | reactions: { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
id: 527710055 | node_id: MDU6SXNzdWU1Mjc3MTAwNTU= | number: 640 | title: Nicer error message for heroku publish name clash | user: psychemedia 82988 | state: open | locked: 0 | comments: 1 | created_at: 2019-11-24T14:57:07Z | updated_at: 2019-12-06T07:19:34Z | author_association: CONTRIBUTOR

body:
If you try to publish to Heroku using no set name (i.e. the default name) and that name is already taken, what you get back is a raw error rather than a friendly message explaining the clash.

It would be neater if the name clash was trapped and reported with a clearer error message.

It may also be useful to provide a command to list the names that are currently in use, which I assume is available via a Heroku call?
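A sketch of what such a listing could use under the hood, assuming the Heroku CLI is installed and authenticated; `heroku apps --json` returns the apps visible to the current account:

```python
# Sketch: list Heroku app names already in use by the current account,
# assuming the heroku CLI is installed and logged in.
import json
import subprocess

out = subprocess.run(
    ["heroku", "apps", "--json"], capture_output=True, check=True
).stdout
for app in json.loads(out):
    print(app["name"])
```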
repo: datasette 107914493 | type: issue | reactions: { "url": "https://api.github.com/repos/simonw/datasette/issues/640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
id: 377166793 | node_id: MDU6SXNzdWUzNzcxNjY3OTM= | number: 372 | title: Docker build tools | user: psychemedia 82988 | state: open | locked: 0 | comments: 0 | created_at: 2018-11-04T16:02:35Z | updated_at: 2018-11-04T16:02:35Z | author_association: CONTRIBUTOR

body:
In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components, such as repo2docker.

If plugin/extension builders want to include additional packages, then things like incremental or composable builds that add additional items into a base datasette container might be relevant (see the sketch below).

Examples of Dockerfile generators / container builders:

Discussions / threads (via Binderhub gitter) on:
- why repo2docker was preferred over other build approaches

Relates to things like:
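A toy sketch of the kind of composable build step being described, assuming a hypothetical base image tag and plugin package list:

```python
# Toy sketch of composable Dockerfile generation: start from a base
# datasette image and layer in the extra packages a plugin requests.
# The image tag and package list here are illustrative placeholders.
BASE_IMAGE = "datasetteproject/datasette"
EXTRA_PACKAGES = ["datasette-vega", "sqlite-utils"]

def build_dockerfile(base, packages):
    lines = [f"FROM {base}"]
    if packages:
        lines.append("RUN pip install " + " ".join(packages))
    return "\n".join(lines) + "\n"

print(build_dockerfile(BASE_IMAGE, EXTRA_PACKAGES))
```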
repo: datasette 107914493 | type: issue | reactions: { "url": "https://api.github.com/repos/simonw/datasette/issues/372/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
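For reference, a query reproducing this page's row filter against a local copy of the database; the filename `github.db` is a placeholder:

```python
# Reproduce this page's row filter (state = "open", user = 82988,
# sorted by updated_at descending) against a local copy of the db.
import sqlite3

conn = sqlite3.connect("github.db")
sql = """
    SELECT id, number, title, state, updated_at
    FROM issues
    WHERE state = 'open' AND user = 82988
    ORDER BY updated_at DESC
"""
for row in conn.execute(sql):
    print(row)
```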