issues
16 rows where type = "issue" and user = 82988 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
377155320 | MDU6SXNzdWUzNzcxNTUzMjA= | 370 | Integration with JupyterLab | psychemedia 82988 | open | 0 | 4 | 2018-11-04T13:57:13Z | 2022-09-29T08:17:47Z | CONTRIBUTOR | I just watched a demo video for the JupyterLab Chart Editor, which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab and, I think, registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue.) JupyterLab extensions can also open files from a dialogue, as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe. This made me wonder what a similar datasette integration might look like. For example, by right-clicking on a CSV file (for which there is already a CSV table view) in the file browser, offer a View / Run as datasette file viewer option that serves that file via datasette (a rough sketch of the flow follows this row).
(? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all the imported CSV data?) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
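A minimal sketch of the flow floated above, assuming sqlite-utils and datasette are installed; the serve_csv helper and the one-process-per-port approach are illustrative assumptions, not anything JupyterLab or datasette actually ship:

```python
# Hypothetical sketch: build a per-CSV SQLite file, then launch a
# datasette process against it on its own port. A JupyterLab extension
# would manage these processes and proxy the ports.
import csv
import subprocess
from pathlib import Path

from sqlite_utils import Database


def serve_csv(csv_path, port=8001):
    """Load a CSV into a new SQLite db and serve it with datasette."""
    csv_path = Path(csv_path)
    db_path = csv_path.with_suffix(".db")
    db = Database(db_path)
    with open(csv_path, newline="") as f:
        # Table named after the CSV file
        db[csv_path.stem].insert_all(csv.DictReader(f))
    return subprocess.Popen(
        ["datasette", "serve", str(db_path), "--port", str(port)]
    )
```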
1128466114 | I_kwDOCGYnMM5DQwbC | 406 | Creating tables with custom datatypes | psychemedia 82988 | open | 0 | 5 | 2022-02-09T12:16:31Z | 2022-09-15T18:13:50Z | NONE | Via https://stackoverflow.com/a/18622264/454773 I note the ability to register custom handlers for novel datatypes that can map into and out of things like sqlite `BLOB`s. From a quick look and a quick play, I didn't spot a way to do this in `sqlite_utils`? For example:

```python
import sqlite3
import numpy as np
import io

def adapt_array(arr):
    """
    http://stackoverflow.com/a/31312102/190597 (SoulNibbler)
    """
    out = io.BytesIO()
    np.save(out, arr)
    out.seek(0)
    return sqlite3.Binary(out.read())

def convert_array(text):
    out = io.BytesIO(text)
    out.seek(0)
    return np.load(out)

# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)

# Converts TEXT to np.array when selecting
sqlite3.register_converter("array", convert_array)
```

```python
from sqlite_utils import Database

db = Database('test.db')

# Reset the database connection to use the parsed datatype;
# sqlite_utils doesn't seem to support eg:
#   Database('test.db', detect_types=sqlite3.PARSE_DECLTYPES)
db.conn = sqlite3.connect('test.db', detect_types=sqlite3.PARSE_DECLTYPES)

# Create a table the old fashioned way,
# but using the new custom data type
vector_table_create = """
CREATE TABLE dummy
    (title TEXT, vector array);
"""

cur = db.conn.cursor()
cur.execute(vector_table_create)

# sqlite_utils doesn't appear to support custom types (yet?!);
# the following errors on the "array" datatype:
"""
db["dummy"].create({
    "title": str,
    "vector": "array",
})
"""
```

We can then add / retrieve records from the database where the datatype of the `vector` column is a `numpy` array:

```python
import numpy as np

db["dummy"].insert({'title': "test1", 'vector': np.array([1, 2, 3])})

for row in db.query("SELECT * FROM dummy"):
    print(row['title'], row['vector'], type(row['vector']))

"""
test1 [1 2 3] <class 'numpy.ndarray'>
"""
```

It would be handy to be able to do this idiomatically in `sqlite_utils`. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1063388037 | I_kwDOCGYnMM4_YgOF | 343 | Provide function to generate hash_id from specified columns | psychemedia 82988 | closed | 0 | 4 | 2021-11-25T10:12:12Z | 2022-03-02T04:25:25Z | 2022-03-02T04:25:25Z | NONE | Hi. I note that you define a `hash_id` when inserting rows, generated as a hash over the record's values. It would be useful to be able to call a complementary function that generates the corresponding `hash_id` from specified columns / values, eg so a row can be looked up by its content (a sketch of what that might look like follows this row). Or is there a better pattern for doing that? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
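A sketch of the kind of complementary function being asked for, assuming the hash_id scheme is the sha1 of the JSON-serialized record with sorted keys (my reading of sqlite-utils' behaviour; verify against your installed version). The hash_id_for name is made up:

```python
# Rebuild a hash_id outside the library so a row can be looked up by
# content. Assumes sqlite-utils hashes the sha1 of the JSON-serialized
# record with sorted keys; check this against your installed version.
import hashlib
import json


def hash_id_for(record, columns=None):
    if columns is not None:
        record = {key: record[key] for key in columns}
    return hashlib.sha1(
        json.dumps(
            record, separators=(",", ":"), sort_keys=True, default=repr
        ).encode("utf8")
    ).hexdigest()


# eg look up a row by the hash of a subset of its columns:
row_id = hash_id_for({"title": "test1", "score": 3}, columns=["title"])
```

For what it's worth, I believe later sqlite-utils releases added a hash_id_columns= parameter along these lines, which appears to be how this issue was resolved.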
336936010 | MDU6SXNzdWUzMzY5MzYwMTA= | 331 | Datasette throws error when loading spatialite db without extension loaded | psychemedia 82988 | closed | 0 | 2 | 2018-06-29T09:51:14Z | 2022-01-20T21:29:40Z | 2018-07-10T15:13:36Z | CONTRIBUTOR | When starting datasette on a SpatiaLite database without loading the SpatiaLite extension (using eg the `--load-extension` switch), datasette throws an error. It would be nice to trap this and return a message saying something like:

```
It looks like you're trying to load a SpatiaLite database?
Make sure you load in the SpatiaLite extension when starting datasette.

Read more: https://datasette.readthedocs.io/en/latest/spatialite.html
```
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
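A rough sketch of the kind of check being suggested, not datasette's actual implementation: probe for SpatiaLite's metadata tables up front and surface the friendly hint before any query fails. The table-name heuristic and the spatial.db filename are assumptions:

```python
# Probe a database for SpatiaLite's spatial_ref_sys metadata table and
# print the suggested hint. Illustrative heuristic only.
import sqlite3

HINT = (
    "It looks like you're trying to load a SpatiaLite database?\n"
    "Make sure you load in the SpatiaLite extension when starting datasette.\n"
    "Read more: https://datasette.readthedocs.io/en/latest/spatialite.html"
)


def looks_like_spatialite(path):
    conn = sqlite3.connect(path)
    try:
        (count,) = conn.execute(
            "SELECT count(*) FROM sqlite_master WHERE name = 'spatial_ref_sys'"
        ).fetchone()
        return count > 0
    finally:
        conn.close()


if looks_like_spatialite("spatial.db"):
    print(HINT)
```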
377156339 | MDU6SXNzdWUzNzcxNTYzMzk= | 371 | datasette publish digitalocean plugin | psychemedia 82988 | closed | 0 | 3 | 2018-11-04T14:07:41Z | 2021-01-04T20:14:28Z | 2021-01-04T20:14:28Z | CONTRIBUTOR | Provide support for launching `datasette` on Digital Ocean. Example: Deploy Docker containers into Digital Ocean. Digital Ocean also has a preconfigured VM running Docker that can be launched from the command line via the Digital Ocean API: Docker One-Click Application. Related:
- Launching containers in Digital Ocean servers running docker: How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04
- How To Use Doctl, the Official DigitalOcean Command-Line Client |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
492153532 | MDU6SXNzdWU0OTIxNTM1MzI= | 573 | Exposing Datasette via Jupyter-server-proxy | psychemedia 82988 | closed | 0 | 3 | 2019-09-11T10:32:36Z | 2020-03-26T09:41:30Z | 2020-03-26T09:41:30Z | CONTRIBUTOR | It is possible to expose a running `datasette` server in a Jupyter environment via jupyter-server-proxy. For example, using this demo Binder, which has the server proxy installed, we can upload a simple test database from the notebook homepage, install datasette from a Jupyter terminal, set it running against the test db on eg port 8001, and then view it via the path `proxy/8001`. Clicking links results in 404s though, because the links datasette generates don't take the proxy path prefix into account (a sketch of the later base_url fix follows this row). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
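For reference, datasette later grew a base_url setting that addresses exactly this prefix problem; the flag spelling varies by version (I believe --config base_url:/proxy/8001/ on 0.39 through 0.52, --setting base_url /proxy/8001/ on later releases). A launch sketch, shown via subprocess purely for illustration:

```python
# Launch datasette so generated links carry the /proxy/8001/ prefix
# used by jupyter-server-proxy. The test.db filename is illustrative.
import subprocess

subprocess.Popen([
    "datasette", "test.db",
    "--port", "8001",
    "--setting", "base_url", "/proxy/8001/",
])
```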
545407916 | MDU6SXNzdWU1NDU0MDc5MTY= | 73 | upsert_all() throws issue when upserting to empty table | psychemedia 82988 | closed | 0 | 6 | 2020-01-05T11:58:57Z | 2020-01-31T14:21:09Z | 2020-01-05T17:20:18Z | NONE | If I try to `upsert_all()` a list of records into an existing but empty table, it throws a `TypeError`:

```python
import sqlite3
from sqlite_utils import Database
import pandas as pd

conx = sqlite3.connect(':memory:')
cx = conx.cursor()
cx.executescript('CREATE TABLE "test" ("Col1" TEXT);')

q = "SELECT * FROM test;"
pd.read_sql(q, conx)  # shows empty table

db = Database(conx)
db['test'].upsert_all([{'Col1': 'a'}, {'Col1': 'b'}])
```

```
TypeError                                 Traceback (most recent call last)
<ipython-input-74-8c26d93d7587> in <module>
      1 db = Database(conx)
----> 2 db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}])

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)
   1157             alter=alter,
   1158             extracts=extracts,
-> 1159             upsert=True,
   1160         )
   1161

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)
   1040                 sql = "INSERT OR IGNORE INTO [{table}]({pks}) VALUES({pk_placeholders});".format(
   1041                     table=self.name,
-> 1042                     pks=", ".join(["[{}]".format(p) for p in pks]),
   1043                     pk_placeholders=", ".join(["?" for p in pks]),
   1044                 )

TypeError: 'NoneType' object is not iterable
```

A hacky workaround is in use; a stand-in sketch follows this row. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
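The workaround snippet itself didn't survive the export; as a stand-in, a sketch in the same spirit that falls back to a plain insert when the table is still empty. safe_upsert_all is a made-up helper, and it assumes a pk is available to upsert against:

```python
# Fall back to a plain insert when the target table has no rows yet,
# since the upsert path trips over the missing pk bookkeeping.
# table.exists() is a method in current sqlite-utils releases.
def safe_upsert_all(table, records, pk):
    if not table.exists() or not table.count:
        table.insert_all(records, pk=pk)
    else:
        table.upsert_all(records, pk=pk)


# safe_upsert_all(db['test'], [{'Col1': 'a'}, {'Col1': 'b'}], pk='Col1')
```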
527710055 | MDU6SXNzdWU1Mjc3MTAwNTU= | 640 | Nicer error message for heroku publish name clash | psychemedia 82988 | open | 0 | 1 | 2019-11-24T14:57:07Z | 2019-12-06T07:19:34Z | CONTRIBUTOR | If you try to publish to Heroku using no set name (i.e. the default name) and that name is already taken, the publish step fails with an unfriendly error.
It would be neater if the name clash was trapped and a nicer error message shown.
It may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
291639118 | MDU6SXNzdWUyOTE2MzkxMTg= | 183 | Custom Queries - escaping strings | psychemedia 82988 | closed | 0 | 2 | 2018-01-25T16:49:13Z | 2019-06-24T06:45:07Z | 2019-06-24T06:45:07Z | CONTRIBUTOR | If a SQLite table column name contains spaces, it is usually referred to in double quotes. In the JSON metadata file, this is passed by escaping the double quotes (\"). When specifying a custom query in the metadata file using those escaped double quotes, the query does not work. Alternatively, a valid custom query can be passed using backticks (`) to quote the column name and single (unescaped) quotes for the matched value (a quoting illustration follows this row). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
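An illustration of the quoting behaviour described above, using a hypothetical test table rather than the issue's original (lost) example:

```python
# SQLite accepts standard double-quoted identifiers and, as a
# compatibility extension, MySQL-style backticks; the latter avoid
# having to escape double quotes inside a JSON metadata file.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE test ("foo bar" TEXT)')
conn.execute("INSERT INTO test VALUES ('x')")

print(conn.execute('SELECT "foo bar" FROM test').fetchall())
print(conn.execute(
    "SELECT `foo bar` FROM test WHERE `foo bar` = 'x'"
).fetchall())
```

Both queries return the same rows; the backtick form is just easier to embed in JSON.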
432870248 | MDU6SXNzdWU0MzI4NzAyNDg= | 431 | Datasette doesn't reload when database file changes | psychemedia 82988 | closed | 0 | 3 | 2019-04-13T16:50:43Z | 2019-05-02T05:13:55Z | 2019-05-02T05:13:54Z | CONTRIBUTOR | My understanding of the `--reload` option was that the server would restart if the database file changed, but that doesn't seem to happen. I'm running on a Mac, and from what I can tell the served content doesn't update when the database file is modified. I was also expecting to see some sort of log statement in the datasette logging to say that it had detected a file change and restarted, but don't see anything there? Will try to check on an Ubuntu box when I get a chance to see if this is a Mac thing. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
403922644 | MDU6SXNzdWU0MDM5MjI2NDQ= | 8 | Problems handling column names containing spaces or - | psychemedia 82988 | closed | 0 | 3 | 2019-01-28T17:23:28Z | 2019-04-14T15:29:33Z | 2019-02-23T21:09:03Z | NONE | Irrespective of whether using column names containing a space or - character is good practice, SQLite does allow it, but `sqlite-utils` throws an error in `insert_all()` when a column name contains one:

```python
import sqlite3
from sqlite_utils import Database

dbname = 'test.db'
DB = Database(sqlite3.connect(dbname))

import pandas as pd
df = pd.DataFrame({'col1': range(3), 'col2': range(3)})

# Convert pandas dataframe to appropriate list/dict format
DB['test1'].insert_all(df.to_dict(orient='records'))
# Works fine
```

However:

```python
df = pd.DataFrame({'col 1': range(3), 'col2': range(3)})
DB['test1'].insert_all(df.to_dict(orient='records'))
```

throws:

```
OperationalError                          Traceback (most recent call last)
<ipython-input-27-070b758f4f92> in <module>()
      1 import pandas as pd
      2 df = pd.DataFrame({'col 1':range(3), 'col2':range(3)})
----> 3 DB['test1'].insert_all(df.to_dict(orient='records'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)
    327             jsonify_if_needed(record.get(key, None)) for key in all_columns
    328         )
--> 329         result = self.db.conn.execute(sql, values)
    330         self.db.conn.commit()
    331         self.last_id = result.lastrowid

OperationalError: near "1": syntax error
```

and:

```python
df = pd.DataFrame({'col-1': range(3), 'col2': range(3)})
DB['test1'].insert_all(df.to_dict(orient='records'))
```

results in:

```
OperationalError                          Traceback (most recent call last)
<ipython-input-28-654523549d20> in <module>()
      1 import pandas as pd
      2 df = pd.DataFrame({'col-1':range(3), 'col2':range(3)})
----> 3 DB['test1'].insert_all(df.to_dict(orient='records'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)
    327             jsonify_if_needed(record.get(key, None)) for key in all_columns
    328         )
--> 329         result = self.db.conn.execute(sql, values)
    330         self.db.conn.commit()
    331         self.last_id = result.lastrowid

OperationalError: near "-": syntax error
```
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
415575624 | MDU6SXNzdWU0MTU1NzU2MjQ= | 414 | datasette requires specific version of Click | psychemedia 82988 | closed | 0 | 1 | 2019-02-28T11:24:59Z | 2019-03-15T04:42:13Z | 2019-03-15T04:42:13Z | CONTRIBUTOR | Is the pin to a specific Click version necessary? Current release is at 7.0. Can the requirement be liberalised, eg to a minimum version rather than an exact one? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
411066700 | MDU6SXNzdWU0MTEwNjY3MDA= | 10 | Error in upsert if column named 'order' | psychemedia 82988 | closed | 0 | 1 | 2019-02-16T12:05:18Z | 2019-02-24T16:55:38Z | 2019-02-24T16:55:37Z | NONE | The following works fine:

```python
import sqlite3
import pandas as pd
from sqlite_utils import Database

connX = sqlite3.connect('DELME.db', timeout=10)

dfX = pd.DataFrame({'col1': range(3), 'col2': range(3)})
DBX = Database(connX)
DBX['test'].upsert_all(dfX.to_dict(orient='records'))
```

But if a column is named `order`:

```python
dfX = pd.DataFrame({'order': range(3), 'col2': range(3)})
DBX = Database(connX)
DBX['test'].upsert_all(dfX.to_dict(orient='records'))
```

it throws an error:

```
OperationalError                          Traceback (most recent call last)
<ipython-input-130-7dba33cd806c> in <module>
      3 dfX=pd.DataFrame({'order':range(3),'col2':range(3)})
      4 DBX = Database(connX)
----> 5 DBX['test'].upsert_all(dfX.to_dict(orient='records'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order)
    347             foreign_keys=foreign_keys,
    348             upsert=True,
--> 349             column_order=column_order,
    350         )
    351

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)
    327             jsonify_if_needed(record.get(key, None)) for key in all_columns
    328         )
--> 329         result = self.db.conn.execute(sql, values)
    330         self.db.conn.commit()
    331         self.last_id = result.lastrowid

OperationalError: near "order": syntax error
```
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
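For context, the failure comes from interpolating the bare column name into generated SQL: order is an SQL keyword, so it needs quoting, which I believe is what the eventual fix did (sqlite-utils brackets its identifiers). A quick demonstration with plain sqlite3 and a hypothetical table:

```python
# "order" fails when used bare in SQL, but works when wrapped in
# SQLite's square-bracket identifier quoting.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test ([order] INTEGER, col2 INTEGER)")
conn.execute("INSERT INTO test ([order], col2) VALUES (?, ?)", (1, 2))
print(conn.execute("SELECT [order], col2 FROM test").fetchall())
```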
377166793 | MDU6SXNzdWUzNzcxNjY3OTM= | 372 | Docker build tools | psychemedia 82988 | open | 0 | 0 | 2018-11-04T16:02:35Z | 2018-11-04T16:02:35Z | CONTRIBUTOR | In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components. If plugin/extension builders want to include additional packages, then things like incremental builds, or composable builds that add additional items into a base `datasette` container, might be useful. Examples of Dockerfile generators / container builders, and discussion threads on the topic (via the Binderhub gitter), relate to this. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/372/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 } |
||||||||
336924199 | MDU6SXNzdWUzMzY5MjQxOTk= | 330 | Limit text display in cells containing large amounts of text | psychemedia 82988 | closed | 0 | 4 | 2018-06-29T09:15:22Z | 2018-07-24T04:53:20Z | 2018-07-10T16:20:48Z | CONTRIBUTOR | The default preview of a database shows all columns (is the row count limited?) which is fine in many cases but can take a long time to load / offer a large overhead if the table is a SpatiaLite table containing geometry columns that include large shapefiles. Would it make sense to have a setting that can limit the amount of text displayed in any given cell in the table preview, or (less useful?) suppress (with notification) the display of overlong columns unless enabled by the user? An issue then arises if a user does want to see all the text in a cell: 1) for a particular cell; 2) for every cell in the table; 3) for all cells in a particular column or columns (I haven't checked but what if a column contains e.g. raw image data? Does this display as raw data? Or can this be rendered in a context aware way as an image preview? I guess a custom template would be one way to do that?) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
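For reference, datasette's truncate_cells_html setting caps how many characters a cell renders in the HTML table view, which I believe came out of this issue. A launch sketch, shown via subprocess purely for illustration; the spatial.db filename is an assumption:

```python
# Cap HTML table cells at 1024 characters. Modern datasette releases
# spell this --setting; older ones used --config truncate_cells_html:1024.
import subprocess

subprocess.Popen([
    "datasette", "spatial.db",
    "--setting", "truncate_cells_html", "1024",
])
```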
286938589 | MDU6SXNzdWUyODY5Mzg1ODk= | 177 | Publishing to Heroku - metadata file not uploaded? | psychemedia 82988 | closed | 0 | 0 | 2018-01-09T01:04:31Z | 2018-01-25T16:45:32Z | 2018-01-25T16:45:32Z | CONTRIBUTOR | Trying to run datasette (version 0.14) on Heroku with a `metadata.json` file: the metadata doesn't seem to be uploaded? On a Mac with a dodgy `tar`, the packaging step may not behave.
Could that be causing the issue? Also, I'm not seeing custom query links anywhere obvious when I run the metadata file with a local datasette server? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```