issues
5 rows where user = 1055831 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
410384988 | MDU6SXNzdWU0MTAzODQ5ODg= | 411 | How to pass named parameter into spatialite MakePoint() function | dazzag24 1055831 | closed | 0 | 3 | 2019-02-14T16:30:22Z | 2023-10-25T13:23:04Z | 2019-05-05T12:25:04Z | NONE | Hi, datasette version: "0.26.2" extensions: spatialite: "4.4.0-RC0" sqlite version: "3.22.0" I have a table of airports with latitude and longitude columns. I've added spatialite (with KNN support). After creating the db using csvs-to-sqlite, I run these commands to set up the spatialite tables:
```
conn.execute('SELECT InitSpatialMetadata(1)')
conn.execute("SELECT AddGeometryColumn('airports', 'point_geom', 4326, 'POINT', 2);")
conn.execute('''UPDATE airports SET point_geom = GeomFromText('POINT('||"longitude"||' '||"latitude"||')',4326);''')
conn.execute("SELECT CreateSpatialIndex('airports', 'point_geom');")
```
I'm attempting to create a canned query and have this in my metadata.json file:
Have also tried:
However I cannot seem to find the correct combination of named parameter syntax (:Lat) or sqlite concatenation operator to make it work. Any ideas if using named parameters inside functions is supported? Thanks Darren |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
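For the question above, one quick way to confirm that SQLite accepts named parameters inside function calls such as MakePoint() is to run such a query directly against the database. The following is a minimal sketch, not taken from the issue thread: the airports.db filename, the example coordinates, and the availability of a loadable mod_spatialite extension are all assumptions.

```python
import sqlite3

conn = sqlite3.connect("airports.db")  # assumed local copy of the airports database
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")  # extension name/path varies by platform

# Named parameters (:longitude, :latitude) are ordinary SQL parameters,
# so they can appear inside function calls like MakePoint(x, y, srid).
sql = """
    SELECT latitude, longitude,
           ST_Distance(point_geom, MakePoint(:longitude, :latitude, 4326)) AS dist
    FROM airports
    ORDER BY dist
    LIMIT 5
"""
# Distance units follow the SRID (degrees for 4326) in this simple form.
for row in conn.execute(sql, {"longitude": -1.786, "latitude": 51.522}):
    print(row)
```

The same SQL, with its :latitude and :longitude parameters, could then be placed in a canned query in metadata.json, since Datasette canned queries support named parameters.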
548591089 | MDU6SXNzdWU1NDg1OTEwODk= | 657 | Allow creation of virtual tables at startup | dazzag24 1055831 | open | 0 | 4 | 2020-01-12T16:10:55Z | 2021-01-15T20:24:35Z | NONE | Hi, I've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow. https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html https://github.com/cldellow/sqlite-parquet-vtable This works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the spatialite extension, I knew that datasette supports loading extensions in the underlying sqlite instance. However, I hit a blocker as the current design only allows SELECT statements to be executed and so I am unable to execute the crucial CREATE VIRTUAL TABLE ......... command that is required to load the data from the parquet file into the table. It seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself? Could this be done as a datasette plugin? Or would this require more fundamental changes at initialisation time? My thoughts are that something at init time could detect that the user was loading a .parquet file and then switch to a mode where it loads that via the "CREATE VIRTUAL TABLE..." rather than loading the .db file in the default case?? I'm happy to contribute code and testing, I just need some pointers on the best approach. Thanks Darren |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
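One possible route, not confirmed in this issue, would be a small plugin built on Datasette's prepare_connection hook, which receives each SQLite connection as it is created. The sketch below assumes the sqlite-parquet-vtable extension is installed at a hypothetical path and that the connection permits the CREATE VIRTUAL TABLE statement; whether a write statement succeeds here depends on how Datasette opens the database, which is exactly the blocker described above.

```python
from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Load the sqlite-parquet-vtable extension on every new connection.
    conn.enable_load_extension(True)
    conn.load_extension("/usr/local/lib/libparquet")  # hypothetical install path

    # Define a virtual table backed by a parquet file (syntax from the
    # sqlite-parquet-vtable README); ignore the error if it already exists.
    try:
        conn.execute(
            "CREATE VIRTUAL TABLE airports USING parquet('/data/airports.parquet')"
        )  # hypothetical table and file names
    except Exception:
        pass
```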
504238461 | MDU6SXNzdWU1MDQyMzg0NjE= | 6 | sqlite3.OperationalError: table users has no column named bio | dazzag24 1055831 | closed | 0 | 2 | 2019-10-08T19:39:52Z | 2019-10-13T05:31:28Z | 2019-10-13T05:30:19Z | NONE |
```
$ github-to-sqlite repos github.db
$ github-to-sqlite starred github.db dazzag24
Traceback (most recent call last):
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/bin/github-to-sqlite", line 10, in <module>
    sys.exit(cli())
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/github_to_sqlite/cli.py", line 106, in starred
    utils.save_stars(db, user, stars)
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/github_to_sqlite/utils.py", line 177, in save_stars
    user_id = save_user(db, user)
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/github_to_sqlite/utils.py", line 61, in save_user
    return db["users"].upsert(to_save, pk="id").last_pk
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/sqlite_utils/db.py", line 1067, in upsert
    extracts=extracts,
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/sqlite_utils/db.py", line 916, in insert
    extracts=extracts,
  File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/sqlite_utils/db.py", line 1024, in insert_all
    result = self.db.conn.execute(sql, values)
sqlite3.OperationalError: table users has no column named bio
```
```
$ pipenv graph
github-to-sqlite==0.4
  - requests [required: Any, installed: 2.22.0]
    - certifi [required: >=2017.4.17, installed: 2019.9.11]
    - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
    - idna [required: >=2.5,<2.9, installed: 2.8]
    - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.6]
  - sqlite-utils [required: ~=1.11, installed: 1.11]
    - click [required: Any, installed: 7.0]
    - click-default-group [required: Any, installed: 1.2.2]
      - click [required: Any, installed: 7.0]
    - tabulate [required: Any, installed: 0.8.5]

Python 3.6.8
```
|
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/6/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
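The traceback shows upsert() failing because a users table created by an earlier run predates the bio field now present in the GitHub API payload. The long-term fix is upgrading github-to-sqlite so it creates the column itself; as a stop-gap, and assuming the missing column is the only problem (an assumption, not stated in the issue), it could be added by hand with sqlite-utils before re-running the command:

```python
import sqlite_utils

db = sqlite_utils.Database("github.db")

# Add the missing column only if it is not already there.
existing = [col.name for col in db["users"].columns]
if "bio" not in existing:
    db["users"].add_column("bio", str)  # adds a TEXT column named bio

print([col.name for col in db["users"].columns])
```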
504720731 | MDU6SXNzdWU1MDQ3MjA3MzE= | 1 | Add more details on how to request data from google takeout correctly. | dazzag24 1055831 | open | 0 | 0 | 2019-10-09T15:17:34Z | 2019-10-09T15:17:34Z | NONE | The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:
In addition, unless you specify that "My Activity" is downloaded in JSON format, the default is HTML. This then causes the
command to fail as it only contains HTML files, not JSON files. Thanks |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
400229984 | MDU6SXNzdWU0MDAyMjk5ODQ= | 401 | How to pass configuration to plugins? | dazzag24 1055831 | closed | 0 | 3 | 2019-01-17T11:20:41Z | 2019-01-18T11:48:13Z | 2019-01-18T06:49:07Z | NONE | Hi, Firstly, thanks for your work on datasette, it is a hugely useful tool! I've been working on a fork [https://github.com/dazzag24/datasette-cluster-map] of datasette-cluster-map to allow the tileserver to be easily switched. Primarily because the tiles being served in the current version use localised text for labels and I'd like to have English used for these names instead. It uses http://leaflet-extras.github.io/leaflet-providers/preview/ to allow you to simply set the tile provider using a call like so:
Can you please point me to an example of how to pass configuration from the metadata.json down into a plugin? Once I've overcome this issue I was wondering if you would be interested in taking this change into your version? Many thanks Darren |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed |
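For reference, current versions of Datasette expose per-plugin configuration through a "plugins" block in metadata.json, which a plugin can read back with datasette.plugin_config(). Below is a minimal sketch of the reading side; the tile_layer setting name and default value are purely illustrative and not taken from this issue or from datasette-cluster-map.

```python
from datasette import hookimpl

DEFAULT_TILE_LAYER = "OpenStreetMap.Mapnik"  # hypothetical default


@hookimpl
def extra_body_script(datasette):
    # Reads the {"plugins": {"datasette-cluster-map": {...}}} section of metadata.json.
    config = datasette.plugin_config("datasette-cluster-map") or {}
    tile_layer = config.get("tile_layer", DEFAULT_TILE_LAYER)  # hypothetical setting
    # Return JavaScript to inject into the page body.
    return f"window.TILE_LAYER = {tile_layer!r};"
```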
```
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
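Against that schema, the selection shown at the top of this page (5 rows where user = 1055831, sorted by updated_at descending) corresponds to a simple query. A sketch of running it directly, assuming a local github.db copy of this database (the filename is an assumption):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed local copy of this database
conn.row_factory = sqlite3.Row

sql = """
    SELECT id, number, title, state, updated_at
    FROM issues
    WHERE user = :user
    ORDER BY updated_at DESC
    LIMIT 5
"""
# Prints the same five issues listed in the table above.
for row in conn.execute(sql, {"user": 1055831}):
    print(dict(row))
```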