github

Custom SQL query returning 10 rows

html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app
https://github.com/simonw/sqlite-utils/issues/430#issuecomment-1116336340 https://api.github.com/repos/simonw/sqlite-utils/issues/430 1116336340 IC_kwDOCGYnMM5CifDU 9308268 2022-05-03T17:03:31Z 2022-05-03T17:03:31Z NONE So, the good news is that it appears that setting one of those PRAGMA statements fixed the issue I described above, where the `table.extract()` method call on this large database would never complete. The bad news is that I'm not sure which one! I wonder if it's something system- or environment-specific about SQLite, or maybe something else going on.
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1224112817  
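For context, a minimal sketch of what trying candidate PRAGMA statements before a long-running `extract()` might look like; the database, table, and column names here are hypothetical, and the comment above does not establish which PRAGMA (if any) was the actual fix:

```python3
import sqlite_utils

db = sqlite_utils.Database("large.db")  # hypothetical database name

# Candidate PRAGMAs one might try before a long-running extract();
# which of these (if any) made the difference is not established above.
db.execute("PRAGMA cache_size = -64000")  # allow roughly 64 MB of page cache
db.execute("PRAGMA temp_store = MEMORY")  # keep temporary structures in memory
db.execute("PRAGMA journal_mode = WAL")   # write-ahead logging

# Hypothetical table/column names standing in for the large table being extracted.
db["big_table"].extract(["repeated_column"], table="lookup", fk_column="lookup_id")
```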
https://github.com/simonw/datasette/issues/1713#issuecomment-1099443468 https://api.github.com/repos/simonw/datasette/issues/1713 1099443468 IC_kwDOBm6k_c5BiC0M 9308268 2022-04-14T17:26:27Z 2022-04-14T17:26:27Z NONE What would be an awesome feature as a plugin would be the ability to save a query (and possibly even its results) to a GitHub gist. Being able to share results that way would be super fantastic. Possibly even in Jupyter Notebook format (since GitHub and GitHub gists nicely render those)! I know there's the handy datasette-saved-queries plugin, but a button that could export stuff out and then even possibly import it back in would be great (I'm sort of thinking of the way that Google Colab allows you to save to GitHub and then pull the notebook back in, which is a really great workflow ![image](https://user-images.githubusercontent.com/9308268/163441612-9ad2649f-c73e-4557-aaf2-e3d0fdc48fbf.png) https://github.com/cincinnatilibrary/collection-analysis/blob/master/reports/colab_datasette_example.ipynb )
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
1203943272  
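A rough sketch of what the export half of such a plugin might do, using the public GitHub Gist API; the helper name, token handling, and file name are placeholders, not anything Datasette or an existing plugin actually provides:

```python3
import requests

def save_results_to_gist(csv_url, token, description="Datasette query results"):
    """Fetch a Datasette .csv result and save it as a secret gist."""
    csv_text = requests.get(csv_url).text
    response = requests.post(
        "https://api.github.com/gists",
        headers={"Authorization": f"token {token}"},
        json={
            "description": description,
            "public": False,
            "files": {"results.csv": {"content": csv_text}},
        },
    )
    response.raise_for_status()
    return response.json()["html_url"]  # shareable gist URL
```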
https://github.com/simonw/datasette/issues/1181#issuecomment-998999230 https://api.github.com/repos/simonw/datasette/issues/1181 998999230 IC_kwDOBm6k_c47i4S- 9308268 2021-12-21T18:25:15Z 2021-12-21T18:25:15Z NONE I wonder if I'm encountering the same bug (or something related). I had previously been using the .csv feature to run queries and then fetch the results with the pandas `read_csv()` function, but it seems to have stopped working recently. https://ilsweb.cincinnatilibrary.org/collection-analysis/collection-analysis/current_collection-3d56dbf.csv?sql=select%0D%0A++*%0D%0Afrom%0D%0A++bib%0D%0Alimit%0D%0A++100&_size=max Datasette v0.59.4 ![image](https://user-images.githubusercontent.com/9308268/146979957-66911877-2cd9-4022-bc76-fd54e4a3a6f7.png)
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
781262510  
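For reference, the workflow being described is simply pointing pandas at a Datasette `.csv` endpoint. A minimal sketch using the query URL from the comment above (which, per that comment, was not returning CSV on that instance at the time):

```python3
import pandas as pd

# The .csv endpoint from the comment above; any Datasette SQL query URL that
# returns text/csv can be read the same way.
url = (
    "https://ilsweb.cincinnatilibrary.org/collection-analysis/collection-analysis/"
    "current_collection-3d56dbf.csv"
    "?sql=select%0D%0A++*%0D%0Afrom%0D%0A++bib%0D%0Alimit%0D%0A++100&_size=max"
)
df = pd.read_csv(url)
print(df.shape)
```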
https://github.com/simonw/datasette/issues/1304#issuecomment-988459453 https://api.github.com/repos/simonw/datasette/issues/1304 988459453 IC_kwDOBm6k_c466rG9 9308268 2021-12-08T03:15:27Z 2021-12-08T03:15:27Z NONE I was wondering if there were a way to use some sort of string function to "unpack" the values and convert them into ints... hm
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
863884805  
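One possible interpretation of that idea, sketched with SQLite's built-in JSON1 functions rather than anything Datasette itself provides; the table, column, and parameter names are made up for illustration:

```python3
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

# A single text parameter holding several values, "unpacked" and cast to ints
# inside SQLite using json_each().
ids_param = "[1, 3]"
rows = conn.execute(
    """
    select * from items
    where id in (select cast(value as integer) from json_each(:ids))
    """,
    {"ids": ids_param},
).fetchall()
print(rows)  # [(1, 'a'), (3, 'c')]
```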
https://github.com/simonw/sqlite-utils/issues/186#issuecomment-897600677 https://api.github.com/repos/simonw/sqlite-utils/issues/186 897600677 IC_kwDOCGYnMM41gEyl 9308268 2021-08-12T12:32:14Z 2021-08-12T12:32:14Z NONE Actually, I forgot to include `bib_pub_year` in the extract ... But also, I tried again with empty string values instead of `NULL` values and it seems to place the foreign key properly / correctly...

```python3
import sqlite3

sql = """\
INSERT INTO "circulation_info" ("item_id", "bib_title", "bib_creator", "bib_format", "bib_pub_year", "checkout_date")
VALUES
(1, "title one", "creator one", "Book", 2018, "2021-08-12 00:01"),
(2, "title two", "creator one", "Book", 2019, "2021-08-12 00:02"),
(3, "title three", "", "DVD", 2020, "2021-08-12 00:03"),
(4, "title four", "", "DVD", "", "2021-08-12 00:04"),
(5, "title five", "", "DVD", "", "2021-08-12 00:05")
"""

with sqlite3.connect('test_bib_2.db') as con:
    con.execute(sql)
```

```python3
import sqlite_utils

db = sqlite_utils.Database("test_bib_2.db")  # db setup implied by the previous comment
db["circulation_info"].extract(
    ["bib_title", "bib_creator", "bib_format", "bib_pub_year"],
    table="bib_info",
    fk_column="bib_info_id"
)
```

```
{'id': 1, 'item_id': 1, 'bib_info_id': 1, 'bib_pub_year': 2018, 'checkout_date': '2021-08-12 00:01'}
{'id': 2, 'item_id': 2, 'bib_info_id': 2, 'bib_pub_year': 2019, 'checkout_date': '2021-08-12 00:02'}
{'id': 3, 'item_id': 3, 'bib_info_id': 3, 'bib_pub_year': 2020, 'checkout_date': '2021-08-12 00:03'}
{'id': 4, 'item_id': 4, 'bib_info_id': 4, 'bib_pub_year': '', 'checkout_date': '2021-08-12 00:04'}
{'id': 5, 'item_id': 5, 'bib_info_id': 5, 'bib_pub_year': '', 'checkout_date': '2021-08-12 00:05'}
---
{'id': 1, 'bib_title': 'title one', 'bib_creator': 'creator one', 'bib_format': 'Book'}
{'id': 2, 'bib_title': 'title two', 'bib_creator': 'creator one', 'bib_format': 'Book'}
{'id': 3, 'bib_title': 'title three', 'bib_creator': '', 'bib_format': 'DVD'}
{'id': 4, 'bib_title': 'title four', 'bib_creator': '', 'bib_format': 'DVD'}
{'id': 5, 'bib_title': 'title five', 'bib_creator': '', 'bib_format': 'DVD'}
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
722816436  
https://github.com/simonw/sqlite-utils/issues/186#issuecomment-897588624 https://api.github.com/repos/simonw/sqlite-utils/issues/186 897588624 IC_kwDOCGYnMM41gB2Q 9308268 2021-08-12T12:13:25Z 2021-08-12T12:13:25Z NONE I think I ran into an issue that's perhaps related to `extract()`. I have a case where I want to create a lookup table for all the related title data, where there are possibly multiple null values in the related columns ....

```python3
import sqlite3

sql = """\
INSERT INTO "circulation_info" ("item_id", "bib_title", "bib_creator", "bib_format", "bib_pub_year", "checkout_date")
VALUES
(1, "title one", "creator one", "Book", 2018, "2021-08-12 00:01"),
(2, "title two", "creator one", "Book", 2019, "2021-08-12 00:02"),
(3, "title three", NULL, "DVD", 2020, "2021-08-12 00:03"),
(4, "title four", NULL, "DVD", NULL, "2021-08-12 00:04"),
(5, "title five", NULL, "DVD", NULL, "2021-08-12 00:05")
"""

with sqlite3.connect('test_bib.db') as con:
    con.execute(sql)
```

when I run the `extract()` method ...

```python3
import sqlite_utils

db = sqlite_utils.Database("test_bib.db")
db["circulation_info"].extract(
    ["bib_title", "bib_creator", "bib_format"],
    table="bib_info",
    fk_column="bib_info_id"
)

for row in db["circulation_info"].rows:
    print(row)

print("\n---\n")

for row in db["bib_info"].rows:
    print(row)
```

results in this ..

```
{'id': 1, 'item_id': 1, 'bib_info_id': 1, 'bib_pub_year': 2018, 'checkout_date': '2021-08-12 00:01'}
{'id': 2, 'item_id': 2, 'bib_info_id': 2, 'bib_pub_year': 2019, 'checkout_date': '2021-08-12 00:02'}
{'id': 3, 'item_id': 3, 'bib_info_id': None, 'bib_pub_year': 2020, 'checkout_date': '2021-08-12 00:03'}
{'id': 4, 'item_id': 4, 'bib_info_id': None, 'bib_pub_year': None, 'checkout_date': '2021-08-12 00:04'}
{'id': 5, 'item_id': 5, 'bib_info_id': None, 'bib_pub_year': None, 'checkout_date': '2021-08-12 00:05'}
---
{'id': 1, 'bib_title': 'title one', 'bib_creator': 'creator one', 'bib_format': 'Book'}
{'id': 2, 'bib_title': 'title two', 'bib_creator': 'creator one', 'bib_format': 'Book'}
{'id': 3, 'bib_title': 'title three', 'bib_creator': None, 'bib_format': 'DVD'}
{'id': 4, 'bib_title': 'title four', 'bib_cre…
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
722816436  
https://github.com/simonw/datasette/issues/268#issuecomment-876721585 https://api.github.com/repos/simonw/datasette/issues/268 876721585 MDEyOklzc3VlQ29tbWVudDg3NjcyMTU4NQ== 9308268 2021-07-08T20:22:17Z 2021-07-08T20:22:17Z NONE I do like the idea of there being an option for turning that on by default, so that you could use those terms in the default "Search" bar presented when you browse to a table where FTS has been enabled. Maybe even a small inline pop-up with a short bit explaining the FTS feature and the keywords (e.g. case matters). What are the side effects of turning that on in the query string, or even by default as you suggested? I see that you stated in the docs "to ensure they do not cause any confusion for users who are not aware of them", but I'm not sure what those could be. Isn't it the case that those keywords are only picked up by SQLite where you're using the MATCH clause? Seems like a really powerful feature (even though there are a lot of hurdles around setting it up in the SQLite db ... sqlite-utils makes that so simple, by the way!)
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
323718842  
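To make the MATCH point concrete, a minimal sketch (assuming an SQLite build with FTS5; the table and data are made up) showing that keywords such as NOT only carry special meaning inside a MATCH query:

```python3
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE bib_fts USING fts5(title, subjects)")
conn.execute(
    "INSERT INTO bib_fts VALUES (?, ?), (?, ?)",
    ("The Black Death", "history plague", "Plague novel", "fiction"),
)

# Uppercase NOT is an FTS operator here, because this is a MATCH query;
# in an ordinary WHERE clause it would just be part of a string value.
rows = conn.execute(
    "SELECT title FROM bib_fts WHERE bib_fts MATCH ?",
    ("black death NOT fiction",),
).fetchall()
print(rows)  # [('The Black Death',)]
```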
https://github.com/simonw/datasette/issues/268#issuecomment-876428348 https://api.github.com/repos/simonw/datasette/issues/268 876428348 MDEyOklzc3VlQ29tbWVudDg3NjQyODM0OA== 9308268 2021-07-08T13:13:12Z 2021-07-08T13:13:12Z NONE I had set up full-text search on my instance of Datasette for title data for our public library, and was noticing that some of the features of SQLite FTS weren't working as expected ... and maybe the issue is in the `escape_fts()` function ![image](https://user-images.githubusercontent.com/9308268/124925900-f1ea8b00-dfca-11eb-895e-59cc083d6524.png) vs. removing the function... ![image](https://user-images.githubusercontent.com/9308268/124925971-0464c480-dfcb-11eb-8fbf-8e9b5d6e0861.png) Also, on the issue of sorting by rank by default .. perhaps something like this could work for the baked-in default SQL query for Datasette? ![image](https://user-images.githubusercontent.com/9308268/124927191-5a863780-dfcc-11eb-9908-3f63577d5ff5.png) [link to the above search in my instance of Datasette](https://ilsweb.cincinnatilibrary.org/collection-analysis/current_collection-87a9011?sql=with+fts_search+as+%28%0D%0A++select%0D%0A++rowid%2C%0D%0A++rank%0D%0A++++from%0D%0A++++++bib_fts%0D%0A++++where%0D%0A++++++bib_fts+match+%3Asearch%0D%0A%29%0D%0A%0D%0Aselect%0D%0A++%0D%0A++bib_record_num%2C%0D%0A++creation_date%2C%0D%0A++record_last_updated%2C%0D%0A++isbn%2C%0D%0A++best_author%2C%0D%0A++best_title%2C%0D%0A++publisher%2C%0D%0A++publish_year%2C%0D%0A++bib_level_callnumber%2C%0D%0A++indexed_subjects%0D%0Afrom%0D%0A++fts_search%0D%0A++join+bib+on+bib.rowid+%3D+fts_search.rowid%0D%0A++%0D%0Aorder+by%0D%0Arank%0D%0A&search=black+death+NOT+fiction)
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
323718842  
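Decoded from the query URL linked above, the rank-ordered CTE being suggested looks roughly like this (shown here as the SQL string; `:search` is the named parameter Datasette prompts for):

```python3
# SQL decoded from the linked URL above, reformatted for readability.
sql = """
with fts_search as (
    select rowid, rank
    from bib_fts
    where bib_fts match :search
)
select
    bib_record_num, creation_date, record_last_updated, isbn,
    best_author, best_title, publisher, publish_year,
    bib_level_callnumber, indexed_subjects
from fts_search
    join bib on bib.rowid = fts_search.rowid
order by rank
"""
```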
https://github.com/simonw/datasette/issues/1387#issuecomment-873166836 https://api.github.com/repos/simonw/datasette/issues/1387 873166836 MDEyOklzc3VlQ29tbWVudDg3MzE2NjgzNg== 9308268 2021-07-02T17:58:23Z 2021-07-02T17:58:23Z NONE Thanks, Simon, for nailing that one down! It does seem a little confusing that the ProxyPreserveHost directive is set to Off by default, but this config totally did the trick and fixed the issue:

```
<Location /collection-analysis/>
    ProxyPass http://127.0.0.1:8010/collection-analysis/
    ProxyPreserveHost On
</Location>
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
935930820  
https://github.com/simonw/datasette/issues/283#issuecomment-780991910 https://api.github.com/repos/simonw/datasette/issues/283 780991910 MDEyOklzc3VlQ29tbWVudDc4MDk5MTkxMA== 9308268 2021-02-18T02:13:56Z 2021-02-18T02:13:56Z NONE I was going to ask you about this issue when we talk during your office-hours slot this Friday, but was there any support ever added for doing this cross-database joining? I have a use case where it could be pretty neat to do analysis using this tool on time-specific databases from snapshots https://ilsweb.cincinnatilibrary.org/collection-analysis/ ![image](https://user-images.githubusercontent.com/9308268/108294883-ba3a8e00-7164-11eb-9206-fcd5a8cdd883.png) and thanks again for such an amazing tool!
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
325958506  
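For background, the mechanism underneath cross-database joins in SQLite is ATTACH. A minimal sketch with hypothetical snapshot file and table names, illustrating the SQL involved rather than any specific Datasette feature:

```python3
import sqlite3

# Open one snapshot and attach a second; tables in the attached file are
# addressed as snap2.<table>. File and table names here are hypothetical.
conn = sqlite3.connect("snapshot_2021_01.db")
conn.execute("ATTACH DATABASE 'snapshot_2021_02.db' AS snap2")

rows = conn.execute(
    """
    select a.bib_record_num
    from bib as a
    left join snap2.bib as b on b.bib_record_num = a.bib_record_num
    where b.bib_record_num is null  -- records present in January but not February
    """
).fetchall()
```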