4 rows where type = "issue" and user = 9308268 sorted by updated_at descending

sqlite-utils #430: Document how to use `PRAGMA temp_store` to avoid errors when running VACUUM against huge databases

Opened by rayvoelker · open · 2 comments · created 2022-05-03 · updated 2022-06-14

I'm trying to figure out a way to get the table.extract() method to complete successfully. I'm not sure if the cause (and a possible solution) on Ubuntu Server 22.04 is to adjust some of the PRAGMA values within SQLite itself; on another Linux system (Pop!_OS), running this method on the same database works just fine.

Here's the bit that's causing the error, and the resulting error output:

```python
# combine these columns into 1 table "bib_properties":
#   best_title
#   bib_level_code
#   mat_type
#   material_code
#   best_author
db["circ_trans"].extract(
    ["best_title", "bib_level_code", "mat_type", "material_code", "best_author"],
    table="bib_properties",
    fk_column="bib_properties_id"
)

db["circ_trans"].extract(
    ["call_number"],
    table="call_number",
    fk_column="call_number_id",
    rename={"call_number": "value"}
)
```

```python
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
Input In [17], in <cell line: 7>()
      1 # combine these columns into 1 table "bib_properties" :
      2 #   best_title
      3 #   bib_level_code
      4 #   mat_type
      5 #   material_code
      6 #   best_author
----> 7 db["circ_trans"].extract(
      8     ["best_title", "bib_level_code", "mat_type", "material_code", "best_author"],
      9     table="bib_properties",
     10     fk_column="bib_properties_id"
     11 )
     13 db["circ_trans"].extract(
     14     ["call_number"],
     15     table="call_number",
     16     fk_column="call_number_id",
     17     rename={"call_number": "value"}
     18 )

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1764, in Table.extract(self, columns, table, fk_column, rename)
   1761     column_order.append(c.name)
   1763 # Drop the unnecessary columns and rename lookup column
-> 1764 self.transform(
   1765     drop=set(columns),
   1766     rename={magic_lookup_column: fk_column},
   1767     column_order=column_order,
   1768 )
   1770 # And add the foreign key constraint
   1771 self.add_foreign_key(fk_column, table, "id")

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1526, in Table.transform(self, types, rename, drop, pk, not_null, defaults, drop_foreign_keys, column_order)
   1524 with self.db.conn:
   1525     for sql in sqls:
-> 1526         self.db.execute(sql)
   1527 # Run the foreign_key_check before we commit
   1528 if pragma_foreign_keys_was_on:

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:465, in Database.execute(self, sql, parameters)
    463     return self.conn.execute(sql, parameters)
    464 else:
--> 465     return self.conn.execute(sql)

OperationalError: database or disk is full
```

This database is about 17 GB in total, so I'm assuming the error is coming from the VACUUM, where it's presumably trying to write its temporary storage to a location that doesn't have sufficient room. Disk space is more than ample on the host in question (1.8 TB free in the directory where the SQLite database resides), but the /tmp directory lives on a smaller disk associated with the OS.

I'm trying to work out whether it's PRAGMA temp_store or perhaps temp_store_directory that I'm after, so that the temp files use the same local directory where the database file is located (maybe this is a property of the version of SQLite on the system?)

```python
# SET the temp file store to be a file ...
print(db.execute('PRAGMA temp_store').fetchall())
print(db.execute('PRAGMA temp_store=FILE').fetchall())
print(db.execute('PRAGMA temp_store').fetchall())

# the user's home directory ...
print(db.execute("PRAGMA temp_store_directory='/home/plchuser/'").fetchall())
print(db.execute("PRAGMA sqlite3_temp_directory='/home/plchuser/'").fetchall())
print(db.execute("PRAGMA temp_store_directory").fetchall())
print(db.execute("PRAGMA sqlite3_temp_directory").fetchall())
```

```text
[(1,)]
[]
[(1,)]
[]
[]
[('/home/plchuser/',)]
[]
```

Here are the docs on Temporary File Storage Locations: https://www.sqlite.org/tempfiles.html
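As a sketch of the workaround I have in mind (the directory path and the reliance on SQLITE_TMPDIR are my assumptions, not confirmed sqlite-utils behavior): on Unix, SQLite consults the SQLITE_TMPDIR environment variable when it creates its temporary files, so setting it before opening the connection should redirect VACUUM's scratch space to the large disk, and `PRAGMA temp_store = FILE` keeps temp tables on disk rather than in memory:

```python
import os
import sqlite3
import tempfile

# Assumed workaround: SQLITE_TMPDIR (honored by SQLite on Unix) must be set
# before the library creates its first temp file, so set it before connecting.
# tempfile.mkdtemp() here is a stand-in for a directory on the 1.8 TB disk.
os.environ["SQLITE_TMPDIR"] = tempfile.mkdtemp()

conn = sqlite3.connect(":memory:")
# temp_store = FILE (1) forces temporary tables and indices onto disk
conn.execute("PRAGMA temp_store = FILE")
print(conn.execute("PRAGMA temp_store").fetchone()[0])  # → 1
```

The same environment variable would need to be exported before launching Jupyter, since SQLite reads it only once per process.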

datasette #1671: Filters fail to work correctly against calculated numeric columns returned by SQL views because type affinity rules do not apply

Opened by rayvoelker · open · 8 comments · created 2022-03-20 · updated 2022-03-22

I found some strange behavior, and I'm not sure if it's related to views and boolean values, or if something else weird is going on here, but I'll provide an example that may help show what I'm seeing.

```bash
#!/bin/bash
echo "\"id\",\"expiration_date\"
0,2018-01-04
1,2019-01-05
2,2020-01-06
3,2021-01-07
4,2022-01-08
5,2023-01-09
6,2024-01-10
7,2025-01-11
8,2026-01-12
9,2027-01-13
" > test.csv

csvs-to-sqlite test.csv test.db

sqlite-utils create-view --replace test.db test_view \
  "select id, expiration_date, case when julianday('NOW') >= julianday(expiration_date) then 1 else 0 end as has_expired FROM test"
```

```bash
datasette test.db
```

Thanks again and let me know if you want me to provide anything else!
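The behavior can be reproduced with just the sqlite3 standard library (a minimal sketch; the table and dates are simplified from the example above): the computed `has_expired` column in the view has no declared type, so no type affinity applies, and the text value `'1'` that arrives in an HTTP query string never compares equal to the integer `1`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, expiration_date TEXT)")
conn.executemany(
    "INSERT INTO test VALUES (?, ?)",
    [(0, "2018-01-04"), (1, "2019-01-05")],  # both dates already expired
)
conn.execute(
    """CREATE VIEW test_view AS
       SELECT id, expiration_date,
              CASE WHEN julianday('NOW') >= julianday(expiration_date)
                   THEN 1 ELSE 0 END AS has_expired
       FROM test"""
)
# A computed view column has no type affinity, so SQLite never coerces the
# text '1' (as a filter value arrives from a query string) to the integer 1:
print(conn.execute(
    "SELECT count(*) FROM test_view WHERE has_expired = '1'").fetchone()[0])  # → 0
print(conn.execute(
    "SELECT count(*) FROM test_view WHERE has_expired = 1").fetchone()[0])    # → 2
```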

datasette #1304: Document how to send multiple values for "Named parameters"

Opened by rayvoelker · open · 4 comments · created 2021-04-21 · updated 2021-12-08

https://docs.datasette.io/en/stable/sql_queries.html#named-parameters

I thought I had seen an example of how to do something like the example below, but I can't seem to find it.

```sql
select * from bib where bib.bib_record_num in (1008088, 1008092)
```

```sql
select * from bib where bib.bib_record_num in (:bib_record_numbers)
```

https://ilsweb.cincinnatilibrary.org/collection-analysis/current_collection-204d100?sql=select%0D%0A++*%0D%0Afrom%0D%0A++bib%0D%0Awhere%0D%0A++bib.bib_record_num+in+%28%3Abib_record_numbers%29&bib_record_numbers=1008088%2C1008092

Or, maybe this isn't a fully supported feature.
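One workaround for passing multiple values through a single named parameter (not, as far as I know, a documented Datasette feature) is to split the comma-separated string inside the SQL itself with `json_each()`. A minimal sketch using the sqlite3 standard library, with a hypothetical `bib` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bib (bib_record_num INTEGER)")
conn.executemany("INSERT INTO bib VALUES (?)",
                 [(1008088,), (1008090,), (1008092,)])

# The single :bib_record_numbers parameter arrives as one string;
# wrapping it in brackets turns it into a JSON array that json_each()
# can expand into one row per value.
param = "1008088,1008092"
sql = """
SELECT bib_record_num FROM bib
WHERE bib_record_num IN (
    SELECT value FROM json_each('[' || :bib_record_numbers || ']')
)
"""
rows = conn.execute(sql, {"bib_record_numbers": param}).fetchall()
print(rows)  # → [(1008088,), (1008092,)]
```

This relies on the values being numeric; string values would need quoting before the parameter reaches the query.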

datasette #1253: Capture "Ctrl + Enter" or "⌘ + Enter" to send SQL query?

Opened by rayvoelker · open · 1 comment · created 2021-03-09 · updated 2021-10-30

It appears that "Shift + Enter" triggers the form submit action to send the SQL, but could that action also be bound to "Ctrl + Enter" or "⌘ + Enter"?

I feel like that pattern already exists in a number of similar tools and could improve usability of the editor.


The [issues] table schema:
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
Powered by Datasette · About: github-to-sqlite