id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason 1237586379,I_kwDOBm6k_c5JxBHL,1742,?_trace=1 fails with datasette-geojson for some reason,9599,open,0,,,4,2022-05-16T19:06:05Z,2022-05-16T19:42:13Z,,OWNER,,view-source:https://calands.datasettes.com/calands/CPAD_2020a_SuperUnits.geojson?_sort=id&id__exact=4&_labels=on&_trace=1 is showing me a blank page.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1742/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 795367402,MDU6SXNzdWU3OTUzNjc0MDI=,1209,v0.54 500 error from sql query in custom template; code worked in v0.53; found a workaround,11788561,open,0,,,1,2021-01-27T19:08:13Z,2021-01-28T23:00:27Z,,NONE,,"v0.54 500 error in sql query template; code worked in v0.53; found a workaround **schema:** CREATE TABLE ""talks"" (""talk"" TEXT,""series"" INTEGER, ""talkdate"" TEXT) CREATE TABLE ""series"" (""id"" INTEGER PRIMARY KEY, ""series"" TEXT, talks_list TEXT default '', website TEXT default ''); **Live example of correctly rendered template in v.053:** https://cosmotalks-cy6xkkbezq-uw.a.run.app/cosmotalks/talks/1 **Description of problem:** I needed 'sql select' code in a custom row-mydatabase-mytable.html template to lookup the series name for a foreign key integer value in the talks table. So `metadata.json` specifies the `datasette-template-sql` plugin. The code below worked perfectly in v0.53 (just the relevant sql statement part is shown; full code is [here](https://github.com/jrdmb/cosmotalks-datasette/blob/main/templates/row-cosmotalks-talks.html)): ``` {# custom addition #} {% for row in display_rows %} ... {% set sname = sql(""select series from series where id = ?"", [row.series]) %} Series name: {{ sname[0].series }} ... {% endfor %} {# End of custom addition #} ``` **In v0.54, that code resulted in a 500 error with a 'no such table series' message.** A second query in that template also did not work but the above is fully illustrative of the problem. All templates were up-to-date along with datasette v0.54. **Workaround:** After fiddling around with trying different things, what worked was the syntax from [Querying a different database from the datasette-template-sql github repo](https://github.com/simonw/datasette-template-sql#querying-a-different-database) to add the database name to the sql statement: `{% set sname = sql(""select series from series where id = ?"", [row.series], database=""cosmotalks"") %}` Though this was found to work, it should not be necessary to add `database=""cosmotalks""` since per the `datasette-template-sql` README, it's only needed when querying a different database, but here it's a table within the same database. ",107914493,issue,,,,, 793907673,MDExOlB1bGxSZXF1ZXN0NTYxNTEyNTAz,15,added try / except to write_records ,9857779,open,0,,,0,2021-01-26T03:56:21Z,2021-01-26T03:56:21Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/healthkit-to-sqlite/pulls/15,"to keep the data write from failing if it came across an error during processing. 
In particular when trying to convert my HealthKit zip file (and that of my wife's) it would consistently error out with the following: ``` db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: too many SQL variables --------------------------------------------------------------------------------------------------------------------------------------------------------------------- db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: too many SQL variables --------------------------------------------------------------------------------------------------------------------------------------------------------------------- db.py 1709 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered --------------------------------------------------------------------------------------------------------------------------------------------------------------------- healthkit-to-sqlite 8 sys.exit(cli()) core.py 829 __call__ return self.main(*args, **kwargs) core.py 782 main rv = self.invoke(ctx) core.py 1066 invoke return ctx.invoke(self.callback, **ctx.params) core.py 610 invoke return callback(*args, **kwargs) cli.py 57 cli convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf) utils.py 42 convert_xml_to_sqlite write_records(records, db) utils.py 143 write_records db[table].insert_all( db.py 1899 insert_all self.insert_chunk( db.py 1720 insert_chunk self.insert_chunk( db.py 1720 insert_chunk self.insert_chunk( db.py 1714 insert_chunk result = self.db.execute(query, params) db.py 226 execute return self.conn.execute(sql, parameters) sqlite3.OperationalError: table rBodyMass has no column named metadata_HKWasUserEntered ``` Adding the try / except in the `write_records` seems to fix that issue. ",197882382,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/15/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1613974869,PR_kwDOBm6k_c5LgPS-,2034,remove an unused `app` var in cli.py,4370201,open,0,,,2,2023-03-07T18:19:05Z,2023-03-29T20:56:20Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2034,"this var `app` isn't actually used? unless init it does some side-effect outside of the event loop, idon't think it's necessary. Feel free to ignore this PR if the deleted line actually does something. ---- :books: Documentation preview :books:: https://datasette--2034.org.readthedocs.build/en/2034/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2034/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 920636216,MDU6SXNzdWU5MjA2MzYyMTY=,64,"feature: support ""events""",231498,open,0,,,5,2021-06-14T17:42:49Z,2021-06-15T00:48:37Z,,NONE,,"the GitHub API provides the ability to fetch all events for a given user, organization, or repository: https://docs.github.com/en/rest/reference/activity#list-events-for-the-authenticated-user this would allow users to export all of the issue comments, new issues, etc. that they created. 
something which is currently missing from the GitHub takeout exports.",207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1079111498,I_kwDOBm6k_c5AUe9K,1553,if csv export is truncated in non streaming mode set informative response header,536941,open,0,,,3,2021-12-13T22:50:44Z,2021-12-16T19:17:28Z,,CONTRIBUTOR,,"streaming mode is currently not enabled for custom queries, so the queries will be truncated to max row limit. it would be great if a response is truncated that an header signalling that was set in the header. i need to write some pagination code for getting full results back for a custom query and it would make the code much better if i could reliably known when there is nothing more to limit/offset ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1553/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1864112887,PR_kwDOBm6k_c5Yo7bk,2151,Test Datasette on multiple SQLite versions,15178711,open,0,,,1,2023-08-23T22:42:51Z,2023-08-23T22:58:13Z,,CONTRIBUTOR,simonw/datasette/pulls/2151,"still testing, hope it works! ---- :books: Documentation preview :books:: https://datasette--2151.org.readthedocs.build/en/2151/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2151/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1, 1524431805,I_kwDODEm0Qs5a3Pu9,72,"Import thread, including self- and others' replies",601708,open,0,,,0,2023-01-08T09:51:06Z,2023-01-08T09:51:06Z,,NONE,,"statuses-lookup, home-timeline, mentions (only for auth'ed user) don't cover this. `twitter-to-sqlite fetch-thread tw-group1.db 1234123412341234` twitter-to-sqlite focuses on archiving users, but does not easily support archiving conversations or community activity. For reference, this is [implemented in twarc](https://sourcegraph.com/github.com/DocNow/twarc/-/blob/twarc/client.py?L708-766&subtree=true), using a search, optionally recursively. Other research suggests that this formerly, or currently, requires a [search query](https://stackoverflow.com/a/30480103/1020467), use of [undocumented `related_results` api](https://stackoverflow.com/a/9419346/1020467), or with requested inclusion of [newer conversation_id](https://stackoverflow.com/a/68115718/1020467) with subsequent query. ",206156866,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/72/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 782708469,MDU6SXNzdWU3ODI3MDg0Njk=,1183,"Take advantage of sqlite-utils cached table counts, if available",9599,open,0,,,2,2021-01-09T23:51:48Z,2021-01-12T02:42:08Z,,OWNER,,"sqlite-utils 3.2 now has a mechanism for creating a `_counts` table with triggers to maintain counts for individual tables. Datasette could look for this table and use it for counts, dramatically speeding up places that currently suffer from slow counts. Refs #859. 
https://sqlite-utils.datasette.io/en/stable/python-api.html#cached-table-counts-using-triggers",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1183/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1595340692,I_kwDOCGYnMM5fFveU,530,"add ability to configure ""on delete"" and ""on update"" attributes of foreign keys:",536941,open,0,,,2,2023-02-22T15:44:14Z,2023-05-08T20:39:01Z,,CONTRIBUTOR,,"sqlite supports these, and it would be quite nice to be able to add them with sqlite-utils. https://www.sqlite.org/foreignkeys.html#fk_actions",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/530/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1555701851,PR_kwDOBm6k_c5IdsD7,2003,Show referring tables and rows when the referring foreign key is compound,536941,open,0,,,3,2023-01-24T21:31:31Z,2023-01-25T18:44:42Z,,CONTRIBUTOR,simonw/datasette/pulls/2003,"sqlite foreign keys can be compound, but that is not as well supported by datasette as single column foreign keys. in particular, 1. in a table view, there is not a link from the row to the referenced row if the foreign key is compound 2. in a row view, there is no listing of tables and rows that refer to the focal row if those referencing foreign keys are compound. Both of these issues are discussed in #1099. This PR only fixes the second one, because it's not clear what the right UX is for the first issue. ![Screenshot 2023-01-24 at 19-47-40 nlrb bargaining_unit](https://user-images.githubusercontent.com/536941/214454749-d53deead-4151-4329-a5d4-8a7a454de7d3.png) Some things that might not be desirable about this approach. 1. it changes the external API, by changing `column` => `columns` and `other_column` => `other_columns` (see inline comment for more discussion. 2. There are various places where the plural foreign keys have to be checked for length and discarded or transformed to singular. ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2003/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1708981860,PR_kwDOBm6k_c5QdMea,2074,sort files by mtime,3919561,open,0,,,0,2023-05-14T15:25:15Z,2023-05-14T15:25:29Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2074,"serving multiple database files and getting tired by the default sort, changes so the sort order puts the latest changed databases to be on top of the list so don't have to scroll down, lazy as i am ;) ---- :books: Documentation preview :books:: https://datasette--2074.org.readthedocs.build/en/2074/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2074/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1090810196,I_kwDOBm6k_c5BBHFU,1583,consider adding deletion step of cloudbuild artifacts to gcloud publish,536941,open,0,,,1,2021-12-30T00:33:23Z,2021-12-30T00:34:16Z,,CONTRIBUTOR,,"right now, as part of the the publish process images and other artifacts are stored to gcloud's cloud storage before being deployed to cloudrun. after successfully deploying, it would be nice if the the script deleted these artifacts. 
otherwise, if you have regularly scheduled build process, you can end up paying to store lots of out of date artifacts.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1583/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 697162939,MDU6SXNzdWU2OTcxNjI5Mzk=,20,Add more tags so people can find your project.,7902810,open,0,,,0,2020-09-09T21:14:09Z,2020-09-09T21:14:09Z,,NONE,,"quantified-self habit-tracking google-fit time-tracking wearables quantifiedself for example",197431109,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-beta/issues/20/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 1, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1310243385,I_kwDOCGYnMM5OGLo5,456,feature request: pivot command,536941,open,0,,,5,2022-07-20T00:58:08Z,2022-07-20T17:50:50Z,,CONTRIBUTOR,,pivoting long-format table to wide-format tables is pretty common and kind of pain. would love to see this feature in sqlite-utils!,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/456/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 546073980,MDU6SXNzdWU1NDYwNzM5ODA=,74,Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column,15092,open,0,,,3,2020-01-07T04:35:50Z,2020-01-12T07:21:17Z,,CONTRIBUTOR,,"openSUSE 15.1 is using python 3.6.5 and click-7.0 , however it has test failures while openSUSE Tumbleweed on py37 passes. Most fail on the cli exit code like ```py [ 74s] =================================== FAILURES =================================== [ 74s] _________________________________ test_tables __________________________________ [ 74s] [ 74s] db_path = '/tmp/pytest-of-abuild/pytest-0/test_tables0/test.db' [ 74s] [ 74s] def test_tables(db_path): [ 74s] result = CliRunner().invoke(cli.cli, [""tables"", db_path]) [ 74s] > assert '[{""table"": ""Gosh""},\n {""table"": ""Gosh2""}]' == result.output.strip() [ 74s] E assert '[{""table"": ""...e"": ""Gosh2""}]' == '' [ 74s] E - [{""table"": ""Gosh""}, [ 74s] E - {""table"": ""Gosh2""}] [ 74s] [ 74s] tests/test_cli.py:28: AssertionError ``` packaging project at https://build.opensuse.org/package/show/home:jayvdb:py-new/python-sqlite-utils I'll keep digging into this after I have github-to-sqlite working on Tumbleweed, as I'll need openSUSE Leap 15.1 working before I can submit this into the main python repo.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/74/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1557599877,I_kwDODFE5qs5c1xaF,12,location history changes,14809320,open,0,,,0,2023-01-26T03:57:25Z,2023-01-26T03:57:25Z,,NONE,,"not sure if each download is unique, but I had to change some things to work with the takeout zip I made 2023-01-25 filename changed from ""Location History.json"" to ""Records.json"" `""timestampMs""` is not present, `""timestamp""` is roughly iso timestamp ```py def get_timestamp_ms(raw_timestamp): try: return datetime.datetime.strptime(raw_timestamp, ""%Y-%m-%dT%H:%M:%SZ"").timestamp() except ValueError: return datetime.datetime.strptime(raw_timestamp, ""%Y-%m-%dT%H:%M:%S.%fZ"").timestamp() def save_location_history(db, zf): location_history = json.load( 
zf.open(""Takeout/Location History/Records.json"") ) db[""location_history""].upsert_all( ( { ""id"": id_for_location_history(row), ""latitude"": row[""latitudeE7""] / 1e7, ""longitude"": row[""longitudeE7""] / 1e7, ""accuracy"": row[""accuracy""], ""timestampMs"": get_timestamp_ms(row[""timestamp""]), ""when"": row[""timestamp""], } for row in location_history[""locations""] ), pk=""id"", ) def id_for_location_history(row): # We want an ID that is unique but can be sorted by in # date order - so we use the isoformat date + the first # 6 characters of a hash of the JSON first_six = hashlib.sha1( json.dumps(row, separators=("","", "":""), sort_keys=True).encode(""utf8"") ).hexdigest()[:6] return ""{}-{}"".format( row['timestamp'], first_six, ) ``` example locations from mine ```json { ""latitudeE7"": 427220206, ""longitudeE7"": -923423972, ""accuracy"": 10, ""deviceTag"": -1312429967, ""deviceDesignation"": ""PRIMARY"", ""timestamp"": ""2019-01-08T23:31:50.867Z"" } ``` ```json { ""latitudeE7"": 427011317, ""longitudeE7"": -923448300, ""accuracy"": 5, ""deviceTag"": -1312429967, ""deviceDesignation"": ""PRIMARY"", ""timestamp"": ""2019-01-08T23:33:53Z"" }, ```",206649770,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/12/reactions"", ""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 2}",, 769376447,MDU6SXNzdWU3NjkzNzY0NDc=,2,killed by oomkiller on large location-history,231498,open,0,,,2,2020-12-17T00:32:24Z,2020-12-17T00:48:32Z,,NONE,,"memory seems to grow unbounded and is oom-killed after about 20GB memory usage. this is happening while loading a ~1GB uncompressed location history.",206649770,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/2/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 951185411,MDU6SXNzdWU5NTExODU0MTE=,1402,feature request: social meta tags,536941,open,0,,,2,2021-07-23T01:57:23Z,2021-07-26T19:31:41Z,,CONTRIBUTOR,,"it would be very nice if the twitter, slack, and other social media could make rich cards when people post a link to a datasette instance ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1402/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1400374908,I_kwDOBm6k_c5TeAZ8,1836,docker image is duplicating db files somehow,536941,open,0,,,13,2022-10-06T22:35:54Z,2022-10-08T16:56:51Z,,CONTRIBUTOR,,"if you look into the docker image created by docker publish, the `datasette inspect` line is duplicating the db files. here's the result of the inspect command: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1836/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1067775061,I_kwDOBm6k_c4_pPRV,1539,Research PRAGMA query_only,9599,open,0,,,0,2021-11-30T23:30:24Z,2021-11-30T23:30:24Z,,OWNER,,"https://www.sqlite.org/pragma.html#pragma_query_only > The query_only pragma prevents data changes on database files when enabled. When this pragma is enabled, any attempt to CREATE, DELETE, DROP, INSERT, or UPDATE will result in an [SQLITE_READONLY](https://www.sqlite.org/rescode.html#readonly) error. However, the database is not truly read-only. 
You can still run a [checkpoint](https://www.sqlite.org/wal.html#ckpt) or a [COMMIT](https://www.sqlite.org/lang_transaction.html) and the return value of the [sqlite3_db_readonly()](https://www.sqlite.org/c3ref/db_readonly.html) routine is not affected. Would it be worth adding this as an extra protection against accidental writes to a DB file over a read-only connection?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1539/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1215216249,I_kwDOCGYnMM5Ibrp5,428,Research adding support for savepoints,9599,open,0,,,1,2022-04-26T01:04:01Z,2022-04-26T01:05:29Z,,OWNER,,"https://www.sqlite.org/lang_savepoint.html Savepoints are like regular transactions except they have names and can be nested. Would there be any value in adding support to them to `sqlite-utils`, potentially as some kind of context manager? Something like this: ```python with db.savepoint(""name""): # do stuff with db.savepoint(""name2""): # do more stuff raise Release # Rolls back to before ""name2"" savepoint ``` I've never used this feature so I'm not comfortable adding anything like this without a bunch of extra research.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/428/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 830567275,MDU6SXNzdWU4MzA1NjcyNzU=,1259,Research using CTEs for faster facet counts,9599,open,0,,,5,2021-03-12T22:19:49Z,2021-03-21T22:55:31Z,,OWNER,,"https://www.sqlite.org/changes.html#version_3_35_0 > Add support for the [MATERIALIZED](https://www.sqlite.org/lang_with.html#mathint) and [NOT MATERIALIZED](https://www.sqlite.org/lang_with.html#mathint) hints when specifying [common table expressions](https://www.sqlite.org/lang_with.html). The default behavior was formerly NOT MATERIALIZED, but is now changed to MATERIALIZED for CTEs that are used more than once. If a CTE creates a table that is used multiple time in that query, SQLite will now default to creating a materialized table for the duration of that query. This could be a big performance boost when applying faceting multiple times against the same query. 
Consider this example query: ```sql WITH data as ( select * from [global-power-plants] ), country_long as (select 'country_long' as col, country_long as value, count(*) as c from data group by country_long order by c desc limit 10 ), primary_fuel as ( select 'primary_fuel' as col, primary_fuel as value, count(*) as c from data group by primary_fuel order by c desc limit 10 ) select * from primary_fuel union select * from country_long order by col, c desc ``` https://global-power-plants.datasettes.com/global-power-plants?sql=WITH+data+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from%0D%0A++++%5Bglobal-power-plants%5D%0D%0A%29%2C%0D%0Acountry_long+as+%28select+%0D%0A++%27country_long%27+as+col%2C+country_long+as+value%2C+count%28*%29+as+c+from+data+group+by+country_long%0D%0A++order+by+c+desc+limit+10%0D%0A%29%2C%0D%0Aprimary_fuel+as+%28%0D%0Aselect%0D%0A++%27primary_fuel%27+as+col%2C+primary_fuel+as+value%2C+count%28*%29+as+c+from+data+group+by+primary_fuel%0D%0A++order+by+c+desc+limit+10%0D%0A%29%0D%0Aselect+*+from+primary_fuel+union+select+*+from+country_long+order+by+col%2C+c+desc Outputs: col | value | c -- | -- | -- country_long | United States of America | 8688 country_long | China | 4235 country_long | United Kingdom | 2603 country_long | Brazil | 2360 country_long | France | 2155 country_long | India | 1590 country_long | Germany | 1309 country_long | Canada | 1159 country_long | Spain | 829 country_long | Russia | 545 primary_fuel | Solar | 9662 primary_fuel | Hydro | 7155 primary_fuel | Wind | 5188 primary_fuel | Gas | 3922 primary_fuel | Coal | 2390 primary_fuel | Oil | 2290 primary_fuel | Biomass | 1396 primary_fuel | Waste | 1087 primary_fuel | Nuclear | 198 primary_fuel | Geothermal | 189 ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1259/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1203943272,I_kwDOBm6k_c5Hwrdo,1713,Datasette feature for publishing snapshots of query results,9599,open,0,,,5,2022-04-14T01:42:00Z,2022-07-04T05:16:35Z,,OWNER,,"https://twitter.com/simonw/status/1514392335718645760 > Maybe [@datasetteproj](https://twitter.com/datasetteproj) should grow a feature that lets you cache the results of a query and give that snapshot a stable permalink > > A plugin that publishes the JSON output of a query to an S3 bucket would be pretty neat... especially if it could also be configured to re-publish the results on a schedule A lot of people said they would find this useful. Probably going to build this as a plugin.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1713/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1100015398,I_kwDOBm6k_c5BkOcm,1591,Maybe let plugins define custom serve options?,9599,open,0,,,7,2022-01-12T08:18:47Z,2022-01-15T11:56:59Z,,OWNER,,"https://twitter.com/psychemedia/status/1481171650934714370 > can extensions be passed their own cli args? eg `--ext-tiddlywiki-dbname tiddlywiki2.sqlite` ? 
I've thought something like this might be useful for other plugins in the past, too.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1591/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 320132682,MDU6SXNzdWUzMjAxMzI2ODI=,250,Setup some issue templates,9599,open,0,,,0,2018-05-04T01:49:07Z,2018-05-04T01:49:07Z,,OWNER,,"https://twitter.com/left_pad/status/99216385740464537 I like the idea of using these to help people understand some of the ways I want to use issues.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/250/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 792652391,MDU6SXNzdWU3OTI2NTIzOTE=,1199,Experiment with PRAGMA mmap_size=N,9599,open,0,,,2,2021-01-23T21:24:09Z,2021-07-17T17:39:17Z,,OWNER,,"https://sqlite.org/mmap.html - SQLite supports memory-mapped I/O but it's disabled by default. The `PRAGMA mmap_size=N` option can be used to enable it. It would be very interesting to understand the impact this could have on Datasette performance for various different shapes of data.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1199/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 581795570,MDU6SXNzdWU1ODE3OTU1NzA=,93,Support more string values for types in .add_column(),9599,open,0,,,0,2020-03-15T19:32:49Z,2020-09-24T20:36:46Z,,OWNER,,"https://sqlite-utils.readthedocs.io/en/2.4.2/python-api.html#adding-columns says: > SQLite types you can specify are ""TEXT"", ""INTEGER"", ""FLOAT"" or ""BLOB"". As discovered in #92 this isn't the right list of values. I should expand this to match https://www.sqlite.org/datatype3.html",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/93/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1353074021,I_kwDOCGYnMM5QpkVl,474,Add an option for specifying column names when inserting CSV data,14294,open,0,,,3,2022-08-27T15:29:59Z,2022-08-31T03:42:36Z,,NONE,,"https://sqlite-utils.datasette.io/en/stable/cli.html#csv-files-without-a-header-row > The first row of any CSV or TSV file is expected to contain the names of the columns in that file. > If your file does not include this row, you can use the `--no-headers` option to specify that the tool should not use that fist row as headers. > If you do this, the table will be created with column names called `untitled_1` and `untitled_2` and so on. You can then rename them using the `sqlite-utils transform ... --rename` command. It would be nice to be able to specify the column names when importing CSV/TSV without a header row, via an extra command line option. 
(renaming a column of a large table can take a long time, which makes it an inconvenient workaround)",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/474/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1345561209,I_kwDOBm6k_c5QM6J5,1790,A better HTML title for canned query pages,9599,open,0,,,0,2022-08-21T18:27:46Z,2022-08-21T18:27:46Z,,OWNER,,"https://scotrail.datasette.io/scotrail/assemble_sentence?terms=This+train+is+formed+of%2Cbomb+which Current title is: `scotrail: with phrases as ( select key, value from json_each('["' || replace(:terms, ',', '","') || '"]')),matches as (select phrases.key, phrases.value, ( select File from announcements where announcements.Transcription like '%' || trim(phrases.value) || '%' order by length(announcements.Transcription) limit 1 ) as Filefrom phrases),results as ( select key, announcements.Transcription, announcements.mp3 from announcements join matches on announcements.File = matches.File order by key)select 'Combined sentence:' as mp3, group_concat(Transcription, ' ') as Transcription, -1 as keyfrom results unionselect mp3, Transcription, keyfrom resultsorder by key` I think a better title would be: `scotrail: assemble_sentence, terms = This train is formed of,bomb which`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1790/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 776101101,MDU6SXNzdWU3NzYxMDExMDE=,1161,Update a whole bunch of links to datasette.io instead of datasette.readthedocs.io,9599,open,0,,,1,2020-12-29T21:47:31Z,2020-12-29T21:49:57Z,,OWNER,,https://ripgrep.datasette.io/-/ripgrep?pattern=%28datasette%7Csqlite-utils%29%5C.readthedocs%5C.io,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1161/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 530468212,MDU6SXNzdWU1MzA0NjgyMTI=,643,Set up some basic benchmarks as part of the unit tests,9599,open,0,,,0,2019-11-29T19:24:19Z,2019-11-29T19:24:19Z,,OWNER,,"https://pypi.org/project/pytest-benchmark/ looks great for this. Here's how to run it as a github action: https://github.com/rhysd/github-action-benchmark/blob/master/examples/pytest/README.md",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/643/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 615626118,MDU6SXNzdWU2MTU2MjYxMTg=,22,Try out ExifReader,9599,open,0,,,4,2020-05-11T06:32:13Z,2020-05-14T05:59:53Z,,MEMBER,,"https://pypi.org/project/ExifReader/ New fork that should be able to handle EXIF in HEIC files. 
Forked here: https://github.com/ianare/exif-py/issues/102#issuecomment-626376522 Refs #3 ",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 777140799,MDU6SXNzdWU3NzcxNDA3OTk=,1166,Adopt Prettier for JavaScript code formatting,9599,open,0,,,10,2020-12-31T21:25:27Z,2022-01-13T22:22:18Z,,OWNER,,https://prettier.io/ - I'm going to go with 2 spaces.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1166/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 792890765,MDU6SXNzdWU3OTI4OTA3NjU=,1200,?_size=10 option for the arbitrary query page would be useful,9599,open,0,,,2,2021-01-24T20:55:35Z,2021-02-11T03:13:59Z,,OWNER,,"https://latest.datasette.io/fixtures?sql=select+*+from+compound_three_primary_keys&_size=10 - `_size=10` does not do anything at the moment. It would be useful if it did. Would also be good if it persisted in a hidden form field.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1200/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1340900019,I_kwDOBm6k_c5P7IKz,1785,Can't use cog menu to facet by first column in a view,9599,open,0,,,0,2022-08-16T21:27:23Z,2022-08-16T21:27:23Z,,OWNER,,"https://latest.datasette.io/fixtures/paginated_view Compare with: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1785/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1649791661,I_kwDOBm6k_c5iVdKt,2050,Row page JSON should use new ?_extra= format,9599,open,0,,8755003,1,2023-03-31T17:56:53Z,2023-03-31T17:59:49Z,,OWNER,,"https://latest.datasette.io/fixtures/facetable/2.json Related: - #2049 - #1709 ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2050/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1174717287,I_kwDOBm6k_c5GBMNn,1674,Tweak design of /.json,9599,open,0,,3268330,1,2022-03-20T22:58:01Z,2022-03-20T22:58:40Z,,OWNER,,"https://latest.datasette.io/.json Currently: ```json { ""_memory"": { ""name"": ""_memory"", ""hash"": null, ""color"": ""a6c7b9"", ""path"": ""/_memory"", ""tables_and_views_truncated"": [], ""tables_and_views_more"": false, ""tables_count"": 0, ""table_rows_sum"": 0, ""show_table_row_counts"": false, ""hidden_table_rows_sum"": 0, ""hidden_tables_count"": 0, ""views_count"": 0, ""private"": false }, ""fixtures"": { ""name"": ""fixtures"", ""hash"": ""645005884646eb941c89997fbd1c0dd6be517cb1b493df9816ae497c0c5afbaa"", ""color"": ""645005"", ""path"": ""/fixtures"", ""tables_and_views_truncated"": [ { ""name"": ""compound_three_primary_keys"", ""columns"": [ ""pk1"", ""pk2"", ""pk3"", ""content"" ], ""primary_keys"": [ ""pk1"", ""pk2"", ""pk3"" ], ""count"": 1001, ""hidden"": false, ""fts_table"": null, ""num_relationships_for_sorting"": 0, ""private"": false }, ``` As-of this issue the `""path""` key is confusing, it doesn't match what https://latest.datasette.io/-/databases returns: - #1668",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1674/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1479914599,I_kwDOCGYnMM5YNbRn,516,Feature request: output number of ignored/replaced rows for insert command,9599,open,0,,,4,2022-12-06T18:59:21Z,2022-12-06T19:08:14Z,,OWNER,,"https://hachyderm.io/@briandorsey/109468185742876820 > I'm fiddling with piping json to `insert -ignore` I'd love to see the count of records inserted & ignored, but didn't see a way to do that in the help/docs. > > Example: `xh ""https://hachyderm.io/api/v1/timelines/tag/rust?max_id=109443380308326328"" | sqlite-utils insert aoc.db aoc - --pk=id --ignore`",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/516/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1405196044,PR_kwDOCGYnMM5AmYzy,499,feat: recreate fts triggers after table transform,7908073,open,0,,,2,2022-10-11T20:35:39Z,2022-10-26T17:54:51Z,,CONTRIBUTOR,simonw/sqlite-utils/pulls/499,"https://github.com/simonw/sqlite-utils/pull/498 ---- :books: Documentation preview :books:: https://sqlite-utils--499.org.readthedocs.build/en/499/ alternatively, `self.disable_fts()`",140912432,pull,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/499/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0, 1977155641,I_kwDOCGYnMM512QA5,601,Move plugin directory into documentation,9599,open,0,,,0,2023-11-04T04:07:52Z,2023-11-04T04:07:52Z,,OWNER,,"https://github.com/simonw/sqlite-utils-plugins should be in the official documentation. 
I can use the same pattern as https://llm.datasette.io/en/stable/plugins/directory.html https://til.simonwillison.net/readthedocs/stable-docs",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/601/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 455852801,MDU6SXNzdWU0NTU4NTI4MDE=,507,Every datasette plugin on the ecosystem page should have a screenshot,9599,open,0,,,4,2019-06-13T17:02:51Z,2020-09-17T02:47:35Z,,OWNER,,https://github.com/simonw/datasette/blob/master/docs/ecosystem.rst,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/507/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1554032168,I_kwDOBm6k_c5coKYo,2002,Document how actors are displayed,9599,open,0,,,0,2023-01-24T00:08:49Z,2023-01-24T00:08:49Z,,OWNER,,"https://github.com/simonw/datasette/blob/e4ebef082de90db4e1b8527abc0d582b7ae0bc9d/datasette/utils/__init__.py#L1052-L1056 This logic should be reflected in the documentation on https://docs.datasette.io/en/stable/authentication.html#actors",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2002/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 728905098,MDU6SXNzdWU3Mjg5MDUwOTg=,1048,Documentation and unit tests for urls.row() urls.row_blob() methods,9599,open,0,,,7,2020-10-25T00:13:53Z,2022-07-10T16:23:57Z,,OWNER,,https://github.com/simonw/datasette/blob/5db7ae3ce165ded57c7fb1cfbdb3258b1cf06c10/datasette/app.py#L1307-L1313,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1048/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1978023780,I_kwDOBm6k_c515j9k,2205,request.post_vars() method obliterates form keys with multiple values,9599,open,0,,8755003,3,2023-11-05T23:25:08Z,2023-11-06T04:10:34Z,,OWNER,,"https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139 In GET requests you can do `?foo=1&foo=2` - you can do the same in POST requests, but the `dict()` call here eliminates those duplicates. 
You can't even try calling `post_body()` and implement your own custom parsing because of: - #2204",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2205/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 626593402,MDU6SXNzdWU2MjY1OTM0MDI=,780,Internals documentation for datasette.metadata() method,9599,open,0,,3268330,2,2020-05-28T15:14:22Z,2022-03-15T20:50:34Z,,OWNER,,https://github.com/simonw/datasette/blob/40885ef24e32d91502b6b8bbad1c7376f50f2830/datasette/app.py#L297-L328,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/780/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1469062686,I_kwDOBm6k_c5XkB4e,1919,Intermittent `test_delete_row` test failure ,9599,open,0,,,1,2022-11-30T05:18:46Z,2022-11-30T05:20:56Z,,OWNER,,"https://github.com/simonw/datasette/actions/runs/3580503393/jobs/6022689591 ``` delete_response = await ds_write.client.post( ""/data/{}/{}/-/delete"".format(table, delete_path), headers={ ""Authorization"": ""***"".format(write_token(ds_write)), }, ) > assert delete_response.status_code == 200 E assert 404 == 200 E + where 404 = .status_code /home/runner/work/datasette/datasette/tests/test_api_write.py:396: AssertionError =========================== short test summary info ============================ FAILED tests/test_api_write.py::test_delete_row[compound_pk_table-row_for_create2-pks2-article,k] - assert 404 == 200 + where 404 = .status_code ``` This passes most of the time, but very occasionally fails - in this case in Python 3.7 It seems to only fail for the `article,k` compound primary key test.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1919/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 780278550,MDU6SXNzdWU3ODAyNzg1NTA=,1179,Make original path available to render hooks,9599,open,0,,,8,2021-01-06T08:31:45Z,2021-01-25T04:44:33Z,,OWNER,,"https://github.com/simonw/datasette-export-notebook/blob/0.1/datasette_export_notebook/__init__.py ```python async def render_notebook(datasette, request): return Response.html( await datasette.render_template( ""export_notebook.html"", { ""csv_stream_url"": datasette.absolute_url( request, path_with_format( request=request, format=""csv"", extra_qs={""_stream"": ""on""} ), ), ""json_url"": datasette.absolute_url( request, path_with_format( request=request, format=""json"", extra_qs={""_shape"": ""array""} ), ), ""json"": json, }, ) ) ``` This results in https://latest-with-plugins.datasette.io/github/issue_comments.Notebook showing `http://latest-with-plugins.datasette.io/github/issue_comments.Notebook?_format=json&_shape=array`",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1179/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 268110769,MDU6SXNzdWUyNjgxMTA3Njk=,33,Use locust for benchmarking and load tests,9599,open,0,,,0,2017-10-24T17:00:09Z,2017-12-10T03:12:16Z,,OWNER,,"https://github.com/locustio/locust Needed for #32 ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/33/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 803929694,MDU6SXNzdWU4MDM5Mjk2OTQ=,1219,Try profiling Datasette using scalene,9599,open,0,,,2,2021-02-08T20:37:06Z,2021-02-08T22:13:00Z,,OWNER,,https://github.com/emeryberger/scalene looks like an interesting profiling tool.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1219/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 505673645,MDU6SXNzdWU1MDU2NzM2NDU=,16,Do a better job with archived direct message threads,9599,open,0,,,0,2019-10-11T06:55:21Z,2019-10-11T06:55:27Z,,MEMBER,,https://github.com/dogsheep/twitter-to-sqlite/blob/fb2698086d766e0333a55bb73435e7283feeb438/twitter_to_sqlite/archive.py#L98-L99,206156866,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/16/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1570375808,I_kwDODFdgUs5dmgiA,79,Deploy demo job is failing due to rate limit,9599,open,0,,,2,2023-02-03T20:05:01Z,2023-12-08T14:50:15Z,,MEMBER,,https://github.com/dogsheep/github-to-sqlite/actions/runs/4080058087/jobs/7032116511,207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 718272593,MDU6SXNzdWU3MTgyNzI1OTM=,1007,set-env and add-path commands have been deprecated,9599,open,0,,,1,2020-10-09T16:21:18Z,2020-10-09T16:23:51Z,,OWNER,,"https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/ > Starting today runner version 2.273.5 will begin to warn you if you use the `add-path` or `set-env` commands. We are monitoring telemetry for the usage of these commands and plan to fully disable them in the future.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1007/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1091838742,I_kwDOBm6k_c5BFCMW,1585,Fire base caching for `publish cloudrun`,9599,open,0,,,1,2022-01-01T15:38:15Z,2022-01-01T15:40:38Z,,OWNER,,"https://gist.github.com/steren/03d3e58c58c9a53fd49bb78f58541872 has a recipe for this, via https://twitter.com/steren/status/1477038411114446848 Could this enable easier vanity URLs of the format `https://$project_id.web.app/`? 
How about CDN caching?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1585/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 621486115,MDU6SXNzdWU2MjE0ODYxMTU=,27,photos_with_apple_metadata view should include labels,9599,open,0,,,0,2020-05-20T06:06:17Z,2020-05-20T06:06:17Z,,MEMBER,,"https://dogsheep-photos.dogsheep.net/public/photos_with_apple_metadata?place_city=New+Orleans&_facet=place_city&_facet_array=albums&_facet_array=persons Here's one way to add that: ```sql select rowid, photo, ( select json_group_array( json_object( 'label', normalized_string, 'href', '/photos/labelled?_hide_sql=1&label=' || normalized_string ) ) from labels where labels.uuid = photos_with_apple_metadata.uuid ) as labels, date, ```",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/27/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 915488244,MDU6SXNzdWU5MTU0ODgyNDQ=,1372,"Add section to ""writing plugins"" about security, e.g. avoiding XSS",9599,open,0,,,0,2021-06-08T20:49:33Z,2021-06-08T20:49:46Z,,OWNER,,https://docs.datasette.io/en/stable/writing_plugins.html should have tips on writing secure plugins.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1372/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 863884805,MDU6SXNzdWU4NjM4ODQ4MDU=,1304,"Document how to send multiple values for ""Named parameters"" ",9308268,open,0,,,4,2021-04-21T13:19:06Z,2021-12-08T03:23:14Z,,NONE,,"https://docs.datasette.io/en/stable/sql_queries.html#named-parameters I thought that I had seen an example of how to do this example below, but I can't seem to find it ```sql select * from bib where bib.bib_record_num in (1008088,1008092) ``` ```sql select * from bib where bib.bib_record_num in (:bib_record_numbers) ``` ![image](https://user-images.githubusercontent.com/9308268/115558839-2333a480-a281-11eb-85e6-ce3bada79140.png) https://ilsweb.cincinnatilibrary.org/collection-analysis/current_collection-204d100?sql=select%0D%0A++*%0D%0Afrom%0D%0A++bib%0D%0Awhere%0D%0A++bib.bib_record_num+in+%28%3Abib_record_numbers%29&bib_record_numbers=1008088%2C1008092 Or, maybe this isn't a fully supported feature. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1304/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1575880841,I_kwDOBm6k_c5d7giJ,2020,"Documentation refers to ""off"" setting; doesn't seem to work, ""false"" does",1350673,open,0,,,0,2023-02-08T10:38:10Z,2023-02-08T10:38:10Z,,NONE,,"https://docs.datasette.io/en/stable/settings.html#suggest-facets, among others, suggests using ""off"" to disable the setting; however, this doesn't appear to work in the JSON config files, where it apparently needs to be a ""JSON boolean"" and have the values ""true"" or ""false"". 
Perhaps the Python code is more flexible?...but either way, the documentation probably should mention it.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2020/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1142107925,I_kwDOBm6k_c5EEy8V,1638,`filters_from_request` plugin hook docs should mention that returning an async function is allowed,9599,open,0,,,0,2022-02-18T00:08:26Z,2022-02-18T00:08:26Z,,OWNER,,"https://docs.datasette.io/en/stable/plugin_hooks.html#filters-from-request-request-database-table-datasette doesn't mention that you can return an `async` function - but you can, and in fact Datasette itself uses that here: https://github.com/simonw/datasette/blob/aa7f0037a46eb76ae6fe9bf2a1f616c58738ecdf/datasette/filters.py#L43-L47",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1638/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 855451460,MDU6SXNzdWU4NTU0NTE0NjA=,1297,"Documentation: json1, and introspection endpoints",192568,open,0,,,0,2021-04-12T00:38:00Z,2021-04-12T01:29:33Z,,CONTRIBUTOR,,"https://docs.datasette.io/en/stable/facets.html notes that: > If your SQLite installation provides the json1 extension (you can check using /-/versions) Datasette will automatically detect columns that contain JSON arrays... When I check -/versions I see two sections relevant to json1: ``` ""extensions"": { ""json1"": null }, ""compile_options"": [ ... ""ENABLE_JSON1"", ``` The ENABLE_JSON1 makes me think json1 is likely available. But the `""json1"": null` made me think it wasn't available (because of the `null`). It would help if the documentation provided clarity about how to know if json1 is installed. It would also be helpful if the `/-/versions` information signalled somehow that that is to be appended to the hostname or domain name (or whatever you want to call it, or simply show it, using `example.com/-/versions` instead of `/-/versions`. Likewise on that last point, for https://docs.datasette.io/en/stable/introspection.html#introspection , at least at some point on that page detailing where those introspection endpoints go. (Sometimes documentation can be so abbreviated that it's hard for new users to figure out what's going on.) 
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1297/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 838382890,MDU6SXNzdWU4MzgzODI4OTA=,1273,Refresh SpatiaLite documentation,9599,open,0,,,4,2021-03-23T06:05:55Z,2022-01-20T21:28:50Z,,OWNER,,https://docs.datasette.io/en/0.55/spatialite.html was written before I had tools like [geojson-to-sqlite](https://datasette.io/tools/geojson-to-sqlite) and [shapefile-to-sqlite](https://datasette.io/tools/shapefile-to-sqlite).,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1273/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1765870617,I_kwDOBm6k_c5pQQwZ,2087,`--settings settings.json` option,9599,open,0,,,2,2023-06-20T17:48:45Z,2023-07-14T17:02:03Z,,OWNER,,"https://discord.com/channels/823971286308356157/823971286941302908/1120705940728066080 > May I add a request to the whole metadata / settings ? Allow to pass `--settings path/to/settings.json` instead of having to rely exclusively on directory mode to centralize settings (this would reflect the behavior of providing metadata)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2087/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1764792125,I_kwDOBm6k_c5pMJc9,2086,Show information on startup in directory configuration mode,9599,open,0,,,0,2023-06-20T07:13:33Z,2023-06-20T07:13:33Z,,OWNER,,"https://discord.com/channels/823971286308356157/823971286941302908/1120516587036889098 > One thing that would be helpful would be message at launch indicating a metadata.json is getting picked up. I'm using directory mode and was editing the wrong file for awhile before I realize nothing I was doing was having any effect.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2086/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 703216044,MDU6SXNzdWU3MDMyMTYwNDQ=,49,Feature: gists and starred gists,9599,open,0,,,0,2020-09-17T02:30:52Z,2020-09-17T02:30:52Z,,MEMBER,,https://developer.github.com/v3/gists/#list-starred-gists,207052882,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/49/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1149310456,I_kwDOBm6k_c5EgRX4,1641,Tweak mobile keyboard settings,9599,open,0,,,1,2022-02-24T13:47:10Z,2022-02-24T13:49:26Z,,OWNER,,"https://developer.apple.com/library/archive/documentation/StringsTextFonts/Conceptual/TextAndWebiPhoneOS/KeyboardManagement/KeyboardManagement.html#//apple_ref/doc/uid/TP40009542-CH5-SW12 `autocorrect=""off""` is worth experimenting with. 
Twitter: https://twitter.com/forestgregg/status/1496842959563726852",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1641/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1430797211,I_kwDOBm6k_c5VSDub,1875,Figure out design for JSON errors (consider RFC 7807),9599,open,0,,8755003,7,2022-11-01T03:14:15Z,2022-12-13T05:29:08Z,,OWNER,,"https://datatracker.ietf.org/doc/draft-ietf-httpapi-rfc7807bis/ is a brand new standard. Since I need a neat, predictable format for my JSON errors, maybe I should use this one?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1875/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",, 1516815571,I_kwDOBm6k_c5aaMTT,1975,_col=id can cause id column to export twice in CSV export,9599,open,0,,,0,2023-01-03T00:25:15Z,2023-01-03T00:25:21Z,,OWNER,,"https://datasette.simonwillison.net/simonwillisonblog/blog_entry.csv?_col=id&_col=title&_col=body&_labels=on&_size=1 ```csv id,id,title,body 1,1,WaSP Phase II,""

The Web Standards project has launched Phase II.

"" ``` That should not have two `id` columns.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1975/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1955676270,I_kwDOBm6k_c50kUBu,2201,Discord invite link is invalid,11708906,open,0,,,0,2023-10-21T21:50:05Z,2023-10-21T21:50:05Z,,NONE,,"https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw and returns the following: ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2201/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
925491857,MDU6SXNzdWU5MjU0OTE4NTc=,1383,Improve test coverage for `inspect.py`,9599,open,0,,,0,2021-06-20T00:22:43Z,2021-06-20T00:22:49Z,,OWNER,,https://codecov.io/gh/simonw/datasette/src/main/datasette/inspect.py shows only 36% coverage for that module at the moment.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1383/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1781005740,I_kwDOBm6k_c5qJ_2s,2090,Adopt ruff for linting,9599,open,0,,,2,2023-06-29T14:56:43Z,2023-06-29T15:05:04Z,,OWNER,,https://beta.ruff.rs/docs/,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2090/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
639993467,MDU6SXNzdWU2Mzk5OTM0Njc=,850,Proof of concept for Datasette on AWS Lambda with EFS,9599,open,0,,,25,2020-06-16T21:48:31Z,2020-06-16T23:52:16Z,,OWNER,,"https://aws.amazon.com/about-aws/whats-new/2020/06/aws-lambda-support-for-amazon-elastic-file-system-now-generally-/ If Datasette can run on Lambda with access to EFS it could both read AND write large databases there.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/850/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
624490929,MDU6SXNzdWU2MjQ0OTA5Mjk=,28,Invalid SQL no such table: main.uploads,41439,open,0,,,1,2020-05-25T21:25:39Z,2020-12-24T22:26:22Z,,NONE,,"http://127.0.0.1:8001/photos/photos_with_apple_metadata gives ""Invalid SQL no such table: main.uploads""",256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/28/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1822813627,I_kwDOBm6k_c5spe27,2108,some (many?) SQL syntax errors are not throwing errors with a .csv endpoint,536941,open,0,,,0,2023-07-26T16:57:45Z,2023-07-26T16:58:07Z,,CONTRIBUTOR,,"here's a CTE query that should always fail with a syntax error: ```sql with foo as (nonsense) select * from foo; ``` when we make this query against the default endpoint, we do indeed get a 400 status code and the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B but, if we use the csv endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B same with this bad sql ```sql select a, from foo; ``` https://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B vs https://global-power-plants.datasettes.com/global-power-plants.csv?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B but, datasette catches this bad sql at both endpoints: ```sql slect a from foo; ``` https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2108/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
639542974,MDU6SXNzdWU2Mzk1NDI5NzQ=,47,Fall back to FTS4 if FTS5 is not available,73579,open,0,,,3,2020-06-16T10:11:23Z,2020-06-17T20:13:48Z,,NONE,,"got this with version 0.21.1 from pypi. twitter-to-sqlite auth worked but then ""twitter-to-sqlite user-timeline USER.db"" produced a traceback ending in ""no such module: FTS5"". ",206156866,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1373210675,I_kwDODD6af85R2Ygz,13,fails before generating views. ERR: table sqlite_master may not be modified,116795,open,0,,,4,2022-09-14T15:41:50Z,2023-04-11T03:46:17Z,,NONE,,"generates checkins.db but seems to fail before generating views. Note: it worked on an Ubuntu WSL but fails on macOS 12.5.1 Later edit: I suspect this is a problem with my local set-up, `dogsheep-beta index` also throws the same error. Full error: Importing 2591 checkins [###################################-] 98% 00:00:00 Traceback (most recent call last): File ""/Users/pax/devbox/envAll/bin/swarm-to-sqlite"", line 8, in sys.exit(cli()) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/click/core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/click/core.py"", line 782, in main rv = self.invoke(ctx) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/click/core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/click/core.py"", line 610, in invoke return callback(*args, **kwargs) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/swarm_to_sqlite/cli.py"", line 77, in cli ensure_foreign_keys(db) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/swarm_to_sqlite/utils.py"", line 145, in ensure_foreign_keys db[fk.table].add_foreign_key(fk.column, fk.other_table, fk.other_column) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/sqlite_utils/db.py"", line 2123, in add_foreign_key self.db.add_foreign_keys([(self.name, column, other_table, other_column)]) File ""/Users/pax/devbox/envAll/lib/python3.8/site-packages/sqlite_utils/db.py"", line 1086, in add_foreign_keys cursor.execute( sqlite3.OperationalError: table sqlite_master may not be modified",205429375,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/13/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
426722204,MDU6SXNzdWU0MjY3MjIyMDQ=,423,?_search_col=X not reflected correctly in the UI,9599,open,0,,,0,2019-03-28T21:48:19Z,2020-11-03T19:01:59Z,,OWNER,,"e.g. https://latest.datasette.io/fixtures/searchable?_search_text1=barry ![2019-03-28 at 2 47 PM](https://user-images.githubusercontent.com/9599/55195035-84ebb800-5168-11e9-910b-fc9868bcd93e.png) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/423/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1641013220,I_kwDOBm6k_c5hz9_k,2045,First column on a view page has no facet option in cog menu,9599,open,0,,3268330,0,2023-03-26T18:02:47Z,2023-03-26T18:02:48Z,,OWNER,,"e.g. first column on this page - cog menu has no option to facet. https://datasette.io/content/tools ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2045/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
842695374,MDU6SXNzdWU4NDI2OTUzNzQ=,35,Support to annotate photos on other than macOS OSes,1151557,open,0,,,1,2021-03-28T09:01:25Z,2021-04-05T07:37:57Z,,NONE,,dogsheep-photos allows annotating photos using the Apple Photos db. It would be nice to have such an ability on other OSes too. For example, using a trained local model or the Google Vision API (see #14).,256834907,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/dogsheep-photos/issues/35/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
923270900,MDExOlB1bGxSZXF1ZXN0NjcyMDUzODEx,65,basic support for events,231498,open,0,,,2,2021-06-17T00:51:30Z,2022-10-03T22:35:03Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/github-to-sqlite/pulls/65,"a quick first pass at implementing the feature requested in https://github.com/dogsheep/github-to-sqlite/issues/64 testing instructions: ``` $ github-to-sqlite events events.db user/khimaros ``` if the specified user is the authenticated user, it will also include private events. caveat: pagination appears to be broken (I don't see `next` in the response JSON from GitHub)",207052882,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
1821108702,I_kwDOCGYnMM5si-ne,579,Special handling for SQLite column of type `JSON`,15178711,open,0,,,0,2023-07-25T20:37:23Z,2023-07-25T20:37:23Z,,CONTRIBUTOR,,"`sqlite-utils` should detect and have special handling for columns with a declared `JSON` type. For example: ```sql CREATE TABLE ""dogs"" ( id INTEGER PRIMARY KEY, name TEXT, friends JSON ); ``` ## Automatic Nesting According to [""Nested JSON Values""](https://sqlite-utils.datasette.io/en/stable/cli.html#nested-json-values), sqlite-utils will only expand JSON if the `--json-cols` flag is passed. It looks like it'll try to `json.load` all text columns to test if it's JSON, which can get expensive on non-json columns. Instead, `sqlite-utils` should by default (i.e. without the `--json-cols` flag) do the `maybe_json()` operation on columns with a declared `JSON` type.
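(Editor's note: a minimal runnable sketch of the idea above - keying the decoding off the declared column type so plain TEXT columns never pay the parsing cost. The helpers here are hypothetical, not sqlite-utils API.)
```python
import json
import sqlite3

def json_columns(conn, table):
    # Columns whose declared type is JSON, per PRAGMA table_info
    # (rows are: cid, name, type, notnull, dflt_value, pk).
    return [
        row[1]
        for row in conn.execute(f'PRAGMA table_info([{table}])')
        if (row[2] or '').upper() == 'JSON'
    ]

def maybe_json(value):
    # Decode if the value parses as JSON, otherwise return it unchanged.
    try:
        return json.loads(value)
    except (TypeError, ValueError):
        return value

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE dogs (id INTEGER PRIMARY KEY, name TEXT, friends JSON)')
conn.execute(
    'INSERT INTO dogs VALUES (1, ?, ?)',
    ('Cleo', json.dumps([{'name': 'Pancakes'}, {'name': 'Bailey'}])),
)
cols = json_columns(conn, 'dogs')  # ['friends'] - only this column is decoded
row = conn.execute('SELECT * FROM dogs').fetchone()
for name, value in zip(['id', 'name', 'friends'], row):
    print(name, maybe_json(value) if name in cols else value)
```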
So the above table would expand the `""friends""` column as expected, without the `--json-cols` flag: ```bash sqlite-utils dogs.db ""select * from dogs"" | python -mjson.tool ``` ``` [ { ""id"": 1, ""name"": ""Cleo"", ""friends"": [ { ""name"": ""Pancakes"" }, { ""name"": ""Bailey"" } ] } ] ``` --- I'm sure there are other ways `sqlite-utils` can specially handle JSON columns, so keeping this open while I think of more",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/579/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
626582657,MDU6SXNzdWU2MjY1ODI2NTc=,779,Make human_description_en explicitly available to output renderers,9599,open,0,,,0,2020-05-28T14:59:54Z,2020-05-28T14:59:54Z,,OWNER,,"`datasette-atom` uses this: https://github.com/simonw/datasette-atom/blob/df98a6c43a443224b6cd232f84703ec297ef046b/datasette_atom/__init__.py#L36-L37 ```python if data.get(""human_description_en""): title += "": "" + data[""human_description_en""] ``` It's a nice way to generate a useful title for a filtered table.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/779/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1373224657,I_kwDOCGYnMM5R2b7R,488,`sqlite-utils transform` should set empty strings to null when converting text columns to integer/float,9599,open,0,,,5,2022-09-14T15:51:30Z,2022-12-23T17:38:55Z,,OWNER,,"``` /tmp % echo ""id,age,weight\n1,3,2.5\n2,,"" | sqlite-utils insert test.db test - --csv /tmp % sqlite-utils schema test.db CREATE TABLE [test] ( [id] TEXT, [age] TEXT, [weight] TEXT ); /tmp % sqlite-utils transform test.db test --type age integer --type weight float /tmp % sqlite-utils schema test.db CREATE TABLE ""test"" ( [id] TEXT, [age] INTEGER, [weight] FLOAT ); /tmp % sqlite-utils rows test.db test [{""id"": ""1"", ""age"": 3, ""weight"": 2.5}, {""id"": ""2"", ""age"": """", ""weight"": """"}] ``` It would be neat if this resulted in the following instead: ``` {""id"": ""2"", ""age"": null, ""weight"": null} ``` Related Discord discussion: https://discord.com/channels/823971286308356157/823971286941302908/1019635490833567794",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/488/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
769397742,MDU6SXNzdWU3NjkzOTc3NDI=,3,sqlite-utils error on takeout import,231498,open,0,,,0,2020-12-17T01:18:48Z,2020-12-17T01:19:04Z,,NONE,,"``` $ google-takeout-to-sqlite my-activity takeout.db /path/to/zip ... sqlite3.OperationalError: no such table: main.my_activity ``` there is no table create in `utils.py`, unlike other importers such as github-to-sqlite. Additionally, this package and hackernews-to-sqlite have a conflicting `sqlite-utils` dep with datasette and dogsheep-beta",206649770,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/3/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
647095487,MDU6SXNzdWU2NDcwOTU0ODc=,873,"""datasette -p 0 --root"" gives the wrong URL",9599,open,0,,,14,2020-06-29T04:03:06Z,2020-08-18T17:26:10Z,,OWNER,,"``` $ datasette -p 0 --root http://127.0.0.1:0/-/auth-token?token=2d498c...
``` The port is incorrect.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/873/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1650981564,I_kwDOJHON9s5iZ_q8,12,Error running pytest,14314871,open,0,,,0,2023-04-02T15:02:36Z,2023-04-02T15:07:10Z,,NONE,,"`______________________________________________________ ERROR collecting tests/test_apple_notes_to_sqlite.py _______________________________________________________ ImportError while importing test module '/Users/lol/development/apple-notes-to-sqlite/tests/test_apple_notes_to_sqlite.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/homebrew/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_apple_notes_to_sqlite.py:2: in from apple_notes_to_sqlite.cli import cli, COUNT_SCRIPT, FOLDERS_SCRIPT E ModuleNotFoundError: No module named 'apple_notes_to_sqlite'` Solution: This is likely a PYTHONPATH issue due to having pytest installed both globally and in the venv. We can guarantee the tests run by adding the current directory to sys.path automatically using `python -m pytest` The alternative is to activate the venv, install pytest, deactivate, then activate the venv again (https://stackoverflow.com/questions/35045038/how-do-i-use-pytest-with-virtualenv)",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/12/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
668064026,MDU6SXNzdWU2NjgwNjQwMjY=,911,"Rethink the --name option to ""datasette publish""",9599,open,0,,3268330,0,2020-07-29T18:49:49Z,2020-07-29T18:49:49Z,,OWNER,,`--name` works inconsistently across the different publish providers - on Cloud Run you should use `--service` instead for example. Need to review it across all of them and either remove it or clarify what it does.,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/911/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1363766973,I_kwDOCGYnMM5RSW69,484,Expose convert recipes to `sqlite-utils --functions`,9599,open,0,,,11,2022-09-06T20:15:08Z,2022-09-07T19:09:52Z,,OWNER,,"`--functions` was added in: - #471 It would be useful if the `r.jsonsplit()` and similar recipes for `sqlite-utils convert` could be used in these blocks of code too: https://sqlite-utils.datasette.io/en/stable/cli.html#sqlite-utils-convert-recipes",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/484/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
652961907,MDU6SXNzdWU2NTI5NjE5MDc=,121,Improved (and better documented) support for transactions,9599,open,0,,,3,2020-07-08T04:56:51Z,2020-09-24T20:36:46Z,,OWNER,,"_Originally posted by @simonw in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655283393_ We should put some thought into how this library supports and encourages smart use of transactions.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/121/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
777333388,MDU6SXNzdWU3NzczMzMzODg=,1168,Mechanism for storing metadata in _metadata tables,9599,open,0,,,21,2021-01-01T18:47:27Z,2023-09-28T18:29:05Z,,OWNER,,"_Original title: Perhaps metadata should all live in a `_metadata` in-memory database_ Inspired by #1150 - metadata should be exposed as an API, and for large Datasette instances that API may need to be paginated. So why not expose it through an in-memory database table? One catch to this: plugins. #860 aims to add a plugin hook for metadata. But if the metadata comes from an in-memory table, how do the plugins interact with it? The need to paginate over metadata does make a plugin hook that returns metadata for an individual table seem less wise, since we don't want to have to do 10,000 plugin hook invocations to show a list of all metadata.
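(Editor's note: a rough, self-contained sketch of the in-memory `_metadata` table idea - the schema and names here are hypothetical, not Datasette's design.)
```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE _metadata (database TEXT, [table] TEXT, key TEXT, value TEXT)')
conn.executemany(
    'INSERT INTO _metadata VALUES (?, ?, ?, ?)',
    [
        ('fixtures', 'searchable', 'title', 'A searchable demo table'),
        ('fixtures', 'facetable', 'source', 'Generated test data'),
    ],
)
# One SQL query can replace thousands of per-table plugin hook calls,
# and pagination falls out of ordinary keyset queries over rowid:
page = conn.execute(
    'SELECT rowid, * FROM _metadata WHERE rowid > ? ORDER BY rowid LIMIT 100', (0,)
).fetchall()
print(page)
```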
If those plugins write directly to the in-memory table, how can their contributions survive the server restarting?",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1168/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1163369515,I_kwDOBm6k_c5FV5wr,1655,query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data,536941,open,0,,,8,2022-03-09T00:56:40Z,2023-10-17T21:53:17Z,,CONTRIBUTOR,,"[this page](https://labordata.bunkum.us/opdr-8335ea3?sql=with+most_recent_lu+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from%0D%0A++++%28%0D%0A++++++select%0D%0A++++++++*%0D%0A++++++from%0D%0A++++++++lm_data%0D%0A++++++order+by%0D%0A++++++++f_num%2C%0D%0A++++++++receive_date+desc%0D%0A++++%29+t%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++aff_abbr+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+abbr_local_name%2C%0D%0A++coalesce%28%0D%0A++++regexp_match%28%27%28.*%3F%29%28%2C%3F+AFL-CIO%24%29%27%2C+union_name%29%2C%0D%0A++++regexp_match%28%27%28.*%3F%29%28+IND%24%29%27%2C+union_name%29%2C%0D%0A++++union_name%0D%0A++%29+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+full_local_name%2C%0D%0A++*%0D%0Afrom%0D%0A++most_recent_lu%0D%0Awhere+%28desig_num+IS+NOT+NULL+OR+unit_name+IS+NOT+NULL%29+AND+desig_name+%21%3D+%27HQ%27%0D%0Alimit%0D%0A++5000+offset+0) is using about 400MB in Firefox 97 on macOS. If you download the HTML for the page, it's about 11MB, and if you get the CSV for the data it's about 1MB. It's using over 1GB on Chrome 99. I found this because I was trying to figure out why editing the SQL was getting very slow. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1655/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1661860507,PR_kwDOBm6k_c5N_bMw,2056,GitHub Action to lint Python code with ruff,3709715,open,0,,,6,2023-04-11T06:41:27Z,2023-04-15T14:24:46Z,,FIRST_TIME_CONTRIBUTOR,simonw/datasette/pulls/2056,"[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) and can be used to replace [Flake8](https://pypi.org/project/flake8/) (plus dozens of plugins), [isort](https://pypi.org/project/isort/), [pydocstyle](https://pypi.org/project/pydocstyle/), [yesqa](https://github.com/asottile/yesqa), [eradicate](https://pypi.org/project/eradicate/), [pyupgrade](https://pypi.org/project/pyupgrade/), and [autoflake](https://pypi.org/project/autoflake/), all while executing (in Rust) tens or hundreds of times faster than any individual tool. The ruff Action uses minimal steps to run in ~5 seconds, rapidly providing intuitive GitHub Annotations to contributors. ![image](https://user-images.githubusercontent.com/3709715/223758136-afc386d2-70aa-4eff-953a-2c2d82ceea23.png) ---- :books: Documentation preview :books:: https://datasette--2056.org.readthedocs.build/en/2056/ ",107914493,pull,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2056/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
285168503,MDU6SXNzdWUyODUxNjg1MDM=,176,Add GraphQL endpoint,173848,open,0,,,8,2017-12-29T23:21:01Z,2020-04-21T14:16:24Z,,NONE,,Would make it much easier to build React & similar frontends. Maybe with https://github.com/graphql-python/sanic-graphql ?,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/176/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
974067156,MDU6SXNzdWU5NzQwNjcxNTY=,318,Research: handle gzipped CSV directly,9599,open,0,,,2,2021-08-18T21:23:04Z,2021-08-18T21:25:30Z,,OWNER,,"Would it be worthwhile for the `sqlite-utils` command-line tool to grow features to interact directly and efficiently with gzipped CSV data? Maybe add `--gz` options to both `insert` and to the various commands that output query results.",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/318/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
849220154,MDU6SXNzdWU4NDkyMjAxNTQ=,1286,Better default display of arrays of items,192568,open,0,,,5,2021-04-02T13:31:40Z,2021-06-12T12:36:15Z,,CONTRIBUTOR,,"Would be great to have template filters that convert array fields to bullets and/or delimited lists upon table display: ``` |to_bullets |to_comma_delimited |to_semicolon_delimited ``` or maybe: ``` |join_array(""bullet"") |join_array(""bullet"",""square"") |join_array("";"") |join_array("","") ``` Keeping in mind that bullets show up in html as `<li>` elements while other delimiting characters appear after the value. Of course, the fields themselves would remain as facetable arrays. ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1286/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1618130434,I_kwDOJHON9s5gcrYC,11,Implement a SQL view to make it easier to query files in a nested folder,9599,open,0,,,3,2023-03-09T23:19:28Z,2023-03-09T23:24:01Z,,MEMBER,,"Working with nested data in SQL is tricky, can I make it easier with a view or canned query?",611552758,issue,,,"{""url"": ""https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/11/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
838245338,MDU6SXNzdWU4MzgyNDUzMzg=,1272,Unit tests for the Dockerfile,9599,open,0,,,3,2021-03-23T01:36:29Z,2022-07-29T10:22:59Z,,OWNER,,"Working on the Dockerfile in #1249 made me wish for automated tests - to confirm that it boots up correctly, can run SpatiaLite and doesn't have weird bugs like the `/db` page hanging. These could run in CI too, but maybe only if the `Dockerfile` is updated.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1272/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
741231849,MDU6SXNzdWU3NDEyMzE4NDk=,1087,Idea: ?_extra=urls for getting back URLs to useful things,9599,open,0,,,0,2020-11-12T02:55:41Z,2021-12-15T18:06:16Z,,OWNER,,"Working on https://github.com/simonw/datasette-search-all/issues/10 made me realize that sometimes it can be difficult to calculate the URL for a database, table or row within Datasette.
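(Editor's note: a hypothetical illustration of the URL construction the next paragraph proposes handing back to clients - the helper and the output shape are invented for illustration, not Datasette's API.)
```python
def datasette_urls(base, database, table=None, pks=None):
    # The URLs a '?_extra=urls' style extension might return alongside row data.
    urls = {'database': f'{base}/{database}'}
    if table is not None:
        urls['table'] = f'{base}/{database}/{table}'
        if pks:
            urls['row'] = urls['table'] + '/' + ','.join(str(pk) for pk in pks)
    return urls

print(datasette_urls('https://latest.datasette.io', 'fixtures', 'facetable', [1]))
# {'database': 'https://latest.datasette.io/fixtures',
#  'table': 'https://latest.datasette.io/fixtures/facetable',
#  'row': 'https://latest.datasette.io/fixtures/facetable/1'}
```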
It would be useful to have an optional extra JSON extension (using `?_extra=` from #262) that can help with this.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1087/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
981690086,MDExOlB1bGxSZXF1ZXN0NzIxNjg2NzIx,67,Replacing step ID key with step_id,16374374,open,0,,,0,2021-08-28T01:26:41Z,2021-08-28T01:27:00Z,,FIRST_TIME_CONTRIBUTOR,dogsheep/github-to-sqlite/pulls/67,"Workflows that have an `id` in any step result in the following error when running `workflows`: e.g.`github-to-sqlite workflows github.db nixos/nixpkgs` ```Traceback (most recent call last): File ""/usr/local/bin/github-to-sqlite"", line 8, in sys.exit(cli()) File ""/usr/local/lib/python3.8/dist-packages/click/core.py"", line 1137, in __call__ return self.main(*args, **kwargs) File ""/usr/local/lib/python3.8/dist-packages/click/core.py"", line 1062, in main rv = self.invoke(ctx) File ""/usr/local/lib/python3.8/dist-packages/click/core.py"", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/usr/local/lib/python3.8/dist-packages/click/core.py"", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/usr/local/lib/python3.8/dist-packages/click/core.py"", line 763, in invoke return __callback(*args, **kwargs) File ""/usr/local/lib/python3.8/dist-packages/github_to_sqlite/cli.py"", line 601, in workflows utils.save_workflow(db, repo_id, filename, content) File ""/usr/local/lib/python3.8/dist-packages/github_to_sqlite/utils.py"", line 865, in save_workflow db[""steps""].insert_all( File ""/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py"", line 2596, in insert_all self.insert_chunk( File ""/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py"", line 2378, in insert_chunk result = self.db.execute(query, params) File ""/usr/local/lib/python3.8/dist-packages/sqlite_utils/db.py"", line 419, in execute return self.conn.execute(sql, parameters) sqlite3.IntegrityError: datatype mismatch ``` - [Information about the ID key in a step for GHA](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepsid) - [An example workflow from a public repo](https://github.com/NixOS/nixpkgs/blob/b4cc66827745e525ce7bb54659845ac89788a597/.github/workflows/direct-push.yml#L16) # Changes I'm proposing that the key for `id` in step is replaced with `step_id` so that it no longer interferes with the table `id` for tracking the record.
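(Editor's note: a minimal sketch of the renaming idea only - hypothetical code, not the actual patch in this PR.)
```python
def fix_step(step):
    # A workflow step may carry its own 'id:' key, which collides with the
    # integer 'id' primary key sqlite-utils expects - hence the datatype
    # mismatch above. Stash the YAML value under 'step_id' instead.
    step = dict(step)
    if 'id' in step:
        step['step_id'] = step.pop('id')
    return step

steps = [{'id': 'build', 'run': 'make'}, {'run': 'make test'}]
print([fix_step(s) for s in steps])
# [{'run': 'make', 'step_id': 'build'}, {'run': 'make test'}]
```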
Special thanks to @sarcasticadmin @egiffen and @ruebenramirez for helping a bit on this 😄 ",207052882,pull,,,"{""url"": ""https://api.github.com/repos/dogsheep/github-to-sqlite/issues/67/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",0,
1323332006,I_kwDOBm6k_c5O4HGm,1774,Request of feature for mongo,428820,open,0,,,0,2022-07-31T01:00:05Z,2022-07-31T01:00:05Z,,NONE,,Would love it if we could use datasette for mongo and all its pipelines and workflows,107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1774/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
855476501,MDU6SXNzdWU4NTU0NzY1MDE=,1298,improve table horizontal scroll experience,192568,open,0,,,4,2021-04-12T01:55:16Z,2022-08-30T21:11:49Z,,CONTRIBUTOR,,"Wide tables aren't a huge problem if you know to click and drag right. But it's not at all obvious to do that. (It also tends to blue-select any content as it's dragging.) Depending on column widths, public users might entirely miss all the columns to the right. There is a scrollbar at the bottom of the table, but I'm displaying ALL my records because it's the only way for datasette-vega to make accurate charts. So that bottom scrollbar is likely to be missed. I wonder if some sort of javascript-y mouseover to an arrow might help, similar to those seen in image carousels. Ah: here's a perfect example: 1. Visit http://google.com 2. Search for: animals endangered 3. Note the 'g-right-button' (in the code) that looks like a right-facing caret in a circle. 4. Click on that and the carousel scrolls right (and 'g-left-button' appears on the left). Might be tricky to do that on a table, rather than a one-row carousel, but it's worth experimenting with. Another option is just to put the scrollbars at the top of the table, too. Meantime, I'm trying to build a button like the ""View/hide all columns"" button on https://salaries.news.baltimoresun.com/salaries-be494cf/2019+Maryland+state+salaries Might be nice to have that available by default, with settings in the metadata showing which are on by default. (I saw some other closed issues related to horizontal scrolling, and admit I don't entirely understand them. For instance, the animated gif at https://github.com/simonw/datasette/issues/998#issuecomment-714117534 confuses me.) ",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1298/reactions"", ""total_count"": 4, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1573424830,I_kwDOBm6k_c5dyI6-,2019,Refactor out the keyset pagination code,9599,open,0,,,14,2023-02-06T23:04:00Z,2023-02-08T01:40:46Z,,OWNER,,"While working on: - #1999 I noticed that some of the most complex code in the existing table view is the code that implements keyset pagination: https://github.com/simonw/datasette/blob/0b4a28691468b5c758df74fa1d72a823813c96bf/datasette/views/table.py#L417-L493 Extracting that into a utility function would simplify that code a lot.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2019/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
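(Editor's appendix: the final record above proposes extracting Datasette's keyset pagination logic into a utility function. A self-contained sketch of the general technique, assuming a unique sort column - the real Datasette code also handles ties via compound keys.)
```python
import sqlite3

def keyset_page(conn, table, sort_column, after=None, page_size=100):
    # Fetch page_size + 1 rows so we can tell whether another page exists.
    # Filtering on the last seen key keeps every page an indexed range scan,
    # unlike OFFSET, which rescans all of the skipped rows.
    where = f'WHERE [{sort_column}] > ?' if after is not None else ''
    params = (after,) if after is not None else ()
    rows = conn.execute(
        f'SELECT [{sort_column}] AS _key, * FROM [{table}] {where} '
        f'ORDER BY [{sort_column}] LIMIT {page_size + 1}',
        params,
    ).fetchall()
    page = rows[:page_size]
    next_after = page[-1][0] if len(rows) > page_size else None
    return page, next_after

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY)')
conn.executemany('INSERT INTO t VALUES (?)', [(i,) for i in range(250)])
page, token = keyset_page(conn, 't', 'id')
while token is not None:
    page, token = keyset_page(conn, 't', 'id', after=token)
print(page[0], len(page))  # (200, 200) 50 - the final page
```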