{"html_url": "https://github.com/simonw/datasette/pull/564#issuecomment-1420941334", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/564", "id": 1420941334, "node_id": "IC_kwDOBm6k_c5UsdgW", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2023-02-07T15:14:10Z", "updated_at": "2023-02-07T15:14:10Z", "author_association": "CONTRIBUTOR", "body": "Is this feature covered by any more recent updates to `datasette`, or via any plugins that you're aware of?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 473288428, "label": "First proof-of-concept of Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1248440137", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406", "id": 1248440137, "node_id": "IC_kwDOCGYnMM5Kaa9J", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-09-15T18:13:50Z", "updated_at": "2022-09-15T18:13:50Z", "author_association": "NONE", "body": "I was wondering if you have any more thoughts on this? I have a tangible use case now: adding a \"vector\" column to a database to support semantic search using doc2vec embeddings ([example](https://psychemedia.github.io/storynotes/Lang_Doc2Vec.html); note that the `vtfunc` package may no longer be reliable...).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1128466114, "label": "Creating tables with custom datatypes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1810#issuecomment-1248204219", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1810", "id": 1248204219, "node_id": "IC_kwDOBm6k_c5KZhW7", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-09-15T14:44:47Z", "updated_at": "2022-09-15T14:46:26Z", "author_association": "CONTRIBUTOR", "body": "A couple+ of possible use case examples:\r\n\r\n- someone has a collection of articles indexed with FTS; they want to publish a simple search tool over the results;\r\n- someone has an image collection and they want to be able to search over description text to return images;\r\n- someone has a set of locations with descriptions, and wants to run a query over places and descriptions and get results as a listing or on a map;\r\n- someone has a set of audio or video files with titles, descriptions and/or transcripts, and wants to be able to search over them and return playable versions of returned items.\r\n\r\nIn many cases, I suspect the raw content will be in one table, but the search table will be a second (eg FTS) table. 
Generally, the search may be over one or more joined tables, and the results constructed from one or more tables (which may or may not be distinct from the search tables).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1374626873, "label": "Featured table(s) on the homepage"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041363433", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406", "id": 1041363433, "node_id": "IC_kwDOCGYnMM4-EfHp", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-02-16T10:57:03Z", "updated_at": "2022-02-16T10:57:19Z", "author_association": "NONE", "body": "Wondering if this actually relates to https://github.com/simonw/sqlite-utils/issues/402 ?\r\n\r\nI also wonder if this would be a sensible approach for eg registering `pint` based quantity conversions into and out of the db, perhaps storing the quantity as a serialised `magnitude measurement` single column string?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1128466114, "label": "Creating tables with custom datatypes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1041325398", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/402", "id": 1041325398, "node_id": "IC_kwDOCGYnMM4-EV1W", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-02-16T10:12:48Z", "updated_at": "2022-02-16T10:18:55Z", "author_association": "NONE", "body": "> My hunch is that the case where you want to consider input from more than one column will actually be pretty rare - the only case I can think of where I would want to do that is for latitude/longitude columns\r\n\r\nOther possible pairs: unconventional date/datetime and timezone pairs eg `2022-02-16::17.00, London`; or more generally, numerical value and unit of measurement pairs (eg if you want to cast into and out of different measurement units using packages like `pint`) or currencies etc. Actually, in that case, I guess you may be presenting things that are unit typed already, and so a conversion would need to parse things into an appropriate, possibly two column `value, unit` format.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1125297737, "label": "Advanced class-based `conversions=` mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041313679", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406", "id": 1041313679, "node_id": "IC_kwDOCGYnMM4-ES-P", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-02-16T09:59:51Z", "updated_at": "2022-02-16T10:00:10Z", "author_association": "NONE", "body": "The `CustomColumnType()` approach looks good. 
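For comparison, a sketch of what's already possible with the stdlib `sqlite3` adapter/converter route (e.g. for the vector column case mentioned above; `VECTOR` is just an invented declared type name here, and dtype/shape handling is hand-waved):\r\n\r\n```python\r\nimport sqlite3\r\n\r\nimport numpy as np\r\n\r\n# serialise arrays on the way in; deserialise columns declared as VECTOR\r\nsqlite3.register_adapter(np.ndarray, lambda a: a.tobytes())\r\nsqlite3.register_converter(\"VECTOR\", lambda b: np.frombuffer(b))  # assumes float64\r\n\r\nconn = sqlite3.connect(\"vectors.db\", detect_types=sqlite3.PARSE_DECLTYPES)\r\nconn.execute(\"CREATE TABLE docs (id INTEGER PRIMARY KEY, embedding VECTOR)\")\r\nconn.execute(\"INSERT INTO docs VALUES (1, ?)\", (np.arange(3.0),))\r\n```\r\n\r\nBut that all happens implicitly, at connection level, whereas `CustomColumnType()` would make the intent explicit when the table is created. 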
This pushes you into the mindspace that you are defining and working with a custom column type.\r\n\r\nWhen creating the table, you could then error, or at least warn, if someone wasn't setting a column on a `type` or a custom column type, which I guess is where `mypy` comes in?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1128466114, "label": "Creating tables with custom datatypes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/203#issuecomment-1033641009", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/203", "id": 1033641009, "node_id": "IC_kwDOCGYnMM49nBwx", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-02-09T11:06:18Z", "updated_at": "2022-02-09T11:06:18Z", "author_association": "NONE", "body": "Is there any progress elsewhere on the handling of compound / composite foreign keys, or is this PR still effectively open?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 743384829, "label": "changes to allow for compound foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1591#issuecomment-1010947634", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1591", "id": 1010947634, "node_id": "IC_kwDOBm6k_c48QdYy", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2022-01-12T11:32:17Z", "updated_at": "2022-01-12T11:32:17Z", "author_association": "CONTRIBUTOR", "body": "Is it possible to parse things like `--ext-{plugin}-{arg} VALUE` ?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1100015398, "label": "Maybe let plugins define custom serve options?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-752098906", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 752098906, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-12-29T14:34:30Z", "updated_at": "2020-12-29T14:34:50Z", "author_association": "CONTRIBUTOR", "body": "FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py\r\n\r\nNot a production thing, just an experiment trying to explore what might be possible...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-720354227", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 720354227, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMDM1NDIyNw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-11-02T09:33:58Z", "updated_at": "2020-11-02T09:33:58Z", "author_association": "CONTRIBUTOR", "body": "Thanks; just a note that the `datasette.urls.static(path)` and `datasette.urls.static_plugins(plugin_name, path)` items both seem to be repeated and appear in the docs twice?", "reactions": 
"{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1049#issuecomment-718528252", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1049", "id": 718528252, "node_id": "MDEyOklzc3VlQ29tbWVudDcxODUyODI1Mg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-29T09:20:34Z", "updated_at": "2020-10-29T09:20:34Z", "author_association": "CONTRIBUTOR", "body": "That workaround is probably fine. I was trying to work out whether there might be other situations where a pre-external package load might be useful but couldn't offhand bring any other examples to mind. The static plugins option also looks interesting.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 729017519, "label": "Add template block prior to extra URL loaders"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-716123598", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 716123598, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjEyMzU5OA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-25T10:20:12Z", "updated_at": "2020-10-25T10:53:24Z", "author_association": "CONTRIBUTOR", "body": "I'm trying to [run something behind a MyBinder proxy](https://github.com/ouseful-testing/nbsearch), but seem to have something set up incorrectly and not sure what the fix is?\r\n\r\nI'm starting datasette with jupyter-server-proxy setup:\r\n\r\n```\r\n# __init__.py\r\ndef setup_nbsearch():\r\n\r\n return {\r\n \"command\": [\r\n \"datasette\",\r\n \"serve\",\r\n f\"{_NBSEARCH_DB_PATH}\",\r\n \"-p\",\r\n \"{port}\",\r\n \"--config\",\r\n \"base_url:{base_url}nbsearch/\"\r\n ],\r\n \"absolute_url\": True,\r\n # The following needs a the labextension installing.\r\n # eg in postBuild: jupyter labextension install jupyterlab-server-proxy\r\n \"launcher_entry\": {\r\n \"enabled\": True,\r\n \"title\": \"nbsearch\",\r\n },\r\n }\r\n```\r\n\r\nwhere the `base_url` gets automatically populated by the server-proxy. 
I define the loaders as:\r\n\r\n```\r\n# __init__.py\r\nfrom datasette import hookimpl\r\n\r\n@hookimpl\r\ndef extra_css_urls(database, table, columns, view_name, datasette):\r\n return [\r\n \"/-/static-plugins/nbsearch/prism.css\",\r\n \"/-/static-plugins/nbsearch/nbsearch.css\",\r\n ]\r\n```\r\nbut these seem to also need a base_url prefix set somehow?\r\n\r\nCurrently, the generated HTML loads properly but internal links are incorrect; eg they take the form `` which resolves to eg `https://notebooks.gesis.org/hub/-/static-plugins/nbsearch/prism.css` rather than the required URL of the form `https://notebooks.gesis.org/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static-plugins/nbsearch/prism.css`.\r\n\r\nThe main css is loaded correctly: ``", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-716066000", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 716066000, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-24T22:58:33Z", "updated_at": "2020-10-24T22:58:33Z", "author_association": "CONTRIBUTOR", "body": "From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note:\r\n\r\n```\r\ndatasette.urls.instance()\r\nReturns the URL to the Datasette instance root page. This is usually \"/\"\r\n```\r\n\r\nWhat about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to?\r\n\r\n- [ ] `https://example.com:PORT/weirdpath/datasette`\r\n- [ ] `https://example.com:PORT/weirdpath/`\r\n- [ ] `https://example.com:PORT/`\r\n- [ ] `https://example.com`\r\n- [ ] something else?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-714657366", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 714657366, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-22T17:51:29Z", "updated_at": "2020-10-22T17:51:29Z", "author_association": "CONTRIBUTOR", "body": "How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static` option and metadata formulations such as `\"extra_js_urls\": [ \"/static/app.js\"]` (I've not managed to get this to work in a Jupyter server proxied set up; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) 
method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-604328163", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 604328163, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDMyODE2Mw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-03-26T09:41:30Z", "updated_at": "2020-03-26T09:41:30Z", "author_association": "CONTRIBUTOR", "body": "Fixed by @simonw; example here: https://github.com/simonw/jupyterserverproxy-datasette-demo", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-586599424", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 586599424, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-02-15T15:12:19Z", "updated_at": "2020-02-15T15:12:33Z", "author_association": "CONTRIBUTOR", "body": "So could the polling support also allow you to call sqlite_utils to update a database with csv files? (Though I'm guessing you would only want to handle changed files? Do your scrapers check and cache csv datestamps/hashes?)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-580745213", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 580745213, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MDc0NTIxMw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-31T14:02:38Z", "updated_at": "2020-01-31T14:21:09Z", "author_association": "NONE", "body": "So the conundrum continues.. The simple test case above now runs, but if I upsert a large number of new records (successfully) and then try to upsert a fewer number of new records to a different table, I get the same error.\r\n\r\nIf I run the same upserts again (which in the first case means there are no new records to add, because they were already added), the second upsert works correctly.\r\n\r\nIt feels as if the number of items added via an upsert >> the number of items I try to add in an upsert immediately after, I get the error.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-573047321", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 573047321, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MzA0NzMyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-10T14:02:56Z", "updated_at": "2020-01-10T14:09:23Z", "author_association": "NONE", "body": "Hmmm... just tried with installs from pip and the repo (v2.0.0 and v2.0.1) and I get the error each time (start of second run through the second loop).\r\n\r\nCould it be sqlite3? 
I'm on 3.30.1.\r\n\r\nUPDATE: just tried it on jupyter.org/try and I get the error there, too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/73#issuecomment-571138093", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73", "id": 571138093, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MTEzODA5Mw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-01-06T13:28:31Z", "updated_at": "2020-01-06T13:28:31Z", "author_association": "NONE", "body": "I think I actually had several issues in play...\r\n\r\nThe missing key was one, but I think there is also an issue as per below.\r\n\r\nFor example, in the following:\r\n\r\n```python\r\ndef init_testdb(dbname='test.db'):\r\n \r\n if os.path.exists(dbname):\r\n os.remove(dbname)\r\n\r\n conn = sqlite3.connect(dbname)\r\n db = Database(conn)\r\n \r\n return conn, db\r\n\r\nconn, db = init_testdb()\r\n\r\nc = conn.cursor()\r\nc.executescript('CREATE TABLE \"test1\" (\"Col1\" TEXT, \"Col2\" TEXT, PRIMARY KEY (\"Col1\"));')\r\nc.executescript('CREATE TABLE \"test2\" (\"Col1\" TEXT, \"Col2\" TEXT, PRIMARY KEY (\"Col1\"));')\r\n\r\nprint('Test 1...')\r\nfor i in range(3):\r\n db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n\r\nprint('Test 2...')\r\nfor i in range(3):\r\n db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},\r\n {'Col1':'c','Col2':'x'}], pk=('Col1'))\r\nprint('Done...')\r\n\r\n---------------------------------------------------------------------------\r\nTest 1...\r\nTest 2...\r\n IndexError: list index out of range \r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n in \r\n 22 print('Test 2...')\r\n 23 for i in range(3):\r\n---> 24 db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1'))\r\n 25 db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'},\r\n 26 {'Col1':'c','Col2':'x'}], pk=('Col1'))\r\n\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)\r\n 1157 alter=alter,\r\n 1158 extracts=extracts,\r\n-> 1159 upsert=True,\r\n 1160 )\r\n 1161 \r\n\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)\r\n 1097 # self.last_rowid will be 0 if a \"INSERT OR IGNORE\" happened\r\n 1098 if (hash_id or pk) and self.last_rowid:\r\n-> 1099 row = list(self.rows_where(\"rowid = ?\", [self.last_rowid]))[0]\r\n 1100 if hash_id:\r\n 1101 self.last_pk = row[hash_id]\r\n\r\nIndexError: list index out of range\r\n```\r\n\r\nthe first test works but the second fails. 
Is the length of the list of items being upserted leaking somewhere?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 545407916, "label": "upsert_all() throws issue when upserting to empty table"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-559632608", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 559632608, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTYzMjYwOA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-11-29T01:43:38Z", "updated_at": "2019-11-29T01:43:38Z", "author_association": "CONTRIBUTOR", "body": "In passing, it looks like a start was made on a datasette Jupyter server extension in https://github.com/lucasdurand/jupyter-datasette although the build fails in MyBinder.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/642#issuecomment-559207224", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/642", "id": 559207224, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTIwNzIyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-11-27T18:40:57Z", "updated_at": "2019-11-27T18:41:07Z", "author_association": "CONTRIBUTOR", "body": "Would cookie cutter approaches also work for creating various flavours of customised templates?\r\n\r\nI need to try to create a couple of sites for myself to get a feel for what sorts of thing are easily doable, and what cribbable cookie cutter items might be. 
I'm guessing https://simonwillison.net/2019/Nov/25/niche-museums/ is a good place to start from?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 529429214, "label": "Provide a cookiecutter template for creating new plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/507#issuecomment-509013413", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/507", "id": 509013413, "node_id": "MDEyOklzc3VlQ29tbWVudDUwOTAxMzQxMw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-07-07T16:31:57Z", "updated_at": "2019-07-07T16:31:57Z", "author_association": "CONTRIBUTOR", "body": "Chrome and Firefox [both support headless screengrabs]( https://www.bleepingcomputer.com/news/software/chrome-and-firefox-can-take-screenshots-of-sites-from-the-command-line/) from command line, but I don't know how parameterised they can be?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 455852801, "label": "Every datasette plugin on the ecosystem page should have a screenshot"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/429#issuecomment-483202658", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/429", "id": 483202658, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MzIwMjY1OA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-15T10:48:01Z", "updated_at": "2019-04-15T10:48:01Z", "author_association": "CONTRIBUTOR", "body": "Minor UI observation:\r\n\r\n![image](https://user-images.githubusercontent.com/82988/56127017-2bf78e80-5f74-11e9-9120-9393eb5d4988.png)\r\n\r\n`_where=` renders a `[remove]` link whereas `_facet=` gets a cross to remove it. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432636432, "label": "?_where=sql-fragment parameter for table views"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/431#issuecomment-483017176", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/431", "id": 483017176, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MzAxNzE3Ng==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-14T16:58:37Z", "updated_at": "2019-04-14T16:58:37Z", "author_association": "CONTRIBUTOR", "body": "Hmm... nope... I see an updated timestamp from `ls -al` on the db but no reload?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 432870248, "label": "Datasette doesn't reload when database file changes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/8#issuecomment-482994231", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8", "id": 482994231, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4Mjk5NDIzMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-14T15:04:07Z", "updated_at": "2019-04-14T15:29:33Z", "author_association": "NONE", "body": "\r\n\r\nPLEASE IGNORE THE BELOW... I did a package update and rebuilt the kernel I was working in... may just have been an old version of sqlite_utils, seems to be working now. 
(Too many containers / too many environments!)\r\n\r\n\r\nHas an issue been reintroduced here with FTS? eg I'm getting an error thrown by spaces in column names here:\r\n\r\n```\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)\r\n\r\ndef enable_fts(self, columns, fts_version=\"FTS5\"):\r\n--> 329 \"Enables FTS on the specified columns\"\r\n 330 sql = \"\"\"\r\n 331 CREATE VIRTUAL TABLE \"{table}_fts\" USING {fts_version} (\r\n```\r\n\r\nwhen trying an `insert_all`.\r\n\r\nAlso, if a col has a `.` in it, I seem to get:\r\n\r\n```\r\n/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)\r\n 327 jsonify_if_needed(record.get(key, None)) for key in all_columns\r\n 328 )\r\n--> 329 result = self.db.conn.execute(sql, values)\r\n 330 self.db.conn.commit()\r\n 331 self.last_id = result.lastrowid\r\n\r\nOperationalError: near \".\": syntax error\r\n```\r\n\r\n(Can't post a worked minimal example right now; racing trying to build something against a live timing screen that will stop until next weekend in an hour or two...)\r\n\r\nPS Hmmm I did a test and they seem to work; I must be messing up somewhere else...\r\n\r\n```\r\nimport sqlite3\r\nfrom sqlite_utils import Database\r\n\r\ndbname='testingDB_sqlite_utils.db'\r\n\r\n#!rm $dbname\r\nconn = sqlite3.connect(dbname, timeout=10)\r\n\r\n\r\n#Setup database tables\r\nc = conn.cursor()\r\n\r\nsetup='''\r\nCREATE TABLE IF NOT EXISTS \"test1\" (\r\n \"NO\" INTEGER,\r\n \"NAME\" TEXT\r\n);\r\n\r\nCREATE TABLE IF NOT EXISTS \"test2\" (\r\n \"NO\" INTEGER,\r\n `TIME OF DAY` TEXT\r\n);\r\n\r\nCREATE TABLE IF NOT EXISTS \"test3\" (\r\n \"NO\" INTEGER,\r\n `AVG. SPEED (MPH)` FLOAT\r\n);\r\n'''\r\n\r\nc.executescript(setup)\r\n\r\n\r\nDB = Database(conn)\r\n\r\nimport pandas as pd\r\n\r\ndf1 = pd.DataFrame({'NO':[1,2],'NAME':['a','b']})\r\nDB['test1'].insert_all(df1.to_dict(orient='records'))\r\n\r\ndf2 = pd.DataFrame({'NO':[1,2],'TIME OF DAY':['early on','late']})\r\nDB['test2'].insert_all(df2.to_dict(orient='records'))\r\n\r\ndf3 = pd.DataFrame({'NO':[1,2],'AVG. SPEED (MPH)':['123.3','123.4']})\r\nDB['test3'].insert_all(df3.to_dict(orient='records'))\r\n```\r\n\r\nall seem to work ok. I'm still getting errors in my set up though, which is not too different to the test cases?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 403922644, "label": "Problems handling column names containing spaces or - "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/18#issuecomment-480621924", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/18", "id": 480621924, "node_id": "MDEyOklzc3VlQ29tbWVudDQ4MDYyMTkyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-04-07T19:31:42Z", "updated_at": "2019-04-07T19:31:42Z", "author_association": "NONE", "body": "I've just noticed that SQLite lets you IGNORE inserts that collide with a pre-existing key. This can be quite handy if you have a dataset that keeps changing in part, and you don't want to upsert and replace pre-existing PK rows but you do want to ignore collisions to existing PK rows.\r\n\r\nDoes `sqlite_utils` support such (cavalier!) 
behaviour?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 413871266, "label": ".insert/.upsert/.insert_all/.upsert_all should add missing columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/412#issuecomment-474282321", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/412", "id": 474282321, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3NDI4MjMyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-03-19T10:09:46Z", "updated_at": "2019-03-19T10:09:46Z", "author_association": "CONTRIBUTOR", "body": "Does this also relate to https://github.com/simonw/datasette/issues/283 and the ability to `ATTACH DATABASE`?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 411257981, "label": "Linked Data(sette)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-474280581", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 474280581, "node_id": "MDEyOklzc3VlQ29tbWVudDQ3NDI4MDU4MQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-03-19T10:06:42Z", "updated_at": "2019-03-19T10:06:42Z", "author_association": "CONTRIBUTOR", "body": "This would be really interesting but several possibilities in use arise, I think?\r\n\r\nFor example:\r\n\r\n- I put a new CSV file into the import dir and a new table is created therefrom\r\n- I put a CSV file into the import dir that replaces a previous file / table of the same name as a pre-existing table (eg files that contain monthly data in year to date). The data may also patch previous months, so a full replace / DROP on the original table may well be in order.\r\n- I put a CSV file into the import dir that updates a table of the same name as a pre-existing table (eg files that contain last month's data)\r\n\r\nCSV files may also have messy names compared to the table you want. 
Or an update CSV may have the form `MYTABLENAME-February2019.csv`, etc.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/8#issuecomment-464341721", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8", "id": 464341721, "node_id": "MDEyOklzc3VlQ29tbWVudDQ2NDM0MTcyMQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-02-16T12:08:41Z", "updated_at": "2019-02-16T12:08:41Z", "author_association": "NONE", "body": "We also get an error if a column name contains a `.`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 403922644, "label": "Problems handling column names containing spaces or - "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/160#issuecomment-459915995", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/160", "id": 459915995, "node_id": "MDEyOklzc3VlQ29tbWVudDQ1OTkxNTk5NQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-02-02T00:43:16Z", "updated_at": "2019-02-02T00:58:20Z", "author_association": "CONTRIBUTOR", "body": "Do you have any simple working examples of how to use `--static`? Inspection of default served files suggests locations such as `http://example.com/-/static/app.css?0e06ee`.\r\n\r\nIf `datasette` is being proxied to `http://example.com/foo/datasette`, what form should arguments to `--static` take so that static files are correctly referenced?\r\n\r\nUse case is here: https://github.com/psychemedia/jupyterserverproxy-datasette-demo (trying to do a really simple `datasette` demo in MyBinder using jupyter-server-proxy).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 278208011, "label": "Ability to bundle and serve additional static files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/370#issuecomment-436042445", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/370", "id": 436042445, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNjA0MjQ0NQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-11-05T21:30:42Z", "updated_at": "2018-11-05T21:31:48Z", "author_association": "CONTRIBUTOR", "body": "Another route would be something like creating a `datasette` IPython magic for notebooks to take a dataframe and easily render it as a `datasette`. You'd need to run the app in the background rather than block execution in the notebook. 
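As a sketch of the sort of thing such a magic might wrap (`show_datasette` is just an invented helper name here, not an existing API):\r\n\r\n```python\r\nimport sqlite3\r\nimport subprocess\r\n\r\nimport pandas as pd\r\n\r\ndef show_datasette(df: pd.DataFrame, dbname=\"scratch.db\", port=8001):\r\n    # dump the dataframe into a throwaway sqlite db...\r\n    conn = sqlite3.connect(dbname)\r\n    df.to_sql(\"df\", conn, if_exists=\"replace\", index=False)\r\n    conn.close()\r\n    # ...then serve it in the background so the notebook isn't blocked\r\n    return subprocess.Popen([\"datasette\", \"serve\", dbname, \"-p\", str(port)])\r\n```\r\n\r\n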
Related to that, or to publishing a dataframe in a notebook cell for use in other cells in a non-blocking way, there may be cribs in something like https://github.com/micahscopes/nbmultitask .", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377155320, "label": "Integration with JupyterLab"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/370#issuecomment-436037692", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/370", "id": 436037692, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNjAzNzY5Mg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-11-05T21:15:47Z", "updated_at": "2018-11-05T21:18:37Z", "author_association": "CONTRIBUTOR", "body": "In terms of integration with `pandas`, I was pondering two different ways `datasette`/`csvs_to_sqlite` integration may work:\r\n\r\n- like [`pandasql`](https://github.com/yhat/pandasql), to provide a SQL query layer either by a direct connection to the sqlite db or via `datasette` API;\r\n- as an improvement of `pandas.to_sql()`, which is a bit ropey (e.g. `pandas.to_sql_from_csvs()`, routing the dataframe to sqlite via `csvs_to_sqlite` rather than the dodgy mapping that `pandas` supports).\r\n\r\nThe `pandas.publish_*` idea could be quite interesting though... Would it be useful/fruitful to think about `publish_` as a complement to [`pandas.to_`](https://pandas.pydata.org/pandas-docs/stable/api.html#id12)?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377155320, "label": "Integration with JupyterLab"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/371#issuecomment-435862009", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/371", "id": 435862009, "node_id": "MDEyOklzc3VlQ29tbWVudDQzNTg2MjAwOQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-11-05T12:48:35Z", "updated_at": "2018-11-05T12:48:35Z", "author_association": "CONTRIBUTOR", "body": "I think you need to register a domain name you own separately in order to get a non-IP address? https://www.digitalocean.com/docs/networking/dns/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 377156339, "label": "datasette publish digitalocean plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/276#issuecomment-401310732", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/276", "id": 401310732, "node_id": "MDEyOklzc3VlQ29tbWVudDQwMTMxMDczMg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-06-29T10:05:04Z", "updated_at": "2018-06-29T10:07:25Z", "author_association": "CONTRIBUTOR", "body": "@russs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg [kartena/Proj4Leaflet](https://kartena.github.io/Proj4Leaflet/)) although the leaflet side would need to detect or be informed of the original projection?\r\n\r\nAnother possibility would be to provide an easy way/guidance for users to create an FK'd table containing the WGS84 projection of a non-WGS84 geometry in the original/principal table? 
This could then act as a proxy for serving GeoJSON to the leaflet map?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324835838, "label": "Handle spatialite geometry columns better"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/179#issuecomment-360535979", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/179", "id": 360535979, "node_id": "MDEyOklzc3VlQ29tbWVudDM2MDUzNTk3OQ==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2018-01-25T17:18:24Z", "updated_at": "2018-01-25T17:18:24Z", "author_association": "CONTRIBUTOR", "body": "To summarise that thread:\r\n\r\n- expose the full `metadata.json` object to the index page template, eg to allow tables to be referred to by name;\r\n- ability to import multiple `metadata.json` files, eg to allow metadata files created for a specific SQLite db to be reused in a datasette referring to several database files;\r\n\r\nIt could also be useful to allow users to import a python file containing custom functions that can then be loaded into scope and made available to custom templates.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 288438570, "label": "More metadata options for template authors "}, "performed_via_github_app": null}