issues
1,745 rows where state = "closed" and type = "issue" sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
836273891 | MDU6SXNzdWU4MzYyNzM4OTE= | 1266 | Documentation for Response.asgi_send(send) method | simonw 9599 | closed | 0 | 1 | 2021-03-19T18:52:49Z | 2021-03-20T21:35:00Z | 2021-03-20T21:32:28Z | OWNER | I found myself wanting to use this method for https://github.com/simonw/datasette-auth-passwords/issues/15 - but it's not documented. It should be documented. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
836123030 | MDU6SXNzdWU4MzYxMjMwMzA= | 1265 | Support for HTTP Basic Authentication | yunzheng 468612 | closed | 0 | 3 | 2021-03-19T15:31:09Z | 2021-03-19T22:05:12Z | 2021-03-19T21:03:09Z | NONE | It would be nice if datasette could support HTTP Basic Authentication. For now I could of course leverage Nginx for basic authentication, but it would be nice to have support for this in datasette by default or via a plugin like datasette-auth-github. My main use case is to put the whole datasette instance behind a username/password prompt via Basic Auth and not specific URLs. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
824067604 | MDU6SXNzdWU4MjQwNjc2MDQ= | 1250 | Research: Plugin hook for alternative database connections | simonw 9599 | closed | 0 | 2 | 2021-03-08T00:28:15Z | 2021-03-12T01:01:25Z | 2021-03-12T01:01:17Z | OWNER | The real win would be if this could lead to running Datasette against PostgreSQL. I made some initial explorations in that direction a while ago in #670. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
794554881 | MDU6SXNzdWU3OTQ1NTQ4ODE= | 1208 | A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper | kbaikov 4488943 | closed | 0 | 2 | 2021-01-26T20:56:28Z | 2021-03-11T16:15:49Z | 2021-03-11T16:15:49Z | CONTRIBUTOR | Your code is full of open files that are never closed, especially when you deal with reading/writing json/yaml files. If you run Python with warnings enabled this problem becomes evident. This probably contributes to some memory leaks in long-running datasettes if the GC does not 'collect' those resources properly. This is easily fixed by using a context manager instead of just using open (a sketch follows this record):
In some newer parts of the code you use Path objects' 'read_text' and 'write_text' functions, which close the file properly and are preferred in some cases. If you want I can create a PR for all the places I found this pattern in. Below is a fraction of the places where I found a ResourceWarning:
```python
update-docs-help.py:
20   actual = actual.replace("Usage: cli ", "Usage: datasette ")
21:  open(docs_path / filename, "w").write(actual)
22

datasette\app.py:
210  ):
211:     inspect_data = json.load((config_dir / "inspect-data.json").open())
212      if immutables is None:

266      if config_dir and (config_dir / "settings.json").exists() and not config:
267:         config = json.load((config_dir / "settings.json").open())
268      self._settings = dict(DEFAULT_SETTINGS, **(config or {}))

445      self._app_css_hash = hashlib.sha1(
446:         open(os.path.join(str(app_root), "datasette/static/app.css"))
447          .read()

datasette\cli.py:
130      else:
131:         out = open(inspect_file, "w")
132      loop = asyncio.get_event_loop()

459      if inspect_file:
460:         inspect_data = json.load(open(inspect_file))
461
``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
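A minimal sketch of the context-manager fix described in issue 1208 above; the path and file contents are hypothetical placeholders rather than code from the Datasette repository.

```python
from pathlib import Path

docs_path = Path("docs")  # hypothetical directory, for illustration only
docs_path.mkdir(exist_ok=True)
actual = "Usage: datasette ..."

# Instead of open(docs_path / "cli.txt", "w").write(actual), which leaves the
# handle to be closed whenever the garbage collector gets around to it:
with open(docs_path / "cli.txt", "w") as fp:
    fp.write(actual)

# Or, as the issue notes, pathlib's write_text() opens and closes the file itself:
(docs_path / "cli.txt").write_text(actual)
```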
823035080 | MDU6SXNzdWU4MjMwMzUwODA= | 1248 | duckdb database (very low performance in SQLite) | verajosemanuel 15836677 | closed | 0 | 1 | 2021-03-05T12:20:29Z | 2021-03-08T00:25:27Z | 2021-03-08T00:25:27Z | NONE | My sqlite is getting too big to be processed by datasette (more than 10 minutes waiting to load) so I am working with duckdb, and it is way faster - I think it is the fastest embeddable database, actually. Taking into account that DuckDb is SQLite based, it would be GREAT to use it with datasette. Is that possible? Regards and thanks for a superb job |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
818430405 | MDU6SXNzdWU4MTg0MzA0MDU= | 1247 | datasette.add_memory_database() method | simonw 9599 | closed | 0 | 2 | 2021-03-01T03:48:38Z | 2021-03-01T04:02:26Z | 2021-03-01T04:02:26Z | OWNER | I just wrote this code: [...] It would be nice if you didn't have to separately instantiate a database object here. (A sketch of the pattern follows this record.) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
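A sketch of the pattern discussed in issue 1247 above, assuming Datasette's `Database(ds, memory_name=...)` constructor, `add_database()` and the `add_memory_database()` shortcut the issue proposes; the database name is a made-up example.

```python
from datasette.app import Datasette
from datasette.database import Database

ds = Datasette([])

# The two-step pattern the issue complains about: instantiate a Database,
# then register it with the Datasette instance.
db = ds.add_database(Database(ds, memory_name="test_db"))

# The convenience method proposed by the issue collapses that into one call.
db = ds.add_memory_database("test_db")
```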
817597268 | MDU6SXNzdWU4MTc1OTcyNjg= | 1246 | Suggest for ArrayFacet possibly confused by blank values | simonw 9599 | closed | 0 | 3 | 2021-02-26T19:11:52Z | 2021-03-01T03:46:11Z | 2021-03-01T03:46:11Z | OWNER | I sometimes don't get the suggestion for facet-by-array for columns that contain arrays. I think it may be because they have empty spaces in them - or perhaps it's because the null detection doesn't actually work. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
718259202 | MDU6SXNzdWU3MTgyNTkyMDI= | 1005 | Remove xfail tests when new httpx is released | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 3 | 2020-10-09T16:00:19Z | 2021-02-28T22:41:08Z | 2021-02-28T22:41:08Z | OWNER |
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
814591962 | MDU6SXNzdWU4MTQ1OTE5NjI= | 1240 | Allow facetting on custom queries | Kabouik 7107523 | closed | 0 | 3 | 2021-02-23T15:52:19Z | 2021-02-26T18:19:46Z | 2021-02-26T18:18:18Z | NONE | Facets are a tremendously useful feature, especially for people peeking at the database for the first time and still having little knowledge about the details of the data. It is of great assistance to discover interesting features to explore further in advanced queries. Yet, it seems it's impossible to use facets when running a custom SQL query, be it from the little gear icons in column names, the facet suggestions at the top (hidden when performing a custom query), or by appending a facet code to the URL. Is there a technical limitation, or is this something that could be unlocked easily? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
817528452 | MDU6SXNzdWU4MTc1Mjg0NTI= | 1244 | Plugin tip: look at the examples linked from the hooks page | simonw 9599 | closed | 0 | 1 | 2021-02-26T17:18:27Z | 2021-02-26T17:30:38Z | 2021-02-26T17:27:15Z | OWNER | Someone asked "what are good example plugins I can look at?" and I realized that the answer is to look through the example links on https://docs.datasette.io/en/stable/plugin_hooks.html - but that tip should be written down somewhere on the https://docs.datasette.io/en/stable/writing_plugins.html page. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
815554385 | MDU6SXNzdWU4MTU1NTQzODU= | 237 | db["my_table"].drop(ignore=True) parameter, plus sqlite-utils drop-table --ignore and drop-view --ignore | mhalle 649467 | closed | 0 | 3 | 2021-02-24T14:55:06Z | 2021-02-25T17:11:41Z | 2021-02-25T17:11:41Z | NONE | When I'm generating a derived table in Python, I often drop the table and create it from scratch. However, the first time I generate the table, it doesn't exist, so the drop raises an exception. That means more boilerplate. I was going to submit a pull request that adds an "if_exists" option to the drop method. However, for a utility like sqlite_utils, perhaps the "IF EXISTS" SQL semantics is what you want most of the time, and thus should be the default. What do you think? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
816523763 | MDU6SXNzdWU4MTY1MjM3NjM= | 238 | .add_foreign_key() corrupts database if column contains a space | simonw 9599 | closed | 0 | 1 | 2021-02-25T15:07:20Z | 2021-02-25T16:54:02Z | 2021-02-25T16:54:02Z | OWNER | I ran this (a reproduction sketch follows this record):
And got this:
```
~/jupyter-venv/lib/python3.9/site-packages/sqlite_utils/db.py in add_foreign_keys(self, foreign_keys)
    616         # Have to VACUUM outside the transaction to ensure .foreign_keys property
    617         # can see the newly created foreign key.
--> 618         self.vacuum()
    619
    620     def index_foreign_keys(self):

~/jupyter-venv/lib/python3.9/site-packages/sqlite_utils/db.py in vacuum(self)
    629
    630     def vacuum(self):
--> 631         self.execute("VACUUM;")
    632
    633

~/jupyter-venv/lib/python3.9/site-packages/sqlite_utils/db.py in execute(self, sql, parameters)
    234             return self.conn.execute(sql, parameters)
    235         else:
--> 236             return self.conn.execute(sql)
    237
    238     def executescript(self, sql):

DatabaseError: database disk image is malformed
``` |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
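A hypothetical reproduction sketch for issue 238 above, using the sqlite-utils Python API; the table and column names are invented for illustration.

```python
import sqlite_utils

db = sqlite_utils.Database("fk-demo.db")
db["other_table"].insert({"id": 1, "name": "one"}, pk="id")
db["foo"].insert({"id": 1, "other id": 1}, pk="id")

# Adding a foreign key on a column whose name contains a space is what
# triggered the "database disk image is malformed" error shown above.
db["foo"].add_foreign_key("other id", "other_table", "id")
```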
816560819 | MDU6SXNzdWU4MTY1NjA4MTk= | 240 | table.pks_and_rows_where() method returning primary keys along with the rows | simonw 9599 | closed | 0 | 7 | 2021-02-25T15:49:28Z | 2021-02-25T16:39:23Z | 2021-02-25T16:28:23Z | OWNER | Original title: Easier way to update a row returned from .rows Here's a surprisingly hard problem I ran into while trying to implement #239 - given a row returned by The problem is that the Instead, currently, you need to introspect the table and, if A utility mechanism to make this easier would be very welcome. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
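A sketch of the method issue 240 above ended up describing, assuming the `pks_and_rows_where()` API that sqlite-utils later shipped; the table data is invented.

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
# A rowid table with no explicit primary key - the case the issue describes.
db["dogs"].insert_all([{"name": "Cleo"}, {"name": "Pancakes"}])

# Unlike .rows, this yields (primary_key, row) pairs, so the hidden rowid is
# available and can be passed on to .update() for each row.
for pk, row in db["dogs"].pks_and_rows_where():
    print(pk, row)
```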
813978858 | MDU6SXNzdWU4MTM5Nzg4NTg= | 1239 | JSON filter fails if column contains spaces | simonw 9599 | closed | 0 | 1 | 2021-02-23T00:18:07Z | 2021-02-23T00:22:53Z | 2021-02-23T00:22:53Z | OWNER | Got this exception:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
783778672 | MDU6SXNzdWU3ODM3Nzg2NzI= | 220 | Better error message for *_fts methods against views | mhalle 649467 | closed | 0 | 3 | 2021-01-11T23:24:00Z | 2021-02-22T20:44:51Z | 2021-02-14T22:34:26Z | NONE | enable_fts and its related methods only work on tables, not views. Could those methods and possibly others move up to the Queryable superclass? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
797651831 | MDU6SXNzdWU3OTc2NTE4MzE= | 1212 | Tests are very slow. | kbaikov 4488943 | closed | 0 | 4 | 2021-01-31T08:06:16Z | 2021-02-19T22:54:13Z | 2021-02-19T22:54:13Z | CONTRIBUTOR | Working on my PR, I noticed that the tests are very slow. The plain pytest run took about 37 minutes for me.
However, I could shave off about 10 minutes from that if I used pytest-xdist to parallelize execution (a sample invocation follows this record).
I can create a PR to mention that in your documentation. This will be a simple change to add pytest-xdist to the requirements and change the command used to run pytest in the documentation. Does that make sense to you? After a bit more investigation it looks like pytest-xdist is not the answer: it creates a race condition for tests that try to clean the temp dir before running. Profiling shows that most time is spent on conn.executescript(TABLES) in the make_app_client function, which makes sense. Perhaps the better approach would be to look at the app_client fixture, which is already session scoped but not used by all test cases. And/or use conn = sqlite3.connect(":memory:"), which is much faster. And/or truncate tables after each test case instead of deleting the file and re-creating them. I can take a look at which is the best approach if you give the go-ahead. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
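For reference, a sketch of the pytest-xdist invocation mentioned in issue 1212 above (which the author ultimately found unsuitable because of temp-directory race conditions); these are standard pytest-xdist flags, not commands from the Datasette documentation.

```
pip install pytest-xdist
# Run the test suite across all available CPU cores
pytest -n auto
```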
811680502 | MDU6SXNzdWU4MTE2ODA1MDI= | 236 | --attach command line option for attaching extra databases | simonw 9599 | closed | 0 | 1 | 2021-02-19T04:38:30Z | 2021-02-19T05:10:41Z | 2021-02-19T05:08:43Z | OWNER | This will enable cross-database joins, as seen in https://github.com/simonw/datasette/issues/283 Also refs #113 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
621286870 | MDU6SXNzdWU2MjEyODY4NzA= | 113 | Syntactic sugar for ATTACH DATABASE | simonw 9599 | closed | 0 | 2 | 2020-05-19T21:10:00Z | 2021-02-19T05:09:12Z | 2021-02-19T04:56:36Z | OWNER | https://www.sqlite.org/lang_attach.html Maybe something like this (a sketch follows this record):
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
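A sketch of the syntactic sugar discussed in issue 113 above, assuming the `db.attach()` method that sqlite-utils later added; the file names, alias and table are examples.

```python
import sqlite_utils

db = sqlite_utils.Database("first.db")

# Roughly equivalent to running: ATTACH DATABASE 'second.db' AS other
db.attach("other", "second.db")

# Cross-database queries can then reference the alias
# (assumes second.db contains a table called some_table):
rows = db.execute("select * from other.some_table").fetchall()
```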
811589344 | MDU6SXNzdWU4MTE1ODkzNDQ= | 1235 | Upgrade Python version used by official Datasette Docker image | simonw 9599 | closed | 0 | 2 | 2021-02-19T00:47:40Z | 2021-02-19T01:48:31Z | 2021-02-19T01:48:30Z | OWNER | Currently uses 3.7.2: https://github.com/simonw/datasette/blob/73bed175631a79e13a521eee82f8451dd0477eb3/Dockerfile#L1 There's a security fix for Python which it would be good to ship in this image (even though I'm reasonably confident it doesn't affect Datasette): https://bugs.python.org/issue42938 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
808843401 | MDU6SXNzdWU4MDg4NDM0MDE= | 1226 | --port option should validate port is between 0 and 65535 | simonw 9599 | closed | 0 | 4 | 2021-02-15T22:01:33Z | 2021-02-18T18:41:27Z | 2021-02-18T18:41:27Z | OWNER | Currently throws an ugly error message:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
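One possible way to implement the validation described in issue 1226 above, sketched with Click's built-in `IntRange` type; this is an illustration, not necessarily the exact fix that landed in Datasette.

```python
import click

@click.command()
@click.option(
    "-p",
    "--port",
    default=8001,
    type=click.IntRange(0, 65535),  # rejects out-of-range values with a readable message
    help="Port for server, defaults to 8001",
)
def serve(port):
    click.echo(f"Would serve on port {port}")

if __name__ == "__main__":
    serve()
```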
807174161 | MDU6SXNzdWU4MDcxNzQxNjE= | 227 | Error reading csv files with large column data | camallen 295329 | closed | 0 | 4 | 2021-02-12T11:51:47Z | 2021-02-16T11:48:03Z | 2021-02-14T21:17:19Z | NONE | Feel free to close this issue - I mostly added it for reference for future folks that run into this :) I have a CSV file with one column that has very long strings. When I try to import this file via the insert command I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/sqlite-utils", line 10, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in call
return self.main(args, kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(args, kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 774, in insert
default=default,
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 705, in insert_upsert_implementation
docs, pk=pk, batch_size=batch_size, alter=alter, extra_kwargs
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py", line 1852, in insert_all
first_record = next(records)
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 703, in <genexpr>
docs = (decode_base64_values(doc) for doc in docs)
File "/usr/local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 681, in <genexpr>
docs = (dict(zip(headers, row)) for row in reader)
_csv.Error: field larger than field limit (131072)
sqlite-utils --version
sqlite-utils, version 3.4.1
datasette --version
datasette, version 0.54
```
It appears this is a known issue reading in CSV files in Python and it doesn't look to be modifiable through system / env vars (I may be very wrong on this; an in-code workaround is sketched after this record). Noting that using sqlite3 [...] Finally, I'm loving https://datasette.io/ - thank you very much for an amazing tool and data ecosystem 🙇‍♀️ |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
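A sketch of the standard-library workaround for the `_csv.Error: field larger than field limit (131072)` error in issue 227 above, using `csv.field_size_limit()`; the CSV filename is a placeholder.

```python
import csv

# The default limit is 131072 bytes; raise it before reading the file.
# (A fixed large value avoids the OverflowError that sys.maxsize can hit on some platforms.)
csv.field_size_limit(2 ** 30)

with open("data.csv", newline="") as fp:
    for row in csv.reader(fp):
        pass  # rows with very large fields no longer raise _csv.Error
```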
807817197 | MDU6SXNzdWU4MDc4MTcxOTc= | 229 | Hitting `_csv.Error: field larger than field limit (131072)` | frosencrantz 631242 | closed | 0 | 3 | 2021-02-13T19:52:44Z | 2021-02-14T21:33:33Z | 2021-02-14T21:33:33Z | NONE | I have a csv file where one of the fields is so large it is throwing an exception with this error and stops loading:
The stack trace occurs here: https://github.com/simonw/sqlite-utils/blob/3.1/sqlite_utils/cli.py#L633 There is a way to handle this that helps: https://stackoverflow.com/questions/15063936/csv-error-field-larger-than-field-limit-131072 One issue I had with this problem was sqlite-utils only provides limited context as to where the problem line is. There is the progress bar, but that is by percent rather than by line number. It would have been helpful if it could have provided a line number. Also, it would have been useful if it had allowed the loading to continue with later lines. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
808008305 | MDU6SXNzdWU4MDgwMDgzMDU= | 230 | --sniff option for sniffing delimiters | simonw 9599 | closed | 0 | 8 | 2021-02-14T17:43:54Z | 2021-02-14T21:15:33Z | 2021-02-14T19:24:32Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/228#issuecomment-778812050 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
808046597 | MDU6SXNzdWU4MDgwNDY1OTc= | 234 | .insert_all() fails if subsequent chunks contain additional columns | simonw 9599 | closed | 0 | 1 | 2021-02-14T21:01:51Z | 2021-02-14T21:03:40Z | 2021-02-14T21:03:40Z | OWNER | Reported by @nieuwenhoven in #225 along with a proposed fix. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
808036774 | MDU6SXNzdWU4MDgwMzY3NzQ= | 232 | Run tests against Windows in GitHub Actions | simonw 9599 | closed | 0 | 0 | 2021-02-14T20:09:45Z | 2021-02-14T20:39:55Z | 2021-02-14T20:39:55Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/225#issuecomment-778834504 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
808028757 | MDU6SXNzdWU4MDgwMjg3NTc= | 231 | limit=X, offset=Y parameters for more Python methods | simonw 9599 | closed | 0 | 2 | 2021-02-14T19:31:23Z | 2021-02-14T20:03:08Z | 2021-02-14T20:03:08Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/224#issuecomment-778828495 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
806743116 | MDU6SXNzdWU4MDY3NDMxMTY= | 1220 | Installing datasette via docker: Path 'fixtures.db' does not exist | aborruso 30607 | closed | 0 | 4 | 2021-02-11T21:09:14Z | 2021-02-12T21:35:17Z | 2021-02-12T21:35:17Z | NONE | Hi, If I run
I have
If I run What's my error? Thank you |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
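For issue 1220 above, a sketch of the kind of invocation the Datasette Docker documentation describes: the host directory containing the database file has to be mounted into the container so the path exists inside it. The mount point and port are illustrative.

```
# Mount the current directory (which contains fixtures.db) into the container at /mnt,
# then point datasette at the path as seen from inside the container:
docker run -p 8001:8001 -v "$(pwd):/mnt" \
    datasetteproject/datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db
```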
743297582 | MDU6SXNzdWU3NDMyOTc1ODI= | 7 | evernote-to-sqlite on windows 10 give this error: TypeError: insert() got an unexpected keyword argument 'replace' | martinvanwieringen 42387931 | closed | 0 | 1 | 2020-11-15T16:57:28Z | 2021-02-11T22:13:17Z | 2021-02-11T22:13:17Z | NONE | Running evernote-to-sqlite 0.2 on Windows 10. Command: evernote-to-sqlite enex evernote.db MyNotes.enex
I get the following error:
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\evernote_to_sqlite\utils.py", line 46, in save_note
    note_id = db["notes"].insert(row, hash_id="id", replace=True, alter=True).last_pk
TypeError: insert() got an unexpected keyword argument 'replace'
Removing replace=True leads to the error below:
    note_id = db["notes"].insert(row, hash_id="id", alter=True).last_pk
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py", line 924, in insert
    return self.insert_all(
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py", line 1046, in insert_all
    result = self.db.conn.execute(sql, values)
sqlite3.IntegrityError: UNIQUE constraint failed: notes.id |
evernote-to-sqlite 303218369 | issue | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
748372469 | MDU6SXNzdWU3NDgzNzI0Njk= | 9 | ParseError: undefined entity š | mkorosec 4028322 | closed | 0 | 1 | 2020-11-22T23:04:35Z | 2021-02-11T22:10:55Z | 2021-02-11T22:10:55Z | CONTRIBUTOR | I encountered a parse error if the enex file contained š or Run command: evernote-to-sqlite enex evernote.db evernote.enex
Workaround:
|
evernote-to-sqlite 303218369 | issue | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
792851444 | MDU6SXNzdWU3OTI4NTE0NDQ= | 11 | XML parse error | dskrad 3613583 | closed | 0 | 2 | 2021-01-24T17:38:54Z | 2021-02-11T21:18:58Z | 2021-02-11T21:18:48Z | NONE | I am on Windows 10 using Windows Subsystem for Linux, Python 3.8. I installed evernote-to-sqlite via pipx (in a venv). I tried using enex files from the latest version of Evernote for Windows (10.6.9 which only lets you export 50 notes at a time) and from Legacy Evernote (6.25.2.9198 which lets you export all your notes at once). The enex file from latest evernote gives this error:
The enex file from Legacy Evernote gives this error:
|
evernote-to-sqlite 303218369 | issue | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/11/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
802583450 | MDU6SXNzdWU4MDI1ODM0NTA= | 226 | 3.4 release is broken - includes a rogue line | simonw 9599 | closed | 0 | 0 | 2021-02-06T02:08:01Z | 2021-02-06T02:10:26Z | 2021-02-06T02:10:26Z | OWNER | I started seeing weird errors, caused by this line: https://github.com/simonw/sqlite-utils/blob/f8010ca78fed8c5fca6cde19658ec09fdd468420/sqlite_utils/cli.py#L1-L3 That was added by accident in 1b666f9315d4ea6bb332b2e75e48480c26100199 I'm surprised the tests didn't catch this! |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
788527932 | MDU6SXNzdWU3ODg1Mjc5MzI= | 223 | --delimiter option for CSV import | simonw 9599 | closed | 0 | 2 | 2021-01-18T20:25:03Z | 2021-02-06T01:39:47Z | 2021-02-06T01:34:54Z | OWNER |
It would be useful to be able to do this (a sketch follows this record):
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
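A sketch of the option proposed in issue 223 above, matching the `--delimiter` flag that sqlite-utils later shipped for CSV imports; the database, table and file names are examples.

```
# Import a semicolon-delimited file into a "data" table
sqlite-utils insert mydb.db data data.csv --csv --delimiter ";"
```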
796234313 | MDU6SXNzdWU3OTYyMzQzMTM= | 1210 | Immutable Database w/ Canned Queries | heyarne 525780 | closed | 0 | 2 | 2021-01-28T18:08:29Z | 2021-02-05T11:30:34Z | 2021-02-05T11:30:34Z | NONE | I have a database that I only want to read from; when instructing datasette to treat the database as immutable my defined canned queries disappear. Are these two features incompatible or have I hit an unintended bug? Thanks for datasette in any way, it's a joy to use! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
796736607 | MDU6SXNzdWU3OTY3MzY2MDc= | 56 | Not all quoted statuses get fetched? | gsajko 42315895 | closed | 0 | 3 | 2021-01-29T09:48:44Z | 2021-02-03T10:36:36Z | 2021-02-03T10:36:36Z | NONE | In my database I have 13300 quote tweets, but eta 3600 have I fetched some of them using |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/56/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
799693777 | MDU6SXNzdWU3OTk2OTM3Nzc= | 1214 | Re-submitting filter form duplicates _x querystring arguments | simonw 9599 | closed | 0 | 3 | 2021-02-02T21:13:35Z | 2021-02-02T21:28:53Z | 2021-02-02T21:21:13Z | OWNER | Really nasty bug, caused by #1194 fix in 07e163561592c743e4117f72102fcd350a600909 Navigate to this page: https://github-to-sqlite.dogsheep.net/github/labels?_search=help&_sort=id Click "Apply" to submit the form and the resulting URL is https://github-to-sqlite.dogsheep.net/github/labels?_search=help&_sort=id&_search=help&_sort=id That's because the (truncated) HTML for the form looks like this:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
793881756 | MDU6SXNzdWU3OTM4ODE3NTY= | 1207 | Document the Datasette(..., pdb=True) testing pattern | simonw 9599 | closed | 0 | 1 | 2021-01-26T02:48:10Z | 2021-01-29T02:37:19Z | 2021-01-29T02:12:34Z | OWNER | If you're writing tests for a Datasette plugin and you get a 500 error from inside Datasette, you can cause Datasette to open a PDB session within the application server code by doing this (a sketch follows this record):
You'll need to run |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
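A sketch of the testing pattern from issue 1207 above, assuming the `Datasette(..., pdb=True)` constructor argument; the database path and request are placeholders, and running pytest with `-s` (so pdb can read from stdin) is the usual companion step.

```python
import pytest
from datasette.app import Datasette

@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_homepage():
    # Any unexpected 500 error inside Datasette drops into a pdb session
    # in the application server code instead of returning an error response.
    ds = Datasette(["fixtures.db"], pdb=True)
    response = await ds.client.get("/")
    assert response.status_code == 200
```

Run it with `pytest -s` so the interactive debugger can accept input.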
793027837 | MDU6SXNzdWU3OTMwMjc4Mzc= | 1205 | Rename /:memory: to /_memory | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 3 | 2021-01-25T05:04:56Z | 2021-01-28T22:55:02Z | 2021-01-28T22:51:42Z | OWNER | For consistency with /_internal. This change would need to be in before Datasette 1.0. I could land it earlier and set up redirects from the old URLs though. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
770448622 | MDU6SXNzdWU3NzA0NDg2MjI= | 1151 | Database class mechanism for cross-connection in-memory databases | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 11 | 2020-12-17T23:25:43Z | 2021-01-26T19:07:44Z | 2020-12-18T01:01:26Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/1150#issuecomment-747768112 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
792904595 | MDU6SXNzdWU3OTI5MDQ1OTU= | 1201 | Release notes for Datasette 0.54 | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 5 | 2021-01-24T21:22:28Z | 2021-01-25T17:42:21Z | 2021-01-25T17:42:21Z | OWNER | These will incorporate the release notes from the alpha, much expanded: https://github.com/simonw/datasette/releases/tag/0.54a0 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
788447787 | MDU6SXNzdWU3ODg0NDc3ODc= | 1194 | ?_size= argument is not persisted by hidden form fields in the table filters | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 3 | 2021-01-18T17:41:52Z | 2021-01-25T03:10:23Z | 2021-01-25T03:10:23Z | OWNER | Click "Apply" on https://covid-19.datasettes.com/covid/ny_times_us_counties?_size=1000&county__exact=San+Francisco&state__exact=California&_sort_desc=date#g.mark=line&g.x_column=date&g.x_type=temporal&g.y_column=cases&g.y_type=quantitative and the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
777145954 | MDU6SXNzdWU3NzcxNDU5NTQ= | 1167 | Add Prettier to contributing documentation | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 3 | 2020-12-31T22:00:55Z | 2021-01-25T02:01:19Z | 2021-01-25T01:58:28Z | OWNER | Following #1166 - the docs at https://docs.datasette.io/en/stable/contributing.html should include a section about JavaScript, and it should document how to run Prettier. I run it in VS Code but it can be run on the command-line too:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
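A sketch of a command-line Prettier run for issue 1167 above; the glob is illustrative rather than the exact pattern the contributing documentation settled on.

```
# Reformat Datasette's JavaScript in place
npx prettier --write 'datasette/static/*.js'
```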
771208009 | MDU6SXNzdWU3NzEyMDgwMDk= | 1154 | Documentation for new _internal database and tables | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 2 | 2020-12-18T22:34:52Z | 2021-01-25T00:09:22Z | 2021-01-25T00:08:41Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/1150#issuecomment-748352106 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
792931244 | MDU6SXNzdWU3OTI5MzEyNDQ= | 1202 | Documentation convention for marking unstable APIs. | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 2 | 2021-01-24T23:47:18Z | 2021-01-25T00:01:02Z | 2021-01-25T00:01:02Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/1154#issuecomment-766462197 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
785588942 | MDU6SXNzdWU3ODU1ODg5NDI= | 1187 | extra_body_script() support for script type="module" | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 1 | 2021-01-14T02:01:47Z | 2021-01-24T21:21:44Z | 2021-01-14T02:14:39Z | OWNER | Follows #1186. The Relevant docs: https://docs.datasette.io/en/stable/plugin_hooks.html#extra-body-script-template-database-table-columns-view-name-request-datasette |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
785573793 | MDU6SXNzdWU3ODU1NzM3OTM= | 1186 | script type="module" support | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 1 | 2021-01-14T01:17:47Z | 2021-01-24T21:21:41Z | 2021-01-14T01:50:58Z | OWNER | Custom JavaScript can be loaded in |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
784628163 | MDU6SXNzdWU3ODQ2MjgxNjM= | 1185 | "Statement may not contain PRAGMA" error is not strictly true | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 3 | 2021-01-12T22:07:10Z | 2021-01-24T21:21:37Z | 2021-01-12T22:26:26Z | OWNER | It says "Statement may not contain PRAGMA" - but that's not actually true. Datasette has an allow-list of PRAGMA that are OK - in this case there was a typo in So the error message is misleading. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
783714076 | MDU6SXNzdWU3ODM3MTQwNzY= | 1184 | request.full_path property | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 0 | 2021-01-11T21:21:58Z | 2021-01-24T21:21:16Z | 2021-01-11T21:34:47Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/datasette/issues/1179#issuecomment-755495387 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
782692159 | MDU6SXNzdWU3ODI2OTIxNTk= | 1182 | Retire "Ecosystem" page in favour of datasette.io/plugins and /tools | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 3 | 2021-01-09T21:54:47Z | 2021-01-24T21:21:09Z | 2021-01-09T22:17:28Z | OWNER | https://docs.datasette.io/en/stable/ecosystem.html is no longer needed. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
780267857 | MDU6SXNzdWU3ODAyNjc4NTc= | 1178 | Use force_https_urls on when deploying with Cloud Run | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 9 | 2021-01-06T08:20:55Z | 2021-01-24T21:21:05Z | 2021-01-06T18:24:47Z | OWNER | Original title: datasette.absolute_url() should return https:// not http:// on Cloud Run https://latest-with-plugins.datasette.io/github/issue_comments.Notebook?_labels=on currently provides http:// URLs where they should be https://. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
772438273 | MDU6SXNzdWU3NzI0MzgyNzM= | 1157 | Use time.perf_counter() instead of time.time() to measure performance | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 1 | 2020-12-21T20:21:41Z | 2021-01-24T21:20:42Z | 2020-12-21T21:49:20Z | OWNER | I do that in a bunch of places: https://ripgrep.datasette.io/-/ripgrep?pattern=time%28%29&literal=on&glob=datasette%2F%2A%2A https://docs.python.org/3/library/time.html#time.perf_counter
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
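A minimal sketch of the change described in issue 1157 above: `time.perf_counter()` is a monotonic, high-resolution clock intended for measuring elapsed durations, unlike `time.time()`, which can jump when the system clock is adjusted.

```python
import time

def do_expensive_work():
    sum(i * i for i in range(1_000_000))

start = time.perf_counter()
do_expensive_work()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"took {elapsed_ms:.2f} ms")
```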
772408750 | MDU6SXNzdWU3NzI0MDg3NTA= | 1156 | Rename _schemas to _internal | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 1 | 2020-12-21T19:27:58Z | 2021-01-24T21:20:39Z | 2020-12-21T19:51:18Z | OWNER | I like _internal better as a name. Refs #1154 #1150 #1155 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
742011049 | MDU6SXNzdWU3NDIwMTEwNDk= | 1091 | .json and .csv exports fail to apply base_url | simonw 9599 | closed | 0 | Datasette 0.54 6346396 | 22 | 2020-11-12T23:45:16Z | 2021-01-24T21:20:24Z | 2021-01-09T22:19:29Z | OWNER |
Originally posted by @tballison in https://github.com/simonw/datasette/issues/865#issuecomment-726385422 |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
760605882 | MDU6SXNzdWU3NjA2MDU4ODI= | 1135 | Feature: --create option to create database file if it does not yet exist | simonw 9599 | closed | 0 | 0 | 2020-12-09T19:23:58Z | 2021-01-24T21:19:39Z | 2020-12-09T19:45:52Z | OWNER | I'd like to be able to tell people to run the following in the Datasette documentation to get started:
This would give them a local Datasette instance with the ability to drag-and-drop CSV files directly into it. Just one catch: I don't want to have to talk them through creating an empty SQLite database file. So I want to add a new --create option (a sketch follows this record). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
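A sketch of the workflow described in issue 1135 above, assuming the `--create` option that `datasette serve` gained; the plugin name is an assumption based on the drag-and-drop CSV mention in the issue.

```
pip install datasette datasette-upload-csvs
# --create makes datasette create data.db if the file does not exist yet
datasette data.db --create
```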
791381623 | MDU6SXNzdWU3OTEzODE2MjM= | 1197 | DB size limit for publishing with Heroku | mtdukes 1186275 | closed | 0 | 1 | 2021-01-21T18:08:43Z | 2021-01-24T20:53:44Z | 2021-01-24T20:53:44Z | NONE | Hello, I tried searching for this, but can't seem to get a great answer: Does anybody know the size limit for databases deploying to Heroku? The files I'm working with are pretty large, but I might be able to pare them down if I have a limit in mind. I'm getting the following error when running
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
792625812 | MDU6SXNzdWU3OTI2MjU4MTI= | 1198 | Plugin testing documentation on using pytest-httpx | simonw 9599 | closed | 0 | 1 | 2021-01-23T18:46:16Z | 2021-01-24T20:40:38Z | 2021-01-24T20:38:43Z | OWNER | I keep on having to figure this out: if you use the https://pypi.org/project/pytest-httpx/ fixture to write tests against mocked external APIs, they will fail because that module will break Datasette's own You can fix this using:
See https://github.com/simonw/datasette-indieauth/blob/1.2/tests/test_indieauth.py (a sketch follows this record). I can add this tip to the https://docs.datasette.io/en/stable/testing_plugins.html page. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
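A sketch of the fix described in issue 1198 above, assuming pytest-httpx's `non_mocked_hosts` fixture, which tells it to leave requests to the listed hosts unmocked so Datasette's internal client keeps working.

```python
import pytest

@pytest.fixture
def non_mocked_hosts():
    # pytest-httpx mocks every host except these, so datasette.client
    # requests to "localhost" still reach the real application.
    return ["localhost"]
```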
725996507 | MDU6SXNzdWU3MjU5OTY1MDc= | 1036 | Make it possible to download BLOB data from the Datasette UI | simonw 9599 | closed | 0 | 0.51 6026070 | 16 | 2020-10-20T22:47:56Z | 2021-01-18T17:45:00Z | 2020-10-25T00:14:52Z | OWNER | Currently you can only extract binary BLOB data as base64-encoded JSON, which is not user friendly at all. It should always be possible for end-users to get the binary data out. I'm worried about XSS vulnerabilities here, but hopefully sending |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
743400216 | MDU6SXNzdWU3NDM0MDAyMTY= | 11 | Error thrown: sqlite3.OperationalError: table users has no column named lastName | beaugunderson 61791 | closed | 0 | 2 | 2020-11-16T01:21:18Z | 2021-01-18T04:35:22Z | 2021-01-18T04:35:22Z | NONE | Just installed
|
swarm-to-sqlite 205429375 | issue | { "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/11/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
787900412 | MDU6SXNzdWU3ODc5MDA0MTI= | 222 | .m2m() should accept alter=True parameter | simonw 9599 | closed | 0 | 0 | 2021-01-18T04:15:43Z | 2021-01-18T04:26:10Z | 2021-01-18T04:26:10Z | OWNER | sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||||
783910901 | MDU6SXNzdWU3ODM5MTA5MDE= | 221 | .add_missing_columns() does not take case insensitivity into account | simonw 9599 | closed | 0 | 0 | 2021-01-12T05:01:00Z | 2021-01-12T23:17:33Z | 2021-01-12T23:17:33Z | OWNER | SQLite columns are case insensitive - but the |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
497170355 | MDU6SXNzdWU0OTcxNzAzNTU= | 576 | Documented internals API for use in plugins | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 10 | 2019-09-23T15:28:50Z | 2021-01-05T23:12:51Z | 2021-01-05T23:12:37Z | OWNER | Quite a few of the plugin hooks make a datasette instance available to the plugin. This means it should provide a documented, stable API so that plugin authors can rely on it. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
435819321 | MDU6SXNzdWU0MzU4MTkzMjE= | 436 | 400 Error when trying to register new user via https://publish.datasettes.com/ | nniiicc 317694 | closed | 0 | 1 | 2019-04-22T17:55:00Z | 2021-01-04T20:15:42Z | 2021-01-04T20:15:41Z | NONE | Behavior: When registering a new user via Zeit - confirmation is sent and screen acknowledges registered user... When clicking grant access the next screen is a white 400 error message. Replicated: Chrome and Firefox; 2 different email accounts |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
459537047 | MDU6SXNzdWU0NTk1MzcwNDc= | 517 | Add unit test for "static" mechanism in plugins | simonw 9599 | closed | 0 | 1 | 2019-06-23T05:03:31Z | 2021-01-04T20:15:19Z | 2021-01-04T20:15:19Z | OWNER | Split out from #272 - this is actually quite tricky. Here's the relevant code: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
377156339 | MDU6SXNzdWUzNzcxNTYzMzk= | 371 | datasette publish digitalocean plugin | psychemedia 82988 | closed | 0 | 3 | 2018-11-04T14:07:41Z | 2021-01-04T20:14:28Z | 2021-01-04T20:14:28Z | CONTRIBUTOR | Provide support for launching datasette on Digital Ocean. Example: Deploy Docker containers into Digital Ocean. Digital Ocean also has a preconfigured VM running Docker that can be launched from the command line via the Digital Ocean API: Docker One-Click Application. Related: - Launching containers in Digital Ocean servers running docker: How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04 - How To Use Doctl, the Official DigitalOcean Command-Line Client |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
274264175 | MDU6SXNzdWUyNzQyNjQxNzU= | 102 | datasette publish elasticbeanstalk | simonw 9599 | closed | 0 | 1 | 2017-11-15T18:48:31Z | 2021-01-04T20:13:20Z | 2021-01-04T20:13:19Z | OWNER | It looks like Elastic Beanstalk is the most convenient way to deploy a docker container to AWS without first deploying a cluster. https://aws.amazon.com/blogs/devops/dockerizing-a-python-web-app/ looks helpful. We would need to automate the deployment with Boto: http://boto3.readthedocs.io/en/latest/reference/services/elasticbeanstalk.html |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/102/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
315142414 | MDU6SXNzdWUzMTUxNDI0MTQ= | 221 | Allow plugins to add new cli sub commands | simonw 9599 | closed | 0 | 3 | 2018-04-17T16:40:13Z | 2021-01-04T20:12:14Z | 2021-01-04T20:12:14Z | OWNER | I could then test this out by having https://github.com/simonw/csvs-to-sqlite register itself as a plugin |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
267739593 | MDU6SXNzdWUyNjc3Mzk1OTM= | 18 | See if I can get a websockets interface working | simonw 9599 | closed | 0 | 1 | 2017-10-23T16:46:41Z | 2021-01-04T20:05:52Z | 2021-01-04T20:05:48Z | OWNER | Since I am already running on Sanic, how hard would it be to add a websocket endpoint that lets you talk to sqlite interactively? Could this be used to efficiently support streaming in answers to giant queries? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/18/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
274265878 | MDU6SXNzdWUyNzQyNjU4Nzg= | 103 | datasette publish appengine | simonw 9599 | closed | 0 | 1 | 2017-11-15T18:54:18Z | 2021-01-04T20:05:14Z | 2021-01-04T20:05:14Z | OWNER | Similar approach to Heroku, discussed in #90 Looks like this could be pretty easy: https://cloud.google.com/appengine/docs/flexible/python/quickstart |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777677671 | MDU6SXNzdWU3Nzc2Nzc2NzE= | 1169 | Prettier package not actually being cached | benpickles 3637 | closed | 0 | 4 | 2021-01-03T17:04:41Z | 2021-01-04T19:52:34Z | 2021-01-04T19:52:33Z | CONTRIBUTOR | With the current configuration Prettier seems to be installed on every run - which can be seen from the output:
Prettier isn't explicitly being installed (it's surprising that actually installing the dependencies isn't included in the actions/cache docs) but it turns out that
I think there are a couple of approaches to tackling this, you could manually install/cache Prettier within the action, or add a I've tested the |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777707544 | MDU6SXNzdWU3Nzc3MDc1NDQ= | 219 | reset_counts() method and command | simonw 9599 | closed | 0 | 4 | 2021-01-03T20:08:28Z | 2021-01-03T20:59:37Z | 2021-01-03T20:59:37Z | OWNER |
Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/215#issuecomment-753545757 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777535402 | MDU6SXNzdWU3Nzc1MzU0MDI= | 215 | Use _counts to speed up counts | simonw 9599 | closed | 0 | 9 | 2021-01-02T22:30:17Z | 2021-01-03T20:19:40Z | 2021-01-03T20:19:40Z | OWNER | Utility mechanism for taking advantage of the new These can trigger automatically if the |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
738514367 | MDU6SXNzdWU3Mzg1MTQzNjc= | 202 | sqlite-utils insert -f colname - for configuring full-text search | simonw 9599 | closed | 0 | 2 | 2020-11-08T17:30:09Z | 2021-01-03T05:00:36Z | 2021-01-03T05:00:27Z | OWNER | A mechanism for specifying columns that should be configured for full-text search as part of the initial data import:
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777543336 | MDU6SXNzdWU3Nzc1NDMzMzY= | 217 | Rename .escape() to .quote() | simonw 9599 | closed | 0 | 2 | 2021-01-02T23:40:52Z | 2021-01-03T04:27:38Z | 2021-01-03T04:15:23Z | OWNER |
This method has never been documented so I'm going to rename it without a major version bump. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777540352 | MDU6SXNzdWU3Nzc1NDAzNTI= | 216 | database.triggers_dict introspection property | simonw 9599 | closed | 0 | 1 | 2021-01-02T23:13:00Z | 2021-01-03T04:27:14Z | 2021-01-03T04:25:36Z | OWNER | Following #211 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777530107 | MDU6SXNzdWU3Nzc1MzAxMDc= | 214 | sqlite-utils enable-counts command | simonw 9599 | closed | 0 | 0 | 2021-01-02T21:45:48Z | 2021-01-03T04:26:44Z | 2021-01-03T04:26:44Z | OWNER | The CLI version of #212 and #213.
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777560474 | MDU6SXNzdWU3Nzc1NjA0NzQ= | 218 | "sqlite-utils triggers" command | simonw 9599 | closed | 0 | 1 | 2021-01-03T02:34:50Z | 2021-01-03T03:49:51Z | 2021-01-03T03:03:35Z | OWNER | A command to list the triggers in the database.
It can optionally take one or more tables (a sketch follows this record):
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
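A sketch of the command described in issue 218 above, assuming the `sqlite-utils triggers` sub-command; the database and table names are examples.

```
# List every trigger in the database
sqlite-utils triggers mydb.db

# Or only the triggers attached to specific tables
sqlite-utils triggers mydb.db dogs cats
```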
777529979 | MDU6SXNzdWU3Nzc1Mjk5Nzk= | 213 | db.enable_counts() method | simonw 9599 | closed | 0 | 2 | 2021-01-02T21:44:55Z | 2021-01-02T22:04:02Z | 2021-01-02T22:04:02Z | OWNER | Following #212 it would be useful if there was a utility method for enabling counts for ALL tables in a database:
Open question: should this set up triggers for virtual tables such as FTS tables? Could that break things? (A sketch of the proposed method follows this record.) |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
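A sketch of the utility method discussed in issue 213 above, assuming the `enable_counts()` and `cached_counts()` APIs that sqlite-utils ships; the table and data are invented.

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["dogs"].insert({"name": "Cleo"})

# Create insert/delete triggers on every existing table so a cached _counts
# table is kept up to date as rows are added and removed (see issue 212 below).
db.enable_counts()

db["dogs"].insert({"name": "Pancakes"})
print(db.cached_counts())  # cached row counts per table
```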
777392020 | MDU6SXNzdWU3NzczOTIwMjA= | 212 | Mechanism for maintaining cache of table counts using triggers | simonw 9599 | closed | 0 | 1 | 2021-01-02T02:58:53Z | 2021-01-02T21:40:27Z | 2021-01-02T21:40:27Z | OWNER | Counting all of the rows in a large table is expensive - this is one of the main causes of performance problems in Datasette when running against large databases. Carefully constructed SQL triggers could be used to maintain accurate cached counts for a table, by incrementing and decrementing a counter every time a row is inserted or deleted.
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
777386465 | MDU6SXNzdWU3NzczODY0NjU= | 211 | table.triggers_dict introspection property | simonw 9599 | closed | 0 | 0 | 2021-01-02T02:04:00Z | 2021-01-02T02:10:10Z | 2021-01-02T02:10:10Z | OWNER |
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
767685961 | MDU6SXNzdWU3Njc2ODU5NjE= | 210 | Support of RData files | PeterBailey 23739126 | closed | 0 | 1 | 2020-12-15T15:04:14Z | 2021-01-02T00:02:40Z | 2021-01-02T00:02:40Z | NONE | Hi Simon, Would be great if you could ingest RData files! I could do this in a few lines of code but I am too lazy - sorry! Peter |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
766156875 | MDU6SXNzdWU3NjYxNTY4NzU= | 209 | Test failure with sqlite 3.34 in test_cli.py::test_optimize | meatcar 191622 | closed | 0 | 1 | 2020-12-14T08:58:18Z | 2021-01-01T23:52:46Z | 2021-01-01T23:52:46Z | CONTRIBUTOR | pytest output:
Came across this while packaging:
```
docker run --rm -it alpine:edge /bin/sh
apk update && apk add git sqlite python3 gcc python3-dev musl-dev && python3 -m ensurepip
git clone https://github.com/simonw/sqlite-utils.git
cd sqlite-utils/
pip3 install -e .[test]
pytest
```
This definitely works on sqlite v3.33. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
770436876 | MDU6SXNzdWU3NzA0MzY4NzY= | 1150 | Maintain an in-memory SQLite table of connected databases and their tables | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 32 | 2020-12-17T23:02:13Z | 2020-12-27T14:51:39Z | 2020-12-18T22:34:12Z | OWNER | I want Datasette to have its own internal metadata about connected tables, to power features like a paginated searchable homepage in #461. I want this to be a SQLite table. This could also be part of the directory scanning mechanism prototyped in #672 - where Datasette can be set to continually scan a directory for new database files that it can serve. Also relevant to the Datasette Library concept in #417. |
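A rough sketch of the idea using a plain in-memory sqlite3 connection; the real _internal schema is not defined by this sketch, so the table and column names are illustrative:
```python
import sqlite3

# One in-memory catalog tracking every connected database file and its tables
internal = sqlite3.connect(":memory:")
internal.executescript(
    """
    CREATE TABLE databases (name TEXT PRIMARY KEY, path TEXT);
    CREATE TABLE tables (database_name TEXT, name TEXT, PRIMARY KEY (database_name, name));
    """
)


def register_database(name, path):
    # Record the database itself, then introspect and record its tables
    internal.execute("INSERT OR REPLACE INTO databases VALUES (?, ?)", (name, path))
    conn = sqlite3.connect(path)
    for (table_name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ):
        internal.execute(
            "INSERT OR REPLACE INTO tables VALUES (?, ?)", (name, table_name)
        )
```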
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1150/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
456568880 | MDU6SXNzdWU0NTY1Njg4ODA= | 509 | Support opening multiple databases with the same stem | simonw 9599 | closed | 0 | simonw 9599 | Datasette 1.0 3268330 | 4 | 2019-06-15T19:32:00Z | 2020-12-22T20:04:35Z | 2020-12-22T20:04:35Z | OWNER | e.g. I should be able to do this:
This currently errors because you can't have two databases taking the Instead, how about in this particular case assigning the second database |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/509/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||
771216293 | MDU6SXNzdWU3NzEyMTYyOTM= | 1155 | Better internal database_name for _internal database | simonw 9599 | closed | 0 | 9 | 2020-12-18T22:47:27Z | 2020-12-22T20:04:35Z | 2020-12-22T20:04:35Z | OWNER | https://latest.datasette.io/login-as-root then https://latest.datasette.io/_internal That |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
612151767 | MDU6SXNzdWU2MTIxNTE3Njc= | 15 | Expose scores from ZCOMPUTEDASSETATTRIBUTES | simonw 9599 | closed | 0 | 7 | 2020-05-04T20:36:07Z | 2020-12-20T04:44:22Z | 2020-05-05T00:11:45Z | MEMBER | The Apple Photos database has a |
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
771324837 | MDU6SXNzdWU3NzEzMjQ4Mzc= | 53 | --since support for favorites | anotherjesse 27 | closed | 0 | 1 | 2020-12-19T07:08:23Z | 2020-12-19T07:47:11Z | 2020-12-19T07:47:11Z | NONE | Having support for https://twittercommunity.com/t/cant-get-all-favorite-tweets-by-rest-api/22007/3 The api seems to take an optional |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
615474990 | MDU6SXNzdWU2MTU0NzQ5OTA= | 21 | bpylist.archiver.CircularReference: archive has a cycle with uid(13) | simonw 9599 | closed | 0 | 11 | 2020-05-10T20:58:06Z | 2020-12-19T07:44:49Z | 2020-05-10T21:57:13Z | MEMBER | ```
% python -i $(which photos-to-sqlite) apple-photos photos.db During handling of the above exception, another exception occurred: Traceback (most recent call last):
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/bin/photos-to-sqlite", line 11, in <module>
load_entry_point('photos-to-sqlite', 'console_scripts', 'photos-to-sqlite')()
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/click/core.py", line 829, in call
return self.main(args, kwargs)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/simon/Dropbox/Development/photos-to-sqlite/photos_to_sqlite/cli.py", line 249, in apple_photos
photo_row = osxphoto_to_row(sha256, photo)
File "/Users/simon/Dropbox/Development/photos-to-sqlite/photos_to_sqlite/utils.py", line 91, in osxphoto_to_row
place = photo.place
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/osxphotos/photoinfo.py", line 614, in place
self._place = PlaceInfo5(self._info["reverse_geolocation"])
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/osxphotos/placeinfo.py", line 505, in init
self._plrevgeoloc = archiver.unarchive(revgeoloc_bplist)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 16, in unarchive
return Unarchive(plist).top_object()
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 256, in top_object
return self.decode_object(self.top_uid)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 247, in decode_object
obj = klass.decode_archive(ArchivedObject(raw_obj, self))
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/osxphotos/placeinfo.py", line 126, in decode_archive
mapItem = archive.decode("mapItem")
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 140, in decode
return self._unarchiver.decode_key(self._object, key)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 216, in decode_key
return self.decode_object(val)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 247, in decode_object
obj = klass.decode_archive(ArchivedObject(raw_obj, self))
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/osxphotos/placeinfo.py", line 180, in decode_archive
sortedPlaceInfos = archive.decode("sortedPlaceInfos")
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 140, in decode
return self._unarchiver.decode_key(self._object, key)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 216, in decode_key
return self.decode_object(val)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 247, in decode_object
obj = klass.decode_archive(ArchivedObject(raw_obj, self))
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 112, in decode_archive
return [archive._decode_index(index) for index in uids]
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 112, in <listcomp>
return [archive._decode_index(index) for index in uids]
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 137, in _decode_index
return self._unarchiver.decode_object(index)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 247, in decode_object
obj = klass.decode_archive(ArchivedObject(raw_obj, self))
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/osxphotos/placeinfo.py", line 217, in decode_archive
placeType = archive.decode("placeType")
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 140, in decode
return self._unarchiver.decode_key(self._object, key)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 216, in decode_key
return self.decode_object(val)
File "/Users/simon/.local/share/virtualenvs/photos-to-sqlite-0uGSHd6e/lib/python3.8/site-packages/bpylist/archiver.py", line 227, in decode_object
raise CircularReference(index)
bpylist.archiver.CircularReference: archive has a cycle with uid(13)
|
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
771316301 | MDU6SXNzdWU3NzEzMTYzMDE= | 31 | Searching for "github-to-sqlite" throws an error | simonw 9599 | closed | 0 | 4 | 2020-12-19T06:07:20Z | 2020-12-19T06:18:07Z | 2020-12-19T06:18:07Z | MEMBER | https://datasette.io/-/beta?q=github-to-sqlite&sort=relevance&type=blog.db%2Fentries - "no such column: to" |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/31/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
767561886 | MDU6SXNzdWU3Njc1NjE4ODY= | 1148 | Syntax error with + symbol when deployed to Vercel | kaihendry 765871 | closed | 0 | 2 | 2020-12-15T12:53:12Z | 2020-12-16T21:57:42Z | 2020-12-16T21:51:54Z | NONE | Works locally:
Copy the SQL into https://aws-partners-singapore.vercel.app/partners and get syntax error: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
769282206 | MDU6SXNzdWU3NjkyODIyMDY= | 30 | Upgrade to sqlite-utils 3.0 (tests are failing) | simonw 9599 | closed | 0 | 0 | 2020-12-16T21:25:15Z | 2020-12-16T21:27:11Z | 2020-12-16T21:27:10Z | MEMBER | ```
results = beta_db["search_index"].search("run")
if use_porter:
    assert results == [
        (
            "dogs.db/dogs",
            "1",
            "Cleo",
            "2020-08-22 04:41:33",
            1,
            0,
            "running",
            None,
            None,
        )
    ]
else:
|
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/30/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
769150394 | MDU6SXNzdWU3NjkxNTAzOTQ= | 58 | Readme HTML has broken internal links | simonw 9599 | closed | 0 | 2 | 2020-12-16T17:58:11Z | 2020-12-16T19:20:14Z | 2020-12-16T19:20:14Z | MEMBER | From https://github.com/simonw/datasette.io/issues/46 ```html ... <svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg>Filtering tables``` So this is a bug in GitHub's API, but we need to work around it. |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/58/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
758944006 | MDU6SXNzdWU3NTg5NDQwMDY= | 57 | --readme throws 404 error if README does not exist in repo | simonw 9599 | closed | 0 | 0 | 2020-12-07T23:58:49Z | 2020-12-16T18:17:54Z | 2020-12-16T18:17:54Z | MEMBER | It should fail silently (populate the column with a null) instead. |
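A sketch of the suggested behaviour, assuming a plain requests call against the GitHub readme endpoint (the helper name is hypothetical):
```python
import requests


def fetch_readme_html(owner, repo, token=None):
    # Hypothetical helper: return rendered README HTML, or None (a null in the
    # column) when the repository has no README and GitHub responds with a 404.
    headers = {"Accept": "application/vnd.github.v3.html"}
    if token:
        headers["Authorization"] = f"token {token}"
    response = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/readme", headers=headers
    )
    if response.status_code == 404:
        return None
    response.raise_for_status()
    return response.text
```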
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/57/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
761713079 | MDU6SXNzdWU3NjE3MTMwNzk= | 1138 | "Powered by Datasette" should link to new datasette.io site | simonw 9599 | closed | 0 | 0 | 2020-12-10T23:33:41Z | 2020-12-15T02:28:10Z | 2020-12-10T23:37:14Z | OWNER | datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||||
763283616 | MDU6SXNzdWU3NjMyODM2MTY= | 207 | sqlite-utils analyze-tables command | simonw 9599 | closed | 0 | 4 | 2020-12-12T04:33:12Z | 2020-12-13T07:25:23Z | 2020-12-13T07:20:13Z | OWNER | A command which analyzes a table (potentially taking quite a while if the table is large) and outputs information for each column - things like:
The command can output this information to the terminal, but it should also provide an option for writing the information to a database table so it can be explored later. |
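A sketch of the kind of per-column summary such a command could compute; the specific statistics and the helper name are illustrative, not the final design:
```python
import sqlite3


def analyze_column(conn, table, column):
    # Illustrative per-column statistics: total rows, nulls, distinct values,
    # and the most common values.
    return {
        "total_rows": conn.execute(f"SELECT count(*) FROM [{table}]").fetchone()[0],
        "null_rows": conn.execute(
            f"SELECT count(*) FROM [{table}] WHERE [{column}] IS NULL"
        ).fetchone()[0],
        "distinct_values": conn.execute(
            f"SELECT count(DISTINCT [{column}]) FROM [{table}]"
        ).fetchone()[0],
        "most_common": conn.execute(
            f"SELECT [{column}], count(*) AS n FROM [{table}] "
            f"GROUP BY [{column}] ORDER BY n DESC LIMIT 5"
        ).fetchall(),
    }


# Example: analyze_column(sqlite3.connect("github.db"), "issues", "state")
```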
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
717699884 | MDU6SXNzdWU3MTc2OTk4ODQ= | 998 | Wide tables should scroll horizontally within the page | simonw 9599 | closed | 0 | 0.51 6026070 | 8 | 2020-10-08T22:13:27Z | 2020-12-11T09:25:09Z | 2020-10-22T01:12:26Z | OWNER | Wrap the main table in |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
551834842 | MDU6SXNzdWU1NTE4MzQ4NDI= | 659 | README information is obscured by feature history | labstersteve 55480210 | closed | 0 | 1 | 2020-01-18T22:34:51Z | 2020-12-10T23:28:51Z | 2020-12-10T23:28:51Z | NONE | While it's sometimes valuable to know how a project has developed, there is usually little justification for including this information in the README, and certainly not immediately after other key information such as "what does this package do, and who might want to use it?" Might I recommend that the feature history is migrated to an Appendix in the documentation? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
761706858 | MDU6SXNzdWU3NjE3MDY4NTg= | 1137 | Update README to reflect new datasette.io site | simonw 9599 | closed | 0 | 0 | 2020-12-10T23:22:06Z | 2020-12-10T23:28:50Z | 2020-12-10T23:28:50Z | OWNER | Can finally close #659. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
760960559 | MDU6SXNzdWU3NjA5NjA1NTk= | 205 | sqlite3.OperationalError: near "(": syntax error | kaihendry 765871 | closed | 0 | 2 | 2020-12-10T06:44:40Z | 2020-12-10T19:18:22Z | 2020-12-10T07:24:23Z | NONE | The SQLite version is 3.22.0 (2018-01-22 18:45:57 0c55d179733b46d8d0ba4d88e01a25e10677046ee3da1d5b1581e86726f2alt1) with sqlite-utils, version 3.0. It fails here: https://github.com/kaihendry/aws-partners-datasette/runs/1528432635?check_suite_focus=true I'm not sure where the problem is, since it works fine locally on an Arch Linux system running SQLite 3.34.0 (2020-12-01 16:14:00 a26b6597e3ae272231b96f9982c3bcc17ddec2f2b6eb4df06a224b91089fed5b): https://github.com/kaihendry/aws-partners-datasette/blob/main/create-summary-view.sh Maybe I need to bump up from ubuntu-latest to ? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
760312579 | MDU6SXNzdWU3NjAzMTI1Nzk= | 1134 | "_searchmode=raw" throws an index out of range error when combined with "_search_COLUMN" | clausjuhl 2181410 | closed | 0 | 4 | 2020-12-09T13:05:37Z | 2020-12-10T05:57:17Z | 2020-12-09T19:56:55Z | NONE | Hi Simon! Maybe it's just me, but when using _searchmode=raw (trying to enable wildcard-searching) in combination with the "_search_COLUMN"-table argument, I get a list index out of range error. When combining with the simpler "_search"-argument everything works, including wildcard-searches. Here's the traceback:
```
Traceback (most recent call last):
File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py", line 122, in route_path
return await view(new_scope, receive, send)
File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/utils/asgi.py", line 196, in view
request, scope["url_route"]["kwargs"]
File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py", line 204, in get
request, database, hash, correct_hash_provided, kwargs
File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/base.py", line 342, in view_get
request, database, hash, **kwargs
File "/Users/cjk/.local/share/virtualenvs/minutes-jMDZ8Ssk/lib/python3.7/site-packages/datasette/views/table.py", line 393, in data
search_col = key.split("_search_", 1)[1]
IndexError: list index out of range
``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
760621356 | MDU6SXNzdWU3NjA2MjEzNTY= | 1136 | Establish pattern for release branches to support bug fixes | simonw 9599 | closed | 0 | 8 | 2020-12-09T19:48:18Z | 2020-12-09T20:17:02Z | 2020-12-09T20:14:41Z | OWNER | I want to fix the bug in #1134 and ship it as Datasette 0.52.5 - but the main branch already contains unreleased changes. I'm not ready for a feature release, so instead I want to release 0.52.5 with just that bug fix. This is the first time I will have shipped a release from a branch. I need to establish that pattern and add it to the documentation in https://docs.datasette.io/en/stable/contributing.html |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
758899581 | MDU6SXNzdWU3NTg4OTk1ODE= | 1132 | New filter: array does not contain | simonw 9599 | closed | 0 | 1 | 2020-12-07T22:28:20Z | 2020-12-07T22:50:36Z | 2020-12-07T22:41:09Z | OWNER | I want to see all of my GitHub repos that are tagged |
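A sketch of how an "array does not contain" condition can be expressed in SQL with SQLite's json_each(); the table, column and tag values below are made up for illustration:
```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE repos (name TEXT, topics TEXT)")  # topics is a JSON array
db.executemany(
    "INSERT INTO repos VALUES (?, ?)",
    [
        ("datasette", json.dumps(["sqlite", "datasette-io"])),
        ("sqlite-utils", json.dumps(["sqlite", "cli"])),
    ],
)
# Keep rows whose JSON array has no element equal to the given value
rows = db.execute(
    """
    SELECT name FROM repos
    WHERE NOT EXISTS (
        SELECT 1 FROM json_each(repos.topics) WHERE json_each.value = :value
    )
    """,
    {"value": "datasette-io"},
).fetchall()
print(rows)  # [('sqlite-utils',)]
```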
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);