issues
50 rows where author_association = "NONE", comments = 1, state = "closed" and type = "issue" sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2007893839 | I_kwDOCGYnMM53rgdP | 605 | Insert fails with `Error: Python int too large to convert to SQLite INTEGER`; can we use `NUMERIC` here? | Zac-HD 12229877 | closed | 0 | 1 | 2023-11-23T10:19:46Z | 2023-12-08T05:07:54Z | 2023-12-08T05:07:54Z | NONE | I'm currently working on a new feature for Hypothesis, where we can dump a tidy jsonlines table of all the test cases we tried - including arguments, outcomes, timings, coverage, etc. Exploring this seems like a perfect case for I originally went to report this as a bug... and then found https://github.com/simonw/sqlite-utils/issues/309#issuecomment-895581038 almost exactly matched my repro 😅 https://github.com/simonw/sqlite-utils/issues/110#issuecomment-626391063 suggests that using After a bit more hacking, "manually cast large integers to float" seems like a decent solution for my particular case, but having written it up I thought I might as well post this issue anyway - I hope it's useful feedback, and won't mind at all if you close as wontfix if it's not. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
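A minimal sketch of the "manually cast large integers to float" workaround mentioned in the row above; it is not taken from the issue itself, and the table and column names are hypothetical. SQLite INTEGER columns hold 8-byte signed values, so anything beyond that range has to be cast (losing precision) or stored another way.

```python
import sqlite_utils

SQLITE_MAX_INT = 2**63 - 1  # largest value an SQLite INTEGER column can hold

def cast_large_ints(record):
    # Replace out-of-range integers with floats so the insert succeeds
    return {
        key: float(value)
        if isinstance(value, int) and abs(value) > SQLITE_MAX_INT
        else value
        for key, value in record.items()
    }

db = sqlite_utils.Database("testcases.db")
rows = [{"test_case": 1, "argument": 2**127}]  # hypothetical oversized value
db["results"].insert_all(cast_large_ints(row) for row in rows)
```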
1907281675 | I_kwDOCGYnMM5xrs8L | 595 | Cascading DELETE not working with Table.delete(pk) | cycle-data 123451970 | closed | 0 | 1 | 2023-09-21T15:46:41Z | 2023-09-25T09:38:57Z | 2023-09-25T09:38:13Z | NONE | Hi! I noticed that when I try to use the delete method of the Table object, the record gets properly deleted from the table, but the cascading delete triggers on foreign keys do not activate.
I tried querying the database directly via DB Browser and the triggers work without any issue. I looked up the source code, and behind the scenes this method is just querying the database normally, so I'm not exactly sure where this behavior comes from. Thank you in advance for your time! |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
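For context on the row above: SQLite only enforces ON DELETE CASCADE foreign keys when PRAGMA foreign_keys is enabled on the connection, which would explain a DELETE issued through the library behaving differently from one issued in DB Browser. A self-contained sketch with hypothetical tables, not taken from the report:

```python
import sqlite_utils

db = sqlite_utils.Database(memory=True)
# Foreign key enforcement is off by default in SQLite; cascades only fire when it is on
db.conn.execute("PRAGMA foreign_keys = ON")

db["artists"].insert({"id": 1, "name": "example"}, pk="id")
db.conn.execute(
    "CREATE TABLE albums (id INTEGER PRIMARY KEY, artist_id INTEGER "
    "REFERENCES artists(id) ON DELETE CASCADE)"
)
db.conn.execute("INSERT INTO albums VALUES (1, 1)")

db["artists"].delete(1)
# With the pragma enabled, the dependent album row is gone as well
print(db.execute("select count(*) from albums").fetchone())  # (0,)
```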
1817281557 | I_kwDOC8SPRc5sUYQV | 37 | cannot use jinja filters in display? | rprimet 10352819 | closed | 0 | 1 | 2023-07-23T20:09:54Z | 2023-07-23T20:18:27Z | 2023-07-23T20:18:26Z | NONE | Hi, I'm trying to have a display function in Dogsheep's ``` {{ display.title }} (source){{ display.snippet|safe }} ``` Unfortunately, rendering fails with a message 'urls is undefined'. The same happens if I'm trying to build a row URL manually, using filters like Any hints? Thanks! |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/37/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1622640374 | I_kwDOCGYnMM5gt4b2 | 534 | ResourceWarning: unclosed file | djhenderson 1244826 | closed | 0 | 1 | 2023-03-14T03:02:18Z | 2023-05-08T19:56:29Z | 2023-05-08T19:56:29Z | NONE | Issuing either
exhibits a ResourceWarning indicating that the CSV file being loaded is not closed.

sqlite-utils --version
sqlite-utils, version 3.30
py --version
Python 3.11.2
Windows Version 10.0.19045 Build 19045
SQLite version 3.41.0 2023-02-21 18:09:37 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1617823309 | I_kwDOJHON9s5gbgZN | 8 | Increase performance using macnotesapp | RhetTbull 41546558 | closed | 0 | 1 | 2023-03-09T18:51:05Z | 2023-03-14T22:00:22Z | 2023-03-14T22:00:21Z | NONE | Neat project! You can probably increase performance using my python interface to Notes, macnotesapp, which uses Scripting Bridge and bulk queries for much better performance than AppleScript. Another related project is PyXA which uses Scripting Bridge to access Notes (and many other apps) and can return all the notes at once as opposed to calling AppleScript for each note. macnotesapp allows you to access multiple accounts and folders as well.

```python
from macnotesapp import NotesApp

# NotesApp() provides interface to Notes.app
notesapp = NotesApp()

# Get list of notes (Note objects for each note)
notes = notesapp.notes()
note = notes[0]
print(
    note.id,
    note.account,
    note.folder,
    note.name,
    note.body,
    note.plaintext,
    note.password_protected,
)
print(note.asdict())
```
 |
apple-notes-to-sqlite 611552758 | issue | { "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1578609658 | I_kwDOBm6k_c5eF6v6 | 2022 | Error 500 - not clear the cause | DavidPratten 1667631 | closed | 0 | 1 | 2023-02-09T20:57:17Z | 2023-02-09T21:13:50Z | 2023-02-09T21:13:50Z | NONE | On the database that I have sent via linkedIn, datasette works great, but the following URL gives a 500 error. http://127.0.0.1:8001/literature/authors_papers?authorId=100550354 The cause of the error is not apparent. Is this expected behaviour? David |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/2022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1501900064 | I_kwDOBm6k_c5ZhS0g | 1966 | Broken link to live demo in Getting started docs | lbellomo 7551922 | closed | 0 | 1 | 2022-12-18T13:17:00Z | 2022-12-31T19:15:19Z | 2022-12-31T19:15:10Z | NONE | The link in Play with a live demo in Getting started to https://fivethirtyeight.datasettes.com/fivethirtyeight is broken and the datasette is no longer working (maybe due to the end of the free tier). |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1199158210 | I_kwDOCGYnMM5HebPC | 423 | .extract() doesn't set foreign key when extracted columns contain NULL value | jlieth 37447552 | closed | 0 | 1 | 2022-04-10T20:05:30Z | 2022-08-27T14:45:04Z | 2022-08-27T14:45:04Z | NONE | I've run into an issue with I'm working with a database with music listening information. Currently it has one large table A simplified demonstration with just

```ipython
In [2]: db = sqlite_utils.Database(memory=True)

In [3]: db["listens"].insert_all([
   ...:     {"id": 1, "track_title": "foo", "album_title": "bar"},
   ...:     {"id": 2, "track_title": "baz", "album_title": None}
   ...: ], pk="id")
Out[3]: <Table listens (id, track_title, album_title)>
```

The track in the first row has an album, the second track doesn't. Now I extract album information into a separate column:

```ipython
In [4]: db["listens"].extract(columns=["album_title"], table="albums", fk_column="album_id")
Out[4]: <Table listens (id, track_title, album_id)>

In [5]: list(db["albums"].rows)
Out[5]: [{'id': 1, 'album_title': 'bar'}, {'id': 2, 'album_title': None}]

In [6]: list(db["listens"].rows)
Out[6]: [{'id': 1, 'track_title': 'foo', 'album_id': 1}, {'id': 2, 'track_title': 'baz', 'album_id': None}]
```

This behaves as expected -- the Now I want to extract the track information as well. Album information belongs to the track so I want to extract both columns to a new table.

```ipython
In [7]: db["listens"].extract(columns=["track_title", "album_id"], table="tracks", fk_column="track_id")
Out[7]: <Table listens (id, track_id)>

In [8]: list(db["tracks"].rows)
Out[8]: [{'id': 1, 'track_title': 'foo', 'album_id': 1}, {'id': 2, 'track_title': 'baz', 'album_id': None}]

In [9]: list(db["listens"].rows)
Out[9]: [{'id': 1, 'track_id': 1}, {'id': 2, 'track_id': None}]
```

Extracting to the Changing the order of extracts doesn't help. I poked around in the source a bit and I believe this line (essentially comparing |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1303169663 | I_kwDOCGYnMM5NrMp_ | 453 | 'unclosed file' warning when using insert_upsert_implementation from Python | makkus 311257 | closed | 0 | 1 | 2022-07-13T09:34:35Z | 2022-07-15T21:52:25Z | 2022-07-15T21:52:21Z | NONE | I'm using the The warning goes away when wrapping the code from this line in a try/finally block like:
I suspect Python closes the reference automatically when the sqlite-utils cli run is done, but since my code doesn't exit, I'm getting the warning. Alternatively, it'd be cool if the 'import csv/tsv' functionality could be added directly to the Database class. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1212701569 | I_kwDOCGYnMM5ISFuB | 427 | sqlite-utils convert date parsing recipe complains about trying to parse "*" | wdccdw 1385831 | closed | 0 | 1 | 2022-04-22T19:27:10Z | 2022-07-02T13:59:59Z | 2022-07-02T13:59:32Z | NONE | Missing values in my dataset are denoted by a single asterisk. I am trying to parse string dates into dates. This works fine for columns without missing values, but, when the column contains "*", I get the following:

```
$ sqlite-utils convert ${dbfile} details dob 'r.parsedate(value)'
[------------------------------------] 0%Traceback (most recent call last):
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/sqlite_utils/db.py", line 2508, in convert_value
    return fn(v)
  File "<string>", line 2, in fn
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/sqlite_utils/recipes.py", line 8, in parsedate
    parser.parse(value, dayfirst=dayfirst, yearfirst=yearfirst).date().isoformat()
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/dateutil/parser/_parser.py", line 1368, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/dateutil/parser/_parser.py", line 643, in parse
    raise ParserError("Unknown string format: %s", timestr)
dateutil.parser._parser.ParserError: Unknown string format: *

Traceback (most recent call last):
  File "/usr/local/bin/sqlite-utils", line 33, in <module>
    sys.exit(load_entry_point('sqlite-utils==3.25.1', 'console_scripts', 'sqlite-utils')())
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/sqlite_utils/cli.py", line 2698, in convert
    db[table].convert(
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/sqlite_utils/db.py", line 2524, in convert
    self.db.execute(sql, where_args or [])
  File "/usr/local/Cellar/sqlite-utils/3.25.1/libexec/lib/python3.9/site-packages/sqlite_utils/db.py", line 458, in execute
    return self.conn.execute(sql, parameters)
sqlite3.OperationalError: user-defined function raised exception
```
 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
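One way to handle the sentinel value described above is to return None (stored as NULL) for the asterisk instead of passing it to the date parser. This sketch uses the Python API rather than the CLI from the report; the database filename is hypothetical:

```python
import sqlite_utils
from sqlite_utils.recipes import parsedate

db = sqlite_utils.Database("details.db")  # hypothetical path; the report uses ${dbfile}
db["details"].convert(
    "dob",
    # Skip missing values denoted by "*" so dateutil never sees them
    lambda value: parsedate(value) if value and value != "*" else None,
)
```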
1251710928 | I_kwDOBm6k_c5Km5fQ | 1751 | Add scrollbars to table presentation in default layout | knutwannheden 408765 | closed | 0 | 1 | 2022-05-28T19:44:57Z | 2022-05-28T19:52:17Z | 2022-05-28T19:52:17Z | NONE | (As you will be able to tell from the terminology I use, I am not a frontend guy, but I hope you will understand.) When a table is wide and needs horizontal scrolling to see the columns towards the end, the user needs to scroll horizontally. However, since the container for the HTML table ( I understand that I could provide my own template and / or CSS, but I think it would probably make sense to adjust the default in this regard. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1091819089 | I_kwDOCGYnMM5BE9ZR | 360 | MemoryError | nzaar9 559453 | closed | 0 | 1 | 2022-01-01T13:39:17Z | 2022-03-21T04:22:46Z | 2022-03-21T04:22:46Z | NONE | Hi, when dealing with a large JSON file (~170GB) I got the following error
|
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
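The error itself is cut off in the row above, but a ~170GB JSON array generally cannot be parsed in memory. One common approach, assuming the data can first be converted to newline-delimited JSON, is to stream records into the table; file and table names here are hypothetical:

```python
import json
import sqlite_utils

def stream_records(path):
    # Yield one record per line so only a single object is in memory at a time
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

db = sqlite_utils.Database("big.db")
db["items"].insert_all(stream_records("data.jsonl"), batch_size=10_000)
```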
1170497629 | I_kwDOBm6k_c5FxGBd | 1662 | [feature request] Publish to fully static website | contrun 32609395 | closed | 0 | 1 | 2022-03-16T03:32:28Z | 2022-03-19T00:42:23Z | 2022-03-19T00:42:23Z | NONE | It seems that currently all datasette publish targets require a real backend server which is able to query the database and send results back to the frontend. There are a few projects that download portions of data on demand from a SQLite database URL and present it directly to the user. These methods leverage WebAssembly under the hood. I think datasette is a perfect use case for this technology. Below are a few examples of querying a SQLite database directly from the frontend. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1662/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
723708310 | MDU6SXNzdWU3MjM3MDgzMTA= | 188 | About loading spatialite | aborruso 30607 | closed | 0 | 1 | 2020-10-17T08:47:02Z | 2022-02-05T00:04:26Z | 2020-10-17T08:52:58Z | NONE | Hi @simonw , If I run
I have If I run
I have
How do I properly load the SpatiaLite extension in sqlite-utils? Thank you very much |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
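As a general sketch of loading SpatiaLite through the Python API (the module name/path is an assumption and varies by platform; this is not taken from the issue):

```python
import sqlite_utils

db = sqlite_utils.Database("spatial.db")
# Extension loading is disabled by default in the sqlite3 driver
db.conn.enable_load_extension(True)
# The module name/path depends on how SpatiaLite was installed on this machine
db.conn.load_extension("mod_spatialite")
print(db.execute("select spatialite_version()").fetchone())
```

The sqlite-utils CLI exposes the same capability through its --load-extension option.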
1076057610 | I_kwDOBm6k_c5AI1YK | 1546 | validating the sql | jadsongmatos 50336793 | closed | 0 | 1 | 2021-12-09T21:35:57Z | 2021-12-18T02:05:17Z | 2021-12-18T02:05:16Z | NONE | Could someone tell me which part of the code is responsible for validating the SQL, guaranteeing that a table can only be read? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1079422215 | I_kwDOCGYnMM5AVq0H | 357 | pytest-runner is not required | pgajdos 4067843 | closed | 0 | 1 | 2021-12-14T07:51:24Z | 2021-12-16T20:43:19Z | 2021-12-16T20:43:13Z | NONE | The deprecated pytest-runner package is not necessary for running the test suite. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1028056713 | I_kwDOCGYnMM49RuaJ | 332 | `sqlite-utils memory --flatten` option to flatten nested JSON | rdtq 22523840 | closed | 0 | 1 | 2021-10-16T14:04:42Z | 2021-11-14T23:05:05Z | 2021-11-14T23:05:05Z | NONE | currently --flatten option works only for |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
1028115674 | I_kwDOBm6k_c49R8za | 1493 | `--get '/:memory:.json?sql=select+3*5'` error with datasette 0.59 | chenrui333 1580956 | closed | 0 | 1 | 2021-10-16T18:22:22Z | 2021-10-19T04:39:11Z | 2021-10-19T04:39:11Z | NONE | 👋 Trying to upgrade the formula to use the latest release, but ran into some regression test issue with My QQ is: does this relate to https://github.com/Homebrew/homebrew-core/pull/87369? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
934123448 | MDU6SXNzdWU5MzQxMjM0NDg= | 295 | Insert with --tsv and --no-headers give error about --nl arguments | davidscotson 7288187 | closed | 0 | 1 | 2021-06-30T21:01:01Z | 2021-08-18T20:19:04Z | 2021-08-18T20:18:57Z | NONE | Not quite sure if this is a bug, or just an assumption I made but I thought Instead it says:
As if it has interpreted the --no-headers as --nl. The --help does specifically say CSV:
And this heading in the documentation also only refers to CSV, but the text does mention TSV in passing, and I'd generally expect them to behave the same in most cases. https://sqlite-utils.datasette.io/en/stable/cli.html#csv-files-without-a-header-row |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
956832836 | MDU6SXNzdWU5NTY4MzI4MzY= | 300 | Returning underlying cause for User Defined Functions | wsargent 71236 | closed | 0 | 1 | 2021-07-30T15:08:21Z | 2021-08-02T21:53:50Z | 2021-08-02T21:53:50Z | NONE | The sqlite3 client takes user-defined functions and replaces the text with "user-defined function raised exception" so it's not apparent what has gone wrong:
As mentioned in https://code.djangoproject.com/ticket/29500 and https://stackoverflow.com/questions/45824209/how-to-get-an-error-kind-from-sqlite-create-function/45834923#45834923 the workaround for this is to enable callback tracebacks:
It would be nice if https://sqlite-utils.datasette.io/en/stable/python-api.html#registering-custom-sql-functions either included a reference to |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
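The workaround referenced above (the actual snippet is cut off in the row) is the standard library's switch for surfacing exceptions raised inside user-defined functions. A minimal sketch combining it with sqlite-utils function registration; the example function is hypothetical:

```python
import sqlite3
import sqlite_utils

# Without this, errors inside UDFs surface only as
# "sqlite3.OperationalError: user-defined function raised exception"
sqlite3.enable_callback_tracebacks(True)

db = sqlite_utils.Database(memory=True)

@db.register_function
def reverse_string(s):
    return s[::-1]

try:
    # Passing an integer makes the UDF raise a TypeError internally
    db.execute("select reverse_string(1)")
except sqlite3.OperationalError:
    # The generic error is still raised, but the underlying TypeError
    # traceback has now been printed to stderr as well
    pass
```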
925406964 | MDU6SXNzdWU5MjU0MDY5NjQ= | 1382 | Datasette with Glitch - is it possible to use CSV with ISO-8859-1 encoding? | reichaves 23701514 | closed | 0 | 1 | 2021-06-19T14:37:20Z | 2021-06-20T00:21:02Z | 2021-06-20T00:20:06Z | NONE | Hi! I used Remix on Glitch to create a project and uploaded a CSV, but it's a CSV with ISO-8859-1 encoding (https://en.wikipedia.org/wiki/ISO/IEC_8859-1). Is it possible for me to change the encoding to correctly visualize the data? Example: https://emphasized-carpal-pillow.glitch.me/data/Emendas Best |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
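One pragmatic answer to the question above, assuming the CSV can be re-encoded before it is loaded: convert it to UTF-8 first, since SQLite and Datasette both work in UTF-8. The filename here is hypothetical.

```python
# Re-encode an ISO-8859-1 (Latin-1) CSV as UTF-8 before importing it
with open("emendas.csv", encoding="iso-8859-1") as src:
    data = src.read()
with open("emendas-utf8.csv", "w", encoding="utf-8") as dst:
    dst.write(data)
```

If sqlite-utils is part of the loading pipeline, its CSV insert command also accepts an --encoding option, which avoids the intermediate file.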
508100844 | MDU6SXNzdWU1MDgxMDA4NDQ= | 598 | Character encoding bug with CSV export | JoeGermuska 46313 | closed | 0 | 1 | 2019-10-16T21:09:30Z | 2021-06-17T18:13:20Z | 2019-10-18T22:52:21Z | NONE | I was just poking around, and at this URL, I encountered this error:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
656959584 | MDU6SXNzdWU2NTY5NTk1ODQ= | 893 | pip3 install datasette not serving static on linuxbrew. | zodman 44167 | closed | 0 | 1 | 2020-07-14T23:33:38Z | 2021-06-02T04:29:56Z | 2021-06-02T04:29:56Z | NONE | This error wasn't thrown
Linuxbrew installs python@3.8 with symbolic links; when you call full_path.relative_to(root_path) it throws a ValueError. This happens when you install from pip3; when you install with python3 setup.py develop it works fine. In the end, the static files weren't being served. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
756818250 | MDU6SXNzdWU3NTY4MTgyNTA= | 1127 | Make the custom SQL query text box larger or resizable | zaneselvans 596279 | closed | 0 | 1 | 2020-12-04T05:37:11Z | 2021-06-02T04:29:06Z | 2021-06-02T04:28:55Z | NONE | The text entry field for custom SQL queries is too small to display a moderately complex query, especially when it's been formatted. Would it be easy to make the textbox resizable by the user rather than having a fixed height? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
891969037 | MDU6SXNzdWU4OTE5NjkwMzc= | 1326 | How to limit fields returned from the JSON API? | bram2000 5268174 | closed | 0 | 1 | 2021-05-14T14:27:41Z | 2021-05-23T02:55:06Z | 2021-05-23T02:55:00Z | NONE | Hi, I have quite wide tables, and in many cases only want a subset of the data (to save on network bandwidth). I need to use the JSON API as handling pagination is so much easier, but I can't see a way to select specific columns. Is there a way to do this, or is it a feature request? Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
842062949 | MDU6SXNzdWU4NDIwNjI5NDk= | 252 | Support json-line files | rathboma 279769 | closed | 0 | 1 | 2021-03-26T15:19:39Z | 2021-03-26T15:21:38Z | 2021-03-26T15:21:38Z | NONE | It's common for many processes to create flat files where each line is a JSON object. So the file isn't a json array. Many tools (like jq) support this natively, it'd be great for sqlite-utils to also! |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
837208901 | MDU6SXNzdWU4MzcyMDg5MDE= | 1267 | Update Datasette alternativeto listening with details | RayBB 921217 | closed | 0 | 1 | 2021-03-21T23:20:20Z | 2021-03-22T04:37:26Z | 2021-03-22T04:37:26Z | NONE | Hello, I recently learned about Datasette from an old hackernews post. It seems like an awesome project and I actually have use case I might be trying out in the coming months. Alas, to get a better understanding of your project I looked it up on alternativeto to see what it is similar too. I promise it's not spam, it's reputable enough to have a Wikipedia page. There was no listing on the website so I went ahead and created a listing that is now approved. I encourage anyone who likes this project and hopes to spread the word to help update the listing by: 1. Adding to the list of software it compares to 2. Uploading screenshots 3. Writing a review 4. Adding "features" I know this may seem spammy but I promise I have no affiliation with alternativeto I'm just a happy user and know it's a popular site for discovering software. Here is the listing for datasette: https://alternativeto.net/software/datasette/about/ Cheers |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1267/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
823035080 | MDU6SXNzdWU4MjMwMzUwODA= | 1248 | duckdb database (very low performance in SQLite) | verajosemanuel 15836677 | closed | 0 | 1 | 2021-03-05T12:20:29Z | 2021-03-08T00:25:27Z | 2021-03-08T00:25:27Z | NONE | My SQLite database is getting too big to be processed by datasette (more than 10 minutes waiting to load), so I am working with DuckDB and it is way faster; I think it is the fastest embeddable database, actually. Taking into account that DuckDB is SQLite-based, it would be GREAT to use it with datasette. Is that possible? Regards and thanks for a superb job |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
743297582 | MDU6SXNzdWU3NDMyOTc1ODI= | 7 | evernote-to-sqlite on windows 10 give this error: TypeError: insert() got an unexpected keyword argument 'replace' | martinvanwieringen 42387931 | closed | 0 | 1 | 2020-11-15T16:57:28Z | 2021-02-11T22:13:17Z | 2021-02-11T22:13:17Z | NONE | Running evernote-to-sqlite 0.2 on Windows 10. Command: evernote-to-sqlite enex evernote.db MyNotes.enex

I get the following error:

```
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\evernote_to_sqlite\utils.py", line 46, in save_note
    note_id = db["notes"].insert(row, hash_id="id", replace=True, alter=True).last_pk
TypeError: insert() got an unexpected keyword argument 'replace'
```

Removing replace=True leads to the error below:

```
    note_id = db["notes"].insert(row, hash_id="id", alter=True).last_pk
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py", line 924, in insert
    return self.insert_all(
File "C:\Users\marti\AppData\Roaming\Python\Python38\site-packages\sqlite_utils\db.py", line 1046, in insert_all
    result = self.db.conn.execute(sql, values)
sqlite3.IntegrityError: UNIQUE constraint failed: notes.id
```
 |
evernote-to-sqlite 303218369 | issue | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
791381623 | MDU6SXNzdWU3OTEzODE2MjM= | 1197 | DB size limit for publishing with Heroku | mtdukes 1186275 | closed | 0 | 1 | 2021-01-21T18:08:43Z | 2021-01-24T20:53:44Z | 2021-01-24T20:53:44Z | NONE | Hello, I tried searching for this, but can't seem to get a great answer: Does anybody know the size limit for databases deploying to Heroku? The files I'm working with are pretty large, but I might be able to pare them down if I have a limit in mind. I'm getting the following error when running
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
435819321 | MDU6SXNzdWU0MzU4MTkzMjE= | 436 | 400 Error when trying to register new user via https://publish.datasettes.com/ | nniiicc 317694 | closed | 0 | 1 | 2019-04-22T17:55:00Z | 2021-01-04T20:15:42Z | 2021-01-04T20:15:41Z | NONE | Behavior: When registering a new user via Zeit - confirmation is sent and screen acknowledges registered user... When clicking grant access the next screen is a white 400 error message. Replicated: Chrome and Firefox; 2 different email accounts |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
767685961 | MDU6SXNzdWU3Njc2ODU5NjE= | 210 | Support of RData files | PeterBailey 23739126 | closed | 0 | 1 | 2020-12-15T15:04:14Z | 2021-01-02T00:02:40Z | 2021-01-02T00:02:40Z | NONE | Hi Simon, Would be great if you could ingest RData files! I could do this in a few lines of code but I am too lazy - sorry! Peter |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
771324837 | MDU6SXNzdWU3NzEzMjQ4Mzc= | 53 | --since support for favorites | anotherjesse 27 | closed | 0 | 1 | 2020-12-19T07:08:23Z | 2020-12-19T07:47:11Z | 2020-12-19T07:47:11Z | NONE | Having support for https://twittercommunity.com/t/cant-get-all-favorite-tweets-by-rest-api/22007/3 The api seems to take an optional |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
551834842 | MDU6SXNzdWU1NTE4MzQ4NDI= | 659 | README information is obscured by feature history | labstersteve 55480210 | closed | 0 | 1 | 2020-01-18T22:34:51Z | 2020-12-10T23:28:51Z | 2020-12-10T23:28:51Z | NONE | While it's sometimes valuable to know how a project has developed, there is usually little justification for including this information in the README, and certainly not immediately after other key information such as "what does this package do, and who might want to use it?" Might I recommend that the feature history is migrated to an Appendix in the documentation? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
743011397 | MDU6SXNzdWU3NDMwMTEzOTc= | 1094 | import EX_CANTCREAT means datasette fails to work on Windows | drkane 1049910 | closed | 0 | 1 | 2020-11-14T14:17:11Z | 2020-12-05T19:35:04Z | 2020-12-05T19:35:04Z | NONE | Trying to use datasette 0.51.1 gives the following error:
Looks like that code is only available on unix: https://docs.python.org/3/library/os.html#os.EX_CANTCREAT Removing the line makes it work fine ( |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1094/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
745393298 | MDU6SXNzdWU3NDUzOTMyOTg= | 52 | Discussion: Adding support for fetching only fresh tweets | fatihky 4169772 | closed | 0 | 1 | 2020-11-18T07:01:48Z | 2020-11-18T07:12:45Z | 2020-11-18T07:12:45Z | NONE | I think it'd be very useful if this tool has an option like |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/52/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
577302229 | MDU6SXNzdWU1NzczMDIyMjk= | 91 | Enable ordering FTS results by rank | gfrmin 416374 | closed | 0 | 3.0 6079500 | 1 | 2020-03-07T08:43:51Z | 2020-11-06T23:53:26Z | 2020-11-06T23:53:25Z | NONE | According to https://www.sqlite.org/fts5.html (not sure about FTS4) results can be sorted by relevance. At the moment results are returned by default by |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/91/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
473307794 | MDU6SXNzdWU0NzMzMDc3OTQ= | 565 | Conflict between datasette and uvicorn click versions | jonheslop 440503 | closed | 0 | 1 | 2019-07-26T11:13:40Z | 2020-10-02T00:09:55Z | 2020-10-02T00:09:55Z | NONE | Hello, Datasette is awesome, thanks so much! I'm not very familiar with Python, but I think there is a problem with datasette docker builds; I keep getting this error
The full log from the docker build is here - https://gist.github.com/jonheslop/e01cd322e761cfaf34f0cb83f86411b0 Just in case it’s helpful this is my setup - https://github.com/dotwatcher/dotwatcher-data |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
699947574 | MDU6SXNzdWU2OTk5NDc1NzQ= | 963 | Currently selected array facets are not correctly persisted through hidden form fields | mhalle 649467 | closed | 0 | Datasette 0.49 5818042 | 1 | 2020-09-12T01:49:17Z | 2020-09-12T21:54:29Z | 2020-09-12T21:54:09Z | NONE | Faceted search uses JSON array elements as facets rather than the arrays. However, if a search is "Apply"ed (using the Apply button), the array itself rather than its elements is used. To reproduce: https://latest.datasette.io/fixtures/facetable?_sort=pk&_facet=created&_facet=tags&_facet_array=tags Press "Apply", which might be done when removing a filter. Notice that the "tags" facet values are now arrays, not array elements. It appears the "&_facet_array=tags" element of the query string is dropped. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
660827546 | MDU6SXNzdWU2NjA4Mjc1NDY= | 899 | How to setup a request limit per user | Krazybug 133845 | closed | 0 | 1 | 2020-07-19T13:08:25Z | 2020-07-31T23:54:42Z | 2020-07-31T23:54:42Z | NONE | Hello, until now I have been using datasette without any authentication system, but I would like to set up a configuration for limiting the number of requests per user (possibly by IP or with a cookie mechanism) and eventually allow me to ban specific users/IPs. Is there a plugin available for this use case? If not, what are your insights regarding this use case? Should I write a plugin? Should I deploy datasette behind a reverse proxy to manage this? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
611835285 | MDU6SXNzdWU2MTE4MzUyODU= | 752 | Non-utf8 encoding in exceptionhandlers and custom-pages | clausjuhl 2181410 | closed | 0 | 1 | 2020-05-04T12:24:42Z | 2020-05-04T17:42:20Z | 2020-05-04T17:42:20Z | NONE | Hi Simon. Whenever a response is not piped through a router-view, the template is encoded in latin-1 (I think). This is especially a problem (for me) with the new custom_pages-functionality, but also problematic with the 404- and 500-handlers. Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
453243459 | MDU6SXNzdWU0NTMyNDM0NTk= | 503 | Handle SQLite databases with spaces in their names? | chrismp 7936571 | closed | 0 | simonw 9599 | 1 | 2019-06-06T21:20:59Z | 2019-11-04T23:16:30Z | 2019-11-04T23:16:30Z | NONE | I named my SQLite database "Government workers" and published it to Heroku. When I clicked the "Government workers" database online it led to a 404 page: I believe this is because the database name has a space. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
476437213 | MDU6SXNzdWU0NzY0MzcyMTM= | 566 | Unexpected keyword argument 'hidden' | dvot197007 8330931 | closed | 0 | 1 | 2019-08-03T10:07:57Z | 2019-08-03T16:13:36Z | 2019-08-03T16:13:36Z | NONE | I couldn't get a test example running. I am running python 3.6.8 and tried both windows and windows subsystem for linux, getting the same error. My test.db was created by converting a five line csv file with csvs-to-sqlite. The csv file is:

```
col1, col2, col3
1,2,3
4,5,6
7,8,9
10,11,12
```

Here is the error message:

```
(myvenv) davido@DESKTOP-L29G79U:~/dot/datasette-eg$ datasette test.db
Traceback (most recent call last):
  File "/home/davido/dot/datasette-eg/myvenv/bin/datasette", line 7, in <module>
    from datasette.cli import cli
  File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/datasette/cli.py", line 2, in <module>
    import uvicorn
  File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/uvicorn/__init__.py", line 2, in <module>
    from uvicorn.main import Server, main, run
  File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/uvicorn/main.py", line 224, in <module>
    headers: typing.List[str],
  File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/click/decorators.py", line 170, in decorator
    _param_memo(f, OptionClass(param_decls, **attrs))
  File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/click/core.py", line 1430, in __init__
    Parameter.__init__(self, param_decls, type=type, **attrs)
TypeError: __init__() got an unexpected keyword argument 'hidden'
```

Thanks. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
450862577 | MDU6SXNzdWU0NTA4NjI1Nzc= | 496 | Additional options to gcloud build command in cloudrun - timeout | costrouc 1740337 | closed | 0 | 1 | 2019-05-31T15:43:55Z | 2019-05-31T23:05:05Z | 2019-05-31T23:05:05Z | NONE | I am trying to deploy a 3.1 GB dataset to cloudrun with datasette. Currently the docker build times out. Would be nice to have a timeout flag or additional gcloud commands that could be specified. Here is the line https://github.com/simonw/datasette/blob/f825e2012109247fa246e2b938f8174069e574f1/datasette/publish/cloudrun.py#L78 I would be happy to submit a PR to allow for a timeout option. What are your ideas of allowing the user additional build publishing flag options? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
432727685 | MDU6SXNzdWU0MzI3Mjc2ODU= | 20 | JSON column values get extraneously quoted | mhalle 649467 | closed | 0 | 1.0 4348046 | 1 | 2019-04-12T20:15:30Z | 2019-05-25T00:57:19Z | 2019-05-25T00:57:19Z | NONE | If the input to

```
echo '[{"key": ["one", "two", "three"]}]' | sqlite-utils insert t.db t -

sqlite-utils t.db 'select * from t'
[{"key": "[\"one\", \"two\", \"three\"]"}]

sqlite3 t.db 'select * from t'
["one", "two", "three"]
```

This might require an imperfect solution, since sqlite3 doesn't have a JSON type. Perhaps fields that start with |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/20/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | |||||
408518024 | MDU6SXNzdWU0MDg1MTgwMjQ= | 410 | How to setup a multi database environment? | aborruso 30607 | closed | 0 | 1 | 2019-02-10T09:39:24Z | 2019-04-12T04:42:28Z | 2019-04-12T04:42:27Z | NONE | Hi, first of all I need to write that Simon Willison and datasette are really great. I probably have a stupid question, but it seems to me that I cannot find the answer in the documentation. I have installed datasette and run it with But how do I work with more than one db? Imagine I have ten SQLite databases, and that I need to explore/query these via datasette; how do I run datasette? Is it possible to create a sort of db index and then run Thank you |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
411066700 | MDU6SXNzdWU0MTEwNjY3MDA= | 10 | Error in upsert if column named 'order' | psychemedia 82988 | closed | 0 | 1 | 2019-02-16T12:05:18Z | 2019-02-24T16:55:38Z | 2019-02-24T16:55:37Z | NONE | The following works fine:

```
connX = sqlite3.connect('DELME.db', timeout=10)

dfX=pd.DataFrame({'col1':range(3),'col2':range(3)})
DBX = Database(connX)
DBX['test'].upsert_all(dfX.to_dict(orient='records'))
```

But if a column is named

```
dfX=pd.DataFrame({'order':range(3),'col2':range(3)})
DBX = Database(connX)
DBX['test'].upsert_all(dfX.to_dict(orient='records'))
```

it throws an error:

```
OperationalError                          Traceback (most recent call last)
<ipython-input-130-7dba33cd806c> in <module>
      3 dfX=pd.DataFrame({'order':range(3),'col2':range(3)})
      4 DBX = Database(connX)
----> 5 DBX['test'].upsert_all(dfX.to_dict(orient='records'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order)
    347             foreign_keys=foreign_keys,
    348             upsert=True,
--> 349             column_order=column_order,
    350         )
    351

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)
    327             jsonify_if_needed(record.get(key, None)) for key in all_columns
    328         )
--> 329         result = self.db.conn.execute(sql, values)
    330         self.db.conn.commit()
    331         self.last_id = result.lastrowid

OperationalError: near "order": syntax error
```
 |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
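For context on the error above: order is a reserved keyword in SQL, so the generated INSERT fails unless the identifier is quoted. A small sketch with plain sqlite3 (illustrative only, not the library's own code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test ([order] INTEGER, col2 INTEGER)")

try:
    # Unquoted use of the reserved keyword fails, mirroring the error above
    conn.execute("INSERT INTO test (order, col2) VALUES (?, ?)", (0, 0))
except sqlite3.OperationalError as e:
    print(e)  # near "order": syntax error

# Quoting the identifier (square brackets or double quotes) works
conn.execute("INSERT INTO test ([order], col2) VALUES (?, ?)", (0, 0))
print(conn.execute("SELECT [order], col2 FROM test").fetchall())
```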
392610803 | MDU6SXNzdWUzOTI2MTA4MDM= | 391 | Google Trends example doesn’t work | styfle 229881 | closed | 0 | 1 | 2018-12-19T13:51:38Z | 2019-01-02T19:45:13Z | 2019-01-02T19:45:12Z | NONE | https://google-trends.datasettes.com/ I see a Cloudflare error. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
282971961 | MDU6SXNzdWUyODI5NzE5NjE= | 175 | Add project topic "automatic-api" | dbohdan 3179832 | closed | 0 | 1 | 2017-12-18T18:09:17Z | 2017-12-21T18:33:55Z | 2017-12-21T18:33:55Z | NONE | Hi there! Could you add the ~~tag~~ topic |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | ||||||
274161964 | MDU6SXNzdWUyNzQxNjE5NjQ= | 101 | TemplateAssertionError: no filter named 'tojson' | eaubin 450244 | closed | 0 | 1 | 2017-11-15T13:47:32Z | 2017-11-15T13:48:55Z | 2017-11-15T13:48:55Z | NONE | I get an exception clicking on the table link:
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```