issues
172 rows where author_association = "NONE" and state = "open" sorted by state
repo (16)
- datasette 102
- sqlite-utils 25
- twitter-to-sqlite 7
- google-takeout-to-sqlite 7
- github-to-sqlite 5
- dogsheep-photos 5
- healthkit-to-sqlite 4
- dogsheep-beta 3
- pocket-to-sqlite 3
- swarm-to-sqlite 2
- inaturalist-to-sqlite 2
- dogsheep.github.io 2
- evernote-to-sqlite 2
- genome-to-sqlite 1
- hacker-news-to-sqlite 1
- apple-notes-to-sqlite 1
type (1)
- issue 172
state (1)
- open 172
id | node_id | number | title | user | state ▼ | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
281110295 | MDU6SXNzdWUyODExMTAyOTU= | 173 | I18n and L10n support | janimo 50138 | open | 0 | 2 | 2017-12-11T17:49:58Z | 2021-04-26T12:10:01Z | NONE | It would be less geeky and more user friendly if the display strings in the filter menu and possibly other parts could be localized. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/173/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
285168503 | MDU6SXNzdWUyODUxNjg1MDM= | 176 | Add GraphQL endpoint | yozlet 173848 | open | 0 | 8 | 2017-12-29T23:21:01Z | 2020-04-21T14:16:24Z | NONE | Would make it much easier to build React & similar frontends. Maybe with https://github.com/graphql-python/sanic-graphql ? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/176/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
299760684 | MDU6SXNzdWUyOTk3NjA2ODQ= | 185 | Metadata should be a nested arbitrary KV store | carlmjohnson 222245 | open | 0 | 12 | 2018-02-23T16:02:07Z | 2019-05-13T18:33:33Z | NONE | I started using the metadata feature and was surprised to find that values are not inherited from the root object down to specific databases and tables. This makes metadata much less useful and requires a lot of pointless duplication. Ideally, metadata should allow arbitrary key-value pairs, and there should be a way of accessing metadata either in an inherited or non-inherited manner. Something like |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
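A hypothetical sketch of the inherited lookup this issue asks for; the keys and names below are invented for illustration and this is not Datasette's actual metadata handling:
```python
# Hypothetical illustration: resolve a metadata key for a table by
# falling back to the database level, then the root level.
metadata = {
    "license": "CC-BY",  # root-level value, inherited by everything below
    "databases": {
        "mydb": {
            "source": "City of Example",
            "tables": {"mytable": {"title": "My table"}},
        }
    },
}

def lookup(metadata, database, table, key):
    db_meta = metadata["databases"][database]
    table_meta = db_meta["tables"][table]
    for level in (table_meta, db_meta, metadata):
        if key in level:
            return level[key]
    return None

print(lookup(metadata, "mydb", "mytable", "license"))  # -> CC-BY (inherited)
```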
319449852 | MDU6SXNzdWUzMTk0NDk4NTI= | 247 | SQLite code decoupled from Datasette | jsancho-gpl 11912854 | open | 0 | 1 | 2018-05-02T08:03:28Z | 2018-05-21T15:29:31Z | NONE | I'm working on the possibility of using Datasette with file formats other than SQLite, like files in PyTables format. In order to accomplish that, I've started a fork that decouples the SQLite-related code and puts it in an external connector, to allow future connectors for a lot of file formats. It'd be nice if you could look at it and suggest improvements for a possible PR. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
330826972 | MDU6SXNzdWUzMzA4MjY5NzI= | 308 | Support extra Heroku apps:create options - region, space, team | annapowellsmith 78156 | open | 0 | 2 | 2018-06-08T23:08:33Z | 2018-09-21T14:09:28Z | NONE | It would be useful to document how to pass Heroku CLI options on datasette publish heroku. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
352768017 | MDU6SXNzdWUzNTI3NjgwMTc= | 362 | Add option to include/exclude columns in search filters | annapowellsmith 78156 | open | 0 | 1 | 2018-08-22T01:32:08Z | 2020-11-03T19:01:59Z | NONE | I have a dataset with many columns, of which only some are likely to be of interest for searching. It would be great for usability if the search filters in the UI could be configured to include/exclude columns. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
411257981 | MDU6SXNzdWU0MTEyNTc5ODE= | 412 | Linked Data(sette) | sfkeller 43340 | open | 0 | 2 | 2019-02-18T00:38:14Z | 2019-03-19T10:09:46Z | NONE | I have a radical feature idea (possibly first as an extension in order to experiment?): I'd like to link to a remote table from a remote database, e.g. with a function "linked_datasette()". So one could do the following query:
There's a foundation for this in the SQL standard called SQL/MED (https://rhaas.blogspot.com/2011/01/why-sqlmed-is-cool.html ). And here's an implementation of mine in Postgres FDW to connect to another Postgres "endpoint": https://pastebin.com/Fz2v64Cz . |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/412/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
451585764 | MDU6SXNzdWU0NTE1ODU3NjQ= | 499 | Accessibility for non-techie newsies? | chrismp 7936571 | open | 0 | 3 | 2019-06-03T16:49:37Z | 2019-06-05T21:22:55Z | NONE | Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out of the box, or as a plugin? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
457147936 | MDU6SXNzdWU0NTcxNDc5MzY= | 512 | "about" parameter in metadata does not appear when alone | chrismp 7936571 | open | 0 | 3 | 2019-06-17T21:04:20Z | 2019-10-11T15:49:13Z | NONE | Here's an example of metadata I have for one database on datasette.
The text in the "about" field does not appear. Is this intended? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
459882902 | MDU6SXNzdWU0NTk4ODI5MDI= | 526 | Stream all results for arbitrary SQL and canned queries | matej-fr 50578294 | open | 0 | 23 | 2019-06-24T13:09:45Z | 2022-09-28T04:01:25Z | NONE | I think that there is a difficulty with canned queries. When I want to stream all results of a canned query TwoDays I get only the first 1,000 records. Example:
returns only the first 1,000 records. If I do the same with the whole database, i.e.
I correctly get all records. Any ideas? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
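A workaround sketch for the situation described, assuming hypothetical database and query names: run the canned query's SQL directly with Python's sqlite3 and write CSV from the cursor, which iterates lazily and is not subject to the web-layer row cap the reporter hit.
```python
import csv
import sqlite3
import sys

# Hypothetical stand-ins for the reporter's database and canned-query SQL.
conn = sqlite3.connect("mydatabase.db")
cursor = conn.execute(
    "select * from measurements where datum >= date('now', '-2 days')"
)
writer = csv.writer(sys.stdout)
writer.writerow([col[0] for col in cursor.description])  # header row
for row in cursor:  # lazy iteration: every row streams through
    writer.writerow(row)
```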
476852861 | MDU6SXNzdWU0NzY4NTI4NjE= | 568 | Add database_color as a configurable option | LBHELewis 50906992 | open | 0 | 1 | 2019-08-05T13:14:45Z | 2023-08-11T05:19:42Z | NONE | This would be really useful as it would allow us to tie in with colour schemes. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
504720731 | MDU6SXNzdWU1MDQ3MjA3MzE= | 1 | Add more details on how to request data from google takeout correctly. | dazzag24 1055831 | open | 0 | 0 | 2019-10-09T15:17:34Z | 2019-10-09T15:17:34Z | NONE | The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now:
In addition, unless you specify that "My Activity" is downloaded in JSON format, the default is HTML. This then causes the
command to fail, as the export only contains HTML files, not JSON files. Thanks |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
510076368 | MDU6SXNzdWU1MTAwNzYzNjg= | 605 | Support queries at the table level | bsilverm 12617395 | open | 0 | 2 | 2019-10-21T15:58:30Z | 2019-10-30T18:55:37Z | NONE | Per issue #588, it was determined that queries are not supported at the table level. Per my last comment in that issue, I'd like to request support for this, as it would help eliminate errors in the event certain tables are not present in the database. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
527670799 | MDU6SXNzdWU1Mjc2NzA3OTk= | 639 | updating metadata.json without recreating the app | pkoppstein 172847 | open | 0 | 6 | 2019-11-24T09:19:53Z | 2019-11-30T06:08:50Z | NONE | I've successfully "uploaded" an SQLite database (with a metadata.json file) to heroku using:
The question is: how can I modify the (small) metadata.json file without having to upload the (large) SQLite database? The directions on heroku indicate I should run:
But this just results in an empty directory with a warning: warning: You appear to have cloned an empty repository. I've been able to "clone" the heroku "app" using the command:
but this is not a git repository.... Ideally, it seems to me, there'd be an option of the (p.s. I ran (p.p.s. Thanks for Datasette!) |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
531502365 | MDU6SXNzdWU1MzE1MDIzNjU= | 646 | Make database level information from metadata.json available in the index.html template | lagolucas 18017473 | open | 0 | Datasette 1.0 3268330 | 3 | 2019-12-02T19:55:10Z | 2022-03-15T20:50:34Z | NONE | Did a search on the issues here and didn't find anything related to what I want. I want to have information that is on the database level of the JSON like title, source and source_url, and use it on the index page. I tried some small tweaks on the python and html files, but failed to get that result. Is there a way? Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
539204432 | MDU6SXNzdWU1MzkyMDQ0MzI= | 70 | Implement ON DELETE and ON UPDATE actions for foreign keys | LucasElArruda 26292069 | open | 0 | 2 | 2019-12-17T17:19:10Z | 2020-02-27T04:18:53Z | NONE | Hi! I did not find any mention on the library about ON DELETE and ON UPDATE actions for foreign keys. Are those expected to be implemented? If not, it would be a nice thing to include! |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
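A sketch of the raw-SQL workaround while the library lacks this feature; the schema below is hypothetical:
```python
from sqlite_utils import Database

db = Database("data.db")
# Declare the actions in hand-written SQL via the library's execute():
db.execute("create table if not exists parent (id integer primary key)")
db.execute(
    """
    create table if not exists child (
        id integer primary key,
        parent_id integer references parent(id)
            on delete cascade
            on update cascade
    )
    """
)
# SQLite only enforces foreign keys (and their actions) when this is on:
db.execute("pragma foreign_keys = on")
```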
541274681 | MDU6SXNzdWU1NDEyNzQ2ODE= | 2 | Add linkedin-to-sqlite | mnp 881925 | open | 0 | 0 | 2019-12-21T03:13:40Z | 2019-12-21T03:13:40Z | NONE | There is an API available. https://developer.linkedin.com/docs/rest-api# At the minimum, I would think contact list and messages would be of interest. |
dogsheep.github.io 214746582 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
548591089 | MDU6SXNzdWU1NDg1OTEwODk= | 657 | Allow creation of virtual tables at startup | dazzag24 1055831 | open | 0 | 4 | 2020-01-12T16:10:55Z | 2021-01-15T20:24:35Z | NONE | Hi, I've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow. https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html https://github.com/cldellow/sqlite-parquet-vtable This works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the SpatiaLite extension, I knew that datasette supports loading extensions in the underlying SQLite instance. However, I hit a blocker, as the current design only allows SELECT statements to be executed, so I am unable to execute the crucial CREATE VIRTUAL TABLE command that is required to load the data from the parquet file into the table. It seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself. Could this be done as a datasette plugin? Or would this require more fundamental changes at initialisation time? My thought is that something at init time could detect that the user was loading a .parquet file and then switch to a mode where it loads that via "CREATE VIRTUAL TABLE..." rather than loading the .db file in the default case. I'm happy to contribute code and testing, I just need some pointers on the best approach. Thanks Darren |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
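For reference, a sketch of the step that Datasette's SELECT-only guard blocks, done outside Datasette with Python's sqlite3; the extension and file paths are hypothetical:
```python
import sqlite3

conn = sqlite3.connect("data.db")
conn.enable_load_extension(True)
conn.load_extension("./libparquet")  # path to a sqlite-parquet-vtable build
# The statement Datasette currently refuses to run:
conn.execute(
    "create virtual table if not exists mytable using parquet('data.parquet')"
)
```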
550293770 | MDU6SXNzdWU1NTAyOTM3NzA= | 658 | How do I use the app.css as style sheet? | null92 49656826 | open | 0 | 2 | 2020-01-15T16:27:57Z | 2020-02-07T00:29:50Z | NONE | Simon, I'm trying to use the app.css (in the static folder) as a style sheet, but datasette on Heroku simply ignores it! I read everything about customization here and on readthedocs but still can't get it to work. Is this possible? Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/658/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
567902704 | MDU6SXNzdWU1Njc5MDI3MDQ= | 675 | --cp option for datasette publish and datasette package for shipping additional files and directories | aviflax 141844 | open | 0 | 12 | 2020-02-19T22:55:56Z | 2020-12-28T18:49:21Z | NONE | I'm working on integrating Datasette into a documentation-oriented publishing workflow internally in my company, and in order to deploy the Docker image created by datasette package I need to ship some additional files and directories. So it'd be excellent if there was an additional option for this command, something like --cp. I'd envision it looking something like:
I’d be happy to help design, specify, implement, and test this feature, if you’d be interested. Thanks for the fantastic tools! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
612382643 | MDU6SXNzdWU2MTIzODI2NDM= | 758 | Question: Access to immutable database-path | clausjuhl 2181410 | open | 0 | 6 | 2020-05-05T07:01:18Z | 2020-05-28T08:23:27Z | NONE | Hi Simon Is there anywhere in the app-context where one can access the hashed urlpath of the database? Currently it's included in the template-context ( |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
617323873 | MDU6SXNzdWU2MTczMjM4NzM= | 766 | Enable wildcard-searches by default | clausjuhl 2181410 | open | 0 | 2 | 2020-05-13T10:14:48Z | 2021-03-05T16:35:21Z | NONE | Hi Simon. It seems that datasette currently has wildcard-searches disabled by default (along with the boolean search-options, NEAR-queries and more, and despite the docs). If I try out the search-url provided in the docs (https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort), it does not handle wildcard-searches, and I'm unable to make it work on my datasette-instance. I would argue that wildcard-searches is such a standard query, that it should be enabled by default. Requiring "_searchmode=raw" when using prefix-searches seems unnecessary. Plus: What happens to non-ascii searches when using "_searchmode=raw"? Is the "escape_fts"-function from datasette.utils ignored? Thanks! /Claus |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
624490929 | MDU6SXNzdWU2MjQ0OTA5Mjk= | 28 | Invalid SQL no such table: main.uploads | dmd 41439 | open | 0 | 1 | 2020-05-25T21:25:39Z | 2020-12-24T22:26:22Z | NONE | http://127.0.0.1:8001/photos/photos_with_apple_metadata gives "Invalid SQL no such table: main.uploads" |
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/28/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
629473827 | MDU6SXNzdWU2Mjk0NzM4Mjc= | 5 | Set up a demo | harryvederci 26745575 | open | 0 | 1 | 2020-06-02T19:56:49Z | 2020-09-01T06:18:43Z | NONE | First off, thanks for open sourcing this application! This is a suggestion to increase the number of people that would make use of it: an example in the readme file would help. Currently, users have to clone the app, install it, authorize through Pocket, run a command, and then find out if this application does what they hope it does. Another possibility is to add a file Keep up the good work! |
pocket-to-sqlite 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
639542974 | MDU6SXNzdWU2Mzk1NDI5NzQ= | 47 | Fall back to FTS4 if FTS5 is not available | hpk42 73579 | open | 0 | 3 | 2020-06-16T10:11:23Z | 2020-06-17T20:13:48Z | NONE | Got this with version 0.21.1 from PyPI. twitter-to-sqlite auth worked, but then "twitter-to-sqlite user-timeline USER.db" produced a traceback ending in "no such module: FTS5". |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
642388564 | MDU6SXNzdWU2NDIzODg1NjQ= | 858 | publish heroku does not work on Windows 10 | simonlau 870912 | open | 0 | 7 | 2020-06-20T14:40:28Z | 2021-06-10T17:44:09Z | NONE | When executing "datasette publish heroku schools.db" on Windows 10, I get the following error
to
as well as the other |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
664485022 | MDU6SXNzdWU2NjQ0ODUwMjI= | 46 | Feature: pull request reviews and comments | bhrutledge 1326704 | open | 0 | 6 | 2020-07-23T13:43:45Z | 2022-12-20T14:40:15Z | NONE | Hi there! I saw your presentation at Boston Python. I'm already a light user of Datasette (thank you!), but wasn't aware of this project. I've been working on a "pull request dashboard" to get a comprehensive view of the state of open PRs, esp. related to reviews (i.e., pending, approved, changes requested). Currently it's a CLI command, but I thought a Datasette UI might be fun. I see that PRs are available from the |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/46/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
664793260 | MDU6SXNzdWU2NjQ3OTMyNjA= | 2 | Yak shave | ekg 145425 | open | 0 | 0 | 2020-07-23T22:04:18Z | 2020-07-23T22:04:18Z | NONE | Just a quick note... The 23andme data is not exactly your genome, but a SNP chip of your genome. It's "some of your genotypes." Or about 0.1% of your genome. Nice work in any case! It deserves to be liberated!!!!! |
genome-to-sqlite 209590345 | issue | { "url": "https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
697162939 | MDU6SXNzdWU2OTcxNjI5Mzk= | 20 | Add more tags so people can find your project. | ran88dom99 7902810 | open | 0 | 0 | 2020-09-09T21:14:09Z | 2020-09-09T21:14:09Z | NONE | quantified-self habit-tracking google-fit time-tracking wearables quantifiedself for example |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/20/reactions", "total_count": 1, "+1": 0, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
702386948 | MDU6SXNzdWU3MDIzODY5NDg= | 159 | .delete_where() does not auto-commit (unlike .insert() or .upsert()) | spdkils 11712349 | open | 0 | 9 | 2020-09-16T01:55:52Z | 2023-04-01T17:21:05Z | NONE | When you use the delete_where() function on a table, it never commits.... Is that intentional? |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/159/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
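A minimal sketch of the behavior described, with the explicit commit that makes the delete stick; the table name is hypothetical:
```python
from sqlite_utils import Database

db = Database("data.db")
db["mytable"].delete_where("id > ?", [100])
db.conn.commit()  # required here: delete_where() does not auto-commit
```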
707849175 | MDU6SXNzdWU3MDc4NDkxNzU= | 974 | static assets and favicon aren't cached by the browser | obra 45416 | open | 0 | 1 | 2020-09-24T04:44:55Z | 2022-01-13T22:21:28Z | NONE | Using datasette to solve some frustrating problems with our fulfillment provider today, I was surprised to see repeated requests for assets under /-/static and the favicon. While it won't likely be a huge performance bottleneck, I bet datasette would feel a bit zippier if you had Uvicorn serving up some caching-related headers telling the browser it was safe to cache static assets. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/974/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
718238967 | MDU6SXNzdWU3MTgyMzg5Njc= | 1003 | from_json jinja2 filter | mhalle 649467 | open | 0 | 4 | 2020-10-09T15:30:58Z | 2020-10-09T17:17:07Z | NONE | When JSON fields are rendered in a jinja2 template, it is handy to be able to manipulate them as data (e.g., iterate over an array of values). Ansible has a "from_json" function, which just calls json.loads. It's trivial as a datasette plugin, but it seems generally useful. Does it make sense to add it directly into the app? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
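The trivial plugin the reporter mentions, sketched against Datasette's prepare_jinja2_environment plugin hook:
```python
import json

from datasette import hookimpl

@hookimpl
def prepare_jinja2_environment(env):
    # Expose json.loads to templates as {{ value | from_json }}
    env.filters["from_json"] = json.loads
```
A template could then iterate a JSON text column with something like `{% for item in row["tags"] | from_json %}`.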
735852274 | MDU6SXNzdWU3MzU4NTIyNzQ= | 1082 | DigitalOcean buildpack memory errors for large sqlite db? | justmars 39538958 | open | 0 | 3 | 2020-11-04T06:35:32Z | 2020-11-04T19:35:44Z | NONE |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
764059235 | MDU6SXNzdWU3NjQwNTkyMzU= | 1143 | More flexible CORS support in core, to encourage good security practices | yurivish 114388 | open | 0 | Datasette 1.0 3268330 | 6 | 2020-12-12T17:06:35Z | 2022-02-13T17:41:17Z | NONE | It would be nice if the As an example, Observable notebooks namespace every user's notebooks by their username and user content is served from username.observableusercontent.com, so you would set Thank you for all of your work on Datasette! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
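One way to get this behavior today is an ASGI middleware wrapped around the application; the allowed suffix below is a hypothetical example drawn from the issue, and this is a sketch rather than a Datasette option:
```python
ALLOWED_SUFFIX = b".observableusercontent.com"

def cors_by_origin(app):
    # Echo the Origin header back only when it matches the allowed suffix,
    # instead of a blanket Access-Control-Allow-Origin: *.
    async def wrapper(scope, receive, send):
        headers = dict(scope.get("headers") or [])
        origin = headers.get(b"origin", b"")

        async def send_wrapped(event):
            if event["type"] == "http.response.start" and origin.endswith(
                ALLOWED_SUFFIX
            ):
                event["headers"] = list(event["headers"]) + [
                    (b"access-control-allow-origin", origin)
                ]
            await send(event)

        await app(scope, receive, send_wrapped)

    return wrapper
```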
769376447 | MDU6SXNzdWU3NjkzNzY0NDc= | 2 | killed by oomkiller on large location-history | khimaros 231498 | open | 0 | 2 | 2020-12-17T00:32:24Z | 2020-12-17T00:48:32Z | NONE | Memory seems to grow unbounded and the process is OOM-killed after about 20GB of memory usage. This is happening while loading a ~1GB uncompressed location history. |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
769397742 | MDU6SXNzdWU3NjkzOTc3NDI= | 3 | sqlite-utils error on takeout import | khimaros 231498 | open | 0 | 0 | 2020-12-17T01:18:48Z | 2020-12-17T01:19:04Z | NONE |
there is no table create in additionally, this package and hackernews-to-sqlite have conflicting |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/3/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
771608692 | MDU6SXNzdWU3NzE2MDg2OTI= | 14 | UNIQUE constraint failed: workouts.id | n8henrie 1234956 | open | 0 | 5 | 2020-12-20T15:11:20Z | 2023-07-10T14:46:52Z | NONE | I'm getting an error on my initial attempt to import data:
|
healthkit-to-sqlite 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
778380836 | MDU6SXNzdWU3NzgzODA4MzY= | 4 | Feature Request: Gmail | Btibert3 203343 | open | 0 | 5 | 2021-01-04T21:31:09Z | 2021-03-04T20:54:44Z | NONE | From takeout, I only exported my Gmail account. Ideally I could parse this into sqlite via this tool. |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
787104850 | MDU6SXNzdWU3ODcxMDQ4NTA= | 1192 | Form Plugin for in-depth Datasette Querying | tomershvueli 1024355 | open | 0 | 0 | 2021-01-15T18:24:50Z | 2021-01-15T18:24:50Z | NONE | I envision a sort of easy-to-build form plugin that would be able to map a user's inputs to different fields/columns in a Datasette database. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
791237799 | MDU6SXNzdWU3OTEyMzc3OTk= | 1196 | Access Denied Error in Windows | QAInsights 2826376 | open | 0 | 2 | 2021-01-21T15:40:40Z | 2021-04-14T19:28:38Z | NONE | I am trying to publish a db to Vercel, but issuing the below command throws an Access Denied error. I am using PyCharm and Python 3.9. I have reinstalled both and launched PyCharm as Admin in Windows 10, but the issue still persists. Issued command PS: localhost is working fine. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
795367402 | MDU6SXNzdWU3OTUzNjc0MDI= | 1209 | v0.54 500 error from sql query in custom template; code worked in v0.53; found a workaround | jrdmb 11788561 | open | 0 | 1 | 2021-01-27T19:08:13Z | 2021-01-28T23:00:27Z | NONE | v0.54 500 error in sql query template; code worked in v0.53; found a workaround. schema: Live example of correctly rendered template in v0.53: https://cosmotalks-cy6xkkbezq-uw.a.run.app/cosmotalks/talks/1 Description of problem: I needed 'sql select' code in a custom row-mydatabase-mytable.html template to look up the series name for a foreign key integer value in the talks table. The code below worked perfectly in v0.53 (just the relevant sql statement part is shown; full code is here):
In v0.54, that code resulted in a 500 error with a 'no such table series' message. A second query in that template also did not work but the above is fully illustrative of the problem. All templates were up-to-date along with datasette v0.54. Workaround: After fiddling around with trying different things, what worked was the syntax from Querying a different database from the datasette-template-sql github repo to add the database name to the sql statement:
Though this was found to work, it should not be necessary to add the database name. |
datasette 107914493 | issue |
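The workaround appears to rely on datasette-template-sql's optional database= argument; a sketch of the template fragment, shown here as a Python string, with hypothetical table and column names standing in for the reporter's:
```python
# Hypothetical Jinja fragment using datasette-template-sql's sql() function
# with an explicit database name, per the "Querying a different database"
# section of that plugin's README.
TEMPLATE_FRAGMENT = """
{% for row in sql(
    "select series from talks where id = :id",
    {"id": 1},
    database="cosmotalks"
) %}{{ row.series }}{% endfor %}
"""
```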
797728929 | MDU6SXNzdWU3OTc3Mjg5Mjk= | 8 | QUESTION: extract full text | darribas 417363 | open | 0 | 0 | 2021-01-31T14:50:10Z | 2021-01-31T14:50:10Z | NONE | This may already be solved or an existing feature, but I couldn't figure it out: is it possible to extract and store the full text of the saved pages as well? The same way that Pocket parses the text, it'd be amazing to be able to store (and thus make searchable later) the text. Thank you very much for the project, it's such an amazing idea! |
pocket-to-sqlite 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
801780625 | MDU6SXNzdWU4MDE3ODA2MjU= | 9 | SSL Error | jfeiwell 12669260 | open | 0 | 2 | 2021-02-05T02:12:56Z | 2021-02-07T18:45:04Z | NONE | Here's the error I get when running
Does this require python 3? |
pocket-to-sqlite 213286752 | issue | { "url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
802513359 | MDU6SXNzdWU4MDI1MTMzNTk= | 1217 | Possible to deploy as a python app (for Rstudio connect server)? | plpxsk 6165713 | open | 0 | 4 | 2021-02-05T22:21:24Z | 2022-11-04T11:37:52Z | NONE | Is it possible to deploy a In my enterprise, I have option to deploy python apps via Rstudio Connect, and I would like to publish a I welcome any pointers to converting |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1217/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
803333769 | MDU6SXNzdWU4MDMzMzM3Njk= | 32 | KeyError: 'Contents' on running upload | robmarkcole 11855322 | open | 0 | 3 | 2021-02-08T08:36:37Z | 2021-07-22T06:40:25Z | NONE | Following the readme, on big sur, and having entered my auth creds via
```
(venv) (base) Robins-MacBook:datasette robin$ dogsheep-photos upload photos.db ~/Pictures/Photos\ /Users/robin/Pictures/Library.photoslibrary --dry-run
Fetching existing keys from S3...
Traceback (most recent call last):
  File "/Users/robin/datasette/venv/bin/dogsheep-photos", line 8, in <module>
    sys.exit(cli())
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/dogsheep_photos/cli.py", line 96, in upload
    key.split(".")[0] for key in get_all_keys(client, creds["photos_s3_bucket"])
  File "/Users/robin/datasette/venv/lib/python3.8/site-packages/dogsheep_photos/utils.py", line 46, in get_all_keys
    for row in page["Contents"]:
KeyError: 'Contents'
```
Possibly since the bucket is in |
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
803338729 | MDU6SXNzdWU4MDMzMzg3Mjk= | 33 | photo-to-sqlite: command not found | robmarkcole 11855322 | open | 0 | 4 | 2021-02-08T08:42:57Z | 2021-02-12T15:00:44Z | NONE | Having installed in a venv I get: ``` (venv) (base) Robins-MacBook:datasette robin$ photo-to-sqlite apple-photos photos.db -bash: photo-to-sqlite: command not found ``` |
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
803356942 | MDU6SXNzdWU4MDMzNTY5NDI= | 1218 | /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory | robmarkcole 11855322 | open | 0 | 1 | 2021-02-08T09:07:00Z | 2021-02-23T12:12:17Z | NONE | Error as above, however I do have python3.8 and the readme indicates this is supported. ``` (venv) (base) Robins-MacBook:datasette robin$ ls /usr/local/opt/python3/bin/ .. pip3 python3 python3.8 ``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
808771690 | MDU6SXNzdWU4MDg3NzE2OTA= | 1225 | More flexible formatting of records with CSS grid | mhalle 649467 | open | 0 | 0 | 2021-02-15T19:28:17Z | 2021-02-15T19:28:35Z | NONE | In several applications I've been experimenting with alternate formatting of datasette query results. Lately I've found that CSS grids work very well and seem quite general for formatting rows. In CSS I use grid templates to define the layout of each record and the regions for each field, hiding the fields I don't want. It's pretty flexible and looks good. It's also a great basis for highly responsive layout. I initially thought I'd only use this feature for record detail views, but now I use it for index views as well. However, there are some limitations:
* With the existing table templates, it seems that you can change the It would be helpful to at least have an official example or test that used a grid layout for records to make sure nothing in datasette breaks with it. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
811054000 | MDU6SXNzdWU4MTEwNTQwMDA= | 1230 | Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages | Kabouik 7107523 | open | 0 | 1 | 2021-02-18T12:27:02Z | 2021-02-18T15:22:15Z | NONE | I filtered a data set on some criteria and obtained 265 results, split over three pages (100, 100, 65), and realized that Vega plots are only applied to the results displayed on the current page, instead of the whole filtered data, e.g., 100 on page 1, 100 on page 2, 65 on page 3. Is there a way to force the graphs to consider all results instead of just the page, considering that pages rarely represent sensible information? Likewise, while the cluster map does show all results on the first page, if you go to the next pages, it will show all remaining results except the previous page(s), e.g., 265 on page 1, 165 on page 2, 65 on page 3. In both cases, I don't see many situations where one would like to represent the data this way, and it might even lead to interpretation errors when viewing the data. Am I missing some cases where this would be best? Perhaps a clickable option to subset visual representations according to visible pages vs. displaying all search results would do? [Edit] Oh, I just saw the "Load all" button under the cluster map as well as the setting to alter the max number of results. So I guess this issue is only about the Vega charts. |
datasette 107914493 | issue |
814595021 | MDU6SXNzdWU4MTQ1OTUwMjE= | 1241 | Share button for copying current URL | Kabouik 7107523 | open | 0 | 6 | 2021-02-23T15:55:40Z | 2023-08-24T20:09:52Z | NONE | I use datasette in an This particular use prevents users from accessing the full URLs of their datasette views and queries, which is a shame because the way datasette handles URLs to make every view or query easy to share is awesome. I know how to get the URL from the context menu of my browser, but I don't think many visitors would do it, or even notice that datasette uses permalinks for pretty much every action they do. Would it be possible to add a "Share link" button to the interface, either in datasette itself or in a plugin? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
818684978 | MDU6SXNzdWU4MTg2ODQ5Nzg= | 243 | How can i use this utils to deal with fts on column meta of tables ? | svjack 27874014 | open | 0 | 0 | 2021-03-01T09:45:05Z | 2021-03-01T09:45:05Z | NONE | Thank you for releasing this bravo project. When I use this project on a multi-table db, I want to implement convenient search on column names from different tables. I want to develop a meta table that saves the metadata of the different columns of the different tables, and search on this meta table to get rows from the data table that the meta table describes. Does this project provide some simple function for this? You can think of it as having a knowledge graph about the tables in the db, and saving this knowledge graph into the db with FTS enabled. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
824750134 | MDU6SXNzdWU4MjQ3NTAxMzQ= | 1251 | facet option not appearing when table is big | verajosemanuel 15836677 | open | 0 | 0 | 2021-03-08T16:54:04Z | 2021-03-08T16:54:16Z | NONE | I have a big table with more than 500,000 rows. Trying to facet by one of my columns, the options are not available as they are for the other, smaller tables. I have tried to set it in the URL as:
to no avail. Is there any limit? How can I force the "facet" option to appear for big tables? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
826064552 | MDU6SXNzdWU4MjYwNjQ1NTI= | 1253 | Capture "Ctrl + Enter" or "⌘ + Enter" to send SQL query? | rayvoelker 9308268 | open | 0 | 1 | 2021-03-09T15:00:50Z | 2021-10-30T16:00:42Z | NONE | It appears as though "Shift + Enter" triggers the form submit action to submit SQL, but could that action be bound to the "Ctrl + Enter" or "⌘ + Enter" action? I feel like that pattern already exists in a number of similar tools and could improve usability of the editor. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
826700095 | MDU6SXNzdWU4MjY3MDAwOTU= | 1255 | Facets timing out but work when filtering | robroc 1219001 | open | 0 | 2 | 2021-03-09T22:01:39Z | 2021-04-02T20:50:08Z | NONE | System info: Windows 10 Datasette 0.55 installed via pip Python 3.8.5 in a conda environment I'm getting the message |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1255/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
828858421 | MDU6SXNzdWU4Mjg4NTg0MjE= | 1258 | Allow canned query params to specify default values | wdccdw 1385831 | open | 0 | 5 | 2021-03-11T07:19:02Z | 2023-02-20T23:39:58Z | NONE | If I call a canned query that includes named parameters without passing any parameters, datasette runs the query anyway, resulting in an HTTP status code 400 and a visible error in the browser, with only a link back to home. This means that one of the default links on https://site/database/ will lead to a broken page with no apparent way out. Is there any way to skip performing the query when parameters aren't supplied, but otherwise render the usual canned query page? Alternatively, can I supply default values for my parameters, either when defining my canned queries or when linking to the canned query page from the default database template? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1258/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
830283447 | MDU6SXNzdWU4MzAyODM0NDc= | 34 | bucket name | dsisnero 6213 | open | 0 | 0 | 2021-03-12T16:40:57Z | 2021-03-12T16:40:57Z | NONE | I followed the instructions to set up credentials, but I am getting an invalid bucket name. Can you put a sample auth.json file in the base that shows the correct format for this? Thanks |
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/34/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
834602299 | MDU6SXNzdWU4MzQ2MDIyOTk= | 1262 | Plugin hook that could support 'order by random()' for table view | henry501 19328961 | open | 0 | 3 | 2021-03-18T10:02:01Z | 2021-03-18T17:55:01Z | NONE | I am frequently using Datasette to quickly get a visual impression for a table without reviewing it in its entirety. Because I have some groups of similar records, the default sorting options mean that each page is very similar and not representative of the full dataset. The current interface allows sorting by columns, but random sorting is only available via custom SQL. Maybe this could be a button or link. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1262/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
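For reference, the custom-SQL workaround mentioned above, sketched with sqlite3 and a hypothetical table name; in Datasette the same statement can be pasted into the SQL editor:
```python
import sqlite3

conn = sqlite3.connect("data.db")
# Sample ten rows in random order rather than the default sort:
rows = conn.execute(
    "select * from mytable order by random() limit 10"
).fetchall()
```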
836063389 | MDU6SXNzdWU4MzYwNjMzODk= | 17 | Datetime columns are not properly formatted to be recognizes as datetime | n8henrie 1234956 | open | 0 | 0 | 2021-03-19T14:33:04Z | 2021-03-19T14:33:04Z | NONE | Currently, the datetimes are formatted in a way that is not recognized by datasette-vega for plotting with a For example, if you have datasette running locally with
The plot is blank unless you choose The If instead the format for this column is changed slightly: I have a PR that addresses this issue, will submit shortly. |
healthkit-to-sqlite 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/17/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
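A sketch of the kind of reformatting the reporter describes, assuming a HealthKit-style timestamp with a space-separated UTC offset; ISO 8601 output is what charting tools generally detect as a datetime:
```python
from datetime import datetime

raw = "2019-11-08 05:01:00 -0700"  # assumed input format
dt = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S %z")
print(dt.isoformat())  # 2019-11-08T05:01:00-07:00
```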
836829560 | MDU6SXNzdWU4MzY4Mjk1NjA= | 248 | support for Apache Arrow / parquet files I/O | mhalle 649467 | open | 0 | 1 | 2021-03-20T14:59:30Z | 2021-10-28T23:46:48Z | NONE | I just started looking at Apache Arrow using pyarrow for import and export of tabular datasets, and it looks quite compelling. It might be worth looking at for sqlite-utils and/or datasette. As a test, I took a random jsonl data dump of a dataset I have with floats, strings, and ints and converted it to arrow's parquet format using the naive The only hangup is the automatic type inference of the naive reader. It's great for general laziness and for parsing JSON columns (it correctly interpreted a table of mine with a JSON array). However, I did get an exception for a string column where most entries looked integer-like but had a couple values that weren't -- the reader tried to coerce all of them for some reason, even though the JSON type is string. Since the writer optionally takes a schema, it shouldn't be too hard to grab the sqlite header types. With some additional hinting, you might get datetime columns and JSON, which are native Arrow types. Somewhat tangentially, someone even wrote an sqlite vfs extension for Parquet: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/248/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
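A sketch of the conversion described, using pyarrow's type-inferring newline-delimited JSON reader; file names are hypothetical, and the schema hinting mentioned above would go through the reader's options:
```python
import pyarrow.json as pa_json
import pyarrow.parquet as pq

# Types are inferred automatically, which is the behavior discussed above.
table = pa_json.read_json("dump.jsonl")
pq.write_table(table, "dump.parquet")
```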
842695374 | MDU6SXNzdWU4NDI2OTUzNzQ= | 35 | Support to annotate photos on other than macOS OSes | ligurio 1151557 | open | 0 | 1 | 2021-03-28T09:01:25Z | 2021-04-05T07:37:57Z | NONE | dogsheep-photos allows annotating photos using Apple Photos' db. It would be nice to have such ability on other OSes too. For example using a trained local model, or using the Google Vision API (see #14). |
dogsheep-photos 256834907 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/35/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
849512840 | MDU6SXNzdWU4NDk1MTI4NDA= | 1288 | Facets: show counts for null | jungle-boogie 1111743 | open | 0 | 0 | 2021-04-02T22:33:44Z | 2021-04-02T22:33:44Z | NONE | Hi, Thank you for Datasette and being a fan of SQLite! Not all rows in a record will always contain data. So when using a facet on a column where some records have data and others don't, you don't get an accurate count of the results. Please consider also counting and showing null records with facets. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1288/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
863884805 | MDU6SXNzdWU4NjM4ODQ4MDU= | 1304 | Document how to send multiple values for "Named parameters" | rayvoelker 9308268 | open | 0 | 4 | 2021-04-21T13:19:06Z | 2021-12-08T03:23:14Z | NONE | https://docs.datasette.io/en/stable/sql_queries.html#named-parameters I thought that I had seen an example of how to do this, but I can't seem to find it
Or, maybe this isn't a fully supported feature. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
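For comparison, the underlying SQLite pattern: no single named parameter expands to multiple values, so the placeholders for an IN clause have to be built up, as in this sketch with hypothetical table and values:
```python
import sqlite3

values = ["alpha", "beta", "gamma"]
placeholders = ", ".join("?" for _ in values)  # "?, ?, ?"
conn = sqlite3.connect("data.db")
rows = conn.execute(
    f"select * from mytable where category in ({placeholders})",
    values,
).fetchall()
```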
870946764 | MDU6SXNzdWU4NzA5NDY3NjQ= | 1312 | how to query many-to-many relationship via json API? | bram2000 5268174 | open | 0 | 0 | 2021-04-29T12:09:49Z | 2021-04-29T12:09:49Z | NONE | Hi, Firstly thanks for Datasette, it's great! I'm trying to use the JSON API to query data from a Datasette instance. I have a simple 3 table many-to-many relationship, like so:
the Now I want to return "all documents within category X" but I cannot see a way to do this without executing two queries; the first to lookup the row_id of category X, and the second to join I could easily write this in SQL, but this makes programmatic handling of pagination much more difficult (we'd have to dynamically modify the SQL to select the row_id and include the correct where and limit clauses). Is there a way to achieve this using the JSON API? |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
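A sketch of the two-request approach the reporter wants to avoid, against Datasette's JSON API; the host, table, and column names are hypothetical:
```python
import json
from urllib.request import urlopen

base = "https://example.com/mydb"

# 1. Look up the id of category X.
cats = json.load(urlopen(base + "/categories.json?name__exact=X&_shape=array"))
cat_id = cats[0]["id"]

# 2. Fetch the document ids linked to it through the junction table.
links = json.load(
    urlopen(base + f"/document_categories.json?category_id={cat_id}&_shape=array")
)
doc_ids = [row["document_id"] for row in links]
```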
907795562 | MDU6SXNzdWU5MDc3OTU1NjI= | 265 | Using enable_fts before search term | prabhur 36287 | open | 0 | 1 | 2021-06-01T01:43:34Z | 2023-04-01T17:27:18Z | NONE | Many thanks for the sqlite-utils suite of utilities. It has made my life much, much easier. I used this to create a table and enable FTS. All works fine. The datasette utility detects FTS and shows a text box. Searching for a term using that interface works well. However, when I start to use features by following https://www.sqlite.org/fts5.html section "3. Full-text Query Syntax" I seem to run into issues that I suspect are due to As an example, if I search for the term Similarly, when I try to restrict the search to a single column in FTS using a spec like
Any ideas why? How can I get the benefits of both escaping as well as utilizing different facets of providing / controlling search terms? Thanks. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
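A minimal FTS5 sketch of the column-restricted syntax in question, with a hypothetical table and data; in Datasette, queries like this reach FTS unescaped only with _searchmode=raw:
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create virtual table docs using fts5(title, body)")
conn.execute("insert into docs values ('sqlite utils', 'cli and library')")
# Restrict the match to the title column with FTS5 query syntax:
rows = conn.execute(
    "select * from docs where docs match 'title : sqlite'"
).fetchall()
print(rows)
```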
915421499 | MDU6SXNzdWU5MTU0MjE0OTk= | 267 | row.update() or row.pk | Gravitar64 12721157 | open | 0 | 4 | 2021-06-08T19:56:00Z | 2021-06-22T17:27:27Z | NONE | Hi, fantastic framework for working with Sqlite3 databases!!! I tried to update specific rows in a table and used for row in db[tablename]:
newValue = row["counter"] * row["prize"] This updates the value in the printed row, but not in the database. So I switched to db[tablename].update(id, {"Filedname": newValue}) This works fine. But row.update would be nicer, because there's no need for the id (it's that row), no need for the tablename and the db (all defined in the for row ... loop). Thx |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
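The working pattern from the issue, shown end to end with sqlite-utils; the table and column names follow the reporter's example:
```python
from sqlite_utils import Database

db = Database("data.db")
table = db["tablename"]
# Materialize the rows first, then write updates back by primary key:
for row in list(table.rows):
    table.update(row["id"], {"Filedname": row["counter"] * row["prize"]})
```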
919822817 | MDU6SXNzdWU5MTk4MjI4MTc= | 1376 | Official Datasette Docker image should use SQLite >= 3.31.0 (for generated columns) | jcgregorio 1726460 | open | 0 | 3 | 2021-06-13T15:25:51Z | 2021-06-13T15:39:37Z | NONE | Trying to run datasette via the Docker container doesn't seem to work:
I have confirmed that the downloaded fixtures.db file is valid:
```
[skia-public] jcgregorio@jcgregorio840 ~/Downloads $ sqlite3 fixtures.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> pragma integrity_check;
ok
sqlite>
```
|
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
920636216 | MDU6SXNzdWU5MjA2MzYyMTY= | 64 | feature: support "events" | khimaros 231498 | open | 0 | 5 | 2021-06-14T17:42:49Z | 2021-06-15T00:48:37Z | NONE | the GitHub API provides the ability to fetch all events for a given user, organization, or repository: https://docs.github.com/en/rest/reference/activity#list-events-for-the-authenticated-user this would allow users to export all of the issue comments, new issues, etc. that they created. something which is currently missing from the GitHub takeout exports. |
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
924748955 | MDU6SXNzdWU5MjQ3NDg5NTU= | 1380 | Serve all db files in a folder | stratosgear 193463 | open | 0 | 5 | 2021-06-18T10:03:32Z | 2021-11-13T08:09:11Z | NONE | I tried to get the In more detail:
Is there an option/setting that I overlooked, or is this something missing? BTW, the Thanks! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1380/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
927385540 | MDU6SXNzdWU5MjczODU1NDA= | 8 | any guidance / experience on imessage-to-sqlite ? | Casyfill 2675621 | open | 0 | 0 | 2021-06-22T15:46:16Z | 2021-06-22T15:46:16Z | NONE | dogsheep.github.io 214746582 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
930946817 | MDU6SXNzdWU5MzA5NDY4MTc= | 7 | KeyError: 'accuracy' when processing Location History | davidwilemski 403152 | open | 0 | 0 | 2021-06-27T14:39:43Z | 2021-06-27T14:39:43Z | NONE | I'm new to both the dogsheep tools and datasette but have been experimenting a bit the last few days and these are really cool tools! I encountered a problem running my Google location history through this tool running the latest release in a docker container:
It looks like the tool assumes the accuracy field is always present, but my Location History contains entries where it is missing. My first attempt at a local patch to get myself going was to convert the direct key access into an optional lookup. Now that I've done a successful import run using my initial fix, I'm happy to provide a PR for a fix but figured I'd ask about which direction is preferred first. |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
951817328 | MDU6SXNzdWU5NTE4MTczMjg= | 12 | 403 when getting token | treyhunner 285352 | open | 0 | 1 | 2021-07-23T18:43:26Z | 2021-10-12T18:31:57Z | NONE | I tried to use https://your-foursquare-oauth-token.glitch.me/ to get my Swarm auth token and got a 403 after I clicked the Allow button: I'm not sure if this is the right repo to report this in |
swarm-to-sqlite 205429375 | issue | { "url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
977128935 | MDU6SXNzdWU5NzcxMjg5MzU= | 21 | Duplicate Column | FabianHertwig 32016596 | open | 0 | 1 | 2021-08-23T15:00:44Z | 2021-08-23T17:00:59Z | NONE | Hey, thank you for this repo! When I try to convert my export, I get a multiple column error. Here is the stack trace:
|
healthkit-to-sqlite 197882382 | issue | { "url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/21/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
982803408 | MDU6SXNzdWU5ODI4MDM0MDg= | 1454 | Feature Request: Publish to IPFS | blitmap 1560788 | open | 0 | 0 | 2021-08-30T13:36:18Z | 2021-08-30T13:36:18Z | NONE | Hello, I am a huge fan of this being used for exploring data. I think it has a lot of flexibility not found in other tools. I'm not sure if what I'm asking for is possible: Can this be extended to publish to IPFS? IPFS is an attractive hosting option for decentralized journalism. Food for thought ~ |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
983221851 | MDU6SXNzdWU5ODMyMjE4NTE= | 34 | Data folder as index command parameter | humrochagf 1223625 | open | 0 | 0 | 2021-08-30T21:29:33Z | 2021-08-30T21:29:33Z | NONE | Hi, First of all, thank you for this wonderful project :smile: I started to use dogsheep to make my personal data searchable, and while using the project I noticed an issue with the index command: it always expects you to be running it from the folder where the data is located, so I got some errors while trying to make it work on my setup. I separate all databases inside a Before, I configured
And running the index command like this:
It worked for the normal search feature with no problem this way, but when I started adding So my workaround to that was to cd into the data folder and run the indexer. You can check the way I'm doing it at this line of the makefile: https://github.com/humrochagf/my-dogsheep/blob/main/makefile#L3 It works but it would be nice to have an option to pass the path where the data is located to the index function. |
dogsheep-beta 197431109 | issue | { "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/34/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
986829194 | MDU6SXNzdWU5ODY4MjkxOTQ= | 14 | xml.etree.ElementTree.Parse Error - mismatched tag | step21 46968 | open | 0 | 1 | 2021-09-02T14:46:36Z | 2021-09-02T14:53:11Z | NONE | This is an error message I get upon parsing the enex file of my Inbox. Please find the full error message below. Any hints welcome.
|
evernote-to-sqlite 303218369 | issue | { "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1010112818 | I_kwDOBm6k_c48NRky | 1479 | Win32 "used by another process" error with datasette publish | kirajano 76450761 | open | 0 | 7 | 2021-09-28T19:12:00Z | 2023-09-07T02:14:16Z | NONE | I was unfortunately not successful in deploying to fly.io. Please see below the details of the three scenarios that I tried. I am also new to Datasette. Failed to deploy; attaching logs:
1. Tried with an app created via

```
Error error connecting to docker: An unknown error occured.

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py", line 7, in <module>
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette_publish_fly\__init__.py", line 156, in fly
    "--remote-only",
  File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py", line 451, in temporary_docker_directory
    tmp.cleanup()
  File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup
    _shutil.rmtree(self.name)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpgcm8cz66\frosty-fog-8565'
```
```
Error not possible to validate configuration: server returned Post "https://api.fly.io/graphql": unexpected EOF

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
```

These are also the contents of the generated .toml file in the 2nd scenario:

```toml
# fly.toml file generated for dark-feather-168 on 2021-09-28T20:35:44+02:00

app = "dark-feather-168"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[env]

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 6
    timeout = "2s"
```
```
[+] Building 147.3s (11/11) FINISHED
 => [internal] load build definition from Dockerfile                    0.2s
 => => transferring dockerfile: 396B                                    0.0s
 => [internal] load .dockerignore                                       0.1s
 => => transferring context: 2B                                         0.0s
 => [internal] load metadata for docker.io/library/python:3.8           4.7s
 => [auth] library/python:pull token for registry-1.docker.io           0.0s
 => [internal] load build context                                       0.1s
 => => transferring context: 82.37kB                                    0.0s
 => [1/5] FROM docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3  108.3s
 => => resolve docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5  0.0s
 => => sha256:56182bcdf4d4283aa1f46944b4ef7ac881e28b4d5526720a4e9ba03a4730846a 2.22kB / 2.22kB  0.0s
 => => sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 54.93MB / 54.93MB  21.6s
 => => sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 10.87MB / 10.87MB  15.5s
 => => sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5c5b6 1.86kB / 1.86kB  0.0s
 => => sha256:ff08f08727e50193dcf499afc30594c47e70cc96f6fcfd1a01240524624264d0 8.65kB / 8.65kB  0.0s
 => => sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 5.15MB / 5.15MB  6.4s
 => => sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 54.57MB / 54.57MB  37.7s
 => => sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 196.45MB / 196.45MB  82.3s
 => => sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 6.29MB / 6.29MB  26.6s
 => => extracting sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366  5.4s
 => => sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 16.81MB / 16.81MB  40.5s
 => => extracting sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8  0.6s
 => => extracting sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64  0.6s
 => => sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 232B / 232B  40.1s
 => => extracting sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587  5.8s
 => => sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 2.35MB / 2.35MB  41.8s
 => => extracting sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077  18.8s
 => => extracting sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58  1.2s
 => => extracting sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26  2.9s
 => => extracting sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b  0.0s
 => => extracting sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543  0.8s
 => [2/5] COPY . /app                                                   2.3s
 => [3/5] WORKDIR /app                                                  0.2s
 => [4/5] RUN pip install -U datasette                                  26.9s
 => [5/5] RUN datasette inspect covid.db --inspect-file inspect-data.json  3.1s
 => exporting to image                                                  1.2s
 => => exporting layers                                                 1.2s
 => => writing image sha256:b5db0c205cd3454c21fbb00ecf6043f261540bcf91c2dfc36d418f1a23a75d7a  0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py", line 7, in <module>
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\cli.py", line 283, in package
    call(args)
  File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py", line 451, in temporary_docker_directory
    tmp.cleanup()
  File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup
    _shutil.rmtree(self.name)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpkb27qid3\datasette'
``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1479/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1049946823 | I_kwDOBm6k_c4-lOrH | 1502 | Full-text search: No support to unary "-" operator | gustavorps 516827 | open | 0 | 0 | 2021-11-10T15:11:19Z | 2021-11-10T15:11:19Z | NONE | datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
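For context, FTS5 has no unary "-" operator; exclusion is spelled with `NOT`. A quick sketch against an in-memory database (requires an SQLite build with FTS5 enabled; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE VIRTUAL TABLE docs USING fts5(body);
    INSERT INTO docs (body) VALUES ('apple pie'), ('apple banana');
""")

# FTS5 spells exclusion with NOT, not with a leading "-"
print(conn.execute(
    "SELECT body FROM docs WHERE docs MATCH 'apple NOT banana'"
).fetchall())  # [('apple pie',)]

try:
    conn.execute("SELECT body FROM docs WHERE docs MATCH 'apple -banana'")
except sqlite3.OperationalError as ex:
    print(ex)  # fts5: syntax error near "-"
```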
|||||||||
1063982712 | I_kwDODEm0Qs4_axZ4 | 60 | Execution on Windows | bernard01 1733616 | open | 0 | 1 | 2021-11-26T00:24:34Z | 2022-10-14T16:58:27Z | NONE | My installation on Windows using pip has been successful. I have Python 3.6. How do I run twitter-to-sqlite? I cannot even figure out how "auth" is a command. I have python on my path: C:\prog\python\Python36;C:\prog\python\Python36\Scripts Where should the commands be executed, and where are the files created? Could some basics please be added to the documentation to get beginners started? |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/60/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1072106103 | I_kwDOBm6k_c4_5wp3 | 1542 | feature request: order and dependency of plugins (that use js) | fs111 33631 | open | 0 | 1 | 2021-12-06T12:40:45Z | 2021-12-15T17:47:08Z | NONE | I have been playing with datasette for the last couple of weeks and it is great! I am a big fan of Basically, what I would like to have is a way to say: load my plugin after the plugins I depend on have been loaded and rendered. There seems to be no prior art for plugins having these dependencies at the JS level, so I was wondering if that could be added, or, if it already exists, how to do it. Basically what I want to do is: my-awesome-plugin has a dependency on datasette-cluster-map. Whenever datasette-cluster-map has finished rendering on page load, call my plugin, but no earlier. To make that work, datasette probably needs some total order in which plugins are loaded and initialized. Since I am new to datasette, I may be missing something obvious, so please let me know if the above makes no sense. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1077560091 | I_kwDODEm0Qs5AOkMb | 61 | Data Pull fails for "Essential" level access to the Twitter API (for Documentation) | jmnickerson05 57161638 | open | 0 | 1 | 2021-12-11T14:59:41Z | 2022-10-31T14:47:58Z | NONE | Per Twitter documentation: https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api#v2-access-level This isn't any fault of twitter-to-sqlite of course, but it should probably be documented as a side-note. And this is how I'm surfacing the message from utils.py: |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/61/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1082651698 | I_kwDOCGYnMM5Ah_Qy | 358 | Support for CHECK constraints | luxint 11597658 | open | 0 | 7 | 2021-12-16T21:19:45Z | 2022-09-25T07:15:59Z | NONE | Hi, I noticed the |
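Until native support lands, one workaround is to issue the `CREATE TABLE` containing the CHECK constraint as raw SQL through the same `Database` object and let sqlite-utils handle everything else; a sketch with hypothetical table and column names:

```python
from sqlite_utils import Database

db = Database("checks.db")

# sqlite-utils table helpers don't take CHECK constraints, but the
# underlying connection accepts any DDL
db.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id INTEGER PRIMARY KEY,
        quantity INTEGER CHECK (quantity > 0)
    )
""")

db["orders"].insert({"quantity": 3})    # fine
# db["orders"].insert({"quantity": -1})  # raises IntegrityError
```

Note that a later `.transform()` call would still rebuild the table and could drop the constraint, which is presumably the gap this issue is about.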
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1091257796 | I_kwDOBm6k_c5BC0XE | 1584 | give error with recursive sql | tunguyenatwork 58088336 | open | 0 | 0 | 2021-12-30T18:53:16Z | 2021-12-30T18:53:16Z | NONE | I got an error ("near "WITH": syntax error") after I upgraded from version 0.52.4 to 0.59. The error is related to recursive SQL: it works great on the previous version but fails after the upgrade. Below is an example of the SQL:

```sql
WITH RECURSIVE manager_of(position, super_position) AS (
  SELECT position,
         CASE ifnull(INDIRECT_SUPER_POSITION, '')
           WHEN '' THEN super_position
           ELSE INDIRECT_SUPER_POSITION
         END AS SUPER_POSITION
  FROM position
  WHERE super_position <> 'SGV000000001'
    AND super_position != ''
    AND position <> super_position
),
chain_manager_of_position(position, level) AS (
  SELECT super_position, 1 AS level
  FROM manager_of
  WHERE super_position != ''
    AND (position = :pos OR position IN (SELECT position FROM employee WHERE employee = :ein))
  UNION ALL
  SELECT super_position, level + 1 AS level
  FROM manager_of
  JOIN chain_manager_of_position USING (position)
)
SELECT *
FROM chain_manager_of_position
LEFT JOIN employee USING (position)
WHERE employee IS NOT NULL
ORDER BY level
LIMIT 1
``` |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1091850530 | I_kwDODEm0Qs5BFFEi | 63 | Import archive error 'withheld_in_countries' | pauloxnet 521097 | open | 0 | 0 | 2022-01-01T16:58:59Z | 2022-01-01T16:58:59Z | NONE | Importing the twitter archive I received this error:
I found only a single tweet with the key `withheld_in_countries` in my archive. I solved the error by removing the key from the |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/63/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1097332098 | I_kwDODEm0Qs5BZ_WC | 64 | Include all entities for tweets | max 111631 | open | 0 | 0 | 2022-01-09T23:35:28Z | 2022-01-09T23:35:28Z | NONE | Per our conversation on Twitter: It would be neat if all entities (including URLs) were captured. This way you can ensure, that URLs are parsed out exactly the same way Twitter parses URLs – we all know parsing URLs with a regex ain't fun. Right now, I believe the tool filters out all entities that are not of type |
twitter-to-sqlite 206156866 | issue | { "url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/64/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1121121305 | I_kwDOBm6k_c5C0vQZ | 1618 | Reconsider policy on blocking queries containing the string "pragma" | strada 770231 | open | 0 | 6 | 2022-02-01T19:39:46Z | 2022-02-02T19:42:03Z | NONE | First of all, thanks for creating this cool project, and also for supporting publishing to various hosting services out of the box. While testing it out, I noticed legitimate queries such as
Example as seen from a Datasette instance: https://fivethirtyeight.datasettes.com/polls?sql=select+*+from+books+where+title+like+%27Pragmatic%25%27%0D%0A I'd propose a regular expression like
I can create a pull request with this change, unless the maintainers think it would allow unwanted queries to be executed. |
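A hedged sketch of what a narrower pattern could look like: match `pragma` only as a standalone keyword or as a `pragma_...` function prefix, so that string values like 'Pragmatic%' pass through (the exact regex Datasette should adopt is of course the maintainers' call):

```python
import re

# Block "pragma" as a keyword (PRAGMA ..., pragma_table_info(...)) but not
# as a substring of a longer word such as "Pragmatic"
disallowed_pragma = re.compile(r"\bpragma(\b|_)", re.IGNORECASE)

for sql in (
    "select * from books where title like 'Pragmatic%'",  # allowed
    "PRAGMA schema_version",                               # blocked
    "select * from pragma_table_info('books')",           # blocked
):
    verdict = "blocked" if disallowed_pragma.search(sql) else "allowed"
    print(verdict, "->", sql)
```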
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1123393829 | I_kwDODFE5qs5C9aEl | 10 | sqlite3.OperationalError: no such table: main.my_activity | glxblt14 69208826 | open | 0 | 1 | 2022-02-03T17:59:29Z | 2022-03-20T02:38:07Z | NONE | Hello,
When I run the command |
google-takeout-to-sqlite 206649770 | issue | { "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1128466114 | I_kwDOCGYnMM5DQwbC | 406 | Creating tables with custom datatypes | psychemedia 82988 | open | 0 | 5 | 2022-02-09T12:16:31Z | 2022-09-15T18:13:50Z | NONE | Via https://stackoverflow.com/a/18622264/454773 I note the ability to register custom handlers for novel datatypes that can map into and out of things like sqlite BLOBs. From a quick look and a quick play, I didn't spot a way to do this in `sqlite_utils`?

For example:

```python
# Via https://stackoverflow.com/a/18622264/454773
import sqlite3
import numpy as np
import io

def adapt_array(arr):
    """
    http://stackoverflow.com/a/31312102/190597 (SoulNibbler)
    """
    out = io.BytesIO()
    np.save(out, arr)
    out.seek(0)
    return sqlite3.Binary(out.read())

def convert_array(text):
    out = io.BytesIO(text)
    out.seek(0)
    return np.load(out)

# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)

# Converts TEXT to np.array when selecting
sqlite3.register_converter("array", convert_array)
```

```python
from sqlite_utils import Database

db = Database('test.db')

# Reset the database connection to use the parsed datatype;
# sqlite_utils doesn't seem to support eg:
# Database('test.db', detect_types=sqlite3.PARSE_DECLTYPES)
db.conn = sqlite3.connect(db_name, detect_types=sqlite3.PARSE_DECLTYPES)

# Create a table the old fashioned way
# but using the new custom data type
vector_table_create = """
CREATE TABLE dummy
    (title TEXT, vector array);
"""

cur = db.conn.cursor()
cur.execute(vector_table_create)

# sqlite_utils doesn't appear to support custom types (yet?!)
# The following errors on the "array" datatype
"""
db["dummy"].create({
    "title": str,
    "vector": "array",
})
"""
```

We can then add / retrieve records from the database where the datatype of the `vector` column is our custom `array` type:

```python
import numpy as np

db["dummy"].insert({'title': "test1", 'vector': np.array([1, 2, 3])})

for row in db.query("SELECT * FROM dummy"):
    print(row['title'], row['vector'], type(row['vector']))

"""
test1 [1 2 3] <class 'numpy.ndarray'>
"""
```

It would be handy to be able to do this idiomatically in `sqlite_utils`. |
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1129052172 | I_kwDOBm6k_c5DS_gM | 1633 | base_url or prefix does not work with _exact match | henrikek 6613091 | open | 0 | 2 | 2022-02-09T21:45:07Z | 2022-04-28T09:12:56Z | NONE | When I hit the "Apply" button to search with the "_exact" syntax for a column, the URL prefix is removed from the url. And the result is: If I add the marked row to url_builder.py it seems to work: |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1148725876 | I_kwDOBm6k_c5EeCp0 | 1640 | Support static assets where file length may change, e.g. logs | broccolihighkicks 57859326 | open | 0 | 2 | 2022-02-24T00:34:42Z | 2022-03-05T01:19:25Z | NONE | This is a bit of an oxymoron. I am serving a log.txt file for a background process using the Datasette --static CLI. This is useful as I can observe a background process from the web UI to see any errors that occur (instead of spelunking the logs via docker exec/ssh etc). I get this error, which I think is because Datasette assumes that the size of the content does not change (but appending new log lines means the content length changes).
Thanks, I am finding Datasette very useful. |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1154399841 | I_kwDOBm6k_c5Ezr5h | 1645 | Sensible `cache-control` headers for static assets, including those served by plugins | curiousleo 697092 | open | 0 | Datasette 1.0 3268330 | 4 | 2022-02-28T18:12:03Z | 2022-03-08T02:59:29Z | NONE | What I'm seeing

With my current settings, a table view returns a `cache-control` header, while a static asset returns no `cache-control` header at all.

What I expected to see

I expected the static asset to return a `cache-control` header as well.

Why this matters

I'm productionising a Datasette deployment right now and was looking into putting it behind a Varnish instance. I was surprised to see requests for static assets being served from Datasette rather than Varnish; this is what led me to look more closely at the response headers. While Datasette serves those static assets pretty quickly, I don't see why Datasette should serve them. By their nature, static assets like images and JS files are very cacheable, so it should be easy to serve them from a cache like Varnish. (Note that Varnish can easily be configured to override this header, enabling caching for static assets. But it would be better if this override was not necessary.)

Discussion

It seems clear to me that serving static assets without a `cache-control` header is not ideal. I see two options here:

A. Static assets use the same logic as table / SQL views to set the `cache-control` header. |
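Until Datasette grows a setting for this, one stopgap could be a tiny plugin using the `asgi_wrapper` hook to stamp a `cache-control` header onto static responses; a sketch, with the path prefix and max-age chosen arbitrarily:

```python
from datasette import hookimpl

@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def add_cache_headers(scope, receive, send):
            async def wrapped_send(message):
                if (
                    message["type"] == "http.response.start"
                    and scope.get("path", "").startswith("/-/static")
                ):
                    # Drop any existing cache-control header, then set ours
                    headers = [
                        (key, value)
                        for key, value in message.get("headers", [])
                        if key.lower() != b"cache-control"
                    ]
                    headers.append((b"cache-control", b"public, max-age=86400"))
                    message = dict(message, headers=headers)
                await send(message)

            await app(scope, receive, wrapped_send)

        return add_cache_headers

    return wrap
```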
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1645/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
|||||||
1174655187 | I_kwDOBm6k_c5GA9DT | 1671 | Filters fail to work correctly against calculated numeric columns returned by SQL views because type affinity rules do not apply | rayvoelker 9308268 | open | 0 | 8 | 2022-03-20T19:17:24Z | 2022-03-22T17:43:12Z | NONE | I found a strange behavior, and I'm not sure if it's related to views and boolean values perhaps, or if there's something else weird going on here, but I'll provide an example that may help show what I'm seeing happen.

```bash
#!/bin/bash

echo "\"id\",\"expiration_date\"
0,2018-01-04
1,2019-01-05
2,2020-01-06
3,2021-01-07
4,2022-01-08
5,2023-01-09
6,2024-01-10
7,2025-01-11
8,2026-01-12
9,2027-01-13
" > test.csv

csvs-to-sqlite test.csv test.db

sqlite-utils create-view --replace test.db test_view \
  "select id, expiration_date, case when julianday('NOW') >= julianday(expiration_date) then 1 else 0 end as has_expired FROM test"
```
Thanks again and let me know if you want me to provide anything else! |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1181037277 | I_kwDOBm6k_c5GZTLd | 1686 | heroku bails if app name specifed in datasette publish is the same as existing app | tlongers 2115933 | open | 0 | 0 | 2022-03-25T17:10:34Z | 2022-03-25T17:10:34Z | NONE | It seems that
The resulting error has the below traceback:
It's a solid failsafe, but does |
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1205867842 | I_kwDODtX3eM5H4BVC | 4 | Retrieve the top-level story for a comment | telotortium 1755789 | open | 0 | 0 | 2022-04-15T20:25:39Z | 2022-04-15T20:25:39Z | NONE | I think that each comment inserted into the database should include a column |
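In the meantime, the story ID for any comment can be recovered by walking `parent` pointers through the public HN Firebase API; a short sketch (a client-side workaround, not part of hacker-news-to-sqlite itself):

```python
import requests

API = "https://hacker-news.firebaseio.com/v0/item/{}.json"

def top_level_story(item_id):
    """Follow parent pointers until reaching an item with no parent,
    which for a comment is the story it hangs off."""
    while True:
        item = requests.get(API.format(item_id)).json()
        if "parent" not in item:
            return item["id"]
        item_id = item["parent"]
```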
hacker-news-to-sqlite 248903544 | issue | { "url": "https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/4/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1211283427 | I_kwDODFdgUs5IMrfj | 72 | feature: display progress bar when downloading multi-page responses | hydrosquall 9020979 | open | 0 | 1 | 2022-04-21T16:37:12Z | 2022-04-21T17:29:31Z | NONE | Motivation

For a long-running command (longer than 1 minute) against a big table (like pull requests or commits), it can be tricky to know whether the script is still running or whether a rate limit/error was encountered. We know how many pages there are, so it may be possible to indicate how many remain (a sketch follows below).

Resources
|
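Since the total page count is known once the first response arrives, a generic sketch of how a click progress bar could wrap the pagination loop; `fetch_page`, `num_pages` and `save` are placeholders, not github-to-sqlite internals:

```python
import click

def save_paginated(fetch_page, num_pages, save):
    # Render a progress bar over the known number of pages
    with click.progressbar(range(1, num_pages + 1), label="Fetching pages") as pages:
        for page in pages:
            save(fetch_page(page))
```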
github-to-sqlite 207052882 | issue | { "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/72/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1221849746 | I_kwDOBm6k_c5I0_KS | 1732 | Custom page variables aren't decoded | tannewt 52649 | open | 0 | 2 | 2022-04-30T14:55:46Z | 2022-05-03T01:50:45Z | NONE | I have a page Datasette should unescape the url component before passing them into the template. |
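For reference, the decoding step itself is one standard-library call; presumably the fix is to apply it to each captured path value before template rendering:

```python
from urllib.parse import unquote

# A percent-encoded path segment should reach the template decoded
print(unquote("document%3Aimage.jpg"))  # -> document:image.jpg
```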
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1224112817 | I_kwDOCGYnMM5I9nqx | 430 | Document how to use `PRAGMA temp_store` to avoid errors when running VACUUM against huge databases | rayvoelker 9308268 | open | 0 | 2 | 2022-05-03T13:33:58Z | 2022-06-14T23:26:37Z | NONE | I'm trying to figure out a way to get the `extract()` calls below (and the VACUUM that follows) to complete against this large database.

Here's the bit that's causing the error, and the resulting error output:

```python
# combine these columns into 1 table "bib_properties":
# best_title
# bib_level_code
# mat_type
# material_code
# best_author
db["circ_trans"].extract(
    ["best_title", "bib_level_code", "mat_type", "material_code", "best_author"],
    table="bib_properties",
    fk_column="bib_properties_id"
)

db["circ_trans"].extract(
    ["call_number"],
    table="call_number",
    fk_column="call_number_id",
    rename={"call_number": "value"}
)
```

```
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
Input In [17], in <cell line: 7>()
      1 # combine these columns into 1 table "bib_properties" :
      2 # best_title
      3 # bib_level_code
      4 # mat_type
      5 # material_code
      6 # best_author
----> 7 db["circ_trans"].extract(
      8     ["best_title", "bib_level_code", "mat_type", "material_code", "best_author"],
      9     table="bib_properties",
     10     fk_column="bib_properties_id"
     11 )
     13 db["circ_trans"].extract(
     14     ["call_number"],
     15     table="call_number",
     16     fk_column="call_number_id",
     17     rename={"call_number": "value"}
     18 )

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1764, in Table.extract(self, columns, table, fk_column, rename)
   1761     column_order.append(c.name)
   1763 # Drop the unnecessary columns and rename lookup column
-> 1764 self.transform(
   1765     drop=set(columns),
   1766     rename={magic_lookup_column: fk_column},
   1767     column_order=column_order,
   1768 )
   1770 # And add the foreign key constraint
   1771 self.add_foreign_key(fk_column, table, "id")

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:1526, in Table.transform(self, types, rename, drop, pk, not_null, defaults, drop_foreign_keys, column_order)
   1524 with self.db.conn:
   1525     for sql in sqls:
-> 1526         self.db.execute(sql)
   1527 # Run the foreign_key_check before we commit
   1528 if pragma_foreign_keys_was_on:

File ~/jupyter/venv/lib/python3.10/site-packages/sqlite_utils/db.py:465, in Database.execute(self, sql, parameters)
    463     return self.conn.execute(sql, parameters)
    464 else:
--> 465     return self.conn.execute(sql)

OperationalError: database or disk is full
```

This database is about 17G in total size, so I'm assuming the error is coming from the vacuum, where I'm assuming it's trying to do the temp storage in a location that doesn't have sufficient room. The disk space is more than ample on the host in question (1.8T is free in the directory where the sqlite db resides). I'm trying to think if there's a way to set the temp storage location from here; this is what I tried:

```python
# SET the temp file store to be a file ...
print(db.execute('PRAGMA temp_store').fetchall())
print(db.execute('PRAGMA temp_store=FILE').fetchall())
print(db.execute('PRAGMA temp_store').fetchall())

# ... in the user's home directory
print(db.execute("PRAGMA temp_store_directory='/home/plchuser/'").fetchall())
print(db.execute("PRAGMA sqlite3_temp_directory='/home/plchuser/'").fetchall())
print(db.execute("PRAGMA temp_store_directory").fetchall())
print(db.execute("PRAGMA sqlite3_temp_directory").fetchall())
```
Here's the docs on the Temporary File Storage Locations https://www.sqlite.org/tempfiles.html |
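One more avenue worth documenting: on Unix-like systems SQLite also honours the SQLITE_TMPDIR environment variable when picking a temp-file location, and it can be set from Python before any temp files are created. A sketch, assuming the roomy filesystem is reachable at the path shown:

```python
import os

# Point SQLite's temp files at a filesystem with plenty of space;
# this must happen before the connection does any temp-heavy work
os.environ["SQLITE_TMPDIR"] = "/home/plchuser/sqlite-tmp"
os.makedirs(os.environ["SQLITE_TMPDIR"], exist_ok=True)

from sqlite_utils import Database

db = Database("data.db")
db["circ_trans"].extract(
    ["best_title", "bib_level_code", "mat_type", "material_code", "best_author"],
    table="bib_properties",
    fk_column="bib_properties_id",
)
```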
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1227571375 | I_kwDOCGYnMM5JK0Cv | 431 | Allow making m2m relation of a table to itself | rafguns 738408 | open | 0 | 3 | 2022-05-06T08:30:43Z | 2022-06-23T14:12:51Z | NONE | I am building a database, in which one of the tables has a many-to-many relationship to itself. As far as I can see, this is not (yet) possible using Example: suppose I have a table of people, and I want to store the information that John and Mary have two children, Michael and Suzy. It would be neat if I could do something like this: ```python from sqlite_utils import Database db = Database(memory=True) db["people"].insert({"name": "John"}, pk="name").m2m( "people", [{"name": "Michael"}, {"name": "Suzy"}], m2m_table="parent_child", pk="name" ) db["people"].insert({"name": "Mary"}, pk="name").m2m( "people", [{"name": "Michael"}, {"name": "Suzy"}], m2m_table="parent_child", pk="name" ) ``` But if I do that, the many-to-many table This could be solved by adding one or two keyword_arguments to |
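Until `.m2m()` learns about self-referential tables, the junction table can be written directly; a sketch reusing the people example above (the `parent`/`child` column names are a choice for illustration, not a library convention):

```python
from sqlite_utils import Database

db = Database(memory=True)
db["people"].insert_all(
    [{"name": n} for n in ("John", "Mary", "Michael", "Suzy")], pk="name"
)

# Hand-built self-referential junction table: both columns point at people
db["parent_child"].insert_all(
    [
        {"parent": parent, "child": child}
        for parent in ("John", "Mary")
        for child in ("Michael", "Suzy")
    ],
    pk=("parent", "child"),
    foreign_keys=[
        ("parent", "people", "name"),
        ("child", "people", "name"),
    ],
)
```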
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1236693079 | I_kwDOCGYnMM5JtnBX | 432 | Support `rows_where()`, `delete_where()` etc for attached alias databases | luxint 11597658 | open | 0 | 5 | 2022-05-16T06:38:58Z | 2022-06-14T22:16:48Z | NONE | Hi, I noticed Besides, |
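For the record, the raw-SQL path works against attached databases today even though the table helpers don't; a sketch assuming a second file `other.db` containing a `people` table:

```python
from sqlite_utils import Database

db = Database("main.db")
db.attach("second", "other.db")

# Helpers like rows_where() resolve table names in the main database only,
# but alias-qualified SQL reaches the attached one
rows = list(
    db.query("select * from second.people where name = :name", {"name": "John"})
)
```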
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1247315144 | I_kwDOBm6k_c5KWITI | 1749 | LDAP auth plugin | benswift 380241 | open | 0 | 0 | 2022-05-25T01:35:12Z | 2022-05-25T01:35:12Z | NONE | A search of the plugins directory doesn't turn up anything, but is it possible to set up a Datasette app which uses my organisation's LDAP for auth? If not, how much work would it be to write one? (I may have some spare cycles on my team to do this, but we haven't written a datasette plugin before.) |
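No such plugin appears to exist yet. As a very rough sketch of the shape one could take, using Datasette's `actor_from_request` hook together with the third-party ldap3 library; the server URL, DN template and header names are all placeholders, and a real plugin would want a login form and session handling:

```python
from datasette import hookimpl
from ldap3 import Connection, Server

LDAP_SERVER = Server("ldaps://ldap.example.org")
DN_TEMPLATE = "uid={username},ou=people,dc=example,dc=org"

def ldap_bind_ok(username, password):
    # A successful bind means the directory accepted the credentials
    conn = Connection(
        LDAP_SERVER,
        user=DN_TEMPLATE.format(username=username),
        password=password,
    )
    ok = conn.bind()
    conn.unbind()
    return ok

@hookimpl
def actor_from_request(datasette, request):
    # Credentials would come from an earlier step (e.g. a login form);
    # reading them from headers here is purely illustrative
    username = request.headers.get("x-username")
    password = request.headers.get("x-password")
    if username and password and ldap_bind_ok(username, password):
        return {"id": username}
```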
datasette 107914493 | issue | { "url": "https://api.github.com/repos/simonw/datasette/issues/1749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
||||||||
1250495688 | I_kwDOCGYnMM5KiQzI | 439 | Misleading progress bar against utf-16-le CSV input | frafra 4068 | open | 0 | 12 | 2022-05-27T08:34:49Z | 2022-06-15T03:53:43Z | NONE | The program crashes without any error.
|
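A practical workaround while this is open is to transcode the file to UTF-8 first, so byte counts and decoded content line up; a short sketch (recent sqlite-utils versions also have an `--encoding` option on `insert`, which may avoid the issue entirely):

```python
# Re-encode a UTF-16-LE CSV as UTF-8 before handing it to sqlite-utils,
# so the byte-based progress bar and the CSV reader agree
with open("export-utf16.csv", encoding="utf-16-le") as src, \
        open("export-utf8.csv", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(line)
```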
sqlite-utils 140912432 | issue | { "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);