issue_comments: 823961091
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/simonw/datasette/issues/173#issuecomment-823961091 | https://api.github.com/repos/simonw/datasette/issues/173 | 823961091 | MDEyOklzc3VlQ29tbWVudDgyMzk2MTA5MQ== | 3747136 | 2021-04-21T10:37:05Z | 2021-04-21T10:37:36Z | NONE | I have the feeling that 95% of the text visible to users lives in the template files (datasette/templates); the Python code mainly contains error messages. In the current situation, the best way to provide a localized frontend is to translate the templates and configure datasette to use them. I think I'm going to do it for French. If we want localization to be better integrated, gettext is the way to go for the Python code: the .po files can be translated with user-friendly tools such as Transifex and Crowdin. For the templates, I'm not sure how we could do it cleanly and in a way that is easy to maintain; maybe the tools above could parse HTML and detect the strings to be translated. In any case, implementing l10n is just the first step: a continuous process must be set up to maintain the translations and produce new ones as datasette keeps getting new features. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 281110295 | |
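The gettext workflow the comment proposes for the Python side can be sketched as follows. This is a minimal illustration, not anything from the Datasette codebase: the `"datasette"` domain name and the `locale/` directory are hypothetical, and `fallback=True` means the original English string is returned when no compiled `.mo` catalog is found.

```python
import gettext

# Hypothetical setup: look for locale/fr/LC_MESSAGES/datasette.mo.
# With fallback=True, a missing catalog yields NullTranslations,
# which returns each message unchanged.
translation = gettext.translation(
    "datasette", localedir="locale", languages=["fr"], fallback=True
)
_ = translation.gettext

# Strings wrapped in _() are extracted into a .po file (e.g. with
# xgettext or Babel), translated in tools such as Transifex or
# Crowdin, then compiled to .mo for runtime lookup.
print(_("Invalid SQL query"))
# → "Invalid SQL query" when no French catalog is installed
```

In the continuous process the comment calls for, re-running the extraction step on each release regenerates the `.po` template so translators can fill in only the new or changed strings.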