html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app
https://github.com/simonw/datasette/issues/1426#issuecomment-895522818,https://api.github.com/repos/simonw/datasette/issues/1426,895522818,IC_kwDOBm6k_c41YJgC,9599,simonw,2021-08-09T20:34:10Z,2021-08-09T20:34:10Z,OWNER,At the very least Datasette should serve a blank `/robots.txt` by default - I'm seeing a ton of 404s for it in the logs.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",964322136,"Manage /robots.txt in Datasette core, block robots by default",
https://github.com/simonw/datasette/issues/1426#issuecomment-895510773,https://api.github.com/repos/simonw/datasette/issues/1426,895510773,IC_kwDOBm6k_c41YGj1,9599,simonw,2021-08-09T20:14:50Z,2021-08-09T20:19:22Z,OWNER,"https://twitter.com/mal/status/1424825895139876870

> True pinging google should be part of the build process on a static site :)

That's another aspect of this: if you DO want your site crawled, teaching the `datasette publish` command how to ping Google when a deploy has gone out could be a nice improvement.

Annoyingly it looks like you need to configure an auth token of some sort in order to use their API though, which is likely too much hassle to be worth building into Datasette itself: https://developers.google.com/search/apis/indexing-api/v3/using-api

```
curl -X POST https://indexing.googleapis.com/v3/urlNotifications:publish -d '{
  ""url"": ""https://careers.google.com/jobs/google/technical-writer"",
  ""type"": ""URL_UPDATED""
}' -H ""Content-Type: application/json""

{
  ""error"": {
    ""code"": 401,
    ""message"": ""Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project."",
    ""status"": ""UNAUTHENTICATED""
  }
}
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",964322136,"Manage /robots.txt in Datasette core, block robots by default",
https://github.com/simonw/datasette/issues/1426#issuecomment-895509536,https://api.github.com/repos/simonw/datasette/issues/1426,895509536,IC_kwDOBm6k_c41YGQg,9599,simonw,2021-08-09T20:12:57Z,2021-08-09T20:12:57Z,OWNER,I could try out the `X-Robots` HTTP header too: https://developers.google.com/search/docs/advanced/robots/robots_meta_tag#xrobotstag,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",964322136,"Manage /robots.txt in Datasette core, block robots by default",
https://github.com/simonw/datasette/issues/1426#issuecomment-895500565,https://api.github.com/repos/simonw/datasette/issues/1426,895500565,IC_kwDOBm6k_c41YEEV,9599,simonw,2021-08-09T20:00:04Z,2021-08-09T20:00:04Z,OWNER,"A few options for how this would work:

- `datasette ... --robots allow`
- `datasette ... --setting robots allow`

Options could be:

- `allow` - allow all crawling
- `deny` - deny all crawling
- `limited` - allow access to the homepage and the index pages for each database and each table, but disallow crawling any further than that

The ""limited"" mode is particularly interesting. Could even make it the default, but I think that may be a bit too confusing.
Idea would be to get the key pages indexed but use `nofollow` to discourage crawlers from indexing individual row pages or deep pages like `https://datasette.io/content/repos?_facet=owner&_facet=language&_facet_array=topics&topics__arraycontains=sqlite#facet-owner`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",964322136,"Manage /robots.txt in Datasette core, block robots by default",