{"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-985982668", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 985982668, "node_id": "IC_kwDOBm6k_c46xObM", "user": {"value": 95520595, "label": "knowledgecamp12"}, "created_at": "2021-12-04T07:11:29Z", "updated_at": "2021-12-04T07:11:29Z", "author_association": "NONE", "body": "You can generate xml site map from the online tools using https://tools4seo.site/xml-sitemap-generator. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-974711959", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 974711959, "node_id": "IC_kwDOBm6k_c46GOyX", "user": {"value": 52649, "label": "tannewt"}, "created_at": "2021-11-20T21:11:51Z", "updated_at": "2021-11-20T21:11:51Z", "author_association": "NONE", "body": "I think another thing would be to make `/pages/robots.txt` work. That way you can use jinja to generate a desired robots.txt. I'm using it to allow the main index and what it links to to be crawled (but not the database pages directly.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-902263367", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 902263367, "node_id": "IC_kwDOBm6k_c41x3JH", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-08-19T21:33:51Z", "updated_at": "2021-08-19T21:36:28Z", "author_association": "OWNER", "body": "I was worried about if it's possible to allow access to `/fixtures` but deny access to `/fixtures?sql=...`\r\n\r\nFrom various answers on Stack Overflow it looks like this should handle that:\r\n\r\n```\r\nUser-agent: *\r\nDisallow: /fixtures?\r\n```\r\nI could use this for tables too - it may well be OK to access table index pages while still avoiding pagination, facets etc. 
I think this should block both query strings and row pages while allowing the table page itself:\r\n```\r\nUser-agent: *\r\nDisallow: /fixtures/searchable?\r\nDisallow: /fixtures/searchable/*\r\n```\r\nCould even accompany that with a `sitemap.xml` that explicitly lists all of the tables - which would mean adding sitemaps to Datasette core too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-902260338", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 902260338, "node_id": "IC_kwDOBm6k_c41x2Zy", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-08-19T21:28:25Z", "updated_at": "2021-08-19T21:29:40Z", "author_association": "OWNER", "body": "Actually it looks like you can send a `sitemap.xml` to Google using an unauthenticated GET request to:\r\n\r\n https://www.google.com/ping?sitemap=FULL_URL_OF_SITEMAP\r\n\r\nAccording to https://developers.google.com/search/docs/advanced/sitemaps/build-sitemap", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-902260799", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 902260799, "node_id": "IC_kwDOBm6k_c41x2g_", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-08-19T21:29:13Z", "updated_at": "2021-08-19T21:29:13Z", "author_association": "OWNER", "body": "Bing's equivalent is: https://www.bing.com/webmasters/help/Sitemaps-3b5cf6ed\r\n\r\n http://www.bing.com/ping?sitemap=FULL_URL_OF_SITEMAP", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-895522818", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 895522818, "node_id": "IC_kwDOBm6k_c41YJgC", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-08-09T20:34:10Z", "updated_at": "2021-08-09T20:34:10Z", "author_association": "OWNER", "body": "At the very least Datasette should serve a blank `/robots.txt` by default - I'm seeing a ton of 404s for it in the logs.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-895510773", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 895510773, "node_id": "IC_kwDOBm6k_c41YGj1", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-08-09T20:14:50Z", "updated_at": "2021-08-09T20:19:22Z", "author_association": "OWNER", "body": 
"https://twitter.com/mal/status/1424825895139876870\r\n\r\n> True pinging google should be part of the build process on a static site :)\r\n\r\nThat's another aspect of this: if you DO want your site crawled, teaching the `datasette publish` command how to ping Google when a deploy has gone out could be a nice improvement.\r\n\r\nAnnoyingly it looks like you need to configure an auth token of some sort in order to use their API though, which is likely too much hassle to be worth building into Datasette itself: https://developers.google.com/search/apis/indexing-api/v3/using-api\r\n\r\n```\r\ncurl -X POST https://indexing.googleapis.com/v3/urlNotifications:publish -d '{\r\n \"url\": \"https://careers.google.com/jobs/google/technical-writer\",\r\n \"type\": \"URL_UPDATED\"\r\n}' -H \"Content-Type: application/json\"\r\n\r\n{\r\n \"error\": {\r\n \"code\": 401,\r\n \"message\": \"Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.\",\r\n \"status\": \"UNAUTHENTICATED\"\r\n }\r\n}\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-895509536", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 895509536, "node_id": "IC_kwDOBm6k_c41YGQg", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-08-09T20:12:57Z", "updated_at": "2021-08-09T20:12:57Z", "author_association": "OWNER", "body": "I could try out the `X-Robots` HTTP header too: https://developers.google.com/search/docs/advanced/robots/robots_meta_tag#xrobotstag", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1426#issuecomment-895500565", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1426", "id": 895500565, "node_id": "IC_kwDOBm6k_c41YEEV", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-08-09T20:00:04Z", "updated_at": "2021-08-09T20:00:04Z", "author_association": "OWNER", "body": "A few options for how this would work:\r\n\r\n- `datasette ... --robots allow`\r\n- `datasette ... --setting robots allow`\r\n\r\nOptions could be:\r\n\r\n- `allow` - allow all crawling\r\n- `deny` - deny all crawling\r\n- `limited` - allow access to the homepage and the index pages for each database and each table, but disallow crawling any further than that\r\n\r\nThe \"limited\" mode is particularly interesting. Could even make it the default, but I think that may be a bit too confusing. 
The idea would be to get the key pages indexed but use `nofollow` to discourage crawlers from following links through to individual row pages or deep pages like `https://datasette.io/content/repos?_facet=owner&_facet=language&_facet_array=topics&topics__arraycontains=sqlite#facet-owner`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 964322136, "label": "Manage /robots.txt in Datasette core, block robots by default"}, "performed_via_github_app": null}