Recipe 3-2: Adding Fake robots.txt Disallow Entries
This recipe shows you how to add fake Disallow entries to the robots.txt file so that you are alerted whenever a client attempts to access those locations; a rough sketch of the approach follows the ingredients list.
Ingredients
- ModSecurity Reference Manual
- SecContentInjection directive
- Append action
- Apache Header directive
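To preview how these ingredients fit together, here is a minimal ModSecurity sketch of the injection half. The rule ID (999101), the phase choice, and the fake directory name /db_backup/ are arbitrary placeholders for illustration only; the idea is that SecContentInjection enables response body modification and the append action tacks a fake Disallow entry onto the robots.txt response:

# Allow ModSecurity to modify response bodies (required by the append action)
SecContentInjection On

# When robots.txt is served, append a fake Disallow entry pointing at a
# directory that does not actually exist on the site. Make sure the on-disk
# robots.txt ends with a newline so the injected text lands on its own line.
SecRule REQUEST_FILENAME "@streq /robots.txt" \
    "id:999101,phase:3,t:none,nolog,pass,append:'Disallow: /db_backup/'"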
Robots Exclusion Standard
The Robots Exclusion Standard was created to let web site owners advise search engine crawlers about which resources they may index. A file called robots.txt is placed in the document root of the web site, and in it the site administrator can include Allow and Disallow entries telling web crawlers which resources they may and may not access. Here are some example robots.txt entries:
User-agent: *
Allow: /
User-agent: Googlebot
Disallow: /backup/
Disallow: /cgi-bin/
Disallow: /admin.bak/
Disallow: /old/
The first entry means that all crawlers are allowed to access and index the entire site. The second entry states that Google’s Googlebot crawler should not access four different directories. Looking at the names of these directories, this makes sense. They might contain sensitive data or files that the web site owners do not want Google to index.
robots.txt serves a legitimate purpose, but do you see a problem with using it? The Robots Exclusion Standard is merely a suggestion and does not function as access control. The issue is that you are basically letting external clients know about specific sensitive areas of your web site that you would rather keep quiet. Well-behaved crawlers honor the Disallow entries, but an attacker reading robots.txt treats each one as a pointer to a potentially interesting resource worth probing.
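That asymmetry is exactly what this recipe exploits: legitimate users never read robots.txt, so a fake Disallow entry acts as a tripwire. As a rough sketch of the alerting half (again using the arbitrary /db_backup/ placeholder and a made-up rule ID), a rule along these lines raises an alert whenever a client requests the fake location:

# No legitimate page links to /db_backup/; the only way a client discovers the
# name is by reading robots.txt, so any request for it deserves an alert
SecRule REQUEST_FILENAME "@beginsWith /db_backup/" \
    "id:999102,phase:1,t:none,log,pass,\
    msg:'Honeytrap Alert - client requested fake robots.txt Disallow entry',\
    tag:'HONEYTRAP',severity:'CRITICAL'"

Using pass keeps the trap quiet while still generating an alert; substituting deny,status:403 would block the request outright.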