Ignore URLs in robots.txt with specific parameters?

Here’s a solution if you want to disallow query strings:

    Disallow: /*?*

Or, if you want to be more precise about your query string:

    Disallow: /*?dir=*&order=*&p=*

You can also add an Allow directive to robots.txt to tell crawlers which URL to allow:

    Allow: /new-printer$

The $ makes sure that only /new-printer is allowed, and not, say, /new-printers. More info in the Advanced Usage section of http://code.google.com/web/controlcrawlindex/docs/robots_txt.html
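For reference, here is a sketch of how these directives might fit together in a complete robots.txt file; the User-agent line and the path patterns are illustrative placeholders, not rules from the question:

    User-agent: *
    Disallow: /*?dir=*&order=*&p=*
    Allow: /new-printer$

Note that wildcards (*) and the end-of-URL anchor ($) are extensions honored by major crawlers such as Googlebot; they are not part of the original robots.txt specification, which is relevant to the next answer.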

Allow and Disallow in Robots.txt

The module documentation for robotparser and its Python 3 counterpart, urllib.robotparser, mentions that these modules follow the original specification. That specification does not have an Allow directive; it is a non-standard extension. Some major crawlers support it, but you (obviously) don’t have to support it to claim compliance.
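Here is a minimal sketch of using urllib.robotparser, assuming a hypothetical site https://example.com that serves a robots.txt like the one above. Because the parser targets the original specification, its verdicts on non-standard directives (Allow, wildcards, $) may not match how crawlers such as Googlebot interpret the same file:

    import urllib.robotparser

    # Fetch and parse the site's robots.txt.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # can_fetch() reports whether the given user agent may crawl the URL
    # according to the rules the parser understood.
    print(rp.can_fetch("MyBot", "https://example.com/new-printer"))
    print(rp.can_fetch("MyBot", "https://example.com/catalog?dir=asc&order=name&p=2"))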