Robots.txt doesn't allow a page scan

Sometimes certain pages cannot be analyzed. In that case, the scores of the affected page are not displayed and an error message appears.

If the analysis could not be carried out because of a robots.txt issue, it simply means that the site prevented our robot from analyzing the page in question.

How can I let Cocolyze analyze my web page? 

You need to authorize our robot in the robots.txt file, located at the root of your website, for example at http://mywebsite.com/robots.txt.

This file contains a list of instructions that tell robots (either all of them at once, or robot by robot) whether they are allowed to explore the site, or only part of it.

The name of our robot (to use in your robots.txt) is:

Cocolyzebot
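As an illustration, here is a minimal robots.txt sketch that explicitly allows Cocolyzebot to explore the whole site while keeping a separate rule for all other robots. The Disallow rule for other robots is only a placeholder; adapt it to your own configuration.

User-agent: Cocolyzebot
Allow: /

User-agent: *
Disallow: /private/

Because robots.txt rules are applied per user-agent group, Cocolyzebot will follow its own group above and ignore the rules intended for other robots.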

Our User-Agent (visible in your site's traffic logs) is:

Mozilla/5.0 (compatible; Cocolyzebot/1.0; +https://cocolyze.com/cocolyzebot)
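For example, assuming an Apache-style combined access log, a visit from our robot might appear as a line like the following (the IP address, timestamp, and path are purely illustrative):

203.0.113.10 - - [10/Oct/2023:13:55:36 +0000] "GET /my-page.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0 (compatible; Cocolyzebot/1.0; +https://cocolyze.com/cocolyzebot)"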

You can find more information on how robots.txt files work in the Google documentation.

Don’t hesitate to contact us if you need help allowing your website to be explored 😉