How to fix robots.txt and exploration issues

What is exploration?

To index your website in its search results, Google needs to visit it. Its robots browse your pages to understand their content and match them to keywords. This is what we call exploration (also known as crawling).

If Google cannot access your website, it will struggle to match your pages to keywords.

In the Pages section, you will find an exploration report. Our robots work the same way Google's robots do.

What is a robots.txt file?

The robots.txt file is a text file located at the root of your website (e.g. yourdomain.com/robots.txt). Robots read this file before exploring your site. It contains directives telling robots whether or not they are authorized to explore some or all of the pages of your website.
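For instance, a minimal robots.txt might look like this (the paths and domain are hypothetical examples, not rules you should copy as-is):

```
# Applies to all robots
User-agent: *
# Forbid exploration of this directory
Disallow: /private/
# Everything else may be explored
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```

Each `User-agent` block applies to the robots it names; `Disallow` and `Allow` rules are matched against the URL path.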

You can find Google's guide to the robots.txt file here.

How do I manage the exploration of my website?

Be careful: indexing and exploration are often confused.

If you don't want a page to appear in Google, you need to allow its exploration in the robots.txt file but forbid its indexing (using a noindex meta tag).

If you forbid the exploration of the page, Google won't be able to see the noindex directive, and your page might still appear in the SERP.
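For example, to let robots explore a page while keeping it out of Google's index, the page's head would include a noindex meta tag (and the page would NOT be blocked in robots.txt):

```html
<head>
  <!-- Robots may explore this page, but must not index it -->
  <meta name="robots" content="noindex">
</head>
```

Google only sees this tag if it is allowed to explore the page, which is why blocking the page in robots.txt defeats the purpose of the tag.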

Generally, there is no benefit in using the robots.txt file except in some specific cases (to save crawl budget, for example).

Make sure the pages you want to appear in Google are explorable.
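One way to check this yourself is with Python's standard urllib.robotparser module, which applies robots.txt rules the way a crawler would. This is a minimal sketch; the rules and URLs are hypothetical examples:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration
rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Check whether a generic robot ("*") may explore each URL
print(parser.can_fetch("*", "https://example.com/private/page"))  # False
print(parser.can_fetch("*", "https://example.com/blog/post"))     # True
```

In practice you would load your live file with parser.set_url("https://yourdomain.com/robots.txt") followed by parser.read(), then test the URLs of the pages you want Google to find.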

If a page is intentionally not explorable, you can hide this issue in your crawl report, and it won't be taken into account in your score.