Get a definition of any SEO term.
It is a browser-native JavaScript method that writes HTML or JavaScript code directly into a page. If it is called after the page has fully loaded, it clears the existing document and replaces it with the written content.
<html> ... <script> document.write("I replace the content of the page"); </script> </html>
The use of document.write can delay the display of the page by several seconds. Moreover, in some cases (on a slow connection, for example), Chrome blocks its use entirely, which makes the page rendering unpredictable.
When document.write is used in the JavaScript code of the page, it should be removed or replaced with standard DOM manipulation methods.
For example, to insert HTML code:
<script> document.getElementById("container").innerHTML = "Properly added HTML content"; </script>
To dynamically load a JavaScript file:
<script> var script = document.createElement('script'); script.src = 'https://cdn.example.com/script.js'; document.head.appendChild(script); </script>
If a third-party script uses document.write, contact the vendor to find an alternative solution.
Keyword stuffing, also called over-optimization, is the excessive repetition of keywords in elements that contribute to SEO (titles, link anchors, etc.).
Stuffing is a black hat practice that can be penalized by search engines. It can be harmful in several ways:
The best way to fight stuffing today is to produce quality content that combines targeted SEO keywords with readability for the user.
Do not hesitate to favor a rich lexical field rather than repeating your keywords, which can be considered abusive.
Indeed, Google's algorithm now takes the semantic richness of your content into account. This is why, even if you have not explicitly mentioned certain keywords, you can still rank on a main expression and on secondary expressions if your lexical field refers to them.
A plugin is a small program designed to be added to a main program called host software (for example, your browser). Plugins are used by many websites. They allow users to watch videos, listen to audio, play games, view graphics, etc.
Among the best known are the Java and Flash plugins, for example.
The rise of mobile devices requires simple, quickly consumable content that does not rely on plugins. Some plugins cannot even be read on mobile devices, which is detrimental to the user experience. Many security issues have also been identified with certain plugins, such as access to your computer's microphone or camera.
Finally, search engine robots cannot read the content of certain plugins, especially Flash and Java, so this content does not appear in search engine results.
As Flash, Silverlight and Java are increasingly rejected by browsers, the most obvious solution to avoid being blocked by search engines is not to use plugins to display your content.
In addition, it is possible to convert plugin-based content using HTML5. This format allows:
The Accessibility Score © is an indicator created by Cocolyze that measures how easily a user can access a page.
This indicator takes into account the performance of the page and its resources. It analyzes four main elements that influence the loading time of a page: server speed, page weight, security and image size.
For each result, Cocolyze provides a detailed analysis and, when necessary, a possible solution.
The Accessibility Score © name is a trademark of Cocolyze.
AJAX (Asynchronous JavaScript and XML) is a web programming technique that combines several technologies: JavaScript, JSON and XML. This combination was designed to create more dynamic pages, optimize interaction with the user and make sites more comfortable to use.
AJAX is compatible with the vast majority of web browsers (Google Chrome, Safari, Mozilla Firefox, etc.). It can be used on all devices (smartphones, tablets, computers) whatever the operating system (Linux, macOS, Windows).
An alternative attribute allows search engine robots to understand what an image shows so that they can index it. In HTML, it is called the alt attribute, and it provides a textual alternative to an image.
The alt attribute tells Google what the image represents. It is important to write it correctly in the HTML code, because the alt attribute is taken into account by robots when ranking your site. In addition, when your image does not appear on screen, the text of the alt attribute is shown to users in its place to give some details about it.
The simplest and most effective way to write an alt attribute is to use short and descriptive sentences.
For example: <img src="https://example.com/image.jpg" alt="clear text alternative" />
An alt attribute must be provided for every <img> element.
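As a quick way to audit a page for missing alt attributes, the check can be sketched in JavaScript. This is a simplified, hypothetical helper based on a regular expression; a real HTML parser or SEO crawler is more reliable.

```javascript
// Simplified audit: list <img> tags that lack an alt attribute.
// A regex is a rough approximation; a real HTML parser is more robust.
function findImagesMissingAlt(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter((tag) => !/\balt\s*=/i.test(tag));
}

const page = '<img src="logo.png" alt="Company logo" /><img src="banner.jpg" />';
console.log(findImagesMissingAlt(page)); // → ['<img src="banner.jpg" />']
```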
In summary, for an alt attribute to be written correctly and taken into account, it must:
Cocolyze automatically analyzes the best-ranked pages of your website and of your competitors, every day, on all your tracked keywords.
Number of Analyses
Once a day we analyze a given URL to minimize the number of requests sent to your server. One URL can however be analyzed several times if you track this page:
The same URL can, for example, have a different score on mobile in English than on desktop in Italian.
Analysis Update
The analyzed pages are automatically updated every day. You can force a new analysis, for example after making a change that you would like to check immediately, by clicking the 'Update the page analysis' button.
The best ranked page isn’t the one I want to rank on
You can define the page associated with a keyword if you want to modify (or "force") the analyzed page for a keyword.
Deleting analyses
The analyzed pages are automatically deleted if:
A deleted page will automatically reappear if we find it as the best ranked page on the same site on a tracked keyword.
An animation is a succession of images appearing in the same frame. We use the term "optimized animation" when the format used is optimal and allows the same rendering as another format for less weight. The most common format for animations is the GIF (Graphics Interchange Format), however this format is ineffective for large animations.
The GIF format for large animations weighs more than a video alternative and increases the network load required to download it. Using optimized animations improves the page load time and allows users to view the content faster.
In order to reduce the weight of animations, it is necessary to change the GIF format to an MPEG or WebM video format.
It is possible to recreate the characteristics of a GIF animation thanks to the HTML <video> tag:
<video autoplay loop muted playsinline> <source src="animation.webm" type="video/webm"> <source src="animation.mp4" type="video/mp4"> </video>
The playsinline attribute makes the video play inline and prevents it from automatically launching full screen on mobile when playback begins.
Zooming is simply the act of enlarging content on a web page. When a user browses a site, particularly on a phone, he may not be able to read the text or may want to look more closely at an image; in that case, he will zoom in on the page to obtain a suitable size.
It is possible to prevent zooming on a web page via the "meta viewport" tag, which gives instructions on the size and scale of elements depending on the browser window. When this restriction is set, the user cannot zoom in on the content and may not even be able to read the text on the page. This frustration degrades the user experience and may drive the user away.
By default, browsers allow zooming; if no "meta viewport" tag is defined, there is no need to intervene.
If a "meta viewport" tag is defined, make sure that:
Example of a "meta viewport" tag that prevents zooming:
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
Example of a "meta viewport" tag that allows zooming:
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=6" />
Caching is an action of the browser, at the request of the web server, which consists in saving a static resource (image, CSS style, script, etc.) on the user's computer so that it does not have to be loaded again when the same resource is needed on another page of the site, or during another visit to the same page. For example, if a logo appears on every page of a site, caching it frees the user from downloading it again on each page visited during navigation.
The fact that static resources are cached improves the loading time on the user's repeated visits, because the browser no longer has to load them. Caching therefore contributes to the speed of the site by speeding up navigation. Only the first loading of the page will be complete and will require the transfer of all resources.
The caching instruction for a resource is carried in the "Cache-Control" HTTP header returned by the web server. This header tells the browser to save the resource and, thanks to the "max-age" directive, for how many seconds to keep it.
Cache-Control: max-age=31536000
Caching is generally configured on the web server, which takes care of this management. In order to provide the best user experience, a minimum duration of 97 days is recommended.
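As an illustration, the max-age value corresponding to the recommended 97 days can be computed as follows. This is a simple sketch; in practice the header is set in the web server configuration (Apache, nginx, etc.).

```javascript
// Compute a Cache-Control max-age (in seconds) for a 97-day cache duration.
const days = 97;
const maxAge = days * 24 * 60 * 60; // 97 days expressed in seconds

const header = `Cache-Control: max-age=${maxAge}`;
console.log(header); // → Cache-Control: max-age=8380800
```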
A canonical URL is a tag inserted in the HTML code of a web page that specifies to search engines the preferred URL, that is, the one that should be crawled.
When multiple pages have similar content, search engines consider them duplicate versions of the same page. For example, desktop and mobile versions of a product page are often considered duplicates. Search engines select one of the pages as the canonical, or primary, version and crawl that one more. Valid canonical links let you tell search engines which version of a page to crawl and display to users in search results.
Add a <link rel="canonical"> element to the <head> of the page:
<!doctype html> <html lang="en"> <head> … <link rel="canonical" href="https://example.com"/> … </head> <body> … </body> </html>
Guidelines for Canonical URLs
Encoding is what allows browsers to interpret characters correctly. Simply put, it tells browsers how characters are encoded (example: alpha encoding, 1 = A, 2 = B / beta encoding, 10 = A, 2 = B).
Character encoding is important to prevent the browser from displaying incorrect characters. This can happen, for example, when you use language-specific characters (such as Mandarin) that have not been encoded.
For example: "My source code" translates into Japanese as 私のソースコード and will display correctly if my encoding is properly specified. On the other hand, if my encoding is incorrect, the same sentence may show abnormal characters (私a!!ソ§--スコ#€).
All documents (.txt, .html, or plain text) that contain text are saved with defined characters. This corresponds to the actual encoding of the document.
It is recommended to use the "utf-8" encoding on your pages, which can encode the vast majority of characters.
To check which encoding your HTML pages use, you can look in the configuration settings of your text editor (e.g. Atom, Sublime Text).
Don't forget to declare your character encoding to the browser. To do so, you should:
Finally, it must be defined on at least one of the elements.
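For example, a page encoded in UTF-8 usually declares it with a meta tag placed at the very beginning of the <head>:

```html
<head>
  <meta charset="utf-8" />
  ...
</head>
```

The encoding can also be announced by the web server through the HTTP header Content-Type: text/html; charset=utf-8.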
Color contrast is the ability of two things to stand out from each other. In a digital interface this contrast mainly concerns the legibility of text and icons in relation to the background.
Color contrast is one of the first elements that make a page readable to users. A low contrast between the text and the background can make it difficult for users to read, especially for those with vision impairments.
The analysis follows the WCAG 2.1 recommendations, whose accessibility standards are as follows:
The visual presentation of text and images must have a contrast ratio of at least 4.5:1, except in the following cases:
The contrast ratio is a measure of the difference in perceived brightness between two colors. This difference is expressed as a ratio ranging from 1:1 (white text on a white background) to 21:1 (black text on a white background).
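The calculation behind this ratio can be sketched in JavaScript, following the WCAG 2.1 relative luminance formula:

```javascript
// WCAG 2.1 contrast ratio between two sRGB colors given as [r, g, b] (0-255).
function luminance([r, g, b]) {
  // Linearize each sRGB channel, then weight by perceived brightness.
  const lin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(a, b) {
  // Ratio of the lighter luminance to the darker one, offset by 0.05.
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // ≈ 21 (black on white)
```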
The Competition score is based on the number of advertisers that want to place a sponsored advert (Google Ads) on a keyword. This score depends on the search location. This score allows you to easily get an idea of the SEO competition on this keyword, given that a high level of competition means that a large number of websites want to be highly positioned on this search request.
Content width is the horizontal dimension of the page. When the content is wider than the browser window and does not fit, the browser adds a horizontal scroll bar so that the content can be scrolled from left to right: this is called the X scroll. Conversely, when the content adapts to the width of the browser window, the page is said to be "responsive".
The presence of an X scroll is detrimental to the user experience: horizontal scrolling is not intuitive, and users do not have the reflex to scroll pages from left to right. If users have a bad experience, they may go back to the SERP and click on another, more suitable site, which affects your ranking.
In most cases, it is at the level of CSS styles that you have to intervene and set up "media queries". These instructions allow you to define CSS properties according to the dimensions of the browser window.
Example of the use of "media queries":
<html> <head> <title>My responsive page</title> <meta name="viewport" content="width=device-width, initial-scale=1" /> <style> /* default width */ .bloc { display: inline-block; width: 100%; } /* Overriding the width of the blocks when the browser window is at least 600px */ @media screen and (min-width: 600px) { .bloc { width: 30%; } } </style> </head> <body> <h1>My content is adapted</h1> <div class="bloc">Bloc A</div> <div class="bloc">Bloc B</div> <div class="bloc">Bloc C</div> </body> </html>
When the browser window is 600px or wider, the blocks are aligned; otherwise they are stacked. For more information on media queries and responsive design, check Google's article.
Another method exists to prevent the horizontal scroll bar from appearing when the content is too wide: hiding the overflow with the CSS instruction "overflow: hidden;". This method should be avoided, since it hides the overflowing content from the user, who cannot display it at all.
CPC (cost-per-click) is the average cost of a click on a paid search result on a keyword.
A 'click' on a PPC (pay-per-click) campaign represents a visit or interaction with your company's product or service. When you advertise, your CPC will always be less than or equal to your defined maximum bid. Your actual cost-per-click depends on both your and your competitors' ad rank, maximum bid and quality score.
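As an illustration, a commonly cited simplified model of this calculation (actual CPC = the ad rank of the advertiser ranked just below you, divided by your quality score, plus one cent) can be sketched as follows. The real auction is more complex, and this hypothetical helper is indicative only:

```javascript
// Simplified, illustrative model of actual CPC in a second-price ad auction:
// actual CPC = (ad rank of the competitor below you / your quality score) + 0.01
function estimateActualCpc(adRankToBeat, qualityScore) {
  return adRankToBeat / qualityScore + 0.01;
}

console.log(estimateActualCpc(16, 8)); // ≈ 2.01 (currency units per click)
```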
For more information on CPC bidding please read Google’s official documentation.
A crawlable link is a link that Google can follow. Non-crawlable links are links with a badly formed URL: they can be used by the JavaScript code of the page but not by crawlers. For Google to be able to follow your links, they must use an <a> tag with an href attribute containing a relative or resolvable URL (a real web address to which Googlebot can send requests).
Example 1: a crawlable link
<a href="https://example.com">
Example 2: resolvable URLs
/products.php?id=123
https://example.com/stuff
This is important because only this format makes your content available to Google's robots. Non-crawlable links are a problem, since the robots will not know which page the link points to and will not index the destination page.
In order to fix a non-crawlable link you should:
Critical request chains are the chains of dependent, high-priority requests of a page: all the critical resources that are loaded first and block the rendering of the page.
For example, for a single page:
<html> <head> <title>Critical request</title> <link rel="stylesheet" type="text/css" href="css/style.css" /> <script type="text/javascript" src="js/jquery.js"></script> </head> <body> <h1>Critical request</h1> </body> </html>
The critical request chains will be:
In this example, 2 chains are defined; the longest chain has a length of 3 and can result in a maximum latency of 1.73s (loading and execution).
The longer the chains and the heavier the resources to be transferred, the greater the impact on page loading. It is therefore necessary to keep the critical request chains as short as possible so that the page can load quickly.
It is recommended not to exceed 8 chains. To reduce the size of the critical request chains, you should:
<link rel="stylesheet" type="text/css" href="css/unused.css" />
<script type="text/javascript" src="js/jquery.js" defer></script>
<link rel="preload" href="font/roboto.woff2" as="font" crossorigin="anonymous" />
Minification consists in removing unnecessary characters from a file in order to reduce its size: it compacts the instructions. The characters that are not needed to interpret a CSS style are mainly comments, spaces and tabs.
It is important not to confuse minification with compression: compression uses algorithms to reduce the size of the file, and the recipient's browser then has to perform a decompression step to interpret it, which minification avoids.
Removing unnecessary characters improves the loading time because the CSS file weighs less.
To perform this treatment, it is best to use a tool (rather than doing it manually) such as cssminifier.com, which minifies a CSS style online.
Example of a style before minification:
body { line-height: 1; } ol, ul { list-style: none; } blockquote, q { quotes: none; }
After minification:
body{line-height:1}ol,ul{list-style:none}blockquote,q{quotes:none}
It is also possible to include external styles in their minified version. Many providers (CDNs) offer this for the production version.
Manual optimizations can also be made in the declarations: for example, the hexadecimal color #000000 can be shortened to #000. Likewise, if two elements have the same style, they can be combined within the same block:
h1 { font-weight: bolder; font-size: 20px; background-color: #cccccc; } .title { font-weight: bolder; font-size: 20px; background-color: #cccccc; }
Optimization:
h1, .title { font-weight: bolder; font-size: 20px; background-color: #ccc; }
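The stripping steps described above can be sketched as a naive minifier in JavaScript. This is for illustration only; production tools (such as the online minifier mentioned above) handle many edge cases this regex-based version does not:

```javascript
// Naive CSS minifier: strips comments, collapses whitespace,
// removes spaces around punctuation and drops trailing semicolons.
function minifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // strip /* ... */ comments
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // drop spaces around punctuation
    .replace(/;}/g, '}')                // drop the last semicolon of each block
    .trim();
}

console.log(minifyCss('body { line-height: 1; } ol, ul { list-style: none; }'));
// → body{line-height:1}ol,ul{list-style:none}
```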
Deprecated code is mainly JavaScript code that uses obsolete browser functions. The browser temporarily maintains these outdated functions (methods of its APIs, Application Programming Interfaces) for compatibility reasons, but tends to remove or modify them in the near future.
When the browser is updated, the code using these methods may no longer work and thus cause JavaScript errors. The appearance of errors can lead to more or less serious problems depending on the usefulness of the code. The content may be missing, navigation in the site may no longer work, interactions with the different elements of the page may not respond. In other words, your page starts to have bugs that were not present until now.
When a method provided by the browser is marked as deprecated, it is necessary to intervene in the code and replace the method with an alternative to ensure future compatibility. If this code comes from a third-party module, update it or find a more modern alternative.
Deprecated methods appear as warnings in the Chrome browser console. Here is a warning about an obsolete method whose behavior will soon change:
speechSynthesis.speak() : speechSynthesis.speak() without user activation is no longer allowed since M71, around December 2018. See https://www.chromestatus.com/feature/5687444770914304 for more details
A descriptive link is a link whose anchor provides clear information about its destination. For example, in the sentence "7 tips to improve your mobile SEO", the anchor of the link is "7 tips to improve your mobile SEO".
Descriptive links are important because they indicate the subject of the destination page. This "hint" about the landing page content is taken into account by Google and is also a way to attract users' interest.
In order to optimize your descriptive links you should:
The Doctype is an indication in an HTML page to define the type of syntax used in the page.
If the Doctype is not correctly defined, the browser may have difficulty displaying your page and render it in an unexpected way. This is called the browser's Quirks mode.
Just add the "<!DOCTYPE html>" instruction at the top of the HTML document:
<!DOCTYPE html> <html lang="en"> …
It is recommended to use the HTML5 doctype; the other versions are considered outdated.
The DOM (Document Object Model) is the result of the interpretation of the HTML code of a web page by the browser. It is a representation of data in the form of a tree composed of nodes and elements where each element belonging to a node is a child. It also provides a useful programming interface (API) for modifying the page using JavaScript code.
Document
- <html>
  - <head>
    - <title> ("DOM size")
  - <body>
    - <h1> ("What is the DOM?")
      ...
The larger the DOM size is, the longer the browser will take to create and interpret it, which will slow down the page rendering and place greater demands on the resources of the user's computer.
The DOM must have:
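To give an idea of how depth is measured, here is a sketch that walks a tree of plain objects standing in for DOM nodes; in the browser, the same walk can be run on document.documentElement. As a point of reference, Lighthouse warns on pages with roughly more than 1,500 nodes or a depth greater than 32.

```javascript
// Measure the depth of a tree of plain { tag, children } objects
// standing in for DOM nodes.
function maxDepth(node) {
  const children = node.children || [];
  return 1 + Math.max(0, ...children.map(maxDepth));
}

const tree = {
  tag: 'html',
  children: [
    { tag: 'head', children: [{ tag: 'title', children: [] }] },
    { tag: 'body', children: [{ tag: 'h1', children: [] }] },
  ],
};
console.log(maxDepth(tree)); // → 3 (html > head > title)
```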
The Domain Influence score analyzes the popularity of a website. The more influential backlinks a site has, the higher its influence score will be.
The Domain Influence score is a score from 0-100, with 100 corresponding to a very influential website.
The Domain Influence is useful to filter and compare different backlinks.
A website with a high Domain Influence can be more interesting to contact for a backlink than a site with a low Domain Influence.
Also check your lost backlinks: we recommend you try and contact the webmasters of the sites where you have lost backlinks that have a high Domain Influence.
A website with a low Domain Influence doesn't harm your optimization if it creates a backlink towards your site. To analyze a link's toxicity, see the article on the Spam Rating score.
A module is a JavaScript resource embedded in the page containing segmented and reusable code. It brings additional functionality to the browser in a simple way. For example, the jQuery library makes it easier to navigate and manipulate the DOM (Document Object Model: a programming interface useful for modifying the page using JavaScript code) thanks to a set of simplified features.
Loading the same module several times is not useful because the code is already present in the page and can be exploited by the browser. A module declared several times therefore unnecessarily increases the weight of the page, as well as the interpretation required by the browser and delays its loading.
The detection of duplicate modules is based on their "source map" file (an additional file that allows debugging the code). Usually these files are referenced by the CDN (Content Delivery Network) within the script itself with the instruction "//# sourceMappingURL=...".
When the same module is loaded several times, it is necessary to keep only one integration instruction in the HTML and/or JavaScript code.
For example, here the Bootstrap framework is loaded twice despite the different file names:
<script src="/js/bootstrap.bundle.js"></script> <script src="/js/bootstrap.js"></script>
To correct this, remove one of the two statements:
<script src="/js/bootstrap.js"></script>
It is an unpleasant flickering effect caused by text reappearing after a web font loads and replaces a font in which the browser had already rendered it.
This effect harms the user experience when browsing the page, and in some cases the layout shifts, which can be penalized by Google.
To avoid this effect on fonts whose display is defined as "optional" in the CSS style (font-display: optional), preload them with a <link rel="preload" ... /> instruction placed in the <head> section of the HTML code.
Example of preloading a web font:
<head> <style> @font-face { font-family: "Roboto"; font-display: optional; src: local("Roboto"), url(https://...roboto.woff2) format("woff2"); } </style> ... <link rel="preload" href="https://...roboto.woff2" as="font" crossorigin /> ... </head>
It is simply the footer, where the main information of the website is usually found.
Users tend to access the footer to get an overview of what the site offers in terms of products and/or services. The longer the page is, the more mobile users will have to scroll to access it and may be discouraged. Keeping the page height low therefore allows users to easily access the information they are looking for, improving their experience.
For optimal access, it is recommended that the page height not exceed 10,000px. If the page height exceeds this recommendation, it is necessary to:
A geolocation permission is a request made to the user via the browser to obtain his geographical coordinates. As long as the user has not granted this request, the web page cannot obtain this information. Once granted, the page can provide more relevant content to the user. This request is initiated using JavaScript code: navigator.geolocation.getCurrentPosition() or navigator.geolocation.watchPosition().
Faced with a geolocation permission request without context, i.e. as soon as the page loads, users are confused and distrustful, which undermines trust in the website.
In order to offer a lasting experience, remove any such request made when the page loads and ask for permission during a user interaction instead, indicating that the action will trigger a geolocation request. Finally, it is useful to provide a fallback solution if the user refuses.
Crawling is the process by which robots browse websites, going from link to link in order to discover and index the content they encounter. When Google's robot, named Googlebot, discovers a new page to explore, it first checks that it is authorized to access it, thanks to the directives in the robots.txt file. If it is authorized, it examines the page; otherwise it ignores it.
If Googlebot can't crawl a page, it won't know its content and will end up not indexing it, even though the page may still be indexed through links pointing to it. In some cases, prohibiting crawling may also be necessary: for example, when part of your site must not be explored for security reasons, or is simply not useful, it is important to block the exploration of those pages.
You can check this directly in the robots.txt file. To allow or disallow Googlebot crawling, add an "allow" or "disallow" instruction in the robots.txt file.
Example 1: allowing Googlebot crawling
user-agent: googlebot
allow: /my-page
Example 2: forbidding Googlebot crawling
user-agent: googlebot
disallow: /my-page
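The way crawlers resolve these instructions can be illustrated with a simplified matcher. This sketch only handles prefix rules; real robots.txt matching also supports wildcards and other subtleties.

```javascript
// Simplified robots.txt check: is a path allowed for a crawler, given
// its rules? rules: [{ type: 'allow' | 'disallow', prefix: '/...' }]
function isAllowed(rules, path) {
  const matches = rules.filter((r) => path.startsWith(r.prefix));
  if (matches.length === 0) return true; // no matching rule: crawling allowed
  // The most specific (longest) matching rule wins.
  matches.sort((a, b) => b.prefix.length - a.prefix.length);
  return matches[0].type === 'allow';
}

const rules = [{ type: 'disallow', prefix: '/my-page' }];
console.log(isAllowed(rules, '/my-page'));      // → false (blocked)
console.log(isAllowed(rules, '/another-page')); // → true (no rule applies)
```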
It is the organization of the different headings of a content, in order of importance, thanks to HTML tags. There are 6 hn tags (the h stands for "heading"), from h1 to h6 (where h1 indicates the most important heading and h6 the least important).
The structuring of hn headings is important because it helps the indexing and ranking of your page. The hierarchy of headings and subheadings makes it easier for search engines to understand your content by highlighting its important elements. This structure also improves the experience of users, who find and understand your content more easily.
To structure hn tags properly, it is necessary to respect the hierarchy of these headings: their order and level. The recommended structure is, for example:
<h1>Heading structure</h1> <h2>Why is heading structure important?</h2> <h2>How to structure your content</h2> <h3>Example of a heading structure</h3> <h3>Some tips to structure your content by relevance</h3> <h2>...</h2>
To structure your hn tags successfully, it is necessary to:
A library is a third-party resource embedded in the page containing JavaScript code. It provides a set of additional functionality to the browser. The integration of the library is done through the HTML <script> tag.
For example, the D3.js library allows you to create different types of graphics very quickly.
<script src="https://d3js.org/d3.v6.min.js"></script>
Heavy JavaScript libraries can result in poor performance because they consume a lot of browser resources when they are interpreted and increase the network loading required to download them. For this reason, it is worth using lightweight libraries to improve page loading and interactivity time.
When a library has a lighter alternative, or one that offers fewer features but still covers your use of it, it is worth replacing it. Replacing a JavaScript library is not trivial, since you have to go into the code and change every call from the old library to the new one.
For example, the "Moment" library, which allows you to manipulate dates, can be replaced by "dayjs", which lightens the page by 69 KB.
As a reminder, the HTTP protocol (HyperText Transfer Protocol) is the mechanism used by browsers to request information from a web server and display pages on the screen of the device being used.
The HTTP/2 protocol is the most modern and efficient version of this protocol.
It is important to use HTTP/2 rather than the older version because it performs better and gives users a better browsing experience. HTTP/2 reduces the weight of requests by compressing headers, and multiplexes connections (request multiplexing: sending multiple requests over the same connection).
The protocol has to be set up on the web server of the site.
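For example, on an nginx web server, HTTP/2 is typically enabled alongside TLS with the http2 parameter on the listen directive (a sketch; certificate directives are omitted):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;
    # ... ssl_certificate / ssl_certificate_key directives ...
}
```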
The HTTPS protocol is used to encrypt HTTP (Hypertext Transfer Protocol: WEB protocol necessary to exchange with a server) communications between users and a WEB site. In addition, when a browser makes a secure connection with a site using the HTTPS protocol, the WEB server sends it a certificate of authenticity. This certificate is used to verify the identity of the site with recognized certification authorities such as Symantec, VeriSign or Digicert. This behavior is directly integrated in the browsers and if a problem occurs, the browser will prevent users from accessing the WEB site.
The HTTPS protocol effectively prevents tampering and eavesdropping of communications between a user and a Web site, which greatly enhances the site's relative security and gives users confidence. Moreover, this criterion is very important to Google and has a direct influence on the positioning of a page.
The implementation of the HTTPS protocol is done at the server level where a specific configuration must be set up. Likewise, it is necessary to redirect all unsecured HTTP traffic to HTTPS. Find more information on how to set up the HTTPS protocol on Google's documentation.
When HTTPS is active on a site, it is necessary that all the resources of the page are also loaded via this protocol, otherwise the phenomenon of "Mixed Content" (mixing HTTPS / HTTP) occurs and can cause problems with the loading of unsecured resources. Find all the solutions to solve the problem of Mixed Content on the Google documentation.
The dimensions of an image correspond to its size, expressed in pixels, for example: 600px x 200px.
As a reminder, pixels are a set of points that make up an image. The number of points present in the image constitutes the definition of the image.
The resolution is different from the definition. The resolution of an image is the ratio between the number of pixels in an image and its actual size when it appears on a physical medium.
The dimensions of an image can be modified when a web page is displayed, and may differ from the original dimensions.
In addition, rendering an image with an aspect ratio too different from that of the source file makes it appear distorted, creating a poor user experience.
It is therefore important to respect the original proportions and not to deform the image by more than 5% when displaying it on the web page.
The dimensions can be defined in the HTML code, for example, thanks to the attributes width=500 and height=400 on the <img> tag displaying it :
<img src="image" width="500" height="400" />
It is also possible to manage the dimensions of an image with a CSS style; this technique has the advantage of being more flexible and offers more possibilities:
img { max-width:100%; height:auto; }
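The 5% guideline above can be checked with a small script. This is an illustrative sketch (the function name is ours, not a standard API): it compares the aspect ratio of the source file with the displayed one.

```javascript
// Illustrative sketch: percentage deviation between the source file's
// aspect ratio and the displayed aspect ratio.
function aspectRatioDistortion(naturalW, naturalH, displayedW, displayedH) {
  const naturalRatio = naturalW / naturalH;
  const displayedRatio = displayedW / displayedH;
  return Math.abs(displayedRatio - naturalRatio) / naturalRatio * 100;
}

// A 600x200 source shown at 300x100 keeps the same ratio (0% distortion),
// while showing it at 500x400 distorts it far beyond the 5% threshold.
```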
The weight of an image, just like any other file, corresponds to the size it occupies in bytes on a hard disk. It also corresponds to the amount of data to be exchanged with the WEB server, in order to be retrieved by the browser to display it on a page.
Images can be saved in different formats and can be optimized using compression algorithms, with or without loss of quality.
The most common formats are :
The weight of images can quickly become problematic: it generally accounts for a significant share of the overall weight of the resources to be loaded on a page. Images can therefore seriously slow down the complete loading of a page and its rendering by the browser. Compressing an image can drastically reduce its weight with no visible difference in quality, so it is essential to optimize it.
Optimizing images means adapting their format and quality for the web. It is useful to:
Indexability refers to a page's ability to be displayed in the SERPs. A non-indexable page will not appear in search engine results and will therefore bring no traffic to the site. You can prevent search engines from indexing a page by using the "noindex" or "none" instruction.
By using the "noindex" or "none" meta tag in the HTML code, or the "X-Robots-Tag: noindex" HTTP header, you can decide which pages search engines may index.
If you want to prevent search engines from indexing a page, you can add the "noindex" or "none" instruction as a meta tag or HTTP response header. Both have the same effect and you can choose depending on the degree of control you have over your server and your specific publishing process.
Example - <meta> tag
Add the following meta tag in the <head> section of your page :
<meta name="robots" content="noindex">
Example - <meta> tag for Google
Add the following meta tag in the <head> section of your page if you only want to block Google crawlers:
<meta name="googlebot" content="noindex">
Example - HTTP Response Header
You can choose to display an X-Robots-Tag header with a noindex or none value in your response :
HTTP/1.1 200 OK (…) X-Robots-Tag: noindex (…)
If the page is not being indexed when it was supposed to be, you should :
PS: The request for non-indexation may be deliberate, or it may be an error, hence the presence of this analysis.
A JavaScript error is a problem encountered while the user's browser interprets the code. This code is generally intended to produce content or manage user interactions on the page. An error can result from programming mistakes or browser incompatibilities.
When an error occurs, the execution of the remaining instructions in that code is interrupted, which can lead to more or less serious problems depending on what the code does: content may be missing, site navigation may stop working, interactions with page elements may not respond. In other words, JavaScript errors can greatly alter the user experience and make your content invisible to crawlers.
Make sure that there is not a bug in the scripts used and that they are compatible with the majority of browsers.
If the problem is in your code you must :
Minification is the removal of unnecessary characters from a file in order to reduce its size, so it compacts the instructions. These useless characters are, for example, comments, spaces and tabs.
In JavaScript, variable names have no importance for the browser's interpretation of the script, so they can be shortened (var variable = ... becomes var a = ...). Likewise, dead code can be removed (e.g. a variable declared in a function but never used).
Removing unnecessary characters and compacting the code improves the loading time because the JavaScript file weighs less.
To carry out this processing, it is best to use a tool (rather than doing it manually) such as javascript-minifier.com, which lets you minify a JavaScript script online.
Example of a script before minification :
function myFirstFunction(parameter) { var myVariable = "minification 1"; } function mySecondFunction(parameter) { console.log(parameter); }
After minification :
function myFirstFunction(n){}function mySecondFunction(n){console.log(n)}
It is also possible to include external scripts in their minified version. Many vendors (CDNs) offer this for the production version.
As a reminder, keywords are terms composed of one or more words that correspond to the subject of a page. These words allow Google to associate articles or pages with Internet users' queries in order to present them with relevant results.
Keyword optimization is the improvement of the choice and use of keywords relevant to the content, product or service offered on a web page.
It is important to optimize keywords because it is through them that search engines will find your content when users search for specific terms. If your keywords are optimized, your page will be more easily associated with the users' query. This will help improve your click through rate and the quality indicator of your page.
In order to optimize your keywords, your content must :
Keyword repetition analysis measures how repetitive the content of a page is. It takes into account the total number of words and the number of distinct words in order to calculate the repetition rate of the text. For example, in the sentence "here is a seo essay for the seo" the analysis would indicate that there are :
The repetition rate is calculated as follows: (1 - number of distinct words / number of words) x 100.
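The formula can be sketched in JavaScript (an illustration, not part of any analysis tool; the word tokenizer here is deliberately naive):

```javascript
// Naive sketch of the repetition-rate formula:
// (1 - distinct words / total words) x 100
function repetitionRate(text) {
  const words = text.toLowerCase().match(/[\p{L}\p{N}]+/gu) || [];
  if (words.length === 0) return 0;
  const distinct = new Set(words);
  return (1 - distinct.size / words.length) * 100;
}
```

For the example sentence above ("here is a seo essay for the seo": 8 words, 7 distinct), the function returns 12.5.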
When keyword repetition remains below a certain threshold (less than 50%), it contributes to the referencing of the page; above this rate, search engines may consider it an attempt to over-optimize the content. It is also important that the content itself is not repeated. Content with too high a repetition rate can end up being penalized by search engines: it will be considered "thin", poor-quality content with little added value for users.
In order to optimize the repetition of keywords in your content without losing quality, you must :
In an HTML form, each input field presented to the user must be accompanied by a label indicating the purpose of the input.
The HTML element <label> is an element specific to forms. It allows you to give a caption to a form field: a text input space, an option zone offered to the user, etc.
Thus, an HTML form will be made up of :
Labels are necessary :
The label element can be associated with a control via the "for" attribute; the control is then said to be labeled by the element. The "for" attribute refers to the "id" attribute of the form element. In other words, an identifier (the "id" attribute) must be given to an <input> element in order to associate it with a <label> element; the "for" attribute is then filled in with the value of that identifier.
Finally, it is also possible to include an <input> element directly inside a <label> element.
Example :
<div> <label for="user_email">Email</label> <input type="email" name="email" id="user_email"> </div>
This improves usability: clicking on the label activates the field, which is particularly helpful on small screens.
It is the language in which the content of a web page is written. It is also the attribute added to the <html> tag that indicates the language of a page. For example, if the content of your page is in English, you must add lang="en" to the <html> tag.
It is important to define the language of a page because it helps robots interpret the content (even if Google does not necessarily use it). It also helps spell checkers and grammar checkers verify or ignore text (when it is not in the checker's language), helps translation tools recognize pages or parts of text in a specific language, and helps speech synthesizers and Braille translators produce usable results.
The language can be defined as an attribute on the <html> tag, in an HTTP Content-Language header, or in a meta tag. The language must be valid and defined by its ISO 639 code (e.g. fr, fr-fr, fr-ca, en-us).
Here is how to manage the lang attribute in these different elements :
Example 1: Language defined as an attribute on the <html> tag
<html lang="en">
Example 2: Language defined in a meta tag
<meta http-equiv="content-language" content="en">
Several languages can be defined within the same element, but it is preferable to define only one to facilitate the understanding of search engines. Note that in our analysis, we take into account the first language encountered.
Example 3 : Several languages defined
<meta http-equiv="content-language" content="fr, en">
Localized versions are the different versions of a content written in several languages and accessible at different URLs. These versions are indicated and specified thanks to hreflang links present in the HTML code of the page.
It is important to clearly indicate and specify the localized versions of a content in order to help search engines to better understand it and redirect users to the most relevant and suitable version for them.
The hreflang link can be defined in the HTML code of the page, in the <head>, thanks to the <link rel="alternate" hreflang="language_code" href="absolute_url" /> tag.
<head> ... <link rel="alternate" hreflang="en" href="https://website.com/en/cat" /> <link rel="alternate" hreflang="pt" href="https://website.com/pt/gato" /> ... </head>
The localized versions can also be defined in an HTTP header :
Link: <absolute_url_1>; rel="alternate"; hreflang="language_code_1", <absolute_url_2>; rel="alternate"; hreflang="language_code_2". It is possible to define several of them; they are then separated by a comma ",".
Link: <https://website.com/en/cat>; rel="alternate"; hreflang="en", <https://website.com/pt/gato>; rel="alternate"; hreflang="pt"
The URLs of these localized versions must be absolute, and each URL must be accessible to crawlers. The language must also be valid and defined using the ISO 639-1 format (optionally with a region): fr, fr-FR, en, etc.
Layout stability is when the various elements of the page do not change position, or change only a little, as new resources load. For example, the user begins to read text on the page and, at the same time, a block is dynamically added above it via JavaScript; the text then shifts down and the user loses the thread of their reading.
Google's CLS (Cumulative Layout Shift) indicator measures how often these changes occur. The higher the indicator, the more unstable the layout. When this indicator is 0, no unexpected movement of the page content has been detected. Read more about this CLS indicator and how it is calculated in the Google documentation.
Layout stability is important because it is taken into account by Google in its ranking criteria and, most importantly, because instability negatively impacts the user experience. Faced with an unstable page, users may be frustrated by the loss of visual cues and, worse, may think that the site does not work, for example when buttons shift just as they try to click them.
In order to provide an optimal experience for your users, it is necessary to keep the CLS indicator below 0.1.
Here you have a few tips to keep the indicator below the critical bar :
Link juice is the term used for the SEO benefit that one website, or rather one page, can transmit to another through an outgoing link.
When a page mentions another page in its content, it transfers some of its authority (its juice) to it. Imagine that in an article about Inbound Marketing, for example, we mentioned that Digital Marketing is important for this strategy. By inserting a link in the "Digital Marketing" section to the page that we consider the most complete on this topic, we tell Google that this page is the reference on the subject, and so we give it our "vote of confidence"; in other words, we pass on a bit of our authority/juice to it.
One of the most important points of link juice is the distribution of a website's notoriety. What is important to understand is that each link transfers a part of it. It is therefore important that your page contains more links to your own site than to others: a site that has only outgoing links is not interesting to Google. However, you should not have only internal links either, since you cannot be the reference in all the domains addressed on your site/page. You can specify the type of your links to Google through the rel attribute on your <a> tags when they are less relevant :
You can also tell Google not to follow all the links of the page thanks to the general instruction for robots: <meta name="robots" content="nofollow">, which will enable you to apply the instruction on all the links of the page.
Be careful: specifying the type of a link does not transmit any "juice" through it; however, the untransmitted juice is not redistributed to the other links, it is simply lost.
Make sure that more than 50% of the links on the page point to your site in order to limit the amount of lost juice :
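That 50% rule of thumb can be checked with a sketch like the following (the function and sample data are ours; in a real page the hrefs would come from document.querySelectorAll('a')):

```javascript
// Illustrative sketch: share of a page's links that stay on the same origin.
function internalLinkRatio(hrefs, siteOrigin) {
  if (hrefs.length === 0) return 1;
  const internal = hrefs.filter(
    // Relative hrefs resolve against siteOrigin, so they count as internal.
    (href) => new URL(href, siteOrigin).origin === siteOrigin
  );
  return internal.length / hrefs.length;
}
```

A ratio above 0.5 means the page keeps the majority of its links on-site.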
The HTML language offers the possibility to create lists, where each item is defined by an <li> tag. Each list element <li> is nested in the associated list (<ul> or <ol>).
Depending on the nature of the list you create, you can adapt the content :
If you do not embed your list item in a <ul> or <ol> block item, your list will not be interpreted correctly by your browser and will not be supported by assistive technologies.
Your list should contain the following information :
For example :
<h1>Example of an ordered list of fruits</h1> <ol> <li>Banana</li> <li>Orange</li> <li>Pineapple</li> </ol>
This will result in :
Example of an ordered list of fruits
Banana
Orange
Pineapple
Microdata are pieces of code added to the HTML of a site in order to improve the visibility and understanding of a page's content by search engines.
This data enriches the information transmitted to search engines, which can then understand details on your page, such as an address, a phone number or reviews, and display them in search results.
The purpose of Microdata is to make your site more understandable to search engines. This will allow them to offer content that is more adapted and more responsive to the requests of Internet users.
As a result, Microdata has an influence on your positioning because it helps crawlers to index your content and associate it with user requests.
Microdata also allows you to enrich the semantics of the page by specifying data related to content, context, etc. For example, for a blog article, additional data such as author, date, etc. can be found.
By specifying this data, Google will be able to give you access to "rich snippets" that make your content even more attractive to Internet users in the SERP and increase your click-through rate.
To write these tags, there are 3 possible syntaxes: JSON-LD, RDFa and Microdata.
Example of syntax in JSON-LD :
<html> <head> <title>Party Coffee Cake</title> <script type="application/ld+json"> { "@context": "https://schema.org/", "@type": "Recipe", "name": "Party Coffee Cake", "author": { "@type": "Person", "name": "Mary Stone" }, "datePublished": "2018-03-10", "description": "This coffee cake is awesome and perfect for parties.", "prepTime": "PT20M" } </script> </head> <body> <h2>Party coffee cake recipe</h2> <p> This coffee cake is awesome and perfect for parties. </p> </body> </html>
It is recommended to use the tags listed on schema.org, the vocabulary used worldwide to structure web pages. There you will also find all the instructions needed to set up your Microdata.
When a browser interprets the code of a page and converts the HTML, CSS and JavaScript into pixels, it performs what is called the pixel pipeline. It is a process that consists of 5 sequential steps :
At each step, the browser uses the result of the previous step to create new data. A non-composited animation is a rendering that forces the browser to go back to a previous step (JavaScript / CSS, Style, Layout or Paint), which increases the amount of calculation and therefore the rendering time.
Non-composited animations may appear irregular when rendered by the browser on low-powered machines or when heavy tasks are executed at the same time. This lack of fluidity affects the visual perception of the page for users.
There are several reasons why an animation is not composited and to correct them it is necessary to intervene on the CSS rules in use :
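As an illustration of the compositor-friendly alternative, the sketch below animates transform and opacity (which the compositor can handle on its own) rather than layout-triggering properties such as left or top, using the Web Animations API. The element, distances and timing values are assumptions for illustration:

```javascript
// Sketch: prefer `transform`/`opacity` animations, which can stay on the
// compositor, over `left`/`top`, which force layout on every frame.
function slideIn(element) {
  return element.animate(
    [
      { transform: 'translateX(-100px)', opacity: 0 },
      { transform: 'translateX(0)', opacity: 1 },
    ],
    { duration: 300, easing: 'ease-out' }
  );
}
```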
For more information about the pixel pipeline and optimizing performance for rendering, check Google's documentation on rendering performance.
A notification permission is a request made by a website to the user to obtain the right to send notifications. When the request is accepted, the user will receive notifications displayed outside the page, at system level. This request, which is displayed by the user's browser, is initiated using JavaScript code: Notification.requestPermission()
Faced with a request for permission to notify without context, i.e. when the page is loading, users are confused, suspicious and may refuse it. An early request therefore hinders the user experience and alters the relative confidence in the website.
To provide an optimal experience, it is necessary to make the request at the right moment, in response to a user interaction and not at page load. It is also worth offering several types of notifications to the user and making the request only after their choice.
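A minimal sketch of that pattern, assuming an opt-in button exists on the page (the button id and the injected requestPermission function are illustrative, used so the snippet stays self-contained):

```javascript
// Sketch: only call Notification.requestPermission() inside a user gesture.
// `requestPermission` is injected; in a page you would pass
// () => Notification.requestPermission().
function attachNotificationOptIn(optInButton, requestPermission) {
  optInButton.addEventListener('click', async () => {
    const permission = await requestPermission();
    if (permission === 'granted') {
      console.log('Notifications enabled');
    }
  });
}

// In the browser (illustrative id):
// attachNotificationOptIn(
//   document.getElementById('enable-notifications'),
//   () => Notification.requestPermission()
// );
```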
As a reminder, keywords are terms composed of one or more words that correspond to the subject of an article or a page. These words allow Google to associate articles or pages with Internet users' queries in order to present them with relevant results.
The number of keywords corresponds to the number of words (excluding stopwords) on your page. For example, if in your content there are the words "shoes" and "green shoes", the analysis will indicate that there are 2 keywords.
If the content of your page contains no or very few keywords, Google and other search engines will consider that your page has no added value and will not rank it.
In order to be ranked and to have an optimized page at the level of keywords, we recommend :
An offscreen image is simply an image that is not visible to the user when the page loads. It is located below the fold or is not visible without user interaction.
It is useful to defer their loading until after all the page's essential resources have loaded, in order to reduce the delay before interactivity and thus improve the user experience.
Modern browsers support a native attribute for this (loading="lazy" on the <img> tag); where it is not available, it is necessary to use JavaScript code that takes care of loading the image at the appropriate time.
An example of use with lazysizes, which is a script for lazy loading and is optimized for SEO :
... <script src="lazysizes.min.js" async></script> </body>
<img class="lazyload" data-src="images/off-screen.jpg" alt="My offscreen image">
The meta viewport is an element added to the HTML code that instructs the browser to control the display dimensions of a web page. This element allows the layout of the content to be responsive, i.e. it adapts to the screens of different user devices, mainly mobile devices, such as mobile phones and tablets for example.
It is important to have a meta viewport because today, the majority of user searches are done from a mobile device. When a web page doesn't adapt to these devices, it makes the user experience difficult which penalizes its referencing.
How to optimize your meta viewport
The meta viewport must be declared in the HTML code at the <head> level :
<meta name="viewport" content="width=device-width, initial-scale=1.0">
In order for the page content to fit on a mobile screen, it is necessary to define the meta viewport with the following 2 instructions, otherwise the browser will not perform the layout correctly :
As a reminder, the h1 title is the main and most important title of a content. This is the first title you usually see at the top of a web page.
An optimized h1 title is a title that contains the targeted keyword related to the main content of the page. It is also the title written with the largest font size.
The h1 title is important because it is through it that search engines and users will identify the content of a page. It is also important to know that the title h1 is one of the most influential factors for ranking on search engine results.
In order to optimize your h1 title you must :
For example :
<h1>How to optimize a h1 title to get organic referencing</h1>
A meta description is the description of the content of your website or web page. It informs Internet users about what they will find by clicking on your site. It is located below the title link that gives access to your site in search engine results pages.
The "description" meta tag plays an essential role in the referencing of a website because it appears in the SERPs. Even if it is no longer used as a direct ranking factor by search engines, it is the main showcase of your website (along with the title). Having an attractive showcase draws more traffic and increases your CTR, improving your SEO. Its goal is therefore to entice Internet users to visit your site.
When you write your "description" meta tag, it is important to respect several criteria in order to optimize it :
Reminder: the title of a web page appears in the title bar of the browser, but also in search engine results pages. In HTML, the page title is defined by the <title> tag.
An optimized page title is therefore a title that actively participates in your referencing in the SERPs, thanks to its relevance and attractiveness for the user and for search engines.
The title of a page is as important as the title of a novel. It must :
When you write your title in your <title> tag, it is important to respect several criteria in order to optimize it as well as possible :
When a resource on a page is loaded later by a JavaScript script or CSS style, the browser cannot anticipate the necessary connection to the site (origin) and wastes time connecting to it when it has to load it instead. A preconnection is an instruction that informs the browser that this connection will be needed later, allowing it to prepare for it.
It is useful to specify these different later connections so that the browser anticipates them and improves the loading of the relevant resources when the time comes. Pre-connections therefore improve the page loading time.
Establish early connections with important external origins. To tell the browser which pre-connection to make, it is necessary to add the instruction in the <head> section of the HTML code.
Example for the subsequent loading of the resource "https://example.com/script.js" :
<link rel="preconnect" href="https://example.com">
Preloading a resource means anticipating the loading of a resource that is required later by another resource necessary for the rendering of the page. For example, a CSS style may require a web font. When loading the page, the browser will only load this font after downloading and interpreting the CSS style. Preloading consists in informing the browser that this font should be fetched without waiting for the CSS style to be interpreted.
It is interesting to inform the browser of future loadings required by other resources so that it anticipates them and improves the page loading time and renders the page more quickly.
To tell the browser which resources to preload, it is necessary to add the instructions in the <head> section of the HTML code of the page.
Example for loading a web font required by a CSS style :
<head> ... <link rel="preload" href="fonts/quicksand.woff2" as="font" type="font/woff2" crossorigin="anonymous" /> ... </head>
The "as" attribute defines the type of resource to be preloaded, which helps the browser prioritize loading. In the absence of this attribute, the browser may ignore the instruction. The main types are: script, style, font, image, video, audio, document, object, fetch, track and embed.
The "crossorigin" attribute manages CORS (the security rules linked to loading a resource from another site). This attribute is optional when the resource is on the same site, except for fonts, which must always be loaded with it. If the resource is external and the attribute is not specified, the preload will not be taken into account by the browser.
Links are HTML elements materialized by the <a> tag. They are placed in a page and allow users, by clicking on them, to access another page. We talk about external links when they lead the user to another site. It is possible to define the behavior of a link on the browser with the attribute "target". When it is not set, the page opens on the current tab and replaces the current page. When defined with the value "_blank", it opens a new tab on the browser and leaves the current page accessible.
Example of an external link opening on a new tab :
<a href="https://cocolyze.com" target="_blank">Cocolyze</a>
External links that open in a new tab give the destination page a reference to the originating page (via window.opener). If the destination is malicious, it can use this reference to redirect the original tab to another destination. Similarly, the destination page may run in the same process as the current page; if it executes a lot of JavaScript code, the current page may be slowed down.
In order to secure external links, it is necessary to add the attribute "rel", on all links <a>, specifying the value "noopener" or "noreferrer" when they open on a new tab.
Example of a protected external link :
<a href="https://cocolyze.com" target="_blank" rel="noopener">Cocolyze</a>
Ranked pages are pages that we found on your keyword tracking (in the top 100 results) during the last analysis by our robot.
The readability of content concerns the facility of reading and comprehension of the content by the users. Structuring aspects, such as paragraphs, sections, titles and subtitles, number of words, length of sentences, language used, but also more aesthetic aspects such as the font of the text, its size and color, for example, directly influence the readability of a content.
Readability of content is important because it has a strong influence on the reader's interest and engagement. Content that is difficult to understand, whether because of its structure, its language or its design, will struggle to attract visits, let alone bring visitors back.
Regarding readability from a structuring and language point of view, think about :
Render-blocking resources are CSS styles or JavaScript scripts that need to be fully loaded to perform the first rendering of the page.
Depending on their position in the HTML code of your site, these render-blocking resources can considerably slow down the loading time of your page.
The HTML code of a web page is read from top to bottom by browsers. If your JavaScript script is located at the beginning of the page for example, the reading by the browser of your HTML code will "pause" to process your Javascript script. Thus, the rest of your HTML content will not be displayed during the processing of this element, which is penalizing for the loading time of your site, for the user experience and therefore for your referencing.
The resources that block the rendering of a web page are generally derived from JavaScript and CSS.
Place JavaScript scripts at the bottom of the page or defer their loading thanks to the "defer" attribute :
<head> <script src="/script.js" defer></script> </head> ... or ... <script src="/script.js"></script> </body> </html>
Specify the relevant media for stylesheets :
<head> <link rel="stylesheet" href="print.css" media="print"> </head>
The resource weight corresponds to the bytes to be downloaded by the browser to retrieve all the elements necessary for the page (CSS styles, JavaScript scripts, images etc).
We distinguish the real weight of a resource from its transferred weight. The real size corresponds to the raw size of the file (the space it occupies in memory or on a disk), while the transferred size is the space it occupies after compression. Web servers usually compress resources before sending them in order to reduce the bandwidth needed for their download; browsers then decompress them after reception in order to restore them.
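In the browser, the two sizes can be compared via the Resource Timing API, whose entries expose transferSize and decodedBodySize. A sketch of the aggregation (the entry objects are stubbed for illustration):

```javascript
// Sketch: total transferred (compressed) vs real (decoded) weight of the
// page's resources, from Resource Timing entries.
function compressionSavings(entries) {
  const transferred = entries.reduce((sum, e) => sum + e.transferSize, 0);
  const decoded = entries.reduce((sum, e) => sum + e.decodedBodySize, 0);
  return {
    transferred,
    decoded,
    savedPercent: decoded === 0 ? 0 : (1 - transferred / decoded) * 100,
  };
}

// In the browser:
// compressionSavings(performance.getEntriesByType('resource'));
```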
The more resources there are to load, the greater their overall weight and the longer it takes the browser to retrieve everything. The browser may also limit simultaneous connections to the same site and queue the others, which further increases loading time. Integrating many resources into a page therefore directly impacts its loading speed, especially for users with a poor Internet connection or on a phone with a limited data plan.
Beyond all the methods to reduce the weight of resources (image compression, CSS style minimization, etc.), it is recommended to :
A responsive image is an image whose dimensions, expressed in pixels (for example 600px x 400px), adapt to the dimensions of the screen, as well as to its resolution and quality.
The dimensions of an image must be adapted to the browser's screen: an image that is too large may take a long time to load and may not be visible during this loading time. An image that is too small may be displayed in a pixelated way.
This is why these adaptive images are necessary to make your website "responsive", i.e. the same page of a site will have a rendering adapted to the size of the screen you are using. It is therefore necessary to adapt your images to avoid having too big images on a smartphone, for example.
It is possible to save an image in several dimensions so that the most suitable one is loaded.
Defining different image dimensions is possible thanks to the "srcset" attribute. It is written as follows: "(alternative image 1 URL) (width descriptor), (alternative image 2 URL) (width descriptor)".
<img src="image.jpg" srcset="small-image.jpg 480w, big-image.jpg 1080w">
Note that the width descriptor is specified with the unit w, not px. It corresponds to the actual width of the image, which can be found by examining the properties of the image file on your computer.
When this attribute is specified, the browser loads the appropriate image.
It is important to keep the src attribute set, so that browsers that do not support the srcset attribute can still display the image.
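The technique can be refined with the optional "sizes" attribute, which tells the browser how much layout space the image will occupy so it can pick the best candidate. A minimal sketch (file names and breakpoints are illustrative):

```html
<!-- Illustrative file names; the browser picks the closest candidate
     based on the slot width declared in "sizes" -->
<img src="image.jpg"
     srcset="small-image.jpg 480w, big-image.jpg 1080w"
     sizes="(max-width: 600px) 480px, 1080px"
     alt="Example image">
```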
The robots.txt file tells search engines which pages they may or may not crawl. This file is placed at the root of the site, for example at https://exemple.com/robots.txt.
If misused, this file can still lead to some of your pages being indexed in search engine results, even if you don't want them to be.
This is the time the browser spends parsing, compiling and executing JavaScript code. These steps are necessary after the scripts are loaded so that the page can benefit from the features and behaviors they define. The main execution takes place on the "main thread" (some processing can be offloaded to secondary threads), the process in charge of rendering and page interactivity.
When the page includes a lot of JavaScript code, it degrades the performance of the page in several ways:
When a page suffers from slowness due to excessive script execution, it is useful to:
Scroll X is the element that allows you to scroll a page from left to right, that is, horizontally. You can see it when a horizontal scroll bar appears in the browser. It occurs when the content does not adapt to the size of the screen, meaning the site is wider than the window, and it indicates that the website is not responsive enough.
The presence of a scroll X harms the user experience: horizontal scrolling is not natural for users, who do not have the reflex to scroll pages from left to right. If users have a bad experience on your site, they will probably return to the SERP to choose a better-adapted website, increasing your bounce rate, which directly influences your ranking position.
You need to work on the CSS style/template of your page.
The Search Volume corresponds to the estimated number of searches carried out on Google every month.
This score allows you to measure the visibility and click potential of a keyword compared to another.
Keywords with a high search volume can bring in more clicks, but they are often more competitive.
When you create a link to another site using the target="_blank" attribute (open in a new tab), you can expose your website to performance and security issues.
The other page may use the same resources as your page, and your page's performance may suffer.
The other page can also access information from your page through the window.opener property, which may allow it to redirect your page to a malicious URL.
To avoid these issues, add a rel="noopener" or rel="noreferrer" attribute to your links that use a target="_blank" attribute.
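For example, a safe external link could look like this (the URL is a placeholder):

```html
<!-- rel="noopener noreferrer" prevents the new tab from accessing window.opener -->
<a href="https://example.com/page" target="_blank" rel="noopener noreferrer">Visit the partner site</a>
```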
The SEO Value score allows you to measure the importance of a position or its movements. It corresponds to the estimated value of an SEO position.
It’s more useful to track the SEO value rather than the average rank. The SEO value allows you to measure the ranking evolution by taking into account the importance of keywords. For example, it will significantly increase if you’ve gained a position on a very important keyword. On the contrary, it will barely change if you gain a lot of positions on keywords with a weak potential.
You can use this score to measure your Return on Investment (ROI) by calculating your budget and the time spent, and comparing them with the changes in SEO Value.
Data regarding keywords (volume and CPC) are updated every month, which can impact the SEO Value; this is why you may see variations at the beginning of the month.
The SEO Value is calculated by our algorithm. It takes into account different factors such as rank, type of keyword, search importance, AdWords competition, etc.
To summarize: the SEO Value is the amount of money you save in AdWords by obtaining the same number of visitors through your current position in the search results.
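The underlying idea can be sketched as follows. This is only an illustration of the principle, not Cocolyze's actual formula; the click-through rate and all the figures are assumptions:

```javascript
// Illustrative sketch only: estimated monthly SEO value of one ranking,
// i.e. the AdWords budget that would buy the same traffic.
// ctr is an assumed click-through rate at the current position.
function seoValue(monthlySearchVolume, ctr, cpc) {
  return monthlySearchVolume * ctr * cpc;
}

// e.g. 1000 searches/month, ~30% CTR in first position, 0.50/click
console.log(seoValue(1000, 0.3, 0.5)); // 150
```

A real score would also weight the keyword type and competition, as described above.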
The server response time (SRT) is the time the web server takes to generate a page, between the moment it receives the request and the moment it starts sending the page content. This time does not depend on the quality of the connection or the geographical location of the server (latency); it depends only on the computing performance of the web server.
The server response time can greatly affect the position of a site in search results, either because Google judges that the site is too slow to be proposed, or because users do not have the patience to wait for the page to load.
It is recommended that the server returns the page in less than 300ms.
There are several solutions to improve the server response time :
Access Google's server response time documentation for more information.
Spam Rating is a score created by Cocolyze that allows us to analyze the toxicity risk of a link. The Spam Rating is a mark between 1 and 10, 10 being the highest toxicity risk.
We consider a potentially toxic link to be a link that could negatively influence the quality of a website's netlinking, and therefore its Google ranking. A toxic link can be, for example, a link from a poor-quality website, such as an online casino in a foreign country of uncertain reliability pointing a link towards your site even though you have no connection with the casino world. Google could penalize your ranking if a high number of poor-quality sites link to yours.
The Spam Rating score does not identify toxic links as such, but links with a high toxicity risk. We advise you to manually check the links with a Spam Rating greater than or equal to 8: if you think that some of these links are not related to your activity and are suspiciously unreliable, you can disavow them with Google to indicate that you do not want Google to take them into account.
Here is an article from Google explaining how to disavow your backlinks.
The Spam Rating is calculated by powerful algorithms resulting from our own research and development. In simple terms, this score rests on the popularity of the referring site, its trustworthiness, its quality and the analysis of the links present on the page. A website with a high popularity level (i.e. with lots of links) but poor quality (i.e. a weak trust score) will be considered potentially toxic.
This is mainly JavaScript code whose role is to fill in functionality missing from some browsers, particularly older versions. These functions, whose objective is to ensure compatibility across devices, are called polyfills. They can be considered superfluous because the majority of recent browsers do not need them.
Polyfills (and "transform" code) are useful because they allow older browsers to use recent features, and are therefore important for accessibility for all users, even those who do not update their browser. What is problematic is that they are also loaded by modern browsers that do not need them. Like unused scripts, downloading unused code delays page loading and slows down the browser's interpretation of the necessary code, which affects the speed of the page.
The most common solution is to detect the user's browser and load only the necessary polyfills:
if (recent_browser()) { lancement(); } else { charger_script('polyfills.js', lancement); } function lancement() { ... }
Another solution is to use the new syntax for loading JavaScript modules, which allows the appropriate script to be loaded depending on the browser's interpretation of the HTML code:
<!-- Modern browsers will load this script and older browsers will ignore it. --> <script type="module" src="script.js"></script> <!-- Older browsers will load this script and modern browsers will ignore it. --> <script nomodule src="script_avec_polyfills.js"></script>
Find more information about this technique in the article by Philip Walton, engineer at Google.
A tap target is an HTML element of the page that users can press (or click on), mainly links and buttons.
When a tap target is too small, users using a phone may find it difficult to interact with it, which affects the user experience and therefore the positioning of the page on the SERPs.
In order for tap targets to be easily accessible, they must occupy a minimum size of 48px x 48px, and the spacing between these elements must be at least 8px.
When this type of problem is present on a page, it is necessary to intervene at the CSS level to increase the size of the elements concerned, by increasing the "padding" property for example. The spacing between elements can be adjusted using the "margin" property.
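A minimal sketch of such a fix (the class name is hypothetical): the padding grows the clickable area and the margin guarantees the spacing.

```css
/* Hypothetical .nav-link class: grow the tap target and space it out */
.nav-link {
  display: inline-block;
  padding: 12px;   /* enlarges the clickable area toward 48px x 48px */
  margin: 8px;     /* at least 8px between neighbouring targets */
}
```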
Text compression allows you to compress all the resources of your website that use text (HTML, JavaScript, CSS styles, AJAX calls).
This operation reduces the weight of the data sent to the client: the server compresses the targeted data, then sends it to the browser, which decompresses it and uses it to display the text content of your page. Thanks to this process, the user experience is improved because the loading time of your web page is reduced.
There are several compression formats and methods: Gzip, Deflate and Brotli.
When a request is made from a browser to a server, the browser specifies in the "Accept-Encoding" HTTP header the algorithms it supports.
Accept-Encoding: gzip, compress, br
In return, the server replies with a "Content-Encoding" HTTP header to specify which compression algorithm was used.
Content-Encoding: gzip
It is important not to activate text compression on other types of resources, such as images and videos, which are already compressed, and to set it up on the web server of your site.
Text size is the space that characters occupy in the content of a web page. The size of a text is defined in CSS styles via the "font-size" property. By default, in the absence of instructions, browsers generally use a size of 16px.
body { font-size: 14px; }
The value of the "font-size" property is expressed in pixels (px); it can also be expressed in other units (%, em, rem), which are translated into pixels by the browser.
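The translation into pixels can be sketched with two small helper functions (hypothetical names), assuming the browser's default root size of 16px:

```javascript
// Hypothetical helpers showing how relative units resolve to pixels
function emToPx(em, parentFontSizePx) {
  return em * parentFontSizePx;   // em is relative to the parent's font size
}
function remToPx(rem, rootFontSizePx = 16) {
  return rem * rootFontSizePx;    // rem is relative to the root (<html>) font size
}

console.log(remToPx(0.875));  // 14 (0.875rem with the default 16px root)
console.log(emToPx(1.5, 16)); // 24
```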
If the text size is not big enough, especially on cell phones, users will have difficulty reading the content of your page and will have to zoom in to make it readable. Having to manipulate the browser to make the content accessible degrades the user experience, so it is essential to provide the best accessibility from the first display of the web page.
To provide an optimal experience, at least 60% of the text must be larger than 12px. If this is not the case, it is necessary to:
p { font-size: 14px; /* instead of 8px */ }
<head> ... <meta name="viewport" content="width=device-width, initial-scale=1"> ... </head>
Text visibility corresponds to the fact that the text of a web page is visible while the font is loading. The font is the visual representation of characters and is present by default on operating systems. It is possible to load a specific font (WEB font) in order to format the text in an original way.
While a web font is loading, if no alternative system font is specified, the text is not visible until the browser finishes loading the font, and during this time the user sees nothing. This can lead to a bad user experience and cause unexpected layout shifts.
In order to solve the problem of text being absent during loading, it is necessary to use a font present on operating systems, so that the text remains visible while the web font loads.
The definition of a web font is done at the CSS level and is declared as follows (without any instruction to use an alternative system font):
@font-face { font-family: 'Pacifico'; src: local('Pacifico Regular'), local('Pacifico-Regular'), url(https://fonts.gstatic.com/s/pacifico/v12/FwZY7-Qmy14u9lezJ-6H6MmBp0u-.woff2) format('woff2'); }
The easiest way to use a temporary system font while the web font is loading is to add the "font-display: swap;" property:
@font-face { font-family: 'Pacifico'; src: local('Pacifico Regular'), local('Pacifico-Regular'), url(https://fonts.gstatic.com/s/pacifico/v12/FwZY7-Qmy14u9lezJ-6H6MmBp0u-.woff2) format('woff2'); font-display: swap; }
The "swap" instruction tells the browser that text using the font should be displayed immediately with a system font, then swapped once the web font has loaded.
The robots.txt file is a text file located at the root of your site that provides instructions to search engine spiders.
It allows you, among other things, to:
define one or more links to sitemap files: files that list all the pages of your site
An example of a robots.txt file, accessible via the URL https://www.tikamoon.com/robots.txt:
User-Agent: *
Allow: /
Disallow: /V2/
Disallow: /recherche
Disallow: /articlepopup.php*
Disallow: /recommander.php*
Disallow: *filtreprix=*
Disallow: *action=*
Disallow: *artid=*
Sitemap: https://www.tikamoon.com/sitemap.xml
It is important to have a robots.txt file because it allows you to clearly define the access rules of your website, and it is also where you declare your sitemap file. When part of your site must not be crawled, for security reasons or because it is of no interest, it is worth prohibiting the exploration of those pages. Prohibiting exploration does not mean that the pages cannot be indexed (contrary to the "noindex" instruction), but it is very unlikely that they will be. The main benefit is that robots do not waste time (crawl budget) analyzing the content of pages that you do not want in the SERPs.
For example, if part of your community site contains user profile pages that are poor in terms of content and added value, it is better to prohibit access to these pages so that robots mainly explore your pages with added value.
In the absence of this file (generally an HTTP 4xx error during retrieval), robots consider that they are authorized to explore your entire site, which can be a problem as they will eventually explore pages that you did not want them to.
If an error occurs during the retrieval of this file, with an HTTP 5xx error or no response (a timeout, for example), robots consider that they are not allowed to explore your site at all, and your pages have very little chance of appearing on the SERPs.
Similarly, if syntax errors are present in the directives, the robots may misinterpret your intentions and therefore explore pages that should not be explored, and vice versa.
In order to ensure that the robots.txt file is valid, it is necessary to:
Also check the syntax of the file by following these instructions:
A thematic cocoon is a group of keywords linked to a specific theme (lexical field). A content page may contain several thematic cocoons.
Thematic cocoons are analyzed by the Writings feature. When you create new content, Cocolyze gathers and analyzes the contents of the 20 top-ranked pages found on Google SERPs and extracts thematic cocoons from their contents.
The thematic cocoons identified by Cocolyze will help you while writing your content, allowing you to enrich your page with the right expressions and keywords. By intelligently integrating the keywords of the different thematic cocoons, you will be able to:
Enrich your content at the lexical level and improve its quality
More easily reach sufficient textual content in quantitative terms (the minimum number of words needed to rank well).
Cocolyze scans the pages of the first 20 search results in the SERPs and analyzes the content.
Among the different elements analyzed by Cocolyze, the algorithm identifies:
The weighted density of each keyword
Depending on the position of a keyword on the page, it will carry more or less weight. For example, a keyword in the H1 title has more weight than one in a paragraph. Cocolyze identifies the total number of occurrences, as well as their weight.
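The principle can be sketched as follows; the coefficients and the occurrence counts are purely illustrative, not Cocolyze's actual weights:

```javascript
// Illustrative sketch: weighted density sums occurrences multiplied by a
// per-element weight (an H1 occurrence counts more than a paragraph one).
const WEIGHTS = { h1: 5, h2: 3, p: 1 }; // assumed coefficients

function weightedDensity(occurrences) {
  // occurrences: e.g. { h1: 1, h2: 2, p: 4 }
  let score = 0;
  for (const tag in occurrences) {
    score += (WEIGHTS[tag] || 1) * occurrences[tag];
  }
  return score;
}

console.log(weightedDensity({ h1: 1, h2: 2, p: 4 })); // 15
```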
Keyword themes
Using an algorithm developed in-house, Cocolyze can group keywords, both generic and long-tail, by theme.
Third-party code covers all the scripts coming from other sites, and more precisely from CDNs (Content Delivery Networks), that can be directly integrated into the page. These scripts are in most cases essential to the proper functioning of the page; they are used, for example, to analyze traffic, provide social network sharing buttons, set up advertising, etc.
Third-party scripts bring a set of useful features to websites, but they can also degrade page performance if they are numerous and not optimized. The aspects that can be impacted are:
Beyond a blocking delay of 250ms caused by third-party code, it is necessary to intervene and reduce its impact on page performance:
<script src="https://cdn.example.com/script.js" defer></script>
function load(event) { event.target.removeEventListener(event.type, load); var script = document.createElement('script'); script.src = 'https://cdn.example.com/script.js'; document.head.appendChild(script); } document.getElementById("bouton").addEventListener("click", load);
<script src="https://cdn.exemple.com/script.js"></script><script src="/js/script.js"></script>
<script src="https://cdn.exemple.com/unused.js"></script>
<link rel="preconnect" href="https://cdn.example.com">
The Time To First Byte (TTFB) is the time spent between the sending of the request for a page and the reception of its first byte. It is not about the first appearance of the page, but the first lines of code the browser receives. This time depends on the DNS resolution time, connection quality (latency), geographical location, server performance, the quality of the application generating the page, and the cache system.
At Cocolyze, we analyze the average latency from our servers to yours in order to give you a server speed indicator that is more meaningful for the analysis of your website's performance.
Google tends to show content with a good user experience first. As a result, a page with too long a TTFB can be penalized. Put simply, if your page takes too long to load, the user might leave and go back to the SERP to click on a competitor's link. This increases the bounce rate of your page and penalizes your positioning.
We recommend a TTFB under 400ms (0.4s).
Improving server speed is a very technical task. Quite often, the site is significantly slowed down by one of your CMS plugins (WordPress, Magento, Prestashop, etc.).
A simple solution, easy for everyone, is to deactivate your CMS plugins one by one, then test the speed of your server with a new analysis. You will then know which plugin slows your server down. Once the plugin is identified, you can look for solutions: update it, find another plugin, deactivate it on some pages, or simply stop using it.
What if it is not a plugin?
Often, a high TTFB can come from:
The solution then consists in going through the website's source code step by step with your technical support, in order to identify the elements that slow down the server. Indeed, switching to a more powerful web server is not always the answer.
It is a page refresh, or a redirection to another page, that occurs during the user's navigation. This refresh is usually caused by an HTML "refresh" instruction triggered after a certain delay.
The fact that a page refreshes its content automatically, or worse, that the user is redirected to another page, can cause users to lose track of their navigation and become deeply frustrated. In the majority of cases, users will prefer a site with intuitive navigation where they are in control of the actions performed on the page.
If this type of redirection is observed, it is necessary to remove the instruction in question:
<meta http-equiv="refresh" ...>
Refresh: ...
Similarly, it may be useful to inspect the JavaScript code on the page to verify that no redirection is triggered unexpectedly. A JavaScript redirection should preferably be triggered by a user action.
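A sketch of such a user-triggered redirection (the "cta" element id and the target URL are hypothetical):

```javascript
// Hypothetical sketch: redirect only in response to a click, never on a timer.
// buildRedirect returns a click handler for a given destination.
function buildRedirect(url) {
  return function () {
    window.location.href = url; // runs only when the user clicks
  };
}

// Browser-only wiring, assuming an element with id="cta" exists:
// document.getElementById("cta").addEventListener("click", buildRedirect("/next-page"));
```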
The unload event occurs when the user interrupts the loading of a page by leaving it: when the user clicks on a link, submits a form, closes the browser window or returns to the previous page. In contrast, the "load" event occurs when the page has finished loading.
The "onunload" and "onload" attributes are placed on the <body> or <frameset> tags in HTML, or registered as event listeners in JavaScript.
However, the unload event does not fire reliably, especially on cell phones, when:
In addition, the presence of an "unload" event can prevent browsers from caching or placing pages in the background while browsing.
For example: if you are watching a video in a web application and you switch to another application, an unload handler can prevent the page from being treated as a background page, so the video will not be interrupted when switching to another application or web page.
On all modern browsers, it is recommended to use the "pagehide" event to detect page unloads.
It is a CSS style loaded critically (essential for the first rendering) by the page, whose formatting rules are little or not used by the page (less than 25% used). For example, the page loads a CSS library with a multitude of rules while only using those needed to format the titles.
Loading an unused resource unnecessarily increases the overall weight of the page and delays its loading time. In addition, the browser must also browse the CSS style to determine which rules to apply, which delays the first rendering. It is therefore a good idea to remove these unnecessary styles or delay loading them (to avoid blocking the first rendering) in order to improve page speed and user experience.
Before intervening on CSS files, it is necessary to understand that the page analysis focuses on optimizing the page without taking the overall site into account. Thus, a blocking CSS style unused on a given page may be used on another one. Grouping the various CSS rules useful to the site in a single file, declared in an HTML page template, is a good thing, even if those rules are hardly used by a particular page. It is therefore important to check this aspect before intervening, and to ignore this criterion if the style is useful globally and its loading cannot be delayed.
In general, to solve this problem, it is necessary to:
<head> ...<link rel="stylesheet" href="styles/unused.css" />... </head>
<head> ... <link rel="stylesheet" href="styles/unused.css" media="print" onload="this.media='all'" /> ... </head>
<head> <style> .titre { font-size: 20px; color: blue; } </style> ... <link rel="stylesheet" href="styles/unused.css" media="print" onload="this.media='all'" /> ... </head>
It is a JavaScript script included by the page whose code is little or not used (less than 25% used). For example, the page loads a JavaScript module offering a multitude of utilities and uses only the function needed for date formatting.
Loading an unused resource adds to the overall weight of the page and delays its loading. Furthermore, the browser must also interpret the code, which delays the interaction time for the user. Even if the script is loaded asynchronously and is not required for the first rendering, the code competes for bandwidth with other resources during download, which has significant performance implications. Therefore, it makes sense to rework/remove these scripts to improve the speed of the page and the user experience.
Before intervening on JavaScript files, it is necessary to understand that the page analysis focuses on optimizing the page without taking the overall site into account. Thus, a script unused on a given page may be used on another one. Grouping the various pieces of code useful to the site in a single file is not a bad thing. It is therefore important to check this aspect before intervening, and to ignore this criterion if the script is useful globally.
In general, to solve this problem, it is necessary to:
...<script src="scripts/inutile.js"></script>...
A vulnerable module is a third-party JavaScript library with security holes that have been discovered and made public. When a vulnerability is discovered, a criticality score from 0 to 10 is determined according to several criteria (attack vector, complexity, necessary privileges, etc.) and is then classified into one of three levels:
Vulnerable modules can be exploited by hackers, which, depending on the flaws, can have a more or less serious impact on the integrity of a website and the trust placed in it. To avoid being hacked and compromising user data, it is necessary to close these security holes.
Some examples of attacks:
When a JavaScript module has known security flaws, it is necessary to: