Learn how to use Google Search Console to enhance the health and performance of your website.
Google Search Console provides data that is available nowhere else for monitoring a website’s performance in search and improving its rankings.
That makes it essential for online businesses and publishers looking to achieve maximum success.
Its free tools and reports make it much easier to take control of your search presence.
What is Google Search Console?
Google Search Console is a free Google online tool that allows publishers and search marketing experts to track their site’s general health and performance concerning Google search.
It provides a summary of indicators relating to search performance and user experience to assist publishers in improving their sites and increasing traffic.
When Google identifies security issues (such as hacking vulnerabilities) or when the search quality team issues a manual action penalty, it communicates through Search Console.
Its key functions include:
- Monitoring indexing and crawling.
- Identifying and correcting errors.
- Providing an overview of search performance.
- Requesting that updated pages be indexed.
- Reviewing internal and external links.
Using Search Console is not a ranking factor, and it is not required in order to rank.
Even so, Search Console is invaluable for improving search performance and driving more visitors to a website.
How to Get Started
Verifying site ownership is the first step in using Search Console.
Google offers several verification options, depending on whether you’re verifying a website, a domain, a Google site, or a Blogger-hosted site.
A domain registered through Google is verified automatically when you add it to Search Console.
The majority of users will verify their websites with one of four methods:
- HTML file upload.
- HTML meta tag.
- Google Analytics tracking code.
- Google Tag Manager.
Some site hosting platforms restrict what can be uploaded and require site owners to verify in a specific way.
This is becoming less of an issue, however, as many hosted site services offer an easy-to-follow verification process, described below.
How to Verify Site Ownership
With a conventional website, such as a standard WordPress site, there are two common ways to verify site ownership:
- HTML file upload.
- HTML meta tag.
You’ll use the URL-prefix properties process when verifying a site with either of these two methods.
Let’s take a moment to admit that the phrase “URL-prefix properties” means nothing to anyone save the Googler who coined it.
Don’t let it make you feel like you’re going to embark on a blindfolded journey through a labyrinth. It’s simple to verify a website with Google.
HTML File Upload Method
Step 1: Go to Search Console and open the Property Selector menu in the top left-hand corner of any Search Console page.
Step 2: Enter the site’s URL in the Select Property Type pop-up, then click the Continue button.
Step 3: Choose the HTML file upload method and save the HTML file to your computer.
Step 4: Upload the HTML file to your website’s root directory.
Step 5: Return to the Search Console and click Verify to complete the verification process.
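Steps 3 and 4 can be sketched with shell commands. The web root path and the verification filename below are placeholders (Google generates the real filename when you choose the HTML file method), and the web root is simulated locally here with a temporary directory:

```shell
# Simulate a web root locally; on a real server this would be e.g. /var/www/html.
WEBROOT=$(mktemp -d)

# Placeholder verification file; Search Console supplies the real one to download.
echo "google-site-verification: google1234abcd.html" > google1234abcd.html

# "Upload" it to the site's root directory.
cp google1234abcd.html "$WEBROOT/"

# Google must then be able to fetch it at the top level of the domain,
# e.g. https://example.com/google1234abcd.html
ls "$WEBROOT"
```

On a real server, the copy step would typically be an `scp` or FTP upload to the document root rather than a local `cp`.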
The process for verifying a site on hosted platforms like Wix and Weebly is nearly identical, except that you’ll add a verification meta tag to your site instead of uploading a file.
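The verification meta tag goes in the page’s `<head>` section. The token below is a placeholder; Google generates a unique value for each property:

```html
<head>
  <!-- Placeholder token; copy the real tag from Search Console. -->
  <meta name="google-site-verification" content="EXAMPLE_TOKEN_FROM_SEARCH_CONSOLE" />
</head>
```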
Duda has a straightforward approach, employing a Search Console App to quickly verify the site and get its customers up and running.
Troubleshooting With GSC
Google’s ability to crawl and index webpages determines where they appear in search results.
The Search Console URL Inspection Tool alerts you to any crawling and indexing difficulties before they become a serious issue and cause pages to disappear from search results.
URL Inspection Tool
The URL inspection tool determines whether or not a URL has been indexed and is therefore eligible to appear in a search result.
A user can do the following for each URL they submit:
- Request that a recently updated webpage be indexed.
- Take a look at how Google found the page (sitemaps and referring internal pages).
- Find out when a URL was last crawled.
- Check to see if Google is utilizing a canonical URL that has been specified or if it is using a different one.
- Check the usability of your website on a mobile device.
- Look for features such as breadcrumbs.
The coverage section displays Discovery (how Google found the URL), Crawl (whether Google successfully crawled the URL and, if not, why), and Enhancements (the status of structured data).
The left-hand menu will take you to the coverage section.
Coverage Error Reports
While some reports are categorized as errors, this does not always mean that something is broken. Sometimes it just means that indexing could be improved.
For example, in the sample below, Google is receiving a 403 Forbidden server response for roughly 6,000 URLs.
The 403 response means the host has blocked Googlebot from crawling those URLs.
These errors occur because Googlebot is unable to crawl the member pages of an online forum.
Every forum user has a profile page that includes a list of their most recent posts as well as additional information.
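The way the coverage report groups fetch results can be sketched as a small shell function; the messages below are illustrative, not Search Console’s exact wording:

```shell
# Classify an HTTP status code roughly the way the coverage report buckets fetches.
diagnose() {
  case "$1" in
    2[0-9][0-9])    echo "crawled successfully" ;;
    403)            echo "blocked: the host refused the request" ;;
    [45][0-9][0-9]) echo "fetch failed (status $1)" ;;
    *)              echo "unknown status" ;;
  esac
}

diagnose 403   # the forum's member pages fall into this bucket
```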
The report includes a list of the URLs that are causing the problem.
When you click on one of the listed URLs, a menu appears on the right that allows you to investigate the URL in question.
A contextual menu in the shape of a magnifying glass icon may also be seen to the right of the URL, which includes the ability to inspect the URL.
Clicking Inspect URL shows how the page was discovered.
It also includes the following information:
- The last crawl date.
- Crawled as (the user agent used).
- Whether crawling is allowed.
- Whether the page fetch succeeded (if it failed, the server error code is shown).
- Whether indexing is allowed.
There’s also information regarding the Google canonical:
- The canonical declared by the user.
- The canonical selected by Google.
The crucial diagnostic information for the forum website in the preceding example may be found in the Discovery section.
This section identifies the pages that Googlebot sees as having connections to member profiles.
With this knowledge, the publisher can write a PHP statement that hides the links to the member pages whenever a search engine crawler visits the site.
Another solution is to add a new entry to the robots.txt file, which will prevent Google from crawling certain pages.
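A hedged sketch of the robots.txt approach, assuming the member profiles live under a /members/ path (the actual path depends on the forum software):

```
User-agent: Googlebot
Disallow: /members/
```

This tells Googlebot not to request any URL beginning with /members/, so those pages stop generating 403 errors in the coverage report.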
Eliminating the 403 errors frees up crawl resources, allowing Googlebot to index the remainder of the website.
The coverage report in Google Search Console allows you to diagnose and address Googlebot crawling issues.