Google introduces new crawler to optimize Googlebot performance

Google recently introduced a new web crawler called “GoogleOther” that aims to offload Googlebot, its primary search index crawler.

The addition of this new crawler will ultimately help Google streamline and optimize its crawling operations.

Web crawlers, also known as robots or spiders, discover and scan websites automatically.

Googlebot is responsible for building the index for Google Search.

GoogleOther is a generic web crawler used by various product teams within Google to pull publicly available content from websites.

In a LinkedIn post, Google Search Analyst Gary Illyes shares more details.

Allocation of responsibilities between Googlebot and GoogleOther

The main purpose of the new GoogleOther crawler is to take over the non-essential tasks currently performed by the Googlebot.

This allows Googlebot to focus solely on building the search index used by Google Search.

Meanwhile, GoogleOther takes on other jobs, such as research and development (R&D) crawls, that are not directly related to search indexing.

Illyes explains on LinkedIn:

“We’ve added a new crawler, GoogleOther, to our crawler list that will ultimately take some of the load off Googlebot. This is a no-op change for you, but I think it’s interesting nonetheless.

“While optimizing how and what Googlebot crawls, we wanted to ensure that Googlebot’s crawling jobs are only used internally to build the index used by Search. For that, we added a new crawler, GoogleOther, which will replace some of Googlebot’s other tasks, such as R&D crawls, to free up some crawling capacity for Googlebot.”

GoogleOther inherits Googlebot’s infrastructure

GoogleOther uses the same infrastructure as Googlebot, which means it has the same limitations and features, including host load limits, robots.txt (albeit with a different user-agent token), HTTP protocol version, and fetch size.

Essentially, GoogleOther is a Googlebot operating under a different name.
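
Because GoogleOther obeys robots.txt under its own user-agent token, you can write rules that apply to it without affecting Googlebot. As a rough illustration (not anything Google prescribes), the Python sketch below uses the standard urllib.robotparser module to show how a hypothetical robots.txt, with a made-up /experiments/ directory, would be interpreted differently for the two tokens:

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt rules: Googlebot may crawl everything,
    # while GoogleOther is kept out of an example /experiments/ directory.
    rules = [
        "User-agent: Googlebot",
        "Disallow:",
        "",
        "User-agent: GoogleOther",
        "Disallow: /experiments/",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    url = "https://www.example.com/experiments/page.html"
    print(parser.can_fetch("Googlebot", url))    # True: Googlebot is allowed
    print(parser.can_fetch("GoogleOther", url))  # False: GoogleOther is disallowed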

Implications for SEOs and website owners

The introduction of GoogleOther shouldn’t have a major impact on websites, as it operates on the same infrastructure and under the same limitations as Googlebot.

Nonetheless, it’s a notable development in Google’s ongoing effort to streamline and optimize its web crawling processes.

If you are worried about GoogleOther, you can monitor it in the following ways:

  • Analyze server logs: Regularly check your server logs to identify requests from GoogleOther. This helps you understand how often your website is crawled and which pages it visits (see the log-parsing sketch after this list).
  • Update robots.txt: Make sure your robots.txt file includes GoogleOther-specific rules if you need any. This helps you control access and crawling behavior on your website (see the robotparser sketch in the previous section).
  • Monitor crawl statistics in Google Search Console: Keep an eye on crawl stats in Google Search Console to spot changes in crawl frequency, crawl budget, or the number of pages indexed since the launch of GoogleOther.
  • Track site performance: Regularly monitor your website’s performance metrics, such as load times, bounce rates, and user interactions, to identify possible correlations with GoogleOther’s crawling activity. This way you can see if the new crawler is causing any unforeseen problems on your site.
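
For the server-log check, a small script is often enough. The sketch below is a minimal, hypothetical example that assumes a standard Apache/Nginx “combined” log format and identifies GoogleOther requests by the “GoogleOther” token in the user-agent string; the access.log path is a placeholder for your own log file:

    import re
    from collections import Counter

    # Placeholder path: point this at your own access log.
    LOG_FILE = "access.log"

    # Rough pattern for the Apache/Nginx "combined" log format:
    # ip - - [time] "METHOD /path HTTP/x" status size "referer" "user-agent"
    LINE_RE = re.compile(
        r'"(?P<method>[A-Z]+) (?P<path>\S+) [^"]*" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"'
    )

    hits = Counter()
    with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = LINE_RE.search(line)
            # Assumption: GoogleOther requests carry "GoogleOther" in the user-agent string.
            if match and "GoogleOther" in match.group("agent"):
                hits[match.group("path")] += 1

    print(f"GoogleOther requests: {sum(hits.values())}")
    for path, count in hits.most_common(10):
        print(f"{count:6d}  {path}")

As a first pass, this shows how often GoogleOther appears in your logs and which URLs it requests most.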

Source: Google

Featured Image: BestForBest/Shutterstock
