Google’s Gary Illyes and others answered many AI-related questions at Google Search Central Live Tokyo 2023 and shared new insights into Google’s approaches and recommendations on AI-generated content.
Japanese search engine marketing expert Kenichi Suzuki attended Search Central Live Tokyo 2023 and then published a blog post in Japanese summarizing the key takeaways from the event.
Some of what was shared is already well known and documented, for example that Google does not care whether content is AI-generated or not.
For both AI-generated and translated content, the most important thing for Google is the quality of the content.
How Google handles AI-generated content
Labeling of AI-generated content
Perhaps less well known is whether Google internally distinguishes AI-generated content from human-written content.
The Google employee, believed to be Gary Illyes, replied that Google does not flag AI-generated content.
Should publishers flag AI-generated content?
Currently, the EU is asking social media companies to voluntarily flag AI-generated content to fight fake news.
And Google currently recommends (but does not require) that publishers tag AI-generated images with IPTC photo metadata, adding that AI image companies will begin embedding the metadata automatically in the near future.
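For context, the IPTC property in question is "Digital Source Type", whose controlled-vocabulary value for fully AI-generated images is trainedAlgorithmicMedia. The sketch below is illustrative only (the helper name and the sidecar-file approach are this article's assumptions, not Google's or IPTC's tooling): it builds a minimal XMP packet carrying that property, of the kind that could be saved as a sidecar or embedded by a tool such as ExifTool.

```python
# Minimal sketch: build an XMP packet declaring the IPTC Digital Source Type
# for an AI-generated image. The function name is illustrative; in practice
# tools like ExifTool write this metadata directly into the image file.

# IPTC NewsCodes URI for media generated purely by an AI algorithm.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def build_xmp_sidecar(source_type: str = TRAINED_ALGORITHMIC_MEDIA) -> str:
    """Return an XMP packet marking the image's digital source type."""
    return f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/">
      <Iptc4xmpExt:DigitalSourceType>{source_type}</Iptc4xmpExt:DigitalSourceType>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>"""


if __name__ == "__main__":
    print(build_xmp_sidecar())
```

With ExifTool, the equivalent one-liner would set the XMP-iptcExt DigitalSourceType tag on the image file itself rather than producing a sidecar.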
But what about text content?
Do publishers have to mark their text content as AI-generated?
Surprisingly, the answer is no, it is not required.
Kenichi Suzuki wrote that it is not necessary for Google to explicitly label AI content.
The Google employee said they leave it to publishers to decide whether labeling makes for a better user experience.
The English translation of what Kenichi wrote in Japanese is:
“From Google’s point of view, it is not necessary to explicitly label AI-generated content as AI-generated content, as we evaluate the nature of the content.
If you think it’s necessary from the user’s point of view, you can specify it.”
He also wrote that Google warned against publishing AI content as-is, without review by a human editor.
They also recommended taking the same approach to translated content, which should also be human-reviewed before publication.
Natural content is at the top
One of the most interesting comments from Google was the reminder that their algorithms and signals are based on human content, so natural content comes first.
The English translation of the Japanese original reads:
“ML-based (machine learning) algorithms and signals learn from content written by people for people.
So understand natural content and display it at the top.”
How does Google deal with AI content and EEAT?
EEAT is an acronym for "Experience", "Expertise", "Authoritativeness" and "Trustworthiness".
It was first mentioned in Google's Search Quality Rater Guidelines, which encourage raters to look for evidence that the author writes from a position of experience on the topic.
An artificial intelligence currently has no experience in any topic or product.
Therefore, it is seemingly impossible for an AI to reach the quality threshold for certain types of content that require experience.
The Google employee replied that they were having internal discussions about this and hadn’t agreed on a policy yet.
They said they would announce a policy once they agreed on it.
Policies on AI are evolving
Because AI is now widely available but not yet trustworthy, we are living through a period of transition.
Mainstream media companies that rushed to test AI-generated content have quietly reevaluated those experiments.
ChatGPT and similar generative AI, like Bard, were not specifically trained to create publishable content.
So it is perhaps not surprising that Google currently recommends that publishers keep a close eye on the quality of their content.
Read the original article by Kenichi Suzuki:
What I learned from Google #SearchCentralLive Tokyo 2023
Featured image from Shutterstock/takayuki