Google Search on using AI to write content

All the buzz these days is around AI, artificial intelligence: as a potential Google killer, with Bing's plans to add ChatGPT, and also as a tool we can use for our own websites. Recently, Bankrate was highlighted as having some of its content written by machines but reviewed by human editors, and the SEO community wanted to know Google's policies around these efforts.

Bankrate example. The example was highlighted by Tony Hill, who posted on Twitter that “BankRate.com, one of the largest finance sites on the web has now started using AI to write some of its content. A big moment in web publishing and SEO.”

The screenshot reads “this article was generated using automated technology and thoroughly edited and fact-checked by an editor on our editorial staff.”

The potential issue. If machines and AI can write content for us, the amount of content that can be produced at incredibly low cost is both exciting and frightening. Exciting for content networks as a way to cover more for less, but frightening for consumers, who must decide what to spend their time reading, and for search engines, which must decide what to rank.

So much content is being produced daily already; how much more can we consume, and how much more does Google need to crawl, index, and decide to rank for a given query?

Also, just this morning, we heard that CNET is quietly using AI to write entire articles from the ground up.

Google’s statement. Danny Sullivan, Google’s Search Liaison, reiterated some of what was said before about Google’s stance on this topic. Is Google okay with crawling and ranking content written by machines? If not, is Google okay if those AI-generated pieces of content are reviewed by human editors before they are published?

Danny Sullivan wrote on Twitter this morning on the topic of AI-generated content, “content created primarily for search engine rankings, however it is done, is against our guidance.” But he added that “if content is helpful & created for people first, that’s not an issue.”

Danny then referenced Google’s guidelines for the helpful content update, saying that is the “key to being successful with our helpful content system — and if it’s not helpful content, the system catches that.” If AI can write helpful content, then it should be fine, one would assume.

Then Danny referenced the revised E-E-A-T quality raters guidelines, saying, “For anyone who uses *any method* to generate a lot [of] content primarily for search rankings, our core systems look at many signals to reward content clearly demonstrating E-E-A-T (experience, expertise, authoritativeness, and trustworthiness).”

Finally, Google has a spam policy for automated content: content “generated through automated processes without regard for quality or user experience” is against Google’s guidelines.

Why we care. Machine-generated content is not new, but what is new is that, with the help of AI, machines are getting better at generating human-like, high-quality content. The question is whether it is being produced with the intent of helping people or for the purpose of ranking in Google Search. If it is the latter, the helpful content system aims to make sure such content does not rank well.

For the time being, Google wants content by the people, for the people, but you can use AI for ideas and to help you along the way. And if machines and AI can write the content, you’d think similar technology could detect content written by AI.


About the author

Barry Schwartz

Barry Schwartz is a Contributing Editor to Search Engine Land and a member of the programming team for SMX events. He owns RustyBrick, a NY-based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on very advanced SEM topics. Barry can be followed on Twitter here.
