Google’s algorithm doesn’t look kindly on most AI content. Much of it doesn’t meet the company’s quality standards, and on top of that, Google has said they’re trying to proactively identify AI content and downrank it.
But here’s something a bit contradictory: Google is helping some publishers make their own AI content and publishing it. Let’s put aside the broad strokes of the AI debate for a moment and get down to the specifics of what Google is doing and what that might mean for the future.
Google Doesn’t Like AI Content
After months of telling the world that they weren’t going to punish AI-generated content, Google’s March 2024 core update has started to do exactly that. If you’ve been paying attention to search over this past year, no doubt you’ve noticed a serious dip in quality, in part due to a tsunami of AI spam. Here’s a more specific example of what we’re talking about: the world’s first “SEO heist” happened this year. Basically, an Exceljet competitor used AI to clone hundreds of Exceljet’s articles for its own site, “stealing” 3.6 million page views.
So, in a blog post promising to weed out “spammy, low-quality content on Search”, Google has said that they are targeting “unoriginal” content, creating updates that target new manipulative tactics (such as the one outlined above, presumably), targeting “scaled content creation”, targeting third-party content creators who abuse sites with better reputations, and targeting expired domain abuse. Nowhere in the post do they mention AI, but they get pretty close with the terms “automation to generate low-quality or unoriginal content” and “scaled content creation”.
In fairness, much of AI content is already against Google’s Webmaster Guidelines. In a chat last year, John Mueller, Google’s Search Advocate, said that automatically generated content has been against their guidelines from the beginning and that AI generated writing falls into that category.
On top of that, Google’s own documentation for publishers on how to rank on Google emphasises much of what AI is not. Basically, they like authors who are authorities on their topics, bylines, information about the page’s author, first-hand experience, and other things that are inherently antithetical to AI. The company also specifically addresses AI content generation, saying that sites should explain why and how AI is used to generate content, and that disclosures or other ways of making AI use evident are a good idea.
Google Is Paying Some Publishers to Make AI Content
So, understanding all that, it may come as a surprise that Google isn’t just encouraging some publishers to use AI to create content, they’re paying them to do so.
According to an article in Adweek, Google has a private program targeted at independent news publishers. They get beta access to a secret Google generative AI platform, which they have to use to create three articles per day, one newsletter per week, and one advertising campaign per month. Google pays said publishers a monthly stipend that comes to five figures annually. Google also gets to look at all the analytics and feedback.
In terms of the content created, Google says the publishers aren’t using the platform to rewrite competitors’ stories. Instead, they take publicly available information, such as press releases from a government’s public information office, and use the AI platform to turn it into articles.
Google, What the Heck?
AI has played a big role in messing up search this past year. And Google has played a bit of a role in that happening, given their part in creating generative AI tools. Now, Google is struggling to rein in the problems AI has created for search while also promoting their own AI tool and the content it creates. So, some questions.
One, does this mean that Google will eventually start privileging content generated by its own AI tools over others’? We know that Google doesn’t yet know how to identify AI content with certainty (see that chat with John Mueller), but will they be able to identify AI-generated content if it’s generated by their own tools, and if so, will that content avoid being downranked?
Two, will Google be sticking with its opinion that AI content should be labelled or easily identifiable as such?
Three, will Google be sticking with its opinion that publishers should explain how and why they use AI content? And if so, why aren’t we hearing that from the news publishers in their program? Why are we hearing about it from Adweek?
Where This Puts Us
It’s clear that Google is struggling with AI generated content. On the one hand, they want to stop as much of the garbage as possible. On the other hand, they seem to want to be able to make AI generated content better—or at least, see if such a thing is possible.
If you’re a content publisher, it’s clear that Google still doesn’t want you to publish your own AI generated content (or, at least, they want you to label it and explain why it exists). It’s also clear that Google sees at least a possible future where AI generated content is good enough to play a role in their ecosystem.
There are only three real responses to Google’s position. One, you try to cheat it. The people who pulled off the so-called SEO heist on Exceljet probably made some cash out of the deal, and since it’s still hard-ish to identify AI-generated writing, an unscrupulous content creator can probably make a quick buck with AI content schemes. Two, you can try to learn as much as possible about AI content generation to get a leg up for the moment when Google and the other tech giants figure out how AI fits into their businesses, which, we’ve established, is what Google is doing at the moment. And three, you could ignore AI on the assumption that it won’t work out.
One thing is certain: we’re very interested to see what results Google’s experiment yields. Of course, given Google’s position, we may never know.