Can AI-Generated Content Add Actual Value to Readers?
A few months ago, we started a series of content creation tests using AI writing platforms. Our objective was to determine how we could best use one of these tools to generate content faster for our clients.
All webmasters and web strategists suffer from the same issue: it is hard to get business owners to produce their own content. Yet “great content remains king” for both lead generation and SEO, so there is no viable alternative to producing new content of genuine value. The question then becomes: can it be produced at an affordable cost, within an acceptable time frame, and at an acceptable level of quality?
And so, with the much-touted rise of AI and Natural Language Processing (NLP), we embarked on a series of tests…
At first, it did not go well…
Our team tested several of the best-known AI writing platforms. We also received help from friendly webmasters who were running the same tests for their clients. After three months of testing, we were dismayed by the results.
All of the AI platforms we tested suffered from similar ailments:
- A lack of originality and substance (e.g., poor arguments, no examples, simplistic binary 0/1 logic, etc.)
- An overuse of trite expressions such as “when it comes to” or “in today’s ____”
- A tendency to rehash the same arguments within an article
- Simplistic grammar (subject – verb – object)
- Frequent errors when citing statistics and hard numbers
In other words, AI-generated content required heavy human editing before moderately well-informed readers would consider it valuable.
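Some of these tells can even be flagged mechanically before a human editor gets to work. Here is a minimal sketch of the kind of pre-edit audit script we mean; the phrase list is a hypothetical starter set, not an exhaustive one, and you would extend it with whatever your own edits keep catching:

```python
import re
import sys
from collections import Counter

# Hypothetical starter list of trite openers we kept seeing in AI drafts.
TRITE_PHRASES = [
    "when it comes to",
    "in today's",
    "in conclusion",
    "it is important to note",
]

def audit_draft(text: str) -> None:
    lowered = text.lower()

    # 1. Count occurrences of trite expressions.
    for phrase in TRITE_PHRASES:
        hits = lowered.count(phrase)
        if hits:
            print(f"trite phrase {phrase!r}: {hits} occurrence(s)")

    # 2. Flag near-verbatim repeated sentences (the "rehashed argument" smell).
    sentences = [s.strip().lower() for s in re.split(r"[.!?]+", text) if s.strip()]
    for sentence, count in Counter(sentences).items():
        if count > 1:
            print(f"repeated sentence ({count}x): {sentence[:60]}...")

if __name__ == "__main__":
    audit_draft(open(sys.argv[1], encoding="utf-8").read())
```

A script like this won’t judge substance or argument quality, but it cheaply surfaces the repetition problems before a human spends time on deeper editing.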
We concluded that if a blog writer’s goal was just to get a paycheck in exchange for crappy, rehashed copy, then lightly edited AI-generated content would probably achieve that goal with the least possible effort.
If, on the other hand, a writer’s mission was to bring readers value in the form of good information, well articulated, that writer would either have to invest real time editing the AI output… or simply use it as a source of ideas and develop the article or post from there.
Three main concerns
Beyond the usual ethical issues inherent in publishing content on the web, the results of our tests raised three main risks:
- Could visitors to a local business site be turned off and leave the site early because its AI-generated content is unremarkable?
- What potential negative effects could Google’s guidelines regarding AI-generated content have on our clients’ sites?
- Would AI-generated content pass plagiarism detection tests now and in the future?
We rated Risk #1 as moderate to severe, depending on whether AI was used to create sales copy (severe) or blog copy (moderate). Even with editing, AI does not have what it takes to generate sharp sales copy. With heavy editing, it can generate decent blog posts. On a local business site, blog posts are not the determining factor in a visitor’s decision to call or leave. Good blog posts may help inspire trust, but the “home page + services page + about page + reviews” combination remains the reason the phone rings. So as long as those pages were written by a well-informed human, an AI-generated, human-edited blog would likely not drive visitors away.
Risk #2 was higher. Google had made it clear that AI was not its preferred flavor of content, and that it had the tools to detect AI-generated content. In fact, we ran the OpenAI detection tool on the content generated by every AI platform we tested, and ALL of it failed the sniff test: the content was flagged as “fake” (vs. “real human”) with a probability of 80–99%.
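For readers who want to run a similar check, here is a minimal sketch using the roberta-base-openai-detector model OpenAI released publicly on Hugging Face, which classifies text with the same “Real” vs. “Fake” labels (not necessarily the exact tool we used, and the sample draft is invented):

```python
# pip install transformers torch
from transformers import pipeline

# OpenAI's publicly released GPT-2 output detector (a fine-tuned RoBERTa),
# hosted on Hugging Face. It labels text "Real" (human) or "Fake" (generated).
detector = pipeline(
    "text-classification",
    model="roberta-base-openai-detector",
)

draft = "When it comes to plumbing, it is important to choose the right plumber..."

# The model accepts at most 512 tokens, so truncate longer drafts.
result = detector(draft, truncation=True)[0]
print(f"{result['label']}: {result['score']:.0%}")
# e.g. -> Fake: 93%
```

Note that these detectors are probabilistic and work on short excerpts; a low “Fake” score on one passage says nothing about the rest of an article.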
So, assuming Google would do an even better job than OpenAI at sniffing out AI-generated content (after all, Google has a bigger dataset than OpenAI, and NLP has been a Google specialty since 2012), there was no chance in hell of escaping the fate Mighty G reserves for low-quality content.
We rated Risk #3 as moderate to high: the industry-specific data OpenAI draws on is limited, and if hundreds of SEOs and copywriters tap the same pool for AI-written content, that content will be repetitive. Much also depends on a user’s ability to work the prompts to extract more precise material. The first responses we got tended to be trite and fairly general; it took real work on the prompts to get better, more informative output.
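To make “work the prompts” concrete, here is a sketch of the kind of prompt iteration we mean, using the OpenAI completions API as it existed at the time of these tests (the model choice, topic, and prompt wording are purely illustrative):

```python
# pip install openai  (the pre-1.0 client, current at the time of these tests)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft(prompt: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model choice
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# First-pass prompt: tends to produce the trite, general copy described above.
generic = draft("Write a blog post about water heater maintenance.")

# Refined prompt: pins down audience, angle, and concrete specifics.
refined = draft(
    "Write a 300-word blog post for homeowners in a hard-water area "
    "explaining how an annual anode-rod inspection extends a tank water "
    "heater's life. Include one concrete cost example and avoid filler "
    "phrases like 'when it comes to'."
)

print(generic, "\n---\n", refined)
```

The more the prompt supplies the audience, the angle, and the specifics, the less the model has to fall back on the generic filler everyone else is getting from the same dataset.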